PAGE 1

By CELESTE NIXON
Tribune Staff Reporter
cnixon@tribunemedia.net

A GROUNDBREAKING ceremony was held yesterday to mark the launch of the first phase of construction on the highly anticipated Princess Margaret Hospital redevelopment project. The project starts with the construction of a $75 million, world-class Critical Care Block which, according to Prime Minister Hubert Ingraham, will be the country's single largest investment in health care since the hospital was built nearly six decades ago. During his address at the ground-breaking ceremony, the Prime Minister said the block is just part of the government's holistic approach to health care and public well-being. "We are gathered here this morning to continue in our efforts to provide quality health care for the Bahamian people," he said. The project includes the construction of the Critical Care Block, a new entryway to the hospital and what are

NASSAU AND BAHAMA ISLANDS' LEADING NEWSPAPER
PM: Boundary claims false
Volume: 107, No. 326, THURSDAY, NOVEMBER 17, 2011, PRICE 75¢ (Abaco and Grand Bahama $1.25)
WEATHER: SUNNY, HUMID, HIGH 84F, LOW 75F

By CELESTE NIXON
Tribune Staff Reporter
cnixon@tribunemedia.net

BLAMING the opposition, Prime Minister Hubert Ingraham said published accounts of the Boundaries Commission's recommendations on constituency cuts were false. While admitting yesterday that some of the 41 existing constituencies might be removed in the run-up to next year's general election, Mr Ingraham said reports naming Montagu, Clifton and Eight Mile Rock were inaccurate and must have come from the Progressive Liberal Party (PLP). He said: "In order to reduce the seats by three, we must eliminate three seats. I do not know where the story came from that has been carried in the newspapers.
It could only have been leaked."

Details on cuts must have come from the PLP

By AVA TURNQUEST
Tribune Staff Reporter
aturnquest@tribunemedia.net

THE armed robbery of a student at the College of the Bahamas has added to criticism over campus safety. Police are questioning a 21-year-old Nassau Village man over the incident, which occurred at the Oakes Field Campus on Tuesday night.

By TANEKA THOMPSON
Deputy Chief Reporter
tthompson@tribunemedia.net

POLICE have beefed up patrols in the Fox Hill and Wulff Road areas following the killing of Randino "Dinghy" Pratt, who was shot outside a bar last weekend. A special duty team from the Central Detective Unit, along with investigators from the Fox Hill and Wulff Road police stations, are on the ground to gain intelligence to offset possible retaliation for Pratt's murder. "We're working as a team, we have beefed up our patrols and our intelligence to ensure that no retaliation is done where innocent persons will be hurt," said Bernard K Bonamy, head of the homicide squad, yesterday.

AT THE official groundbreaking, pictured from left: Camille Johnson, Permanent Secretary; Veta Brown, Public Hospitals Authority Board Chairman; Minister of Health Hubert Minnis; Prime Minister Hubert Ingraham; Minister of Works and Transport Neko Grant; Coralie Adderley, Chief Hospital Administrator; and Herbert Brown, Managing Director, Public Hospitals Authority.
Photo: Patrick Hanna/BIS

GROUND BROKEN ON NEW HOSPITAL BUILDING

EXTRA POLICE IN FOX HILL

By DENISE MAYCOCK
Tribune Freeport Reporter
dmaycock@tribunemedia.net

FREEPORT - Convicted murderer Justin Mader was sentenced to 33 years in prison on Wednesday by the Supreme Court after a very emotional plea to the victim's family asking for forgiveness. With teary eyes, Mader expressed his sincere remorse to Yvette Patton, the mother of a young man who was discovered burned to death in his car on Grand Bahama Highway on March 12, 2010. Mader asked Ms Patton and her family to forgive him for what he had done. The two then embraced each other, something never done before in the Supreme Court. On Tuesday, the second day of the trial, Mader pleaded guilty to the murder of 23-year-old Devon Fritz.

By SANCHESKA BROWN
Tribune Staff Reporter
sbrown@tribunemedia.net

FNM MP Phenton Neymour said he is leaving it up to the party to decide where he runs in the 2012 election, as he is torn between South Beach and Exuma. In an exclusive interview with The Tribune, Mr Neymour said he cannot choose between the people of South Beach, who elected him, and Exuma, his hometown. "There has been much talk about whether I will be running in South Beach or Exuma."

By TANEKA THOMPSON
Deputy Chief Reporter
tthompson@tribunemedia.net

POLICE urged families and friends of six men wanted for questioning in connection with murder investigations to turn them in or face charges of harbouring a fugitive. Superintendent Stephen Dean warned those in contact with wanted persons that police may trace phone records to track down those wanted for questioning and loved ones who have been in communication with them. Police want to question Elandro Missick, alias Fifty; Andre Wallace, alias Mugs; brothers Desmond and Deangelo Wilson; Oman Leon; and Garrison Pyfrom Jr in connection with ongoing murder investigations.
SEE page 2 | SEE page 14 | SEE page 14 | SEE page 15 | SEE page 12 | SEE page 12 | SEE page 13

33 YEARS FOR CONVICTED MURDERER

DON'T HARBOUR THESE FUGITIVES

ROBBERY ADDS TO SAFETY CONCERNS

JUSTIN MADER, outside court

MORE PICTURES IN THIS WEEKEND'S... SEE PAGE 5 FOR A BIG T SNEAK PREVIEW

WHO IS THE BELLE OF THE BALL?

NEYMOUR: LET PARTY DECIDE

PAGE 2

By DENISE MAYCOCK
Tribune Freeport Reporter
dmaycock@tribunemedia.net

FREEPORT - The Eight Mile Rock High School Basketball Team visited Grand Bahama Shipyard executives on Tuesday to present them with the team's championship trophy in appreciation of the company's continued support. The EMR Blue Jays won the Vitamalt Classic Basketball Tournament last week. Principal Dwayne Higgins said it is the first time the school has captured the Vitamalt Championship title. "We came here today to say thank you, in a tangible way, to GB Shipyard for the support they continue to give us," he said. Mr Higgins said it is important for corporate citizens to give back to the community. He commended the shipyard for assisting in the development of a sporting facility at the school. Work on the facility is underway and will consist of a softball field, a basketball court and several soccer fields. Carl-Gustaf Rotkirch, CEO of Grand Bahama Shipyard, and Reuben Byrd, senior vice-president of operations, commended the team on their win. "It is a pleasure to involve ourselves with the EMR High School, which we have been supporting for several years," Mr Rotkirch said. He said the shipyard is committed to giving back to the community.
Mr Rotkirch noted that through its apprenticeship programme, the company launched a number of community initiatives, including a food drive called Lend a Hand Give A Can; a Fishing Tournament slated for March 11; and the EMRH Sports Field Project. Mr Byrd said teamwork and leadership are very important at the shipyard. "It is something we strive for here and we want to instil in our employees to give back to the community," he said.

described as critical utility upgrades. The facility will include six state-of-the-art surgical suites, a 20-bed intensive care unit, a neonatal intensive care unit for 48 newborns, new laboratory facilities, a new sterile supplies department and a new surgical supplies department. Mr Ingraham said: "In addition to significantly enhancing access to more life-saving and life-enhancing prescription medication, we are modernising and expanding tertiary, secondary and primary care facilities." Advances in computer and communications technologies are going to be used to improve health care quality, save lives and reduce costs, Mr Ingraham said. One of the most cutting-edge technological innovations has been the introduction of a pilot tele-medicine programme, he said, enabling patients to be examined and assessed through the use of communications technologies by doctors in New Providence, without having to leave their Family Islands. Mr Ingraham said construction of the new facility will also provide employment opportunities for hundreds of Bahamians. Already, he said, in preparation for the completion of the Critical Care Block, 130 or more Bahamians, exclusive of physicians and nurses, are being engaged to be trained. The completion of this project will translate not only into improved health services, but also into more timely delivery of services, improving the quality of health care facilities for current and future generations, Mr Ingraham said.
Health Minister Dr Hubert Minnis said that in order for the Bahamas to meet various new challenges, ranging from HIV/AIDS to the high prevalence of chronic diseases such as diabetes and hypertension, and significant increases in morbidity and mortality from criminal and family violence, more resources and space must be available. The new critical care complex will be housed in one building. Services that are now provided in several different locations will be replaced with updated models and centralised, resulting in a more efficient and effective approach to health care. "The PMH's critical care facility and specialised critical team are on the front lines of the 21st century battlefield of violence and trauma," the Prime Minister said. The construction of the first phase will begin on Thursday and is expected to be completed by November 2013.

LOCAL NEWS, PAGE 2

HEALTH MINISTER Dr Hubert Minnis speaks at the groundbreaking ceremony Wednesday for the Princess Margaret Hospital's new Critical Care Block.

Ground broken on new hospital building (from page one)

HUNDREDS pack the Princess Margaret Hospital grounds on November 16 to witness the groundbreaking ceremony for the new Critical Care Block.

TROPHY FOR SHIPYARD

PAGE 3

POLICE are looking for three men who robbed a convenience store on Jennie Street. At 9.25pm on Tuesday, the men, one of whom was armed with a handgun, entered Blessed Hands Convenience Store near Balfour Avenue and demanded cash. They robbed the store of cash and a laptop, then left in a white four-door Nissan Maxima and headed south towards Robinson Road. Investigations continue.

LOCAL NEWS, PAGE 3

POLICE have removed 494 firearms and more than 11,000 rounds of ammunition from the streets of New Providence this year. At the end of 2010, police had seized just over 300 firearms.
Yesterday, Superintendent Stephen Dean called on persons with illegal firearms in their possession to turn the weapons over to police or a community leader, or face prison time. "Come to the police station, call a police officer, or give it to your pastor, because if you're found with the firearm it's non-negotiable. The magistrate don't even have a discretion, you're going to prison and you will do the full time, not nine months, you will do 12 months in prison. The police will find you and the chances are, you will be the next person we will bring into custody," said Mr Dean. Mr Dean said the increase in illegal gun seizures is due to good police work. He added that 61 new officers were placed on the streets of New Providence this week, which will add to the force's intelligence-gathering capability. "Police intelligence has increased, we have more persons on the road. Just today we put 61 new officers on the streets and most of them are going on the front line of policing today, they were well trained, particularly in our inner city communities, to reassure the members of the public that they will get full police coverage."

By DENISE MAYCOCK
Tribune Freeport Reporter
dmaycock@tribunemedia.net

FREEPORT - Grand Bahama Power Company president and CEO Sarah MacDonald announced plans for a fuel hedging policy that will provide greater fuel cost predictability for consumers. Plans to develop the hedging policy, which would involve a contract establishing a fixed or capped cost on fuel, were discussed at the ICD Utilities annual general meeting on Monday. Ms MacDonald explained that hedging will smooth out the dramatic peaks that customers often experience in the surcharge and will help them to plan and budget. The Power Company has come under fire over high electricity costs, and exorbitant fuel surcharge costs to consumers.
In a press release issued by the company on Tuesday, Ms MacDonald said fuel hedging can be used to either partially or fully lock in the price of the fuel supply, which will help stabilize the fuel surcharge for GBPC customers. This will mean GBPC's fuel purchases will be based on an average of prices over time instead of one price in a given month. The hedging programme will not reduce the long-term price of GBPC's fuel oil, but rather reduce the market volatility in what it pays for oil and what its customers pay for electricity, she explained. Ms MacDonald noted that the price of oil reacts to a number of forces that drive the price up and down. She stated that in some cases the movements can be quite dramatic from month to month, and it is this volatility that creates changes month to month in the fuel surcharge portion of customers' bills. Ms MacDonald explained that both GBPC and customers are exposed to the market movements in the price of oil. She said that the hedging programme is used in other Emera companies to stabilise costs. In reference to the fuel surcharge, Ms MacDonald said that improvements in efficiency, due to the diligence and hard work of the GBPC staff, have resulted in steady declines in the fuel surcharge since July's 24.66¢/kWh fuel surcharge. "While the fuel surcharge is largely driven by the world oil market price of light and heavy fuel, which we have no control over, we have been taking measures to control the areas we can, like the efficiency of our generation mix," said Ms MacDonald. "Due to our efforts, we saw the fuel surcharge drop in August to 21.54¢/kWh and further declines resulting in 21.03¢/kWh for the month of November," she said.

FUEL HEDGING TO HELP GB POWER CONSUMERS

494 guns taken off streets this year

SUPERINTENDENT Stephen Dean

POLICE SEARCH FOR ROBBERS

POLICE seized a box containing a handgun, ammunition and marijuana at the San Andros Airport yesterday.
Responding to a tip, officers on Andros went to the airport and found the items at around 5.20pm. Details were sketchy up to press time and no arrests were made, but police on the island say they are following significant leads.

POLICE arrested an 18-year-old man after confiscating an imitation gun. Officers patrolling Moore Avenue off Wulff Road at 11.40pm Tuesday made the arrest after they heard gunshots. The officers followed the sound and saw a man in a blue hooded jacket and blue jeans. The man ran but was soon caught by officers. After the imitation gun turned up in a search, the Raymond Road, Marathon Estates resident was taken into custody for questioning. Active police investigations continue.

HANDGUN SEIZED AT AIRPORT

TEENAGER ARRESTED OVER FAKE FIREARM

DURING an armed robbery, stay calm and don't resist, police advise. Get a good look at the robber and, if possible, a description of the vehicle used to escape. "Remember your safety comes first, money and merchandise can be replaced, your life cannot," said Sergeant Chrislyn Skippings.

CRIME TIP

PAGE 4

EDITOR, The Tribune.

Re: Money the motive for many inmates, The Tribune, November 4, 2011.

Urban legend has it that when the extremely prolific Depression-era robber, American Willie Sutton, was asked by a reporter, "Why do you rob banks?" his reply was: "Because that's where the money is." Seventy-plus years later, the COB's cutting-edge research has now contributed much further to our understanding of the cause of crime, and promises to be a very useful tool in reducing it. Hopefully, the COB will also inform the FBI and Interpol etc. that money is the motive for a lot of crimes.

KEN W KNOWLES, MD
Nassau, November 12, 2011.

EDITOR, The Tribune.

Today, I had the misfortune to try and drive from the West via Independence Drive and Prince Charles to Village Road.
I can fully understand the frustrations of anyone who drives regularly in that area and was surprised not to see a crowd of "frustraters" (a new word) doing the Occupy Wall Street act on Prince Charles. It seems to me that fast progress on this road improvement effort is sadly lacking, and since having been away in the summer, things have got worse rather than better. Travelling east along Prince Charles towards Soldier Road, I came across a sign that said "road closed"; the detour signs were lacking any sort of information that might help the driver decide what to do about getting to Wulff Road and then Village Road. I turned left and then immediately got lost, and finally ended up on the road to St Augustine's going east. I noticed that the traffic was now going west, so followed the car in front and ended up on Bernard Road near Kingsway Academy, and finally managed to make it to Village Road. The road construction company should be thoroughly ashamed of themselves for their total lack of detour signage that made sense, or perhaps they care less; and the Ministry of Works should be ashamed of themselves for not insisting the company do a better job. I very much hope that the contract has lots of penalties, as in the time it is taking to improve Prince Charles it seems to me they could have built the entire road system in New Providence. I rather think that the Independence Drive roundabout comes under the same company's contract, and it is taking even longer. Get a grip, somebody, and either throw the contractors out or make them work to some deadline, which we should all know about, and not vaguely "before Christmas".

PATRICK H THOMSON
Nassau, November 15, 2011.

EDITORIAL/LETTERS TO THE EDITOR, PAGE 4

THE TRIBUNE
C.S.G., (Hon.) Publisher/Editor 1919-1972, Contributing Editor 1972-1991
EILEEN DUPUCH CARRON, C.M.G., M.S., B.A., LL.B., Publisher/Editor 1972-
Published Daily, Monday to Saturday
Shirley Street, P.O.
Box N-3207, Nassau, Bahamas

ECONOMIST Mario Monti formed a new Italian government without a single politician on Wednesday, drawing from the ranks of bankers, diplomats and business executives to make sure Italy escapes looming financial disaster. The 68-year-old former European Union competition commissioner told reporters he will serve as Italy's economy minister as well as premier for now, as he seeks to implement sacrifices to heal the country's finances and set the economy growing again. Monti and his new cabinet ministers were to be sworn in later Wednesday, formally ending Silvio Berlusconi's 3 1/2-year-old government as well as his 17-year-long run of political dominance. Monti said he would lay out his emergency anti-crisis policies in the Senate on Thursday, ahead of a confidence vote. A second vote, in the lower Chamber of Deputies, will follow, likely on Friday. He stressed that Italy's economic growth is a top priority. Hopes for Italy's new administration won it some respite in financial markets Wednesday. The yield on its ten-year bonds dropped 0.16 percentage point to 6.77 per cent. In the last week, that borrowing rate had flirted with 7 per cent, the level that forced fellow eurozone members Greece, Ireland and Portugal to seek international bailouts. Up until summer, Italy had mostly avoided the European debt turmoil despite having a jaw-dropping amount of debt: $2.6 trillion, or nearly 120 per cent of its GDP. But after frequent delays and backtracking on austerity measures, markets lost faith that any Berlusconi government could fix Italy's economic issues. Restoring confidence in Italy's financial future is crucial because, as the third-largest economy in the eurozone, it is too big for Europe to rescue. A debt default by Italy would threaten the euro itself and shake the global economy.
Monti gave few hints about his political programme Wednesday, sidestepping a question about whether the government would dip into citizens' bank accounts as it did decades ago during another debt crisis. "You may ask," he replied, but went no further. Explaining why his Cabinet contained no one from Italy's fractious political parties, Monti said that his talks with party leaders led him to the conclusion that the non-presence of politicians in the government would help it. His ministers include Corrado Passera, CEO of Italy's second-largest bank, Intesa Sanpaolo SpA, to head Development and Infrastructure; Piero Gnudi, a longtime chairman of the Enel utility company, as Tourism and Sport minister in a country heavily dependent on tourist revenues; and the current Italian ambassador to Washington, Giulio Terzi di Sant'Agata, to be foreign minister. An historian of the Catholic church with close ties to the Vatican, Andrea Riccardi, was named minister of international and domestic cooperation, a choice that seemed to reward pro-Vatican lawmakers in Parliament. Still, his choices raised some eyebrows. "This government, tied to banks, to business, to the Vatican, to private universities, to the usual names, is the opposite of what this country needs," said Paolo Ferrero, leader of Rifondazione Comunista, a tiny, far-left party. Passera also sits on the board of directors of Milan's Bocconi University, which forms Italy's business elite. Monti is currently the head of the Bocconi. But analysts gave Monti's selections a top mark, insisting the Cabinet ministers were independent. "I think the quality of the people is very high," said Roberto D'Alimonte, a political science professor at Rome's LUISS University. "All these people are very high-caliber, and highly respected, independent."

Monti. By Colleen Barry and Frances D'Emilio, Associated Press.

Prince Charles trials and tribulations

LETTERS
letters@tribunemedia.net

A government without politicians

EDITOR, The Tribune.
I stumbled across a beautiful poem a few years ago, written in reference to Angola. Recently I re-read this beautiful piece of material and it speaks large volumes of my expectations within the Bahamas. I also want to dedicate this to Marco Archer.

I want to see here alongside this silent hero of twelve years those men who are so ardent for the equality of men. I want to see here on this soil stained with the blood of a twelve-year-old youngster the mothers of the free children of the same age. I want to see here alongside this tortured body the clamour of those who cry out against war, here alongside the brave heart of such as die at the age of twelve those who speak of tomorrow and promise the distant future. I want to see here the men who know about space and control the cosmic flights and do heart transplants and decode the electronics of sound and sing to burst the eardrums and paint good pictures and argue the fine points of issues in front of this ravaged corpse of a twelve-year-old. Here alongside this child cut off at the age of twelve I want to see oceans, lakes, palm groves and paper toy-boats. Here the weapons from all sources promising solidarity on the sure path to life. I want to see here alongside the cold body of the smiling twelve-year-old children with pencils and exercise books learning to write just his name. And purged at last of these cliffs of anger the day will be filled with roundelays on the evergreen youth around the stone raised in remembrance.

The name of this poem is Augusto Ngangula; it was written in 1961 by Costa Andrade.

ELKIN B SUTHERLAND Jr
Nassau, November 2011.

Oh Bahamas, I want to see here...

Crime research results are stating the obvious

PAGE 5

See this Saturday's Big T for full photo coverage and to find out who made the best dressed and best co-ordinated couple lists.
Who is the belle of the ball?

LOCAL NEWS, PAGE 5

PAGE 6

THE Bahamas officially bade farewell to US Ambassador Nicole Avant during a warm reception at the Balmoral Club. Ambassador Avant is scheduled to leave the country on Tuesday, November 22, after a two-year tour of duty. She is the first ambassador appointed to the Bahamas under President Barack Obama's administration. Deputy Prime Minister and Minister of Foreign Affairs Brent Symonette recognised the work done by Ambassador Avant, who he said ensured that President Obama's message of hope and inspiration resonated with the Bahamian people. "Your Excellency, we will always remember your spirit of giving, inspiration and hope, and we trust this spirit will continue to positively affect those wherever you go," Mr Symonette said. "The assistance you provided to children in need of love, hope and support has renewed efforts to bring awareness that individuals, especially children with intellectual or physical challenges, have just as much of an equitable stake in society as those without such challenges," he said. "As we support them together, we work toward a more inclusive, tolerant and compassionate Bahamas." Mr Symonette thanked the ambassador for supporting families of children with autism, aiding the local Special Olympics programme, and encouraging children to read more books. He also recognised her efforts to strengthen diplomatic relations between the Bahamas and the United States in the fight against drugs, arms and human trafficking. Mr Symonette hailed the success of the recent Caribbean Basin Security Initiative, at which the United States reaffirmed its commitment to regional partnership to enhance safety in the region.
Prior to taking up her post in the Bahamas, Ambassador Avant was the Southern California finance co-chairwoman of the Barack Obama Presidential Campaign. She also served as the vice president of Interior Music Publishing from 1998-2009. Ambassador Avant focused on five priority initiatives in the Bahamas: education, alternative energy, economic and small business development, women's empowerment, and raising awareness about the challenges facing people with disabilities.

LOCAL NEWS, PAGE 6

Fond farewell to departing US ambassador

AMBASSADOR AVANT, centre, with Deputy Prime Minister and Minister of Foreign Affairs Brent Symonette and Robin Symonette. Photo: Kris Symonette/BIS

PAGE 7

GOVERNOR General Sir Arthur Foulkes will be in Grand Bahama on Friday for a massive One Bahamas celebration at Independence Park in Freeport. Sir Arthur will be making his second consecutive visit to Grand Bahama for the occasion, which is observed nationwide during the month of November. He will be the featured speaker at the flag-raising ceremony, which gets underway at 10am. Also travelling to Grand Bahama will be the co-chairmen of the One Bahamas Foundation: Sir Orville Turnquest, the country's fifth governor general, and sailing legend Sir Durward Knowles. Deputy director of education and former chairman of the One Bahamas Grand Bahama committee, Cecil Thompson, said hundreds of students from every corner and settlement on Grand Bahama will assemble at Independence Park on Friday. "It should be noted that because of the creative, passionate and extraordinary participation of the schools in this district in the One Bahamas programmes, during the past 16 years Grand Bahama has maintained her reputation as our country's undisputed capital of One Bahamas celebrations," he said.
One Bahamas came about in November 1992, when the then minister for Youth, Sports and Culture, Algernon Allen, sought to bring the nation together in love and unity. The country had just gone through a tough election process which saw a change in power for the first time in 25 years. Mr Allen felt it was time for a national healing effort, and encouraged Bahamians to speak about the things that should unite us. The first major event organised by the Grand Bahama committee was a church service held on November 13 at Community Holiness Church in Eight Mile Rock. The high point of the celebrations will be Flag and T-Shirt Day on November 18. On that day, all local radio stations will be invited to play the national anthem at 10am, and the public will be invited to take a break from their normal routine and appreciate what it means to be Bahamian. Another major One Bahamas event is the fun run/walk and health screening scheduled for November 19 at the Government Complex in Freeport.

Governor general to visit Freeport for celebration

GOVERNOR GENERAL Sir Arthur Foulkes.

PAGE 8
PAGE 9

MINISTER of State for the Environment Phenton Neymour has commissioned a reverse osmosis water plant for the island of Eleuthera. In his official address, he said desalinating natural water throughout the Bahamas has been a goal of the government for the past 18 years. Mr Neymour said: "I was a young engineer there when we began the desalination supply of water throughout the entire Bahamas. We began with the Windsor Field Reverse Osmosis Facility, then we went to the various Family Islands, from San Salvador to Inagua to Bimini, so it was the beginning of a transition. We are now to the point where, in New Providence, we are approaching 90 per cent of water being provided by reverse osmosis, with the objective of providing better quality water for Bahamians." The new Tarpum Bay/Rock Sound Reverse Osmosis Plant in Winding Bay, Eleuthera, was commissioned as part of National Energy Awareness Week, held from November 4 to 11. "Tarpum Bay and Rock Sound communities have long depended on ground water for their water supplies," Mr Neymour said. As populations grew in these communities, systems expanded and the demand exceeded the safe water yields, resulting in the increasing deterioration of water quality as a result of high salinity.
"Despite major projects in the mid-1990s and the early 2000s, which addressed infrastructure needs throughout Eleuthera, including the distribution system in Tarpum Bay and Rock Sound, ground water continued to be the source of supply."

The Water and Sewerage Corporation (WSC) signed an agreement in December 2010 with Aqua Design Bahamas Ltd, a subsidiary of General Electric, to build a 200,000 imperial gallon per day desalination plant.

"The construction works were completed in June, in only six months," said Mr Neymour. "For the first time in decades, your communities can boast of water quality that many of us still buy in bottles for over 100 times the price.

"What is equally important and unique about this facility, and how it ties in with our celebration of National Energy Awareness Week, is the power purchase agreement (PPA) between the Water and Sewerage Corporation and Bahamas Renewable Energy Corporation, called BREC, which seeks to utilise renewable energy as a power source.

"The PPA is based on wind energy and is intended to reduce the cost of electricity, which is typically 30 to 45 per cent of the total cost of desalinated water."

New water plant for Eleuthera

HOUSE SPEAKER and MP for North Eleuthera Alvin Smith drinks water produced at the plant. Photo: Kris Ingraham/BIS

By DANA SMITH
dsmith@tribunemedia.net

REPRESENTATIVES for Cable Bahamas Limited officially launched Revoice, a new telephone voice service which they call "the voice alternative for the Bahamas".

Revoice was announced yesterday afternoon and is exclusive to New Providence, with Family Island service coming "very soon" in the new year.
"The product brings, for the first time, a reliable and affordable fixed line voice alternative to the incumbent phone company," said Keith Wisdom, Cable 12 director.

"Customers will experience significant improvement with enhanced service quality and all-inclusive features and, of course, affordability," Dr Wisdom said.

He also spoke about a special feature of Revoice, where calls can be rerouted to another phone if service ever drops.

"Most importantly, in the event of a temporary disruption of service, Revoice has a special feature that allows customers to automatically route calls to an alternate number," Dr Wisdom said. "This can be any number, such as a mobile phone, friend, or work number. When the disruption is restored, calls automatically ring back to your Revoice number. This means customers will never lose a call."

Mark Cabrelli, vice-president of marketing and sales, stated that although the service is exclusively for New Providence at the moment, the launch will extend to the Family Islands as soon as next year.

"The intent is to get everywhere we can provide service," Mr Cabrelli said. "But from now until the end of this calendar year, it'll be New Providence only." He added that the Family Island launch will arrive "very soon" in the new year.

Mr Cabrelli also revealed that although the official launch was yesterday, they already have customers on board.

"We have had trial customers already and paying customers because we did a soft launch to introduce the product," he said. "The number (of customers) is in the thousands now... and we hope that's going to increase substantially."

The new service includes a multitude of features, such as voicemail, call forwarding, call waiting, three-way calling, caller originator trace, and selective call acceptance or rejection.
"Cable Bahamas will continue to demonstrate that we are the proven partner of choice for communications in the Commonwealth of the Bahamas, and the development of the Revoice product set reflects this," Dr Wisdom said.

New telephone voice service is launched

"We're also working along with officers from the Wulff Road station who have responsibility for that area, and also the officer with responsibility for the Fox Hill area, because we know that's the area he lived in.

"We're trying to bring some closure to family members and try to make some sense of what actually took place early Saturday morning."

Pratt, 29, was celebrating with friends at a nightspot on St James Road during the early hours of Saturday morning before he was killed. Police described him as a known criminal but could not confirm if he was on bail for an offence at the time of his death.

Pratt and Deslin Nichols, who was also killed earlier this year, were both charged with murder in 2005 after being on the run for nearly three years. The pair were accused of the 2002 murder of Kirk 'Tank Dog' Ferguson, which is believed to have sparked the retaliation killing of Pratt's mother and her son. Ferguson, 30, was shot near Sandilands Primary school.

The double murder of Rosemary Bennett Wright, Pratt's mother, and her seven-year-old son Jakeel Wright on March 6, 2005, is believed to have been a revenge killing for Ferguson's death. Both were shot dead in their beds at their home on Adderley Street.

According to reports, a male student was approached by four men, one of whom was allegedly armed with a handgun. The thugs stole the student's laptop and fled the scene, according to police, who arrested one suspect at Carter Street, Oakes Field.

The incident comes on the heels of an alleged robbery at a classroom block last month. The reports were investigated by campus security and administration; however, no factual information could be obtained.
As unconfirmed reports of criminal activity on campus continue to trickle in, Renbert Mortimer III, student union president, called for greater accountability from students and administration to address mounting security concerns.

"In incidents like this, we only have one angle," Mr Mortimer said. "Students need to report what is actually happening, and provide more evidence to assist investigations so that the right steps can be taken.

"There is no defence for students, and there's a fear of the student body to publish complaints properly. Come forward, provide evidence so that the college can make a more justified decision. I don't feel justice is happening."

Rumours of physical conflicts with security, armed robberies, attempted rapes and other crimes alleged to have occurred on the Oakes Field Campus are often not explored due to insufficient evidence, Mr Mortimer said.

Last week, security guards at the Oakes Field campus distributed flyers stipulating safety guidelines for using the campus at night, and tips on how to avoid robberies and what to do during an assault.

"I hope that students will step up and rally for a more secure campus," said Mr Mortimer. "Although students aren't speaking out, there's a lot of injustice and criminal activity not being reported."

Mr Mortimer added: "We need students to bring it to the forefront, so we can approach the college and address these issues."

A college spokesperson said an official statement would be released on the incident; however, no response was given last night.

ROBBERY ADDS TO SAFETY CONCERNS (from page one)

EXTRA POLICE IN FOX HILL (from page one)

Police also want to question Kenny Roberts and Keith Oliver in connection with fraud investigations. Mr Dean said police intelligence suggests that the men are all in New Providence.
"These persons are sleeping in homes in New Providence, these persons are driving in cars in New Providence, these persons are at social events in New Providence. They are living in neighbourhoods where (people) know who these persons are.

"We are saying to you, please, these prolific offenders who continue to be like some pillars in some of our communities, we want them weeded out. We want to reverse those pillars with more positive role models.

"These (persons) are not sleeping in bushes, they are sleeping in homes. We're saying to you, it is a criminal offence to harbour anyone who has (allegedly) committed a crime, particularly if you have knowledge of that (alleged) crime."

He warned families and loved ones of the men that they could face harsh penalties if they are found hiding men wanted for questioning.

"The days are gone where we will be negotiating with people who harbour (wanted persons). If we find you, which we will do, we will find you. We ask you to turn those persons in, and if we meet them on your premises, you and all sundry will be arrested and dealt with to the full extent of the law.

"If we track the phone records and find (you) are in communication with him, we will be dealing with you," said Mr Dean.

Police want Missick's assistance with their investigations into the murder of Damien Bowe, who was shot in Kemp Road.

Wallace is wanted for questioning in connection with the murder of Leonardo Lewis, who was shot at Palmetto Avenue as he was heading home around 6am on Thursday, September 15.

The two brothers are wanted for questioning in connection with the murder of Bradley Viticus, who was shot in the Crooked Island Street area but died in hospital.

Ormand Leon is wanted for questioning in the investigation of Francisco Hanna's murder. Hanna was shot in Wilson Tract.

Detectives think Pyfrom can help their investigation into the stabbing death of a 17-year-old girl at Moss Town, Exuma, which reportedly occurred around 8.30pm on Friday, August 12.
Ormand Leon, 22, of Homestead Street off Wulff Road, is wanted for questioning in connection with a murder at Wilson Tract, which occurred on Sunday, July 10.

Officers of the Central Detective Unit think Chisolm may have some information about a double shooting on Fowler Street off East Bay Street, which occurred on Wednesday, October 26.

The country's homicide count was 110 as of last night.
Harbour fugitives and you commit a crime (from page one)

"... by the PLP as to which seats will be eliminated, but it is not a true story."

Mr Ingraham said the Free National Movement has been committed to reducing the size of the House of Assembly since first elected. He pointed out that in 1997 his party cut the number of seats from 49 to 40, while under the subsequent PLP government it was increased to 41. Mr Ingraham said it has always been the FNM's intention to reduce seats to 38, the minimum under the Constitution.

"In terms of how the lines are configured, I am not familiar with the details of that," said Mr Ingraham. "All I know is the FNM would have drawn equitable and fair lines consistent with its mandate to see as many seats as possible have an equal number of votes, and where they are not equal, ensure the inequality does not exceed a certain percentage."

The Prime Minister said the Family Islands will continue to have 10 seats, even though they have 10 to 15 per cent of the total registered voter population in the Bahamas. Grand Bahama will also maintain its current five seats.

There are now 96,000 registered voters in New Providence, Mr Ingraham confirmed.

He said: "New Providence has 77 per cent of the population, and so we are seeking to have 23 seats, which should produce an average number of voters per constituency of 4,170 or thereabouts."

Mr Ingraham also confirmed the Boundaries Commission will propose that some constituency names be changed and boundary lines altered.
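The average Mr Ingraham quoted can be checked directly against his own figures (96,000 registered voters across a proposed 23 New Providence constituencies). This is purely an illustrative sketch of the arithmetic using the numbers reported above, not anything produced by the Boundaries Commission:

```python
# Figures quoted in the article.
registered_voters = 96_000   # registered voters in New Providence
proposed_seats = 23          # proposed New Providence constituencies

average = registered_voters / proposed_seats
print(f"Average voters per constituency: {average:,.0f}")
```

The result, roughly 4,174, is consistent with the "4,170 or thereabouts" average cited by the Prime Minister.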
Responding to recent PLP claims that a smaller number of seats will put a strain on members of Parliament, Mr Ingraham said with boundary changes, constituencies in New Providence will grow on average by 500 voters. The argument that such an increase would spread MPs too thin is nonsensical, according to Mr Ingraham.

"I do not understand what they mean by a strain on MPs; the lazy ones among us will always be lazy. I cannot imagine why there would be a strain on an MP in New Providence to visit his constituency and be responsive to them. After all, the government gives them $1,500 a month to maintain an office and be available to them," he said.

"Let me make this clear, South Beach has elected me and supported me and I am proud and thankful for that. In fact, I never knew how much I was appreciated until the issue of Exuma came up," he said.

"I was born in Exuma and the majority of my family still resides there. My family has requested me to go there. I feel at home there. It is the island I love and I hope one day to represent them."

Mr Neymour said if he is chosen to run in Exuma, he is not worried about the current Member of Parliament, Anthony Moss, because he is sure he can defeat him.

"I am not overly concerned about Mr Moss. He should not be concerned about me. He has his own concerns. From what I hear, his own party (PLP) has concerns about him. What I am concerned about is the island of Exuma and its development," he said.

"The race is not about me or Mr Moss, it is about finding the best possible leadership for the island. Exuma is in need of leadership. It is in transition and needs someone strong and influential who has the full support of their party. I have always assisted Exuma and I will continue to assist them."

The Boundaries Commission recently proposed joining Ragged Island to Exuma to form one constituency. Currently it is joined to Long Island.
Ragged Island is perceived as an FNM stronghold, and political observers have speculated it was merged with Exuma to secure a win for the party. Mr Neymour denied this was a move by the government to increase his chances in Exuma, calling it rather "the only sensible thing to do".

"Joining Ragged Island to Exuma is the right choice, since that is how it was for years. Ragged Island has always been in Exuma. Only recently, over the last two elections, has it been attached to Long Island. So we are only reuniting it to where it has always been," he said.

"The daily operations for Ragged Island all run out of Exuma already. If you go to Exuma you will notice most of the people there originated from Ragged Island."

Mr Neymour said regardless of whether he is ratified for Exuma or South Beach, he will serve to the best of his ability.

The Boundaries Commission is expected to complete its final report by the end of the week.

Phenton Neymour: Let the party decide (from page one)

PM: BOUNDARY CLAIMS FALSE (from page one)

PM HUBERT INGRAHAM

He also pleaded guilty to armed robbery, possession of an unlicensed firearm and possession of ammunition.

Bradley Burrows, who was also charged with Mader, was on Monday acquitted of all the charges in exchange for becoming a key witness for the Crown.

At the sentencing on Wednesday, the prosecution informed Justice Hartman Longley that the mother of the deceased wished to address the court before sentence was passed. Defence attorney Carlson Shurland did not object.

Ms Patton told Mader that her son had plans for his future and did not deserve to die the way he did.

"The only thing I have left of my son is his ashes. You can talk to your family, but I can only talk to a picture, because you took him away from me," said Ms Patton, wiping tears from her eyes.
Ms Patton said Devon was her first born and a good brother to his two sisters. She told Mader that she does not hate him. "I pray for you every day. I forgive you," she said.

Prosecutor Erica Kemp recommended a prison term of 33 years on the murder count, 15 years on the second count, and 10 years each on the third and fourth counts. She said sentences are to run concurrently. Mrs Kemp noted that Mader had been in prison awaiting trial since last April and said the time would be taken into account.

Attorney Carlson Shurland said that he approves of the sentence handed down on his client.

"I feel (the sentence) is fair, and it allows him to be rehabilitated and to become a productive member of society. It will serve as a lesson he can carry to prison."

Mr Shurland stated that his client did the right thing by pleading guilty to the offences and by expressing his remorse.

"He could have received 50 to 60 years. It was a no-brainer to plead guilty of the offence, which he admitted and confessed, saving himself 30 years of jail time," he said.

Mader's family did not take the sentence well. As Mader was led away in handcuffs by police, a family member, who was identified as the mother, wailed and collapsed to the ground outside the courthouse.

33 YEARS FOR CONVICTED MURDERER (from page one)

A MAN who received a life-saving defibrillator has returned to the Bahamas hospital that helped him in order to receive a surgical upgrade.

In 2002, Guyanese native Professor Ulric Trotz became the first person to receive a cardiac resynchronization therapy defibrillator (CRT-D) from a Bahamas surgical team. The CRT-D is a life-saving device for patients with heart conditions.

He returned to the Bahamas this month for an upgrade, deciding to work with the same cardiac team at The Bahamas Interventional Cardiology Centre (Cath Lab) at Doctors Hospital that helped to turn his life around.
"Professor Trotz expressed his confidence and satisfaction with the care and services he received and, nine years later, he returns to The Bahamas and The Bahamas Heart Centre from Belize, where it was arranged to have his life-saving device replaced," said Domica Davis, marketing and public relations officer at the Bahamas Heart Centre.

The team of Dr Delton Farquharson, surgeon; Dr Pablo DeSouza, anaesthetist; Antoine Roberts, cardiovascular technologist; and Celeste King-Dorsett, chief cardiac nurse, completed the upgrade successfully.

"Professor Trotz returned to Belize with a hearty smile. He expressed that he feels great and is so confident of the surgery, he is going to wear a white shirt every day for the rest of his vacation. God willing, outside of pleasure, he will return to the Bahamas to Dr Conville Brown and team for another check-up," said Ms Davis.

The implanted device resets the timing of the heart's ventricles while also providing a backup system in case of sudden cardiac arrest.

In 2002, Professor Trotz's routine vacation to the Bahamas turned into a nightmare when he became seriously ill with acute heart failure. He was treated by Dr Conville Brown, who, with the wider team, oversaw the complete recovery of Professor Trotz.

The fact that Professor Trotz flew all the way to the Bahamas for his device to be replaced makes him and his team extremely pleased and happy, said Dr Brown.

"He could have gone anywhere in the United States, but he chose the Bahamas. It showed the confidence he had in our services; it was enough to return nine years later," he said.

The case of Professor Trotz demonstrates the multiplier effect of medical tourism, said Dr Brown.

"What we did was ensure that shortly after his procedure, we were able to get him to join his wife and enjoy the amenities here in the Bahamas. This is something that needs to be encouraged at a much larger scale in the Bahamas," he said.
Nurse King-Dorsett, of the Bahamas Heart Centre, said: "This speaks volumes for the medical tourism industry. Tourists can travel to the Bahamas and know that there are doctors here who are trained and fully qualified to not only take care of them, but provide any intervention that may be required."

Patient's trip shows faith in team

EXPERT STAFF: Pictured, from left, at the Bahamas Interventional Cardiology Centre at Doctors Hospital are cardiac nurse Mathew Sebastian, head nurse Celeste King-Dorsett, surgeon Dr Delton Farquharson, Professor Ulric Trotz, cardiologist Dr Bimal Francis, Cath Lab technologist Antoine Roberts and anaesthetist Dr Pablo DeSouza.

By NEIL HARTNELL
Tribune Business Editor

The Government has little choice but to continue privatising key revenue-generating agencies, such as the Registrar General's Department, a leading accountant warning yesterday that the public sector was failing to attract the talent that is needed to take the country into the next century.

Raymond Winder, managing partner of Deloitte & Touche (Bahamas), told Tribune Business that despite the Government's commitment to reducing public sector response times and improving efficiencies, it did "not appear to be making the headway we need to make to be more competitive", suggesting that this reduced the Bahamas' attractiveness as a...

By NEIL HARTNELL
Tribune Business Editor

FIRSTCARIBBEAN International Bank (Bahamas) has no plans of following its Jamaican affiliate's lead by de-listing from the Bahamas International Securities Exchange (BISX), Tribune Business was told yesterday, even though the percentage of its stock in Bahamian public hands remains well below the exchange's minimum 25 per cent threshold.
Marie Rodland-Allen, FirstCaribbean International Bank (Bahamas) director, responding to Tribune Business's inquiries after its Caribbean affiliate unveiled plans to de-list from the Jamaican Stock Exchange, said: "We have formulated no plans to de-list or increase the local shareholding at this stage.

"The action in Jamaica was done at the request of the Jamaican authorities, and we have had no such request from the Bahamas authorities. The Jamaican board's decision was in direct response to repeated requests...

THE TRIBUNE SECTION B business@tribunemedia.net THURSDAY, NOVEMBER 17, 2011

By NEIL HARTNELL
Tribune Business Editor

A LEADING Bahamian law firm yesterday told Tribune Business that a wreck salvaging industry worth potentially hundreds of millions of dollars might have been unleashed by law changes passed this week, disclosing that it had been contacted by three to four major salvage groups already.

The Bahamian law firm, well-known to Tribune Business but requesting anonymity because it wanted to protect clients still in the infancy of their exploration discussions, said amendments to the Antiquities, Monuments and Museums Bill passed by the House of Assembly had paved the way for a sector that could create numerous tourism and cultural spin-offs.
The firm was meeting with one party interested in salvage/excavation opportunities in Bahamian waters in Miami yesterday, and said it was sure the amendments, which lay out the statutory framework governing such operations in this nation's Exclusive Economic Zone (EEZ), could create a broad-based, viable and sustainable industry.

By CHESTER ROBARDS
Tribune Senior Reporter
crobards@tribunemedia.net

SANDALS Royal Bahamian is eyeing a 2012 first quarter opening for the guest rooms refurbished via a $20 million investment, its general manager yesterday saying the all-inclusive resort was eyeing an average 60 per cent occupancy rate for the remainder of 2011.

SANDALS TARGETS 60 PER CENT OCCUPANCY TO YEAR-END
Cable Beach all-inclusive holding 33% average guest return rate
SEE page 3B

FIRSTCARIBBEAN: NO PLANS FOR BAHAMAS DE-LIST
KEITH DAVIES
SEE page 6B

PUBLIC SECTOR NOT TAKING BAHAMAS INTO THIS CENTURY
Top accountant calls for further privatising, as Govt "not making headway" to make us competitive
Calls for Registrar General, revenue collection to be targeted
Says Govt needs to get more for money spent on civil service
RAYMOND WINDER
SEE page 5B

By NEIL HARTNELL
Tribune Business Editor

THE long-awaited $8 million Arawak Cay port initial public offering (IPO) is pretty much there and likely to finally launch some time next week, a variety of capital markets players telling Tribune Business it needed to come...

$8M PORT IPO ALMOST THERE
Prospectus said to be with Commission for approval, as next week launch targeted to beat Christmas rush
SEE page 8B

SALVAGING A MULTI-MILLION INDUSTRY
Law firm contacted by three to four major international groups on wreck exploration/recovery in Bahamas
Keen interest after reforms passed, with 200 wrecks said to be near GB alone
Terrific tourism and cultural potential
SEE page 7B
By DEIDRE M. BASTIAN

WITH the power of the Internet and trained eyes watching, it is important for a business to be uniquely identified and to communicate its message clearly. Equally, one of the easiest ways to recognise a company and distinguish it from its competitors is by its logo. That is arguably one of the most significant and valued elements of branding for any organisation.

Wikipedia defines a logo as being "a graphic mark or emblem commonly used by organisations, even individuals, to aid and promote recognition". Sounds simple, but in a nutshell a logo plays a life-size role in the overall development of a business.

In this context, a logo provides feedback to a potential customer, and its purpose is to make spectators say something like: "Hey, look at this, it's so cool." In my humble opinion, a great logo instantly connects people with product, hence that 'wow' factor. It can also be considered an art form, not a math factor, so here is a list of some of its common principles.
Flexibility: Every logo should be flexible so that it can be used on various media (print, online, mobile). It will not always be printed in full colour, so ensure it has adequate contrast to allow for black and white printing.

Research/Questions: It is always a good idea for designers to talk to their client at the start to ascertain future plans for the logo. Ask if it will be used for stationery, t-shirts, business cards, billboards, banners. Your logo should be able to answer the questions: Why? Who? and What? Why do you need this logo? What is its purpose, and who is the target? This constitutes good planning, and can assist designers in fine-tuning the logo for a variety of media.

Colour: When we see blue we think of the sea. Red represents danger, while green gives a feeling of calm with a reflection of grass and nature. Using these colours in the right context controls our thoughts in a good way. But choosing colour should be the last decision a designer makes when brainstorming a logo.

Timeless: Style changes, but logos shouldn't. As a result, being timeless should not alter the quality of your logo. Changing a logo every year is a grave error, especially if the customer hardly learned your logo or bonded with it in the first year.

Simplicity: Should everything in life be

GET THE PICTURE ON YOUR LOGOS
THE ART OF GRAPHIX
BY DEIDRE M BASTIAN
SEE page 19B
Patrick Drake said that for the rest of the year, the Cable Beach-based resort property will host numerous travel agents from the US and Canada, who will sell Sandals' newly-remodelled guest rooms to their customers. Mr Drake added that Sandals' mega familiarisation trips had already paid off, as several agents have made bookings for their clients this year and into 2012 since visiting the property. "We had a group last week, close to 200 agents, and this week already we have had a dozen bookings from those people that were here just last week. That gives us a pretty good feel for what is happening," said Mr Drake. "It is very costly to bring these people in and house them and feed them and entertain them for a time, but the rewards are without a doubt; you can see the blips in the occupancies when the agents get back home." More than seven groups, totalling 1,000 travel agents from the US and Canada, will have visited Sandals Royal Bahamian by the end of the year. Mr Drake said following the passage of Hurricane Irene in August, the resort had to refurbish some of its property, which he said has led to marked improvements in its look. And he added that bookings since the hurricane have been much better than expected. "We have seen an improvement in the occupancy that we originally were forecasting," he said. "But that seems to be quite traditional of the Bahamas, which seems to be a late booking market." After visiting the property yesterday, several travel agents insisted they would have no problem convincing their customers to visit Sandals Royal Bahamian.
They cited the ease of travel to the Bahamas, as well as the cuisine and friendliness of the people, as the top selling factors. Mr Drake said the resort was looking forward to the opening of its refurbished 60-year-old building, and to further increasing its foothold on the all-inclusive market in the region. "Everybody is going after a diminishing market," he said. "If you're not at the top of the stream, obviously there is no future at the bottom. At the top of the market is still where it's at, and if you're going to give that sort of commitment then you have to have a product to back it up." Mr Drake added that more flights to the Bahamas have also created a cosmopolitan mix of guests for the resort, which holds an average return guest rate of 33 per cent. "With airlift coming in from so many destinations we are seeing quite a nice international mix," said Mr Drake.

BUSINESS, THE TRIBUNE, THURSDAY, NOVEMBER 17, 2011, PAGE 3B
By NATARIO McKENZIE
Tribune Business Reporter
nmckenzie@tribunemedia.net

COMPETITION in the Bahamian cellular telecommunications market may only arrive in 2016, it was suggested yesterday, as Cable Bahamas confirmed it would be bidding on the first licence to become available in 2014. Mark Cabrelli, the BISX-listed communications provider's vice-president of marketing and sales, said it will be looking at adding cellular phone services to its offering once the Bahamas Telecommunications Company's (BTC) current monopoly comes to an end. Speaking at a press conference to announce the launch of Cable Bahamas' fixed landline phone service, REVOICE, Mr Cabrelli said: "There is a monopoly market at the moment, and will be for the next few years. We are going to concentrate on the here and now. We're focusing on the three main products we can offer today. Looking to the future, cellular is absolutely on the agenda once the monopoly is taken away and it becomes competitive. I understand that there is going to be at least one further licence that is going to be issued, and Cable Bahamas will hope to be in the running for that. We'll certainly be very interested in that when it happens." Among its main competitors for that licence will be pan-Caribbean cellular operator, Digicel. BTC has a cellular monopoly in the Bahamas until April 6, 2014. Given that it will possibly take one year to award the licence post-bidding, and another year for the winner to get its infrastructure ready, it is possible cellular competition may only become a reality in 2016.
In the meantime, Cable Bahamas expects its new landline offering to provide serious competition to BTC. On Wednesday, the company completed its "triple play" services of REVTV, its cable television offering, REVON, its Internet offering, and now REVOICE, its fixed-line offering. Cable Bahamas says a variety of packages will be offered to residential and business customers, from local calling and international calling to unlimited plans.

CELLULAR COMPETITION TO ONLY COME IN 2016
Cable Bahamas confirms aim to bid on first licence coming available in 2014

MARK CABRELLI, vice-president of marketing and sales, and Sharnette Curry, marketing director

SANDALS TARGETS 60 PER CENT OCCUPANCY TO YEAR-END (FROM page one)

By NATARIO McKENZIE
Tribune Business Reporter
nmckenzie@tribunemedia.net

CABLE Bahamas (CBL) expects a significant uptake in its new fixed-line service, REVOICE, leading up to the Christmas season and into 2012, a senior marketing executive told Tribune Business yesterday. The company yesterday officially launched its fixed-line offering, REVOICE, via its subsidiary Systems Resource Group (SRG), adding the final piece to the company's triple play communications, with REVTV and REVON constituting its cable television and Internet offerings, respectively. Mark Cabrelli, the company's vice-president of marketing and sales, said: "We have launched it at this time because we think the market is ready for a competitor to come in and offer fixed-line services. We have offered it at this time of the year because we're coming up on the holiday season, so we are expecting and hoping that there will be a significant uptake leading up to the Christmas period and then into next year. We think 2012 is going to be quite a defining time for the company in terms of getting a real solid market share of the fixed voice market."
While not disclosing exact figures, Mr Cabrelli said that thousands of customers have been introduced to the offering via the company's "soft launch". "We had a soft launch," he added. "The numbers were in the thousands in terms of bringing customers on board, and we hope that that is going to increase substantially as we move into the New Year." Mr Cabrelli said Cable Bahamas, from now until the New Year, will roll out the REVOICE offering in New Providence, with plans to introduce it to the Family Islands in early 2012. On the fixed-voice side, BTC is estimated to have 98 per cent market share, Cable Bahamas inheriting 2 per cent from SRG. Mr Cabrelli said Cable Bahamas will be offering its fixed-line service at significantly lower costs to its competitor, BTC. "We will be offering savings against the fixed-line company today. We are offering various packages. We are offering discounts to the incumbent for sure. Once we can offer our bundled service, there will be even deeper benefits we can pass on to our customers," he added.

COMMONWEALTH OF THE BAHAMAS
IN THE SUPREME COURT
1997, No. 95, Equity Side

IN THE MATTER OF WADE ADAMS CONSTRUCTION LIMITED (In Voluntary Liquidation under Supervision of the Supreme Court) AND IN THE MATTER of the International Business Companies Act 2000, Ch. 309, Statute Laws of The Bahamas 2000 Edition AND IN THE MATTER of the Companies Act 1992, Ch. 308, Statute Laws of The Bahamas 2000 Edition

NOTICE

NOTICE is hereby given that the Creditors of the above-named Company are required, on or before the 19th day of December, 2011 to send their names and addresses, with particulars of their debts or claims, and the names and addresses of their Attorneys (if any), to the undersigned, Paul F. Clarke, at One Montague Place, East Bay Street, P.O. Box N-3932, Nassau, Bahamas. FURTHER TAKE NOTICE that a dividend is intended to be declared in the above matter.
Creditors who do not prove their debts or claims by the 19th day of December, 2011 will be excluded from the benefit of this dividend.

Dated 15th November, 2011
Paul F. Clarke

THOUSANDS EYE CABLE AS FIXED-LINE ALTERNATIVE TO BTC

...foreign direct investment (FDI) destination, Mr Winder said that while there were many highly qualified, productive civil servants in the public service, the present fiscal situation meant the Government needed to get more for the money spent. In other words, the Deloitte & Touche (Bahamas) managing partner is saying the public sector needs to do more with less, become more productive and efficient, and deliver greater value for taxpayer money. Tracing the public sector's increasing difficulty in attracting the best and brightest Bahamians to the private sector's evolution, coupled with the well-publicised education system failings, Mr Winder said: "On the drive to independence, the Bahamas was able to attract the best and brightest Bahamians into the public sector, but as the nation grew and opportunities in other sectors became available (accountants, lawyers, doctors, engineers and the like), those professions were able to attract the best talents coming out of school in the last 15-25 years." The better salaries and talents on offer in the Bahamian private sector, Mr Winder added, had coupled with the fact that "the quality of education has not kept pace; we've been unable to improve the results of students, and GPAs are below standard". This, the Bahamas' lead WTO negotiator added, has created "a deficiency in the public sector being able to attract the kind of talent that is needed to take the country into the next century. This is one of the reasons why the Government is finding it difficult to retire some of its better civil servants, and when we look at the ease of doing business, we have dropped several notches in the rankings," Mr Winder told Tribune Business.
"In order for us to raise that level, and be able to solve many of the challenges we have in the public sector, the Government will have to continue privatising the various public corporations and major activities within the Government itself, such as the Registry of Companies and Registrar General's Office." He also urged the Government to focus on outsourcing various aspects of revenue collection to the Bahamian private sector, plus operations such as the Tonique Williams-Darling Highway landfill. "The position is for us, that even though the Government seems to be committed to improving the timeframe, efficiency and ease of doing business, we don't seem to be making the headway we need to be competitive," Mr Winder told Tribune Business. "It's the ease of being able to provide these services in a timely and efficient way, and compete in a way to attract the kind of foreign direct investment, and additional companies and individuals, wanting to do business in the Bahamas." Without such improvements, he warned, "it makes it very difficult to do business". Numerous businesses and entrepreneurs, Mr Winder said, frequently seemed to be concerned and complaining about how long and how quickly they get responses back from the various ministries. "This is not to say we don't have good and qualified people in the public sector," he added, "but because of the demand for talent in other sectors, and what is seen as a lack of sufficient opportunities over time, this is going to make life in the public sector much more difficult in terms of what it needs to do." And such woes were set to be further exacerbated by the fiscal constraints the Government is now labouring under. "When you look at the Budget and the need to curtail spending, we're going to need to get more for the money spent in this area, and government cannot afford to grow public expenditure without getting the benefits needed," Mr Winder told Tribune Business.
"When you look at the cost pressures from civil service retirements, all this forces us to be more efficient with the individuals we currently have, because the costs are not decreasing."
PUBLIC SECTOR NOT TAKING BAHAMAS INTO THIS CENTURY (FROM page one)

...from the Jamaica Stock Exchange to address the regulatory breach there, which has no bearing on our Bahamas investment. FirstCaribbean International Bank (Jamaica) moved to de-list in that nation after it came under pressure because the percentage of its stock in the hands of Jamaican public investors, at around 4 per cent, was well below the 20 per cent minimum set by that stock exchange's listing rules. BISX actually has a higher minimum percentage of a listed stock that must be in the hands of Bahamian institutional and retail investors, at 25 per cent. It is thus interesting to contrast the respective approaches of BISX and its Jamaican counterpart, especially since FirstCaribbean International Bank (Bahamas) had just 4.79 per cent of its stock, as at October 31, 2010, in public hands. The BISX 25 per cent minimum threshold will also come into play with two impending, government-connected initial public offerings (IPOs): the Arawak Cay Port issue, and the likely $37 million Bahamas Telecommunications Company (BTC) transaction. Both these IPOs will offer Bahamian public investors cumulative stakes of 20 per cent and 9 per cent, respectively, well below the BISX threshold.
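The free-float arithmetic behind these listing thresholds is simple enough to sketch. The function names below are ours, not BISX's, and the share counts are the ones reported in this article:

```python
def public_float_pct(public_shares, total_shares):
    """Percentage of a listed company's shares held by public investors."""
    return 100.0 * public_shares / total_shares

def meets_listing_minimum(public_shares, total_shares, minimum_pct):
    """Check a free float against an exchange's minimum public holding."""
    return public_float_pct(public_shares, total_shares) >= minimum_pct

# Figures reported here: roughly 5.7m of FirstCaribbean (Bahamas)'s
# 120.221m shares were in Bahamian public investor hands.
float_pct = public_float_pct(5_700_000, 120_221_000)
print(round(float_pct, 2))          # well under BISX's 25 per cent minimum
print(meets_listing_minimum(5_700_000, 120_221_000, 25.0))
```

By the same arithmetic, the planned Arawak Cay (20 per cent) and BTC (9 per cent) floats would also sit below a strict 25 per cent threshold, which is why the "all circumstances" discretion described below matters.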
While confirming that nothing was afoot in the Bahamas with respect to FirstCaribbean's listing here, BISX's chief executive, Keith Davies, implied that the 25 per cent benchmark was not a hard and fast rule, and that the exchange's assessment of any potential listing was based on all circumstances, such as the total value and the market's ability to absorb it. "Once they [FirstCaribbean] initially applied for listing, we determined they had a sufficient market, despite the fact the percentage was below our operational requirements," Mr Davies said. "They had a sufficient market to justify listing and trading on the exchange. That position has not changed to date, and we've had no reason to review that." Around 5.7 million FirstCaribbean International Bank (Bahamas) shares were in Bahamian public investor hands as at October 31, 2010, out of a total 120.221 million shares. The 95.21 per cent balance is held by the bank's Barbados-based regional parent. If FirstCaribbean International Bank (Bahamas) followed its Jamaican affiliate's lead it would have been a big blow to the Bahamian capital markets, at least from a market capitalisation standpoint. It is the largest BISX market cap, accounting for around 40 per cent of the market, and a relatively liquid stock. "It's a sought after stock. It's not the most liquid, but it's the largest cap stock, and when you talk about the amount in public hands it's a very liquid company," the BISX chief executive added. Mr Davies, meanwhile, said BISX had previously communicated with FirstCaribbean International Bank (Bahamas) as to its future plans and offering additional securities to increase the percentage held in the market. He added that the bank had indicated it would review this, and there were "a few other options discussed as well relating to the percentage in public hands that I'm not at liberty to discuss". Mr Davies told Tribune Business: "We were satisfied with their answer.
"There are any number of factors to consider when increasing the percentage in public hands. That's the value of any particular issue, the ability of the market to absorb it, and its willingness to receive additional securities." The BISX chief executive said each listed stock, and the percentage of shares in public hands, had to be viewed on its merits. It was an issue that was constantly reviewed to determine if the percentage was suitable for a company's continued listing. Mr Davies said many listed companies with public shareholdings of less than 25 per cent had existed before BISX, and were grandfathered in after its creation. As regards new listings, they would have to meet the 25 per cent benchmark "unless there are circumstances that warrant that to be different". Explaining the rationale for potentially treating the Arawak Cay port and BTC IPOs differently, Mr Davies said: "It depends on the size of the company. You may have a billion dollar company seeking to raise several hundreds of million dollars. It can depend on the size of the company, the age of the company." Given that the largest IPO in Bahamian capital market history, this year's Commonwealth Brewery IPO, raised just over $50 million (the National Insurance Board picking up the roughly $12.5 million balance), the decision to float just 9 per cent of BTC, as opposed to the Government's entire 51 per cent stake, seems realistic and prudent. "There are a number of factors to do with the size and reception of the market, and its ability to absorb any potential offering," Mr Davies told Tribune Business. And, as with all potential listings, the Arawak Cay port and BTC will, in their application, have to list and explain the amount/percentage of shares being offered to the Bahamian public, and the price being paid. "They must answer how much they wish to sell, and explain why they wish to sell that amount and what price," Mr Davies added.
"We will wait to see what they have to say, and make a determination thereafter."

NOTICE
International Business Companies Act (No. 46 of 2000)
Exclusive Resorts AB-I, Ltd.
Registration Number: 134044B
(In Voluntary Liquidation)

Notice is hereby given that in accordance with Section 138(4) of the International Business Companies Act (No. 46 of 2000), Exclusive Resorts AB-I, Ltd. commenced voluntary liquidation on 10th November, 2011. Any person having any claim against Exclusive Resorts AB-I, Ltd. is required on or before the 12th day of December, 2011 to send their name, address and particulars of the debt or claim to the Liquidator of the company, or in default thereof they may be excluded from the benefit of any distribution made before such claim is approved. GSO Corporate Services Ltd., of 303 Shirley Street, Nassau, The Bahamas is the Liquidator of DEGA INC.

GSO Corporate Services Ltd.
Liquidator

FIRSTCARIBBEAN: NO PLANS FOR BAHAMAS DE-LIST (FROM page one)

...centred in, and around, the northern Bahamas, Little Bahama Bank and the island of Grand Bahama, in particular. Confirming it had suggested the 75/25 profit split between excavator and government, based on points, for each artifact discovered in Bahamian waters, a top partner at the law firm, speaking to Tribune Business on condition of anonymity, said: "We've had a number of calls from international treasure salvagers who are keenly interested in salvaging the Bahamas, and have been keen to do so for many years. This interest goes back for at least five years. We've heard of at least two wrecks. We'd say it could possibly be an industry in the hundreds of millions. It has terrific touristic and cultural potential. We've already had interest from three or four major groups, and we think more will come. There's said to be 200 wrecks around Grand Bahama alone."
Wreck exploration and salvaging, and the prospect of finding valuable artifacts, could be another potential economic sector for a Bahamian economy desperately in need of diversification and new revenue/employment sources. A moratorium on such activities had been in place for several years, and that, coupled with uncertainty over the legal, regulatory and profit-sharing regime governing it, had deterred major international salvagers from dipping their toe into the Bahamian market. Given this nation's position at the heart of the Caribbean, Atlantic and Florida waterways, and rich history (having been discovered by Christopher Columbus, and later used as a piracy bolt-hole), it would seem likely there are numerous wrecks in deep-lying Bahamian waters. "It could have great touristic spin-offs and job spin-offs, not only on the boats," the law firm's leading partner told Tribune Business. "You set up a processing centre, where you clean and certify artifacts. That's an industry by itself." The attorney said many Bahamians had made money by salvaging wrecks they knew about, referring to one now-deceased Abaconian who had known the whereabouts of a major wreck, and had been able to recover gold coins and other valuable artifacts. That, though, had reached the stage where major heavy-duty equipment was required to complete any further salvage. The senior attorney also told Tribune Business he had been told that in Switzerland, "they're constantly auctioning Bahamian artifacts, which have been stolen from our country". As a result, the legislative amendments to the Antiquities, Monuments and Museums Bill have been designed to protect potentially valuable Bahamian artifacts that may be recovered, enabling them to be retained for museums to protect this nation's cultural heritage as well as serving as a possible tourist attraction.
The 75/25 split is conditional, with the Government getting more depending on the artifacts' cultural value, and deciding those that were valuable upfront. The Bill states: "Both government and licensee to agree in writing that government's retention of artifacts important to the protection of the national patrimony may exceed government's 25 per cent share in certain years, with the imbalance to be corrected by future divisions." Meanwhile, the Bahamian law firm said it awaited the Bill's accompanying regulations, which were needed to govern this industry and avoid the potential abuse and mistreatment of the Bahamas' natural resources. "Our country has previously learned that should we not protect our assets they will remain subject to today's pirates, much in the form, and not the appearance, of days of old," the law firm added. "In our reading and understanding of the Bill, we are grateful to note that the Government of the Bahamas has ensured that Bahamians and Bahamian flagships may now lawfully explore and discover underwater cultural heritage artifacts and items without penalty or sanction. We are very much aware that there remains the needed or required application and approval process to further survey and/or recover and/or salvage these items and artifacts comprising any underwater cultural heritage."
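The Bill's points-based 75/25 division, with the Government able to retain culturally important pieces beyond its share in a given year and correct the imbalance in later divisions, can be sketched as an allocation routine. This is purely illustrative: the artifact names, point values and carry-forward bookkeeping are our assumptions, not the Bill's actual procedure:

```python
def divide_artifacts(artifact_points, govt_share=0.25):
    """Illustrative split of one season's finds by appraised points.

    The government picks the highest-point (most culturally significant)
    items first until it reaches its share of total points; any excess it
    retains is returned as a carry-forward credit against future divisions.
    """
    total = sum(points for _, points in artifact_points)
    target = govt_share * total
    govt, licensee, taken = [], [], 0.0
    for name, points in sorted(artifact_points, key=lambda a: -a[1]):
        if taken < target:
            govt.append(name)
            taken += points
        else:
            licensee.append(name)
    carry_forward = taken - target  # excess to credit back in later years
    return govt, licensee, carry_forward

finds = [("gold coin hoard", 60), ("anchor", 25), ("ceramics", 15)]
print(divide_artifacts(finds))
```

In this invented example the government's retained item alone is worth 60 of 100 points against a 25-point entitlement, so 35 points would be owed back to the licensee in future divisions, mirroring the Bill's "imbalance to be corrected" clause.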
PROCLAMATION

WHEREAS, there has been an Anglican presence in these Islands for more than 360 years; AND WHEREAS, pastoral and administrative oversight thereof was given originally by the Bishop of London, who was responsible for church life in all British Colonies; AND WHEREAS, the territories of the West Indies developed to the extent that in 1824 pastoral and administrative authority was transferred thereto with the establishment of the Anglican Dioceses of Jamaica and Barbados, and The Bahama Islands and The Turks and Caicos Islands coming under the oversight of the Diocese of Jamaica; AND WHEREAS, ongoing development in the Bahama Islands and The Turks and Caicos Islands led to the separation from the Anglican Diocese of Jamaica and their formation into a self-governing entity, then called the Diocese of Nassau; AND WHEREAS, with the issuing of Letters Patent by Queen Victoria on 4 November, 1861, Dr. Charles Caulfield was designated as the Bishop-elect of the new Diocese of Nassau, the Parish of Christ Church was designated the Cathedral and, only then, could the Towne of Nassau be created a city; AND WHEREAS, the history, evolution and development of both the said Diocese of Nassau and the said City of Nassau have been inextricably intertwined over these centuries; AND WHEREAS, this year marks the 150th Anniversary of their common foundation; NOW, THEREFORE, I, Hubert A. Ingraham, Prime Minister of the Commonwealth of The Bahamas, do hereby proclaim the month of November, 2011 as 150TH ANNIVERSARY MONTH, celebrating the establishment of the Anglican Diocese of The Bahamas and The Turks and Caicos Islands, and the establishment of the City of Nassau.

HUBERT A. INGRAHAM
Prime Minister

IN WITNESS WHEREOF, I have hereunto set my Hand and Seal this 18th day of October, 2011.

SALVAGING A MULTI-MILLION INDUSTRY (FROM page one)
Multiple sources, some close to the situation, told Tribune Business that the IPO prospectus was now at the Securities Commission for the r egulators review and approval, the Government and Ministry of Finance having finished their work on it. The Arawak Cay Port Development Company and its financial advisers/placement agents, CFAL and Providence Advisors, had initially hoped to launch the IPO intended to give Bahamian institutional and retail investors a collective 20 per cent stake in New Providences sole purpose-built commercial shipping port in late September/early October. That was some six weeks ago, and Tribune Business was told that the original timeframe had been delayed due to the fact that numerous persons, including the Ports Board and private shareholders, plus the Government, had to agree and sign-off on the prospectus. It is understood, though, that everyone connected with the IPO the Port company, the Government, regulators and advisers is conscious of the need to launch the offering by next week, to give investors the normal four weeks to decide whether to invest. Any later, and the Arawak Cay Port IPO runs the risk of clashing with the last-minute Christmas shopping period, when minds are elsewhere. The delay in getting this IPO off the ground has also almost certainly pushed the Bahamas Telecommunications Company (BTC the New Year. Everything is at the [Securities] Commission, one wellplaced source told Tribune Business, declining to comment further. Philip Stubbs, the Securities Commissions executive chairman, did not return Tribune Businesss calls seeking comment. However, another wellconnected contact also familiar with the IPOs progress, said: Its pretty much there, to be honest. Its just taken longer than planned. Various people had to review it, and the Ports operating plan continues to be tweaked. There was a series of things that added up to a delay from the earlier proposed timing. 
"The Government took a while to come back with their comments, which were not earth shattering. There's a lot of players involved here that need to look at it and give their input." With construction of the Arawak Cay port ongoing, its financial numbers continually require tweaking, and its private-partnership structure requires input from numerous parties into the decision-making. Asked whether all parties were aware of the impending clash with Christmas, the source replied: "They're very conscious of that, so hopefully it will be imminent. There's only a small number of things that need to be dealt with." Tribune Business also understands that the IPO prospectus will make clear that the guaranteed 10 per cent internal rate of return (IRR) the port will enjoy, as set out in the Memorandum of Understanding (MoU) between the Government and private sector, applies only to the company, and does not mean investors enjoy a guaranteed annual 10 per cent return. "There was some discussion about that," Tribune Business's source conceded, "but the MoU is pretty clear that attaches to the project. There was some discussion about making investors understand that does not mean a minimum 10 per cent dividend." Given that the Arawak Cay Port is in its initial construction and development phase, it is unlikely to be paying out 100 per cent of its annual earnings as dividends for some years. The delay in the IPO's launch may also mean that its advisers will have to again whet market appetite, and regain investor confidence, for the $8 million issue by explaining what caused the push back. Still, one Tribune Business source expressed confidence the issue would still be oversubscribed. "All the PR's lined up for it," they added, "and maybe people can buy shares as Christmas presents. Based on the expressions of interest that have been coming in, it's highly likely this will be oversubscribed. It creates a nice ownership opportunity for Bahamians."
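The distinction the prospectus is reportedly drawing, a project-level IRR versus a guaranteed annual investor return, is easy to illustrate with a toy cash-flow profile. The figures below are invented for illustration and are not the Port's numbers:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical profile: $100 invested, nothing paid out while the port
# is built, then rising distributions and a terminal value in year six.
flows = [-100, 0, 0, 5, 8, 12, 148]
print(round(irr(flows) * 100, 1))
```

Here the project clears roughly a 10 per cent IRR even though no dividend at all is paid in the first two years; the return is realised mostly through later distributions and terminal value, which is exactly why a project IRR is not a minimum annual dividend.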
Another capital markets source added that, if the $8 million issue was oversubscribed, the Port's advisers would likely adopt a "bottom up" approach, ensuring all subscribers received shares up to a certain level. They would then have to determine how to allocate the remainder. The source said the main objective with the Arawak Cay Port offering was to deepen and broaden Bahamian participation in the capital markets, and ownership of key economic assets. The Government and private sector each invested $20 million into the Arawak Cay Port, and are selling off 20 per cent, or $4 million each, of their stakes. Once the IPO is completed, the Government and private sector will each own a 40 per cent stake, with the public holding 20 per cent. A $40 million private placement, scheduled for next year, is designed to replace the original line of bank credit financing taken out for the construction phase.

[BISX market report tables (listed and traded securities, debt securities and mutual funds) as of Wednesday, 16 November 2011: BISX ALL SHARE INDEX closed at 1,355.68, up 0.03 (0.00 per cent); year-to-date down 143.83 points (-9.59 per cent).]
[Market data table, figures garbled in extraction. BISX listed and traded securities as of Wednesday, 16 November 2011. BISX All Share Index: close 1,355.68 | chg 0.03 | %chg 0.00 | YTD -143.83 | YTD % -9.59. FINDEX: year end 2008 -12.31%. BISX listed debt securities (bonds trade on a percentage pricing basis) included Bahamas Note 6.95% (2029) at 99.46 and Fidelity Bank Note 17 (Series A) at 100.00, with maturities ranging from 30 May 2013 to 20 November 2029, alongside Royal Fidelity Merchant Bank & Trust Ltd over-the-counter securities and the principal-protected TIGRS funds, Series 1-3. WWW.BISXBAHAMAS.COM | Telephone: 242-677-BISX (2479) | Facsimile: 242-323-2320. To trade call: CFAL 242-502-7010 | Royal Fidelity 242-356-7764 | FG Capital Markets 242-396-4000 | Colonial 242-502-7525.]
[Over-the-counter securities and BISX listed mutual fund NAV listings for Royal Fidelity Merchant Bank & Trust Ltd and CFAL Securities Ltd, with 6-month NAVs and NAV dates from 31-May-11 to 31-Oct-11. Figures garbled in extraction.]

LEGAL NOTICE: COMMONWEALTH OF THE BAHAMAS, IN THE SUPREME COURT, COMMON LAW AND EQUITY DIVISION. Notice of the Petition of BERTRAM KENNETH SAWYER of Harbour Island, under the Quieting Titles Act, in respect of a parcel of land containing 5,219 square feet, situate approximately 75 feet east of the junction of Dunmore Street and Duke Street, Harbour Island, bounded by the properties of Howard Albury, the Public Road, Mercy Higgs Johnson and Bertram Kenneth Sawyer, with such position, shape, marks, boundaries and dimensions as shown on the plan filed in the matter. The Petitioner claims to be the owner in fee simple in possession of the said land and has applied to the Supreme Court to have his title investigated and its nature and extent determined and declared in a Certificate of Title. Copies of the Petition and plan may be inspected at the Registry of the Supreme Court, Ansbacher Building, East Street and Bank Lane, Nassau; at the Office of the Administrator and on the notice board of the Local Constable in Harbour Island; or at the Chambers of Gibson, Rigby, Kialex House, Dowdeswell Street, Nassau. Any person having an adverse claim not recognised in the Petition must file in the Supreme Court and serve on the Petitioner or his attorneys an adverse claim in the prescribed form, supported by affidavit, by the specified day after the last advertisement appears, failing which such claim will be barred. Dated in July; Gibson, Rigby, Attorneys for the Petitioner. [Dates and certain figures garbled in extraction.]

NOTICE
INTERNATIONAL BUSINESS COMPANIES ACT (No 46 of 2000)
MIKADO HOLDING INTERNATIONAL LIMITED (IBC No 143391 B) (In Voluntary Liquidation)
NOTICE is hereby given that in accordance with Section 138 of the International Business Companies Act No. 46 of 2000, MIKADO HOLDING INTERNATIONAL LIMITED is in Dissolution.
Any person having a Claim against the above-named Company is required on or before 10th January, 2012 to send their name, address and particulars of the debt or claim to the Liquidator of the Company, or in default thereof they may be excluded from the benefit of any distribution. Rosana Hollins of Suite 2B, Mansion House, 143 Main Street, Gibraltar, is the Liquidator of MIKADO HOLDING INTERNATIONAL LIMITED.

Legal Notice
NOTICE
FLY ONE EIGHTY EIGHT LIMITED
NOTICE IS HEREBY GIVEN as follows:
(a) Fly One Eighty Eight Limited is in dissolution under the provisions of the International Business Companies Act 2000.
(b) The dissolution of the said Company commenced on [date garbled in extraction].

FROM page one: $8M PORT IPO ALMOST THERE

resolved with a KISS, even if it is a logo design? KISS stands for: Keep It Simple, Stupid. The famous saying "less is more" holds true, and a simple logo typically should be memorable and stand out from the rest.

Professionalism: Do you know that if your logo is perceived to look amateurish, so will your business? In essence, a professional business should look the part by investing suitably in a classic but attractive symbol. There are common reasons why many logos look amateurish, one of which is the business owner's desire to save money by designing the logo themselves, or retaining a friend/relative who has access to Photoshop to complete it as a favour.

Vector image software: Although it can be tempting to use a program such as Adobe Photoshop, it is standard practice to use Adobe Illustrator and its pen tool, and CorelDraw, which are more appropriate and obliging. Using raster images for logos is not advisable, as problems can arise with reproduction and zooming, which result in your graphic appearing pixelated (blotchy).
The main advantages of vector graphics are that the logo can be scaled to any size without losing quality; editing the logo later on is much easier; and it can be adapted to other media more easily than a raster image. Maintain visual consistency by making sure the logo looks the same in all sizes.

Stock art logos: Incorporating stock vector graphics in a company's logo is risky, and could possibly cause identity issues and misjudgment. My premise is this: if you are using a stock vector image from a stock pool, chances are it is also being used somewhere else in the world, which suggests that your logo is no longer unique. The purpose of a logo is to uphold a one-of-a-kind image of your business. Remember, once your logo is completed, ensure you register it immediately to prevent use or copy.

Designing for the client: You can often spot this logo design a mile away. Designers should never impose their personality on to a client's work. It is fine to advise and guide a client, but remain focused on the client's requirements by following their brief.

Here is a video sample of how to make a logo: ch?v=3r2qHTKPBmU (copy and paste into your url window)

Fonts: The choice of fonts can make or break a logo. This is the most important decision a designer can make, as more often than not, logos can fail due to a poor font choice. For example, if the font match is too close, the icon and font will compete with each other for attention. Using too many fonts is like trying to show someone an entire 300-page photo album at once. A maximum of two fonts of different weights is standard, and improves the legibility. The key is finding a balance somewhere in the middle, since every typeface has a personality. If the font chosen does not reflect the icon's characteristics, then possibly the entire message may misfire.

Finally, we need to understand that a logo never stands on its own.
It is always a part of a bigger picture, as its philosophy and other ideals represent the value of the company. I would like to believe that a logo should be an impressive, but seductive, way that your business earns respect and trust. Have you ever wondered whether adding a logo will give your business distinction, while showing off its swagger? Then think about this: logos are resilient and should convey the qualities and thoughts of your business. It helps to give your company an established and professional feel and, moreover, people normally find it easier to memorise or recall images than text. Remember the ole cliché: "A picture is worth a thousand words." In this case, the logo represents that picture, and can be remembered and identified with greater ease than a thousand words. So until we meet again, have fun, enjoy life and stay on top of your game.

NB: The columnist welcomes feedback at deedee2111@hotmail.com

About the Columnist: Ms Bastian is a trained graphic designer who has qualifications of M.Sc., B.Sc., A.Sc. She has trained at institutions such as: Miami Lakes Technical Centre, Success Training College, College of the Bahamas, Nova Southeastern University, Learning Tree International, Langevine International and Synergy Bahamas.

GET THE PICTURE ON YOUR LOGOS, FROM page two

Share your news: The Tribune wants to hear from people who are making news in their neighbourhoods. Perhaps you are raising funds for a good cause, campaigning for improvements in the area or have won an award. If so, call us on 322-1986 and share your story.
SPORTS

NASSAU DARTS ASSOCIATION, Nassau, Bahamas
Week 6, 9th November, 2011. Scores:
Central Bank Controllers 4, Green Parrot Bootleggers 8
Silver Dollar Coins B 4, Mandy's French Bakery 8
CSB Buccaneers 9, HammerHead Sharks 3
Backyard Destroyers 10, Shafters 2
Pro Plan Bandits 6, Charlie's Top Dogs 6
Moss Gas B 2, Sigma Shots 10
Conch Hill Breezers 2, Moss Gas A 10
Scorpio Bulls 6, Sands Bullets 6
Toads 4, B52's 8
Charlie's Devils 5, StingRays 7
Cricket Club LBW's 3, Lisa's Bums A 9
Panama Jack Bullshooters 4, Silver Dollar Coins A 8
Jacvar Bums 9, The Parlour Rum Runners 3

TRACK AND FIELD
This column takes a look at the worldwide rankings of Bahamian athletes in 2011, with some comparisons to 2010. The top Bahamian senior, 400m runner Demetrius Pinder, was ranked 9th with a 44.78sec clocking in 2011. On the junior level, Anthonique Strachan was the highest ranked for 2011, placing second in the 200m at 22.70sec. At the youth level, Latario Collie-Minns ranked number one in the world with a 16.55m jump. In its proudest hour in 2011, The Bahamas placed fourth to the United States, Kenya, and Jamaica in the World Youth Championships in Lille, France. The Bahamas won three gold medals: Shaunae Miller won the 400m in 51.84sec, Stephen Newbold the 200m in 20.89sec, and Latario Collie-Minns the triple jump at 16.06m. Lathone Collie-Minns won the bronze medal in the triple jump with a 15.51m jump.
A listing of athletes in the top ten performances follows:

Seniors
#9 Demetrius Pinder, 400m, 44.78sec

Juniors
#2 Anthonique Strachan, 200m, 22.70sec
#3 Shaunae Miller, 400m, 51.84sec
#4 Katrina Seymour, 400m hurdles, 57.24sec
#7 Anthonique Strachan, 100m, 11.38sec
#9 Ryan Ingraham, high jump, 2.23m

Youth
#1 Latario Collie-Minns, triple jump, 16.55m
#2 Shaunae Miller, 400m, 51.84sec
#4 Stephen Newbold, 200m, 20.89sec
Lathone Collie-Minns, triple jump, 15.73m

Below is a listing of every Bahamian athlete who was ranked this year and some who were ranked in 2010.

SENIORS

Men
100m, 2011 ranking: Jamial Rolle 10.26sec, 132nd; Warren Fraser 10.28sec, 149th; Rodney Green 10.28sec, 151st; Adrian Griffith 10.28sec, 154th
100m, 2010 ranking: Derrick Atkins 10.13sec, 137th; Adrian Griffith 10.19sec, 161st; Jamal Forbes 10.28sec, 221st
200m, 2011 ranking: Michael Mathieu 20.38sec, 27th; Demetrius Pinder 20.54sec, 50th; Jamial Rolle 20.81sec, 220th
200m, 2010 ranking: Jamial Rolle 20.75sec, 111th; Nathaniel Mckinney 20.82sec, 151st; Derrick Atkins 20.87sec, 167th; Michael Mathieu 20.85sec, 170th
400m, 2011 ranking: Demetrius Pinder 44.78sec, 9th; Chris Brown 44.79sec, 11th; Ramon Miller 45.01sec, 20th; Michael Mathieu 45.54sec, 33rd; Avard Moncur 46.18sec, 130th; Andrae Williams 46.18sec, 138th; LaToy Williams 46.18sec, 148th
400m, 2010 ranking: Demetrius Pinder 44.93sec, 17th; Chris Brown 45.01sec, 23rd
High jump, 2011 ranking: Donald Thomas 2.32m, 11th; Trevor Barry 2.32m, 13th; Ryan Ingraham 2.23m
High jump, 2010 ranking: Donald Thomas 2.32m; Trevor Barry 2.29m, 17th
Long jump, 2011 ranking: Raymond Higgs 8.15m
Triple jump, 2011 ranking: Leevan Sands 17.21m; Latario Collie-Minns 16.55m
Triple jump, 2010 ranking: Leevan Sands 17.21m
4x100m relay, 2011 ranking: Adrian Griffith, Rodney Green, Demetrius Pinder, Michael Mathieu; 39.29sec, 27th
4x400m relay: LaToy Williams, Avard Moncur, Michael Mathieu, Ramon Miller; 3:01.33, 12th

Women
100m, 2011 ranking: Debbie Ferguson-McKenzie 11.09sec, 18th; Sheniqua Ferguson 11.17sec, 32nd; Anthonique Strachan 11.38sec, 76th; Tynia Gaither 11.41sec, 95th
100m, 2010 ranking: Chandra Sturrup 11.13sec, 18th; Debbie Ferguson-McKenzie 11.15sec, 22nd; Sheniqua Ferguson 11.19sec, 31st
200m, 2011 ranking: Anthonique Strachan 22.70sec, 25th; Debbie Ferguson-McKenzie 22.76sec, 29th; Nivea Smith 22.80sec, 33rd
200m, 2010 ranking: Debbie Ferguson-McKenzie 22.62sec, 18th; Nivea Smith 22.71sec, 23rd; Sheniqua Ferguson 22.87sec, 38th
400m, 2011 ranking: Shaunae Miller 51.84sec, 58th
400m, 2010 ranking: Christine Amertil 51.67sec, 47th; Shaunae Miller 52.45sec, 90th
Long jump, 2011 ranking: Bianca Stuart 6.81m
Long jump, 2010 ranking: Bianca Stuart 6.54m, 78th
4x100m relay, 2011 ranking: V'Alonee Robinson, Nivea Smith, Sheniqua Ferguson, Debbie Ferguson-McKenzie; 43.65sec, 14th

JUNIORS

Boys
200m, 2011 ranking: Stephen Newbold 20.89sec, 23rd
High jump, 2011 ranking: Ryan Ingraham 2.23m
Triple jump, 2011 ranking: Latario Collie-Minns 16.55m

Girls
100m, 2011 ranking: Anthonique Strachan 11.38sec, 7th; Tynia Gaither 11.41sec, 12th
200m, 2011 ranking: Anthonique Strachan 22.70sec, 2nd
200m, 2010 ranking: Anthonique Strachan 23.66sec, 25th; Tynia Gaither 23.68sec, 28th
400m, 2011 ranking: Shaunae Miller 51.84sec, 3rd
400m, 2010 ranking: Shaunae Miller 52.45sec, 3rd; Amara Jones 53.01sec, 17th
100m hurdles, 2010 ranking: Ivanique Kemp 13.58sec, 25th
400m hurdles, 2011 ranking: Katrina Seymour 57.24sec, 4th
4x100m relay, 2011 ranking: Devynne Charlton, Carmeisha Cox, V'Alonee Robinson, Anthonique Strachan; 45.04sec, 7th
4x100m relay, 2010 ranking: V'Alonee Robinson, Ivanique Kemp, Marvar Etienne, Tynia Gaither; 45.45sec, 17th

YOUTH

Boys
200m, 2011 ranking: Stephen Newbold 20.89sec, 4th
400m, 2011 ranking: Andre Wells 46.87sec, 14th
400m, 2010 ranking: Stephen Newbold 47.84sec, 29th
Medley relay, 2011 ranking: Anthony Adderley, Delano Davis, Stephen Newbold, Andre Wells; 1:52.66, 5th
Triple jump, 2011 ranking: Latario Collie-Minns 16.55m; Lathone Collie-Minns 15.73m
Triple jump, 2010 ranking: Latario Collie-Minns 15.78m; Lathone Collie-Minns 15.33m

Girls
400m, 2011 ranking: Shaunae Miller 51.84sec, 2nd
400m, 2010 ranking: Shaunae Miller 52.45sec, 1st
Medley relay, 2011 ranking: Devynne Charlton, Carmiesha Cox, Pedrya Seymour, Gregria Higgs; 2:11.10, 12th

The above gives a good idea as to where Bahamas track and field is today. It shows our strengths and our weaknesses.

Pinder ranked 9th in world

DEMETRIUS PINDER pictured in action. LATARIO MINNS, Barry Malcolm and Anthonique Strachan. Latario was ranked number one in the world at the youth level in the triple jump, while Anthonique placed second at the junior level in the 200m.

BASKETBALL: CATHOLIC PRIMARY SCHOOLS RESULTS
THE Catholic Diocesan Primary Schools continued its 2011 basketball season with two games at Loyola Hall, Gladstone Road, on Monday. In the opener, the Xaviers Giants pounded the St Bede's Crushers 30-18 as Jamal Davis scored a game-high 13 points. Larvardo Dean had seven in the loss. And in the feature contest, defending champions St Cecilia's Strikers nipped the St Francis/Joseph Shockers 31-30 behind nine points apiece from Daunte Stuart and Cornelius Clyde. Ashanti Johnson had a game-high 12 in the loss. Starting 3:30pm today, the league is expected to be back in action with St Bede's playing Our Lady's, followed by St Thomas More vs St Cecilia's.

CYCLING: CALENDAR FOR 2012
THE New Providence Cycling Association is preparing the calendar of cycling events and activities within the island of New Providence for 2012. Therefore, the association is asking all cyclists, teams, clubs and race organisers who are organising, co-ordinating and sponsoring races to provide the association with their information as it prepares the dates/distance/time of the races or activities for 2012.
BSFS QUALIFYING TOURNAMENT
THE Bahamas Softball Federation has announced that the men's national team will have another week to prepare for the qualifying tournament for the World Softball Championships. The tournament has been rescheduled and is now expected to take place November 24 to December 5. The team will be managed by Godfrey Burnside. The players selected are pitchers Edney Bethel, Alcott Forbes, Eugene Pratt, Fred Cornish and Thomas Davis; infielders Greg Gardiner, Desmond Bannister, Marvin Wood, Ken Wood and Larry Russell; and outfielders Martin Burrows Jr, Lamar Watkins, Sherman Johnson, Van Johnson and Godfrey Burnside Jr.

BASEBALL: FREEDOM FARM COACHES PITCH
THE Freedom Farm is slated to host a Coach Pitch Tournament at the park in Yamacraw Beach Estates November 25-27. Teams from the Junior Baseball League of Nassau, the Grand Bahama Baseball League, Spanish Wells and Freedom Farm are expected to participate. For more information, persons can contact Pat Moss, the vice president of Freedom Farm, CJ McKenzie or Valencia Lockhart at golyn29@yahoo.com.

TRACK: CONDOLENCES TO RUTHERFORD FAMILY
THE track and field community, especially the St Augustine's College Big Red Machine, is expressing its sympathy to the family of the late Greg Rutherford, who passed away Sunday after suffering an aneurysm on Thursday. Rutherford, a graduate of SAC's class of 1979, was an outstanding athlete for the Big Red Machine under the tutelage of coach Martin Lundy. The Tribune Sports Department also extends its condolences.

VOLLEYBALL: NPVA ACTION
NEW Providence Volleyball Association action is scheduled to continue at the DW Davis Gym on Wednesday with another double header.
Wednesday, 7:30pm: Truckers vs Titans (L); Scotia Defenders vs Crusaders (M). Friday, 7:30pm: Lady Technicians vs Cougars (L); 9pm: Saints vs BTVI (M).

MARK KNOWLES CELEBRITY INVITATIONAL
MARK Knowles is pleased to announce the annual Mark Knowles Celebrity Tennis Invitational is set to be held December 1-4 at the Atlantis resort, presented by sponsor MDC-Partners and organised by the Mark Knowles Management Group (MKMG). This year's featured players are Andy Roddick, Xavier Malisse and Sabine Lisicki, with some additional stars to be announced at a later date. The organisers plan to hold a Pro/Am doubles tournament for platinum sponsors, a Pro Exhibition and an opportunity for top Bahamian junior tennis players to interact with the visiting pros.

SOCCER CLINIC: SPORTS IN BRIEF
THE second annual S3 Soccer Clinic will return to Freeport from December 14-17, 2011. The first clinic was held in January of this year and by all accounts was a great success. Five participants received scholarship offers, three of whom accepted their offers. There were over 160 participants from ages 6-19 who enjoyed their interaction with the American college and local coaches. Organisers say it is their goal to introduce the island's up and coming soccer stars from ages 8-18 years to all opportunities available to them. The fee to participate is only $30. During this four-day clinic, organisers will conduct a panel discussion featuring coaches and college representatives who will offer information on obtaining scholarships, what coaches and schools are looking for and how to apply to colleges, as well as how to be a successful student athlete. Participants will have the opportunity to train daily with professional coaches. Approximately ten coaches are scheduled to participate in the clinic. Thus far, schools being represented include Barry University (Miami, Florida), St Andrews University (Laurinburg, North Carolina), University of Pennsylvania (Philadelphia, Pennsylvania), University of Montana (Missoula, Montana), University of Northern Alabama (Florence, Alabama) and Florida International University (Miami, Florida). Sponsors include Cable Bahamas, the Grand Bahama Port Authority (GBPA) and TheBahamasWeekly.com. Registration forms can be found on TheBahamasWeekly.com under Sports. Corporate or individual sponsors are also needed to ensure the success of the event. Overall expected costs to run this youth event are in excess of fifteen thousand dollars ($15,000.00), going toward drinks, equipment, coaches' accommodations, transportation, welcome reception, gift items and tours for visiting coaches. The clinic also sponsors select low-income students at $30.00 each. Premier sponsors of $500.00 or more will receive a display banner on the field for the duration of the clinic. Should you wish to support us in this endeavour, please make cheques payable to S3 Soccer Clinic. More information can be obtained through clinic organisers Cletis Smith, Wayne Smith and Tiffany Sweeting-Smith at s3soccerclinic@gmail.com, or information can be found on the S3 Soccer Clinic Facebook page.

Clinic to teach the next generation of soccer stars
S3 SOCCER CLINIC: Coach Mark Plakorus of Texas Christian University works with participants of the first S3 Soccer Clinic in Grand Bahama, held in January 2011. The camp returns to Freeport from December 14-17, 2011, with approximately ten visiting American coaches.
Photo: TheBahamasWeekly.com

SPORTS CLINIC SCHEDULE
The tentative schedule is as follows:
Wednesday, December 14, 2011, 3:30pm-5:30pm: Opening/Training Session (BMES Field)
Thursday, December 15, 2011, 3:30pm-5:30pm: Training Session (BMES Field); 6:00pm-7:30pm: Panel Discussion, Bishop Michael Eldon Auditorium (parents welcome)
Friday, December 16, 2011, 3:30pm-5:30pm: Training Session (BMES Field)
Saturday, December 17, 2011, 10:00am-1:00pm: Scrimmage/Closing (BMES Field)

By BRENT STUBBS, Senior Sports Reporter (bstubbs@tribunemedia.net)

SOME of us are too young to know and others are too old to forget how local sailing got started in New Providence. But out of the need to organise boat racing, three men got together and formed the Nassau Yacht Racing Association on the eastern foreshore below The Folly in the 1930s. Those men, responsible for the creation of the association and subsequently sailing competition on the island shortly after World War One, were Captain Harry Knowles, a well known pilot; Willie Hall, a marine curio man; and Commodore Skimmins, an American who had built a simple hermit's retreat. Once established, racing was held every Friday afternoon by members of the association, along with men from all over the island who were interested in sailing in all types of classes and sizes of boats. Then early in 1931, RT Symonette emerged as a young man who had an idea of erecting a shipyard that would be second to none anywhere in the world. At first, it was a foolish idea perceived by many, but in time, he was considered a genius because of the remarkable accomplishment that was made during the last quarter of a century. After sailing in the races organised by the association, Symonette had another vision, and this time it was to establish the Nassau Yacht Club through the help of such men as Joseph H. Thompson, Charles Albury, Stafford L. Sands, William Saunders, Charles R. Arteaga, Herbert A. McKinney, Roy C. Arteaga, Charles A. Arteaga, John Knowles, J.E. Lewless, Harry Knowles, Everette Sands and Dudley Sands.

From that group of men, who met in his office on September 9, 1931, Symonette was elected as the first president; Joseph H. Thompson as vice president; Stafford L. Sands as secretary; and Charles Albury as treasurer. At the time, there was the perception that it was just another social club. But the principal qualification was that in order to be a member, one had to have the ability to handle a boat skilfully. During their first meeting, a committee was appointed to confer with other athletic clubs and, if possible, to arrange a schedule of races not conflicting with other sports. It wasn't until the second meeting that the committee announced that the Nassau Yacht Racing Association was defunct and its debts and funds were taken over by the Nassau Yacht Club. Now in charge of the full operation of the sport, the club held its first official race on October 23, 1931, with competition staged in three classes: A, B and C. Competing in the initial race were Ogam (skippered by Kenneth Butler), Lady Patsy I and Lady Patsy II (Alan Kelly), Ram (Jack Turtle), Hotspun (Leonard Roberts), Teaser (JD Albury), Snipe (AH Sands), Flash (WE Saunders), Phoenix (EC Moseley), Amphion (JH Thompson), Marie S. (Basil McKinney), a nameless yacht owned by JE Lewless, Ortolan (Everette Sands), Phantom (Stafford L. Sands), Jolly Roger and Feisien II (RT Symonette), Thustle (Oswald Mosely), Baby Patsy and Canvas Back (CA Arteaga), Fussie (CF Dillon), Rosalie (Charles Albury), Flamingo (Maurice Barbes), Barbara (FT Sturrup) and Miss Nassau (RC Arteaga). Jas. P. Sands, a paint department managed by Arthur Sands, presented the first trophy. Hence, it was the birth of competitive sailing in New Providence.
The yacht club that grew out of a rich history of sailing

THE NASSAU YACHT CLUB has a rich history and a vibrant present, as a visit to its website, pictured, will show.
https://ufdc.ufl.edu/UF00084249/03157
Pass maps are an established visualisation in football analysis, used to show the area of the pitch where a player made their passes. You'll find examples across the Football Manager series, TV coverage, and pretty much all formats of football journalism. Similar plots are used to show shots or other events in a game, and multiple other sports make use of similar maps of what goes on during a game. This article runs through one way to create these in Python, making use of the Matplotlib library. Let's fire up our modules, open our dataset and take a look at what we are working with:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Arc
%matplotlib inline

data = pd.read_csv("EventData/passes.csv")
data.head()

Plotting Lines

Our dataset contains Zeedayne's passes from her match. We have when they happened, in addition to the starting and ending X and Y locations. With this information, matplotlib makes it easy to draw lines. We can use the '.plot()' function to draw lines if we give it two lists:

- List one must contain the start and end X locations
- List two gives the start and end Y locations

For example, plt.plot([0,1],[2,3]) will plot a line from location (0,2) to (1,3). We could write this line to plot each of Zeedayne's passes, but we hate repeating ourselves and are a little bit lazy, so let's use a for loop to do this. Take a look at our code below to see it in action:

fig, ax = plt.subplots()
fig.set_size_inches(7, 5)

for i in range(len(data)):
    plt.plot([int(data["Xstart"][i]), int(data["Xend"][i])],
             [int(data["Ystart"][i]), int(data["Yend"][i])],
             color="blue")

plt.show()

Great job on plotting all of the passes! Unfortunately, we do not know where they happened on the pitch, or the direction, or much else, but we will get there! Let's start with adding a circle at the starting point of each pass to understand the direction.
This is as easy as before, we just plot the start data, like below:

fig, ax = plt.subplots()
fig.set_size_inches(7, 5)

for i in range(len(data)):
    plt.plot([int(data["Xstart"][i]), int(data["Xend"][i])],
             [int(data["Ystart"][i]), int(data["Yend"][i])],
             color="blue")
    plt.plot(int(data["Xstart"][i]), int(data["Ystart"][i]), "o", color="green")

plt.show()

Another massive and easy improvement would be to add a pitch map – as our article here explains. Let's steal the code and add the pitch here – obviously feel free to steal the pitch too!

#Pitch-drawing code from the pitch map article goes here (omitted)

for i in range(len(data)):
    plt.plot([int(data["Xstart"][i]), int(data["Xend"][i])],
             [int(data["Ystart"][i]), int(data["Yend"][i])],
             color="blue")
    plt.plot(int(data["Xstart"][i]), int(data["Ystart"][i]), "o", color="green")

#Display Pitch
plt.show()

Awesome, now we can see Zeedayne's pass locations – seems to cover just about everywhere!

Summary

Plotting simple pass maps is pretty easy – we just need to use matplotlib's '.plot' functionality to draw our lines, and a for loop to run through X/Y origin and destination data to plot each line. On their own, they do not offer much information, but once we add start location and a pitch map, we start to see where a player played their passes, where they ended up and the range that they employed in the match. To develop on this, we can look to colour code our lines for success, or another variable. We could even look to plot a heatmap to show where a player was active. Watch out for a further article on these!
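As a sketch of the colour-coding idea mentioned above, assuming a hypothetical "Outcome" column that marks each pass as "Complete" or "Incomplete" (not a column from the dataset used in this article), the per-pass line data and colour choice can be pulled out into small, testable helpers:

```python
# Sketch of colour-coding passes by outcome. The "Outcome" column and
# its labels are illustrative assumptions, not part of the tutorial's
# dataset. Each returned triple matches the two-list format that
# plt.plot() expects for a single line.

def pass_colour(outcome):
    """Return a line colour for a pass outcome (assumed labels)."""
    return "blue" if outcome == "Complete" else "red"

def pass_lines(passes):
    """Turn pass records into ([x0, x1], [y0, y1], colour) triples."""
    lines = []
    for p in passes:
        xs = [p["Xstart"], p["Xend"]]
        ys = [p["Ystart"], p["Yend"]]
        lines.append((xs, ys, pass_colour(p.get("Outcome", "Complete"))))
    return lines

if __name__ == "__main__":
    sample = [
        {"Xstart": 10, "Ystart": 20, "Xend": 30, "Yend": 40, "Outcome": "Complete"},
        {"Xstart": 50, "Ystart": 60, "Xend": 55, "Yend": 30, "Outcome": "Incomplete"},
    ]
    for xs, ys, colour in pass_lines(sample):
        # each triple could be drawn with plt.plot(xs, ys, color=colour)
        print(xs, ys, colour)
```

Each triple then feeds straight into the same plt.plot loop used above, just with color=colour instead of a fixed "blue".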
https://fcpython.com/visualisation/drawing-pass-map-python
This document describes the XMPP extension used by Google Talk to enable multiple applications signed in as the same user to report the same status message. Note: This extension is not intended to become a standard and is subject to change.

Table of Contents

Introduction

Google Talk, like many chat clients, lets a user display a custom status message to other users. The Google Talk server stores lists of recently used status messages, and you can request and modify these values. This XEP extension enables a client to retrieve and modify these stored message lists, and also provides notifications so that all resources can report the same status and message. Whenever any resource changes its status or message, all other resources will be notified with the new values. Servers that support this extension advertise the 'google:shared-status' feature as shown here:

Example 2. Client service discovery response

<iq type='result' to='romeo@gmail.com' from='gmail.com'>
  <query xmlns=''>
    ...
    <feature var='google:shared-status'/>
    ...
  </query>
</iq>

Requesting Current Status and Message Lists

A client can query the server for the current lists of status messages stored by the user. Sending this query also registers the client to receive notifications whenever any instance of that user changes the displayed status message, or if the stored list values change. To query for the current list, send an IQ query qualified by the 'google:shared-status' namespace, as shown here:

Example 3. Client requests status lists.

<iq type='get' to='romeo@gmail.com' id='ss-1'>
  <query xmlns='google:shared-status' version='2'/>
</iq>

The server will respond with the lists of stored messages, as well as information about its message-storing capabilities. Each response includes several lists; each list is associated with one status value. Google Talk servers only recognize lists associated with two <show> values: dnd and default. If other show values are used, there is no guarantee that a client will understand or display them.
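Before issuing these queries, a client can confirm from the service discovery response shown in Example 2 that the server supports the extension. A minimal check using only Python's standard library might look like this (the stanza text in the demo is an illustrative stand-in; the disco#info namespace is the standard XMPP one, which the example above elides):

```python
# Sketch: detecting the google:shared-status feature in a service
# discovery (disco#info) result. The stanza used in the demo below is
# an illustrative stand-in for a real server response.
import xml.etree.ElementTree as ET

DISCO_NS = "http://jabber.org/protocol/disco#info"

def supports_shared_status(iq_xml):
    """Return True if a disco#info result advertises google:shared-status."""
    root = ET.fromstring(iq_xml)
    for feature in root.iter("{%s}feature" % DISCO_NS):
        if feature.get("var") == "google:shared-status":
            return True
    return False

if __name__ == "__main__":
    result = (
        "<iq type='result' to='romeo@gmail.com' from='gmail.com'>"
        "<query xmlns='%s'><feature var='google:shared-status'/></query>"
        "</iq>" % DISCO_NS
    )
    print(supports_shared_status(result))  # True
```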
The following code shows an example query response.

Example 4. Server responds to a shared status query with messages and capabilities.

<iq type='result' to='romeo@gmail.com/orchard' id='ss-1'>
  <query xmlns='google:shared-status' status-max='512' status-list-max='3' status-list-contents-max='512'>
    <status>Pining away</status>
    <show>default</show>
    <status-list show='default'>
      <status>Pining away</status>
      <status>Wherefore indeed</status>
      <status>Thinking about the sun</status>
    </status-list>
    <status-list show='dnd'>
      <status>Chilling with Mercutio</status>
      <status>Visiting the monk</status>
    </status-list>
    <invisible value='false'/>
  </query>
</iq>

The previous stanza includes two lists, one for the default status (available), and one for the busy status ("dnd"). Each status list is enclosed in a <status-list> element with a show attribute describing the status associated with it. The stanza also includes the current status (default) and message ("Pining away"). The following table describes the elements and attributes in this query response.

Changing the Current Status or Message

Once you have registered as a shared-status-aware client, the server will rewrite your presence stanzas, and you must use shared status to change your status or the status message. To change your shared status, send an IQ set of the same form as the IQ result returned by the server when you queried for the current status. Include the new status and message values in the top-level <status> and <show> elements.

The lists you send in your IQ set will replace the lists currently stored by the server. You should therefore resend the existing lists unless you want to clear those messages from the server. If the message that you are replacing is not in the appropriate status list, you should add it to the top of the appropriate status message list. If the list length then exceeds the server's maximum list length, remove the oldest message from that list to make room.
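The list-management rule just described can be sketched in plain Python (this is not part of the Google Talk API; the function and its names are purely illustrative):

```python
# Sketch of the client-side bookkeeping described above: promote the new
# message to the top of the list for its <show> value, dropping the oldest
# entry if the server's maximum list length would be exceeded.
def update_status_list(status_lists, show, message, max_len=5):
    """status_lists maps a <show> value ('default', 'dnd') to its messages."""
    messages = status_lists.setdefault(show, [])
    if message in messages:
        messages.remove(message)   # re-promote an existing message
    messages.insert(0, message)    # newest message goes on top
    del messages[max_len:]         # trim to the server's maximum length
    return status_lists

lists = {"default": ["Pining away", "Wherefore indeed", "Thinking about the sun"]}
update_status_list(lists, "default", "Juliet's here", max_len=4)
print(lists["default"])
# ["Juliet's here", 'Pining away', 'Wherefore indeed', 'Thinking about the sun']
```

The updated lists would then be sent back in the IQ set, as in Example 5 below.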
Note: Because Google Talk interprets the 'away' status as idle, and idleness is a per-connection property, you cannot set an 'away' status using this method. To set an idle status message, send a standard <presence> stanza.

To report a status change, send an IQ query qualified by the 'google:shared-status' namespace. Place the new status and message values in the top-level <status> and <show> elements. You must include both these elements, even if you are only changing one of them. The following stanza demonstrates a request to change the status message from Pining away to Juliet's here. Note that Juliet's here was added to the list for the active status.

Example 5. Changing the current message.

<iq type='set' to='romeo@gmail.com/orchard' id='ss-2'>
  <query xmlns='google:shared-status' version='2'>
    <status>Juliet's here</status>
    <show>default</show>
    <status-list show='default'>
      <status>Juliet's here</status>
      <status>Pining away</status>
      <status>Wherefore indeed</status>
      <status>Thinking about the sun</status>
    </status-list>
    <status-list show='dnd'>
      <status>Chilling with Mercutio</status>
      <status>Visiting the monk</status>
    </status-list>
    <invisible value='false'/>
  </query>
</iq>

In response to your request, the server will send the following IQ set to all registered clients for this user. It will also send a standard <presence> stanza to all the subscribed contacts for this user.

Example 6. Server sends status message notification to all registered clients for this user.
<iq type='set' to='romeo@gmail.com/orchard' id='ss-3'>
  <query xmlns='google:shared-status' status-max='512' status-list-max='3' status-list-contents-max='512'>
    <status>Juliet's here</status>
    <show>default</show>
    <status-list show='default'>
      <status>Juliet's here</status>
      <status>Pining away</status>
      <status>Wherefore indeed</status>
      <status>Thinking about the sun</status>
    </status-list>
    <status-list show='dnd'>
      <status>Chilling with Mercutio</status>
      <status>Visiting the monk</status>
    </status-list>
    <invisible value='false'/>
  </query>
</iq>

How the Server Broadcasts Presence

Whenever a shared-status client sends a google:shared-status IQ set, the server stores the <show> and <status> values and rewrites that client's last-sent presence stanza to include the new values. The server then broadcasts the new presence stanza to all subscribed contacts, unless the user is in invisible mode.

Whenever a client sends a presence stanza, the server will rewrite the 'show' and 'status' values according to these rules before broadcasting:

- <show>: The server will broadcast the presence stanza's 'show' value, unless the current shared show value is dnd. In that case, the server broadcasts dnd. (You must reset the dnd <show> value using shared status.)
- <status>: The server will rewrite the presence stanza with the last shared status value.

The following example is a simple presence stanza sent by a shared-status-aware client when the resource sends an "away" presence notification. The server will ignore the <status> value.

Example 7. Client sends a request to change the capabilities and <show> value.

<presence>
  <show>away</show>
  <status>A far, far better thing</status>
  <c xmlns='' node='' ver='0.92'/>
</presence>

The server rewrites and sends out the following value to all subscribed contacts and other instances. Note that the <status> value used is the current shared value, not the value just sent in the <presence> stanza.

Example 8. Server rewrites the <show> value and sends a notification to all clients and subscribed contacts.
<presence>
  <show>away</show>
  <status>Juliet's here</status>
  <c xmlns='' node='' ver='0.92'/>
</presence>
https://developers.google.com/talk/jep_extensions/shared_status
Beginner's Guide to JDK

To complete the setup, you should modify your PATH environment variable to include the location of the JDK wrappers. Using your favorite editor, edit the appropriate startup file (for me, this was .profile) and add /usr/local/jdk1.1.6/bin to your PATH. Adding this to the beginning of your current PATH setting ensures that this JDK is invoked.

You also need to add two new environment variables: JAVA_HOME and CLASSPATH. JAVA_HOME tells the JDK where its base directory is located. Although it isn't mandatory to set this variable, since the JDK does a good job of determining this location, it is used by other Java programs such as the Swing Set. CLASSPATH can be confusing and frustrating, but it is possible to use it well and correctly from the beginning. Just remember this simple analogy: CLASSPATH is for Java what PATH is for a shell on your machine. Looking closer at the analogy, your shell executes only those programs or scripts residing in the directories pointed to by PATH, unless the full path of the program is specified. CLASSPATH works the same way for Java. Only those applications and applets in the directories specified by the CLASSPATH environment variable can be run without specifying the complete location.

I usually set CLASSPATH to a simple dot. This lets me run any application that is in my current directory. I also create scripts that set my CLASSPATH on an "as needed" basis, depending on what I am doing during that particular session. If you use bash as your shell, these three environment variables can be set as follows:

PATH=/usr/local/jdk1.1.6/bin:$PATH
CLASSPATH=.
JAVA_HOME=/usr/local/jdk1.1.6
export PATH CLASSPATH JAVA_HOME

Note that the RPMs which come with Red Hat 4.1 and 4.2 do not work out of the box. I recommend erasing the RPMs and using the JDK distribution from Blackdown. Erase the RPMs with the commands rpm -e jdk and rpm -e kaffe. You're now ready to test the JDK.
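The per-session CLASSPATH scripts mentioned above might look something like this (a sketch only; the function name and directories are invented):

```shell
# Illustrative helper: reset CLASSPATH to the current directory plus any
# extra class directories or jars needed for this session.
setcp() {
    # Always include the current directory, mirroring CLASSPATH=.
    CLASSPATH=.
    # Append each argument as an additional CLASSPATH entry.
    for dir in "$@"; do
        CLASSPATH="$CLASSPATH:$dir"
    done
    export CLASSPATH
}

setcp /tmp/myclasses
echo "$CLASSPATH"    # .:/tmp/myclasses
```

Source the file containing the function from your startup script, then call setcp with whatever the current session needs.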
Either log in or execute the startup file to set your new environment variables, and make sure the new environment variables are indeed taking effect. Executing rlogin localhost will do the trick. Now, type java. A message giving usage parameters should appear. Typing javac should also work, displaying different usage parameters.

Next, use your favorite editor and type in your first Java program; name this file HelloLinux.java.

public class HelloLinux {
    public static void main (String args[]) {
        System.out.println("Hello Linux!");
    }
}

To compile this program, type javac HelloLinux.java. The compilation process creates a single file called HelloLinux.class. To run your Java application, enter java HelloLinux. This outputs the single line "Hello Linux!"

The Linux kernel is capable of detecting Java byte code and automatically starting Java to run it. This eliminates the need to type java first. When the kernel is configured with Java support, you need do only two things. First, change the permissions of your .class file to make it executable using the chmod command. Then, run it like any normal script or executable program. For example, after compiling the Java program HelloLinux, perform the following commands:

chmod 755 HelloLinux.class
./HelloLinux.class

Note that you now have to specify the full name of the application. This includes the .class extension.

To set up Java support, you need the source code to the Linux kernel. The default installation of Caldera OpenLinux installs the kernel source code for you. Use this or download the latest and greatest kernel source and install it. If you haven't compiled a kernel for your Linux box before, I recommend doing it once or twice to get a feel for it. This will also ensure that problems unrelated to Java don't arise when you are trying to add native Java support to the kernel. Three steps are required to set up the kernel to automatically run Java byte code.
You can find more information about using this feature of the kernel in Documentation/java.txt in your kernel source tree.

1. In the "Code Maturity Option" menu, select "Prompt for development and/or incomplete code/drivers". The support of Java is still somewhat new and may have problems which not everyone is prepared to encounter.

2. In the "General Setup" menu, select "Kernel Support for Java Binaries". Mark it as either a module or a part of the kernel.

3. Before compiling the kernel, edit the fs/binfmt_java.c file and place the path to your java interpreter in the #defines located at the start of that file. (For me, this path is /usr/local/jdk1.1.6/bin/java.) Also, edit the path pointing to the applet viewer.

An alternate method is to leave the paths alone in fs/binfmt_java.c and make symbolic links to the appropriate locations. If you compiled Java support as a part of the kernel—i.e., it was not a module—then there is still another way to tell the kernel where your java wrapper lives. Log in as root and issue the command:

echo "/path/to/java/interpreter" > /proc/sys/kernel/java-interpreter

Note that this command needs to be executed each time you boot the kernel, so you should place it in the rc.local file or an equivalent.
http://www.linuxjournal.com/article/2570?page=0,1
A simple wrapper around the Slack web api to post messages

Project description

Simple wrapper around the Slack Web API to post messages.

Details

Slackelot contains a single function:

send_message(message, webhook_url, pretext='', title='', author_name='', color=None)

webhook_url should be in the following format: ''

Example

from slackelot import send_message

webhook_url = ''
message = 'Who wants to push the pram?\n@lancelot @percival'
pretext = 'Knights of the Round Table'
title = 'Spamelot'
author_name = 'Arthur'
color = '#663399'

send_message(message, webhook_url, pretext=pretext, title=title, author_name=author_name, color=color)

Extra Goodness

Paid teams have the option to mention other subteams (ie. channel). In that case, you might append something like this to your message: '\n<!subteam^ID|HANDLE>' (replace ID and HANDLE with your subteam's id and name, respectively). For more information on message formatting, see the Slack API docs.
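For illustration, here is a sketch of the payload such a wrapper plausibly builds before POSTing it to the webhook URL. This is not slackelot's actual source; the field names follow Slack's legacy message attachment format, and build_payload is an invented name:

```python
# Hypothetical payload builder mirroring the send_message parameters above.
def build_payload(message, pretext='', title='', author_name='', color=None):
    attachment = {'text': message}
    if pretext:
        attachment['pretext'] = pretext
    if title:
        attachment['title'] = title
    if author_name:
        attachment['author_name'] = author_name
    if color:
        attachment['color'] = color
    # Slack incoming webhooks accept a JSON body with an 'attachments' list.
    return {'attachments': [attachment]}

payload = build_payload('Who wants to push the pram?',
                        pretext='Knights of the Round Table',
                        color='#663399')
print(payload['attachments'][0]['pretext'])  # Knights of the Round Table
```

An actual sender would then POST this body to the webhook, e.g. with requests.post(webhook_url, json=payload), and check the response status.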
https://pypi.org/project/slackelot/
Create a Game Engine: Part I Shader Manager

In the first chapters of this series I told you that we want to create a basic Game Engine and keep our architecture as decoupled as we can. I know that it might sound a little crazy to begin working on a Game Engine, or on an architecture, when we only know how to render a triangle in NDC space and haven't touched several graphics techniques. But I think that in the end this will pay off, and we will be able to easily insert anything we want into our engine. So we are not creating a program to handle just one specific technique; we are creating an engine to handle that technique along with other techniques. At this moment this might be a little boring, because nothing spectacular will happen from a visual angle.

Until now, if you followed our tutorials up to this point, you saw that we tried to keep our architecture as decoupled as we could. I mean, for God's sake, we created 10 tutorials just to render one triangle :)). However, there are a few problems you might notice:

- Up to now we can load just one shader and use just one program with our Shader_Loader class, which was presented here. What happens if we need another shader in our engine?
- Too many things are happening in the main.cpp file (init GLUT, init GLEW, loading shaders, loading the triangle, rendering).
- Loading shaders, loading the model (the triangle) and rendering the model all live in main.cpp.
- The GameModels class is too specific in creating just a triangle. We must be able to load other models too.

I have to tell you right from the beginning that this architecture will suffer modifications and refactoring in the future, and what we are doing now is just a basic engine. Also, at the end of this chapter I will write about the performance issues with this architecture.

Let's see how the whole architecture for this engine will look:

So, first things first, let's fix the Shader_Loader class to be able to load other shaders too.
So what we are going to do is:

- Keep these programs in a map structure (the map from STL) where the key is the shader name.
- Have a GetShader method to get a specific shader program.
- Make the map and the GetShader method static so we can access them anywhere.
- Replace all params of type char* with std::string for flexibility.
- Create the Shader Manager by renaming Shader_Loader to Shader_Manager.
- Create a folder in your project called Managers and move Shader_Manager.cpp and Shader_Manager.h into this folder.
- Rename the namespace Core for this class to Managers.
- Delete the programs in the destructor.

Program structure should look like this:

Your Shader_Manager.h should look like this:

//Shader_Manager.h
//modified from Shader_Loader.h
#pragma once

#include <fstream>
#include <iostream>
#include <map>
#include <vector>
#include "../Dependencies/glew/glew.h"
#include "../Dependencies/freeglut/freeglut.h"

namespace Managers
{
    class Shader_Manager
    {
    public:
        Shader_Manager(void);
        ~Shader_Manager(void);

        //modify char* to std::string
        void CreateProgram(const std::string& shaderName,
                           const std::string& vertexShaderFilename,
                           const std::string& fragmentShaderFilename);
        static const GLuint GetShader(const std::string&);

    private:
        //modify char* to std::string
        std::string ReadShader(const std::string& filename);
        GLuint CreateShader(GLenum shaderType,
                            const std::string& source,
                            const std::string& shaderName);

        static std::map<std::string, GLuint> programs;
    };
}

I think that it is obvious why we renamed our class from Shader_Loader to Shader_Manager. Besides loading and creating one shader program, we can load and create multiple shaders, provide a specific shader when it is required and, of course, delete them. So this class acts like a manager. Big engines (Ogre, for example) use this naming convention. Of course there are other ways to implement this; feel free to comment below what you prefer if you've done this before.
Now let’s see what changed in the cpp file( I pointed out in comments): //Shader_Manager.cpp #include "Shader_Manager.h" using namespace Managers; //don't forget about this little static guy in cpp std::map<std::string, GLuint> Shader_Manager::programs; Shader_Manager::Shader_Manager(void) { //same } //destructor delete programs Shader_Manager::~Shader_Manager(void) { std::map<std::string, GLuint>::iterator i; for (i = programs.begin();i != programs.end(); ++i) { GLuint pr = i->second; glDeleteProgram(pr); } programs.clear(); } std::string Shader_Manager::ReadShader(const std::string& filename) { //same, nothing modified // use filename.c_str() to convert to const char* } GLuint Shader_Manager::CreateShader(GLenum shaderType, const std::string& source, const std::string& shaderName) { //same, nothing modified //use c_str() to convert to const char* whre is required } void Shader_Manager::CreateProgram(const std::string& shaderName, const std::string& vertexShaderFilename, const std::string& fragmentShaderFilename) { //same, nothing modified //use c_str() to convert to const char* where is required //last line of this function instead of return program will be: programs[shaderName] = program; //also don't forget to check if the shaderName is already in the map //you could use programs.insert; but it's your call } //the new method used to get the program const GLuint Shader_Manager::GetShader(const std::string& shaderName) { //make sure that you check if program exist first //before you return it return programs.at(shaderName); } Of course beside GetShader other methods can be static too. But for the moment we only need the GetShader and create them in one place with a normal method. You can see that I didn’t check if the program exists or if the key is already in the map, to keep the tutorial simple. 
You should always do these checks, otherwise you can end up with a black triangle because your program was deleted somewhere in the flow or because you misspelled the shader name. Also, you can implement a DeleteShader(name) method to delete a specific shader (homework).

Now, going back to main.cpp, we can use the manager like this:

//just after includes
Managers::Shader_Manager* shaderManager;

void Init()
{
    glEnable(GL_DEPTH_TEST);

    gameModels = new Models::GameModels();
    gameModels->CreateTriangleModel("triangle1");

    //load and compile shaders
    shaderManager = new Managers::Shader_Manager();
    //thanks to Erik for pointing this out
    shaderManager->CreateProgram("colorShader",
                                 "Shaders\\Vertex_Shader.glsl",
                                 "Shaders\\Fragment_Shader.glsl");
    program = Managers::Shader_Manager::GetShader("colorShader");
}

//don't forget to delete the manager in main
int main(int argc, char **argv)
{
    glutMainLoop();

    delete gameModels;
    delete shaderManager;
    return 0;
}

One problem is solved: now we can load other shaders too in our program. In the next tutorial we will take care of this dirty main.cpp, breaking it into other modules.

// source code updated 3/9/2015
Source code: 1_Setting_OpenGL_multiple_shaders
http://in2gpu.com/2015/02/25/create-a-game-engine-part-i-shader-manager/
This document is the reference manual for the Rust programming language. It provides three kinds of material: This document does not serve as a tutorial introduction to the language. Background familiarity with the language is assumed. A separate tutorial document is available to help acquire such background familiarity. This document also does not serve as a reference to the standard library included in the language distribution. Those libraries are documented separately by extracting documentation attributes from their source code. Rust is a work in progress. The language continues to evolve as the design shifts and is fleshed out in working code. Certain parts work, certain parts do not, certain parts will be removed or changed. This manual is a snapshot written in the present tense. All features described exist in working code unless otherwise noted, but some are quite primitive or remain to be further modified by planned work. Some may be temporary. It is a draft, and we ask that you not take anything you read here as final. If you have suggestions to make, please try to focus them on reductions to the language: possible features that can be combined or omitted. We aim to keep the size and complexity of the language under control. Note: The grammar for Rust given in this document is rough and very incomplete; only a modest number of sections have accompanying grammar rules. Formalizing the grammar accepted by the Rust parser is ongoing work, but future versions of this document will contain a complete grammar. Moreover, we hope that this grammar will be extracted and verified as LL(1) by an automated grammar-analysis tool, and further tested against the Rust sources. Preliminary versions of this automation exist, but are not yet complete. Rust's grammar is defined over Unicode codepoints, each conventionally denoted U+XXXX, for 4 or more hexadecimal digits X. 
Most of Rust's grammar is confined to the ASCII range of Unicode, and is described in this document by a dialect of Extended Backus-Naur Form (EBNF), specifically a dialect of EBNF supported by common automated LL(k) parsing tools such as llgen, rather than the dialect given in ISO 14977. The dialect can be defined self-referentially as follows:

grammar : rule + ;
rule : nonterminal ':' productionrule ';' ;
productionrule : production [ '|' production ] * ;
production : term * ;
term : element repeats ;
element : LITERAL | IDENTIFIER | '[' productionrule ']' ;
repeats : [ '*' | '+' ] NUMBER ? | NUMBER ? | '?' ;

Where:

- LITERAL is a single printable ASCII character, or an escaped hexadecimal ASCII code of the form \xQQ, in single quotes, denoting the corresponding Unicode codepoint U+00QQ.
- IDENTIFIER is a nonempty string of ASCII letters and underscores.
- The repeat forms apply to the adjacent element, and are as follows:
  - ? means zero or one repetition
  - * means zero or more repetitions
  - + means one or more repetitions
  - NUMBER trailing a repeat symbol gives a maximum repetition count
  - NUMBER on its own gives an exact repetition count

This EBNF dialect should hopefully be familiar to many readers.

A few productions in Rust's grammar permit Unicode codepoints outside the ASCII range. We define these productions in terms of character properties specified in the Unicode standard, rather than in terms of ASCII-range codepoints. The section Special Unicode Productions lists these productions.

Some rules in the grammar — notably unary operators, binary operators, and keywords — are given in a simplified form: as a listing of a table of unquoted, printable whitespace-separated strings. When such a string enclosed in double-quotes ( " ) occurs inside the grammar, it is an implicit reference to a single member of such a string table production. See tokens for more information.

Rust input is interpreted as a sequence of Unicode codepoints encoded in UTF-8, normalized to Unicode normalization form NFKC.
Most Rust grammar rules are defined in terms of printable ASCII-range codepoints, but a small number are defined in terms of Unicode properties or explicit codepoint lists.

The following productions in the Rust grammar are defined in terms of Unicode properties: ident, non_null, non_star, non_eol, non_slash_or_star, non_single_quote and non_double_quote.

The ident production is any nonempty Unicode string of the following form: an XID_start character followed by any sequence of XID_continue characters, provided it does not occur in the set of keywords.

Note: XID_start and XID_continue as character properties cover the character ranges used to form the more familiar C and Java language-family identifiers.

Some productions are defined by exclusion of particular Unicode characters:

- non_null is any single Unicode character aside from U+0000 (null)
- non_eol is non_null restricted to exclude U+000A ( '\n' )
- non_star is non_null restricted to exclude U+002A ( * )
- non_slash_or_star is non_null restricted to exclude U+002F ( / ) and U+002A ( * )
- non_single_quote is non_null restricted to exclude U+0027 ( ' )
- non_double_quote is non_null restricted to exclude U+0022 ( " )

comment : block_comment | line_comment ;
block_comment : "/*" block_comment_body * '*' + '/' ;
block_comment_body : [block_comment | character] * ;
line_comment : "//" non_eol * ;

Comments in Rust code follow the general C++ style of line and block-comment forms. Nested block comments are supported.

Line comments beginning with exactly three slashes ( /// ), and block comments whose opening sequence contains exactly one repeated asterisk ( /** ), are interpreted as a special syntax for doc attributes. That is, they are equivalent to writing #[doc="..."] around the body of the comment (this includes the comment characters themselves, ie /// Foo turns into #[doc="/// Foo"]).

Non-doc comments are interpreted as a form of whitespace.
whitespace_char : '\x20' | '\x09' | '\x0a' | '\x0d' ;
whitespace : [ whitespace_char | comment ] + ;

The whitespace_char production is any of the following Unicode characters: U+0020 (space, ' '), U+0009 (tab, '\t'), U+000A (LF, '\n'), U+000D (CR, '\r').

simple_token : keyword | unop | binop ;
token : simple_token | ident | literal | symbol | whitespace token ;

Tokens are primitive productions in the grammar defined by regular (non-recursive) languages. "Simple" tokens are given in string table production form, and occur in the rest of the grammar as double-quoted strings. Other tokens have exact rules given.

The keywords are the following strings:

as box break continue crate else enum extern false fn for if impl in
let loop match mod mut priv proc pub ref return self static struct
super true trait type unsafe use while

Each of these keywords has special meaning in the grammar, and all of them are excluded from the ident rule.

literal : string_lit | char_lit | byte_string_lit | byte_lit | num_lit ;

char_lit : '\x27' char_body '\x27' ;
string_lit : '"' string_body * '"' | 'r' raw_string ;
char_body : non_single_quote
          | '\x5c' [ '\x27' | common_escape | unicode_escape ] ;
string_body : non_double_quote
            | '\x5c' [ '\x22' | common_escape | unicode_escape ] ;
raw_string : '"' raw_string_body '"' | '#' raw_string '#' ;
common_escape : '\x5c' | 'n' | 'r' | 't' | '0' | 'x' hex_digit 2 ;
unicode_escape : 'u' hex_digit 4 | 'U' hex_digit 8 ;
hex_digit : 'a' | 'b' | 'c' | 'd' | 'e' | 'f'
          | 'A' | 'B' | 'C' | 'D' | 'E' | 'F' | dec_digit ;
oct_digit : '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' ;
dec_digit : '0' | nonzero_dec ;
nonzero_dec: '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' ;

A character literal is a single Unicode character enclosed within two U+0027 (single-quote) characters, with the exception of U+0027 itself, which must be escaped by a preceding U+005C character ( \ ).
A string literal is a sequence of any Unicode characters enclosed within two U+0022 (double-quote) characters, with the exception of U+0022 itself, which must be escaped by a preceding U+005C character ( \ ), or a raw string literal.

Some additional escapes are available in either character or non-raw string literals. An escape starts with a U+005C ( \ ) and continues with one of the following forms:

- An 8-bit codepoint escape starts with U+0078 ( x ) and is followed by exactly two hex digits. It denotes the Unicode codepoint equal to the provided hex value.
- A 16-bit codepoint escape starts with U+0075 ( u ) and is followed by exactly four hex digits. It denotes the Unicode codepoint equal to the provided hex value.
- A 32-bit codepoint escape starts with U+0055 ( U ) and is followed by exactly eight hex digits. It denotes the Unicode codepoint equal to the provided hex value.
- A whitespace escape is one of the characters U+006E ( n ), U+0072 ( r ), or U+0074 ( t ), denoting the Unicode values U+000A (LF), U+000D (CR) or U+0009 (HT) respectively.
- The backslash escape is the character U+005C ( \ ), which must be escaped in order to denote itself.

Raw string literals do not process any escapes. They start with the character U+0072 ( r ), followed by zero or more of the character U+0023 ( # ) and a U+0022 (double-quote) character.
The raw string body is not defined in the EBNF grammar above: it can contain any sequence of Unicode characters and is terminated only by another U+0022 (double-quote) character, followed by the same number of U+0023 ( # ) characters that preceded the opening U+0022 (double-quote) character.

Examples for string literals:

"foo"; r"foo";                     // foo
"\"foo\""; r#""foo""#;             // "foo"
"foo #\"# bar";
r##"foo #"# bar"##;                // foo #"# bar
"\x52"; "R"; r"R";                 // R
"\\x52"; r"\x52";                  // \x52

byte_lit : 'b' '\x27' byte_body '\x27' ;
byte_string_lit : 'b' '"' string_body * '"' | 'b' 'r' raw_byte_string ;
byte_body : ascii_non_single_quote
          | '\x5c' [ '\x27' | common_escape ] ;
byte_string_body : ascii_non_double_quote
                 | '\x5c' [ '\x22' | common_escape ] ;
raw_byte_string : '"' raw_byte_string_body '"' | '#' raw_byte_string '#' ;

A byte literal is a single ASCII character (in the U+0000 to U+007F range) enclosed within two U+0027 (single-quote) characters, with the exception of U+0027 itself, which must be escaped by a preceding U+005C character ( \ ), or a single escape. It is equivalent to a u8 unsigned 8-bit integer number literal.

A byte string literal is a sequence of ASCII characters and escapes enclosed within two U+0022 (double-quote) characters, with the exception of U+0022 itself, which must be escaped by a preceding U+005C character ( \ ), or a raw byte string literal. It is equivalent to a &'static [u8] borrowed vector of unsigned 8-bit integers.

Some additional escapes are available in either byte or non-raw byte string literals. An escape starts with a U+005C ( \ ) and continues with one of the following forms:

- A byte escape starts with U+0078 ( x ) and is followed by exactly two hex digits. It denotes the byte equal to the provided hex value.
- A whitespace escape is one of the characters U+006E ( n ), U+0072 ( r ), or U+0074 ( t ), denoting the byte values 0x0A (ASCII LF), 0x0D (ASCII CR) or 0x09 (ASCII HT) respectively.
- The backslash escape is the character U+005C ( \ ), which must be escaped in order to denote its ASCII encoding 0x5C.

Raw byte string literals do not process any escapes. They start with the character U+0062 ( b ), followed by U+0072 ( r ), followed by zero or more of the character U+0023 ( # ), and a U+0022 (double-quote) character. The raw string body is not defined in the EBNF grammar above: it can contain any sequence of ASCII characters and is terminated only by another U+0022 (double-quote) character, followed by the same number of U+0023 ( # ) characters that preceded the opening U+0022 (double-quote) character.

Examples for byte string literals:

b"foo"; br"foo";                   // foo
b"\"foo\""; br#""foo""#;           // "foo"
b"foo #\"# bar";
br##"foo #"# bar"##;               // foo #"# bar
b"\x52"; b"R"; br"R";              // R
b"\\x52"; br"\x52";                // \x52

num_lit : nonzero_dec [ dec_digit | '_' ] * num_suffix ?
        | '0' [ [ dec_digit | '_' ] * num_suffix ?
              | 'b' [ '1' | '0' | '_' ] + int_suffix ?
              | 'o' [ oct_digit | '_' ] + int_suffix ?
              | 'x' [ hex_digit | '_' ] + int_suffix ? ] ;

num_suffix : int_suffix | float_suffix ;
int_suffix : 'u' int_suffix_size ? | 'i' int_suffix_size ? ;
int_suffix_size : [ '8' | '1' '6' | '3' '2' | '6' '4' ] ;
float_suffix : [ exponent | '.' dec_lit exponent ? ] ? float_suffix_ty ? ;
float_suffix_ty : 'f' [ '3' '2' | '6' '4' ] ;
exponent : ['E' | 'e'] ['-' | '+' ] ? dec_lit ;
dec_lit : [ dec_digit | '_' ] + ;

A number literal is either an integer literal or a floating-point literal. The grammar for recognizing the two kinds of literals is mixed, as they are differentiated by suffixes.

An integer literal has one of four forms:

- A decimal literal starts with a decimal digit and continues as any mixture of decimal digits and underscores.
- A hex literal starts with the character sequence U+0030 U+0078 ( 0x ) and continues as any mixture of hex digits and underscores.
- An octal literal starts with the character sequence U+0030 U+006F ( 0o ) and continues as any mixture of octal digits and underscores.
- A binary literal starts with the character sequence U+0030 U+0062 ( 0b ) and continues as any mixture of binary digits and underscores.

An integer literal may be followed (immediately, without any spaces) by an integer suffix, which changes the type of the literal. There are two kinds of integer literal suffix:

- The i and u suffixes give the literal type int or uint, respectively.
- Each of the machine types u8, i8, u16, i16, u32, i32, u64 and i64 gives the literal the corresponding machine type.

The type of an unsuffixed integer literal is determined by type inference. If an integer type can be uniquely determined from the surrounding program context, the unsuffixed integer literal has that type. If the program context underconstrains the type, the unsuffixed integer literal's type is int; if the program context overconstrains the type, it is considered a static type error.

Examples of integer literals of various forms:

123i;                              // type int
123u;                              // type uint
0xff_u8;                           // type u8
0o70_i16;                          // type i16
0b1111_1111_1001_0000_i32;         // type i32

A floating-point literal has one of two forms:

- A decimal literal followed by a period character U+002E ( . ), optionally followed by another decimal literal, with an optional exponent.
- A single decimal literal, with an optional exponent.

By default, a floating-point literal has a generic type, but will fall back to f64. A floating-point literal may be followed (immediately, without any spaces) by a floating-point suffix, which changes the type of the literal. There are two floating-point suffixes: f32, and f64 (the 32-bit and 64-bit floating point types).

Examples of floating-point literals of various forms:

123.0;       // type f64
0.1;         // type f64
0.1f32;      // type f32
12E+99_f64;  // type f64

The unit value, the only value of the type that has the same name, is written as ().

The two values of the boolean type are written true and false.

symbol : "::" | "->" | '#' | '[' | ']' | '(' | ')' | '{' | '}' | ',' | ';' ;

Symbols are a general class of printable token that play structural roles in a variety of grammar productions. They are catalogued here for completeness as the set of remaining miscellaneous printable tokens that do not otherwise appear as unary operators, binary operators, or keywords.
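The literal equivalences described in this section can be checked directly. Note that the manual itself predates Rust 1.0, so the forms below avoid the old i/u (int/uint) suffixes and compile on a modern rustc:

```rust
fn main() {
    // Raw strings process no escapes; these restate the examples above.
    assert_eq!("\x52", "R");
    assert_eq!(r"\x52", "\\x52");
    // A byte string escape denotes the byte with that hex value.
    assert_eq!(b"\x52", b"R");
    // Underscores and radix prefixes in integer literals:
    assert_eq!(0xff_u8, 255u8);
    assert_eq!(0o70_i16, 56i16);
    assert_eq!(0b1111_1111_1001_0000_i32, 0xFF90);
    // A decimal literal with an exponent is a floating-point literal:
    assert_eq!(1.0e2, 100.0);
    println!("ok");
}
```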
expr_path : [ "::" ] ident [ "::" expr_path_tail ] + ;
expr_path_tail : '<' type_expr [ ',' type_expr ] + '>'
               | expr_path ;

type_path : ident [ type_path_tail ] + ;
type_path_tail : '<' type_expr [ ',' type_expr ] + '>'
               | "::" type_path ;

A path is a sequence of one or more path components logically separated by a namespace qualifier (::). If a path consists of only one component, it may refer to either an item or a slot in a local control scope. If a path has multiple components, it refers to an item.

Every item has a canonical path within its crate, but the path naming an item is only meaningful within a given crate. There is no global namespace across crates; an item's canonical path merely identifies it within the crate.

Two examples of simple paths consisting of only identifier components:

x;
x::y::z;

Path components are usually identifiers, but the trailing component of a path may be an angle-bracket-enclosed list of type arguments. In expression context, the type argument list is given after a final (::) namespace qualifier in order to disambiguate it from a relational expression involving the less-than symbol (<). In type expression context, the final namespace qualifier is omitted.

Two examples of paths with type arguments:

fn main() {
    struct HashMap<K, V>;
    fn f() {
        fn id<T>(t: T) -> T { t }
        type T = HashMap<int,String>; // Type arguments used in a type expression
        let x = id::<int>(10);        // Type arguments used in a call expression
    }
}

Paths starting with the keyword super begin resolution relative to the parent module. Each further identifier must resolve to an item.

mod a {
    pub fn foo() {}
}
mod b {
    pub fn foo() {
        super::a::foo(); // call a's foo function
    }
}

Paths starting with the keyword self begin resolution relative to the current module. Each further identifier must resolve to an item.
fn foo() {}
fn bar() {
    self::foo();
}

A number of minor features of Rust are not central enough to have their own syntax, and yet are not implementable as functions. Instead, they are given names, and invoked through a consistent syntax: name!(...). Examples include:

format!: format data into a string
env!: look up an environment variable's value at compile time
file!: return the path to the file being compiled
stringify!: pretty-print the Rust expression given as an argument
include!: include the Rust expression in the given file
include_str!: include the contents of the given file as a string
include_bin!: include the contents of the given file as a binary blob
error!, warn!, info!, debug!: provide diagnostic information.

All of the above extensions are expressions with values.

expr_macro_rules : "macro_rules" '!' ident '(' macro_rule * ')' ;
macro_rule : '(' matcher * ')' "=>" '(' transcriber * ')' ';' ;
matcher : '(' matcher * ')' | '[' matcher * ']'
        | '{' matcher * '}' | '$' ident ':' ident
        | '$' '(' matcher * ')' sep_token? [ '*' | '+' ]
        | non_special_token ;
transcriber : '(' transcriber * ')' | '[' transcriber * ']'
            | '{' transcriber * '}' | '$' ident
            | '$' '(' transcriber * ')' sep_token? [ '*' | '+' ]
            | non_special_token ;

User-defined syntax extensions are called "macros", and the macro_rules syntax extension defines them. Currently, user-defined macros can expand to expressions, statements, or items.

In the matcher, $ name : designator matches a nonterminal in the Rust syntax named by the designator. Valid designators are: item, block, stmt, pat, expr, ty (type), ident, path, matchers (lhs of the => in macro rules), tt (rhs of the => in macro rules).

In both the matcher and transcriber, a repetition is written as $ followed by parens, optionally followed by a separator token, followed by * or +. * means zero or more repetitions, + means at least one repetition. The parens are not matched or transcribed.

The parser used by the macro system is reasonably powerful, but the parsing of Rust syntax is restricted in two ways:

The parser will always parse as much as possible. If it attempts to match $i:expr [ , ] against 8 [ , ], it will attempt to parse i as an array index operation and fail. Adding a separator can solve this problem.
The parser must have eliminated all ambiguity by the time it reaches a $ name : designator.
This requirement most often affects name-designator pairs when they occur at the beginning of, or immediately after, a $(...)*; requiring a distinctive token in front can solve the problem.

log_syntax!: print out the arguments at compile time
trace_macros!: supply true or false to enable or disable macro expansion logging
stringify!: turn the identifier argument into a string literal
concat!: concatenates a comma-separated list of literals
concat_idents!: create a new identifier by concatenating the arguments

Rust is a compiled language. Its semantics obey a phase distinction between compile-time and run-time. Those semantic rules that have a static interpretation govern the success or failure of compilation. We refer to these rules as "static semantics". Semantic rules called "dynamic semantics" govern the behavior of programs at run-time. A program that fails to compile due to violation of a compile-time rule has no defined dynamic semantics; the compiler should halt with an error report, and produce no executable artifact.

The compilation model centres on artifacts called crates. Each compilation processes a single crate in source form, and if successful, produces a single crate in binary form: either an executable or some sort of library.

Each source file contains a sequence of zero or more item definitions, and may optionally begin with any number of attributes that apply to the containing module. Attributes on the anonymous crate module define important metadata that influences the behavior of the compiler.

// Crate ID
#![crate_id = "projx#2.5"]

// Additional metadata attributes
#![desc = "Project X"]
#![license = "BSD"]
#![comment = "This is a comment on Project X."]

// Specify the output type
#![crate_type = "lib"]

// Turn on a warning
#![warn(non_camel_case_types)]

A crate that contains a main function can be compiled to an executable. If a main function is present, its return type must be unit and it must take no arguments.
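As a concrete illustration of the macro_rules system described above, here is a small sketch (not from the original reference; the sum! macro and its names are invented for illustration) showing a $name:designator fragment together with a $(...),* repetition, in syntax that still compiles in modern Rust:

```rust
// A hypothetical `sum!` macro: matches zero or more comma-separated
// expressions and transcribes them into a chain of additions.
macro_rules! sum {
    // `$e:expr` binds each matched expression fragment to `$e`.
    ( $( $e:expr ),* ) => {
        {
            let mut total = 0;
            // The transcriber repetition expands once per matched fragment.
            $( total += $e; )*
            total
        }
    };
}

fn main() {
    assert_eq!(sum!(1, 2, 3), 6);
    assert_eq!(sum!(10), 10);
    assert_eq!(sum!(), 0);
}
```

Note how the comma separator between the $(...)* parens resolves the ambiguity mentioned above: without it, the parser could not tell where one expr fragment ends and the next begins.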
Crates contain items, each of which may have some number of attributes attached to it.

item : mod_item | fn_item | type_item | struct_item | enum_item
     | static_item | trait_item | impl_item | extern_block ;

An item is a component of a crate; some module items can be defined in crate files, but most are defined in source files. Functions, type definitions, and implementations may be parameterized by type, but such parameterization does not give rise to first-class "forall" types.

mod_item : "mod" ident ( ';' | '{' mod '}' );
mod : [ view_item | item ] * ;

A module is a container for zero or more view items and zero or more items. The view items manage the visibility of the items defined within the module, as well as the visibility of names from outside the module when referenced from inside the module.

A module item is a module, surrounded in braces, named, and prefixed with the keyword mod. A module item introduces a new, named module into the tree of modules making up a crate. Modules can nest arbitrarily.

An example of a module:

mod math {
    type Complex = (f64, f64);
    fn sin(f: f64) -> f64 { /* ... */ fail!(); }
    fn cos(f: f64) -> f64 { /* ... */ fail!(); }
    fn tan(f: f64) -> f64 { /* ... */ fail!(); }
}

Modules and types share the same namespace. Declaring a named type with the same name as a module in scope is forbidden: that is, a type definition, trait, struct, enumeration, or type parameter can't shadow the name of a module in scope, or vice versa.

A module without a body is loaded from an external file, by default with the same name as the module, plus the .rs extension:

mod task {
    // Load the `local_data` module from `task/local_data.rs`
    mod local_data;
}

The directories and files used for loading external file modules can be influenced with the path attribute.

#[path = "task_files"]
mod task {
    // Load the `local_data` module from `task_files/tls.rs`
    #[path = "tls.rs"]
    mod local_data;
}

view_item : extern_crate_decl | use_decl ;

A view item manages the namespace of a module. View items do not define new items, but rather, simply change other items' visibility. There are two kinds of view item:

extern crate declarations
use declarations

extern_crate_decl : "extern" "crate" ident [ '(' link_attrs ')' ] ? [ '=' string_lit ] ?
;
link_attrs : link_attr [ ',' link_attrs ] + ;
link_attr : ident '=' literal ;

An extern crate declaration specifies a dependency on an external crate, which is bound into the declaring scope as the ident provided in the extern_crate_decl. The external crate's soname is resolved at compile time by scanning the compiler's library path and matching the optional crateid provided as a string literal against the crateid attributes that were declared on the external crate when it was compiled. If no crateid is provided, a default name attribute is assumed, equal to the ident given in the extern_crate_decl.

Four examples of extern crate declarations:

extern crate pcre;

extern crate std; // equivalent to: extern crate std = "std";

extern crate ruststd = "std"; // linking to 'std' under another name

extern crate foo = "some/where/rust-foo#foo:1.0"; // a full crate ID for external tools

use_decl : "pub" ? "use" [ ident '=' path
                          | path_glob ] ;
path_glob : ident [ "::" [ path_glob
                          | '*' ] ] ?
          | '{' ident [ ',' ident ] * '}' ;

A use declaration creates one or more local name bindings synonymous with some other path. Usually a use declaration is used to shorten the path required to refer to a module item. These declarations may appear at the top of modules and blocks.

Note: Unlike in many languages, use declarations in Rust do not declare linkage dependency with external crates. Rather, extern crate declarations declare linkage dependencies.

Use declarations support a number of convenient shortcuts:

Rebinding the target name as a new local name, using the syntax use x = p::q::r;.
Simultaneously binding a list of paths differing only in their final element, using the brace syntax use a::b::{c,d,e,f};.
Binding all paths matching a given prefix, using the asterisk wildcard syntax use a::b::*;.

An example of use declarations:

use std::iter::range_step;
use std::option::{Some, None};

fn main() {
    // Equivalent to 'std::iter::range_step(0u, 10u, 2u);'
    range_step(0u, 10u, 2u);

    // Equivalent to 'foo(vec![std::option::Some(1.0f64),
    // std::option::None]);'
    foo(vec![Some(1.0f64), None]);
}

The paths in use declarations are resolved relative to the crate root; this applies to both module declarations and extern crate declarations.
An example of what will and will not work for use items:

use foo::native::start;  // good: foo is at the root of the crate
use foo::baz::foobaz;    // good: foo is at the root of the crate

mod foo {
    extern crate native;

    use foo::native::start; // good: foo is at crate root
    // use native::start;   // bad: native is not at the crate root
}

A function item defines a sequence of statements and an optional final expression, along with a name and a set of parameters. Functions are declared with the keyword fn. Functions declare a set of input slots as parameters, through which the caller passes arguments into the function, and an output slot through which the function passes results back to the caller.

An example of a function:

fn add(x: int, y: int) -> int {
    return x + y;
}

As with let bindings, function arguments are irrefutable patterns, so any pattern that is valid in a let binding is also valid as an argument.

fn first((value, _): (int, int)) -> int { value }

A generic function allows one or more parameterized types to appear in its signature. Each type parameter must be explicitly declared, in an angle-bracket-enclosed, comma-separated list following the function name.

fn iter<T>(seq: &[T], f: |T|) {
    for elt in seq.iter() { f(elt); }
}
fn map<T, U>(seq: &[T], f: |T| -> U) -> Vec<U> {
    let mut acc = vec![];
    for elt in seq.iter() { acc.push(f(elt)); }
    acc
}

Inside the function signature and body, the name of the type parameter can be used as a type name.

When a generic function is referenced, its type is instantiated based on the context of the reference. For example, calling the iter function defined above on [1, 2] will instantiate type parameter T with int, and require the closure parameter to have type fn(int). The type parameters can also be explicitly supplied in a trailing path component after the function name.
This might be necessary if there is not sufficient context to determine the type parameters. For example, mem::size_of::<u32>() == 4.

Since a parameter type is opaque to the generic function, the set of operations that can be performed on it is limited. Values of parameter type can only be moved, not copied.

fn id<T>(x: T) -> T { x }

Similarly, trait bounds can be specified for type parameters to allow methods with that trait to be called on values of that type.

Unsafe operations are those that potentially violate the memory-safety guarantees of Rust's static semantics. The following language level features cannot be used in the safe subset of Rust:

Dereferencing a raw pointer.
Reading or writing a mutable static variable.
Calling an unsafe function.

Unsafe functions are functions that are not safe in all contexts and/or for all possible inputs. Such a function must be prefixed with the keyword unsafe. A block of code can also be prefixed with the unsafe keyword, to permit these operations within an otherwise safe function.

One example of an operation that requires an unsafe block is a doubly-linked list: its back-links cannot be expressed with managed or reference-counted pointers in safe code. By using unsafe blocks to represent the reverse links as raw pointers, it can be implemented with only owned pointers.

This is a list of behavior which is forbidden in all Rust code. Type checking provides the guarantee that these issues are never caused by safe code. An unsafe block or function is responsible for never invoking this behaviour or exposing an API making it possible for it to occur in safe code.

Pointer arithmetic with std::ptr::offset (the offset intrinsic) that leaves the bounds of an object, with the exception of one byte past the end, which is permitted.
Calling std::ptr::copy_nonoverlapping_memory (the memcpy32/memcpy64 intrinsics) on overlapping buffers.
Invalid values in primitive types: a value other than false (0) or true (1) in a bool; a discriminant in an enum not included in the type definition; a char which is a surrogate or above char::MAX; non-UTF-8 byte sequences in a str.

This is a list of behaviour not considered unsafe in Rust terms, but that may be undesired:

Reading data from private fields (through std::repr, format!("{:?}", x)).

A special kind of function can be declared with a ! character where the output slot type would normally be. For example:

fn my_err(s: &str) -> !
{
    println!("{}", s);
    fail!();
}

We call such functions "diverging" because they never return a value to the caller. Every control path in a diverging function must end with a fail!() or a call to another diverging function. The ! annotation does not denote a type. Rather, the result type of a diverging function is a special type called ⊥ ("bottom") that unifies with any type. Rust has no syntax for ⊥. For example:

fn f(i: int) -> int {
    if i == 42 {
        return 42;
    } else {
        my_err("Bad number!");
    }
}

This will not compile without the ! annotation on my_err, since the else branch of the conditional in f does not return an int, as required by the signature of f.

// Declares an extern fn, the ABI defaults to "C"
extern fn new_int() -> int { 0 }

// Declares an extern fn with "stdcall" ABI
extern "stdcall" fn new_int_stdcall() -> int { 0 }

Unlike normal functions, extern fns have the type extern "ABI" fn(). This is the same type as the functions declared in an extern block.

let fptr: extern "C" fn() -> int = new_int;

Extern functions may be called directly from Rust code, as Rust uses large, contiguous stack segments like C.

A type definition defines a new name for an existing type. Type definitions are declared with the keyword type.

Every value has a single, specific type; the type-specified aspects of a value include: for example, the type (u8, u8) defines the set of immutable values that are composite pairs, each containing two unsigned 8-bit integers accessed by pattern-matching and laid out in memory with the x component preceding the y component.

A structure is a nominal structure type defined with the keyword struct.

An example of a struct item and its use:

struct Point {x: int, y: int}
let p = Point {x: 10, y: 11};
let px: int = p.x;

A tuple structure is a nominal tuple type, also defined with the keyword struct.
For example:

struct Point(int, int);
let p = Point(10, 11);
let px: int = match p { Point(x, _) => x };

A unit-like struct is a structure without any fields, defined by leaving off the list of fields entirely. Such types will have a single value, just like the unit value () of the unit type. For example:

struct Cookie;
let c = [Cookie, Cookie, Cookie, Cookie];

By using the struct_inherit feature gate, structures may use single inheritance. A structure may only inherit from a single other structure, called the super-struct. The inheriting structure (sub-struct) acts as if all fields in the super-struct were present in the sub-struct. Fields declared in a sub-struct must not have the same name as any field in any (transitive) super-struct. All fields (both declared and inherited) must be specified in any initializers. Inheritance between structures does not give subtyping or coercion. The super-struct and sub-struct must be defined in the same crate. The super-struct must be declared using the virtual keyword. For example:

virtual struct Sup { x: int }
struct Sub : Sup { y: int }
let s = Sub {x: 10, y: 11};
let sx = s.x;

An enumeration is a simultaneous definition of a nominal enumerated type and a set of constructors that can be used to create or pattern-match values of that type. Enumerations are declared with the keyword enum:

enum Animal {
    Dog,
    Cat
}

let mut a: Animal = Dog;
a = Cat;

Enumeration constructors can have either named or unnamed fields:

#![feature(struct_variant)]

enum Animal {
    Dog (String, f64),
    Cat { name: String, weight: f64 }
}

fn main() {
    let mut a: Animal = Dog("Cocoa".to_string(), 37.2);
    a = Cat { name: "Spotty".to_string(), weight: 2.7 };
}

In this example, Cat is a struct-like enum variant, whereas Dog is simply called an enum variant.

static_item : "static" ident ':' type '=' expr ';' ;

A static item is a named constant value stored in the global data section of a crate. Immutable static items are stored in the read-only data section.
The constant value bound to a static item is, like all constant values, evaluated at compile time. Static items have the static lifetime, which outlives all other lifetimes in a Rust program. Static items are declared with the static keyword. A static item must have a constant expression giving its definition. Static items must be explicitly typed. The type may be bool, char, a number, or a type derived from those primitive types. The derived types are references with the static lifetime, fixed-size arrays, tuples, and structs.

static BIT1: uint = 1 << 0;
static BIT2: uint = 1 << 1;

static BITS: [uint, ..2] = [BIT1, BIT2];
static STRING: &'static str = "bitstring";

struct BitsNStrings<'a> {
    mybits: [uint, ..2],
    mystring: &'a str
}

static bits_n_strings: BitsNStrings<'static> = BitsNStrings {
    mybits: BITS,
    mystring: STRING
};

A static item may be declared with the mut keyword; accessing or modifying such a mutable static is unsafe, because of the possibility of data races between tasks running in the same process. Mutable statics are still very useful, however. They can be used with C libraries and can also be bound from C libraries (in an extern block).

static mut LEVELS: uint = 0;

// This violates the idea of no shared state, and this doesn't internally
// protect against races, so this function is `unsafe`
unsafe fn bump_levels_unsafe1() -> uint {
    return atomic_add(&mut LEVELS, 1);
}

A trait describes a set of method types. Traits can include default implementations of methods, written in terms of some unknown self type; the self type may either be completely unspecified, or constrained by some other trait. Traits are implemented for specific types through separate implementations.

type Surface = int;
type BoundingBox = int;

trait Shape {
    fn draw(&self, Surface);
    fn bounding_box(&self) -> BoundingBox;
}

This defines a trait with two methods. All values that have implementations of this trait in scope can have their draw and bounding_box methods called, using value.bounding_box() syntax.
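To make the trait mechanics above concrete, here is a sketch (not from the original reference; the Circle type and its bounding-box rule are invented for illustration, and the code uses modern Rust syntax where method parameters must be named) of implementing such a trait and calling its methods:

```rust
// Assumed stand-ins for the Surface and BoundingBox aliases above.
type Surface = i32;
type BoundingBox = f64;

trait Shape {
    fn draw(&self, surface: Surface);
    fn bounding_box(&self) -> BoundingBox;
}

struct Circle { radius: f64 }

impl Shape for Circle {
    fn draw(&self, _surface: Surface) {
        // Rendering would happen here.
    }
    fn bounding_box(&self) -> BoundingBox {
        // Side length of the bounding square: the diameter.
        2.0 * self.radius
    }
}

fn main() {
    let c = Circle { radius: 3.0 };
    c.draw(0);
    // Method-call syntax, as described in the text.
    assert_eq!(c.bounding_box(), 6.0);
}
```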
Type parameters can be specified for a trait to make it generic. These appear after the trait name, using the same syntax used in generic functions.

trait Seq<T> {
    fn len(&self) -> uint;
    fn elt_at(&self, n: uint) -> T;
    fn iter(&self, |T|);
}

Generic functions may use traits as bounds on their type parameters. This will have two effects: only types that have the trait may instantiate the parameter, and within the generic function, the methods of the trait can be called on values that have the parameter's type. For example:

type Surface = int;
trait Shape {
    fn draw(&self, Surface);
}

fn draw_twice<T: Shape>(surface: Surface, sh: T) {
    sh.draw(surface);
    sh.draw(surface);
}

Traits also define an object type with the same name as the trait. Values of this type are created by casting pointer values (pointing to a type for which an implementation of the given trait is in scope) to pointers to the trait name, used as a type.

trait Shape { }
impl Shape for int { }
let mycircle = 0i;
let myshape: Box<Shape> = box mycircle as Box<Shape>;

Trait methods may be static, which means that they lack a self argument. To call such a method, the method name is qualified with the trait name, treating the trait name like a module:

trait Num {
    fn from_int(n: int) -> Self;
}
impl Num for f64 {
    fn from_int(n: int) -> f64 { n as f64 }
}
let x: f64 = Num::from_int(42);

Traits may inherit from other traits. For example, in

trait Shape { fn area() -> f64; }
trait Circle : Shape { fn radius() -> f64; }

the syntax Circle : Shape means that types that implement Circle must also have an implementation for Shape. In type-parameterized functions, methods of the supertrait may be called on values of subtrait-bound type parameters.
Referring to the previous example of trait Circle : Shape:

fn radius_times_area<T: Circle>(c: T) -> f64 {
    // `c` is both a Circle and a Shape
    c.radius() * c.area()
}

Likewise, supertrait methods may also be called on trait objects.

trait Shape { fn area(&self) -> f64; }
trait Circle : Shape { fn radius(&self) -> f64; }
impl Shape for int { fn area(&self) -> f64 { 0.0 } }
impl Circle for int { fn radius(&self) -> f64 { 0.0 } }

let mycircle = 0;
let mycircle: Circle = ~mycircle as ~Circle;
let nonsense = mycircle.radius() * mycircle.area();

An implementation is an item that implements a trait for a specific type. Implementations are defined with the keyword impl.

struct Circle {
    radius: f64,
    center: Point,
}

It is possible to define an implementation without referring to a trait. The methods in such an implementation can only be used as direct calls on the values of the type that the implementation targets. In such an implementation, the trait type and the for after impl are omitted, and the implementation must appear in the same module or a sub-module as the self type.

An implementation can take type parameters, which can be different from the type parameters taken by the trait it implements. Implementation parameters are written after the impl keyword.

impl<T> Seq<T> for Vec<T> {
    /* ... */
}
impl Seq<bool> for u32 {
    /* Treat the integer as a sequence of bits */
}

extern_block_item : "extern" '{' extern_block '}' ;
extern_block : [ foreign_fn ] * ;

External blocks form the basis for Rust's foreign function interface. Declarations in an external block describe symbols in external, non-Rust libraries.

extern crate libc;
use libc::{c_char, FILE};

extern {
    fn fopen(filename: *c_char, mode: *c_char) -> *FILE;
}

fn main() {}

Functions within external blocks may be called by Rust code, just like functions defined in Rust. The Rust compiler automatically translates between the Rust ABI and the foreign ABI.

A number of attributes control the behavior of external blocks. By default external blocks assume that the library they are calling uses the standard C "cdecl" ABI. Other ABIs may be specified using an abi string, as shown here:

// Interface to the Windows API
extern "stdcall" { }

Rust distinguishes item visibility and privacy. These two terms are often used interchangeably, and what they are attempting to convey is the answer to the question "Can this item be used at this location?"
Rust's name resolution operates on a global hierarchy of namespaces. Each level in the hierarchy can be thought of as some item. The items are one of those mentioned above, but also include external crates. Declaring or defining a new module can be thought of as inserting a new tree into the hierarchy at the location of the definition.

To control whether interfaces can be used across modules, Rust checks each use of an item to see whether it should be allowed or not. This is where privacy warnings are generated, or otherwise "you used a private item of another module and weren't allowed to."

By default, everything in Rust is private, with one exception: enum variants in a pub enum are also public by default. You are allowed to alter this default visibility with the priv keyword. When an item is declared as pub, it can be thought of as being accessible to the outside world. For example:

// Declare a private struct
struct Foo;

// Declare a public struct with a private field
pub struct Bar {
    field: int
}

// Declare a public enum with two public variants
pub enum State {
    PubliclyAccessibleState,
    PubliclyAccessibleState2,
}

With the notion of an item being either public or private, Rust allows item accesses in two cases:

If an item is public, then it can be used externally through any of its public ancestors.
If an item is private, it may be accessed by the current module and its descendants.

These two cases are surprisingly powerful for creating module hierarchies exposing public APIs while hiding internal implementation details. To help explain, here's a few use cases and what they would entail.

A library developer needs to expose functionality to crates which link against their library. As a consequence of the first case, this means that anything which is usable externally must be pub from the root down to the destination item. Any private item in the chain will disallow external accesses.

A crate needs a globally available "helper module" to itself, but it doesn't want to expose the helper module as a public API. To accomplish this, the root of the crate's hierarchy would have a private module which then internally has a "public API".
Because the entire crate is a descendant of the root, the entire local crate can access this private module through the second case.

When writing unit tests for a module, it's a common idiom to have an immediate child of the module to-be-tested named mod test. This module can access any items of the parent module through the second case, meaning that internal implementation details can also be seamlessly tested from the child module.

In the second case, it mentions that a private item "can be accessed" by the current module and its descendants, but the exact meaning of accessing an item depends on what the item is. Accessing a module, for example, would mean looking inside of it (to import more items). On the other hand, accessing a function would mean that it is invoked. Additionally, path expressions and import statements are considered to access an item in the sense that the import/expression is only valid if the destination is in the current visibility scope.

Here's an example of a program which exemplifies the three cases outlined above:

// This module is private: no external crate can access it. Because it is
// private at the root of this crate, however, any module in the crate may
// access any publicly visible item in it.
mod crate_helper_module {
    // This function can be used by anything in the current crate
    pub fn crate_helper() {}
}

// This function is "public to the root", meaning that it's available to
// external crates linking against this one.
pub fn public_api() {}

// This module is public, so external crates may look inside of it.
pub mod submodule {
    use crate_helper_module;

    pub fn my_method() {
        // Any item in the local crate may invoke the helper module's public
        // interface through a combination of the two rules above.
        crate_helper_module::crate_helper();
    }

    // This function is hidden to any module which is not a descendant of
    // `submodule`
    fn my_implementation() {}

    mod test {
        fn test_my_implementation() {
            // This function is accessible from `test` because it is a
            // descendant of `submodule`
            super::my_implementation();
        }
    }
}

fn main() {}

For a Rust program to pass the privacy checking pass, all paths must be valid accesses given the two rules above. This includes all use statements, expressions, types, etc.

Rust allows publicly re-exporting items through a pub use directive. Because this is a public directive, this allows the item to be used in the current module through the rules above. It essentially allows public access into the re-exported item. For example, this program is valid:

pub use api = self::implementation;

mod implementation {
    pub fn f() {}
}

This means that any external crate referencing implementation::f would receive a privacy violation, while the path api::f would be allowed.

When re-exporting a private item, it can be thought of as allowing the "privacy chain" to be short-circuited through the reexport instead of passing through the namespace hierarchy as it normally would.
Currently glob imports are considered an "experimental" language feature. For sanity purposes, along with helping the implementation, glob imports will only import public items from their destination, not private items.

Note: This is subject to change; glob exports may be removed entirely or they could possibly import private items, with a privacy error issued later if the item is used.

attribute : '#' '!' ? '[' meta_item ']' ;
meta_item : ident [ '=' literal
                  | '(' meta_seq ')' ] ? ;
meta_seq : meta_item [ ',' meta_seq ] ? ;

Static entities in Rust — crates, modules and items — may have attributes applied to them. Attributes in Rust are modeled on Attributes in ECMA-335, with the syntax coming from ECMA-334 (C#). An attribute is a general, free-form metadatum that is interpreted according to name, convention, and language and compiler version. Attributes may appear as any of:

a single identifier, the attribute name
an identifier followed by the equals sign '=' and a literal, providing a key/value pair
an identifier followed by a parenthesized list of sub-attribute arguments

Attributes with a bang ("!") after the hash ("#") apply to the item that the attribute is declared within. Attributes that do not have a bang after the hash apply to the item that follows the attribute.

An example of attributes:

// A function marked as a unit test
#[test]
fn test_foo() { /* ... */ }

Note: At some point in the future, the compiler will distinguish between language-reserved and user-available attributes. Until then, there is effectively no difference between an attribute handled by a loadable syntax extension and the compiler.

crate_id - specify this crate's crate ID.
crate_type - see linkage.
feature - see compiler features.
no_main - disable emitting the main symbol. Useful when some other object being linked to defines main.
no_start - disable linking to the native crate, which specifies the "start" language item.
no_std - disable linking to the std crate.
no_builtins - disable optimizing certain code patterns to invocations of library functions that are assumed to exist.
macro_escape - macros defined in this module will be visible in the module's parent, after this module has been included.
plugin_registrar - mark this function as the registration point for compiler plugins, such as loadable syntax extensions.
main - indicates that this function should be passed to the entry point, rather than the function in the crate root named main.
start - indicates that this function should be used as the entry point, overriding the "start" language item. See the "start" language item for more details.
thread_local - on a static mut, this signals that the value of this static may change depending on the current thread. The exact consequences of this are implementation-defined.

On declarations inside an extern block (see external blocks), the following attributes are interpreted:

link_name - the name of the symbol that this function or static should be imported as.
linkage - on a static, this specifies the linkage type.
link_section - on statics and functions, this specifies the section of the object file that this item's contents will be placed into.
macro_export - export a macro for cross-crate usage.
no_mangle - on any item, do not apply the standard name mangling. Set the symbol for this item to its identifier.
packed - on structs or enums, eliminate any padding that would be used to align fields.
repr - on C-like enums, this sets the underlying type used for representation. Useful for FFI. Takes one argument, which is the primitive type this enum should be represented as, or C, which specifies that it should be the default enum size of the C ABI for that platform. Note that enum representation in C is undefined, and this may be incorrect when the C code is compiled with certain flags.
simd - on certain tuple structs, derive the arithmetic operators, which lower to the target's SIMD instructions, if any; the simd feature gate is necessary to use this attribute.
static_assert - on statics whose type is bool, terminates compilation with an error if it is not initialized to true.
unsafe_destructor - allow implementations of the "drop" language item where the type it is implemented for does not implement the "send" language item; the unsafe_destructor feature gate is needed to use this attribute.
unsafe_no_drop_flag - on structs, remove the flag that prevents destructors from being run twice. Destructors might be run multiple times on the same object with this attribute.

Sometimes one wants to have different compiler outputs from the same code, depending on build target, such as targeted operating system, or to enable release builds.

There are two kinds of configuration options: one that is either defined or not (#[cfg(foo)]), and one that contains a string that can be checked against (#[cfg(bar = "baz")]); currently only compiler-defined configuration options can have the latter form.

// The function is only included in the build when compiling for OSX
#[cfg(target_os = "macos")]
fn macos_only() {
  // ...
}

// This function is only included when either foo or bar is defined
#[cfg(foo)]
#[cfg(bar)]
fn needs_foo_or_bar() {
  // ...
}

// This function is only included when compiling for a unixish OS with a 32-bit
// architecture
#[cfg(unix, target_word_size = "32")]
fn on_32bit_unix() {
  // ...
}

This illustrates how some conditional compilation can be achieved using the #[cfg(...)] attribute. Note that #[cfg(foo, bar)] is a condition that needs both foo and bar to be defined, while #[cfg(foo)] #[cfg(bar)] only needs one of foo and bar to be defined (this resembles the disjunctive normal form). Additionally, one can reverse a condition by enclosing it in not(...), e.g. #[cfg(not(target_os = "win32"))].

The following configurations must be defined by the implementation:

target_arch = "...". Target CPU architecture, such as "x86", "x86_64", "mips", or "arm".
target_endian = "...". Endianness of the target CPU, either "little" or "big".
target_family = "...". Operating system family of the target, e.g. "unix" or "windows".
The value of this configuration option is defined as a configuration itself, like unix or windows.
target_os = "...". Operating system of the target, examples include "win32", "macos", "linux", "android" or "freebsd".
target_word_size = "...". Target word size in bits. This is set to "32" for targets with 32-bit pointers, and likewise set to "64" for targets with 64-bit pointers.
unix. See target_family.
windows. See target_family.

A lint check names a potentially undesirable coding pattern, such as unreachable code or omitted documentation, for the static entity to which the attribute applies. For any lint check C:

warn(C) warns about violations of C but continues compilation,
deny(C) signals an error after encountering a violation of C,
allow(C) overrides the check for C so that violations will go unreported,
forbid(C) is the same as deny(C), but also forbids changing the lint level afterwards.

The lint checks supported by the compiler can be found via rustc -W help, along with their default settings.

mod m1 {
  // Missing documentation is ignored here
  #[allow(missing_doc)]
  pub fn undocumented_one() -> int { 1 }

  // Missing documentation signals a warning here
  #[warn(missing_doc)]
  pub fn undocumented_too() -> int { 2 }

  // Missing documentation signals an error here
  #[deny(missing_doc)]
  pub fn undocumented_end() -> int { 3 }
}

This example shows how one can use allow and warn to toggle a particular check on and off.

#[warn(missing_doc)]
mod m2 {
  #[allow(missing_doc)]
  mod nested {
    // Missing documentation is ignored here
    pub fn undocumented_one() -> int { 1 }

    // Missing documentation signals a warning here,
    // despite the allow above.
    #[warn(missing_doc)]
    pub fn undocumented_two() -> int { 2 }
  }

  // Missing documentation signals a warning here
  pub fn undocumented_too() -> int { 3 }
}

This example shows how one can use forbid to disallow uses of allow for that lint check.

#[forbid(missing_doc)]
mod m3 {
  // Attempting to toggle warning signals an error here
  #[allow(missing_doc)]
  /// Returns 2.
  pub fn undocumented_too() -> int { 2 }
}

Some primitive Rust operations are defined in Rust code, rather than being implemented directly in C or assembly language. The definitions of these operations have to be easy for the compiler to find. The lang attribute makes it possible to declare these operations. For example, the str module in the Rust standard library defines the string equality function:

#[lang="str_eq"]
pub fn eq_slice(a: &str, b: &str) -> bool {
  // details elided
}

The name str_eq has a special meaning to the Rust compiler, and the presence of this definition means that it will use this definition when generating calls to the string equality function. A complete list of the built-in language items follows:

send: Able to be sent across task boundaries.
sized: Has a size known at compile time.
copy: Types that do not move ownership when used by-value.
share: Able to be safely shared between tasks when aliased.
drop: Have destructors.

These language items are traits:

add: Elements can be added (for example, integers and floats).
sub: Elements can be subtracted.
mul: Elements can be multiplied.
div: Elements have a division operation.
rem: Elements have a remainder operation.
neg: Elements can be negated arithmetically.
not: Elements can be negated logically.
bitxor: Elements have an exclusive-or operation.
bitand: Elements have a bitwise and operation.
bitor: Elements have a bitwise or operation.
shl: Elements have a left shift operation.
shr: Elements have a right shift operation.
index: Elements can be indexed.
eq: Elements can be compared for equality.
ord: Elements have a partial ordering.
deref: * can be applied, yielding a reference to another type.
deref_mut: * can be applied, yielding a mutable reference to another type.

These are functions:

str_eq: Compare two strings (&str) for equality.
uniq_str_eq: Compare two owned strings (String) for equality.
strdup_uniq: Return a new unique string containing a copy of the contents of a unique string.
unsafe: A type whose contents can be mutated through an immutable reference.
type_id: The type returned by the type_id intrinsic.

These types help drive the compiler's analysis:

covariant_type: The type parameter should be considered covariant.
contravariant_type: The type parameter should be considered contravariant.
invariant_type: The type parameter should be considered invariant.
covariant_lifetime: The lifetime parameter should be considered covariant.
contravariant_lifetime: The lifetime parameter should be considered contravariant.
invariant_lifetime: The lifetime parameter should be considered invariant.
no_send_bound: This type does not implement "send", even if eligible.
no_copy_bound: This type does not implement "copy", even if eligible.
no_share_bound: This type does not implement "share", even if eligible.
managed_bound: This type implements "managed".
fail_: Abort the program with an error.
fail_bounds_check: Abort the program with a bounds check error.
exchange_malloc: Allocate memory on the exchange heap.
exchange_free: Free memory that was allocated on the exchange heap.
malloc: Allocate memory on the managed heap.
free: Free memory that was allocated on the managed heap.

Note: This list is likely to become out of date. We should auto-generate it from librustc/middle/lang_items.rs.

The inline attribute is used to suggest to the compiler to perform an inline expansion and place a copy of the function or static in the caller, rather than generating code to call the function or access the static where it is defined. The compiler automatically inlines functions based on internal heuristics. Incorrectly inlining functions can actually make the program slower, so it should be used with care. Immutable statics are always considered inlineable unless marked with #[inline(never)]. It is undefined whether two different inlineable statics have the same memory address. In other words, the compiler is free to collapse duplicate inlineable statics together.
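As a brief sketch of the inline hint (the function name and values here are illustrative; the fixed-width i32 type is used so the snippet does not depend on the machine-sized int type):

```rust
// The #[inline] attribute merely hints that `square` may be
// expanded into its callers; program semantics are unchanged.
#[inline]
fn square(x: i32) -> i32 {
    x * x
}

fn main() {
    println!("{}", square(7)); // prints 49
}
```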
#[inline] and #[inline(always)] always cause the function to be serialized into crate metadata to allow cross-crate inlining. There are three different types of inline attributes:

#[inline] hints the compiler to perform an inline expansion.
#[inline(always)] asks the compiler to always perform an inline expansion.
#[inline(never)] asks the compiler to never perform an inline expansion.

The deriving attribute allows certain traits to be automatically implemented for data structures. For example, the following will create an impl for the PartialEq and Clone traits for Foo<T>; the type parameter T will be given the PartialEq or Clone constraints for the appropriate impl:

#[deriving(PartialEq, Clone)]
struct Foo<T> {
  a: int,
  b: T
}

The generated impl for PartialEq is equivalent to

impl<T: PartialEq> PartialEq for Foo<T> {
  fn eq(&self, other: &Foo<T>) -> bool {
    self.a == other.a && self.b == other.b
  }
  fn ne(&self, other: &Foo<T>) -> bool {
    self.a != other.a || self.b != other.b
  }
}

Supported traits for deriving are:

PartialEq, Eq, PartialOrd, Ord.
Encodable, Decodable. These require serialize.
Clone, to create T from &T via a copy.
Hash, to iterate over the bytes in a data type.
Rand, to create a random instance of a data type.
Default, to create an empty instance of a data type.
Zero, to create a zero instance of a numeric data type.
FromPrimitive, to create an instance from a numeric primitive.
Show, to format a value using the {} formatter.

One can indicate the stability of an API using the following attributes:

deprecated: This item should no longer be used, e.g. it has been replaced. No guarantee of backwards-compatibility.
experimental: This item was only recently introduced or is otherwise in a state of flux. It may change significantly, or even be removed. No guarantee of backwards-compatibility.
unstable: This item is still under development, but requires more testing to be considered stable. No guarantee of backwards-compatibility.
stable: This item is considered stable, and will not change significantly. Guarantee of backwards-compatibility.
frozen: This item is very stable, and is unlikely to change. Guarantee of backwards-compatibility.
locked: This item will never change unless a serious bug is found. Guarantee of backwards-compatibility.

These levels are directly inspired by Node.js' "stability index". Stability levels are inherited, so an item's stability attribute is the default stability for everything nested underneath it. There are lints for disallowing items marked with certain levels: deprecated, experimental and unstable. For now, only deprecated warns by default, but this will change once the standard library has been stabilized. Stability levels are meant to be promises at the crate level, so these lints only apply when referencing items from an external crate, not to items defined within the current crate. Items with no stability level are considered to be unstable for the purposes of the lint. One can give an optional string that will be displayed when the lint flags the use of an item.
For example, if we define one crate called stability_levels:

#[deprecated="replaced by `best`"]
pub fn bad() {
  // delete everything
}

pub fn better() {
  // delete fewer things
}

#[stable]
pub fn best() {
  // delete nothing
}

then the lints will work as follows for a client crate:

#![warn(unstable)]
extern crate stability_levels;
use stability_levels::{bad, better, best};

fn main() {
  bad();    // "warning: use of deprecated item: replaced by `best`"
  better(); // "warning: use of unmarked item"
  best();   // no warning
}

Note: Currently these are only checked when applied to individual functions, structs, methods and enum variants, not to entire modules, traits, impls or enums themselves.

Certain aspects of Rust may be implemented in the compiler, but they're not necessarily ready for every-day use. These features are often of "prototype quality" or "almost production ready", but may not be stable enough to be considered a full-fledged language feature. For this reason, Rust recognizes a special crate-level attribute of the form:

#![feature(feature1, feature2, feature3)]

This directive informs the compiler that the features feature1, feature2, and feature3 should all be enabled. This is only recognized at a crate-level, not at a module-level. Without this directive, all features are considered off, and using the features will result in a compiler error. The currently implemented features of the reference compiler are:

macro_rules - The definition of new macros. This does not encompass macro invocation, which is always enabled by default; this only covers the definition of new macros.
There are currently various problems with invoking macros, how they interact with their environment, and possibly how they are used outside of the location in which they are defined. Macro definitions are likely to change slightly in the future, so they are currently hidden behind this feature.

globs - Importing everything in a module through *. This is currently a large source of bugs in name resolution for Rust, and it's not clear whether this will continue as a feature or not. For these reasons, the glob import statement has been hidden behind this feature flag.

struct_variant - Structural enum variants (those with named fields). It is currently unknown whether this style of enum variant is as fully supported as the tuple forms, and it's not certain that this style of variant should remain in the language. For now this style of variant is hidden behind a feature flag.

once_fns - Onceness guarantees that a closure is only executed once. Defining a closure as once is unlikely to be supported going forward, so such closures are hidden behind this feature until they are to be removed.

managed_boxes - Usage of @ pointers is gated due to many planned changes to this feature. In the past, this has meant "a GC pointer", but the current implementation uses reference counting and will likely change drastically over time. Additionally, the @ syntax will no longer be used to create GC boxes.

asm - The asm! macro provides a means for inline assembly. This is often useful, but the exact syntax for this feature along with its semantics are likely to change, so this macro usage must be opted into.

non_ascii_idents - The compiler supports the use of non-ASCII identifiers, but the implementation is a little rough around the edges, so this can be seen as an experimental feature for now until the specification of identifiers is fully fleshed out.

thread_local - The usage of the #[thread_local] attribute is experimental and should be seen as unstable.
This attribute is used to declare a static as being unique per-thread, leveraging LLVM's implementation, which works in concert with the kernel loader and dynamic linker. This is not necessarily available on all platforms, and usage of it is discouraged (Rust focuses more on task-local data instead of thread-local data).

link_args - This attribute is used to specify custom flags to the linker, but usage is strongly discouraged. The compiler's usage of the system linker is not guaranteed to continue in the future, and if the system linker is not used then specifying custom flags doesn't have much meaning.

If a feature is promoted to a language feature, then all existing programs will start to receive compilation warnings about #[feature] directives which enabled the new feature (because the directive is no longer necessary). However, if a feature is decided to be removed from the language, errors will be issued (if there isn't a parser error first). The directive in this case is no longer necessary, and it's likely that existing code will break if the feature isn't removed. If an unknown feature is found in a directive, it results in a compiler error. An unknown feature is one which has never been recognized by the compiler.

A declaration statement is one that introduces one or more names into the enclosing statement block. The declared names may denote new slots or new items. An item declaration statement has a syntactic form identical to an item declaration within a module. Declaring an item — a function, enumeration, structure, type, static, trait, implementation or module — locally within a statement block is simply a way of restricting its scope to a narrow region containing all of its uses; it is otherwise identical in meaning to declaring the item outside the statement block. Note: there is no implicit capture of the function's dynamic environment when declaring a function-local item.

let_decl : "let" pat [':' type ] ? [ init ] ? ';' ;
init : [ '=' ] expr ;

A slot declaration introduces a new set of slots, given by a pattern.
The pattern may be followed by a type annotation, and/or an initializer expression. When no type annotation is given, the compiler will infer the type, or signal an error if insufficient type information is available for definite inference. Any slots introduced by a slot declaration are visible from the point of declaration until the end of the enclosing block scope.

An expression statement is one that evaluates an expression and ignores its result. The type of an expression statement e; is always (), regardless of the type of e. As a rule, an expression statement's purpose is to trigger the effects of evaluating its expression.

Expressions are divided into two main categories: lvalues and rvalues. Likewise, within each expression, sub-expressions may occur in lvalue context or rvalue context. The evaluation of an expression depends both on its own category and the context it occurs within.

An lvalue is an expression that represents a memory location. These expressions are paths (which refer to local variables, function and method arguments, or static variables), dereferences (*expr), indexing expressions (expr[expr]), and field references (expr.f). All other expressions are rvalues. The left operand of an assignment or compound-assignment expression is an lvalue context, as is the single operand of a unary borrow. All other expression contexts are rvalue contexts.

When an lvalue is evaluated in an lvalue context, it denotes a memory location; when evaluated in an rvalue context, it denotes the value held in that memory location. When an rvalue is used in lvalue context, a temporary un-named lvalue is created and used instead. A temporary's lifetime equals the largest lifetime of any reference that points to it.

When a local variable is used as an rvalue, the variable will either be moved or copied, depending on its type. For types that contain owning pointers or values that implement the special trait Drop, the variable is moved. All other types are copied.
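A small sketch of slot declarations with and without a type annotation, and of the copy behavior just described (the values are illustrative; the fixed-width i32 type is used to keep the snippet portable):

```rust
fn main() {
    // The type of `a` is inferred from its initializer.
    let a = 10i32;
    // A type annotation may be given explicitly instead.
    let b: i32 = 20;
    // i32 contains no owning pointers and does not implement Drop,
    // so using `a` as an rvalue copies it; `a` remains usable after.
    let c = a;
    println!("{}", a + b + c); // prints 40
}
```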
A literal expression consists of one of the literal forms described earlier. It directly describes a number, character, string, boolean value, or the unit value.

();       // unit type
"hello";  // string type
'5';      // character type
5;        // integer type

A path used in an expression context denotes either a local variable or an item. Path expressions are lvalues.

Tuples are written by enclosing one or more comma-separated expressions in parentheses. They are used to create tuple-typed values.

(0,);
(0.0, 4.5);
("a", 4u, true);

struct_expr : expr_path '{' ident ':' expr [ ',' ident ':' expr ] * [ ".." expr ] '}' | expr_path '(' expr [ ',' expr ] * ')' | expr_path ;

There are several forms of structure expressions. A structure expression consists of the path of a structure item, followed by a brace-enclosed list of one or more comma-separated name-value pairs, providing the field values of a new instance of the structure. A field name can be any identifier, and is separated from its value expression by a colon. The location denoted by a structure field is mutable if and only if the enclosing structure is mutable.

A tuple structure expression consists of the path of a structure item, followed by a parenthesized list of one or more comma-separated expressions (in other words, the path of a structure item followed by a tuple expression). The structure item must be a tuple structure item.

A unit-like structure expression consists only of the path of a structure item.
The following are examples of structure expressions:

struct Point { x: f64, y: f64 }
struct TuplePoint(f64, f64);
mod game {
  pub struct User<'a> { pub name: &'a str, pub age: uint, pub score: uint }
}
struct Cookie;
fn some_fn<T>(t: T) {}

Point {x: 10.0, y: 20.0};
TuplePoint(10.0, 20.0);
let u = game::User {name: "Joe", age: 35, score: 100_000};
some_fn::<Cookie>(Cookie);

A structure expression forms a new value of the named structure type. Note that for a given unit-like structure type, this will always be the same value.

A structure expression can terminate with the syntax .. followed by an expression to denote a functional update. The expression following .. (the base) must have the same structure type as the new structure type being formed. The entire expression denotes the result of constructing a new structure (with the same type as the base expression) with the given values for the fields that were explicitly specified, and the values in the base expression for all other fields.

let base = Point3d {x: 1, y: 2, z: 3};
Point3d {y: 0, z: 10, .. base};

block_expr : '{' [ view_item ] * [ stmt ';' | item ] * [ expr ] '}' ;

A block expression is similar to a module in terms of the declarations that are possible. Each block conceptually introduces a new namespace scope. View items can bring new names into scopes, and declared items are in scope for only the block itself. A block will execute each statement sequentially, and then execute the expression (if given). If the final expression is omitted, the type and return value of the block are (), but if it is provided, the type and return value of the block are that of the expression itself.

method_call_expr : expr '.' ident paren_expr_list ;

A method call consists of an expression followed by a single dot, an identifier, and a parenthesized expression-list.
Method calls are resolved to methods on specific traits, either statically dispatching to a method if the exact self-type of the left-hand side is known, or dynamically dispatching if the left-hand-side expression is an indirect object type.

field_expr : expr '.' ident ;

A field expression consists of an expression followed by a single dot and an identifier, when not immediately followed by a parenthesized expression-list (the latter is a method call expression). A field expression denotes a field of a structure.

mystruct.myfield;
foo().x;
(Struct {a: 10, b: 20}).a;

A field access is an lvalue referring to the value of that field. When the type providing the field inherits mutability, it can be assigned to. Also, if the type of the expression to the left of the dot is a pointer, it is automatically dereferenced to make the field access possible.

vec_expr : '[' "mut" ? vec_elems? ']' ;
vec_elems : [expr [',' expr]*] | [expr ',' ".." expr] ;

A vector expression is written by enclosing zero or more comma-separated expressions of uniform type in square brackets. In the [expr ',' ".." expr] form, the expression after the ".." must be a constant expression that can be evaluated at compile time, such as a literal or a static item.

[1, 2, 3, 4];
["a", "b", "c", "d"];
[0, ..128]; // vector with 128 zeros
[0u8, 0u8, 0u8, 0u8];

idx_expr : expr '[' expr ']' ;

Vector-typed expressions can be indexed by writing a square-bracket-enclosed expression (the index) after them. When the vector is mutable, the resulting lvalue can be assigned to. Indices are zero-based, and may be of any integral type. Vector access is bounds-checked at run-time. When the check fails, it will put the task in a failing state.

use std::task;
task::spawn(proc() {
  ([1, 2, 3, 4])[0];
  (["a", "b"])[10]; // fails
})

Rust defines six symbolic unary operators.
They are all written as prefix operators, before the expression they apply to.

- : Negation. May only be applied to numeric types.
* : Dereference. When applied to a pointer, it denotes the pointed-to location. For pointers to mutable locations, the resulting lvalue can be assigned to. On non-pointer types, it calls the deref method of the std::ops::Deref trait, or the deref_mut method of the std::ops::DerefMut trait (if implemented by the type and required for an outer expression that will or could mutate the dereference), and produces the result of dereferencing the & or &mut borrowed pointer returned from the overload method.
! : Logical negation. On the boolean type, this flips between true and false. On integer types, this inverts the individual bits in the two's complement representation of the value.
box : Boxing operators. Allocate a box to hold the value they are applied to, and store the value in it. box creates an owned box.
& : Borrow operator. Returns a reference, pointing to its operand. The operand of a borrow is statically proven to outlive the resulting pointer. If the borrow-checker cannot prove this, it is a compilation error.

binop_expr : expr binop expr ;

Binary operator expressions are given in terms of operator precedence.

Binary arithmetic expressions are syntactic sugar for calls to built-in traits, defined in the std::ops module of the std library. This means that arithmetic operators can be overridden for user-defined types. The default meaning of the operators on standard types is given here.

+ : Addition and vector/string concatenation. Calls the add method on the std::ops::Add trait.
- : Subtraction. Calls the sub method on the std::ops::Sub trait.
* : Multiplication. Calls the mul method on the std::ops::Mul trait.
/ : Quotient. Calls the div method on the std::ops::Div trait.
% : Remainder. Calls the rem method on the std::ops::Rem trait.

Like the arithmetic operators, bitwise operators are syntactic sugar for calls to methods of built-in traits.
This means that bitwise operators can be overridden for user-defined types. The default meaning of the operators on standard types is given here.

& : And. Calls the bitand method of the std::ops::BitAnd trait.
| : Inclusive or. Calls the bitor method of the std::ops::BitOr trait.
^ : Exclusive or. Calls the bitxor method of the std::ops::BitXor trait.
<< : Logical left shift. Calls the shl method of the std::ops::Shl trait.
>> : Logical right shift. Calls the shr method of the std::ops::Shr trait.

Comparison operators are, like the arithmetic and bitwise operators, syntactic sugar for calls to built-in traits. This means that comparison operators can be overridden for user-defined types. The default meaning of the operators on standard types is given here.

== : Equal to. Calls the eq method on the std::cmp::PartialEq trait.
!= : Unequal to. Calls the ne method on the std::cmp::PartialEq trait.
< : Less than. Calls the lt method on the std::cmp::PartialOrd trait.
> : Greater than. Calls the gt method on the std::cmp::PartialOrd trait.
<= : Less than or equal. Calls the le method on the std::cmp::PartialOrd trait.
>= : Greater than or equal. Calls the ge method on the std::cmp::PartialOrd trait.

A type cast expression is denoted with the binary operator as. Executing an as expression casts the value on the left-hand side to the type on the right-hand side. A numeric value can be cast to any numeric type. A raw pointer value can be cast to or from any integral type or raw pointer type. Any other cast is unsupported and will fail to compile. An example of an as expression:

fn avg(v: &[f64]) -> f64 {
  let sum: f64 = sum(v);
  let sz: f64 = len(v) as f64;
  return sum / sz;
}

An assignment expression consists of an lvalue expression followed by an equals sign (=) and an rvalue expression.
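Another brief sketch of as casts, showing integer truncation and an integer-to-float conversion (the particular values are illustrative only):

```rust
fn main() {
    // 300 does not fit in a u8; the cast keeps the low 8 bits
    // (300 mod 256 = 44).
    let n: i32 = 300;
    let small = n as u8;
    // Casts also convert between integer and floating-point types.
    let ratio = 1 as f64 / 4 as f64;
    println!("{} {}", small, ratio); // prints "44 0.25"
}
```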
Evaluating an assignment expression either copies or moves its right-hand operand to its left-hand operand.

let mut x = 0;
let y = 0;
x = y;

The +, -, *, /, %, &, |, ^, <<, and >> operators may be composed with the = operator. The expression lval OP= val is equivalent to lval = lval OP val. For example, x = x + 1 may be written as x += 1. Any such expression always has the unit type.

The precedence of Rust binary operators is ordered as follows, going from strong to weak:

* / %
as
+ -
<< >>
&
^
|
< > <= >=
== !=
&&
||
=

Operators at the same precedence level are evaluated left-to-right. Unary operators have the same precedence level, and it is stronger than any of the binary operators'.

An expression enclosed in parentheses evaluates to the result of the enclosed expression. Parentheses can be used to explicitly specify evaluation order within an expression.

paren_expr : '(' expr ')' ;

An example of a parenthesized expression:

let x = (2 + 3) * 4;

expr_list : [ expr [ ',' expr ]* ] ? ;
paren_expr_list : '(' expr_list ')' ;
call_expr : expr paren_expr_list ;

A call expression invokes a function, providing zero or more input slots and an optional reference slot to serve as the function's output, bound to the lval on the right hand side of the call. If the function eventually returns, then the expression completes. Some examples of call expressions:

use std::from_str::FromStr;
fn add(x: int, y: int) -> int { 0 }

let x: int = add(1, 2);
let pi: Option<f32> = FromStr::from_str("3.14");

ident_list : [ ident [ ',' ident ]* ] ? ;
lambda_expr : '|' ident_list '|' expr ;

A lambda expression (sometimes called an "anonymous function expression") defines a function and denotes it as a value, in a single expression. A lambda expression is a pipe-symbol-delimited (|) list of identifiers followed by an expression.
A lambda expression denotes a function that maps a list of parameters (ident_list) onto the expression that follows the ident_list. The identifiers in the ident_list are the parameters to the function. These parameters' types need not be specified, as the compiler infers them from context. Lambda expressions are most useful when passing functions as arguments to other functions, as an abbreviation for defining and capturing a separate function.

Significantly, lambda expressions capture their environment, which regular function definitions do not. The exact type of capture depends on the function type inferred for the lambda expression. In the simplest and least-expensive form (analogous to a || { } expression), the lambda expression captures its environment by reference, effectively borrowing pointers to all outer variables mentioned inside the function. Alternately, the compiler may infer that a lambda expression should copy or move values (depending on their type) from the environment into the lambda expression's captured environment.

In this example, we define a function ten_times that takes a higher-order function argument, and call it with a lambda expression as an argument.

fn ten_times(f: |int|) {
  let mut i = 0;
  while i < 10 {
    f(i);
    i += 1;
  }
}

ten_times(|j| println!("hello, {}", j));

while_expr : "while" no_struct_literal_expr '{' block '}' ;

A while loop begins by evaluating the boolean loop conditional expression. If the loop conditional expression evaluates to true, the loop body block executes and control returns to the loop conditional expression. If the loop conditional expression evaluates to false, the while expression completes. An example:

let mut i = 0;
while i < 10 {
  println!("hello");
  i = i + 1;
}

A loop expression denotes an infinite loop.

loop_expr : [ lifetime ':' ] "loop" '{' block '}' ;

A loop expression may optionally have a label.
If a label is present, then labeled break and continue expressions nested within this loop may exit out of this loop or return control to its head. See Break expressions and Continue expressions.

break_expr : "break" [ lifetime ] ;

A break expression has an optional label. If the label is absent, then executing a break expression immediately terminates the innermost loop enclosing it. It is only permitted in the body of a loop. If the label is present, then break foo terminates the loop with label foo, which need not be the innermost label enclosing the break expression, but must enclose it.

continue_expr : "continue" [ lifetime ] ;

A continue expression has an optional label. If the label is absent, then executing a continue expression immediately terminates the current iteration of the innermost loop enclosing it, returning control to the loop head. In the case of a while loop, the head is the conditional expression controlling the loop. In the case of a for loop, the head is the call-expression controlling the loop. If the label is present, then continue foo returns control to the head of the loop with label foo, which need not be the innermost label enclosing the continue expression, but must enclose it. A continue expression is only permitted in the body of a loop.

for_expr : "for" pat "in" no_struct_literal_expr '{' block '}' ;

A for expression is a syntactic construct for looping over elements provided by an implementation of std::iter::Iterator. An example of a for loop over the contents of a vector:

type Foo = int;
fn bar(f: Foo) { }
let a = 0;
let b = 0;
let c = 0;

let v: &[Foo] = &[a, b, c];
for e in v.iter() {
  bar(*e);
}

An example of a for loop over a series of integers:

fn bar(b: uint) { }
for i in range(0u, 256) {
  bar(i);
}

if_expr : "if" no_struct_literal_expr '{' block '}' else_tail ? ;
else_tail : "else" [ if_expr | '{' block '}' ] ;

An if expression is a conditional branch in program control. The form of an if expression is a condition expression, followed by a consequent block, any number of else if conditions and blocks, and an optional trailing else block. The condition expressions must have type bool. If a condition expression evaluates to true, the consequent block is executed and any subsequent else if or else block is skipped. If a condition expression evaluates to false, the consequent block is skipped and any subsequent else if condition is evaluated. If all if and else if conditions evaluate to false, then any else block is executed.
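The labeled break described earlier can be sketched with a two-level loop (the label name 'outer and the counter are illustrative):

```rust
fn main() {
    let mut count = 0i32;
    'outer: loop {
        loop {
            count += 1;
            if count == 3 {
                // Terminates the loop labeled 'outer, not just
                // the innermost enclosing loop.
                break 'outer;
            }
        }
    }
    println!("{}", count); // prints 3
}
```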
match_expr : "match" no_struct_literal_expr '{' match_arm * '}' ;
match_arm : attribute * match_pat "=>" [ expr "," | '{' block '}' ] ;
match_pat : pat [ '|' pat ] * [ "if" expr ] ? ;

A match expression branches on a pattern. The exact form of matching that occurs depends on the pattern. Patterns consist of some combination of literals, destructured vectors or enum constructors, structures and tuples, variable binding specifications, wildcards (..), and placeholders (_). A match expression has a head expression, which is the value to compare to the patterns. The type of the patterns must equal the type of the head expression.

In a pattern whose head expression has an enum type, a placeholder (_) stands for a single data field, whereas a wildcard .. stands for all the fields of a particular variant. For example:

enum List<X> { Nil, Cons(X, Box<List<X>>) }

let x: List<int> = Cons(10, box Cons(11, box Nil));

match x {
  Cons(_, box Nil) => fail!("singleton list"),
  Cons(..) => return,
  Nil => fail!("empty list")
}

The first pattern matches lists constructed by applying Cons to any head value, and a tail value of box Nil. The second pattern matches any list constructed with Cons, ignoring the values of its arguments. The difference between _ and .. is that the pattern C(_) is only type-correct if C has exactly one argument, while the pattern C(..) is type-correct for any enum variant C, regardless of how many arguments C has.

Used inside a vector pattern, .. stands for any number of elements. This wildcard can be used at most once for a given vector, which implies that it cannot be used to specifically match elements that are at an unknown distance from both ends of a vector, like [.., 42, ..]. If followed by a variable name, it will bind the corresponding slice to the variable.
Example:

    fn is_symmetric(list: &[uint]) -> bool {
        match list {
            [] | [_]                   => true,
            [x, ..inside, y] if x == y => is_symmetric(inside),
            _                          => false
        }
    }

    fn main() {
        let sym     = &[0, 1, 4, 2, 4, 1, 0];
        let not_sym = &[0, 1, 7, 2, 4, 1, 0];
        assert!(is_symmetric(sym));
        assert!(!is_symmetric(not_sym));
    }

A match behaves differently depending on whether the head expression is an lvalue or an rvalue. If the head expression is an rvalue, it is first evaluated into a temporary location, and the resulting value is sequentially compared to the patterns in the arms until a match is found. The first arm with a matching pattern is chosen as the branch target of the match, any variables bound by the pattern are assigned to local variables in the arm's block, and control enters the block.

When the head expression is an lvalue, the match does not allocate a temporary location (however, a by-value binding may copy or move from the lvalue). When possible, it is preferable to match on lvalues, as the lifetime of these matches inherits the lifetime of the lvalue, rather than being restricted to the inside of the match.

An example of a match expression:

    enum List<X> { Nil, Cons(X, Box<List<X>>) }

    let x: List<int> = Cons(10, box Cons(11, box Nil));

    match x {
        Cons(a, box Cons(b, _)) => {
            process_pair(a, b);
        }
        Cons(10, _) => {
            process_ten();
        }
        Nil => {
            return;
        }
        _ => {
            fail!();
        }
    }

Patterns that bind variables default to binding to a copy or move of the matched value (depending on the matched value's type). This can be changed to bind to a reference by using the ref keyword, or to a mutable reference using ref mut. Subpatterns can also be bound to variables by the use of the syntax variable @ subpattern.
For example:

    enum List { Nil, Cons(uint, Box<List>) }

    fn is_sorted(list: &List) -> bool {
        match *list {
            Nil | Cons(_, box Nil) => true,
            Cons(x, ref r @ box Cons(y, _)) => (x <= y) && is_sorted(*r)
        }
    }

    fn main() {
        let a = Cons(6, box Cons(7, box Cons(42, box Nil)));
        assert!(is_sorted(&a));
    }

Patterns can also dereference pointers by using the &, box or @ symbols, as appropriate. For example, these two matches on x: &int are equivalent:

    let y = match *x { 0 => "zero", _ => "some" };
    let z = match x { &0 => "zero", _ => "some" };
    assert_eq!(y, z);

A pattern that's just an identifier, like Nil in the previous example, could either refer to an enum variant that's in scope, or bind a new variable. The compiler resolves this ambiguity by forbidding variable bindings that occur in match patterns from shadowing names of variants that are in scope. For example, wherever List is in scope, a match pattern would not be able to bind Nil as a new name. The compiler interprets a variable pattern x as a binding only if there is no variant named x in scope. A convention you can use to avoid conflicts is simply to name variants with upper-case letters, and local variables with lower-case letters.

Multiple match patterns may be joined with the | operator. A range of values may be specified with the .. operator. For example:

    let message = match x {
        0 | 1  => "not many",
        2 .. 9 => "a few",
        _      => "lots"
    };

Range patterns only work on scalar types (like integers and characters; not like vectors and structs, which have sub-components). A range pattern may not be a sub-range of another range pattern inside the same match.

Finally, match patterns can accept pattern guards to further refine the criteria for matching a case. Pattern guards appear after the pattern and consist of a bool-typed expression following the if keyword. A pattern guard may refer to the variables bound within the pattern it follows.
    let message = match maybe_digit {
        Some(x) if x < 10 => process_digit(x),
        Some(x) => process_other(x),
        None => fail!()
    };

return_expr : "return" expr ? ;

Return expressions are denoted with the keyword return. Evaluating a return expression moves its argument into the output slot of the current function, destroys the current function activation frame, and transfers control to the caller frame. An example of a return expression:

    fn max(a: int, b: int) -> int {
        if a > b {
            return a;
        }
        return b;
    }

Every slot, item and value in a Rust program has a type. The type of a value defines the interpretation of the memory holding it. Built-in types and type-constructors are tightly integrated into the language, in nontrivial ways that are not possible to emulate in user-defined types. User-defined types have limited capabilities.

The primitive types are the following:

The "unit" type (), having the single "unit" value () (occasionally called "nil").[3]

The boolean type bool, with values true and false.

The machine types are the following:

The unsigned word types u8, u16, u32 and u64, with values drawn from the integer intervals [0, 2^8 - 1], [0, 2^16 - 1], [0, 2^32 - 1] and [0, 2^64 - 1] respectively.

The signed two's complement word types i8, i16, i32 and i64, with values drawn from the integer intervals [-(2^7), 2^7 - 1], [-(2^15), 2^15 - 1], [-(2^31), 2^31 - 1] and [-(2^63), 2^63 - 1] respectively.

The IEEE 754-2008 binary32 and binary64 floating-point types: f32 and f64, respectively.

The Rust type uint[4] is an unsigned integer type with target-machine-dependent size. Its size, in bits, is equal to the number of bits required to hold any memory address on the target machine.

The Rust type int[5] is a two's complement signed integer type with target-machine-dependent size. Its size, in bits, is equal to the size of the Rust type uint on the same target machine.

The types char and str hold textual data. A value of type char is a Unicode scalar value (i.e.
a code point that is not a surrogate), represented as a 32-bit unsigned word in the 0x0000 to 0xD7FF or 0xE000 to 0x10FFFF range. A [char] vector is effectively a UCS-4 / UTF-32 string. A value of type str is a Unicode string, represented as a vector of 8-bit unsigned bytes holding a sequence of UTF-8 code points. Since str is of unknown size, it is not a first-class type, but can only be instantiated through a pointer type, such as &str or String.

A tuple type is a heterogeneous product of other types. The members of a tuple are laid out in memory contiguously, in the order specified by the tuple type. An example of a tuple type and its use:

    type Pair<'a> = (int, &'a str);
    let p: Pair<'static> = (10, "hello");
    let (a, b) = p;
    assert!(b != "world");

The vector type constructor represents a homogeneous array of values of a given type. A vector has a fixed size. (Operations like vec.push operate solely on owned vectors.) A vector type can be annotated with a definite size, such as [int, ..10]. Such a definite-sized vector type is a first-class type, since its size is known statically. A vector without such a size is said to be of indefinite size, and is therefore not a first-class type. An indefinite-size vector can only be instantiated through a pointer type, such as &[T] or Vec<T>. The kind of a vector type depends on the kind of its element type, as with other simple structural types. Expressions producing vectors of definite size cannot be evaluated in a context expecting a vector of indefinite size; one must copy the definite-sized vector contents into a distinct vector of indefinite size. An example of a vector type and its use:

    let v: &[int] = &[7, 5, 3];
    let i: int = v[2];
    assert!(i == 3);

All in-bounds elements of a vector are always initialized, and access to a vector is always bounds-checked.
A struct type is a heterogeneous product of other types, called the fields of the type.[6] New instances of a struct can be constructed with a struct expression. The memory layout of a struct is undefined by default to allow for compiler optimizations like field reordering, but it can be fixed with the #[repr(...)] attribute. In either case, fields may be given in any order in a corresponding struct expression; the resulting struct value will always have the same memory layout. The fields of a struct may be qualified by visibility modifiers, to allow access to data in a structure outside a module.

A tuple struct type is just like a structure type, except that the fields are anonymous.

A unit-like struct type is like a structure type, except that it has no fields. The one value constructed by the associated structure expression is the only value that inhabits such a type.

An enumerated type is a nominal, heterogeneous disjoint union type, denoted by the name of an enum item.[7] An enum item declares both the type and a number of variant constructors, each of which is independently named and takes an optional tuple of arguments. New instances of an enum can be constructed by calling one of the variant constructors, in a call expression. Any enum value consumes as much memory as the largest variant constructor for its corresponding enum type. Enum types cannot be denoted structurally as types, but must be denoted by named reference to an enum item.

Nominal types — enumerations and structures — may be recursive. That is, each enum constructor or struct field may refer, directly or indirectly, to the enclosing enum or struct type itself. Such recursion has restrictions: an enum item must have at least one non-recursive constructor (in order to give the recursion a basis case).
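The three struct forms described above can be sketched side by side. The type names here are invented for illustration, and the sketch is written in a subset that compiles both in the dialect described here and in current Rust:

```rust
// Illustrative sketch of the three struct forms.
struct Point { x: i32, y: i32 } // struct type: named fields
struct Pair(i32, i32);          // tuple struct: fields are anonymous
struct Marker;                  // unit-like struct: no fields at all

fn sum_fields() -> i32 {
    let p = Point { x: 1, y: 2 };
    let Pair(a, b) = Pair(3, 4); // anonymous fields via destructuring
    let _m = Marker;             // the single value inhabiting Marker
    p.x + p.y + a + b
}

fn main() {
    assert_eq!(sum_fields(), 10);
}
```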
An example of a recursive type and its use:

    enum List<T> {
        Nil,
        Cons(T, Box<List<T>>)
    }

    let a: List<int> = Cons(7, box Cons(13, box Nil));

All pointers in Rust are explicit first-class values. They can be copied, stored into data structures, and returned from functions. There are four varieties of pointer in Rust:

Owning pointers (Box): These point to owned heap allocations (or "boxes") in the shared, inter-task heap. Each owned box has a single owning pointer; pointer and pointee retain a 1:1 relationship at all times. Owning pointers are written Box<content>, for example Box<int> means an owning pointer to an owned box containing an integer. Copying an owned box is a "deep" operation: it involves allocating a new owned box and copying the contents of the old box into the new box. Releasing an owning pointer immediately releases its corresponding owned box.

References (&): These point to memory owned by some other value. References arise by (automatic) conversion from owning pointers or managed pointers, or by applying the borrowing operator & to some other value, including lvalues, rvalues or temporaries. References are written &content, or in some cases &'f content for some lifetime-variable f; for example &int means a reference to an integer. Copying a reference is a "shallow" operation: it involves only copying the pointer itself. Releasing a reference typically has no effect on the value it points to, with the exception of temporary values, which are released when the last reference to them is released.

Raw pointers (*): Raw pointers are pointers without safety or liveness guarantees. Raw pointers are written as *const T or *mut T, for example *const int means a raw pointer to an integer. Copying or dropping a raw pointer has no effect on the lifecycle of any other value.
Dereferencing a raw pointer or converting it to any other pointer type is an unsafe operation. Raw pointers are generally discouraged in Rust code; they exist to support interoperability with foreign code, and writing performance-critical or low-level functions.

The function type constructor fn forms new function types. A function type consists of a possibly-empty set of function-type modifiers (such as unsafe or extern), a sequence of input types and an output type. An example of a fn type:

    fn add(x: int, y: int) -> int {
        return x + y;
    }

    let mut x = add(5,7);

    type Binop<'a> = |int,int|: 'a -> int;
    let bo: Binop = add;
    x = bo(5,7);

closure_type := [ 'unsafe' ] [ '<' lifetime-list '>' ] '|' arg-list '|'
                [ ':' bound-list ] [ '->' type ]
procedure_type := 'proc' [ '<' lifetime-list '>' ] '(' arg-list ')'
                  [ ':' bound-list ] [ '->' type ]
lifetime-list := lifetime | lifetime ',' lifetime-list
arg-list := ident ':' type | ident ':' type ',' arg-list
bound-list := bound | bound '+' bound-list
bound := path | lifetime

The type of a closure mapping an input of type A to an output of type B is |A| -> B. A closure with no arguments or return values has type ||. Similarly, a procedure mapping A to B is proc(A) -> B and a no-argument, no-return-value procedure has type proc(). An example of creating and calling a closure:

    let captured_var = 20;

    let closure = |arg: int| println!("captured_var={}, arg={}", captured_var, arg);

    closure(2);

Unlike closures, procedures may only be invoked once, but own their environment, and are allowed to move out of their environment. Procedures are allocated on the heap (unlike closures). An example of creating and calling a procedure:

    let string = "Hello".to_string();

    let print_string = proc() {
        println!("{} world!", string);
    };

    print_string();

Every trait item (see traits) defines a type with the same name as the trait. This type is called the object type of the trait. Object types permit "late binding" of methods, dispatched using virtual method tables ("vtables").
Whereas most calls to trait methods are "early bound" (statically resolved) to specific implementations at compile time, a call to a method on an object type is only resolved to a vtable entry at compile time. The actual implementation for each vtable entry can vary on an object-by-object basis.

Given a pointer-typed expression E of type &T or Box<T>, where T implements trait R, casting E to the corresponding pointer type &R or Box<R> results in a value of the object type R. This result is represented as a pair of pointers: the vtable pointer for the T implementation of R, and the pointer value of E. An example of an object type:

    trait Printable {
        fn to_string(&self) -> String;
    }

    impl Printable for int {
        fn to_string(&self) -> String { self.to_str() }
    }

    fn print(a: Box<Printable>) {
        println!("{}", a.to_string());
    }

    fn main() {
        print(box 10i as Box<Printable>);
    }

In this example, the trait Printable occurs as an object type in both the type signature of print and the cast expression in main.

Within the body of an item that has type parameter declarations, the names of its type parameters are types:

    fn map<A: Clone, B: Clone>(f: |A| -> B, xs: &[A]) -> Vec<B> {
        if xs.len() == 0 {
            return vec![];
        }
        let first: B = f(xs[0].clone());
        let rest: Vec<B> = map(f, xs.slice(1, xs.len()));
        return vec![first].append(rest.as_slice());
    }

Here, first has type B, referring to map's B type parameter; and rest has type Vec<B>, a vector type with element type B.

The special type self has a meaning within methods inside an impl item.
It refers to the type of the implicit self argument. For example, in:

    trait Printable {
        fn make_string(&self) -> String;
    }

    impl Printable for String {
        fn make_string(&self) -> String {
            (*self).clone()
        }
    }

self refers to the value of type String that is the receiver for a call to the method make_string.

Types in Rust are categorized into kinds, based on various properties of the components of the type. The kinds are:

Send: Types of this kind can be safely sent between tasks. This kind includes scalars, owning pointers, owned closures, and structural types containing only other owned types. All Send types are 'static.

Copy: Types of this kind consist of "Plain Old Data" which can be copied by simply moving bits. All values of this kind can be implicitly copied. This kind includes scalars and immutable references, as well as structural types containing other Copy types.

'static: Types of this kind do not contain any references (except for references with the static lifetime, which are allowed). This can be a useful guarantee for code that breaks borrowing assumptions using unsafe operations.

Drop: This is not strictly a kind, but its presence interacts with kinds: the Drop trait provides a single method drop that takes no parameters, and is run when values of the type are dropped. Such a method is called a "destructor", and destructors are always executed in "top-down" order: a value is completely destroyed before any of the values it owns run their destructors. Only Send types can implement Drop.

Default: Types with destructors, closure environments, and various other non-first-class types, are not copyable at all. Such types can usually only be accessed through pointers, or in some cases, moved between mutable locations.

Kinds can be supplied as bounds on type parameters, like traits, in which case the parameter is constrained to types satisfying that kind. By default, type parameters do not carry any assumed kind-bounds at all.
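Supplying a kind as a bound can be sketched as follows. The function name is invented for illustration, and the sketch uses a subset that compiles both in the dialect described here and in current Rust:

```rust
// `T: Send` constrains the type parameter T to sendable types,
// exactly like a trait bound.
fn check_sendable<T: Send>(value: T) -> T { value }

fn main() {
    // Scalars and owned data satisfy the Send bound:
    let n = check_sendable(7i32);
    let s = check_sendable("hello".to_string());
    assert_eq!(n, 7);
    assert_eq!(s.len(), 5);
    // A type containing a non-sendable component (for example a
    // managed box) would be rejected by the compiler at the call site.
}
```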
When instantiating a type parameter, the kind bounds on the parameter are checked to be the same or narrower than the kind of the type that it is instantiated with. Sending operations are not part of the Rust language, but are implemented in the library. Generic functions that send values bound the kind of these values to sendable.

A Rust program's memory consists of a static set of items, a set of tasks each with its own stack, and a heap. Immutable portions of the heap may be shared between tasks, mutable portions may not. Allocations in the stack consist of slots, and allocations in the heap consist of boxes.

The items of a program are those functions, modules and types that have their value calculated at compile-time and stored uniquely in the memory image of the Rust process. Items are neither dynamically allocated nor freed.

A task's stack consists of activation frames automatically allocated on entry to each function as the task executes. A stack allocation is reclaimed when control leaves the frame containing it.

The heap is a general term that describes two separate sets of boxes: managed boxes — which may be subject to garbage collection — and owned boxes. The lifetime of an allocation in the heap depends on the lifetime of the box values pointing to it. Since box values may themselves be passed in and out of frames, or stored in the heap, heap allocations may outlive the frame they are allocated within.

A task owns all memory it can safely reach through local variables, as well as managed and owned boxes and references. When a task sends a value that has the Send trait to another task, it loses ownership of the value sent and can no longer refer to it. This is statically guaranteed by the combined use of "move semantics", and the compiler-checked meaning of the Send trait: it is only instantiated for (transitively) sendable kinds of data constructor and pointers, never including managed boxes or references.
When a stack frame is exited, its local allocations are all released, and its references to boxes (both managed and owned) are dropped. A managed box may (in the case of a recursive, mutable managed type) be cyclic; in this case the release of memory inside the managed structure may be deferred until task-local garbage collection can reclaim it. Code can ensure no such delayed deallocation occurs by restricting itself to owned boxes and similar unmanaged kinds of data. When a task finishes, its stack is necessarily empty and it therefore has no references to any boxes; the remainder of its heap is immediately freed.

A task's stack contains slots. A slot is a component of a stack frame, either a function parameter, a temporary, or a local variable. A local variable (or stack-local allocation) holds a value directly, allocated within the stack's memory. The value is a part of the stack frame.

Local variables are immutable unless declared otherwise, like: let mut x = .... Function parameters are immutable unless declared with mut. The mut keyword applies only to the following parameter (so |mut x, y| and fn f(mut x: Box<int>, y: Box<int>) declare one mutable variable x and one immutable variable y). Methods that take either self or ~self can optionally place them in a mutable slot by prefixing them with mut (similar to regular arguments):

    trait Changer {
        fn change(mut self) -> Self;
        fn modify(mut ~self) -> Box<Self>;
    }

Local variables are not initialized when allocated; the entire frame's worth of local variables is allocated at once, on frame entry, in an uninitialized state. Subsequent statements within a function may or may not initialize the local variables. Local variables can be used only after they have been initialized; this is enforced by the compiler.

An owned box is a reference to a heap allocation holding another value, which is constructed by the prefix operator box. When the standard library is in use, the type of an owned box is std::owned::Box<T>.
An example of an owned box type and value:

    let x: Box<int> = box 10;

Owned box values exist in 1:1 correspondence with their heap allocation; copying an owned box value makes a shallow copy of the pointer. Rust will consider a shallow copy of an owned box to move ownership of the value. After a value has been moved, the source location cannot be used unless it is reinitialized.

    let x: Box<int> = box 10;
    let y = x;
    // attempting to use `x` will result in an error here

An executing Rust program consists of a tree of tasks. A Rust task consists of an entry function, a stack, a set of outgoing communication channels and incoming communication ports, and ownership of some portion of the heap of a single operating-system process. (We expect that many programs will not use channels and ports directly, but will instead use higher-level abstractions provided in standard libraries, such as pipes.)

Multiple Rust tasks may coexist in a single operating-system process. The runtime scheduler maps tasks to a certain number of operating-system threads. By default, the scheduler chooses the number of threads based on the number of concurrent physical CPUs detected at startup. It's also possible to override this choice at runtime. When the number of tasks exceeds the number of threads — which is likely — the scheduler multiplexes the tasks onto threads.[8]

Rust tasks are isolated and generally unable to interfere with one another's memory directly, except through unsafe code. All contact between tasks is mediated by safe forms of ownership transfer, and data races on memory are prohibited by the type system. Inter-task communication and co-ordination facilities are provided in the standard library. When such facilities carry values, the values are restricted to the Send type-kind.
Restricting communication interfaces to this kind ensures that no references or managed pointers move between tasks. Thus access to an entire data structure can be mediated through its owning "root" value; no further locking or copying is required to avoid data races within the substructure of such a value.

The lifecycle of a task consists of a finite set of states and events that cause transitions between the states. The lifecycle states of a task are: running, blocked, failing, and dead.

A task begins its lifecycle — once it has been spawned — in the running state. In this state it executes the statements of its entry function, and any functions called by the entry function.

A task may transition from the running state to the blocked state any time it makes a blocking communication call. When the call can be completed — when a message arrives at a sender, or a buffer opens to receive a message — then the blocked task will unblock and transition back to running.

A task may transition to the failing state at any time, due to being killed by some external event, or internally from the evaluation of a fail!() macro. Once failing, a task unwinds its stack and transitions to the dead state. Unwinding the stack of a task is done by the task itself, on its own control stack. If a value with a destructor is freed during unwinding, the code for the destructor is run, also on the task's control stack. Running the destructor code causes a temporary transition to a running state, and allows the destructor code to cause any subsequent state transitions. The original task of unwinding and failing thereby may suspend temporarily, and may involve (recursive) unwinding of the stack of a failed destructor. Nonetheless, the outermost unwinding activity will continue until the stack is unwound and the task transitions to the dead state. There is no way to "recover" from task failure.
Once a task has temporarily suspended its unwinding in the failing state, failure occurring from within this destructor results in hard failure. A hard failure currently results in the process aborting.

A task in the dead state cannot transition to other states; it exists only to have its termination status inspected by other tasks, and/or to await reclamation when the last reference to it drops.

The currently scheduled task is given a finite time slice in which to execute, after which it is descheduled at a loop-edge or similar preemption point, and another task is scheduled, pseudo-randomly. An executing task can yield control at any time, by making a library call to std::task::yield, which deschedules it immediately. Entering any other non-executing state (blocked, dead) similarly deschedules the task.

The Rust runtime is a relatively compact collection of C++ and Rust code that provides fundamental services and datatypes to all Rust tasks at run-time. It is smaller and simpler than many modern language runtimes. It is tightly integrated into the language's execution model of memory, tasks, communication and logging.

Note: The runtime library will merge with the std library in future versions of Rust.

The runtime memory-management system is based on a service-provider interface, through which the runtime requests blocks of memory from its environment and releases them back to its environment when they are no longer needed. The default implementation of the service-provider interface consists of the C runtime functions malloc and free. The runtime memory-management system, in turn, supplies Rust tasks with facilities for allocating and releasing stacks, as well as allocating and freeing heap data.

The runtime provides C and Rust code to assist with various built-in types, such as vectors, strings, and the low-level communication system (ports, channels, tasks).
Support for other built-in types such as simple types, tuples and enums is open-coded by the Rust compiler.

The runtime provides code to manage inter-task communication. This includes the system of task-lifecycle state transitions depending on the contents of queues, as well as code to copy values between queues and their recipients and to serialize values for transmission over operating-system inter-process communication facilities.

The Rust compiler supports various methods to link crates together both statically and dynamically. This section will explore the various methods to link Rust crates together, and more information about native libraries can be found in the ffi tutorial.

In one session of compilation, the compiler can generate multiple artifacts through the usage of either command line flags or the crate_type attribute. If one or more command line flag is specified, all crate_type attributes will be ignored in favor of only building the artifacts specified by command line.

--crate-type=bin, #[crate_type = "bin"] - A runnable executable will be produced. This requires that there is a main function in the crate.

--crate-type=lib, #[crate_type = "lib"] - A Rust library will be produced; the purpose of this option is to generate the "compiler recommended" style of library. The output library will always be usable by rustc, but the actual type of library may change from time-to-time. The remaining output types are all different flavors of libraries, and the lib type can be seen as an alias for one of them (but the actual one is compiler-defined).

--crate-type=dylib, #[crate_type = "dylib"] - A dynamic Rust library will be produced. This is different from the lib output type in that this forces dynamic library generation. The resulting dynamic library can be used as a dependency for other libraries and/or executables. This output type will create *.so files on linux, *.dylib files on osx, and *.dll files on windows.

--crate-type=staticlib, #[crate_type = "staticlib"] - A static system library will be produced.
This is different from other library outputs in that the Rust compiler will never attempt to link to staticlib outputs. The purpose of this output type is to create a static library containing all of the local crate's code along with all upstream dependencies. The static library is actually a *.a archive on linux and osx and a *.lib file on windows. This format is recommended for use in situations such as linking Rust code into an existing non-Rust application, because it will not have dynamic dependencies on other Rust code.

--crate-type=rlib, #[crate_type = "rlib"] - A "Rust library" file will be produced. This is used as an intermediate artifact and can be thought of as a "static Rust library". These rlib files, unlike staticlib files, are interpreted by the Rust compiler in future linkage. This essentially means that rustc will look for metadata in rlib files like it looks for metadata in dynamic libraries. This form of output is used to produce statically linked executables as well as staticlib outputs.

If an rlib file is being produced, then there are no restrictions on what format the upstream dependencies are available in. It is simply required that all upstream dependencies be available for reading metadata from. The reason for this is that rlib files do not contain any of their upstream dependencies. It wouldn't be very efficient for all rlib files to contain a copy of libstd.rlib! If an executable is being produced and the -C prefer-dynamic flag is not specified, then dependencies are first attempted to be found in the rlib format.

The runtime contains a system for directing logging expressions to a logging console and/or internal logging buffers. Logging can be enabled per module. Logging output is enabled by setting the RUST_LOG environment variable. RUST_LOG accepts a logging specification made up of a comma-separated list of paths, with optional log levels.
For each module containing log expressions, if RUST_LOG contains the path to that module or a parent of that module, then logs of the appropriate level will be output to the console. The path to a module consists of the crate name, any parent modules, then the module itself, all separated by double colons (::). The optional log level can be appended to the module path with an equals sign (=) followed by the log level, from 1 to 4, inclusive. Level 1 is the error level, 2 is warning, 3 info, and 4 debug. You can also use the symbolic constants error, warn, info, and debug. Any logs less than or equal to the specified level will be output. If not specified, then log level 4 is assumed. Debug messages can be omitted by passing --cfg ndebug to rustc.

As an example, to see all the logs generated by the compiler, you would set RUST_LOG to rustc, which is the crate name (as specified in its crate_id attribute). To narrow down the logs to just crate resolution, you would set it to rustc::metadata::creader. To see just error logging, use rustc=0. Note that when compiling source files that don't specify a crate name, the crate is given a default name that matches the source file, with the extension removed. In that case, to turn on logging for a program compiled from, e.g. helloworld.rs, RUST_LOG should be set to helloworld.

Rust provides several macros to log information. Here's a simple Rust program that demonstrates all four of them:

    #![feature(phase)]
    #[phase(plugin, link)] extern crate log;

    fn main() {
        error!("This is an error log")
        warn!("This is a warn log")
        info!("this is an info log")
        debug!("This is a debug log")
    }

These four log levels correspond to levels 1-4, as controlled by RUST_LOG:

    $ RUST_LOG=rust=3 ./rust
    This is an error log
    This is a warn log
    this is an info log
The essential problem that must be solved in making a fault-tolerant software system is therefore that of fault-isolation. Different programmers will write different modules, some modules will be correct, others will have errors. We do not want the errors in one module to adversely affect the behaviour of a module which does not have any errors. — Joe Armstrong

In our approach, all data is private to some process, and processes can only communicate through communications channels. Security, as used in this paper, is the property which guarantees that processes in a system cannot affect each other except by explicit communication. When security is absent, nothing which can be proven about a single module in isolation can be guaranteed to hold when that module is embedded in a system [...] — Robert Strom and Shaula Yemini

Concurrent and applicative programming complement each other. The ability to send messages on channels provides I/O without side effects, while the avoidance of shared data helps keep concurrent processes from colliding. — Rob Pike

Rust is not a particularly original language. It may however appear unusual by contemporary standards, as its design elements are drawn from a number of "historical" languages that have, with a few exceptions, fallen out of favour. Five prominent lineages contribute the most, though their influences have come and gone during the course of Rust's development:

The NIL (1981) and Hermes (1990) family. These languages were developed by Robert Strom, Shaula Yemini, David Bacon and others in their group at IBM Watson Research Center (Yorktown Heights, NY, USA).

The Erlang (1987) language, developed by Joe Armstrong, Robert Virding, Claes Wikström, Mike Williams and others in their group at the Ericsson Computer Science Laboratory (Älvsjö, Stockholm, Sweden).
The Sather (1990) language, developed by Stephen Omohundro, Chu-Cheow Lim, Heinz Schmidt and others in their group at The International Computer Science Institute of the University of California, Berkeley (Berkeley, CA, USA).

The Newsqueak (1988), Alef (1995), and Limbo (1996) family. These languages were developed by Rob Pike, Phil Winterbottom, Sean Dorward and others in their group at Bell Labs Computing Sciences Research Center (Murray Hill, NJ, USA).

The Napier (1985) and Napier88 (1988) family. These languages were developed by Malcolm Atkinson, Ron Morrison and others in their group at the University of St. Andrews (St. Andrews, Fife, UK).

Additional specific influences can be seen from the following languages:

Substitute definitions for the special Unicode productions are provided to the grammar verifier, restricted to ASCII range, when verifying the grammar in this document.

A crate is somewhat analogous to an assembly in the ECMA-335 CLI model, a library in the SML/NJ Compilation Manager, a unit in the Owens and Flatt module system, or a configuration in Mesa.

The "unit" value () is not a sentinel "null pointer" value for reference slots; the "unit" type is the implicit return type from functions otherwise lacking a return type, and can be used in other contexts (such as message-sending or type-parametric code) as a zero-size type.

A Rust uint is analogous to a C99 uintptr_t.

A Rust int is analogous to a C99 intptr_t.

struct types are analogous to struct types in C, the record types of the ML family, or the structure types of the Lisp family.

The enum type is analogous to a data constructor declaration in ML, or a pick ADT in Limbo.

This is an M:N scheduler, which is known to give suboptimal results for CPU-bound concurrency problems. In such cases, running with the same number of threads and tasks can yield better results.
Rust has M:N scheduling in order to support very large numbers of tasks in contexts where threads are too resource-intensive to use in large numbers. The cost of threads varies substantially per operating system, and is sometimes quite low, so this flexibility is not always worth exploiting.
http://doc.rust-lang.org/0.11.0/rust.html
CC-MAIN-2014-49
refinedweb
18,922
54.22
How do I get around the "inconsistent accessibility" error in C#? I need to pass a pointer to a node in a linked list to a method. When I do, I get Compiler Error CS0051.

Example: the following sample generates CS0051:

    // CS0051.cs
    public class A
    {
        // Try making B public, since F is public.
        // B is implicitly private here.
        class B
        {
        }

        public static void F(B b)  // CS0051
        {
        }

        public static void Main()
        {
        }
    }

That is a simple example. The actual program is a bit more complicated. I am actually passing a node of a linked list to the method:

    LinkedListNode<LevelNode> node

The method uses recursion, because the node is part of a huge linked-list-of-linked-lists structure that outputs an XML file. Either I have to find a way to use recursion without using methods, or I need to find a way to pass pointers to nodes, or actual nodes.

How do I get around the inconsistent accessibility error in C#?

You make your classes consistently accessible. Really, there is nothing to get around; it is a bug in your code. You have a public class with a public method that takes a parameter of a class internal (it is not implicitly 'private', it is implicitly 'internal') to your assembly. How could that possibly work? How could code outside of your assembly possibly create an object of type 'B' when it does not have access to the definition or interface of said class? It can't.

You need to think through your design a bit harder. Your interface for class 'A' defines a contract. It expects clients to work with an object of type 'B', but it doesn't want to tell them what a 'B' actually is.
http://forums.codeguru.com/showthread.php?496071-RESOLVED-Dynamically-remove-row-from-table-layout&goto=nextnewest
CC-MAIN-2016-18
refinedweb
320
71.55
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $iter = combo( 30..50 );
    while ( my @combo = $iter->() ) {
        print "@combo\n";
    }

    sub combo {
        my @list = @_;
        return sub { () } if ! @_;

        my (@position, @stop, $end_pos, $done);
        my ($by, $next) = (0, 1);

        return sub {
            return () if $done;
            if ( $next ) {
                $by++;
                return () if $by > @list;
                @position = (0 .. $by - 2, $by - 2);
                @stop     = @list - $by .. $#list;
                $end_pos  = $#position;
                $next     = undef;
            }
            my $cur = $end_pos;
            {
                if ( ++$position[ $cur ] > $stop[ $cur ] ) {
                    $position[ --$cur ]++;
                    redo if $position[ $cur ] > $stop[ $cur ];
                    my $new_pos = $position[ $cur ];
                    @position[ $cur .. $end_pos ] = $new_pos .. $new_pos + $by;
                }
            }
            if ( $position[0] == $stop[0] ) {
                $position[0] == @list ? $done = 1 : $next = 1;
            }
            return @list[ @position ];
        }
    }

Cheers - L~R
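For readers comparing approaches: the same "all sizes, smallest first" ordering can be sketched with Python's itertools in a few lines (an editorial comparison, not part of the original node):

```python
from itertools import chain, combinations

def combo(items):
    """Yield every non-empty combination of items, smallest sizes first,
    mirroring the iteration order of the Perl sub above."""
    items = list(items)
    return chain.from_iterable(
        combinations(items, k) for k in range(1, len(items) + 1))

for c in combo([1, 2, 3]):
    print(c)
```

Like the Perl closure, an empty input yields nothing at all.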
http://www.perlmonks.org/index.pl?node_id=128293
CC-MAIN-2014-10
refinedweb
120
74.29
Introduction

The __name__ special variable is used to check whether a file has been imported as a module or not, and to identify a function, class or module object by its __name__ attribute.

__name__ == '__main__'

The special variable __name__ is not set by the user. It is mostly used to check whether the module is being run by itself or is being run because an import was performed. To prevent your module from running certain parts of its code when it gets imported, check if __name__ == '__main__'.

Let module1.py be just one line long:

    import module2

And let's see what happens, depending on module2.py.

Situation 1, module2.py:

    print('hello')

Running module1.py will print hello. Running module2.py will print hello.

Situation 2, module2.py:

    if __name__ == '__main__':
        print('hello')

Running module1.py will print nothing. Running module2.py will print hello.

function_class_or_module.__name__

The special attribute __name__ of a function, class or module is a string containing its name.

    import os

    class C:
        pass

    def f(x):
        x += 2
        return x

    print(f)            # <function f at 0x029976B0>
    print(f.__name__)   # f

    print(C)            # <class '__main__.C'>
    print(C.__name__)   # C

    print(os)           # <module 'os' from '/spam/eggs/'>
    print(os.__name__)  # os

The __name__ attribute is not, however, the name of the variable which references the class, method or function; rather, it is the name given to it when defined.
    def f():
        pass

    print(f.__name__)  # f - as expected

    g = f
    print(g.__name__)  # f - even though the variable is named g, the function is still named f

This can be used, among other things, for debugging:

    def enter_exit_info(func):
        def wrapper(*arg, **kw):
            print('-- entering', func.__name__)
            res = func(*arg, **kw)
            print('-- exiting', func.__name__)
            return res
        return wrapper

    @enter_exit_info
    def f(x):
        print('In:', x)
        res = x + 2
        print('Out:', res)
        return res

    a = f(2)

    # Outputs:
    # -- entering f
    # In: 2
    # Out: 4
    # -- exiting f

Use in logging

When configuring the built-in logging functionality, a common pattern is to create a logger with the __name__ of the current module:

    logger = logging.getLogger(__name__)

This means that the fully-qualified name of the module will appear in the logs, making it easier to see where messages have come from.
https://pythonpedia.com/en/tutorial/1223/the---name---special-variable
CC-MAIN-2020-16
refinedweb
375
65.73
I’m trying to use a constant instead of a string literal in this piece of code:

    new InputStreamReader(new FileInputStream(file), "UTF-8")

"UTF-8" appears in the code rather often, and it would be much better to refer to some static final variable instead. Do you know where I can find such a variable in the JDK? BTW, on second thought, such constants are bad design: Public Static Literals … Are Not a Solution for Data Duplication.

In Java 1.7+, java.nio.charset.StandardCharsets defines constants for Charset, including UTF_8:

    import java.nio.charset.StandardCharsets;
    ...
    StandardCharsets.UTF_8.name();

For Android: minSdk 19.

Now I use the org.apache.commons.lang3.CharEncoding.UTF_8 constant from commons-lang.

The Google Guava library (which I’d highly recommend anyway, if you’re doing work in Java) has a Charsets class with static fields like Charsets.UTF_8, Charsets.UTF_16, etc. Since Java 7 you should just use java.nio.charset.StandardCharsets instead for comparable constants. Note that these constants aren’t strings, they’re actual Charset instances. All standard APIs that take a charset name also have an overload that takes a Charset object, which you should use instead.

In case this page comes up in someone's web search: as of Java 1.7 you can now use java.nio.charset.StandardCharsets to get access to constant definitions of standard charsets.

There are none (at least in the standard Java library). Character sets vary from platform to platform, so there isn’t a standard list of them in Java. There are some 3rd-party libraries which contain these constants, though. One of these is Guava (Google core libraries).

You can use the Charset.defaultCharset() API or the file.encoding property. But if you want your own constant, you’ll need to define it yourself.

This constant is available (among others, such as UTF-16 and US-ASCII) in the class org.apache.commons.codec.CharEncoding as well.
If you are using OkHttp for Java/Android, you can use the following constant:

    import com.squareup.okhttp.internal.Util;

    Util.UTF_8;        // Charset
    Util.UTF_8.name(); // String

Tags: java, string, utf-8
https://exceptionshub.com/where-to-get-utf-8-string-literal-in-java.html
CC-MAIN-2022-05
refinedweb
352
59.8
SQLite database in Qt

From Wiki. This example shows you how to create an SQLite database in Qt.

Article Metadata
Tested with SDK: 4.7 and later (also tested on 4.8)
Devices: Nokia 5800 XpressMusic, Nokia N900
Platform(s): Qt, Symbian S60 5th Edition
Keywords: QSqlDatabase, QSQLite, QSqlError
Created: tepaa (24 Nov 2009)
Reviewed: lilian.moraru (08 Nov 2012)
Last edited: hamishwillee (08 Nov 2012)

Preconditions

For Maemo SQLite development, the following packages must be installed:
- libqt4-sql
- libqt4-sql-sqlite
- libsqlite3-0
- libsqlite3-dev

Project file (.pro)

Add the following line to your .pro file:

    QT += sql

Header

    #include <QObject>
    #include <QSqlDatabase>
    #include <QSqlError>
    #include <QFile>
    #include <QDir>

    class DatabaseManager : public QObject
    {
    public:
        DatabaseManager(QObject *parent = 0);
        ~DatabaseManager();

    public:
        bool openDB();
        bool deleteDB();
        QSqlError lastError();

    private:
        QSqlDatabase db;
    };

Source

    bool DatabaseManager::openDB()
    {
        // Find the QSQLite driver
        db = QSqlDatabase::addDatabase("QSQLITE");

    #ifdef Q_OS_LINUX
        // NOTE: We have to store the database file in the user home folder in Linux
        QString path(QDir::home().path());
        path.append(QDir::separator()).append("my.db.sqlite");
        path = QDir::toNativeSeparators(path);
        db.setDatabaseName(path);
    #else
        // NOTE: File exists in the application private folder, in the Symbian Qt implementation
        db.setDatabaseName("my.db.sqlite");
    #endif

        // Open database
        return db.open();
    }

    QSqlError DatabaseManager::lastError()
    {
        // If opening the database has failed, the user can ask for the
        // error description with QSqlError::text()
        return db.lastError();
    }

    bool DatabaseManager::deleteDB()
    {
        // Close database
        db.close();

    #ifdef Q_OS_LINUX
        // NOTE: We have to store the database file in the user home folder in Linux
        QString path(QDir::home().path());
        path.append(QDir::separator()).append("my.db.sqlite");
        path = QDir::toNativeSeparators(path);
        return QFile::remove(path);
    #else
        // Remove the created database binary file
        return QFile::remove("my.db.sqlite");
    #endif
    }

Postconditions

The database binary file is created on the device disk in Symbian & Windows, and in the device memory in Maemo.

See also
- Creating a database table in Qt
- Inserting a row into a database in Qt
- Searching for data in a database in Qt
- Deleting data from a database in Qt
- Selecting data from a database without using SQL statements in Qt
- Using QDataWidgetMapper to show data from a database in Qt

Comments

Lilian.moraru: I think that it is important to point out that in order for this to work you should add a new line "QT += sql" in the project file (.pro). lilian.moraru 01:59, 7 November 2012 (EET)

Hamishwillee: Hi Lilian, thanks very much for pointing this out. I added the section. Note that this is a wiki, so if you see changes like this that are worth writing a comment on, then you might as well just add the section. BTW, did you test this on a recent version of Qt, and if so, which one? (This was tested on a Qt Tower prerelease, and if you know it works on a later version then that would be great.) hamishwillee 07:47, 7 November 2012 (EET)

Lilian.moraru: I tested it on Linux and Windows with Qt versions 4.7 and 4.8, and it works. lilian.moraru 11:11, 7 November 2012 (EET)

Hamishwillee: Thanks very much. I've added that information in the ArticleMetaData, along with your name as having reviewed it. Also fixed up links to the current Qt 4.7 docs. I think this is closed, so next time I come to this article I'll delete these comments. Thank you! hamishwillee 06:25, 8 November 2012 (EET)
http://developer.nokia.com/community/wiki/Creating_an_SQLite_database_in_Qt
CC-MAIN-2014-49
refinedweb
568
54.12
A scatter plot displays the relationship between two continuous variables. It shows how one variable affects the other. A scatter plot can display data in different types of plots, both 2D and 3D.

Seaborn is a widely-used library in Python for data visualization. It represents data in a straightforward way in the form of plots. Seaborn offers different ways of styling the plots, such as changing the color palette, with multiple options.

In Seaborn, we use the scatterplot() method to create plots:

    sns.scatterplot(x, y, data)

- sns: the Seaborn variable (the conventional import alias).
- x: the data value on the x-axis.
- y: the data value on the y-axis.
- data: a DataFrame containing variables and observations.

In the code snippet below, we visualize a data frame in pictorial form:

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # A list containing year values
    Year = [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008,
            2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016]

    # A list containing profit values
    Profit = [90, 65.8, 74, 65, 99.5, 19, 33.6, 23, 35, 12, 86, 34,
              867, 20, 70, 64, 44]

    # pd.DataFrame converts the lists to a DataFrame
    data_plot = pd.DataFrame({"Year": Year, "Profit": Profit})

    # the scatterplot function represents data in the form of dots
    sns.scatterplot(x="Year", y="Profit", data=data_plot)

In the code above, we:
- Create a list Year to hold the years.
- Create a list Profit to hold the profit figures.
- Build a DataFrame with Year as the x-axis variable and Profit as the y-axis variable.
- Use the sns.scatterplot() function from the Seaborn library to generate a scatter plot of the data.
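The palette styling mentioned above can be exercised through scatterplot()'s hue and palette parameters; a small sketch (the column names and values here are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the example runs headless

import pandas as pd
import seaborn as sns

df = pd.DataFrame({
    "x": [1, 2, 3, 4, 5, 6],
    "y": [2.0, 4.1, 1.3, 3.7, 2.8, 5.0],
    "group": ["a", "a", "b", "b", "c", "c"],
})

# Colour the dots by the "group" column, using a named palette
ax = sns.scatterplot(x="x", y="y", hue="group", palette="deep", data=df)
print(ax.get_xlabel(), ax.get_ylabel())
```

Seaborn labels the axes from the column names automatically, and adds a legend for the hue groups.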
https://www.educative.io/answers/how-to-draw-a-scatter-plot-using-seaborn-in-python
CC-MAIN-2022-33
refinedweb
274
60.92
Often times it is extremely convenient to compile certain assets, or data, straight into C code. This can be nice when creating code for someone else to use. For example, in the tigr graphics library by Richard Mitton, various shaders and font files are directly included as character arrays in the C source.

Another example:

    MEM_FILE fp;
    OpenFileInMemory( &fp, some_memory );

    char buffer[ 256 ];
    fscanf( &fp, "%s", buffer );

A complete program using this API:

    #include <stdio.h>
    #include "memfile.h"

    // generated by incbin.pl from poem.txt
    const unsigned char poem[] = {
        0x74,0x68,0x65,0x20,0x73,0x70,0x69,0x64,0x65,0x72,0x0d,0x0a,0x63,0x72,0x61,0x77,
        0x6c,0x65,0x64,0x20,0x6f,0x6e,0x0d,0x0a,0x75,0x70,0x20,0x74,0x68,0x65,0x20,0x77,
        0x65,0x62,0x0d,0x0a,0x61,0x6e,0x64,0x20,0x73,0x6d,0x69,0x6c,0x65,0x64,0x0d,0x0a,
        0x3a,0x29
    };
    const int poem_size = (int)sizeof(poem);

    int main( )
    {
        MEM_FILE fp;
        OpenFileInMemory( &fp, poem );

        // prints:
        // the spider crawled on up the web and smiled :)
        while ( fp.bytes_read < poem_size )
        {
            char buffer[ 256 ];
            fscanf( &fp, "%s", buffer );
            printf( "%s ", buffer );
        }
    }
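The generator side of this workflow (the role played by incbin.pl above) is easy to sketch. Here is a hypothetical Python equivalent that renders raw bytes as a C array; the function name and formatting are illustrative, not taken from the original tool:

```python
def to_c_array(name, data, per_line=16):
    """Render bytes as a C 'unsigned char' array plus a size constant."""
    rows = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        rows.append(",".join("0x%02x" % b for b in chunk))
    body = ",\n    ".join(rows)
    return ("const unsigned char %s[] = {\n    %s\n};\n"
            "const int %s_size = (int)sizeof(%s);\n" % (name, body, name, name))

# Feed it the file contents read in binary mode, e.g. open("poem.txt", "rb").read()
print(to_c_array("poem", b"the spider\r\n"))
```

The output can be pasted into (or piped to) a header file and compiled directly into the binary.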
http://www.shellsec.com/news/21189.html
CC-MAIN-2018-13
refinedweb
171
58.18
A set of helpers to make it easy to use Shelf on App Engine.

Example code for this package does not follow Dart conventions. The package is structured so it can be run directly using gcloud.

Using pub build

The easiest way to run the sample is to run pub build before you execute gcloud preview app run app.yaml. If you change the content of the web directory, you will have to rerun pub build.

Using pub serve

If you'd like to use pub serve during development, follow the instructions here. Note: you will still need to run pub build before you deploy.

- Added a port argument to the serve function. Requires appengine >= 0.3.1.
- Updated to the latest appengine package. Made DirectoryIndexServeMode an enum.
- Support the latest version of the shelf package. Require Dart 1.9 or greater.
- Fixed a bug that caused DirectoryIndexServeMode.SERVE mode to have no effect.
- Made assetHandler a function. Added the directoryIndexServeMode named parameter to the assetHandler method to enable auto-serving or redirecting to index.html files. Allow changing the default index file name to serve with indexFileName. Formatted the code.
- Updated example code to run on the latest configuration.

Add this to your package's pubspec.yaml file:

    dependencies:
      shelf_appengine: "^0.2.3"

You can install packages from the command line with pub:

    $ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

    import 'package:shelf_appengine/shelf_appengine.dart';

We analyzed this package, and provided a score, details, and suggestions below.

Detected platforms: other

Primary library: package:shelf_appengine/shelf_appengine.dart with components: io.
https://pub.dartlang.org/packages/shelf_appengine
CC-MAIN-2018-09
refinedweb
262
62.34
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.

Let's import PyOpenCL.

    import pyopencl as cl
    import numpy as np

This object defines some flags related to memory management on the device.

    mf = cl.mem_flags

We create an OpenCL context and a command queue.

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

Now, we initialize the NumPy array that will contain the fractal.

    size = 200
    iterations = 100
    col = np.empty((size, size), dtype=np.int32)

We allocate memory for this array on the GPU.

    col_buf = cl.Buffer(ctx, mf.WRITE_ONLY, col.nbytes)

We write the OpenCL kernel in a string. The mandelbrot function accepts pointers to the buffers as arguments, as well as the figure size. It updates the col buffer with the escape value in the fractal for each pixel.

    code = """
    __kernel void mandelbrot(int size,
                             int iterations,
                             global int *col)
    {
        // Get the row and column index of the current thread.
        int i = get_global_id(1);
        int j = get_global_id(0);
        int index = i * size + j;

        // Declare and initialize the variables.
        double cx, cy;
        double z0, z1, z0_tmp, z0_2, z1_2;
        cx = -2.0 + (double)j / size * 3;
        cy = -1.5 + (double)i / size * 3;

        // Main loop.
        z0 = z1 = 0.0;
        for (int n = 0; n < iterations; n++)
        {
            z0_2 = z0 * z0;
            z1_2 = z1 * z1;
            if (z0_2 + z1_2 <= 100)
            {
                // Need to update z0 and z1 in parallel.
                z0_tmp = z0_2 - z1_2 + cx;
                z1 = 2 * z0 * z1 + cy;
                z0 = z0_tmp;
                col[index] = n;
            }
            else break;
        }
    }
    """

Now, we compile the OpenCL program.

    prg = cl.Program(ctx, code).build()

We call the compiled function, passing the command queue, the grid size, the number of iterations, and the buffer as arguments.

    prg.mandelbrot(queue, col.shape, None, np.int32(size),
                   np.int32(iterations), col_buf).wait()

Once the function has completed, we copy the contents of the OpenCL buffer back to the NumPy array col.

    cl.enqueue_copy(queue, col, col_buf)

Let's display the fractal.
    import matplotlib.pyplot as plt
    %matplotlib inline

    plt.imshow(np.log(col), cmap=plt.cm.hot)
    plt.xticks([])
    plt.yticks([])

Let's evaluate the time taken by this function.

    %%timeit
    prg.mandelbrot(queue, col.shape, None, np.int32(size),
                   np.int32(iterations), col_buf).wait()
    cl.enqueue_copy(queue, col, col_buf)

You'll find all the explanations, figures, references, and much more in the book (to be released later this summer): IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
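As a cross-check for machines without an OpenCL device, the kernel's escape-time loop can be re-implemented with plain NumPy. This sketch uses a smaller grid to keep it fast, and follows the same update and the same z0_2 + z1_2 <= 100 test as the kernel above:

```python
import numpy as np

def mandelbrot_np(size=50, iterations=30):
    # Build the same c = cx + i*cy grid as the kernel
    j, i = np.meshgrid(np.arange(size), np.arange(size))
    c = (-2.0 + j / size * 3) + 1j * (-1.5 + i / size * 3)
    z = np.zeros_like(c)
    col = np.zeros(c.shape, dtype=np.int32)
    for n in range(iterations):
        alive = np.abs(z) <= 10      # |z|^2 <= 100, as in the kernel
        z[alive] = z[alive] ** 2 + c[alive]
        col[alive] = n               # record the last iteration survived
    return col

col_ref = mandelbrot_np()
print(col_ref.shape, int(col_ref.max()))
```

Points inside the set survive every iteration, so their escape value is iterations - 1; escaped points keep the index of the last iteration at which they were still bounded, matching the kernel's col[index] = n.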
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter05_hpc/08_opencl.ipynb
CC-MAIN-2018-13
refinedweb
410
67.65
(cross-posting back to python-dev to finalize discussions) 2009/4/2 Guido van Rossum <guido at python.org> [...] > > The problem you report: > >> > >> try: > >> ... > >> except OSWinError: > >> ... > >> except OSLinError: > >> ... > >> > > > > Would be solved if both OSWinError and OSLinError were always defined in > > both Linux and Windows Python. Programs could be written to catch both > > OSWinError and OSLinError, except that on Linux OSWinError would never > > actually be raised, and on Windows OSLinError would never occur. Problem > > solved. > > Yeah, but now you'd have to generate the list of exceptions (which > would be enormously long) based on the union of all errno codes in the > universe. > > Unless you only want to do it for some errno codes and not for others, > which sounds like asking for trouble. > > Also you need a naming scheme that works for all errnos and doesn't > require manual work. Frankly, the only scheme that I can think of that > could be automated would be something like OSError_ENAME. > > And, while OSError is built-in, I think these exceptions (because > there are so many) should not be built-in, and probably not even live > in the 'os' namespace -- the best place for them would be the errno > module, so errno.OSError_ENAME. > > > The downsides of this? I can only see memory, at the moment, but I might > be > > missing something. > > It's an enormous amount of work to make it happen across all > platforms. And it doesn't really solve an important problem. I partially agree. It will be a lot of work. I think the problem is valid, although not very important, I agree. > > > > Now just one final word why I think this matters. 
The currently correct > way > > to remove a directory tree and only ignore the error "it does not exist" > is: > > > > try: > > shutil.rmtree("dirname") > > except OSError, e: > > if errno.errorcode[e.errno] != 'ENOENT': > > raise > > > > However, only very experienced programmers will know to write that > correct > > code (apparently I am not experienced enought!). > > That doesn't strike me as correct at all, since it doesn't distinguish > between ENOENT being raised for some file deep down in the tree vs. > the root not existing. (This could happen if after you did > os.listdir() some other process deleted some file.) OK. Maybe in a generic case this could happen, although I'm sure this won't happen in my particular scenario. This is about a build system, and I am assuming there are no two concurrent builds (or else a lot of other things would fail anyway). > A better way might be > > try: > shutil.rmtree(<dir>) > except OSError: > if os.path.exists(<dir>): > raise Sure, this works, but at the cost of an extra system call. I think it's more elegant to check the errno (assuming the corner case you pointed out above is not an issue). > Though I don't know what you wish to happen of <dir> were a dangling > symlink. > > > What I am proposing is that the simpler correct code would be something > > like: > > > > try: > > shutil.rmtree("dirname") > > except OSNoEntryError: > > pass > > > > Much simpler, no? > > And wrong. > > > Right now, developers are tempted to write code like: > > > > shutil.rmtree("dirname", ignore_errors=True) > > > > Or: > > > > try: > > shutil.rmtree("dirname") > > except OSError: > > pass > > > > Both of which follow the error hiding anti-pattern [1]. > > > > [1] > > > > Thanks for reading this far. > > Thanks for not wasting any more of my time. OK, I won't waste more time. If this were an obvious improvement beyond doubt to most people, I would pursue it, but since it's not, I can live with it. Thanks anyway, -- Gustavo J. A. M. 
Carneiro
INESC Porto, Telecommunications and Multimedia Unit
"The universe is always one step beyond logic." -- Frank Herbert
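For readers landing here from a search: the check being debated can be written without the errno.errorcode lookup by comparing e.errno directly. A small sketch (the helper name is made up, and the caveat about files vanishing mid-walk still applies):

```python
import errno
import shutil

def rmtree_if_exists(path):
    """Remove a directory tree, ignoring only 'no such file or directory'."""
    try:
        shutil.rmtree(path)
    except OSError as e:
        if e.errno != errno.ENOENT:  # re-raise anything except ENOENT
            raise
        return False                 # the tree was already gone
    return True

print(rmtree_if_exists("no-such-directory-here"))
```

This keeps the error-hiding narrow: permission errors, busy mounts, and the like still propagate.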
https://mail.python.org/pipermail/python-dev/2009-April/088107.html
CC-MAIN-2017-30
refinedweb
620
74.69
vin-decoder-dart A VIN decoding and validation library for Dart. vin_decoder provides a simple decoding and validation library for Vehicle Identification Numbers (VINs) based on ISO 3779:2009 and World Manufacturer Identifiers (WMIs) based on ISO 3780:2009. The decoder can be used standalone in an offline mode (the default behaviour, as per earlier versions of the API), or can be further enriched by querying additional VIN information from the NHTSA Vehicle API, such as the precise make, model, and vehicle type in extended mode. Usage A simple usage example: import 'package:vin_decoder/vin_decoder.dart'; void main() async { var vin = VIN(number: 'WP0ZZZ99ZTS392124', extended: true); print('WMI: ${vin.wmi}'); print('VDS: ${vin.vds}'); print('VIS: ${vin.vis}'); print("Model year is " + vin.modelYear()); print("Serial number is " + vin.serialNumber()); print("Assembly plant is " + vin.assemblyPlant()); print("Manufacturer is " + vin.getManufacturer()); print("Year is " + vin.getYear().toString()); print("Region is " + vin.getRegion()); print("VIN string is " + vin.toString()); // The following calls are to the NHTSA DB, and are carried out asynchronously var make = await vin.getMakeAsync(); print("Make is ${make}"); var model = await vin.getModelAsync(); print("Model is ${model}"); var type = await vin.getVehicleTypeAsync(); print("Type is ${type}"); } which produces the following: WMI: WP0 VDS: ZZZ99Z VIS: TS392124 Model year is T Serial number is 92124 Assembly plant is S Manufacturer is Porsche Year is 1996 Region is EU VIN string is WP0ZZZ99ZTS392124 Make is Porsche Model is 911 Type is Passenger Car Features and bugs Please file feature requests and bugs at the issue tracker. License Licensed under the terms of the Apache 2.0 license, the full version of which can be found in the LICENSE file included in the distribution. Libraries - vin_decoder - Support for VIN parsing and validation.
https://pub.dev/documentation/vin_decoder/latest/
CC-MAIN-2020-05
refinedweb
291
50.73
capri

Capri Library :: Classic object-oriented JavaScript

npm install capri

Rotorz Limited ( )

The Capri project (and all derivatives) are licensed under the terms of the BSD license or the GNU General Public License (GPL) Version 2. The BSD license is recommended for most projects because it imposes fewer restrictions than the GPL license. You are free to select which license to use. Please ensure that any copyright information is retained in source files (even when in minified form). Please refer to the relevant license text:
- BSD-LICENSE.txt
- GPL-LICENSE.txt

Overview

The fundamental purpose of Capri is to provide the ability to define and extend classes and namespaces in a way that is clean and easy to understand. The secondary purpose of Capri is to provide a reusable toolset that caters to common needs. Capri can be used as a module for node.js applications, or selected features can be built and minified for use in web applications.

Capri also provides its own flavour of modules, which can be used if desired. The intention of Capri-style modules is to allow developers to write modules that can be consumed by both server and client applications with minimal (if any) change. It is useful to note that a cleaner syntax can be used when developing modules for ECMAScript 5 compliant platforms (like node.js or modern web browsers). Special property names must otherwise be placed within quotes for compatibility with all major browsers.

The project is largely experimental; it would be interesting to see if fellow developers find this of use!

IMPORTANT: CURRENT DOCUMENTATION IS INVALID FOR THIS RELEASE!

How to Contribute?

We have chosen to host our project using GitHub. Please read the contribution agreement before making any contributions. All bug reports and suggestions should be contributed by creating a new issue.
Please follow these steps if you would like to submit a bug fix or feature:
- Create a new issue to discuss your ideas (optional).
- Fork the Capri repository.
- Take a look at how we format our code and read our formatting guide.
- Hack away!
- Create test cases to ensure that contributed source code works properly and is of a high standard.
- Create a pull request.

Contribution Agreement

Capri is licensed under the BSD and GPL licenses (find out more). To be in the best position to enforce these licenses the copyright status of Capri.

Disclaimer

Linked content in the above text is for convenience purposes only and does not contribute to the agreement in any way. Linked content should be digested at the reader's discretion.
https://www.npmjs.org/package/capri
CC-MAIN-2014-10
refinedweb
427
64.71
Introduction: How to Use the Arduino WeMos D1 WiFi UNO ESP8266 IoT IDE-Compatible Board by Using Blynk

Arduino WeMos D1 WiFi UNO ESP8266 IoT IDE-compatible board. Description: the WeMos D1 is a WiFi development board based on the ESP8266 12E. It functions much like a NodeMCU, except that the hardware is built to resemble an Arduino UNO. The D1 board can be configured to work in the Arduino environment using the Boards Manager.

Specification:
- Microcontroller: ESP-8266EX
- Operating Voltage: 3.3V
- Digital I/O Pins: 11
- Analog Input Pins: 1
- Clock Speed: 80MHz/160MHz
- Flash: 4M bytes

Step 1: Item Preparation

In this tutorial, we'll use the smartphone application "Blynk" to control the Arduino WeMos D1 (ESP8266) with an LED traffic light module. Before we begin, prepare all the items needed:
- Breadboard
- Arduino WeMos D1 WiFi UNO ESP8266
- Jumper wires, male to male
- LED traffic light module (you can also use plain LEDs)
- Micro USB cable
- Smartphone (you need to download "Blynk" from the Play Store/iStore)

Step 2: Pin Connection

Follow the connection as shown above.

Step 3: Board Installation

Next, open the Arduino IDE and go to [File => Preferences]. A dialog box appears. In this box, an additional Boards Manager URLs text box is present. Copy and paste the following URL into the box and click OK to download the packages.

Step 4: Find It in the Boards Manager

Next, go to [Tools => Board => Boards Manager] in your Arduino IDE. The Boards Manager window appears as below. Scroll down the boards in the Boards Manager to select ESP8266 from the list of available boards. Click on Install to begin the installation.

Step 5: Select Board

Next, before uploading your first program, select the "WeMos D1 R1" board type from the [Tools => Boards] section in your Arduino IDE.

Step 6: Example Code

To get the example code for Blynk, you need to download the library from the Blynk website. Follow these steps:
- Select "Download Blynk Library".
- Select "Blynk_Release_v0.5.4.zip".
- Extract the files and copy both folders (libraries, tools).
- Open the Arduino IDE, go to [Files => Preferences] and find the folder shown under "Sketchbook location".
- Open that folder and paste both of the folders you've copied.

Then, open your Arduino IDE and go to [Files => Examples => Blynk => Boards Wifi => Standalone] for the example code.

Step 7: Blynk Setup
Next, you need to set up "Blynk" on your smartphone. Follow these steps:
- Download "Blynk" from the Play Store/App Store.
- Go to "New Project" and enter your project name (if needed).
- Choose the device "WeMos D1".
- Set the connection type to "Wifi", then tap "Create". (After creating the project you will receive an Auth Token by email.)
- Slide to the left to open the "Widget Box".
- Select "Button" to add a button.
- Touch the button to open "Button Settings".
- Select [Output => Digital => D2, D3, D4] to choose the pin connection.
- Set the mode to "Switch".

Step 8: Uploading
Now check your email inbox and copy the Auth Token code. Insert the Auth Token, network name, and password into your program. Then upload the code to your WeMos D1 (ESP8266) through the micro USB cable. Make sure you use the right port by selecting it at [Tools => Port].

Step 9: Try Out the Blynk Button
Select the play button at the upper right side and turn on the pin button.

Step 10: Finish
Now it's working! The Blynk pin buttons work as switches.

2 People Made This Project!
- Recep UYSAL made it!

Recommendations

9 Comments

8 months ago
You need to download this driver for your USB micro "CH341SER"

8 months ago
I followed these instructions but I'm receiving an error. It looks like it can't find a specific file.
"Arduino: 1.8.13 (Windows 8.1), Board: "WeMos D1 R1, 80 MHz, Flash, Legacy (new can return nullptr), All SSL ciphers (most compatible), 4MB (FS:2MB OTA:~1019KB), v2 Lower Memory, Disabled, None, Only Sketch, 921600"

ESP8266_Standalone:37:32: fatal error: BlynkSimpleEsp8266.h: No such file or directory
#include <BlynkSimpleEsp8266.h>
^
compilation terminated.
exit status 1
BlynkSimpleEsp8266.h: No such file or directory

This report would have more information with "Show verbose output during compilation" option enabled in File -> Preferences."

10 months ago
For those scratching their heads looking for the WeMos board definition: it's not in alphabetical order, look for Lolin.

1 year ago
Reset it if your Blynk app and hardware are not connected to the same network.

Question 1 year ago on Step 10
How do you get the file? It just sends me to the page and nothing else.

1 year ago
Hello, how do I blink the built-in LED through this? The app only shows the digital pins.

Question 1 year ago
Hello, without the Wi-Fi feature... can I use it as a Uno R3? Will it work with the same code as the Uno? I want to build a CNC with a "CNC shield" with it...

2 years ago on Step 10
Thanks, very good!

Question 2 years ago
https://www.instructables.com/Arduino-WeMos-D1-WiFi-UNO-ESP-8266-IoT-IDE-Compati/
Code First Stored Procedures with Multiple Results

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}

18 Responses to "Code First Stored Procedures with Multiple Results"

Very nice, thank you
Justin September 3,

- ADO.NET Blog - Site Home - MSDN Blogs September 25,
| MSDN Blogs September 25, 2012

Can you give a rough estimate as to when there will be a release to natively support code first stored procedures? I'm in the beginning stages of building a large enterprise product and hoping that I don't have to hand code stored proc wrappers for long. Thanks for all your hard work on this project!
Grady Dycus October 4, 2012

It will be available in EF6, which will be RTM'd sometime next year. We haven't started implementing the feature yet but it's close to the top of the list now. I'd expect us to start working on it in the next couple of months. Once we have it implemented you could try it out using our nightly builds.
romiller.com October 4, 2012

Just realized that what I asked could be confusing, so to be a bit more specific: I'm looking forward to native support for stored procedures, but I'm specifically hoping that "Reverse Engineer Code First" will generate the stored proc wrappers for me.
Grady Dycus October 4, 2012

Hi Rowan, nice and helpful article. I think it is important to note that each entity (each blog/post) doesn't get Translated or added to the DbSet.Local collection until the call is made to access one of its properties, such as in the foreach loops you have above. Meaning that if you were to remove those foreach() {Console.WriteLine(...)} at the end of it, you have no entities in the Local collection of the DbSets.
Chris.
- Chris Amelinckx January 10, 2013

The database structure is similar to Northwind's Employee and Territories, but I didn't see how you configure the many-to-many mapping. How does it work, and could you explain more how to declare the many-to-many mapping? Thanks.
Gora February 13, 2013

Hello Rowan, I wonder if the Alpha already has a way to try stored procedures. Thank you for this example.
Richard Valdivieso January 10, 2013

I tried your example and it worked, but when I use a procedure with a parameter it doesn't. I'm doing this:
......
var cmd = db.Database.Connection.CreateCommand();
cmd.CommandText = "[dbseg].[test]";
cmd.Parameters.Add(new SqlParameter("@test", "this is a test"));
try
{
    // Run the sproc
    db.Database.Connection.Open();
    var reader = cmd.ExecuteReader();
......
Lourenço February 7, 2013

I am able to successfully do this with parameters on a stored procedure; two things to try:
1. Set the command type to stored procedure by doing cmd.CommandType = System.Data.CommandType.StoredProcedure;
2. Remove the @ from the parameter name
Try them separately. I'm particularly curious to see if (1) solves it, because I seem to recall similar behavior when I first tried it.
Chris Amelinckx February 7, 2013

It worked! But with the @test instead of just test, so it was solution 1 that solved it. I tried solution 2 alone and together with cmd.CommandType = System.Data.CommandType.StoredProcedure, but with no good results; it only works with the @. Thanks for the help.
Lourenço February 7, 2013

I need to read many-to-many entities from a procedure. Can you help me?
Anonymous April 12, 2013

Hi Steve, how can I configure which database to use? And where is the data saved by default? I have installed SQL Server CE 4.0.
I added a database called SimpleTest.sdf to the App_Data folder and then added a connection string to my Web.config with the same name as the DbContext that was scaffolded. When I run the application, everything works, but my database isn't used. That means it's storing the data somewhere else and I don't know where or how to change this. I guess I'm just overlooking something very simple here
Etiene April 23, 2013

NhatNguyen July 3, 2013

Hi, this solution does not work for me. I have tried the following things:
1) I downloaded EF 4.1 and installed it successfully, after which I added the EntityFramework dll to my project.
2) After that, when I tried to run my code I got the following error while reading from the DataReader: 'GetDepartmentDivision' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly. Near simple identifier, line 1, column 1.

My code is like this ...

using (var objPractice = new PracticeEntities())
{
    System.Data.Common.DbDataReader sqlReader;
    var cmd = objPractice.Connection.CreateCommand();
    cmd.CommandText = "GetDepartmentDivision";
    if (cmd.Connection.State == ConnectionState.Closed)
        cmd.Connection.Open();
    sqlReader = cmd.ExecuteReader();
    var depObj = ((IObjectContextAdapter)objPractice).ObjectContext.Translate(reader, "Department", MergeOption.AppendOnly);
    reader.NextResult();
    var divObj = ((IObjectContextAdapter)objPractice).ObjectContext.Translate(reader, "Division", MergeOption.AppendOnly);

I do not know what I am missing?
Some people are saying that your code is working, but I have tried a practical example and it's not working ...
shailesh July 24, 2013

I have an entity model as below:

public Entity { //Properties }
public Activity : Entity
{
    //Properties
    public Action Action { get; set; }
    public ICollection Commands { get; set; }
}
public Action : Entity { //Properties }
public Command : Entity { //Properties }

I have a stored procedure that returns a list of Activities along with the related Actions and Commands of each Activity. How can I map the result sets of the aforementioned stored procedure in EF5 or EF6 Code First? I already used the ObjectContext.Translate method explained in the MSDN article "Stored Procedures with Multiple Result Sets". But the problem is that all my entities are derived from the Entity class, and that's why, when I use ObjectContext.Translate for Activity, I can't use it for Action and Command: it maps the EntitySetName of the Entity class for Activity, so if I use it for either Action or Command it'll raise an error. How can I manage that?
Sharareh December 14, 2013

Hi, this method is great, but does it work when returning multiple results, like select blogs left join table1 .. left join table2 ...? I use this method for all my stored procedures, but the first call to the method always takes over 1000ms. Do you have a solution? Thanks
YINZHU January 12, 2014
http://romiller.com/2012/08/15/code-first-stored-procedures-with-multiple-results/
On Thu, Sep 24, 2009 at 12:45:20PM +0200, Michael Niedermayer wrote:
> On Thu, Sep 24, 2009 at 12:03:55PM +0200, Reimar Döffinger wrote:
> > To be applied after my three other patches related to this.
> > I think there is also a roundup bug about this open, I don't remember
> > its number though.
>
> ok

Applied, but I missed two more parts. I think that ff_realloc_static can be removed without a major version bump, since AFAICT it was neither public nor used by libavformat, so there should be no issue removing it, right? So I suggest this (do you want it applied as two parts?):

Index: libavcodec/bitstream.c
===================================================================
--- libavcodec/bitstream.c (revision 20013)
+++ libavcodec/bitstream.c (working copy)
@@ -38,25 +38,6 @@
     8, 9,10,11,12,13,14,15
 };

-#if LIBAVCODEC_VERSION_MAJOR < 53
-/**
- * Same as av_mallocz_static(), but does a realloc.
- *
- * @param[in] ptr The block of memory to reallocate.
- * @param[in] size The requested size.
- * @return Block of memory of requested size.
- * @deprecated. Code which uses ff_realloc_static is broken/misdesigned
- * and should correctly use static arrays
- */
-attribute_deprecated av_alloc_size(2)
-static void *ff_realloc_static(void *ptr, unsigned int size);
-
-static void *ff_realloc_static(void *ptr, unsigned int size)
-{
-    return av_realloc(ptr, size);
-}
-#endif
-
 void align_put_bits(PutBitContext *s)
 {
 #ifdef ALT_BITSTREAM_WRITER
@@ -124,13 +105,9 @@
         index = vlc->table_size;
         vlc->table_size += size;
         if (vlc->table_size > vlc->table_allocated) {
-            if(use_static>1)
+            if(use_static)
                 abort(); //cant do anything, init_vlc() is used with too little memory
             vlc->table_allocated += (1 << vlc->bits);
-            if(use_static)
-                vlc->table = ff_realloc_static(vlc->table,
-                                sizeof(VLC_TYPE) * 2 * vlc->table_allocated);
-            else
             vlc->table = av_realloc(vlc->table,
                             sizeof(VLC_TYPE) * 2 * vlc->table_allocated);
             if (!vlc->table)
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-September/079973.html
Image manipulation in Python

Someone asked me about determining whether an image was "portrait" or "landscape" mode from a script. I've long had a script for automatically rescaling and rotating images, using ImageMagick under the hood and adjusting automatically for aspect ratio. But the scripts are kind of a mess -- I've been using them for over a decade, and they started life as a csh script back in the pnmscale days, gradually added ImageMagick and jpegtran support and eventually got translated to (not very good) Python.

I've had it in the back of my head that I should rewrite this stuff in cleaner Python using the ImageMagick bindings, rather than calling its commandline tools. So the question today spurred me to look into that. I found that ImageMagick isn't the way to go, but PIL would be a fine solution for most of what I need.

ImageMagick: undocumented and inconstant

Ubuntu has a python-pythonmagick package, which I installed. Unfortunately, it has no documentation, and there seems to be no web documentation either. If you search for it, you find a few other people asking where the documentation is.

Using things like help(PythonMagick) and help(PythonMagick.Image), you can ferret out a few details, like how to get an image's size:

import PythonMagick

filename = 'img001.jpg'
img = PythonMagick.Image(filename)
size = img.size()
print filename, "is", size.width(), "x", size.height()

Great. Now what if you want to rescale it to some other size? Web searching found examples of that, but it doesn't work, as illustrated here:

>>> img.scale('1024x768')
>>> img.size().height()
640

The built-in help was no help:

>>> help(img.scale)
Help on method scale:

scale(...) method of PythonMagick.Image instance
    scale( (Image)arg1, (Geometry)arg2) -> None :
        C++ signature :
            void scale(Magick::Image {lvalue},Magick::Geometry)

So what does it want for (Geometry)? Strings don't seem to work, 2-tuples don't work, and there's no Geometry object in PythonMagick.
By this time I was tired of guesswork. Can the Python Imaging Library do better?

PIL -- the Python Imaging Library

PIL, happily, does have documentation. So it was easy to figure out how to get an image's size:

from PIL import Image

im = Image.open(filename)
w = im.size[0]
h = im.size[1]
print filename, "is", w, "x", h

It was equally easy to scale it to half its original size, then write it to a file:

newim = im.resize((w/2, h/2))
newim.save("small-" + filename)

Reading EXIF

Wow, that's great! How about EXIF -- can you read that? Yes, PIL has a module for that too:

import PIL.ExifTags

exif = im._getexif()
for tag, value in exif.items():
    decoded = PIL.ExifTags.TAGS.get(tag, tag)
    print decoded, '->', value

There are other ways to read EXIF -- pyexiv2 seems highly regarded. It has documentation, a tutorial, and apparently it can even write EXIF tags. If neither PIL nor pyexiv2 meets your needs, here's a Stack Overflow thread on other Python EXIF solutions, and here's another discussion of Python EXIF. But since you probably already have PIL, it's certainly an easy way to get started.

What about the query that started all this: how to find out whether an image is portrait or landscape? Well, the most important thing is the image dimensions themselves -- whether im.size[0] > im.size[1]. But sometimes you want to know what the camera's orientation sensor thought. For that, you can use this code snippet:

for tag, value in exif.items():
    decoded = PIL.ExifTags.TAGS.get(tag, tag)
    if decoded == 'Orientation':
        print decoded, ":", value

Then compare the number you get to this Exif Orientation table. Normal landscape-mode photos will be 1.

Given all this, have I actually rewritten resizeall and rotateall using PIL? Why, no! I'll put it on my to-do list, honest. But since the scripts are actually working fine (just don't look at the code), I'll leave them be for now.

[ 15:33 Mar 16, 2012 More programming | permalink to this entry | comments ]
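The portrait/landscape decision described above can be captured in a tiny, PIL-free helper. This is my own sketch (the function name is invented, not from the original resizeall/rotateall scripts), combining the dimension check with the EXIF Orientation value, where 5 through 8 mean the stored pixels are rotated a quarter turn:

```python
# Hypothetical helper, not part of the original scripts.
# EXIF Orientation values 5-8 indicate the stored image is rotated
# 90 degrees, so the displayed width and height are swapped
# relative to the stored pixels.
ROTATED_90 = {5, 6, 7, 8}

def is_portrait(width, height, orientation=1):
    """Return True if the image displays taller than wide."""
    if orientation in ROTATED_90:
        width, height = height, width
    return height > width

print(is_portrait(640, 480))     # landscape photo -> False
print(is_portrait(640, 480, 6))  # camera held vertically -> True
```

Feed it im.size[0], im.size[1], and the Orientation value pulled from the EXIF loop above, and you get the answer the camera intended rather than just the stored pixel shape.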
http://shallowsky.com/blog/tags/imagemagick/
man Toor wrote:
> What is the proper way of using multi threaded application within WebKit.
>
> for example:
>
> # define a class that subclasses Thread
> class showTime(threading.Thread):
>
>     # define instance constructor
>     def __init__(self, interval, id):
>         self.w = interval
>         self.id = id
>         threading.Thread.__init__(self)  # we are required to do this
>
>     # define run method (body of the thread)
>     def run(self):
>         time.sleep(self.w)
>         print "thread", self.id, "done at", time.ctime(time.time())
>
> how do i start the threads so that it can finish successfully and i
> can shutdown my server and can restart it.

Hi Salman,

here is a simple example servlet. The thread is started when the servlet is first called and stopped when the AppServer stops.

---------------------------------------------------------------
from ExamplePage import ExamplePage
from time import *
from threading import Thread, Event

class MyThread(Thread):

    def __init__(self, interval, id):
        Thread.__init__(self)
        self.interval = interval
        self.id = id
        self.time = None
        self.stop_event = Event()

    def run(self):
        while not self.stop_event.isSet():
            self.time = ctime(time())
            print "thread", self.id, "done at", self.time
            self.stop_event.wait(self.interval)

    def stop(self):
        print "thread", self.id, "shutdown"
        self.stop_event.set()
        self.join(1)

my_thread = MyThread(2, 4711)

from WebKit.AppServer import globalAppServer
globalAppServer.application().addShutDownHandler(my_thread.stop)
my_thread.start()

class ShowTime(ExamplePage):

    def writeContent(self):
        self.write('<h1>Thread Test</h1>')
        self.write('<h2>Last executed at: ', my_thread.time, '</h2>')
---------------------------------------------------------------

Hope that helps.

-- Christoph

So, what might you all recommend as a starting point? Which one? Preferably, it would be nice if it was dead simple to begin with (like PHP's method of gathering form data -- just call the $_GET array with the name of the form field) ...
but has some ability to turn it into advanced stuff if need be. Or something like the Rails or CakePHP system. The system would need to be usable with stuff like xmlhttprequests from javascript (though I don't see why any of them wouldn't be). Also, if it matters, adding in compression and custom encryption (and maybe even custom protocols later) might be nice. By compression, I mean sending a compressed xmlhttprequest to the script and having the script process the compressed httprequest data. However, I do not want this feature now (too complicated to learn at the moment), but would like to be able to move to it later.

So, whichever one you recommend, could you include where the setup/install procedures are and a little expanded hello world tutorial that shows how the system works.

----

On another note, someone mentioned that python compiles the script at runtime? Is this different than other scripting languages? PHP, Perl, Javascript?

-------------- Original message ----------------------
From: Christoph Zwerschke <cito@...>

> Ar18@... wrote:
> > BTW, one of the comments made was about some extra module/library
> > being needed for form handling. Does that mean Webware does not
> > handle this part?
>
> This can be handled by Webware plug-ins (e.g. FormKit, FunFormKit) or by
> other external libraries (e.g. FormEncode).
>
> Unfortunately, the plug-ins are not part of the Webware package,
> though I think at least one of them should be included if the authors
> permit. I'd also like to put SimpleHTMLGen which is part of FunFormKit
> into Webware's WebUtil library, and use these tools in some of the
> example servlets. Any opinions and recommendations?
>
> -- Christoph
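The start/stop pattern in Christoph's servlet, a Thread paired with an Event that doubles as an interruptible sleep, also works outside Webware. Here is a minimal, framework-free sketch using the modern threading API (Event.is_set rather than isSet); the Worker class and its numbers are illustrative, not from the thread above:

```python
import threading
import time

class Worker(threading.Thread):
    """A thread that does periodic work until told to stop."""

    def __init__(self, interval):
        threading.Thread.__init__(self)  # required, as in the question above
        self.interval = interval
        self.ticks = 0
        self._stop_event = threading.Event()

    def run(self):
        # Loop until stop() is called; Event.wait doubles as an
        # interruptible sleep, so shutdown is prompt.
        while not self._stop_event.is_set():
            self.ticks += 1
            self._stop_event.wait(self.interval)

    def stop(self):
        self._stop_event.set()
        self.join()  # wait for run() to return

w = Worker(0.01)
w.start()            # begins executing run() in its own thread
time.sleep(0.05)
w.stop()             # clean shutdown; the process can exit or restart safely
print(w.ticks >= 1, w.is_alive())  # -> True False
```

In the Webware case, registering my_thread.stop with addShutDownHandler is what plays the role of the explicit stop() call here.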
https://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200701&viewday=13
Hi Claudius,

>> - Crypto / x-crypt module (we can link to the spec at
>>)
>
> I am not sure, but I don't think this is the implementation of that
> spec; maybe check with Claudius. I think this is actually an early
> alpha version, before the spec was written and the namespaces etc
> changed.

For the eXist-db documentation update I would like to be able to describe the relationship between the x-crypt module in eXist-db trunk and the EXPath Crypto spec (10 August 2011 edition, at). I recall that the x-crypt module predated the spec, and I know the x-crypt module works well (I use it on history.state.gov). But could you confirm that there is not an eXist-db implementation that corresponds to the 10 August 2011 version of the spec somewhere else? Also, if you have any other notes about either the x-crypt module or the EXPath Crypto spec's future direction, this info would be helpful to include.

Thanks!

Joe
http://sourceforge.net/p/exist/mailman/message/29914360/
by Prempeh-Gyan

GitHub Readme.md

This section contains the prerequisites to run the application and how to use the API.

To deploy this project on Heroku, click the button below:

Wake the Dyno
It takes between 15 and 20 seconds to wake the Dyno, so you will need a little patience.

Required
Maven 3.3+
JDK 8+

Optional
Postman - for testing the API endpoint

Get the project from the source repository
git clone

To run the project, first navigate into the source directory
cd WebScraper
and execute the following command:
mvn spring-boot:run
That's all you need to get it started. The application starts the server instance on port 8080. Open the link in your browser and start using it.

The main functionality of this API is to take a given url, navigate to this url, crawl the page and extract all <a> tags on the page. Using the href attribute of the tags, the urls defined in the tags are extracted and processed for presentation. The urls are grouped using their host names. A list of host name - frequency pairs is then returned in JSON format.

This is the API endpoint from which you send requests. Note that when you do a GET request from the browser you will have to follow the API endpoint with ?url=someActualURL
The url is the parameter you are passing to the web service for processing.
Hence an example of a full request to the API endpoint will be

You can also make a POST request to the same API endpoint, in which case you will have to provide the url parameter as form-data.

Below is the code snippet for the web service of the API endpoint:

file: src/main/java/com/prempeh/webscraper/service/WebScrapingService.java
file: src/main/java/com/prempeh/webscraper/serviceImpl/WebScrapingServiceImpl.java

package com.prempeh.webscraper.service;

import java.io.IOException;
import java.util.Map;

/**
 * This is the WebScrapingService interface defining the action required to
 * retrieve a summary of the links on a web page
 *
 * @author Prince Prempeh Gyan
 * @version 1.0 <br/>
 *          Date: 19/10/2017
 */
public interface WebScrapingService {

    Map<String, Long> getSummaryOfLinksOnPage(String url) throws IOException;
}

package com.prempeh.webscraper.serviceImpl;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
import org.springframework.stereotype.Service;

import com.prempeh.webscraper.service.WebScrapingService;

import lombok.extern.slf4j.Slf4j;

/**
 * This WebScrapingService implementation takes a url, uses Jsoup to connect to
 * the page, extract links from the page and return a list of all the links on
 * the page
 *
 * @author Prince Prempeh Gyan
 * @version 1.1 <br/>
 *          Date: 19/10/2017
 */
@Service
@Slf4j
public class WebScrapingServiceImpl implements WebScrapingService {

    @Override
    public Map<String, Long> getSummaryOfLinksOnPage(String url) throws IOException {

        List<String> linksExtractedOnPage = new ArrayList<>();

        log.info("Jsoup is connecting to : {}", url);

        /**
         * When Jsoup connects to the url, it parses the resource as an HTML
         * Document and saves it into the "webPage" variable for use
         */
        Document webPage = Jsoup.connect(url).get();

        log.info("Extracting anchor tags from {}", url);

        /**
         * The Elements map to actual elements in the HTML document saved in the
         * "webPage" variable. By calling the "select" method on the "webPage"
         * variable, the links on the page that appear in the anchor tag can be
         * extracted by passing "a[href]" as an argument to method "select". The
         * result is a list of anchor tag elements saved in the "linksOnPage"
         * variable
         */
        Elements linksOnPage = webPage.select("a[href]");

        /**
         * Once the elements have been extracted, they need to be processed into
         * actual URIs. Java 8s streams and lambdas are used here to enhance
         * performance. For the purposes of this application only the host names
         * of the URIs are of interest at this level, the "schemes" and the
         * "paths" are not necessary. All extracted host names are added to the
         * "linksExtractedOnPage" variable. At this point the list may contain
         * duplicate host names.
         */
        linksOnPage.parallelStream().forEach(linkOnPage -> {
            try {
                URI uri = new URI(linkOnPage.attr("abs:href"));
                String link = uri.getHost();
                log.info("Tag <a href= '{}'>", uri);
                log.info("HostName = {}", link);
                linksExtractedOnPage.add(link);
            } catch (URISyntaxException e) {
                System.err.println("URISyntaxException : " + "url = " + linkOnPage.attr("abs:href")
                        + "\nMessage = " + e.getMessage());
                System.out.println("URISyntaxException : " + "url = " + linkOnPage.attr("abs:href")
                        + "\nMessage = " + e.getMessage());
            }
        });

        log.info("Returning list of HostNames to caller");
        return getSummary(linksExtractedOnPage);
    }

    private Map<String, Long> getSummary(List<String> linksOnPage) {

        log.info("List of HostNames recieved");
        log.info("Removing empty HostNames from list");
        log.info("Grouping identical HostNames and counting");
        log.info("Creating a Map with unique HostNames as key and their frequencies as values");

        /**
         * Since Maps have unique keys, while the stream is being processed the
         * filtered host names which have no empty or null elements are grouped
         * into a Map of Key Value pairs, where the host names are saved as Keys
         * and their corresponding frequencies are saved as values. The result
         * is then returned to the caller.
         */
        Map<String, Long> SummaryOfLinksOnPage = linksOnPage.parallelStream()
                .filter(link -> (link != null && !link.isEmpty()))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

        return SummaryOfLinksOnPage;
    }
}

Using the Browser for sending requests
Sending GET Request through Ajax by button click
Sending POST Request to MVC Controller
Sending GET Request through Browser with url parameter
Using Postman to send GET Request
Using Postman to send POST Request
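The heart of the summary step, turning raw link URLs into a host-name frequency map, needs neither Jsoup nor Spring. This standalone sketch (the HostSummary class name is mine, not from the project) reproduces it with only java.net.URI and the same groupingBy/counting collector:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Standalone sketch (illustrative, not part of the WebScraper project):
// extract host names from a list of absolute URLs and count how often
// each host occurs, mirroring the service's getSummary step.
public class HostSummary {

    public static Map<String, Long> summarize(List<String> urls) {
        return urls.stream()
                .map(HostSummary::hostOf)
                .filter(host -> host != null && !host.isEmpty())
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    private static String hostOf(String url) {
        try {
            return new URI(url).getHost(); // null for relative URLs
        } catch (URISyntaxException e) {
            return null; // skip unparseable links, as the service does
        }
    }

    public static void main(String[] args) {
        Map<String, Long> summary = summarize(List.of(
                "https://example.com/a",
                "https://example.com/b",
                "https://other.org/",
                "not a url at all",
                "/relative/path"));
        System.out.println(summary.get("example.com")); // 2
        System.out.println(summary.get("other.org"));   // 1
    }
}
```

The real service adds Jsoup to harvest the href attributes and a parallel stream plus logging; the grouping logic is the same.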
https://elements.heroku.com/buttons/prempeh-gyan/webscraper
Project 7: Object-Oriented Simulation Design

This is the third project on elephant population simulation. In the first project, you developed the overall simulation and used it to figure out a single parameter: the percentage of female elephants to dart each year. In the second project, you explored how to optimize one or more parameters of a simulation automatically. In this project, you are using the same content/concept but redesigning the code to use classes for the Elephant and Simulation parts of the project. You should find that using classes makes the coding process simpler and avoids some of the challenges of the prior two weeks.

Tasks

- Write the Simulation constructor, giving its parameters default values (e.g. percDart = 0.425). Assign each parameter, such as the percDart parameter, to a corresponding field of the object, which is referred to by the variable self:

self.percDart = percDart

- Create "get" and "set" methods, one for each of the fields except for population. For example, the following returns the value of the percDart field.

def getPercDart(self):
    return self.percDart

Create similar methods for all of the other simulation parameters. Then create methods that allow you to set each parameter. Each of these methods should take self and the new value of the field as arguments. For example, the following sets the percDart field to a new value.

def setPercDart(self, val):
    self.percDart = val

When you have both get and set methods, test them with the following test program, which should give you this output. Note that it doesn't matter what you call your internal fields. You could store carrying capacity in self.cc, for example. It does matter what you call the get and set methods.

- Write the dartPopulation method. Loop over the population; if an elephant is an adult female (use the isFemale and isAdult methods) and random.random() is less than self.percDart, dart it.

sim.cullElephants_0()
sim.showPopulation()

Make sure there are only 15 elephants in the last step.

- Write the controlPopulation method. It should call either cullElephants_0 or dartPopulation, depending on the value of self.percDart. If you run it with a dart percentage of 0.42, you should get around 1000 total elephants at the end.
If you run it with larger or smaller percDart values, make sure you get correspondingly smaller or larger total population results.

- Write a writeDemographics method, and plot the demographics for a dart percentage of 0.425. Then make a second plot that shows the same data for a dart percentage of 0.0.
- Implement a second cull strategy that culls only adult females. Create a new method for it (juvenile? initialization?).
- Figure out how to automate the graphing process using gnuplot.
- Develop another management strategy, which is to adjust the percent darted based on whether the population is above or below the target. You have to make very small adjustments to the percent darted in order to avoid large oscillations. See how this method responds to an event that decimates the population.
- Explore how the population responds to decimation events that are selective in their effect. For example, what if only calves and juveniles are affected? What if only pregnant females are affected?
- Develop other culling/darting strategies and discuss their effects and trade-offs. How easy would they be to implement?
- Enable the user to control your top level program with optional flags. For example, -par CarryingCapacity would specify that the program should evaluate carrying capacity, and -min 3500 would specify that it should start the evaluation at 3500.
- Check out the os package (import os). What could you do with the os.system function to automate your simulations?

Put 152s17project7 in the label field on the bottom of the page, but give the page a meaningful title (e.g. Milo's Project 7). Include:
- A summary of your findings in the simulations. What did you discover by using the simulation? Do your results make sense?
- A description of any extensions you undertook, including text output or images demonstrating those extensions. If you added any modules or methods, describe them.

Make sure it is there.
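The get/set pattern the tasks describe can be sketched in a few lines. This is a minimal illustration only, not the full Simulation class the project asks for; the carrying-capacity field name follows the self.cc example above, and the default values are the ones mentioned in the assignment:

```python
class Simulation:
    """Minimal sketch of the assignment's constructor plus get/set pattern."""

    def __init__(self, percDart=0.425, carryingCapacity=3500):
        # Each constructor parameter is assigned to a field on self.
        self.percDart = percDart
        self.cc = carryingCapacity  # internal field names are up to you

    # "get" methods return the current value of a field
    def getPercDart(self):
        return self.percDart

    def getCarryingCapacity(self):
        return self.cc

    # "set" methods take self plus the new value of the field
    def setPercDart(self, val):
        self.percDart = val

    def setCarryingCapacity(self, val):
        self.cc = val

sim = Simulation()
print(sim.getPercDart())   # -> 0.425
sim.setPercDart(0.42)
print(sim.getPercDart())   # -> 0.42
```

The method names (getPercDart, setPercDart, ...) are the part the test program depends on; the internal field names are private to your class.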
http://cs.colby.edu/courses/S17/cs152-labs/labs/lab07/assignment.php
Jifty::Web::Session - A Jifty session handler

In your etc/config.yml (optional):

framework:
  Web:
    # The default ($PORT is replaced by the port the app is running on)
    SessionCookieName: JIFTY_SID_$PORT

Returns a new, empty session.

Returns the session's id if it has been loaded, or undef otherwise.

Assign a new ID, and store it server-side if necessary.

Load up the current session from the given ID, or the appropriate cookie (see "cookie_name") otherwise. If both of those fail, creates a session in the database.

Load up the current session from the given (key, value) pair. If no matching session could be found, it will create a new session with the key, value set. Be sure that what you're loading by is unique. If you're loading a session based on, say, a timestamp, then you're asking for trouble.

Flushes the session and leaves the session object blank.

Returns true if the session has already been loaded.

Returns the value for KEY for the current user's session. TYPE, which defaults to "key", allows accessing of other namespaces in the session, including "metadata" and "continuation".

Sets the value VALUE for KEY for the session. TYPE, which defaults to "key", allows values to be set in other namespaces, including "metadata" and "continuation". VALUE can be an arbitrary perl data structure -- Jifty::Web::Session will serialize it for you.

Remove key KEY from the cache. TYPE defaults to "key".

Removes the session from the database entirely.

Stores a continuation in the session.

Pulls a continuation from the current session. Expects a continuation ID.

Removes a continuation with id ID from the store.

Return a hash of all the continuations in this session, keyed by the continuations' id.

Sets the session cookie.

Returns the current session's cookie_name -- it is the same for all users, but varies according to the port the server is running on.

Get or set the session's expiration date, in a format expected by Cache::Cache.
http://search.cpan.org/~sartak/Jifty-1.10518/lib/Jifty/Web/Session.pm
CC-MAIN-2016-44
refinedweb
332
68.47
A dialog emulates a modal window that blocks the user-interface. More... #include <Wt/Ext/Dialog> A dialog emulates a modal window that blocks the user-interface. A modal window blocks the user interface, and does not allow the user to interact with any other part of the user interface until the dialog is closed. There are two ways for using a Dialog window. The easiest way is using the exec() method: after creating a Dialog window, call the exec() method which blocks until the dialog window is closed, and returns the dialog result. Typically, an OK button will be connected to the accept() slot, and a Cancel button to the reject() slot. This solution has the drawback that it is not scalable to many concurrent sessions, since every recursive event loop (which is running during the exec() method) locks a thread. Therefore it is only suitable for software that doesn't need to scale (to thousands of users). A second way is by treating the Dialog as another widget; the dialog is hidden by default. You must use the method show() or setHidden(false) to show the dialog. Since Dialog is a Panel, the dialog contents may be laid out inside the dialog using layout managers. To be compatible with WDialog however, a contents() method is provided which creates a WFitLayout that fits a single WContainerWidget widget inside the dialog. Only one Dialog window may exist at any time in a single application. An attempt to instantiate a second dialog will result in undefined behaviour. The API is a superset of the WDialog API: The result of a modal dialog execution. Construct a Dialog with a given window title. Only a single Dialog may be constructed at any time. Unlike other widgets, a dialog does not need to be added to a container widget to be displayed. Stop a recursive event loop with result Accepted. Add a button at the bottom of this dialog. Is the same as Panel::addFooterButton() Return the list of buttons at the bottom of this dialog.
Is the same as Panel::footerButtons() Return the dialog contents container. The first invocation to this method creates a single WContainerWidget that is fitted in the panel content area, like this: Return the default button for this dialog. Execute the dialog in a recursive event loop. Executes the dialog. This blocks the current thread of execution until one of done(DialogCode), accept() or reject() is called. Warning: using exec() does not scale to many concurrent sessions, since the thread is locked. Signal emitted when the recursive event loop is ended. Return if the size grip is enabled. Stop a recursive event loop with result Rejected. Remove a button from the bottom of this dialog. The button must have been previously added using addButton(). Is the same as Panel::removeFooterButton() Return the result that was set for this dialog. Configure a default button for this dialog. The button must have been previously added using addButton(). A default button is activated when the user presses Return in the dialog. Reimplemented in Wt::Ext::MessageBox. Configure a size grip to allow the user to resize this dialog. When a size grip is enabled, then the user may resize the dialog window. The default is true. Set the dialog window title. Is the same as Panel::setTitle(const WString&) Return the dialog window title. Is the same as Panel::title()
https://webtoolkit.eu/wt/doc/reference/html/classWt_1_1Ext_1_1Dialog.html
CC-MAIN-2016-50
refinedweb
566
58.48
Hello everyone, I have a program here that will display between 2-4 cars "racing" across the track. I am trying to get the cars to move at random speeds. Some faster, some slower, but always different. I have tried a for loop and if it is working it is so negligible I cannot tell the difference. Here is the code I did for the loop:

Code :
public CarImage() {
    int y = (int)(Math.random() * 10) + 10;
    Timer timer1 = new Timer(y, new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            x += 10;
            c++;
            repaint();
        }
    });
    timer1.start();
}

Here is the complete code in case someone needs to see what is going on:

Code :
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.util.Random;

public class RacingCar extends JFrame {

    public RacingCar() {
        int x = (int)(Math.random() * 3) + 2;
        setLayout(new GridLayout(x, 1, 5, 5));
        for (int i = 0; i < x; i++) {
            add(new CarImage());
        }
    }

    public static void main(String[] args) {
        JFrame frame = new RacingCar();
        frame.setTitle("Racing Car");
        frame.setSize(1200, 350);
        frame.setLocationRelativeTo(null);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }

    class CarImage extends JPanel {
        protected int x = 0;
        protected int y = 150;
        protected int z = 300;
        protected int c = 0;

        public CarImage() {
            Timer timer1 = new Timer(40, new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    x += 10;
                    c++;
                    repaint();
                }
            });
            timer1.start();
        }

        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            // x = 0;
            y = getHeight();
            z = getWidth();
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, z, y);
            Polygon polygon = new Polygon();
            polygon.addPoint(x + 10, y - 21);
            polygon.addPoint(x + 20, y - 31);
            polygon.addPoint(x + 30, y - 31);
            polygon.addPoint(x + 40, y - 21);
            if (x < z - 50) {
                g.setColor(Color.BLACK);
                g.fillOval(x + 10, y - 11, 10, 10);
                g.fillOval(x + 30, y - 11, 10, 10);
                g.setColor(Color.BLUE);
                g.fillRect(x, y - 21, 50, 10);
                g.setColor(Color.GRAY);
                g.fillPolygon(polygon);
                g.setColor(Color.RED);
            } else {
                x = 0;
            }
            if (c < z - 86) {
                g.drawString("Clint's Car", c, y - 51);
            } else {
                c = 0;
            }
        }
    }
}

Please note that the loop I tried is NOT in the above program. If anyone has a suggestion as to how I can get the different speeds I could use some help. A snippet of code would be nice. Yes, this is homework. I did try what I thought would work but obviously it did not work. Thank you in advance.
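Not part of the original thread: one straightforward fix is to pick a random Timer delay per CarImage when it is constructed, instead of a fixed 40 ms for every panel. A minimal sketch of the idea, where the pickDelay helper and the 20-80 ms range are my own hypothetical choices:

```java
import java.util.Random;

public class RandomDelay {
    // Hypothetical helper: returns a delay between minMs (inclusive)
    // and maxMs (exclusive), different on each call.
    static int pickDelay(Random rng, int minMs, int maxMs) {
        return minMs + rng.nextInt(maxMs - minMs);
    }

    public static void main(String[] args) {
        Random rng = new Random();
        // Give each of four cars its own animation delay.
        for (int i = 0; i < 4; i++) {
            int delay = pickDelay(rng, 20, 80);
            System.out.println("car " + i + " delay=" + delay + "ms");
            // Inside CarImage's constructor you would then write:
            // Timer timer1 = new Timer(delay, listener);
            // timer1.start();
        }
    }
}
```

Each panel then owns a Timer with a different delay, so the cars advance at visibly different rates. An alternative with the same effect is to keep one shared delay and instead randomize the per-tick step, e.g. x += step with a random step chosen per car.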
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/17950-problems-math-random-loop-printingthethread.html
CC-MAIN-2015-06
refinedweb
400
60.01
Our Own Multi-Model Database (Part 6)

As the journey toward a multi-model database continues, we look at indexing data to enhance performance while also examining relationships between nodes.

Back in part two, we ran some JMH tests to see how many empty nodes we could create. (If you want to start even earlier, dive into parts one, three, four, and five.) Let's try that test one more time, but adding some properties. Our nodes will have a username, an age, and a weight randomly assigned. It's not a long test, but just enough to give us a ballpark.

@Benchmark
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@Fork(1)
@Threads(4)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public void measureCreateNodeWithProperties() throws IOException {
    HashMap<String, Object> properties = new HashMap<>();
    properties.put("username", "username" + rand.nextInt());
    properties.put("age", rand.nextInt(100));
    properties.put("weight", rand.nextInt(300));
    db.addNode(String.valueOf(rand.nextInt()), properties);
}

When we run it, we get about 350,000 operations per second. That's pretty damn nice, but what about reads? Let's start with a little foreshadowing… In the ChronicleMap readme:

Chronicle Map is not a multimap. Using a ChronicleMap<K, Collection> as multimap is technically possible, but often leads to problems (see a StackOverflow answer for details).

Then on Twitter: Hmm… we already saw in part two that the serializing and deserializing gave me trouble reading. So let's test. We're going to borrow a stupid test from another multi-model database that is really a fruit but tells everyone they are a vegetable. These guys go around talking about distributing graph databases with billions of edges and run bullshit comparative benchmarks on 1,632,803 nodes and 30M relationships graphs. Anyway, they run an aggregation test where they group-count all the nodes by their age property.
The equivalent of this:

@Test
public void shouldAggregate() {
    Iterator<Map.Entry<String, HashMap>> iter = db.getAllNodes();
    HashMap<Integer, Integer> ages = new HashMap<>();
    Integer age;
    Long start = System.currentTimeMillis();
    while (iter.hasNext()) {
        Map.Entry<String, HashMap> nodeEntry = iter.next();
        age = (Integer) nodeEntry.getValue().get("age");
        ages.merge(age, 1, Integer::sum);
    }
    Long end = System.currentTimeMillis();
    System.out.println(end - start);
}

So how fast do we complete this test? In about 28 seconds. How fast do they do it in? 1.250 seconds. Damn it, we can't let someone else win. No way. So what do you do when the docs tell you that you are barking up the wrong tree and one of the authors of the library you are using tells you that you done messed up? But what am I shopping for? Let's go back to our test for a minute: what if we only wanted to find users with ages between 35 and 44? How would we handle that? Iterate and throw away invalid values? No, that doesn't scale. We need an index. We haven't even talked about indexing yet. I was going to punt on that until later, but it is later. Alright… so we either need to bake in Lucene or another index engine that was really meant to index documents, not simple properties… or we need to find a Java collection library that has indexing built in, doesn't require serializing and deserializing for access, and is not much slower than ChronicleMap. After praying to the all-knowing and all-powerful search engine in the cloud for some help, it answered with CQEngine.

CQEngine solves the scalability and latency problems of iteration by making it possible to build indexes on the fields of the objects stored in a collection, and applying algorithms based on the rules of set theory to reduce the time complexity of accessing them.

I can't believe I never heard of it before. Basically, it's just what I'm looking for, but we already got burned twice, so how about before we get too far we do some testing first?
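As an aside, the `ages.merge(age, 1, Integer::sum)` line in the aggregation test above carries the whole group-count: Map.merge stores 1 if the key is absent, otherwise adds 1 to the existing count. A tiny standalone illustration of that idiom, with made-up ages:

```java
import java.util.HashMap;
import java.util.Map;

public class GroupCount {
    public static void main(String[] args) {
        int[] ages = {35, 42, 35, 60, 42, 35};
        Map<Integer, Integer> counts = new HashMap<>();
        for (int age : ages) {
            // absent key -> store 1; present key -> combine old value and 1 via Integer::sum
            counts.merge(age, 1, Integer::sum);
        }
        System.out.println(counts); // {35=3, 42=2, 60=1} (iteration order not guaranteed)
    }
}
```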
Good, first thing I gotta do is create a copy of GuancialeDB, called "GuancialeDB2", and start replacing. We'll need an ObjectLockingIndexedCollection because we want to have unique constraints on the node id and we don't want two or more threads stepping all over each other.

private static IndexedCollection<PropertyContainer> nodes = new ObjectLockingIndexedCollection<>();
private static IndexedCollection<PropertyContainer> relationships = new ObjectLockingIndexedCollection<>();

We'll create a class called PropertyContainer that has an id and a bunch of properties in a HashMap like before.

public class PropertyContainer {
    public final String id;
    public final HashMap<String, Object> properties;

    public PropertyContainer(String id, HashMap<String, Object> properties) {
        this.id = id;
        this.properties = properties;
    }

    public String getId() {
        return id;
    }

    public static final Attribute<PropertyContainer, String> ID =
        attribute("id", PropertyContainer::getId);
    ...

private GuancialeDB2() {
    nodes.addIndex(UniqueIndex.onAttribute(PropertyContainer.ID));
    relationships.addIndex(UniqueIndex.onAttribute(PropertyContainer.ID));
    related = new HashMap<>();
}

There were some minor tweaks to various methods, but it wasn't very painful. Luckily we have not written much code, and this stuff is contained within just the GuancialeDB2 class, so it was easy to change.
Running various benchmarks now gives us:

Benchmark                                                      Score        Error          
GuancialeDBBenchmark.measureCreateEmptyNode                    2727115.220  ± 542998.073   ops/s
GuancialeDBBenchmark2.measureCreateEmptyNode                   3070698.529  ± 1159462.893  ops/s
GuancialeDBBenchmark.measureCreateEmptyNodes                   6535.202     ± 1608.875     ops/s
GuancialeDBBenchmark2.measureCreateEmptyNodes                  5966.563     ± 4186.546     ops/s
GuancialeDBBenchmark.measureCreateNodeWithProperties           353071.807   ± 23993.490    ops/s
GuancialeDBBenchmark2.measureCreateNodeWithProperties          889107.557   ± 393243.609   ops/s
GuancialeDBBenchmark.measureCreateNodesWithProperties          471.845      ± 88.751       ops/s
GuancialeDBBenchmark2.measureCreateNodesWithProperties         277.441      ± 350.483      ops/s
GuancialeDBBenchmark.measureCreateEmptyNodesAndRelationships   3.861        ± 10.257       ops/s
GuancialeDBBenchmark2.measureCreateEmptyNodesAndRelationships  1.602        ± 1.314        ops/s

The variability of some of these tests is HUGE… a good reason not to run performance tests on your laptop. But if we squint at it, it tells us that it's close enough on the writes. What about reads? What about that aggregation test from earlier on? Would you believe it came all the way down to just 400ms? Boo Yeah! Now we're winning. The indexing functionality lets us generate new indexes at runtime, which is perfect if we're going to continue to follow Neo4j into the 2.0 era with Labels and optional Schema. Let's see how that works by replicating the aggregation test, but to only count people with ages 35-44. First, we'll create our users.

IndexedCollection<PropertyContainer> nodes = new ConcurrentIndexedCollection<>();
for (int person = 0; person < 1632803; person++) {
    HashMap<String, Object> properties = new HashMap<>();
    properties.put("id" + person, "id" + person);
    properties.put("age", rand.nextInt(120));
    nodes.add(new PropertyContainer("id" + person, properties));
}

Class<? extends SimpleNullableAttribute<PropertyContainer, Integer>> attributeClass =
    generateSimpleNullableAttributeForParameterizedGetter(
        PropertyContainer.class, Integer.class, "getIntegerProperty", "age", "age");
SimpleNullableAttribute<PropertyContainer, Integer> ageAttribute = attributeClass.newInstance();

public Integer getIntegerProperty(String key) {
    return (Integer) properties.get(key);
}

nodes.addIndex(NavigableIndex.onAttribute(ageAttribute));

HashMap<Integer, Integer> ages = new HashMap<>();
Integer age;
Query<PropertyContainer> query = between(ageAttribute, 35, 44);
Long start = System.currentTimeMillis();
ResultSet<PropertyContainer> results = nodes.retrieve(query);
for (PropertyContainer nodeEntry : results) {
    age = (Integer) nodeEntry.getProperties().get("age");
    ages.merge(age, 1, Integer::sum);
}
Long end = System.currentTimeMillis();
System.out.println(end - start);

There is just one problem… since we're no longer using ChronicleMap, the name GuancialeDB doesn't make sense anymore. Plus I also want to leave that code up, so people see the folly of my ways. So we need a new name. A good one, one that sticks. At dinner the other day, Luke Gannon suggested "Disoriented DB." Named somewhat after another multi-model database that doesn't know which way to turn, but they know benchmarks and magically win them all *cough* cached results *cough*. They also know feature tables. They have the best feature tables. There are no better feature tables. Full of the "alternate facts" we keep hearing so much about lately. I also heard that Sean Parker stopped over for dinner at another database vendor that was doing bullshit benchmarks recently and told them to "drop the D," just "graph." I like that, simpler is better. So now it is just "Disoriented." We can write Disoriented tests, start our Disoriented server, etc. When your relational database is giving you troubles, you can tell your manager "let's get Disoriented".
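What makes the NavigableIndex query fast is worth spelling out: conceptually it is a sorted map from attribute value to the set of matching objects, so a query like between(ageAttribute, 35, 44) becomes a sub-map lookup rather than a scan of all 1.6M nodes. This is not CQEngine code, just the core idea sketched with a plain TreeMap and invented sample data:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class NavigableIndexSketch {
    public static void main(String[] args) {
        // index: age -> ids of nodes with that age (a toy stand-in for a navigable index)
        TreeMap<Integer, List<String>> byAge = new TreeMap<>();
        int[][] people = {{0, 34}, {1, 35}, {2, 40}, {3, 44}, {4, 45}};
        for (int[] p : people) {
            byAge.computeIfAbsent(p[1], k -> new ArrayList<>()).add("id" + p[0]);
        }
        // Range query 35..44 inclusive: only matching keys are visited.
        List<String> hits = new ArrayList<>();
        for (Map.Entry<Integer, List<String>> e : byAge.subMap(35, true, 44, true).entrySet()) {
            hits.addAll(e.getValue());
        }
        System.out.println(hits); // [id1, id2, id3]
    }
}
```

CQEngine layers query parsing, multiple index types, and concurrency on top of this idea, but the win is the same: the range query touches only matching keys instead of iterating the whole collection.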
Once you get Disoriented, you can tell your DevOps team to just "deploy Disoriented" and get some nachos. Love it. New code on GitHub as always.

…and a message to all our graph database vendor frenemies out there. Stop with the competitive vendor benchmarks. We don't do that at Neo4j. We spend our time educating the world about graphs. We wrote a book and give it away for free, we write countless blog posts and encourage our community to do the same, we have over a hundred meet-up groups, we write example models and queries, we host two graph conferences a year and would be happy to speak about graphs anywhere we're invited. Do the same. Work on your documentation, work on guides and walk-throughs, write sample applications, help your users on Slack or StackOverflow, start topics on your mailing list, host a meet-up, grow your community. Help the world make sense of data by using graphs, and fight the real enemy.

Max De Marzi, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/our-own-multi-model-database-part-6
CC-MAIN-2018-43
refinedweb
1,560
50.43
- Author: jorjun
- Posted: March 2, 2011
- Language: Python
- Version: 1.2
- Tags: choice choices model field
- Score: 2 (after 2 ratings)

Nice to name your constant multiple choice fields in models, this is one way of doing that. Sorry I haven't looked into existing alternatives. But this approach worked for me.

CORRECTION (sorry)

def get_named_choices(name, choices):
    return namedtuple(name, choices)(*range(len(choices)))

class Property:
    def __init__(self, kwd):
        for (key, val) in kwd.iteritems():
            if isinstance(val, dict):
                val = Property(val)
            self.__dict__[key] = val
https://djangosnippets.org/snippets/2373/
CC-MAIN-2016-26
refinedweb
150
60.04
Pftop is a small, curses-based utility for real-time display of active states and rule statistics for pf, the packet filter (for OpenBSD)

WWW:

To install the port: cd /usr/ports/sysutils/pftop/ && make install clean
To add the package: pkg install pftop

PKGNAME: pftop

distinfo:
SHA256 (pftop-0.7.tar.gz) = afde859fab77597e4aae1ef6b87f1bb26a5ad8cb2b1d7316a12e5098153492af
SIZE (pftop-0.7.tar.gz) = 59765

There are no ports dependent upon this port

===> The following configuration options are available for pftop-0.7_8:
ALTQ=off: ALTQ support for queue statistics
===> Use 'make config' to modify these settings

Number of commits found: 46

Fix building of sysutils/pftop on FreeBSD 12, where pcap-int.h has been removed. This patches the affected files to use <pcap/pcap.h> instead. Submitted by: woodsb02 Approved by: bapt PR: 217219 MFH: 2017Q2

Mark some ports failing on power64. In cases where the error message was a stub, provide a real one. While here, pet portlint. Approved by: portmgr (tier-2 blanket) Reported by: swills

- Always check OPSYS along with OSVERSION Approved by: portmgr blanket

sysutils/pftop: add ALTQ option, disable by default - ALTQ is not in GENERIC and thus browsing through pftop modes the queue view gives an error. - While there, modernise the ATLQ disable patch, it is not a unified diff. - Add LICENSE PR: 215313 Submitted by: Franco Fichtner <franco@opnsense.org> Approved by: araujo (maintainer)

Extract do-patch into a separate script. PR: 215761 Submitted by: mat Exp-run by: antoine Sponsored by: Absolight Differential Revision:

Remove BROKEN_FreeBSD_9 Approved by: portmgr (blanket)

Do not terminate BROKEN messages with period, it is added by the framework. Mark as broken on FreeBSD 9.

- Fix build on powerpc64. - Bump PORTREVISION.
Submitted by: demik475_gmail.com Differential Revision: many ports: mark broken on powerpc64 - For the CoDel and FairQ patch, the safest FreeBSD version must be 1100080, as the CoDel landed in r287009 and FairQ landed in r284777. - Bump PORTREVISION. Submitted by: junovitch - Add support for ALTQ FairQ and Codel protocols. - Bump PORTREVISION to 5. PR: ports/204405 Submitted by: Renato Botelho <garga@FreeBSD.org> Obtained from: pfSense Sponsored by: Rubicon Communications (Netgate) Drop 8 support. With hat: portmgr Sponsored by: Absolight Differential Revision: sysutils/pftop: unbreak build on 11.0C pftop.c:47:10: fatal error: 'altq/altq.h' file not found #include <altq/altq.h> ^ 1 error generate Reported by: pkg-fallout Approved by: portmgr blanket - Fix build on ARM. PR: 198682 Submitted by: garga - Fix my latest commit, it UNBROKEN on 11, but BREAK it in all other OSVERSION. - Bump PORTREVISION. Reported by: "Herbert J. Skuhra" <herbert@oslo.ath.cx> Tested by: "Herbert J. Skuhra" <herbert@oslo.ath.cx> - Mark as UNBROKEN on FreeBSD >= 1100000. PR: 188826 Submitted by: Oliver Peter Tested by: daniel.engberg.lists Rename sysutils/ patch-xy patches to reflect the files they modify. - Mark as BROKEN on HEAD. PR: 188826 Submitted by: daniel.engberg.lists Support staging Add NO_STAGE all over the place in preparation for the staging support (cat: sysutils) - Instead of patching the code just use C89 and stop the problems with inline. PR: ports/180269 Submitted by: tijl@ - Take maintainership. mlaier's bit has been taken in for safekeeping. - Unbreak pftop on HEAD > r240233 [1] . Reported by Sven Hazejager. - Unbreak on FreeBSD 9 without pf 4.5 [1] - Fix segfaults on FreeBSD 8 [2] - Fix rule display in a couple of views on FreeBSD 9 and 10 [1] . Reported and tested by Thomas Kinsey . 
Fix reported to OpenBSD by Robert Mills PR: ports/175927 Submitted by: [1] Fabian Keil <fk@fabiankeil.de>, [2] garga@ Approved by: maintainer timeout (over 60 days) unbreak on >= 9.0 PR: ports/155938 Submitted by: Fabian Keil <fk@fabiankeil.de> Approved by: maintainer timeout (46 weeks) - Mark BROKEN on 9.X: does not compile Reported by: pointyhat - DISTNAME= ${PORTNAME}-${PORTVERSION} is the default and not needed. PR: ports/153292 Submitted by: myself (pgollucci) Tested by: -exp run by pav Approved by: portmgr (pav) - Use canonical format for FreeBSD.org MAINTAINER addresses - Remove obsolete MD5 checksum while I'm here PR: ports/152844 Submitted by: sunpoet (myself) Approved by: miwi (with portmgr hat) - Remove conditional checks for FreeBSD 5.x and older - Remove duplicates from MAKE_ENV after inclusion of CC and CXX in default MAKE_ENV Update patch to properly display direction of filtering rules. Bump PORTREVISION. PR: ports/123670 Submitted by: Andrey Groshev <greenx@yandex.ru> Reviewed by: mlaier (maintainer) Approved by: garga (mentor), mlaier (maintainer) Reported-by: many[1], Frank Fenor[2] Update to 0.7 - adds state display filters. While here also add a patch to support dynamic ALTQ (by ignoring INACTIVE queues). Approved by: flz Update pftop to 0.6 in order to make it work with changed pf ABI after the 4.1 import. Reported by: Bruce Cran Approved by: gabor PR: ports/116187 fix SIZE Update to 0.5. PR: ports/92094 Submitted by: Jeffrey H dot Johnson <CPE1704TKS at bellsouth dot net> Approved by: maintainer SHA256ify Approved by: krion@ - Remove dependencies on security/pf, it was removed. pf is in base since 502106 Pointy hat to: pav Update pftop for the pf 3.7 import. Submitted by: Edwin Brown <edwin(!)brown(at)gmail(!)com> (w/changes) Approved by: pav Enable ALTQ and 3.5 functionality after the import and update to base. Submitted by: maintainer Reminded by: yong). 
New port: sysutils/pftop - Utility to monitor securtiy/pf Pftop is a small, curses-based utility for real-time display of active states and rule statistics for pf, the packet filter (for OpenBSD) This used to be part of security/pf but is now individual after (ports/57305) PR: ports/57307 Submitted by: Max Laier <max@love2party.net>
https://www.freshports.org/sysutils/pftop/
CC-MAIN-2017-47
refinedweb
986
57.67
Many software applications encode a large amount of declarative knowledge: tax preparation systems, mortgage-banking software, and hotel reservation systems, to name just a few. We often refer to declarative knowledge as our "business rules". Encoding business rules into procedural code makes the rules harder to find, read, and modify. Over the years, the software industry has invented tools for working with declarative knowledge. We categorize these tools as rules engines, inference engines, and logic machines. A rules engine specializes in making declarative knowledge easier to implement, process, isolate, and modify.

Windows Workflow offers the best of both worlds. We can use Sequence activities to implement procedural knowledge, and Policy activities to execute declarative knowledge. In this article, we will focus on the activities that use rules and conditions to express declarative knowledge. The topics will include the Policy activity, the ConditionedActivityGroup, and others.

Three important terms we will use in this chapter are conditions, rules, and rule sets. In WF, conditions are chunks of logic that return true or false. A number of WF activities utilize conditions to guide their behavior. These activities include the While activity, the IfElseBranch activity, and the ConditionedActivityGroup. The While activity, for instance, loops until its Condition property returns false. We can implement conditions in code, or in XML.

Rules are conditions with a set of actions to perform. Rules use a declarative if-then-else style, where the "if" is a condition to evaluate. If the condition evaluates to true, the runtime performs the "then" actions, otherwise the "else" actions. While this sounds like procedural code, there are substantial differences. The if-then-else constructs in most languages actively change the flow of control in an application.
Rules, on the other hand, passively wait for an execution engine to evaluate their logic and invoke their actions. A rule set is a collection of one or more rules. As another example from the hotel business, we might have three rules we use to calculate the discount on the price of a room (shown here in pseudo-code).

if person's age is greater than 55 then discount = discount + 10%
if length of stay is greater than 5 days then discount = discount + 10%
if discount is greater than 12% then discount = 12%

Before we can evaluate these three rules, we need to group them inside a rule set. We can assign each rule a priority to control the order of evaluation. WF can revisit rules if later rules change the data used inside previous rules. We can store rules in an external XML file, and feed external rules to the workflow runtime when creating a new workflow. WF provides an API for us to programmatically update, create, and modify rule sets and rules at runtime. The features and execution semantics described above give us more flexibility when compared to procedural code. We can dynamically customize rules to meet the needs of a specific customer or business scenario. We will return to rules and rule sets later in the article. For now we will drill into conditions in Windows Workflow.

The While activity is one activity that uses a condition. The activity will repeatedly execute its child activity until its Condition property returns false. The Properties window for the While activity allows us to set this Condition property to a Code Condition or a Declarative Rule Condition. In the figure to the right (click to view), we've told the While activity to use a code condition, and that the code condition is implemented in a method named CheckBugIndex. A code condition is an event handler in our workflow's code-beside file. A code condition returns a boolean value via a ConditionalEventArgs parameter.
Because a code condition is just another method on our workflow class, the conditional logic compiles into the same assembly that hosts our workflow definition. The implementation of CheckBugIndex is shown below. We have an array of Bug objects for the workflow to process. The array might arrive as a parameter to the workflow, or through some other communication mechanism like the HandleExternalEvent activity. The workflow uses the bugIndex field to track its progress through the array. Somewhere, another activity will increment bugIndex as the workflow finishes processing each bug. If the array of bugs is not initialized, or if the bugIndex doesn't point to a valid entry in the array, we want to halt the While activity by having our code condition return a value of false.

Code conditions, like our method above, are represented by CodeCondition objects at runtime. The CodeCondition class derives from an abstract ActivityCondition class. Because the Condition property of the While activity accepts an ActivityCondition object, we have the choice of assigning either a CodeCondition or a RuleConditionReference. Regardless of which we choose, all the runtime needs to do is call the Evaluate method to fetch a boolean result. A CodeCondition will ultimately fire its Condition event to retrieve this boolean value. It is this Condition event that we are wiring up to the method in our code-beside file. We can see this a little more clearly by looking at the XAML markup produced by the designer.

Declarative rule conditions work differently from code conditions. If we expressed our CheckBugIndex condition as a declarative rule, we would just need to type the following string into the designer:

bugs == null || bugIndex >= bugs.Length

Windows Workflow will parse and evaluate this rule at runtime. We don't need to create a new method in our workflow class. The definition for this expression will ultimately live inside a .rules file as part of our workflow project. A RuleConditionReference object will reference the expression by name (every rule in WF has a name). As an example, suppose we are creating a new workflow with a While activity, and we want the activity to loop until a _retryCount field exceeds some value. After we drop the While activity in the designer, we can open the Properties window and click the drop-down list beside the Condition property. This time, we will ask for a Declarative Rule Condition. The designer will make two additional entries available - ConditionName and Expression. Clicking in the text box beside ConditionName will display the ellipses pointed to in the figure below.
A RuleConditionReference object will reference the expression by name (every rule in WF has a name). As an example, suppose we are creating a new workflow with a While activity, and we want the activity to loop until a _retryCount field exceeds some value. After we drop the While activity in the designer, we can open the Properties windows and click the drop drown list beside the Condition property. This time, we will ask for a Declarative Rule Condition. The designer will make two additional entries available - ConditionName and Expression. Clicking in the text box beside ConditionName will display the ellipses pointed to in the figure below. Clicking the ellipses button launches the Select Condition dialog, shown below (click to expand). This dialog will list all of the declarative rule conditions in our workflow, and will initially be empty. Along the top of the dialog are buttons to create, edit, rename, and delete rules. The Valid column on the right-hand side will let us know about syntax errors and other validation problems in our rules. At this point we want to create a new rule. Clicking the New… button will launch the Rule Condition Editor shown below (click to expand). Inside this editor is where we can type our expression. The expression we've entered will return true as long as the _retryCount field is less than 4. If we type the C# this keyword (or the Me keyword in Visual Basic), an Intellisense window will appear and display a list of fields, properties, and methods in our workflow class. Clicking the OK button in the editor will return us to the Select Condition dialog, where we can click the Rename button to give our condition a friendly name (the default name would be Condition1, which isn't descriptive). We will give our rule the name of RetryCountCondition. After all these button clicks, a new file will appear nested underneath our workflow definition in the Solution Explorer window. 
The file will have the same name as our workflow class name but with an extension of .rules. Inside is a verbose XML representation of the condition we wrote. If you remember our XAML discussion from the article "Authoring Workflows", you'll realize this is a XAML representation of objects from the System.CodeDom namespace. The CodeDom (Code Document Object Model) namespace contains classes that construct source code in a language-agnostic fashion. For instance, the CodeBinaryOperatorExpression class represents a binary operator between two expressions. The instance in our XAML is a "LessThan" operator, but could be an addition, subtraction, greater than, or bitwise operation. At compile time, this .rules file becomes an embedded resource in our assembly. WF will read the resource at runtime and use classes in the System.CodeDom.Compiler namespace to generate and compile source code from the XAML. Once the runtime compiles the expression, WF can evaluate the rule to inspect its result.

Most of the expressions we write in C# or VB.NET will be valid rules. For instance, all of the following expressions are valid. We can invoke methods, retrieve properties, index into arrays, and even use other classes from the base class library, like the Regex class for regular expressions.

this.x + 1 < 100
this.name.StartsWith("Scott")
Regex.Match(this.AreaCode, @"^\(\d{3}\)\s\d{3}-\d{4}$").Success
this.CheckIndex()
this.GetResult() != 10
this.numbers[this.x] == this.numbers[this.x + 1]

Expressions must evaluate to true or false. The following examples are invalid.
Let's also assume our condition (Condition1) is in a file named Activation.rules. We can load and execute the workflow by handing XmlReaders for both files to the workflow runtime. Activation gives us a great deal of flexibility. For instance, we could store workflow and rule definitions inside of database records, and update the rules without recompiling or redeploying an application. Before we finish talking about conditions, we need to take a closer look at one condition-centric activity that is flexible and powerful. The ConditionedActivityGroup (CAG) executes a collection of child activities based on a When condition attached to each child. Furthermore, the CAG continues to execute until an Until condition on the CAG returns true. This behaviour makes the CAG somewhat of a cross between a While activity and a Parallel activity. When we drop the CAG into the workflow designer, it will appear as shown below. In the top of the activity shape is an activity "storyboard" where we can drop activities. The arrows on either side of the storyboard allow us to scroll through the child activities in the storyboard. When we select an activity in the storyboard, the selected activity will appear in the bottom of the activity shape inside the preview box. We can toggle between preview and edit modes using the button in the middle of the CAG's shape. In the figure below, we've arranged some activities in the CAG's storyboard. The first activity is a Sequence activity, and we've selected the activity for editing. The bottom of the CAG's shape displays the Sequence activity in detail. Inside the Sequence activity are two Code activities. Since the Sequence activity is a direct child of the CAG, we can assign the Sequence activity a When condition. As with all conditions, the When condition can be a code condition or a declarative rule. The CAG only executes a child activity if the child's When condition returns true; the When condition is optional, however.
If we do not specify a When condition, the child activity will execute only once. No matter how many times the CAG continues to loop, an activity without a When condition will execute only during the first iteration. The CAG repeatedly executes child activities until one of two things happens. First, the CAG itself has an Until condition (see the figure below); when the Until condition returns true, the CAG immediately stops processing and cancels any currently executing child activities. Second, the CAG will stop processing if there are no child activities left to execute, which can occur when the When conditions of all child activities return false. It's important to note that the CAG evaluates the Until condition when it first begins executing. If the Until condition returns true at this point, no child activities will execute. The CAG also evaluates the Until condition each time a child activity finishes execution, which means only a subset of the child activities may execute. Finally, the CAG doesn't guarantee the execution order of child activities, which is why the CAG is similar to the Parallel activity. For example, dropping a Delay activity inside the CAG will not block the CAG from executing its other child activities. The CAG is useful in goal-seeking scenarios. Let's say we are building a workflow to book flight, hotel, and car reservations for a trip. Inside the workflow, we might use web service activities to request pricing information from third-party systems. We can arrange the web service calls inside a CAG to request prices repeatedly until we meet a goal. Our goal might be for the total price of the trip to meet a minimum cost, or we might use a more advanced goal that includes cost, total travel time, and hotel class. Turning now to rules and rule sets, the first concept to notice is that the RuleSet class manages a collection of rules. The Policy activity will use the Execute method of a RuleSet to process the rule collection. We will cover the Policy activity in more detail soon.
Every Rule inside a RuleSet has a Condition property that references a single RuleCondition object. The RuleSet logic will use the Evaluate method of a RuleCondition to retrieve a value of true or false. Every Rule maintains two collections of RuleAction objects - the ElseActions and the ThenActions. When a rule's condition evaluates to true, the runtime invokes the Execute method of each action in the ThenActions collection; otherwise, the runtime invokes the Execute method of the actions in the ElseActions collection. With a basic understanding of how rules work on the inside, let's take a look at the Policy activity. Encarta describes policy as "a program of actions adopted by an individual, group, or government". Policies are everywhere in real life. Universities define policies for student admissions, and banks define policies for lending money. U.S. banks often base their lending policies on credit scores, and a credit score takes into account many variables, like an individual's age, record of past payment, income, and outstanding debt. Business policy can become very complex, and is full of declarative knowledge. As we discussed at the beginning of the chapter, declarative knowledge is about the relationships in data. For example, one bank's policy might say that if my credit score is less than 500 points, they will charge me an extra one percent in interest. Although we can use a Policy activity almost anywhere inside of a larger workflow, we will be using a simple workflow with only a single Policy activity inside. All we need is to create a new sequential workflow and drag a Policy shape into the designer. In the Properties window we can see the RuleSetReference property, which is the primary property of a Policy activity. We can click the ellipsis button in the Properties window to launch the Select Rule Set dialog, shown next.
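Stepping away from the designer for a moment, the object model just described can also be assembled in code. A hedged sketch (the rule name and expressions are illustrative only):

```csharp
// Condition: this._retryCount < 4, built as a CodeDom expression tree.
RuleExpressionCondition condition = new RuleExpressionCondition(
    new CodeBinaryOperatorExpression(
        new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "_retryCount"),
        CodeBinaryOperatorType.LessThan,
        new CodePrimitiveExpression(4)));

// Action: this._retryCount = 0;
RuleStatementAction action = new RuleStatementAction(
    new CodeAssignStatement(
        new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "_retryCount"),
        new CodePrimitiveExpression(0)));

Rule rule = new Rule("ResetRetryCount");
rule.Condition = condition;
rule.ThenActions.Add(action);

RuleSet ruleSet = new RuleSet("SampleRuleSet");
ruleSet.Rules.Add(rule);
```

The designer produces exactly this kind of object graph for us and serializes it into the .rules file.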
When we first start a workflow, we won't have any rule sets defined. A workflow can contain multiple rule sets, and each rule set will contain one or more rules. Although a Policy activity can only reference a single rule set, we might design a workflow with multiple Policy activities inside, and need each of them to reference a different rule set. Clicking on the New button in the dialog will launch the Rule Set Editor dialog shown below. The Rule Set Editor exposes many options for rules and the rule set. For now, we are going to concentrate on conditions and actions. Let's suppose we are defining a policy to "score" a software bug. The score will determine if we need to send notifications to team members, who will jump into immediate action. The bug will be a member field in our workflow class, and will expose various properties (Priority, IsOpenedByClient) that we will inspect to compute a score. Our first three rules will determine a bug's base score by looking at the bug's Priority property. We can start by clicking the "Add Rule" button in the dialog. Our first rule is: IF this.Bug.Priority == BugPriority.Low THEN this.Score = 0. In the rules dialog, we can give this rule the meaningful name SetScore_LowPriority. The IF conditions in our rules are just like the conditions we examined earlier in the chapter. We can use tests for equality, inequality, greater than, or less than. We can call methods, and index into arrays. As long as the IF condition's expression returns true or false, and can be represented by types in the System.CodeDom namespace, we will have a valid expression. The actions in our rules have even greater flexibility. An action is not restricted to returning a Boolean value. In fact, most actions will perform assignments and manipulate fields and properties in a workflow. In our first rule, we've used a rule action to assign an initial score of 0.
Remember the action property on a rule is a collection, meaning we can specify multiple actions for both the Then actions and the Else actions. We will need to place each action on a separate line inside the action text box. If we have three possible bug priorities (Low, Medium, and High), we'll need a rule to set the bug score for each priority level. Once we've entered the three rules, the Rule Set Editor should look like the figure below. With three rules in place, we could execute our workflow and watch the Policy activity compute our bug score. These rules, like the declarative conditions we used earlier, will live in a .rules file. When a rule set executes, it will process each rule by evaluating the rule's condition and executing the Then or Else actions. The rule set continues processing in this fashion until it has processed every rule in the rule set, or until it encounters a Halt instruction. Halt is a keyword we can place in a rule's action list that will stop the rule set from processing additional rules. There is still one more rule we would like to add to our rule set, however. The rule should say, "If the bug's score is greater than 75, then send an email to the development team". This rule presents a potential problem, because it would not work if the rule set evaluates this new rule first - we need to set the score for a bug before we test it. We can achieve this goal using rule priorities. Each rule has a Priority property of type Int32. We can see this property exposed in the Rule Set Editor screenshot above. Before executing rules, the rule set will sort its rules into a list ordered by priority. A rule with a high priority value will execute before a rule with a low priority value; the rule with the highest priority in the rule set will execute first. Rules of equal priority will execute in the alphabetic order of their Name property.
To make sure our notification rule is evaluated last, we need to assign the rule a priority of 0, and ensure all other rules have a higher priority. In the figure below, we've given our first three rules a priority of 100. The number we use for priority is arbitrary, as it only controls the relative execution order. All the rules we've written so far are independent. Our rules do not modify any fields in the workflow that other rules depend upon. Suppose, however, we had a rule that said "If the IsSecurityRelated property of the bug is true, set the bug Priority to High". Obviously, the first three rules we wrote depend on the Priority property, so this new rule could invalidate a score computed by an earlier rule. One solution to this problem would be to set the relative priorities of the rules to ensure the "set score" rules always execute after any rule that might set the Priority field. However, this type of solution isn't always feasible as the rule set grows larger and dependencies between the rules become more entangled. Fortunately, Windows Workflow can simplify this scenario. If you look back at the class diagram from earlier, you'll notice the RuleCondition class carries a GetDependencies method, and the RuleAction class carries a GetSideEffects method. These two methods allow the rules engine to match the dependencies of a rule (the fields and properties the rule's condition inspects to compute its value) against the side effects of other rules (the fields and properties a rule's action modifies). When an action produces a side effect that matches a dependency of a previously executed rule, the rules engine can go back and re-evaluate the previous rule. In rules engine terminology, we call this feature forward chaining. By default, the forward chaining in Windows Workflow is implicit. The rules engine examines the expressions in each rule condition and each rule action to produce lists of dependencies and side effects. We can go ahead and write our rule without worrying about priorities, as shown in the figure below.
Now, if the workflow looks at a bug with the IsSecurityRelated property set to true, the action of the new rule will change the bug's Priority to High. The rules engine will know that three previous rules have a dependency on the Priority property and re-evaluate all three rules. All of this happens before the NotificationRule runs, so a bug with IsSecurityRelated set will create a score of 100, and the NotificationRule will invoke the SendNotification method. We'll see these exact steps in more detail later. Implicit chaining is a great feature because we don't have to calculate dependencies manually. For implicit chaining to work, however, the rules engine has to be able to parse the rule expression. If we have a rule that calls into our compiled code, or into third-party code, the rules engine can no longer resolve the dependencies. In these scenarios, we can take advantage of chaining using metadata attributes or explicit actions. Let's suppose the logic we need to execute for a rule is complicated - so complicated we don't feel comfortable writing all the logic declaratively. What we can do is place the logic inside a method in our code-behind file, and invoke the method from our rule. As an example, let's write the last rule like the following:

    IF this.Bug.IsSecurityRelated THEN this.AdjustBugForSecurity()

The method call presents a problem if we need forward chaining. The rules engine will not know what fields and properties the AdjustBugForSecurity method will change. The good news is that Windows Workflow provides three attributes (RuleRead, RuleWrite, and RuleInvoke) we can use to declare a method's dependencies and side effects. If a method does not carry one of these three attributes, the rules engine will assume the method does not read or write any fields or properties. If we want forward chaining to work with our method, we'll need to decorate it with a RuleWrite attribute. The RuleWrite attribute uses a syntax similar to the property binding syntax in Windows Workflow.
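A sketch of the attributed method (the attribute path follows the Bug/Priority form discussed next; the method body is an assumption):

```csharp
// RuleWrite declares this method's side effect for the rules engine:
// it writes to the Priority property of the Bug property.
[RuleWrite("Bug/Priority")]
private void AdjustBugForSecurity()
{
    // ...arbitrarily complex logic can live here...
    this.Bug.Priority = BugPriority.High;
}
```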
This particular RuleWrite attribute declares that the method will write to the Priority property of the Bug property. The rules engine will also parse a wildcard syntax, so that [RuleWrite("Bug/*")] would tell the engine that the method writes to all the fields and properties on the bug object. The RuleRead attribute uses this same syntax, except we would use a RuleRead attribute on methods called from the conditional part of our rules. The RuleRead attribute tells the engine about the method's dependencies. We can use the RuleInvoke attribute when our method calls into other methods. With a RuleInvoke attribute, we tell the rules engine that the method called from our rule will in turn call the SetBugPriorityHigh method. The rules engine will follow the lead and inspect the SetBugPriorityHigh method for attributes. In this example the engine will find a RuleWrite attribute, and forward chaining will continue to work. In some scenarios, we may need to call into third-party code from our rules. This third-party code may have side effects, but since we do not own the code, we cannot add a RuleWrite attribute. In this scenario, we can use an explicit Update statement in our rule actions. For example, if we used an explicit Update statement with our AdjustBugForSecurity method instead of a RuleWrite attribute, we'd write our rule actions like the following:

    this.AdjustBugForSecurity()
    Update("this/Bug/Priority/")

Note that the Update statement syntax is again similar to our RuleWrite syntax, and that there is no corresponding Read statement available. It is generally better to use the attribute-based approach whenever possible. This explicit approach is designed for scenarios when we cannot add method attributes, or when we need precise control over the chaining behaviour, as described below. The forward chaining behaviour of the rule set is powerful.
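For reference, the RuleInvoke arrangement described above might look like this sketch (the method bodies are assumptions):

```csharp
// RuleInvoke points the engine at the method we call, so the engine
// can inspect that method for its own RuleRead/RuleWrite attributes.
[RuleInvoke("SetBugPriorityHigh")]
private void AdjustBugForSecurity()
{
    SetBugPriorityHigh();
}

[RuleWrite("Bug/Priority")]
private void SetBugPriorityHigh()
{
    this.Bug.Priority = BugPriority.High;
}
```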
We can execute rules and have them re-evaluated even when we don't know their interdependencies. However, chaining can sometimes produce unpleasant results. For instance, it is possible to put the rules engine into an infinite loop. It is also possible that we will write a rule that we never want the engine to re-evaluate. Fortunately, there are several options available to tweak rule processing. The first option is the ChainingBehavior property on the RuleSet class. The Rule Set Editor exposes this property with a drop-down list labelled "Chaining". The available options are "Sequential", "Explicit Update Only", and "Full Chaining". "Full Chaining" is the default rule set behaviour, and provides the behaviour we've described so far. The "Explicit Update Only" option tells the rules engine not to use implicit chaining. In addition, the rules engine will ignore RuleWrite and RuleRead attributes. The only mechanism available for chaining is the explicit Update statement we described in the last section. Explicit updates give us precise control over the rules that can cause a re-evaluation of previous rules. The "Sequential" option disables chaining altogether. A rule set operating with sequential behaviour will execute all its rules only once, and in the order specified by their respective Priority properties (of course, a Halt statement could still terminate rule processing before all rules complete execution). Another option to control chaining is the ReevaluationBehavior property of a rule. The Rule Set Editor exposes this property with a drop-down list labelled "Reevaluation" next to each rule. The available options are "Always" and "Never". "Always" is the default behaviour for a rule. The rules engine will always re-evaluate a rule with this setting, if the proper criteria are met (this setting would not override a rule set chaining behaviour of "Sequential", for instance). "Never", as the name implies, turns off re-evaluation.
It is important to know that the rules engine only considers a rule "evaluated" if the rule executes a non-empty action. For example, consider a rule that has Then actions but no Else actions, like the rules we've defined. If the rule is evaluated and its condition returns false, the rule is still a candidate for re-evaluation, because the rule did not execute any actions. Given the various chaining behaviours, and the complexities of some real-world rule sets, we will find it useful to see what is happening inside the rules engine. As we discussed in the article "Hosting Windows Workflow", Windows Workflow takes advantage of the .NET 2.0 tracing API and its own built-in tracking features to supply instrumentation information. In this section, we will explore the tracing and tracking features of the rules engine. Refer to the previous article for general details on tracing and tracking. To set up tracing for the rules engine, we need an application configuration file with some trace switches set. Such a configuration file can log all trace information from the rules engine to a WorkflowTrace.log file, which will appear in the application's working directory. The amount of detail provided by the trace information can be useful for tracking down chaining and logic problems in our rule sets. The rule set we've been working with in this chapter will produce the following trace information (some editing applied).
    Rule "SetScore_HighPriority" Condition dependency: "this/Bug/Priority/"
    Rule "SetScore_HighPriority" THEN side-effect: "this/Score/"
    Rule "SetScore_LowPriority" Condition dependency: "this/Bug/Priority/"
    Rule "SetScore_LowPriority" THEN side-effect: "this/Score/"
    Rule "SetScore_MediumPriority" Condition dependency: "this/Bug/Priority/"
    Rule "SetScore_MediumPriority" THEN side-effect: "this/Score/"
    Rule "AdjustBugForSecurity" Condition dependency: "this/Bug/IsSecurityRelated/"
    Rule "AdjustBugForSecurity" THEN side-effect: "this/Bug/Priority/"
    Rule "NotificationRule" Condition dependency: "this/Score/"
    Rule "SetScore_HighPriority" THEN actions trigger rule "NotificationRule"
    Rule "SetScore_LowPriority" THEN actions trigger rule "NotificationRule"
    Rule "SetScore_MediumPriority" THEN actions trigger rule "NotificationRule"
    Rule "AdjustBugForSecurity" THEN actions trigger rule "SetScore_HighPriority"
    Rule "AdjustBugForSecurity" THEN actions trigger rule "SetScore_LowPriority"
    Rule "AdjustBugForSecurity" THEN actions trigger rule "SetScore_MediumPriority"

This first part of the trace will provide information about dependency and side effect analysis. By the end of the analysis, we can see which actions will trigger the re-evaluation of other rules. Later in the trace, we can observe each step the rule engine takes when executing our rule set.

    Rule Set "BugScoring": Executing
    Evaluating condition on rule "SetScore_HighPriority".
    Rule "SetScore_HighPriority" condition evaluated to False.
    Evaluating condition on rule "SetScore_LowPriority".
    Rule "SetScore_LowPriority" condition evaluated to False.
    Evaluating condition on rule "SetScore_MediumPriority".
    Rule "SetScore_MediumPriority" condition evaluated to True.
    Evaluating THEN actions for rule "SetScore_MediumPriority".
    Evaluating condition on rule "AdjustBugForSecurity".
    Rule "AdjustBugForSecurity" condition evaluated to True.
    Evaluating THEN actions for rule "AdjustBugForSecurity".
    Rule "AdjustBugForSecurity" side effects enable rule "SetScore_HighPriority" reevaluation.
    Rule "AdjustBugForSecurity" side effects enable rule "SetScore_LowPriority" reevaluation.
    Rule "AdjustBugForSecurity" side effects enable rule "SetScore_MediumPriority" reevaluation.
    Evaluating condition on rule "SetScore_HighPriority".
    Rule "SetScore_HighPriority" condition evaluated to True.
    Evaluating THEN actions for rule "SetScore_HighPriority".
    Evaluating condition on rule "SetScore_LowPriority".
    Rule "SetScore_LowPriority" condition evaluated to False.
    Evaluating condition on rule "SetScore_MediumPriority".
    Rule "SetScore_MediumPriority" condition evaluated to False.
    Evaluating condition on rule "NotificationRule".
    Rule "NotificationRule" condition evaluated to True.
    Evaluating THEN actions for rule "NotificationRule".

There is a tremendous amount of detail in the trace. We can see the result of each condition evaluation, and which rules the engine re-evaluates due to side effects. These facts can prove invaluable when debugging a misbehaving rule set. A more formal mechanism to capture this information is to use a tracking service, which we cover in the next section. WF provides extensible and scalable tracking features to monitor workflow execution. One tracking service WF provides is a SQL Server tracking service that records events to a SQL Server table. The default tracking profile for this service records all workflow events. Although the tracking information is not as detailed as the trace information, tracking is designed to record information in production applications while tracing is geared for debugging. To enable tracking, we'll need a tracking schema installed in SQL Server, and an application configuration file to configure tracking. The following configuration file will add the tracking service to the WF runtime and point to a WorkflowDB database on the local machine.
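A configuration sketch for the tracking service; the connection string details are assumptions, and the assembly identity is the standard WF 3.0 strong name:

```xml
<configuration>
  <configSections>
    <section name="WorkflowRuntime"
             type="System.Workflow.Runtime.Configuration.WorkflowRuntimeSection, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>
  <WorkflowRuntime>
    <Services>
      <!-- Records workflow and rule events to the WorkflowDB database -->
      <add type="System.Workflow.Runtime.Tracking.SqlTrackingService, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           ConnectionString="Data Source=localhost;Initial Catalog=WorkflowDB;Integrated Security=SSPI;" />
    </Services>
  </WorkflowRuntime>
</configuration>
```

The host would pick up this section by constructing the runtime with new WorkflowRuntime("WorkflowRuntime").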
If we run our bug scoring workflow with the above tracking configuration, we can pull out rule-related tracking information. When the workflow completes, we can pass the workflow's instance ID to a query method and retrieve the rule tracking information. Notice that to retrieve the rule tracking events, we need to dig into the user data associated with a UserTrackingRecord. The query produces output that includes the result of each rule evaluation. Earlier, we mentioned that one of the advantages of using declarative rules is that we can dynamically modify rules and rule sets at runtime. If these rules were specified in code, we'd have to recompile and redeploy an application. With WF, we can use the WorkflowChanges class to alter an instance of a workflow. Given an instance of our bug scoring workflow, we can initialize a new WorkflowChanges object with the workflow definition, and then find the bug scoring rule set by name via a RuleDefinitions instance. Once we have our rule set, we can iterate through its rules and make changes. For example, we can turn off the "AdjustBugForSecurity" rule; we can enable and disable rules on the fly by toggling the Active property of a rule. We can make changes that are even more dramatic to our notification rule, such as changing the rule's conditional expression from this.Score > 75 to this.Score > 120. Expressions can be tricky to manipulate, but remember the .rules file will contain an XML representation of the CodeDom objects that make up the rule. We can look inside the file to see how the condition is built for the NotificationRule. Looking at the XML, we can see that we need to replace the CodePrimitiveExpression assigned to the Right property of the CodeBinaryOperatorExpression. Using the CodeDom types we could replace the condition, modify actions, and even build new rules on the fly.
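A condensed sketch of the dynamic update described above; instance is assumed to be a running WorkflowInstance, and the rule set and rule names come from the bug scoring example:

```csharp
WorkflowChanges changes = new WorkflowChanges(instance.GetWorkflowDefinition());

// The rule definitions hang off the root activity as a dependency property.
RuleDefinitions definitions = (RuleDefinitions)changes.TransientWorkflow
    .GetValue(RuleDefinitions.RuleDefinitionsProperty);
RuleSet ruleSet = definitions.RuleSets["BugScoring"];

foreach (Rule rule in ruleSet.Rules)
{
    if (rule.Name == "AdjustBugForSecurity")
    {
        rule.Active = false;  // turn the rule off for this instance only
    }
}

instance.ApplyWorkflowChanges(changes);
```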
The modifications the code makes will apply to one specific instance of a workflow. In other words, we aren't changing the compiled workflow definition. If we want to turn the security rule off for all workflows in the future, we'd either have to run this code on every bug scoring workflow we create, or modify the rule set in the designer and recompile. In this article, we've covered conditions and rules in Windows Workflow. There are several activities in WF featuring conditional logic, including the powerful ConditionedActivityGroup. The purpose of the Windows Workflow Policy activity is to execute sets of rules. These rules contain declarative knowledge, and we can prioritize rules and use forward-chaining execution semantics. By writing out our business knowledge in declarative statements instead of procedural code, we gain a great deal of flexibility. We can track and trace rules, and update rule sets dynamically. Windows Workflow is a capable rules engine.

Article by K. Scott Allen. Comments? Questions? Bring them to my blog.
http://odetocode.com/articles/458.aspx
Consider that you have created a Fiori application (Smart or not) with multiple filters and tables. In such a case you might want to help users by providing default values for various filters, table columns, etc. You can achieve that by creating Global Variants. But users still have to explicitly select the Global Variant you created and set it as their default. There is no SAP-provided way as of now to set a Global Variant as the default for all users. Hope SAP provides this feature one day. Till then, here is the programmatic solution. Important note: like many of my blogs, this is a hack, which means this is not an SAP-documented supported scenario. So SAP does not guarantee that these APIs will work seamlessly across upgrades. Thanks Prasita Prabhakaran for pointing that out. So let's get started! Step 1. Create a new Workbench Transport (TCode SE10) in your Gateway system. This is used to store the Global Variant that you are going to create and move it all the way to Production. Step 2. Open the Fiori application under consideration. Set the application state by selecting various filter values, table columns, layout widths, and so on. Step 3. Save this Variant as a Public Variant by selecting the checkbox as shown below. The moment you check this box, you will be prompted for a transport request. Select the transport request you created in Step 1. Step 4. At this point, you have the Global Variant and it is available to all users. Now you need to set this as the default for everyone. First we need to find the technical id of this Variant. Refresh the application with Developer Tools open. Under the Network tab, filter the requests by the search term 'lrep/flex'. Under the Preview tab, open the response. Under 'changes', there will be two entries with 'fileType' as "variant". The first entry is for the SAP-delivered "Standard" variant. The second one is for the new Global Variant you created in Step 3. Copy the content under the property "fileName".
This is the technical name of your Global Variant. Step 5. Open the Fiori application where you need to set the default Variant. If it is a free-style application, you can do it in the onAfterRendering event of the Controller. If it is an OVP, you need to create a custom controller and write it in the onAfterRendering event handler.

onBeforeRendering: function () {
    // Get a reference to a control which has Variant management
    var oSmartFilter = this.getView().byId("ovpGlobalFilter");
    // Set the Global Variant as the current Variant
    oSmartFilter.getVariantManagement().setCurrentVariantId("id_1535046664297_171_page");
}

Step 6. If the user has already defaulted a Variant, then you do not want to overwrite that Variant. So ensure that you set the default Variant only if the user does not have a default Variant. Check the self-explanatory code below.

onBeforeRendering: function () {
    // Get a reference to a control which has Variant management
    var oSmartFilter = this.getView().byId("ovpGlobalFilter");
    // Ensure that there is no default Variant set by the user.
    // If no variant is set, the default variant key is "*standard*".
    var sDefaultVariantKey = oSmartFilter.getVariantManagement().getDefaultVariantKey();
    if (sDefaultVariantKey !== "*standard*") {
        // The user has chosen a default Variant; do not override it.
        return;
    }
    // Set the Global Variant as the current Variant
    oSmartFilter.getVariantManagement().setCurrentVariantId("id_1535046664297_171_page");
}

Hope it was helpful!! Approach 2 (update on Feb 19th, 2019): If you are explicitly using the SmartVariantManagement control in your application, you can do it in a better way by enhancing the SmartVariantManagement control. Step 1. Create a 'controls' folder inside your 'webapp' folder and create a file by the name 'SmartVariantManagement.js'. Step 2. Enhance the SmartVariantManagement control as below. Copy and paste the code below into SmartVariantManagement.js. Ensure that <your component name> is replaced by your actual component name.
sap.ui.define(
    ['sap/ui/comp/smartvariants/SmartVariantManagement'],
    function (SmartVariantManagement) {
        return SmartVariantManagement.extend("<your component name>.controls.SmartVariantManagement", {
            metadata: {
                properties: {
                    fallbackDefaultVariant: {
                        type: "string"
                    }
                }
            },
            renderer: function (oRm, oControl) {
                SmartVariantManagement.getMetadata().getRenderer().render(oRm, oControl);
            },
            _getDefaultVariantKey: function () {
                var defaultVariant = "";
                if (this._oControlPersistence) {
                    defaultVariant = this._oControlPersistence.getDefaultVariantIdSync();
                    if (defaultVariant === "*standard*" || defaultVariant === "") {
                        // No default variant was set by the user
                        defaultVariant = this.getFallbackDefaultVariant();
                    }
                }
                return defaultVariant;
            }
        });
    }
);

Step 3. Add the namespace for your custom control in your XML view:

xmlns:cp="<your component name>.controls"

Step 4. Replace SmartVariantManagement with your new control:

<!--<smartVariantManagement:SmartVariantManagement-->
<cp:SmartVariantManagement

Step 5. Pass your default variant id in the above declaration as below. That's it!

Great stuff Krishna. What a shame SAP are unable to provide such basic functionality for their standard Fiori apps... If you save the variant as a tile, it just adds a variantKey parameter to the URL. This can be configured into a tile in the designer, allowing you to default this variant for ALL users. However, it's a "hard" default - the user cannot manually change the default to something else, as the URL parameter overrules it.

Hi Krishna, thanks for the blog post. After creating a public Variant, every user sees it in the "Manage Variants" option and can delete it. Is there a way to limit the users' ability to delete a public Variant, so that only the owner can do it? Greetings, Georgi

Variant changes should be restricted to own variants, as described in the Fiori Guidelines. Yet, it seems the standard only restricts changes for non-key users; see SAP Notes 2655097, 2658662 and 2666625.
You can restrict variant changes manually.

Great advice, thank you so much 🙂

Thanks for sharing this blog!

Hi Krishna, is this method also applicable when extending standard apps (for example, the "Procurement Overview Page") to set a filter variant as default for everyone? I am asking because I tried using the same logic by extending the controller with an adaptation project, but it's not working. Now I doubt whether the above logic is applicable to standard apps, or limited to custom OVP apps only. Please confirm. Thanks, Rakesh
https://blogs.sap.com/2018/08/23/fiori-variant-management-set-a-variant-as-default-to-all-users/
C++ Classes and Objects

Programs in C++ are designed around classes and objects, since C++ is an object-oriented language. A class is a template from which objects are created; it groups similar objects and may contain fields, methods, etc. Here is an example of a class in C++:

class Employee
{
public:
    int emp_id;        // field or data member
    float emp_salary;  // field or data member
    string emp_name;   // field or data member
};

An object in C++ represents a real-world entity that has a state and a behavior. The state represents the data, and the behavior represents the functionality. An object is created at run time, so it is a runtime entity. Every member of a class can be accessed through an object, which is known as an instance of the class. Here is how an object is created in C++:

Employee emp; // creating an object of Employee

#include <iostream>
#include <string>
using namespace std;

class Employee
{
public:
    int emp_id;
    string emp_name;
};

int main()
{
    Employee emp; // creating an object of the Employee class
    emp.emp_id = 101;
    emp.emp_name = "Anand";
    cout << emp.emp_id << endl;
    cout << emp.emp_name << endl;
    return 0;
}

#include <iostream>
#include <string>
using namespace std;

class Employee
{
public:
    int emp_id;
    string emp_name;

    void save(int a, string b)
    {
        emp_id = a;
        emp_name = b;
    }

    void show()
    {
        cout << emp_id << " " << emp_name << endl;
    }
};

int main(void)
{
    Employee emp1;
    Employee emp2;
    emp1.save(101, "Anand");
    emp2.save(202, "Shipra");
    emp1.show();
    emp2.show();
    return 0;
}
http://www.phptpoint.com/cpp-classes-and-objects/
Markdown is awesome: you can show rich text without writing that ugly HTML code. On Flutter you have a great package for showing markdown: flutter_markdown. It is extremely easy to use in its basic "mode", but we'll also show some advanced features.

Installation

That's a no-brainer, just add to your pubspec.yaml:

dependencies:
  flutter_markdown: ^0.5.2

and do a good old

$ flutter pub get

then import the package like this:

import 'package:flutter_markdown/flutter_markdown.dart';
import 'package:http/http.dart' as http; // also needed for the example below

Showing a file

We'll use the text file on this repository (all credits to mxstbr), so we'll create a FutureBuilder and use http to get the text and give it to our Markdown renderer widget:

Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      backgroundColor: Colors.red,
    ),
    body: Center(
      child: FutureBuilder(
        future: getTextData(),
        builder: (context, snapshot) {
          if (snapshot.hasData) {
            // HERE we need to add the text renderer
          }
          return Container();
        },
      ),
    ),
  );
}

Future<String> getTextData() async {
  String url = '';
  var response = await http.get(url);
  return response.body;
}

To show the markdown content we just need to return this widget inside the builder:

return Markdown(data: snapshot.data);

The widget is already scrollable; if we need to add it to a scrollable parent we should use MarkdownBody instead.

Some advanced features

This package also includes:
- Image support in the form of a URL, absolute file or resources
- Selectable text
- Emoji support
- AutoLink support

and more. There, your natural evolution of WYSIWYG is here, you are welcome.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/theotherdevs/starting-with-flutter-showing-markdown-2fkb
lp:minetest-c55 Branch merges Related bugs Related blueprints

Branch information
- Owner: Minetest Developers
- Status: Development

Import details
This branch is an import of the HEAD branch of the Git repository at git://github.com/minetest/minetest.git. Last successful import was on 2017-03-18.

Recent revisions

- 5452. By Loïc Blot <email address hidden> on 2017-03-17
Reduce memory & function cost of Game class functions (#5406) GameRunData is passed on many game functions, or one of its attributes, whereas it's a member of the class. Remove it from function arguments and call the object directly from the concerned functions. This will reduce a little bit the Game class loop usage & a very little bit the memory usage (due to non-creation of pointers/references)

- 5451. By zeuner <email address hidden> on 2017-03-17
avoid crashing when accessing mapgen early (#5384)

- 5450. By red-001 <email address hidden> on 2017-03-17
Give CSM access to use `core.colorize()` (#5113)

- 5449. By Loïc Blot <email address hidden> on 2017-03-17
[CSM] Fix minimap problems (#5405) This fixes issue #5404

- 5448. By Loïc Blot <email address hidden> on 2017-03-17
[CSM] Add core.get_timeofday & core.get_day_count env calls (#5401)
* [CSM] Add core.get_timeofday & core.get_day_count env calls
* [CSM] Add core.get_node_level, core.get_node_max_level, core.find_node_near

- 5447. By Loïc Blot <email address hidden> on 2017-03-16
Fix indentation problem since merge resolution Github merge conflict resolution is not the best with indent

- 5446. By Loïc Blot <email address hidden> on 2017-03-16
[CSM] Add minimap API modifiers (#5399)
* Rename Mapper (too generic) to Minimap
* Add lua functions to get/set position, angle, mode for minimap
* Client: rename m_mapper to m_minimap
* Add minimap to core.ui namespace (core.ui.minimap)
* Add various functions to manage minimap (show, hide, toggle_shape)
* Cleanup trivial declaration in client

- 5445.
By Loïc Blot <email address hidden> on 2017-03-16 Add ModStorageAPI to client side modding (#5396) mod storage is located into user_path / client / mod_storage - 5444. By paramat <email address hidden> on 2017-03-16 Get biome list: Downgrade missing biome message to infostream It is harmless for a biome listed in an ore or decoration registration to be missing. Now that we are registering certain biomes or not based on options (such as floatland biomes), the biome lists in ore and decoration registrations trigger these error messages, avoiding these error messages would need a large amount of duplication of ore and decoration registrations. - 5443. By Sfan5 on 2017-03-16 Sneak: Fix various problems with sneaking Sneaking won't actually hover you in the air, releasing shift guarantees not falling down (same as in MC). Sneak-jump no longer goes higher than a normal jump (^ was required for this). Sneaking no longer avoids fall damage. You can sneak on partial nodes (slabs, sideways slabs) correctly. Sneaking doesn't "go out" as far anymore (0.29 instead of 0.4). Can't jump when sneaking out as far as possible (breaks the sneak ladder). Branch metadata - Branch format: - Branch format 7 - Repository format: - Bazaar repository format 2a (needs bzr 1.16 or later)
https://code.launchpad.net/~minetestdevs/minetest-c55/upstream
27 April 2010 11:44 [Source: ICIS news] LONDON (ICIS news)--DuPont's first-quarter 2010 net income soared to $1.14bn (€855m), more than doubling the $489m recorded in the same period last year, as its businesses continued to recover from the global recession, the company said. Sales for the three months ended 31 March increased by 23% year on year to $8.48bn, as volumes increased. The group also cited higher selling prices, lower raw material costs and benefits gained through currency as reasons for the improved results. "Our intense focus on customers, sustained R&D [research and development] investments and productivity improvements are delivering growth," said DuPont CEO Ellen Kullman. "Macro trends drove first-quarter demand for our science-based innovations, and DuPont was ready. The actions taken last year are benefiting the company as we emerge stronger in 2010," Kullman added. Looking forward, DuPont expected stronger sales growth and improved pre-tax operating margins as global economic improvements continued, with particularly strong demand in Asia Pacific.
http://www.icis.com/Articles/2010/04/27/9353926/duponts-q1-net-income-more-than-doubles-to-1.14bn.html
I would like to know the typecasting procedure in C++. What is the way to convert a double to an integer in C++? A sample code would be much appreciated. Thanks

As in some other programming languages, typecasting in C++ is possible and quite easy to do. As far as your query is concerned, you asked about double-to-int casting in C++.

Either with an explicit (C-style) conversion:

#include <iostream>
using namespace std;

int main()
{
    double x = 1.5;
    // Explicit conversion from double to int
    int sum = (int)x + 1;
    cout << "Sum = " << sum;
    return 0;
}

Or with the static_cast operator:

#include <iostream>
using namespace std;

int main()
{
    float f = 3.5;
    // using the cast operator
    int b = static_cast<int>(f);
    cout << b;
}

Both programs above print an integer value. I hope this helps.
https://kodlogs.com/37652/how-to-convert-double-to-int-c
--- "John W. Eaton" <address@hidden> wrote: > On 31-Jan-2008, Matthias Brennwald wrote: > > | I can't offer a solution to your problem, but I can add a bit to the > | confusion. Consider this (with Octave 3.0 on Ubuntu Linux 7.10): > | > | octave:1> global x = [1 2 3] > | octave:2> clear x > | octave:3> x > | error: `x' undefined near line 33 column 1 > | > | So, it looks like x has been cleared and does not exist anymore. But, if > | I continue with the following, this seems not to be the case: > | > | octave:4> global x = [9 9 9] > | octave:5> x > | x = > | > | 1 2 7 > | > | There are two things that confuse me: > | 1. Why is x not equal to [9 9 9]? > | 2. Why does Octave remember the previous value of x, although it has > | been cleared previously? > | > | I don't believe this is a bug, because it is so fundamental. I guess > | this has something to do with me (and others) not understanding well how > | things work. Can someone enlighten me? > > The statement > > global x > > creates a local variable called X, and, if it does not already > exist, a variable X in the global namespace. The local variable X is > linked to the global variable X. If you then write > > clear x > > you are clearing the local variable only. You need to use > > clear all > > or > > clear global x > > to remove X from the global namespace. > > Once a global variable is initialized with a statement like > > global x = 13 > > it can't be initialized again unless it is cleared from the global > namespace first. > > Does that make it any clearer? > > jwe > _______________________________________________ Actually, it looks even more complicated now. What does "The local variable X is linked to the global variable X" exactly mean ? If in a given scope there are two entities named "X", how does one differentiate between them ? In Perl, for example, global variables can be disambiguated using package name, e.g. 
our $x; # global variable, let's assume we're in package "main"
$x = 1;
{
    my $x; # lexically scoped variable
    $x = 2;
    warn "\$x=$x";             # should print 2
    warn "\$main::x=$main::x"; # should print 1
}

Thanks,
Sergei.
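As an aside (my addition, not from the original thread): Python draws the same local-vs-global distinction that the Perl snippet above illustrates, and the module-level name can be inspected much like Perl's $main::x:

```python
x = 1  # module-level ("global") variable

def f():
    x = 2  # lexically local variable, shadows the module-level x
    # globals() exposes the module namespace, analogous to $main::x in Perl
    return x, globals()['x']

print(f())  # (2, 1)
```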
https://lists.gnu.org/archive/html/help-octave/2008-02/msg00013.html
This is a C program to print a character in reverse case, i.e., the program reads a character from the keyboard and then prints that character in lower case if it was entered in upper case, or in upper case if it was entered in lower case. This program uses the C library functions islower(), isupper(), toupper() and tolower(). All these functions are declared in the header file <ctype.h>.

C program to print character in reverse case

#include <stdio.h>
#include <ctype.h>

int main()
{
    char x;
    printf("Enter an alphabet :");
    x = getchar(); // read a character from keyboard
    if (islower(x))
        putchar(toupper(x)); // change to uppercase
    else
        putchar(tolower(x)); // else change to lowercase
    return 0;
}

OUTPUT
Enter an alphabet : d
D
Enter an alphabet : y
Y

Explanation of the program

In this program, C library functions such as islower(), isupper(), toupper() and tolower() are used, which are declared in the header file <ctype.h>. That is why it is included after the standard header file <stdio.h>, in which input/output functions like printf are declared.
http://www.trytoprogram.com/c-examples/c-program-to-print-character-in-reverse-case/
Adding Cross-Site Scripting Protection to ASP.NET 1.0 Scott Hanselman Chief Architect Corillian Corporation November 2003 Summary: ASP.NET 1.1 added the ValidateRequest attribute to protect your site from cross-site scripting. What do you do, however, if your Web site is still running ASP.NET 1.0? Scott Hanselman shows how you can add similar functionality to your ASP.NET 1.0 Web sites. (12 printed pages) Contents The Problem C#-Eye for the IL Guy HttpModule Programmer Intent Installation and Configuration The Results Conclusion The Problem I've got a customer that has deployed a site on Microsoft® ASP.NET and the Microsoft® .NET Framework 1.0. It's a large site, and they are a large customer, and as a large customer they tend to move, well, slow. We were in the middle of a large deployment when ASP.NET/Framework 1.1 came out. The team felt that it was too risky to move everything over to ASP.NET/Framework 1.1 so close to the finish line. So we decided to move to ASP.NET/Framework 1.1 later in the year. However, since we build complex e-banking Web sites that cross many lines of business and deal with folks' money, security is job #1 (or job #0 if you're zero based). The client has a requirement that we deal with cross-site scripting (often called "XSS") attacks aggressively. XSS is a particularly sinister kind of hacking, where an l33t hx0r (elite hacker) or a "script kiddie" tries to retrieve personal information or fool a site into doing something it shouldn't do by entering JavaScript into a Web Form, or by encoding the script into a parameter in the URL. A simple example is a Web Form that has a single text box and a single button. The user enters their name into the text box and submits the form. The page then prints out "Hello firstname" by string concatenation, String.Format, a Response.Write or through a server-side label. Figure 1. 
Entering text; seems safe enough

Since the page takes the user's input and directly "regurgitates" it, if I entered a swear word, I'd get a different kind of greeting! But what happens if instead of entering their name, the user enters a script fragment like "<script>alert('bad stuff happens');</script>." The code behind looks like this:

if (this.IsPostBack) Response.Write("Hello " + this.TextBox1.Text);

You can see that the contents of the text box will be written directly out to the response stream and the JavaScript will be evaluated on the user's browser! This is a trivial example, but imagine if the malicious JavaScript contained code to access the user's cookies collection or redirect a form post to another site?

Figure 2. Entering JavaScript where text is expected

Figure 3. JavaScript executes on response

For simplicity's sake, we'd rather not build extra complexity into our Web tier or business logic to deal with someone entering JavaScript into a form field or some other chicanery. We'd like to deal with XSS in some central way, perhaps as a filter, earlier in the HTTP worker request chain, certainly before the actual page executes. Well, ASP.NET 1.1 includes a new @Page directive to do just this! Input validation is turned on by default, and can be controlled with the ValidateRequest attribute of the @Page directive.

<%@ Page language="c#" Codebehind="WebForm1.aspx.cs" ValidateRequest="true" AutoEventWireup="false" Inherits="Junk.WebForm1" %>

ASP.NET 1.1 request validation catches malicious scripting code in the Cookie Collection, the QueryString, and Forms Posts. It checks all input data against a list of potentially dangerous values. In case you're worried that this kind of validation will impair functionality for your users in some way, let me assure you that if your users are entering JavaScript into your form fields, they're not the kind of users you want. ValidateRequest=true won't hamper your users' experience in any way.

If malicious script is detected in some input data, an HttpRequestValidationException is thrown. You can certainly catch this error in the Global.asax and replace the default error page with your own personal threats if you'd like. It's great that ASP.NET 1.1 has included this powerful filter for free, but it doesn't help me and my client's pending ASP.NET 1.0 site launch. How can I protect against cross-site scripting with ASP.NET 1.0 while I wait for my client to upgrade? We kicked around a few ideas like writing some regular expressions and searching the HTTP headers in Application_BeginRequest, but none of our ideas felt good. I also reminded myself that I work for an e-finance company, not a company that makes components to prevent cross-site scripting attacks. No need for me to attempt to reinvent the wheel. Then I realized that I had the solution sitting right in front of my face; ASP.NET 1.1 had already solved this problem, I just needed to solve the problem backwards. So, I decided to back-port the existing 1.1 behavior to ASP.NET 1.0.

C#-Eye for the IL Guy

In order to explore what was going on inside ASP.NET 1.1, I needed a tool that was a little higher level than ILDASM.EXE, the .NET disassembler included with the .NET Framework SDK. Were I a smarter person, perhaps I could take System.Web apart with only ILDASM, but reading IL is non-trivial and I had a schedule. I found that tool in Lutz Roeder's Reflector. Reflector is an object browser that gives you a great tree view of all the namespaces and classes that the Base Class Library (BCL) provides.

Figure 4. Looking at the CrossSiteScriptingValidation class in Reflector

Figure 5. Exporting source code for the CrossSiteScriptingValidation class

However, where Reflector really shines is in its ability to decompile .NET assemblies and present the results, not as IL, but as equivalent C# or Microsoft® Visual Basic® .NET code.
Of course, some obvious fidelity is lost in the process, such as local variable names, but that's life (and code). So, I ran around in System.Web until I found an internal class called CrossSiteScriptingValidation. Sounded promising. This is where the tough questions are answered, such as IsDangerousString or IsDangerousScriptString. All the methods in CrossSiteScriptingValidation return booleans; true on most qualifies as dangerous. But what strings are we evaluating and who calls this utility class? It seemed to me that the answer would lie in HttpRequest, as we are attempting to validate all requests. HttpRequest contains collections for Form variables, Cookies, and the QueryString. These objects are of type NameValueCollection (Cookies is actually an HttpCookieCollection, which has some trivial extra stuff), so if your URL is, then the QueryString collection would contain an entry for the name ID with the value 3. HttpRequest has a public get property for this collection, so when you code Request.QueryString, you're accessing that property. Here's where it all happens. When the collection is accessed for the first time, it's checked for dangerous strings through ValidateNameValueCollection. If an HttpRequestValidationException isn't thrown, the now valid QueryString is returned and a flag is set to avoid the overhead of checking the collection again.

if (this._flags[1] != null)
{
    this._flags[1] = 0;
    this.ValidateNameValueCollection(this._queryString, "Request.QueryString");
}
return this._queryString;

Validation code like this is all through the HttpRequest collections in ASP.NET 1.1. Of course, since I want a solution that runs on ASP.NET 1.0, and I can't override the behavior of the Forms, QueryString and Cookie collections, I'll need to find another opportunity within the call stack to validate the collections.

HttpModule
The IHttpModule interface consists of only two methods, Init() and Dispose(). Init() is called once by ASP.NET with the HttpApplication as the only parameter, and is my opportunity to hook up any event handlers to the application. For performance reasons, I wanted to make sure that my cross-site scripting validation code only ran once and ran before and independently from the page and associated business logic. The HttpApplication has these events that fire in the order shown:

- BeginRequest
- AuthenticateRequest
- AuthorizeRequest
- ResolveRequestCache
- [A handler (a page corresponding to the request URL) is created at this point.]
- AcquireRequestState
- PreRequestHandlerExecute
- [The handler is executed. In our case the Page]
- PostRequestHandlerExecute
- ReleaseRequestState
- [Response filters, if any, filter the output.]
- UpdateRequestCache
- EndRequest

It looks like the time to run the validator is during the PreRequestHandlerExecute event handler, just before the page itself. If I find something potentially dangerous and throw an exception, the page will never run. This is the desired behavior. So, I created a class called ValidateInput that implements IHttpModule and in the Init() hooks up an EventHandler for PreRequestHandlerExecute to call my custom function, ValidateRequest. It will be inside ValidateRequest where I'll call the functions I'll bring over from ASP.NET 1.1. I'll also add a quick version check to make sure no one tries to use this module on ASP.NET 1.1. I'd hate to have someone forget to remove this module when we upgrade to 1.1.

public class ValidateInput : IHttpModule
{
    HttpContext context;
    HttpApplication application;

    public ValidateInput() {}

    public void Init(HttpApplication app)
    {
        Version v = System.Environment.Version;
        if (v.Major != 1 || v.Minor != 0)
            throw new NotSupportedException(@"The ValidateInput HttpModule is not supported on this version of ASP.NET.
Remove it from your Web.config file!");

        app.PreRequestHandlerExecute += new EventHandler(this.ValidateRequest);
    }

I hooked up PreRequestHandlerExecute to my class's ValidateRequest method. Since I can't hook into the Forms, QueryString, and Cookies collections, I'll need to do all the request validation here in order to make sure that only validated requests are passed to my Page handler.

public void ValidateRequest(Object src, EventArgs e)
{
    // Store away what may be useful during this Request...
    application = (HttpApplication)src;
    context = application.Context;

    this.ValidateNameValueCollection(context.Request.Form, "Request.Form");
    this.ValidateNameValueCollection(context.Request.QueryString, "Request.QueryString");
    this.ValidateCookieCollection(context.Request.Cookies);
}

In ValidateRequest I called my own implementations of ValidateNameValueCollection and ValidateCookieCollection. Each of them spins through the already parsed collections representing the Form POST data, including pre-parsed Cookies and the QueryString. It's important to know that the parsing of this HTTP header data and organizing it into NameValueCollections is safe, as any potentially malicious data from the request hasn't reached the Page handler or browser yet. Additionally, if I had chosen the BeginRequest application event instead of PreRequestHandlerExecute, I'd have had to parse the raw HTTP request myself. So, I get the best of both worlds: tedious parsing has been done for me (and is already in well-tested code) and the page hasn't executed yet, giving me time to possibly throw an exception and stop execution of the request. Next I pulled all the other helper functions into my new class, including IsDangerousExpressionString, IsDangerousOnString, IsDangerousScriptString, IsDangerousString, and IsAtoZ from Reflector. It's worth mentioning that the decompiled C# code that Reflector shows is actually a new C# representation of the IL contained in the assembly.
The local variable names have been changed, and what was once a loop may now be a series of goto and if statements. Don't judge the writer of the code from the IL representation! Remember that the compiler needs to take liberties when generating the final IL, and what's more important is the concept of programmer intent. I'll talk about this a little later below.

Figure 6. Looking at the IsAtoZ method

Now, we'll need a custom Exception class that derives from ApplicationException, aptly named HttpRequestValidationException. This coincidentally is the same name that ASP.NET 1.1 uses, but in a different namespace. This exception will be thrown if any potentially dangerous-looking script appears in the HttpRequest. Whether you choose to show the exception page or log the exception is up to you. Some might feel that a potential script attack is a significant event and may choose to handle this exception differently. Either way, be sure to have an exception-handling strategy in place.

Programmer Intent

I wanted to mention a little something about the intent of the programmer. What has really been decompiled here is the programmer's intent. We're not actually looking at the C# source code as the original writer wrote it. When decompiling to IL, then converting to a C# representation of that same IL, things change. For example, a bit of code from IsDangerousOnString looks like this in Reflector:

goto L_0045;
L_0040:
    index = (index + 1);
L_0045:
    if (index >= len)
    {
        goto L_005E;
    }
    if (CrossSiteScriptingValidation.IsAtoZ(s[index]))
    {
        goto L_0040;
    }

This is hard to read for the average programmer, but it correctly conveys the programmer's intent. But just what was that intent? We can "fold" the code back up only so far. It might have been a call to String.IndexOf that was in-lined for all we know. However, we can rewrite it like this (or a half dozen other ways) so that we might better understand it:

//Programmer intent: look for non-alphas...
while (index < len)
{
    if (!CrossSiteScriptingValidation.IsAtoZ(s[index]))
        break;
    index++;
}

Remember, "Gotos considered harmful" only applies to YOU, not the compiler! Note also that this code could have been expressed as a "for" loop or some other looping construct, and the intent is still correctly expressed.

Installation and Configuration

To install ValidateInputASPNET10 on the Web server, we'll need only to add it to the list of httpModules configured in our web.config. The assembly, in this case ValidateInputASPNET10.dll, needs to reside in the \bin folder of our site, and any other sites on our box that we wish to protect.

<configuration>
    <system.web>
        <httpModules>
            <add name="ValidateInput" type="Corillian.Web.ValidateInput,ValidateInputASPNET10" />
        </httpModules>
    </system.web>
</configuration>

The Results

When I add the HttpModule to the web.config, I'll be able to launch the same ASP.NET application without recompiling, since the HttpModule is its own assembly and Microsoft® Visual Studio® .NET project. On start up, ASP.NET will call Init() on the new ValidateInputASPNET10 HttpModule, and it will chain to the PreRequestHandlerExecute event. If I try to enter JavaScript into the Form (or QueryString or Cookies Collection) as before, I'm presented with this error message declaring an HttpRequestValidationException. Notice that part of the JavaScript is shown, but only part; we don't want the error message to output and execute the same JavaScript we are trying to protect ourselves from.

Figure 7. Protecting your Web site from script input

Note Remember, decompiling should be used primarily for debugging and your personal education. Be sure to be aware of intellectual property rules and remember that just because unobfuscated assemblies are easier to decompile than C++ applications, this doesn't give us carte blanche to swipe code.
If you're concerned about your code and intellectual property, take a look at the Dotfuscator Community Edition that ships with Visual Studio .NET 2003.

Conclusion

Cross-site scripting is one of the many types of hacks you need to worry about when creating ASP.NET Web sites. Hackers can use this technique to execute code on the server, possibly leading to loss of data, or worse, theft of customer information. Defensive programming demands you protect yourself from these attacks. Adding validation to input, as done in this article, is a first step towards protecting your Web site.

About the Author

Scott Hanselman is Chief Architect at the Corillian Corporation, an e-finance enabler. He has over a decade of experience developing software in C, C++, Visual Basic, COM, and currently Visual Basic .NET and C#. Scott is proud to have been appointed the MSDN Regional Director for Portland, Oregon for the last three years, developing content for, and speaking at Developer Days and the Visual Studio .NET launch in both Portland and Seattle. Scott also spoke at the Microsoft® Windows Server™ 2003 and Visual Studio . This year Scott spoke at the Windows Server 2003 launch event in 4 PacWest cities, at TechEd in the U.S. and in Malaysia, and at ASPLive in Orlando. His thoughts on the Zen of .NET, Programming and Web services can be found at.
https://msdn.microsoft.com/en-us/library/ms972967.aspx
Hi all, I am enrolled in a beginners class and am having problems with methods. I keep getting a ".class expected" error and do not know how to fix it. The message appears at this line: total = Circle.area(int radius);

import javax.swing.JOptionPane;

public class Circle
{
    public static void main(String [] args)
    {
        // double area; ////area to be determined in a method
        double radius; //radius inputted by user
        String input;  //for holding user input
        double total;  //final area of a circle

        //Ask the user for the radius of a circle
        input = JOptionPane.showInputDialog("Please enter the radius of the circle ");

        //Convert the string input to a double
        radius = Double.parseDouble(input);

        // this calls the circle method
        total = Circle.area(int radius);

        JOptionPane.showMessageDialog(null, "The area of the circle is " + total);
    }

    /*
     * The Area method returns the area of the circle
     * @param rad the radius inputted by the user
     * @return The area of the circle
     */
    public static double area(double rad)
    {
        double area;
        area = Math.PI * (rad * rad);
        return area;
    }
}

Edited by mike_2000_17: Fixed formatting
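For what it's worth (my note, not part of the original post): the ".class expected" error comes from writing a type name inside a method call. A call passes only the variable; the parameter type belongs in the method declaration. A minimal sketch of the corrected call site (console output used instead of JOptionPane so it is self-contained):

```java
public class CircleFixed {

    // Same area() method as in the post
    public static double area(double rad) {
        return Math.PI * rad * rad;
    }

    public static void main(String[] args) {
        double radius = 2.0;
        // Pass the variable only -- writing area(int radius) here is
        // what triggers the ".class expected" compiler error
        double total = area(radius);
        System.out.println("The area of the circle is " + total);
    }
}
```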
https://www.daniweb.com/programming/software-development/threads/273472/what-am-i-doing-wrong
SNMP - Purpose. SNMP is a protocol for getting the status (e.g., CPU load, free memory, network load) of computing devices such as routers, switches and even servers. - Object descriptor, managed object. The client can provide a globally unique name such as cpmCPUTotal5secRev (the average CPU load of a Cisco device for the past 5 seconds) to indicate the information that it wants, then the server should return such information. Such a textual name is called the "object descriptor". The word "object" or "managed object" refers to the concept of CPU load. The actual CPU load in the device is called the "object instance". - Object identifier (OID). To make sure that each object descriptor is unique, actually it is defined using a list of integers such as 1.3.6.1.4.1.9.9.109.1.1.1.1.6. Each integer is like a package in Java. For example, the integers in 1.3.6.1.4.1.9 represent iso (1), org (3), dod, i.e., department of defense (6), internet (1), private (4), enterprises (1), cisco (9) respectively. This allows the Internet authority to delegate the management of the namespace hierarchically: to private enterprises and then to Cisco, which can further delegate to its various divisions or product categories. Such a list of integers is called an "object identifier". This is the ultimate identification for the managed object. - Even though the object descriptor should be unique, it is useful to see the hierarchy. Therefore, usually the full list of object descriptors is displayed, such as iso.org.dod.internet.private.enterprises.cisco…cpmCPUTotal5secRev. - Why use integers instead of symbolic names? Probably to allow the network devices (with little RAM or CPU power) implementing SNMP to save space in processing. Symbolic names such as object descriptors can be used by humans in commands, but in the protocol's operation object identifiers are used. - In principle, the object a.b.c.d and the object a.b.c.d.e on a device have NO containment relationship.
That is, they are NOT like a Java object containing a child object. In fact, the value of each object in SNMP is basically a simple value (scalar) such as an integer or a string. The only relationship between them is their names.
- Identifying an instance. Now comes the most complicated concept in SNMP. Consider the concept of the number of bytes that have been received by a network interface on a router. This concept is an object. As a router should have multiple interfaces, there must be multiple instances of that object. Then, how can an SNMP client indicate to the SNMP server which instance it is interested in? The solution is more or less a kludge: allow the instance of, say, a.b.c.d, to represent a table (a compound, structural value), which contains rows (also compound, structural values) represented by a.b.c.d.e. Each row contains child object instances (with scalar values only). Each child object is called a "columnar object". For example, each row may contain three object instances: a.b.c.d.e.f, a.b.c.d.e.g, and a.b.c.d.e.idx. If you'd like to refer to the a.b.c.d.e.f instance in a particular row, you will write a.b.c.d.e.f.<index>. The meaning of the index is defined by a.b.c.d.e (the row). For example, it may be defined as finding the row in the table which contains a columnar object a.b.c.d.e.idx whose value equals <index>, then returning the columnar object a.b.c.d.e.f as the result.
- Note that this is the only situation where the value of an object can be a structure and where there is an object containment relationship in SNMP.
- What is confusing is that a.b.c.d.e.f is used both as an object identifier and as the lookup key to find the child instance in the row. Unlike other object identifiers, the identifier now represents an object containment relationship, so it must have a.b.c.d.e as the prefix; otherwise the server won't know which table to look into and what the definition for the index is.
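The table/row/index lookup described above can be illustrated with a toy resolver. This is only a sketch: the table data is made up, and a real agent resolves instances from compiled MIB definitions. The column OIDs follow the standard ifTable layout (ifIndex = .1, ifDescr = .2, ifInOctets = .10 under 1.3.6.1.2.1.2.2.1).

```python
# Toy model of SNMP instance lookup: an OID names a columnar object,
# and a trailing index selects the row (hypothetical interface data).
IF_TABLE = [
    {"ifIndex": 1, "ifInOctets": 1200, "ifDescr": "eth0"},
    {"ifIndex": 2, "ifInOctets": 98, "ifDescr": "eth1"},
]

COLUMNS = {  # columnar-object OID -> column name
    "1.3.6.1.2.1.2.2.1.1": "ifIndex",
    "1.3.6.1.2.1.2.2.1.2": "ifDescr",
    "1.3.6.1.2.1.2.2.1.10": "ifInOctets",
}

def get_instance(instance_id: str):
    """Split 'columnar OID . index', find the row whose ifIndex equals
    the index, and return the scalar value in that column."""
    oid, _, index = instance_id.rpartition(".")
    column = COLUMNS[oid]
    for row in IF_TABLE:
        if row["ifIndex"] == int(index):
            return row[column]
    raise KeyError(instance_id)
```

For example, `get_instance("1.3.6.1.2.1.2.2.1.10.1")` resolves the ifInOctets columnar object in the row whose ifIndex is 1, exactly as the ifTable row definition prescribes.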
- The complete identifier a.b.c.d.e.f.<index> is called an instance identifier.
- Here is a concrete example. Consider iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInOctets.1. The definition of iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry says that to find the row in the table, it should search for a row which contains a child object iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex with the value of 1 (the index specified); then it will return the value of the child object iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInOctets in that row. Of course, for this to work, the iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex child object in each row must have been assigned sequential values 1, 2, …, etc. (which is indeed the case).
- Finally, a simple case is that there is no table at all. For example, to find the up time of the device, use the object identifier iso.org.dod.internet.mgmt.mib-2.host.hrSystem.hrSystemUptime and append a .0 as the index, so the instance identifier is iso.org.dod.internet.mgmt.mib-2.host.hrSystem.hrSystemUptime.0. BTW, "hr" stands for "host resources".
- MIB (management information base). A MIB is just a collection of managed objects (not instances). There are standard MIBs, so that device manufacturers can implement them and users can find the right object identifiers to use. There are also proprietary MIBs, such as those designed by Cisco to provide information only available on its devices.
- Finding the supported OIDs. How can you find out the MIBs or the object identifiers supported by a device? It is easier to just "walk the MIB tree": to show all the instances in the tree or in a subtree. On Linux, this is done as below.
You specify the IP or hostname of the server, and optionally specify a node so that only that subtree is displayed (the object identifier starts with a dot; otherwise it is assumed to be relative to iso.org.dod.internet.mgmt.mib-2):

# snmpwalk <some args> localhost
# snmpwalk <some args> localhost .iso.org.dod.internet.mgmt.mib-2.system
# snmpwalk <some args> localhost system

- Getting an instance. Just specify the instance identifier:

# snmpget <some args> localhost .iso.org.dod.internet.mgmt.mib-2.host.hrSystem.hrSystemUptime.0
# snmpget <some args> localhost host.hrSystem.hrSystemUptime.0

- SNMP entity, engine and applications. SNMP is a peer-to-peer protocol. There is no concept of a server and a client (these terms are used here for simplicity). Instead, both peers have the same core capabilities, such as sending or receiving SNMP messages, performing security processing (see below), integrating v1, v2c and v3 processing (dispatching), etc. This part is called the SNMP engine. On top of the engine, there are different "applications": one application may only respond to requests for object instances (called an SNMP agent in v1 and v2), another application may probe others (called an SNMP manager in v1 and v2), yet another may forward SNMP messages (an SNMP proxy). The whole server or client is called the "SNMP entity".
- Context. On some devices there are multiple copies of a complete MIB subtree. For example, a physical router may support the concept of virtual routers. Each such virtual router will have a complete MIB subtree of instances. In that case, each virtual router may be indicated as a "context" in the SNMP server. A context is identified by name. For example, a virtual router could be identified as the "vr1" context. There is a default context with the empty string ("") as its name. When a client sends a query, it can specify a context name. If not, it will query the default context.
- Notification (trap).
An SNMP server may actively send a notification to the client when some condition occurs. This is very much like a response without a request. Otherwise, everything is similar. The condition (e.g., a change in the value of an instance or its going out of a range), the destination, the credentials used (see the security section below), etc. are configured on the server.
- Transport binding. Typically SNMP runs on UDP port 161.

SNMP security

- SNMP v1 and v2c security. In SNMP v1 and v2c (v2 was not widely adopted), there is little security. The only security is the "community string". That is, the server is configured to be in a community identified by a string such as "foo", "public" (commonly used and the default for many devices, meaning no protection) or "private". If the client can quote the community string, then it is allowed access. As the community string is included as plain text in SNMP packets, it practically provides no security. Therefore, in v1 and v2c, to access an SNMP server, you will do something like:

# snmpwalk -v 2c -c public localhost
# snmpget -v 2c -c public localhost <INSTANCE ID>

- SNMP v3 security. In SNMP v3, there is user-based security. That is, the client may be required to authenticate the messages to the server as originating from a user, using a password (the authentication password). In addition, the client may be further required to encrypt the messages using another password (the privacy password). This security requirement is called the "security level" (no authentication needed, authentication but no privacy, authentication with privacy). Therefore, in v3, you will access the server like:

# snmpwalk -v 3 -l noAuthNoPriv localhost
# snmpwalk -v 3 -l authNoPriv -u kent -A "my auth passwd" localhost
# snmpwalk -v 3 -l authPriv -u kent -A "my auth passwd" -X "my priv passwd" localhost

- Client configuration file. To save typing all those options every time, you can store these parameters in the snmp.conf file as defaults.
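The v3 passwords above are not used on the wire directly: the server derives a per-device key from each one, as described in the "Localized keys" discussion below. Here is a minimal sketch of the RFC 3414 key-localization procedure (the function name is mine; the algorithm stretches the password to 1 MiB, hashes it, then hashes again with the device's engine ID):

```python
import hashlib

def localized_key(password: bytes, engine_id: bytes, algo: str = "md5") -> bytes:
    """Sketch of SNMPv3 key localization (RFC 3414): the same password
    yields a different key on every device, so capturing one device's
    key does not reveal the site-wide password."""
    # Step 1: repeat the password up to 2**20 bytes and digest it.
    reps = (2**20 // len(password)) + 1
    ku = hashlib.new(algo, (password * reps)[:2**20]).digest()
    # Step 2: bind the intermediate key to this engine's ID.
    return hashlib.new(algo, ku + engine_id + ku).digest()
```

A client that knows the password and has fetched the server's engine ID can compute the same localized key, which is exactly how the protocol supports this scheme.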
- Security limitation. It is a bad idea to specify the password on the command line, as it can be revealed to local users using "ps". Storing it in the configuration file is better. However, the file only allows a single authentication password and a single privacy password, which is not enough to handle the case of using different passwords for different servers.
- Security name. A security name is just a user name. No more, no less. That's the term used in the RFC (maybe in the future it could be something else?).
- Algorithm. Further, there are different algorithms for authentication (HMAC using MD5 or SHA) and for privacy (encryption using DES or AES). So, you need to specify the algorithms to use:

# snmpwalk -v 3 -l noAuthNoPriv localhost
# snmpwalk -v 3 -l authNoPriv -u kent -a MD5 -A "my auth passwd" localhost
# snmpwalk -v 3 -l authPriv -u kent -a MD5 -A "my auth passwd" -x DES -X "my priv passwd" localhost

- Ensure the algorithms match. As SNMP uses UDP and each query and response may use just a single UDP packet, there is no negotiation of algorithms in a "connection phase" at all. In fact, presumably for simplicity of implementation, the algorithms used are not even indicated in the message, so the client must use the agreed-on algorithms as configured in the user account on the server; otherwise the server will simply fail to authenticate or decrypt the message.
- Localized keys. The authentication password and privacy password of a user account are not used directly. The idea is that most likely you will use the same password for all the user accounts on all devices on site. If it were used directly, then a hacker controlling one device would be able to find the password and use it to access all the other devices.
Therefore, when creating a user account, you specify the password, but the Linux SNMP server will combine it with a unique ID (called the "engine ID") generated for the device (such as the MAC or IP and/or a random number generated and stored on installation), hash it, and use the result as the password (the "localized key"). This way, even if a hacker can find this localized key, he will still be unable to find the original password.
- But how can a client generate the same key? It has to retrieve the engine ID first and then perform the same hashing. This is supported by the SNMP protocol.
- User account creation. Due to the need to generate localized keys, the way to create user accounts on Linux is quite weird. You stop the server, specify the user account's name and password in a file, then start the server. It will read the password, convert it to a localized key and overwrite the file. This file is /var/lib/snmp/snmpd.conf on Linux:

createUser kent MD5 "my auth password" DES "my privacy password"
createUser paul SHA "my auth password, no encryption needed"

- Access control. Access control can specify the user account, the lowest security level required, which part of the MIB tree is accessed (it may use an OID to identify a subtree) and the type of access (read or write), in order to grant the access. Here are some example settings on Linux (although human user names are used here, in practice they should represent devices):

rouser john noauth .iso.org.dod.internet.mgmt.mib-2.system
rouser kent priv .iso.org.dod.internet.mgmt.mib-2.system
rouser kent auth .iso.org.dod.internet.mgmt.mib-2
rwuser paul priv

- View. How do you specify several subtrees in an access control rule? You can define a view. A view has a name and is defined as including some subtrees and excluding some subtrees.
Then you can refer to it by name in access control:

view myview included .iso.org.dod.internet.mgmt.mib-2.system
view myview included .iso.org.dod.internet.mgmt.mib-2.host
view myview excluded .iso.org.dod.internet.mgmt.mib-2.host.hrStorage
rwuser paul priv -V myview

- Access control for v1 and v2c. For v1 and v2c, access control can specify the community string, the IP range of the client (the "source"), the subtree (OID) or the view:

rocommunity public 192.168.1.0/24 .iso.org.dod.internet.mgmt.mib-2.system
rwcommunity private localhost -V myview

- Most flexible access control model. The above access control model is called the "traditional model". The new, most flexible access control model is called the "view-based access control model (VACM)", even though the former can also use views. It may be more suitable to call it group-based access control, as it uses user groups in the rules (NOT the precise syntax yet!):

group g1 kent
group g1 paul
#access <group> <context> <min sec level> <exact context?> <view for read> <view for write> <view for notify>
access g1 "" auth exact myview1 myview2 myview3

- Mapping community string to user name. When using the VACM, instead of granting access to community strings, you need to merge v1 and v2c into the user-based access control processing. To do that, a community string along with the source can be mapped to a user name (the mapped user name does NOT have to already exist):

com2sec user1 192.168.1.0/24 public
# "default" source means any
com2sec user2 default private

- Security model. Even though the different types of identity in the different SNMP versions are represented uniformly as a user name, their trustworthiness is still significantly different.
So, in specifying group memberships and access control rules, you are required to specify the "security model" (v1, v2c, or the user security model as in v3), and that's the correct syntax:

group g1 usm kent
group g1 usm paul
group g2 v2c user1
group g2 v1 user1
group g2 v1 user2
access g1 "" usm auth exact myview1 myview2 myview3
access g2 "" any noauth exact myview4 myview5 myview6

Reference: Concepts of SNMP (including v3) from our JCG partner Kent Tong at the Kent Tong's personal thoughts on information technology blog.
http://www.javacodegeeks.com/2013/04/concepts-of-snmp-including-v3.html
C++ time() function with example

In this tutorial, we will learn about the C++ time() function with an example. In C++, the standard library does not provide any particular data type for date and time; it generally inherits them from C. For this, we have to include the header file <ctime>, which includes all date-time related functions and structures. The data type for time in C++ is time_t.

The time() function has the following functionality:
- It returns the current calendar time of the system.
- This calendar time is in the form of seconds since date 01, January 1970.
- It returns this time as an object of type time_t.
- If the system does not have any time, then -1 is returned by this function.
- It takes one argument which is a pointer to time_t. This pointer can be null or a reference to a variable of type time_t.

Syntax:

time_t time(time_t *timeargument);

Program to illustrate the use of the C++ time() function:

#include <ctime>
#include <iostream>
using namespace std;

int main()
{
    time_t currentsystemtime;
    time_t currentsystime;
    currentsystemtime = time(&currentsystemtime); // store via the pointer argument
    currentsystime = time(NULL);                  // or pass NULL and use the return value
    cout << currentsystemtime << " seconds have passed since 01-01-1970." << endl;
    cout << currentsystime << " seconds have passed since 01-01-1970." << endl;
    return 0;
}

Output:

1581355526 seconds have passed since 01-01-1970.
1581355526 seconds have passed since 01-01-1970.

This program takes two time_t variables. One is assigned by calling time() with the variable's address as the argument, and the other by calling time() with NULL. In both cases, time() returns the current system time as seconds elapsed since date 01, January 1970, so both variables print the same value, as shown above.

I hope this post was helpful to illustrate the use of the C++ time() function. Thanks for reading!

Recommended Posts:
Get IP address from hostname in C++ for Windows
Find the most frequent word in a string in C++
https://www.codespeedy.com/cpp-time-function-with-example/
A Simple Introduction to Apache Camel

A basic introduction to the Apache Camel integration tool, stressing the core concepts more than the syntax.

A "Stable, Consistent Architecture" is a myth. It can never be achieved. Over the years, each of us has come to accept this fact, and with a microservices architecture, it is a lot easier to live with it. But that does pose a lot of other issues that we need to look at. An important hurdle in this journey is application integration. A core tenet of the microservice architecture is that any interaction or integration between systems should be based on globally accepted standards rather than a mutually accepted protocol. If we build a system based on a protocol that rests on a simple mutual agreement between two developers, it can never work beyond those two developers. To enable a generic plug-and-play interface, the communication has to be based on globally accepted standards. Apache Camel is one such standard, globally accepted way of integrating two systems, focused on Java. Apart from a simple protocol definition, it provides an implementation for several scenarios out of the box.

Important features of Apache Camel:
- It is a lightweight framework.
- It can be deployed on a variety of containers like Apache Tomcat.
- It provides us with a number of built-in components that provide out-of-the-box solutions for several common use cases.
- It provides several different type converters for marshalling and unmarshalling the message during routing.
- It provides for Routes in a variety of domain-specific languages (DSLs).

Installation

Apache Camel does not require any particular installation. You can just download the JAR files from the website, or simply include the dependency in the pom.xml file for a Maven project.
<dependencies>
    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-core</artifactId>
        <version>2.13.0</version>
    </dependency>
</dependencies>

Sample Application

Let's now try to build a simple application that can demonstrate the power of Apache Camel. A simple integration scenario is to join services generating and consuming files. Apache Camel can help us do this very easily; just a few lines of code are enough to do the job. Apache Camel defines the concept of a Camel Route. Essentially, a Route is an instruction to Camel on how messages should move from one point to another. In this example, we create a SimpleRouteBuilder class that can move files from SOURCE_FOLDER to TARGET_FOLDER.

package com.krazyminds.camel.introduction;

import org.apache.camel.builder.RouteBuilder;

public class SimpleRouteBuilder extends RouteBuilder {

    public static final String SOURCE_FOLDER = "/home/vikas/dev/camel/01/source";
    public static final String TARGET_FOLDER = "/home/vikas/dev/camel/01/target";

    public void configure() throws Exception {
        from("file:" + SOURCE_FOLDER).to("file:" + TARGET_FOLDER);
    }
}

Next, we create a default Camel context and load the route created in SimpleRouteBuilder. When Camel is started, it uses this to create a CamelContext object that contains the definition of the Route to be started.

package com.krazyminds.camel.introduction;

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class MainApp {

    public static void main(String[] args) {
        SimpleRouteBuilder routeBuilder = new SimpleRouteBuilder();
        CamelContext ctx = new DefaultCamelContext();
        try {
            ctx.addRoutes(routeBuilder);
            ctx.start();
            Thread.sleep(10 * 60 * 1000);
            ctx.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

When we run MainApp.java, any file in the source folder will be moved to the target folder.

What's the Big Deal?
This example is so trivial and simple that it can leave one wondering: what is the big deal? There are hundreds of ways to move files from one location to another. Why do we need such an elaborate setup? That is right: moving files is not the point. The point is that the two microservices that produce and consume the file have no idea about what is going on behind the scenes. Camel takes care of doing it seamlessly. And this is perhaps the simplest of routes we could create with Camel. We can do a lot more with it. If required, Camel can help us transform the file on the fly. That sounds interesting? Yes, Apache Camel is quite interesting.
https://dzone.com/articles/a-simple-introduction-to-apache-camel
Created 23 December 2008

Additional Requirements

You will also need to sign up with Amazon.com to obtain an AWSAccessKey. For more information visit the Amazon Web Services Discussion Forums.

Many top brand sites share a common characteristic: they offer great user experiences. An enhanced user experience is not a nice-to-have; it is a must. It is the reason why you or your manager picked Flex in the first place. It is the reason why new applications are built with Flex or that existing applications are rebuilt with Flex. Always remember: making things work is only 50% of the job. The other 50% is making the UI look and feel great.

How can you improve the user experience of an e-commerce application?
- You can enable users to access the products in their shopping cart without having to switch to the cart view. It would also be convenient for the user to use the drag-and-drop metaphor for putting an item into the cart.
- You can clearly indicate the view transitions.
- You can let users search for new items at any time without having to go back to the search screen (for example, when they are in the product details view or in the cart view).

In this article, I introduce KarlStore, a site that offers users a unique buying experience by using Amazon Web Services and Adobe technology to improve on three existing e-commerce metaphors: the cart, the search results, and the product details. Building an entire e-commerce application is a significant undertaking. In this article I will focus only on searching for items with Amazon Web Services, organizing views, and transitioning between views.

Amazon Web Services is an infrastructure web services platform for developers of e-commerce web sites. You can access Amazon Web Services using REST or SOAP. The example in this article uses REST.

To get started, create a new project in Flex Builder.
- Select File > New > Flex Project.
- Type a name for the project, for example, AWSTest (see Figure 1).
- Select Web application in the Application type section.
- Click Finish.

Figure 1. Creating the project

Flex Builder will create the project structure for you. Next, create a script block in the main MXML. In the script block, declare a set of variables that you will use to call the service:
- The URL to Amazon is the address of the market that you want to retrieve items from. For example, in the U.S. this value is; in Canada it is; and in the UK it is.
- The Operation is the action you want to perform on the Amazon service. In this example, the value is "ItemSearch" because you want to retrieve some items.
- The Search Index; for example, "Books".
- The Service; for example, "AWSECommerceService".
- The AWS Access Key that was provided to you by Amazon when you signed up.
- The Response Groups tell Amazon what information you would like returned for each item. For example, the Images response group returns SmallImage, MediumImage, and LargeImage nodes.
- The Keywords for the search.
- The Item Page for paging. You can specify 1, 2, 3, and so on.
The following code shows these variables defined for a sample application:

[Bindable]
private var URL:String = "?";
[Bindable]
private var operation:String = "ItemSearch";
[Bindable]
private var SearchIndex:String = "Books";
[Bindable]
private var Service:String = "AWSECommerceService";
[Bindable]
private var AWSAccessKeyId:String = "********************"; // Enter your AWS Access Key here
[Bindable]
private var ResponseGroup:String = "Images,ItemAttributes,EditorialReview,Reviews,OfferFull";
[Bindable]
private var itemPage:uint = 1;
[Bindable]
private var totalPages:uint = 1;
[Bindable]
private var amazonResult:ArrayCollection;

Before the variable declarations, add the imports for the RPC events:

import mx.collections.ArrayCollection;
import mx.controls.Alert;
import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;

Next, declare the HTTPService component that will be used to fetch items from Amazon:

<mx:HTTPService </mx:HTTPService>

Note: The entity "&amp;" is used in the URL construction of the service in place of the ampersand symbol (&) that separates each variable in a GET query.
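The HTTPService concatenates the variables above into a single GET query string. The same assembly can be sketched outside Flex, for example in Python; the base_url placeholder is mine, since the article leaves the Amazon endpoint for each market to the reader:

```python
from urllib.parse import urlencode

def build_item_search_query(access_key, keywords, item_page=1,
                            base_url="https://example.invalid/onca/xml?"):
    # base_url is a placeholder; substitute the Amazon endpoint for your market.
    params = {
        "Service": "AWSECommerceService",
        "Operation": "ItemSearch",
        "SearchIndex": "Books",
        "AWSAccessKeyId": access_key,
        "ResponseGroup": "Images,ItemAttributes,EditorialReview,Reviews,OfferFull",
        "Keywords": keywords,
        "ItemPage": item_page,
    }
    # urlencode joins the parameters with "&" and escapes the values,
    # the same job the "&amp;"-separated MXML URL expression performs.
    return base_url + urlencode(params)
```

Calling it with an access key and a keyword string yields the query the article's HTTPService sends when the Search button is clicked.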
As you can see, there are two event handlers for the HTTPService component: onResult and onFault. The onResult function reads the result of your item search from the event and stores it:

private function onResult(event:ResultEvent):void
{
    totalPages = event.result.ItemSearchResponse.Items.TotalPages as uint;
    amazonResult = event.result.ItemSearchResponse.Items.Item as ArrayCollection;
    if(!amazonResult)
        Alert.show("Your query yielded no result");
}

The onFault function simply displays a dialog box with the error message should the request fail:

private function onFault(event:FaultEvent):void
{
    Alert.show(event.fault.message);
}

To improve the default Alert box display, you can add a CSS style for the Alert component in the application stylesheet:

Alert
{
    font-size: 14;
    background-color: black;
    background-alpha: 0.8;
    border-style: solid;
    border-thickness: 0;
    corner-radius: 10;
}

You also need a text field for the user to enter the search keywords. Add <mx:TextInput </mx:TextInput> in your layout. Lastly, you have to trigger the HTTP service by making a call to it from a button:

<mx:Button </mx:Button>

To test the application, place a breakpoint in the onResult handler (see Figure 2) and launch the application in debug mode. Enter some keywords into the text field and click the Search button (see Figure 3). If the request was successful, you should see an XML response in onResult() with a collection of items (see Figure 4). Sometimes, the request can be successful, but no items are sent back. For example, when the response groups specified for the operation are not valid, the XML reply will contain an error message. If the request was unsuccessful, the onFault() handler will be called and it will display an error message in a dialog box.

Figure 2: The source code with a breakpoint for the onResult event handler
Figure 3: The search text field and button in the application.
Figure 4: ItemSearchResponse shown in the Flex Builder 3 Debug perspective.

Now that you can pull data from Amazon, you need a way to display it. To display the data with a TileList, create a <mx:TileList> tag and set its dataProvider to the variable containing the Amazon search results. In this case, this is amazonResult.

<mx:Box
<mx:TileList </mx:TileList>
</mx:Box>

Notice the itemRenderer property, which is needed to render each item in the TileList. Create a new MXML component and name it TileItemRenderer.mxml. The base node will be the Image tag. To display the Amazon medium image for each item, set the source property to data.LargeImage.URL; data is the property that represents the data for each item rendered.

There are three views you want to display: the search results, the item details, and the cart. In this kind of project, the view stack is an ideal container because it can contain several different child views. You can switch between these views and set transitions between them. In Flex Builder, create a ViewStack node and give it an id (for example, "catalogStack") for future reference in the code. Set the width and height properties to 100% to use all the space available in the container of the view stack. Setting the showEffect and hideEffect properties to Fade will ensure smooth transitions between views. Lastly, set creationPolicy to all so that all views will be created when the view stack creation is complete (as opposed to being created on the fly when the view is accessed). This ensures that the code will execute without error during initialization if you need to set things up in all three views at startup.

<mx:ViewStack </mx:ViewStack>

For each view, add a Box or Canvas under the ViewStack tag. Make sure to give each one a unique id so you can reference it in the code later on.
<mx:ViewStack
<mx:Box
<mx:TileList </mx:TileList>
</mx:Box>
<mx:Canvas </mx:Canvas>
<mx:Canvas </mx:Canvas>
</mx:ViewStack>

A canvas will give you absolute control of the layout. When you position the elements on the screen using the top, left, bottom, and right properties, they will rearrange automatically when the window is resized. Use the selectedIndex property of the ViewStack to change the view when the user selects an item for details, places an item in the cart, or returns to the search results.

If you leave the code structure as is, then as your application grows, main.mxml will become huge and it will become difficult to understand the structure of your application. Instead, create an MXML component for each view, and then cut and paste each Box or Canvas underneath the ViewStack into its corresponding component.

SearchResultView.mxml

<?xml version="1.0" encoding="utf-8"?>
<mx:Box xmlns:
<mx:Script>
<![CDATA[
import mx.core.Application;
]]>
</mx:Script>
<mx:TileList </mx:TileList>
</mx:Box>

ProductView.mxml

<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:
</mx:Canvas>

CartView.mxml

<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:
</mx:Canvas>

If your SearchResultView.mxml is located in the package com.yourcompany.core.view.searchResult, for example, then you need to add the corresponding XML namespace to the Application tag. Do the same for the product view and for the cart view. Your main.mxml file now looks like this:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
<mx:ViewStack
<searchResult:SearchResultView
<product:ProductView
<cart:CartView
</mx:ViewStack>
</mx:Application>

Your application code is now well structured and you can proceed to add the components needed for each view.

It is important to let users of your application know when the view has changed and which view the application has changed to. To do this using overlays, follow the steps below.
Create a style sheet and name it application-styles.css. In the main MXML file, create a Style tag that points to the application style sheet:

<mx:Style

In the style sheet, add a class for the overlay:

.OverlayStyle
{
    background-color: black;
    background-alpha: 0.5;
    border-style: solid;
    border-thickness: 0;
    corner-radius: 25;
}

This style has a semitransparent background with rounded corners and no borders. Next, create a new MXML component for the overlay. Call it ViewOverlay.mxml.

<?xml version="1.0" encoding="utf-8"?>
<mx:Box xmlns:
</mx:Box>

If your overlay MXML component is located at com.yourcompany.core.view.overlay, for example, then add the following XML namespace to the main MXML:

<mx:Application xmlns:

Now, instantiate the overlay component by adding the following at the end of the same file:

<overlay:ViewOverlay />

Set the styleName to OverlayStyle, the style class you created earlier:

<overlay:ViewOverlay</overlay:ViewOverlay>

Set the mouseEnabled property to false so the clicks are not intercepted by the overlay and you can still interact with what's behind it:

<overlay:ViewOverlay </overlay:ViewOverlay>

Since you are using an absolute layout, anything you display at the bottom of the main MXML after the ViewStack will be displayed on top of the ViewStack content. This is called layering, and it is similar to the way layers work in Flash CS4. To display the overlay at the center of the screen, set the horizontalCenter and verticalCenter properties to 0:

<overlay:ViewOverlay </overlay:ViewOverlay>

The overlay has now been created, but it is still empty. You have two options to display a symbol within the overlay. You can use a 32-bit PNG bitmap with an alpha channel, which will ensure the symbol anti-aliases against any background, or you can use a vector symbol created with the Flex Component Kit for Flash CS3. The advantage of a vector graphic is that it scales to any size without losing quality.
The advantage of the bitmap is that it is faster to render. A proliferation of vector graphics in your application will impact performance; too many embedded bitmaps and your application will become too large. You have to find the right balance between bitmap and vector, so your application can be both rich and responsive.

To use a bitmap, start by creating a 32-bit PNG bitmap with transparency in Photoshop, Fireworks, or Flash CS4. For more information on creating these bitmaps, see my earlier article. Save your image as symbol.png. Next, use the Image control to instantiate it:

<overlay:ViewOverlay styleName="OverlayStyle" mouseEnabled="false"
    horizontalCenter="0" verticalCenter="0">
    <mx:Image source="symbol.png">
    </mx:Image>
</overlay:ViewOverlay>

Figure 5: The Cart view with the overlay.

To use a vector graphic, start by opening the library (press F11 or Ctrl+L). You can import an existing vector graphic or create a new one. To import an existing vector graphic in Flash, choose File > Import > Import to Library, then select an Illustrator AI file for import (for example an SVG file or an EPS file saved as an AI file), and click OK to import all layers. Flash creates the graphic symbol. To create a new graphic using drawing tools in Flash:

- Create a new MovieClip
- Edit the newly created MovieClip and add the graphic symbol to it
- Make sure the graphic is top-left aligned to the cross
- In the library, select the MovieClip and rename it to Symbol.
- In the application menu, choose Commands > Convert Symbol to Flex Component (see Figure 6). The component is now ready to be used in Flex.

Figure 6: Converting your Flash symbol to a Flex component.

- Save the FLA file and publish the project (press Shift+F12)
- Delete the SWF. Only the SWC file will be used.
- Copy the SWC file to the libs folder of your Flex project.
- Add the local namespace:

<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:local="*">

- Instantiate the symbol within the overlay tag:

<overlay:ViewOverlay styleName="OverlayStyle" mouseEnabled="false"
    horizontalCenter="0" verticalCenter="0">
    <local:Symbol/>
</overlay:ViewOverlay>

Notice that the visible property in ViewOverlay.mxml is set to false.
It is up to you to control how and when the overlay is displayed. First, give the viewIcon ID to the ViewOverlay component instance:

<overlay:ViewOverlay id="viewIcon" styleName="OverlayStyle" mouseEnabled="false"
    horizontalCenter="0" verticalCenter="0">
</overlay:ViewOverlay>

- Within the <mx:Script> tag of the Application, import the Timer and TimerEvent classes:

import flash.utils.Timer;
import flash.events.TimerEvent;

- Create a timer variable:

private var viewTimer:Timer;

- Add the line creationComplete="onCreationComplete()" to the Application tag. In the onCreationComplete method, create a Timer instance and add an event listener for it:

private function onCreationComplete():void
{
    viewTimer = new Timer(1000, 0);
    viewTimer.addEventListener(TimerEvent.TIMER, onViewImageDelete);
}

- Write an event handler for the timer that will hide the overlay and reset the timer:

private function onViewImageDelete(event:TimerEvent):void
{
    this.viewIcon.visible = false;
    this.viewTimer.reset();
}

- Whenever the user changes view, trigger the display of the overlay and start the timer:

private function viewChange():void
{
    if (viewTimer)
    {
        viewIcon.visible = true;
        viewTimer.start();
    }
}

- Define the viewChange() method as the event handler for the ViewStack change event:

<mx:ViewStack change="viewChange()">

The final script should look like this:

<mx:Script>
    <![CDATA[
        import flash.utils.Timer;
        import flash.events.TimerEvent;

        private var viewTimer:Timer;

        private function onCreationComplete():void
        {
            viewTimer = new Timer(1000, 0);
            viewTimer.addEventListener(TimerEvent.TIMER, onViewImageDelete);
        }

        private function onViewImageDelete(event:TimerEvent):void
        {
            viewIcon.visible = false;
            viewTimer.reset();
        }

        private function viewChange():void
        {
            if (viewTimer)
            {
                viewIcon.visible = true;
                viewTimer.start();
            }
        }
    ]]>
</mx:Script>

Now, when the view changes the user will see the overlay appear immediately and then disappear with a fade effect after one second.

Where to go from here

In this article, you learned how to retrieve data from Amazon Web Services using REST.
You also learned how to improve the overall buying experience, which is likely one of the reasons you decided to use Flex in the first place. For more guidance on user experience design, see the Flex Interface Guide. Instead of using REST, you could access Amazon Web Services with the Flex Builder WSDL wizard by choosing Data > Import Web Service (WSDL) and use the following web service address: You may want to try some of the other available Amazon Web Service operations beyond simply searching for items. You can add an item to your cart, modify the cart, clear the cart, and so on. For more details, see the Amazon Web Service Developer Documentation. You can also use the Cairngorm architectural framework to give more structure to your application. See Steven Webster's series of Cairngorm articles, which are based on the Flex Store.
https://www.adobe.com/devnet/flex/articles/buying_experience.html
Python Typosquatting for Fun not Profit

by William Bengtson | @__muscles

Two years ago a few colleagues (shoutout to helloarbit, travismcpeak, and coffeetocode) and I were talking about supply chain attacks, which led to this work being completed. A supply chain attack is an attack that targets dependencies of a company in hopes that a weakness in the supply chain can be leveraged to damage the target company. For a company that produces software, the supply chain attack typically targets the software that is used in development of the software product. With this in mind, the software dependencies are packages/libraries in popular languages like Python, Java, Ruby, and Golang. Each language has its own way of packaging and retrieving dependencies, which over the years has led to some having more proactive measures than others to protect against certain types of supply chain attacks.

Typosquatting

I decided to take a look at what typosquatting in Python looks like, and first started to look at Levenshtein distance to calculate the distance between two package names to determine if one is a typosquat of another. This can be useful for detecting a package that has been squatted, and as a tool to prevent packages from being squatted. With this in mind I wanted to see how many of the top installed Python packages can be squatted by removing underscores (_) or dashes (-).

Data

One of the most fascinating pieces for me is that the Python Package Index (PyPI) makes data available about each Python package. This data can be very powerful when determining which packages are the most popular in the Python ecosystem. For more on analyzing PyPI downloads, check out their guide here.

Implementation

The goal of this exercise was to understand how many packages could be squatted and, if possible, prevent the future squat by registering them myself.
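The two checks just described, edit distance between names and separator stripping, can be sketched in a few lines of Python. The helper names here are mine, not from the original project:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two package names."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def separator_squat(name):
    """Return the name with '-' and '_' stripped, or None if nothing changes."""
    squatted = name.replace("-", "").replace("_", "")
    return squatted if squatted != name else None


# Example: 'python-json-logger' can be squatted as 'pythonjsonlogger',
# while 'requests' has no separator-based squat.
```

Mapping separator_squat over a list of top package names would then yield the candidate squat names for registration.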
Using the data mentioned above, I performed a query to tell me the top 10,000 installed Python packages using the package installer pip. Having a list of the top 10,000 pip installed packages allowed me to come up with a list of which packages could potentially be squatted by removing any _ or - from the package name as mentioned above. I wanted to squat the potential packages so I needed to come up with a strategy for squatting them. There were a few options to choose from: - Clone the existing packages, change the name to the squatted name and register them. - Register the packages with a package that does nothing. - Register the packages with a package that could potentially educate the person installing the squatted package. Looking at the different options, I chose number three. The goal of the project has always been to be a “Guardian” and the other options seemed less than ideal in achieving the goal. Number two would have left developers wasting a lot of time trying to understand why their software doesn’t work if they installed the squatted package instead of the real package and number one seemed shady and could lead people to believe the intent behind this project had a malicious future. Initially I decided to squat around 1,100 or so packages. In order to do this, I first needed to create the package to push to PyPI and then create some automation around this to make squatting this many packages achievable in a short amount of time. I decided to create a simple package that when installed would fail and print out an error message letting you know the actual package name you probably meant to install. Below is an example of what happens when you try to install pythonjsonlogger instead of the real package python-json-logger. pip install pythonjsonlogger Collecting pythonjsonlogger Downloading pythonjsonlogger-0.1.1.tar.gz (1.3 kB) Building wheels for collected packages: pythonjsonlogger Building wheel for pythonjsonlogger (setup.py) ... 
error
ERROR: Command errored out with exit status 1:
.
.
.
Complete output (29 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
.
.
.
File "/private/var/folders/cy/kc766fxx37b5rf87qxkt8hj00000gp/T/pip-install-kp4jab6f/pythonjsonlogger/setup.py", line 20, in run
raise Exception("You probably meant to install and run python-json-logger")
Exception: You probably meant to install and run python-json-logger
----------------------------------------
ERROR: Failed building wheel for pythonjsonlogger

Now that I had the package setup and automation written, I kicked off the squatting and sat back. With over 1,100 packages registered, I now needed to wait to see if anyone would actually install these accidentally. What I found over the next two years is the most interesting piece of this project in my opinion.

Results

I originally meant to do a post on this after 6 months or a year, but here we are two years later and I have some data and stories to share. I'll start with the data on 1,131 packages, and end with the stories.

Top 10 squatted package downloads from July 16, 2018 until August 4, 2020:

In a little over two years there have been 530,950 total pip install commands run on 1,131 packages! This does not include any mirrors or internal package registries that have cloned these packages privately. Malicious packages in PyPI have been known to steal credentials stored on the local file system, such as SSH credentials in ~/.ssh/, GPG keys, or perhaps AWS credentials stored in ~/.aws/credentials. If these typosquat packages were written with malicious intent and we assume one attempt per install, that would mean 530,950 machines could have been compromised over the two year period. While the data is incredibly interesting and you can draw your own conclusions on what could have happened if these were malicious packages, the encounters/stories I find are the most interesting.
Over the two years I received the following encounters: - Help installing my package - Thanks for protecting the Python community - Text from a friend asking if I owned a certain package - Researchers finding my work - Company asking to confirm the license on one of my squatted packages Help installing my package Over the course of the last two years I have received numerous emails asking for help installing my package or reporting that my package is broken. Each time I’d simply reply with the correct package they should install. Thanks for protecting the Python community A few times I received emails from people who have installed my squatted package accidentally and either read the error print out or saw my package registration clearly stating it is package to prevent exploit. In a few cases it was both a research finding my work and thanking me. I am working on a project for my security class related to attacks on the Python ecosystem. We kept stumbling upon your packages while trying to identify typo-squatting attempts. I just thought I would say thanks for helping out the community! :) Text from a friend asking if I owned a certain package I told a few friends about this work and I don’t quite remember how the interaction went down, but it was something along the lines of: Friend: Hey! Do you own pythonjsonlogger? Me: Yeah why? Friend: Dammit! You know who you are :) Researchers finding my work I have really enjoyed each occurrence of a researcher finding my work because it typically involved a conversation around typosquatting. The example above ended up with my work being included in a symposium paper for the University of Maryland CMSC 8180 class called PYed PIPer by Josiah Wedgwood and Aadesh Bagmar. In the most recent case I met with the creator of pypi-scan, John Speed Myers, on Zoom for about an hour on our work and we talked about potential future collaborations in this area. 
Most importantly, this conversation got me excited about the topic again and prompted me to finally write this post as well as do another round of squatting 3,000+ more packages to continue to protect the Python ecosystem. Company asking to confirm the license on one of my squatted packages The one I find most scary is an email from a large company asking to confirm the license on one of my squatted packages for use. Final thoughts All in all this was a fun project that I never meant to take two years to actually write about. Over the last year or so, I have discussed this work with the folks at PyPI and they will actually be taking ownership of my packages once I can confirm a final list of which packages are squatted versus the real packages I actually contribute to or own. All language ecosystems are vulnerable to this type of attack with some being harder to achieve due to things like package namespace. This work was very targeted and did not expand into squatting future libraries for evolution of projects. Supply chain security is very difficult and has been challenging companies with large pockets for many many years. Most importantly, THANKS to the folks at PyPI for what they do in making Python packages available to enable folks to develop each day!
https://medium.com/@williambengtson/python-typosquatting-for-fun-not-profit-99869579c35d?source=post_internal_links---------7----------------------------
Test suites are normally used to group similar test cases together. The collection of individual test cases that will be run in a test sequence until some stopping criteria are satisfied is called a test suite. Test suite preparation involves the construction and allocation of individual test cases in some systematic way based on the specific testing techniques used. Another way to obtain a test suite is through reuse of test cases for earlier versions of the same product. This kind of testing is commonly referred to as regression testing. It ensures that common functionalities are still supported satisfactorily, in addition to satisfactory performance of new functionalities. Special types of formal models are typically used to make the selection from existing test cases. Test suite management includes managing the collection of both the existing test cases and the newly constructed ones. At a minimum, some consistent database for the test suite needs to be kept and shared by people who are working on similar areas. Some personnel information can also be kept in the test suite, such as the testers who designed specific test cases, to better support future use of this test suite.

An object, as we know, is a graphical user-interface element in an application, e.g. a button, a list, or an edit box, and the special characteristics of an object within QuickTest are called object properties. QTP stores the recorded object properties in an Object Repository. Object Repositories are of two types: Local and Shared. If objects are stored in a Local Object Repository then they are available to specific actions but not to all actions. But if these objects are stored in one or more Shared Object Repositories then multiple actions or tests can use them.
By default QTP makes and uses a Local Object Repository. If we create a new blank test and do a recording on it, QTP automatically creates a Local Object Repository for that test or action and stores the information about any object it finds in that corresponding Object Repository. In QTP 9 we can associate multiple Shared Object Repositories with an action. If multiple Shared Object Repositories are associated with an action, then while recording QTP still stores objects in the corresponding Local Object Repository, on the condition that those objects are not already stored in any of the associated Shared Object Repositories. This is the default behavior: every time we create a new action, QTP creates a new corresponding Local Object Repository. It is also true that Object Repositories are associated with actions, and no matter how many times we learn or record on the same object in our application in different actions, the object will be stored as a separate test object in each Local Object Repository.

The Local Object Repository is automatically saved with the test when we save it. The extension of the Local Object Repository is .mtr, but it is not accessible as a separate file, as in the case of the Shared Object Repository. We can also manipulate some aspects of the Local Object Repository using the QuickTest Object Repository Automation Object Model; for example, we can add, remove, and rename test objects in the Local Object Repository. [QuickTest Object Repository Automation documents the Object Repository automation object model that enables you to manipulate QuickTest object repositories and their contents from outside of QuickTest.]

When we open a test that was created using a version of QTP earlier than version 9, we are asked whether we want to convert it or view it in read-only format. In any case, if the test previously used per-action Object Repositories, the objects in each per-action repository are moved to the Local Object Repository of each action in the test.
If the test previously used a shared object repository, the same shared object repository is associated with each of the actions in the test, and the local object repository is empty. While learning or recording we can specify a Shared Object Repository for the selected action. We can specify and associate one or more Shared Object Repositories with each action, and we can also create a new Shared Object Repository and associate it with our action. In the case of a Shared Object Repository, QTP uses the existing information and does not add objects to the Object Repository if we record operations on an object that already exists in either the Shared or Local Object Repository. As said earlier, QTP does not add objects directly to the associated Shared Object Repository as we record; instead it adds new objects to the Local Object Repository (if the object does not already exist in an associated Shared Object Repository). We can, of course, export local objects to a Shared Object Repository. There are different ways in which we can move objects from a Local Object Repository to a Shared Object Repository:

1) Exporting the objects to the Shared Object Repository from the Local Object Repository: In the Object Repository window, choose the action whose local objects you want to move. Choose File > Export Local Objects. Select the location in which you want to save the file and click Save.

2) Updating the Shared Object Repository from the Local Object Repository: If we create a new test it will be created with a Local Object Repository; we can associate any new or old Shared Object Repository with it, and then update that Shared Object Repository from the Local Object Repository. In the Object Repository Manager, open the Shared Object Repository (clear the "open in read-only mode" check box). The test in this case should not be open. In the Object Repository Manager, go to Tools > Update From Local Repository, select the test whose Local Object Repository you want to use, and click Update All.
It will move all the objects to the Shared Object Repository.

3) We can also merge objects from two Object Repositories (called primary and secondary in QTP 9) into a new single Object Repository (the target Object Repository in QTP 9). The original source files are not changed. This also enables you to merge objects from the Local Object Repository of one or more actions into a Shared Object Repository. It is recommended to use as the primary Object Repository the file in which you have invested a lot of your effort, such as the one that has more objects.

If we do not specify a file extension when creating a new Shared Object Repository, QTP automatically appends the default Shared Object Repository extension, .tsr. This means that we can create a Shared Object Repository with any extension other than .tsr and it should work fine (I have tried that and it works fine), though I think it may create problems while merging two Object Repositories (I haven't tried that yet). We can compare two Object Repositories using the Object Repository Comparison Tool, which enables you to identify similarities, variations, or changes between two Object Repositories.

We can also copy objects to a Local Object Repository from the Shared Object Repository. We can copy, paste, and move objects within a Local Object Repository, and copy, paste, and move objects within a Shared Object Repository and between Shared Object Repositories. As said earlier, we can also copy objects from a Shared Object Repository to a Local Object Repository to modify them locally. We cannot remove the association between an action and its Local Object Repository. According to the QTP user guide: You can associate as many object repositories as needed with an action, and the same object repository can be associated with different actions as needed. You can also set the default object repositories to be associated with all new actions in all tests.
Whenever we make any changes to an Object Repository, those changes are automatically updated in all the associated tests open on the same computer as soon as we make the change, even if the Object Repository is not yet saved. If we close the same Object Repository without saving the changes, the changes are rolled back in any open tests. For a test that was not open when we changed the Object Repository, the test is automatically updated with all the saved changes when we open it on the same machine on which we modified the Object Repository. To see saved changes in a test or repository open on a different computer, you must open the test or object repository file or lock it for editing on your computer to load the changes.

Important points about Object Repositories

While planning and creating tests, consider how you want to store objects: either in a Local Object Repository or in a Shared Object Repository.

1) For each action, we can also use a combination of objects from the Local and Shared Object Repositories, according to our needs. Local objects can also be transferred to a shared object repository, if necessary. This will cut maintenance and increase the reusability of the tests because it will enable us to maintain the objects in a single, shared location instead of multiple locations.

2) If there is an object with the same name in both the Local Object Repository and a Shared Object Repository associated with the same action, the action uses the local object definition, i.e. the local object is given preference over the shared object. If an object with the same name is stored in more than one Shared Object Repository associated with the same action, the object definition is used from the first occurrence of the object, according to the order in which the Shared Object Repositories are associated with the action.
3) When we open an existing test, it always uses the object repositories that are specified in the Associated Repositories tab of the Action Properties dialog box or in the Associate Repositories dialog box. When we access Shared Object Repositories from tests they are read-only; we can edit them only using the Object Repository Manager.

4) Object Repository dialog box: The Object Repository dialog box shows a tree of all the objects (either Local or Shared) in the selected action on its left-hand side. On selecting any object in the tree, the Object Repository window shows information about the object, such as its name and the repository in which it is stored. In the tree on the left-hand side, local objects are editable while the shared ones are grayed out (non-editable). We can use the Object Repository window to view test object properties, to modify test object properties, and to add objects to the Local Object Repository. We can also delete objects from the Object Repository window; this is needed because when an object is removed from the test it is not automatically removed from the Local Object Repository. The Object Repository in QTP is XML-based, which means that if we change something related to an object in a Shared Object Repository, the change will be propagated to all the tests that reference this object, in real time.

Adding Objects to Repositories

[Please see the QTP user guide for in-depth information on the points below.] We can add objects to a Shared Object Repository or Local Object Repository in a number of different ways. We can decide whether to add only a selected object, to add all objects of a certain type, such as all button objects, or to add all objects of a specific class, such as all WebButton objects. We can modify objects stored in a Local Object Repository using the Object Repository window and objects in a Shared Object Repository using the Object Repository Manager. It is possible to add objects to the object repository before they exist in an application.
We can add objects to the object repository using the Add Objects to Local or Add Objects option, or add an object directly to a Shared Object Repository using the Object Repository Manager. We can also add objects to the Local Object Repository while editing our test: by choosing an object from the application in the Select Object for Step dialog box (from a new step in the Keyword View or from the Step Generator), or by selecting the required object in the Active Screen to add it to the Local Object Repository of the current action. We can also add objects to a Shared Object Repository while navigating through the application ("Adding Objects Using the Navigate and Learn Option"), so that they are available in all actions that use this Shared Object Repository. Finally, we can merge test objects from the Local Object Repository into a Shared Object Repository.

QTP (QuickTest Professional) lets you create tests and business components by recording operations as you perform them in your application, which we can use to verify that our application performs as expected. A test is a compilation of steps organized into one or more actions (there are 3 kinds of actions in QTP: Non-reusable, Reusable, and External). A step is something that causes or makes a change in your site or application, such as clicking a link or image, or submitting a data form. QuickTest graphically displays each step we perform as a row in the Keyword View, and the Documentation column of the Keyword View also displays a description of each step in easy-to-understand sentences.

1) First step is Planning. Before starting to build a test, you should plan it and prepare the required infrastructure. Determine the functionality you want to test, and plan short tests that check specific functions of the application or complete site. If needed, decide how you want to organize your object repositories.

2) Second step is Creating Tests or Components. We can create a test or component by a) recording a session on your application or Web site as we navigate through the application or site, or b) building an object repository and using these objects to add steps manually in the Keyword View or Expert View. We can then modify the test or component with special testing options and/or with programming statements. We can use many functional testing features of QuickTest to improve the test or component and/or add programming statements to achieve more complex testing goals.

3) Third step is Inserting checkpoints into your test or component. A checkpoint is a verification point that compares a recent value for a specified property with the expected value for that property. This enables you to identify whether the Web site or application is functioning correctly.

4) Fourth step is Broadening the scope of your test or component by replacing fixed values with parameters. To check how your application performs the same operations with different data, you can parameterize your test or component: when you parameterize, QuickTest substitutes the fixed values in your test or component with parameters. Each run session that uses a different set of parameterized data is called an iteration. We can also use output values to extract data from our test or component. An output value is a value retrieved during the run session and entered into the Data Table or saved as a variable or a parameter; we can subsequently use this output value as input data in the test or component.

5) Fifth step is Running the test. After creating a test or component, we run it. Run the test or component to check the site or application: QuickTest connects to your Web site or application and performs each operation in the test or component, checking any text strings you specified. If we parameterized the test with Data Table parameters, QuickTest repeats the test (or specific actions in your test) for each set of data values we defined. Run the test or component to debug it: we can control the run session to identify and eliminate defects in the test or component, using the Step Into, Step Over, and Step Out commands to run a test or component step by step. We can also set breakpoints to pause the test or component at pre-determined points, and view the value of variables in the test or component each time it stops at a breakpoint in the Debug Viewer.

6) Sixth step is Analyzing the results. After we run the test or component, we can view the results in the Results window, as a summary as well as a detailed report. We can also report defects identified during a run session: if Quality Center is installed, we can instruct QuickTest to automatically report each failed step in the test or component, or we can report them manually from the Test Results window.

The Test Object Model is a set of object types or Classes that QuickTest uses to represent the objects in our application. A test object is an object that QuickTest creates in the test to correspond to (represent) the actual object in the application; QuickTest uses the stored information about the object during the run session to identify and check the object. A test object class comprises a list of properties that can uniquely identify objects of that class and a set of appropriate methods that QuickTest can record for it. A run-time object is the real (actual) object in the application or Web site on which methods are performed during the run session.

Properties and methods of objects: The property set for each test object is created and maintained by QuickTest, while the property set for each run-time object is created and maintained by the object architect (creator): Microsoft for Internet Explorer objects, Netscape for Netscape objects. Methods of run-time objects are the methods of the object in the application as defined by the object architect, and we can access and execute run-time object methods using the Object property. Similarly, methods of test objects are methods that QuickTest recognizes and records when they are executed (performed) on an object while we are recording, and that QuickTest executes when the test or component runs.

Some important points to remember about methods and properties: Each test object method we execute (perform) while recording is recorded as a separate step in the test. When we run the test, QuickTest executes (performs) the recorded test object method on the run-time object. Properties of a test object are captured from the object while recording, and QuickTest uses the values of these properties to identify run-time objects in the application during a run session. Property values of objects in the application may change; to make the test object property values match the property values of the run-time object, we can modify test object properties manually while designing the test or component, or using SetTOProperty statements during a run session. We can also use regular expressions to identify property values. We can view or modify the test object property values that are stored with the test or component in the Object Properties or Object Repository dialog box, and we can view the syntax of the test object methods as well as the run-time methods of any object on our desktop using the Methods tab of the Object Spy.
We can retrieve or modify property values of the TEST OBJECT during the run session by adding GetTOProperty and SetTOProperty statements in the Keyword View or Expert View. We can retrieve property values of the RUN-TIME OBJECT during the run session by adding GetROProperty statements.

If the available test object methods or properties for an object are not sufficient, or they do not provide the functionality we need, we can access the internal methods and properties of any run-time object using the Object property. We can also use the attribute object property to identify Web objects in the application according to user-defined properties.

Checkpoints in QTP (QuickTest Professional)

A checkpoint enables you to identify whether the Web site or application under test is functioning correctly or not by comparing the current value of a particular property with the expected value for that property. After we add a checkpoint, QuickTest adds a checkpoint to the current row in the Keyword View and adds a Check CheckPoint statement in the Expert View. By default, the checkpoint receives the name of the test object on which the checkpoint is being performed; we can change the name of the checkpoint if needed.

Types of checkpoints:
1. Standard checkpoint
2. Image checkpoint
3. Bitmap checkpoint
4. Table checkpoint
5. Accessibility checkpoint
6. Text checkpoint
7. Page checkpoint
8. Database checkpoint
9. XML checkpoint

Standard checkpoints allow checking the object property values in the Web site or application under test. Standard checkpoints evaluate (compare) the expected values of object properties captured during recording against the object's current values during a run session. For example, we can check that a radio button is activated after it is selected. Standard checkpoints are supported for all add-in environments.
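A minimal sketch of the GetTOProperty / SetTOProperty / GetROProperty statements just described, using the sample Flight Reservation window that the later tutorials record against ("attached text" is a standard identification property for Windows edit boxes; treat the exact object and property names as assumptions):

```vbscript
' Value stored for the TEST OBJECT in the object repository:
toText = Window("Flight Reservation").WinEdit("Name:").GetTOProperty("attached text")

' Change the test object description for this run session only:
Window("Flight Reservation").WinEdit("Name:").SetTOProperty "attached text", "Name:"

' Current value of the RUN-TIME OBJECT in the running application:
roText = Window("Flight Reservation").WinEdit("Name:").GetROProperty("text")

msgbox toText & " / " & roText
```

Note the asymmetry: SetTOProperty changes only the test object description in memory, never the application itself, while GetROProperty reads live values from the application.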
Standard checkpoints can be used to perform checks on images, tables, Web page properties, and other objects within your application or Web site. Standard checkpoints can be created for all supported testing environments (as long as the appropriate add-ins are loaded).

Image checkpoints allow you to check the properties of an image in the application or Web page. For example, you can check whether a selected image's source file is correct. An image checkpoint can also be created by inserting a standard checkpoint on an image object. Image checkpoints are supported for the Web add-in environment.

With a bitmap checkpoint we can check an area of a Web page or application as a bitmap. While creating a test, we specify the area to check by selecting an object; an entire object or any area within an object can be checked. Bitmap checkpoints are supported for all add-in environments.

An accessibility checkpoint recognizes areas of your Web site that may not conform to the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines — for example, it can check whether the images on a Web page include the ALT properties required by those guidelines. Accessibility checkpoints are supported for the Web add-in environment.

With a text checkpoint, QuickTest can check that a text string is displayed in the appropriate place in an application or on a Web page. Text checkpoints are supported for the Web add-in environment, plus some Web-based add-in environments.

A page checkpoint checks the features of a Web page. For example, you can check how long a Web page takes to load or whether a Web page contains broken links. A page checkpoint can also be created by inserting a standard checkpoint on a page object. Page checkpoints are supported for the Web add-in environment.

The contents of a database accessed by your application can be checked by a database checkpoint.
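For orientation, every checkpoint type above ends up as the same Check CheckPoint pattern in the Expert View. These statements mirror the ones recorded in the tutorials later in this document (the object names come from those examples):

```vbscript
' Page checkpoint on a Web page (Tutorial 5):
Browser("Google").Page("Google").Check CheckPoint("Google")

' Standard checkpoint on a Windows button (Tutorial 4 / Flight sample):
Window("Flight Reservation").WinButton("FLIGHT").Check CheckPoint("FLIGHT")

' Database checkpoint on the result of an SQL query (Tutorial 6):
DbTable("DbTable").Check CheckPoint("DbTable")
```

The argument to CheckPoint is the checkpoint's name in the object repository, which by default matches the test object the checkpoint was created on.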
Database checkpoints are supported for all add-in environments.

By adding XML checkpoints to your test, you can check the contents of individual XML data files or documents that are part of your Web application. The XML checkpoint option is supported for all add-in environments.

QTP Tutorials 4 - Standard Checkpoint

This will help you in understanding the standard checkpoint in QTP more deeply.

Simple example of reusing an existing checkpoint

QuickTest (QTP) now makes it possible for you to reuse an existing checkpoint in your test. For example, you can use a bitmap checkpoint to verify your company's logo on each page of your application or website. Let's see a very simple example to accomplish this.

1. Open a new test. (Make sure that the Flight Reservation window is open.)
2. Click on Record in order to start recording. (QTP will be minimized and the mouse pointer will take the shape of a hand.)
3. Go to Insert -> Checkpoint -> Standard Checkpoint.
4. Click on the Flights... button, which is on the right hand side of the Fly To dropdown. The Object Selection - Checkpoint Properties window opens. Click OK. The Checkpoint Properties window opens. Click OK. (The standard checkpoint will be added.)
5. Click Stop in order to stop recording.
6. Save the test as testone.
7. Go to Resources -> Object Repository. (The Object Repository associated with this particular action will open.)
8. Go to File -> Export and Replace Local Objects. (The Export Object Repository dialog opens.)
9. Enter any filename, e.g. rep1, and click Save. (All the objects in the Local Object Repository will be grayed.)
10. Close that Object Repository.
11. Open a new test and save it with the name testtwo.
12. Associate the shared Object Repository rep1 with it (if not already associated): in the Keyword View, right click on Action 1 and choose Action Properties.
13. Go to the Associated Repositories tab. Click on the '+' sign to locate the shared Object Repository rep1 and associate it. (If it asks for Automatic Relative Path Conversion, click Yes.) Click OK to close the Action Properties window.
14. Now (in testtwo) you can see, when you go to Insert -> Checkpoint, that Existing Checkpoint will be enabled to let you insert any checkpoints already saved to the shared Object Repository (rep1 in our case).

QTP Tutorials 5 - Page Checkpoint

A page checkpoint is for web applications only. Common things to check with it are load time, broken links etc. I ran this test with www.google.co.in opened in offline mode (not on the internet).

1. Open a blank test. Make sure that www.google.co.in/ is open. (Now only QTP with a blank test and www.google.co.in should be open.)
2. Click on Record. When we click on Record, the "Record and Run Settings" window opens up. Go to the "Web" tab, choose the first option, "Record and run test on any open browser", and click OK.
3. Go to Insert (menu) -> Checkpoint -> Standard Checkpoint (or press F12). The mouse pointer will become a hand and QTP will be minimized.
4. Click anywhere on the white space on the Google.co.in page. It will open the "Object Selection - Checkpoint Properties" window.
5. Click on the 'Page : Google' option, which has a page icon on the left of it with the right corner of the page slightly folded. Click OK.
6. A 'Page Checkpoint Properties' window opens up. Let all the options be default. Click OK.
7. Click on Stop in order to stop the recording.

In the Expert View it will add just one line (we will explore this line later on):

Browser("Google").Page("Google").Check CheckPoint("Google")

It recorded the following properties:

Property Name       Property Value
load time           "0"
number of images    "2"
number of links     "20"

Here it shows the load time as 0 because I did not open Google at the time of running the test — it was already open.
When you run it, the results window will show (when every option is expanded):

Test Checkpoint-page Summary   (where Checkpoint-page is the name with which I saved the test)
  Run-Time Data Table
  Checkpoint-page Iteration 1 (Row 1)
    Action1 Summary
      Google            (this will be the browser)
        Google          (this will be the page)
          Checkpoint "Google"

If you run this test on www.google.com it may fail.

QTP Tutorials 6 - Database Checkpoint

Now we will try out the database checkpoint, using Oracle 9i. First of all you have to connect Oracle 9i to QTP (before doing any recording):

1. Open a blank test.
2. Go to Insert (menu) -> Checkpoint -> Database Checkpoint. A Database Query Wizard opens.
3. Select 'Specify SQL statement manually' in the Query definition area. Click Next.
4. Click on the 'Create' button, which is on the right of "Connection String:". It will open the 'Select Data Source' window.
5. Click on the 'Machine Data Source' tab.
6. Click on the New button. The 'Create New Data Source' window opens.
7. Select 'User Data Source' under "Select a type of data source". Click Next.
8. It will show all the data source drivers it could find. Select Oracle (on my machine it was 'Oracle in OraHome9'). Click Next. Click Finish.
9. It will open the 'Oracle ODBC Driver Configuration' window.
10. Enter a 'Data Source Name' (I entered "oracle").
11. Enter a 'Description' (I entered "SQL").
12. Select a 'TNS Service Name' from the combo box (I selected 'DB02', my Oracle database name).
13. Enter a userid (I used SCOTT).
14. Click the 'Test Connection' button. It will ask for a password; enter your password for Oracle. If successful, it will show the 'Testing Connection' window with 'Connection Successful' written on it. Click OK.
15. Click OK. This completes our task of connecting QTP with Oracle.

Now we will record a test:

1. Open a blank test. Click on Record. When we click on Record, the "Record and Run Settings" window opens up. Go to the "Windows Applications" tab, choose the first option, "Record and run test on any open Windows based application", and click OK.
2. Go to Insert -> Checkpoint -> Database Checkpoint. A 'Database Query Wizard' opens.
3. Select 'Specify SQL statement manually' in the Query definition area. Click Next.
4. Click Create. Go to the 'Machine Data Source' tab.
5. Select Oracle from the data source names. Click OK. It will open 'Oracle ODBC Driver Connect'.
6. Enter the password. Click OK.
7. It will come back to the Database Query Wizard window with the 'Connection String' field filled in, something like: "DSN=oracle;UID=SCOTT;PWD=TIGER;DBQ=DB02;DBA=W;APA=T;EXC=F;FEN=T;QTO=T;FRC=10;FDL=10;LOB=T;RST=T;GDE=F;FRL=F;BAM=IfAllSuccessful;MTS=F;MDI=F;CSR=F;FWC=F;PFC=10;TLO=0;"
8. In the SQL Statement area type "select * from emp;". Click Finish.
9. It will open the 'Database Checkpoint Properties' window with the result of the query. Click OK.
10. In the Expert View it just adds one line — this is the simplest database checkpoint example, nothing special:

DbTable("DbTable").Check CheckPoint("DbTable")

11. Click Stop in order to stop the recording.
12. Let's run it. Click on Run. (We don't need to open any other window or application to run this, as our Oracle is running at the back end as a service.) In the details it will show "checked 112 cells", with the count of cells (in your case the number of cells may differ).

It means that if you go to Oracle, add or delete any row, and run this test again, it will fail. Just try to think about how QTP is comparing the expected results with the actual ones.
QTP Tutorials 7 - Bitmap Checkpoint

Now we will look at the bitmap checkpoint, which is different from the image checkpoint.

STYLE A
1. Make sure that QTP and the Flight application are open. Open a blank test.
2. Click on Record. When we click on Record, the "Record and Run Settings" window opens up. Go to the "Windows Applications" tab, choose the first option, "Record and run test on any open windows based application", and click OK.
3. Go to Insert (menu) -> Checkpoint -> Bitmap Checkpoint.
4. Click on the "Fly To" combo box. The "Object Selection - Bitmap Checkpoint Properties" window opens up. It will have "WinComboBox: Fly To" highlighted. Click OK.
5. It will open the "Bitmap Checkpoint Properties" window. Change the "Checkpoint timeout" at the bottom of the window to 0 seconds, so that we will have no wait time while running the test. Click OK.
6. Click Stop to stop recording the test.

Now you can run the test; it will pass.

STYLE B
Above, after the 3rd point, instead of clicking on the "Fly To" combo box, click somewhere in the empty space in the "Flight Schedule" area — above the "Fly From" combo box but below the line. The "Object Selection - Bitmap Checkpoint Properties" window opens up; this time it will have "WinObject: Flight Schedule" highlighted, i.e. the "Flight Schedule" area instead of just the "Fly To" combo box. Click OK. It will open the "Bitmap Checkpoint Properties" window. Now click on the "Select Area..." button; the mouse pointer will change so that you can select any area by dragging. Select the "Fly From" combo box by dragging. Change the "Checkpoint timeout" at the bottom of the window to 0 seconds. Click OK. Click Stop to stop recording the test.

To see how it stores the results, just fail the test: if you have recorded in style A, select any value in the "Fly To" combo box and then run the test. In the result window, on the left hand side, when you click on Checkpoint "Fly To:", it will show you the expected bitmap and the actual bitmap on the right hand side. (Note: it will show that only in case of a Failed result.)
QTP Tutorials 8 - Image Checkpoint

We will look at the image checkpoint. On your system under My Documents there will be a folder named My Pictures, and under this you will find a folder Sample Pictures (containing 4 pictures: Blue Hills, Sunset, Water lilies, Winter). We will run this test with one of the images there.

1. Go to My Documents -> My Pictures -> Sample Pictures, right click on the image named 'Sunset' and open it with Internet Explorer. Now only a new blank test and Internet Explorer with this image should be open.
2. Click on Record. When we click on Record, the "Record and Run Settings" window opens up. Go to the "Web" tab, choose the first option, "Record and run test on any open browser", and click OK.
3. Go to Insert (menu) -> Checkpoint -> Standard Checkpoint (or press F12). The mouse pointer will become a hand and QTP will be minimized.
4. Click on the image which is opened in the explorer. It will open the 'Object Selection - Checkpoint Properties' window with "Image: Sunset" highlighted. Click OK.
5. It will open the 'Image Checkpoint Properties' window. In this window just uncheck all the property values like href, html tag etc., and only check the last property, which is src. Everything else will be default. Click OK.
6. Click Stop to stop recording the test.

In the Expert View it will just add one line (the browser and page names reflect the image's file path):

Browser(" %20S").Page("").Image("Sunset").Check CheckPoint("Sunset")

If you run it with that image open in Internet Explorer, it will pass. This test is not intelligent enough: it is just checking that the image in the explorer is in the same location it was in when the test was recorded, and that its name is Sunset. If you change the name of some other picture in that folder to Sunset and run the test with that, it will also pass. In this way you can test for some or all of the properties of the image which it showed in the 'Image Checkpoint Properties' window.
QTP Tutorials 9 - Text Checkpoint

Now we will look at the text checkpoint:

1. Open a blank test and a web page in offline mode (the page used here is the offline error page whose first paragraph starts with "The page you are looking...").
2. Click on Record. When we click on Record, the "Record and Run Settings" window opens up. Go to the "Web" tab, choose the first option, "Record and run test on any open browser", and click OK.
3. Go to Insert (menu) -> Checkpoint -> Text Checkpoint. The mouse pointer will become a hand and QTP will be minimized.
4. Click on the first paragraph (which starts with "The page you are looking...") of that web page. The "Text Checkpoint Properties" window opens up. It will show the text to be checked in the "Checkpoint Summary" area in red colour, and also show in blue colour the text which is displayed before and after the selected text.
5. Click on Configure — here you can change your selected text, change the before and after text, and so on. Try to understand those, but for now just click OK.
6. At the bottom of the "Text Checkpoint Properties" window change the 'Checkpoint timeout' to 0 seconds. Again click OK to come out of the "Text Checkpoint Properties" window.
7. Click on Stop in order to stop recording.

Run the test, and when it has passed, go to the results window; on the left hand side expand every option and click on the last option, Checkpoint "Cannot find server". On the right hand side it will show you the details.

QTP Tutorials 10 - Table Checkpoint

In this tutorial we will look at a table checkpoint just to get familiar with it.

1. Open a blank test and also open the website ".../software/software-testinglife-cycle.php" in offline mode. This website has a table at the bottom of the page.
2. Click on Record. When we click on Record, the "Record and Run Settings" window opens up. Go to the "Web" tab, choose the first option, "Record and run test on any open browser", and click OK.
3. Go to Insert (menu) -> Checkpoint -> Standard Checkpoint (or press F12). The mouse pointer will become a hand and QTP will be minimized.
4. Click somewhere inside the table. The "Object Selection - Checkpoint Properties" window opens.
5. Select "WebTable: Software Testing Life Cycle", which has a table icon on its left ("Software Testing Life Cycle" is the name of the table). Click OK.
6. The 'Table Checkpoint Properties' window opens. It will show all the rows and columns of the selected table.
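The table checkpoint above should record a single Check CheckPoint statement on the WebTable test object. A sketch of what the Expert View line would look like — the browser and page names are assumptions, filled in by analogy with the other tutorials in this document:

```vbscript
' Hedged sketch of the recorded table checkpoint statement
' (Browser/Page names are placeholders, not taken from the tutorial):
Browser("...").Page("...").WebTable("Software Testing Life Cycle").Check CheckPoint("Software Testing Life Cycle")
```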
Just change the Checkpoint timeout at the bottom of this window to 0 seconds and click OK. Click Stop in order to stop recording. Run the test and analyze the results in the result window — mainly the checkpoint results, to see how QTP verifies the result. We will manipulate test results in later tutorials.

QTP Tutorials 11 - Checkpoint Return Value

We will use the standard checkpoint which we did in tutorial 4. NOTE: a checkpoint always returns a value; it depends on us whether we capture it or not. Let's now capture it.

Open the test that contains the standard checkpoint. In the Expert View of the test you will see only one line, i.e.:

Window("Flight Reservation").WinButton("FLIGHT").Check CheckPoint("FLIGHT")

Now we will make some changes in this one line so that it can return some value. Declare a variable and catch the return value in that variable:

Dim return
return = Window("Flight Reservation").WinButton("FLIGHT").Check CheckPoint("FLIGHT")
msgbox (return)

One thing more we need to do here: we have to enclose CheckPoint("FLIGHT") in brackets. So the final version looks like this:

Dim return
return = Window("Flight Reservation").WinButton("FLIGHT").Check (CheckPoint("FLIGHT"))
msgbox (return)

Now run the test and see the msgbox appearing with the return value.

Difference Between Text & Text Area Checkpoint

Text Checkpoint — checks that a text string is displayed in the appropriate place on a Web page or application. You can add a text checkpoint while recording or editing steps in a Windows-based or Web-based application.

Text Area Checkpoint — checks that a text string is displayed within a defined area in a Windows-based application, according to specified criteria. You can add a text area checkpoint only while recording a test on Windows-based applications, such as Standard Windows, Visual Basic, Java, and ActiveX.

A small & simple example to get a feel of both Text and Text Area checkpoints — make sure that the Flight Reservation window is open:

1. Open a new test in QTP and click Record.
2. Go to Insert (menu) -> Checkpoint -> Text Checkpoint. QTP will be minimized and the mouse pointer will change into a pointing hand.
3. With the pointing hand click on $110.00 in the Flights table. The Object Selection window opens. Click OK. The Text Checkpoint Properties window opens. Click Cancel.
4. Go to Insert (menu) -> Checkpoint -> Text Area Checkpoint. QTP will be minimized and the mouse pointer will change into crosshairs.
5. With the crosshairs select $110.00 in the Flights table. The Object Selection window for the text area checkpoint is similar to the one for the text checkpoint. Click OK. The Text Area Checkpoint Properties window opens. Click Cancel.

In Windows-based environments, if there is more than one line of text selected, the Checkpoint Summary area displays [complex value] instead of the selected text string. You can then click Configure to view and manipulate the actual selected text for the checkpoint.
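Building on the return value captured above, the Boolean can drive explicit reporting. A sketch that combines it with Reporter.ReportEvent (which this document uses later in the synchronization example); the message strings here are mine:

```vbscript
' Capture the checkpoint result and report pass/fail explicitly.
Dim rc
rc = Window("Flight Reservation").WinButton("FLIGHT").Check (CheckPoint("FLIGHT"))
If rc = True Then
    Reporter.ReportEvent micPass, "FLIGHT checkpoint", "checkpoint returned True"
Else
    Reporter.ReportEvent micFail, "FLIGHT checkpoint", "checkpoint returned False"
End If
```

This is useful when a later step should run only if the checkpoint passed, instead of simply letting the checkpoint mark the step in the results.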
Now considering both Text and Text Area checkpoints, analyse for yourself: which one is better (if any), and which one is to be used in which situation?

QTP (QuickTest Professional) Recording

The default mode of recording is the Normal recording mode. Normal mode takes full advantage of the QuickTest test object model, as it recognizes the objects in the application regardless of their location on the screen. There are other recording modes also: Analog Recording and Low Level Recording.

Analog Recording: exact mouse and keyboard operations are recorded in relation to either the screen or the application window. In this mode QTP also records and tracks every movement of the mouse — for example, recording a signature produced by dragging the mouse. Analog Recording steps are not editable from within QuickTest. Recording in Analog mode can be relative to the screen or relative to a specific window (see the user guide for detail). In Analog Recording a separate file is saved and stored with the action, and QuickTest adds to your test a RunAnalog statement that calls the recorded analog file.

Use Analog Recording when:
- The actual movement of the mouse is what you want to record.
- The exact location of the operation on your application screen is necessary. (In normal mode, QuickTest performs the step on an object even if it has moved to a new location on the screen.)

Low Level Recording: can be used at any time, for an environment or an object not recognized by QuickTest. It records at object level and records all run-time objects as Window or WinObject test objects: QuickTest records all parent-level objects as Window test objects and all other objects as WinObject test objects. Each step recorded in Low Level Recording mode is shown in the Keyword View and the Expert View.

Use Low Level Recording when:
- Environments or objects are not supported by QuickTest.
- The location of the object is important to your test — switch to Low Level Recording.

All three modes of recording can be used in a single test: we can switch to either Analog Recording or Low Level Recording in the middle of a recording session for specific steps and then return to normal recording mode. Note that Analog Recording and Low Level Recording require more disk space than normal recording mode.

Parameterizing Tests in QTP (QuickTest Professional)

By replacing fixed values with parameters, QuickTest enables you to enlarge the scope of a basic test. This is known as parameterization, and it greatly increases the power and flexibility of a test. A parameter is a variable that is assigned a value from an external data source or generator. Values in steps and checkpoints, and also the values of action parameters, can be parameterized. Parameters let us check how the application performs the same operations with multiple sets of data.

There are four types of parameters:

Test/action parameters. Test parameters make it possible for us to use values passed from the test. Action parameters enable us to pass values from other actions in the test. To use a value within a specific action, the value must be passed down through the action hierarchy of the test to the required action. For example, suppose that we want to parameterize a step in Action3 using a value that is passed into the test from the external application that runs (calls) the test. We can pass the value from the test level to Action1 (a top-level action) to Action3 (a nested action of Action1), and then parameterize the required step using this action input parameter value (that was passed through from the external application).

Alternatively, we can pass an output action parameter value from an action step to a later sibling action at the same hierarchical level. For example, suppose that Action2, Action3, and Action4 are sibling actions at the same hierarchical level, and that these are all nested actions of Action1. We can parameterize a call to Action4 based on an output value retrieved from Action2 or Action3, and then use these parameters in the action step.

Data Table parameters. These allow us to create a data-driven test (or action) that runs several times using the data that we supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.

Environment variable parameters. These allow us to use variable values from other sources during the run session. These may be values that we supply, or values that QuickTest generates for us based on conditions and options we choose.

Random number parameters. These enable us to insert random numbers as values in the test.

The values of object properties can be parameterized for a selected step. When the value of an object property for a local object is parameterized, we are amending the test object description in the local object repository; therefore all occurrences of the specified object within the action are parameterized. The values of the operation (method or function arguments) defined for a step can also be parameterized. Parameterizing the value of a checkpoint property enables us to check how an application or Web site performs the same operation based on different data.

QTP (QuickTest Professional) Keyword View

In QTP we first of all record a test, then run the test, and then analyze the results. Before running the test we can also enhance it with checkpoints and parameters; values in steps and checkpoints can be parameterized while recording or editing the test. After recording all the operations, QuickTest displays them as steps in the Keyword View and generates them in a script (in the Expert View). First of all let's talk a little about the Keyword View.

In the Keyword View there are 4 visible columns (for other valuable information on the points below, please see the QTP user guide, pg 92 and pg 114):

Item — the item on which we want to perform the step; it can be a test object, utility object, function call, or statement. This column shows a hierarchical icon-based tree. The highest level of the tree is actions, and all steps are contained within the relevant branch of the tree.

Operation — the operation (method or function) to be performed on the item selected in the Item column, for example Click or Select.
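A short sketch of how two of the parameter types above appear in the Expert View. The Data Table column "FlyFrom" and the user-defined environment variable "AgentName" are assumptions created for this example; DataTable(parameterID, sheetID) and the Environment object are standard QTP syntax:

```vbscript
' Data Table parameter: QuickTest runs one iteration per row of the
' Global sheet, pulling a different value from the "FlyFrom" column each time.
Window("Flight Reservation").WinComboBox("Fly From:").Select DataTable("FlyFrom", dtGlobalSheet)

' Environment variable parameter - a built-in value supplied by QuickTest:
msgbox Environment("OS")

' ...and a user-defined value that we supply at run time:
Environment.Value("AgentName") = "axc"
Window("Flight Reservation").WinEdit("Name:").Set Environment.Value("AgentName")
```

The same parameters can also be assigned from the Value cell of a step in the Keyword View, without writing the statements by hand.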
Value — the argument values for the selected operation, for example, the mouse button to use when clicking the image.

Documentation — a read-only auto-documentation of what the step does, in an easy-to-understand sentence, for example: Click the "findFlights" image.

Assignment — the assignment of a value to or from a variable. For example, Store in cCols would store the return value of the current step in a variable called cCols so you can use the value later in the test. This column is not visible by default.

Comment — any textual information you want to add regarding the step. This column is also not visible by default.

Actions in QTP 9 (QuickTest Professional)

Actions break up the test into logical sections/units, such as specific activities that we perform in our application. When we create a new test, it contains a call to one action. By breaking up tests into calls to multiple actions, we can design more modular, well organized, and professional tests.

An action is stored with the test in which you created it. An action has its own test script, containing all of the steps recorded in that action, and all objects in its local object repository. For every action called in the test, QuickTest creates a corresponding action sheet in the Data Table, so that we can enter Data Table parameters that are specific to that action only.

If you create a test in which you log into the system (email), check the inbox, and then log out of the system (email), your test might be structured as shown — one test calling three separate actions:

Test 1
  Call to action 1  --->  Action 1 (Logging In)
  Call to action 2  --->  Action 2 (Checking Inbox Mails)
  Call to action 3  --->  Action 3 (Logging Out)

Actions make it possible to parameterize and iterate over specific elements of a test. They also make it easier to re-record steps in one action when part of your application changes. When we run a test with multiple actions, the test results are divided by actions within each test iteration, so that we can see the outcome of each action and can view the detailed results for each action individually.

Three types of actions are:

Non-reusable action — can be called only once, and that too only in the test with which it is stored.

Reusable action — reusable actions are like functions in any programming language. If there is a process that needs to be included in several tests, we can record, modify, and enhance the steps of the process and save them in a reusable action; then we can call the action from other tests, rather than recording, modifying, and enhancing the same steps each time. It can be called several times by the test with which it is stored (the local test), as well as by other tests. By default, new actions are non-reusable; each action created in a test can be marked as reusable or non-reusable. If you expect other users to open your tests, and all actions in your tests are stored in the same drive, you should use relative paths for your reusable actions so that other users will be able to open your tests even if they have mapped their network drives differently. Deleting a reusable action that is called by other tests will cause those tests to fail.

External action — a reusable action stored with another test. External actions are read-only in the calling test: when a call to an external action is inserted, the action is inserted in read-only format, but we can choose to use a local, editable copy of the Data Table information for the external action.

We can create an additional call to any reusable or external action in the test by pressing CTRL while we drag and drop the action to another location at a parallel (sibling) level within the test.

QTP Sync, Wait and Synchronization

Synchronization makes available a specified amount of time for an object to process prior to moving on to the next step. Synchronization is there to take care of the timing problems between QTP and the AUT (application under test). Wait is like forcing QTP to wait for a specified amount of time, while synchronization is not a 'forced' wait: e.g., if the wait is for 10 seconds and a webpage loads in 3 seconds, the test still waits for 7 more seconds; with synchronization, QTP moves forward as soon as the specific page loads.

Examples where synchronization can be used:
- For a web page to load.
- For a progress bar to reach 100%.
- For a button to become enabled or disabled.
- For client-server communications to finish.

Synchronization is possible in many ways:

1) We can insert a synchronization point, for example for a progress bar to reach 100%. QTP will generate a WaitProperty statement in the Expert View in the case of synchronization.
2) We can use Exist or Wait statements. Exist statements always return a Boolean (0 or 1) value.
3) We can also modify the default amount of time that QTP waits for a Web page to load. (Browser Navigation Timeout: File (menu) -> Settings -> Web tab.)
4) When working with tests, we can increase the default timeout settings for a test to instruct QuickTest to allow more time for objects to appear. (Object Synchronization Timeout: File (menu) -> Settings -> Run tab.)

Note that inserting a synchronization point is enabled only during recording.
button which is on the R.. . Exist statements always return a Boolean (0 or 1) value..2) We can use Exist or Wait statements..com Bottom of Form Example of Synchronization Make sure that only QTP and Sample Flight application are open. Enter your name in the Name field." (Don't forget to put those 3 dots. we can increase the default timeout settings for a test to instruct QuickTest to allow more time for objects to appear. Go to "Windows Applications" tab and choose first option "Record and run test on any open windows based application.' text... 3) We can also modify the default amount of time that QTP waits for a Web page to load. In Fly From choose Denver. 'Object Selection . QTP will be minimized and cursor will take the shape of a hand. Let the first option remains selected in that and just click on ok. Go to Insert (menu)-> Synchronisation Point. Click on that 'Insert Done. qtp. and double quotation marks e. Enter the Date of flight as tommorrows date. Click on Record.. Click on Insert Order and let it complete uptill 100% untill you see the 'Insert Done.H. It will automatically fill some of the fields.Synchronization Point' window will open with 'ActiveX: Threed Panel Control. WinButton("Insert Order")..WinButton("Button").Click Window("Flight Reservation")..ActiveX("Threed Panel Control").WinButton("Insert Order").Click Window("Flight Reservation")..WinComboBox("Fly From:").WinComboBox("Fly From:").Select "Denver" Window("Flight Reservation").Select "Denver" Window("Flight Reservation").WinButton("Button"). WaitProperty Waits until the particular object property attain the specified value or exceeds the specified timeout before continuing to the next step. .9 Window("Flight Reservation").WinButton("FLIGHT").Dialog("Flights Table").Set "axc" Window("Flight Reservation").WinComboBox("Fly To:").WinButton("FLIGHT")..ReportEvent micPass.".blogspot. 
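Method 2 above (Exist and Wait statements) is not shown in code in this tutorial, so here is a minimal sketch. It reuses the Flight sample's object names, and the 10-second timeout is an arbitrary choice of mine, not a value from the tutorial:

```vbscript
' Sketch: hand-rolled synchronization with Exist, contrasted with Wait.
' Exist(timeout) polls for the object and returns True/False;
' Wait pauses unconditionally for the full time given.
If Window("Flight Reservation").WinButton("Insert Order").Exist(10) Then
    Window("Flight Reservation").WinButton("Insert Order").Click
Else
    Reporter.ReportEvent micFail, "Sync", "Insert Order button did not appear in 10 seconds"
End If

Wait 2   ' a forced pause: always takes the full 2 seconds
```

The If/Else branch lets the test report a meaningful failure instead of simply timing out on the next step.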
This whole process adds a WaitProperty statement. The recorded code in the Expert View looks like this:

Window("Flight Reservation").ActiveX("MaskEdBox").Type "092407"
Window("Flight Reservation").WinComboBox("Fly From:").Select "Denver"
Window("Flight Reservation").WinComboBox("Fly To:").Select "Frankfurt"
Window("Flight Reservation").WinButton("FLIGHT").Click
Window("Flight Reservation").Dialog("Flights Table").WinButton("OK").Click
Window("Flight Reservation").WinEdit("Name:").Set "axc"
Window("Flight Reservation").WinButton("Insert Order").Click
Window("Flight Reservation").ActiveX("Threed Panel Control").WaitProperty "text", "Insert Done...", 10000
Window("Flight Reservation").WinButton("Button").Click

WaitProperty waits until the particular object property attains the specified value, or until the specified timeout is exceeded, before continuing to the next step.

I have modified the above script a little bit to capture the WaitProperty return value:

rc = Window("Flight Reservation").ActiveX("Threed Panel Control").WaitProperty("text", "Insert Done...", 10000)
If rc = true Then
    Reporter.ReportEvent micPass, "sync on Insert Done...", "Property of text is true"
End If
msgbox rc

Reporter is an object used for sending information to the test results, and it uses the ReportEvent method to accomplish this; the ReportEvent method sends the results to the Results window. For more info on these please see QTP help.

Example of WAIT

Make sure that only QTP is open.
1. Click on Record. The "Record and Run Settings" window opens up. Go to the "Windows Applications" tab, choose the first option, "Record and run test on any open windows based application," and click OK.
2. Go to Start -> All Programs -> QuickTest Professional -> Sample Applications -> Flight.
3. Enter the username as your first name (make sure to enter 4 or more characters), use Tab to move to the password textbox, and enter the password as 'mercury'. Click OK.
4. When the Flight Reservation window is open, go to File (menu) -> Exit.
5. Click Stop in order to stop the recording.

It will record the code below. The one thing I have added extra is wait(5) in step 4 (just go to the Expert View and add this line before the line which includes the encrypted password):

1) SystemUtil.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4a.exe","","C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\","open"
2) Dialog("Login").WinEdit("Agent Name:").Set "sachin"
3) Dialog("Login").WinEdit("Agent Name:").Type micTab
4) wait(5)
5) Dialog("Login").WinEdit("Password:").SetSecure "46ed14b628c7ae93e3a3ab35576f08fc424a6fb9"
6) Dialog("Login").WinButton("OK").Click
7) Window("Flight Reservation").WinMenu("Menu").Select "File;Exit"

Example of Sync

Make sure that your internet connection is on and QTP is open. Make sure your default page is www.google.com, so that when you open Internet Explorer it by default opens google.com as the homepage.

1. Open Internet Explorer and wait until www.google.com is open (i.e. until you see 'Done' on the status bar at the bottom).
2. Click on Record. The "Record and Run Settings" window opens up. Go to the "Web" tab and choose the first option, "Record and run test on any open browser," and click OK. (Also make sure the "Windows Applications" tab has its first option selected.)
3. While recording, type Blogger in the search text box and click on the "I'm Feeling Lucky" button instead of the Search button.
4. When www.blogger.com is open, click on the Back toolbar button (below the File menu) to go back to google.com again, then close the browser from the File menu.

It will record the following code, with Sync automatically recorded by QTP (see the 4th step):
1) SystemUtil.Run "C:\Program Files\Internet Explorer\IEXPLORE.EXE","","C:\Documents and Settings\Sachin","open"
2) Browser("Google").Page("Google").WebEdit("q").Set "blogger"
3) Browser("Google").Page("Google").WebButton("I'm Feeling Lucky").Click
4) Browser("Google").Sync
5) Browser("Google").Back
6) Browser("Google").WinToolbar("ToolbarWindow32").Press "&File"
7) Browser("Google").WinMenu("ContextMenu").Select "Close"

Try to run this code; it works fine. Now go to File (Menu) -> Settings, Web tab, and change the 'Browser navigation Timeout' to 2 seconds, for example, and then run the above code again. It will fail, because the Sync method waits for the browser to complete the current navigation, i.e. for the Blogger page ("Blogger: Create your Blog") to open, but here we have set the browser navigation timeout to 2 seconds, which is too short for the browser to complete the navigation to www.blogger.com after clicking the "I'm Feeling Lucky" button (I am not on T1 lines; my internet connection is average). What happens is: after the 3rd step it waits just 2 seconds, after which it goes to the Back button, but it finds it disabled, as it only gets enabled once www.blogger.com is open. Keep 'Browser navigation Timeout' at 10 seconds in File (Menu) -> Settings and then it should pass, because within 10 seconds the Back button surely becomes enabled after the 3rd step of clicking "I'm Feeling Lucky".

QTP Parameters

This is not an exhaustive material on parameterization. This is just to give you a startup on parameters in QTP, so that you can go ahead and do wonders with parameters in QTP on your own. The topics covered are:

Environment Variables in QTP
Random Variables in QTP
Test parameters
Action Parameter
Global and Action data sheet Parameters
QTP Output Values
Parameterize a checkpoint
QTP Environment Variables

User-Defined Internal, User-Defined External, and Built-in are the types of environment variables available in QTP.

Built-in variables, as the name suggests, are predefined by QTP. Examples of such variables are OS, OSVersion, and ActionName, which give the operating system, the operating system version, and the name of the action that is currently running, respectively. Let's look at an example of this:

1. Open a new test.
2. Go to File (Menu) -> Settings; a Test Settings window opens.
3. Go to the Environment tab.
4. By default the Built-in variable type is selected, and you will be able to see the Name and Description of the Built-in variables below the Variable type dropdown box. You can click on any of those variables to see its current value. (I did the above steps just to show you from where you can access Built-in variables.)
5. Now close this Test Settings window and go to the test.
6. In the Expert View type:

a = environment("ActionName") & " is running on " & environment("OS")
msgbox (a)

7. Run the test. This is just a simple way to show how a Built-in environment variable works.

User-Defined Internal variables are the variables defined by you (the user) within the test, which are saved within the test and are also accessible from within the test. Let's look at an example of this:

1. Open a new test.
2. Go to File (Menu) -> Settings; a Test Settings window opens.
3. Go to the Environment tab.
4. From the Variable type dropdown select User-defined.
5. Click on the '+' which is on the right of the Variable type dropdown. The 'Add New Environment Parameter' window opens up. In Name type 'a' and in Value type 'hello' (without quotes; I have added quotes just for clarity), and click OK. It will add the variable with its type as 'internal'.
6. Click Apply and OK to come out of the Test Settings window.
7. Go to the Expert View and type:

msgbox(environment("a"))

8. Now run the test. It will show you the value of variable 'a' in the message box.
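The next section loads user-defined variables from an external .xml file, but the file's contents were lost from this copy of the tutorial. An external environment-variable file generally has the following shape; this is a reconstruction based on the Address example used below, so verify the exact format against QTP help:

```xml
<Environment>
    <Variable>
        <Name>Address</Name>
        <Value>25 yellow Road</Value>
    </Variable>
</Environment>
```

Each additional Variable element defines one more environment variable that becomes available, read-only, to any test that imports the file.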
User-Defined External variables are the variables which are defined in a file outside of the test. These act as read-only for the test. You can create as many files for environment variables as you want and choose them for your test. Let's look at an example of this:

Open a new text file, type the variable definition lines in it, and save it with an .xml extension. (I saved it in the 'tests' folder in the 'QuickTest Professional' folder under C:\Program Files.)

1. Open a new test.
2. Go to File (Menu) -> Settings; a Test Settings window opens.
3. Go to the Environment tab.
4. From the Variable type dropdown select User-defined.
5. Click on the "Load variables and values from external file" check box and import the external xml file that we created above. As soon as it is imported, the complete path of that file will be shown in the File text box, and the variable in it will show under the Name, Value and Type headings (in our case it will show Address under Name, 25 yellow Road under Value and External under Type). Click Apply and OK to come out of it.
6. Go to the Expert View and type:

msgbox(environment("Address"))

7. Now run the test. It will show you the value of the variable 'Address' in the message box.

QTP Random Variables

There are many different ways in which you can use random numbers in QTP. Let's jump to the examples straight away.

First example of Random Numbers: when you define parameters for an action, you can set the parameter's value as a random number.

1. Open a new test.
2. In the Keyword View right-click on Action1 and select Action Properties. The 'Action Properties' window opens.
3. Go to the 'Parameters' tab.
4. In the 'Input Parameters' area click on the '+' sign and enter the Name of the input parameter as 'a', the Type as Number, and the Default value as 1. Click OK.
5. Again right-click on Action1 in the Keyword View and select 'Action Call Properties'. The 'Action Call Properties' window opens.
6. Go to the 'Parameter Values' tab. Make a single click under the 'Value' column in the 'Input Parameters' area; it will become a button '<#>'. Click on this button. It will open the 'Value Configuration Options' window.
7. Click on the 'Parameter' radio button and select 'Random Number' from the dropdown.
8. In the Numeric Range enter 0 against From and 100 against To.
9. Click on the Name checkbox and choose arg_a from the dropdown.
10. In the 'Generate New Random Number' area, select the first option, 'For each action iteration'. Click OK. Again click OK to come out of the 'Action Call Properties' window.
11. Now go to Insert (Menu) -> Call to New Action. The 'Insert Call to New Action' window opens; just click OK to insert a new action, Action2.
12. Go to the Expert View of Action1 and type:

msgbox "action1"
msgbox(parameter("a"))

13. Go to the Expert View of Action2 and type:

For i=1 to 3
    RunAction "Action1", oneIteration, RandomNumber("arg_a")
Next

14. When you copy the above text to the Expert View of Action2, it may show you a message that it has made Action1 reusable; just click OK.
15. Now run the test. (It would be better if you run it by activating the Expert View; it will then show you which step it is currently running by pointing to that particular step with a yellow arrow, and you will be able to understand it in a better way.) You will see that it shows a different value in each msgbox(), because we selected 'For each action iteration' in the 'Generate new random number' area. If we select the second option, 'For each test iteration', then the message boxes will show the same value within a run, but different values if you run it the next time, i.e. a different value at each test run.

Second example of Random Numbers: here is another way of generating random numbers. RandomNumber is an object:

RandomNumber(ParameterNameOrStartNumber [, EndNumber])

EndNumber is optional above. Open a new test, and in the Expert View write these lines and run the test:

For i=1 to 5
    var1=RandomNumber (0, 100)
    msgbox(var1)
Next

Third example of Random Numbers (this is more or less the same as the first one):
One more way is to define a Random Number parameter in the 'Parameter Options' or 'Value Configuration Options' dialog box:

1. Open a new test.
2. In the Keyword View right-click on Action1 and select 'Action Properties'. The 'Action Properties' window opens.
3. Go to the 'Parameters' tab.
4. In the 'Input Parameters' area click on the '+' sign and enter the Name of the input parameter as 'a', the Type as Number, and the Default value as 1.
5. Click OK.
6. Again right-click on Action1 in the Keyword View and select 'Action Call Properties'. The 'Action Call Properties' window opens.
7. Go to the 'Parameter Values' tab.
8. Make a single click under the 'Value' column in the 'Input Parameters' area; it will become a button '<#>'. Click on this button. It will open the 'Value Configuration Options' window.
9. Click on the 'Parameter' radio button and select 'Random Number' from the dropdown.
10. In the Numeric Range enter 0 against From and 100 against To. Click on the 'Name' checkbox and choose arg_a from the dropdown. In the 'Generate New Random Number' area, select the first option, 'For each action iteration'. Click OK. Again click OK to come out of the 'Action Call Properties' window.
11. Now in the Expert View of Action1 type:

x=RandomNumber("arg_a")
msgbox(x)

12. Run the test.

Fourth example of Random Numbers: another VBScript method of generating a random number:

For i= 1 to 3
    randomize
    var1 = Int((6 * Rnd) + 1) ' Generate random value between 1 and 6.
    MsgBox var1
next

Let's talk about Randomize and Rnd for some time. In simple terms, Rnd is a function and Randomize is used to initialize this function, giving it a new seed value.

Randomize [number]

We use a number with Randomize to initialize the Rnd function's random-number generator, using number as the seed. If the number is omitted, the value returned by the system timer is used as the new seed value. If Randomize is not used, the Rnd function (with no arguments) uses the same number as a seed the first time it is called.

No matter how many times you run the below code, it generates the same values:

For i= 1 to 3
    randomize(2)
    var1 = Int((6 * Rnd) + 1) ' Generate random value between 1 and 6.
    MsgBox var1
next

But if you omit randomize(2) from the above code and instead put only randomize, then at each run it generates different values:

For i= 1 to 3
    randomize
    var1 = Int((6 * Rnd) + 1) ' Generate random value between 1 and 6.
    MsgBox var1
next

Some light on Rnd: the Rnd function returns a value less than 1 but greater than or equal to 0.

Rnd(number)

If the number is less than zero (< 0), then Rnd generates 'the same number' every time, using number as the seed:

For i= 1 to 3
    x=rnd(-1)
    msgbox(x)
Next

If the number is greater than zero (> 0), then Rnd generates 'the next random' number in the sequence:

For i= 1 to 3
    x=rnd(1)
    msgbox(x)
Next

If the number is equal to zero (= 0), then Rnd generates 'the most recently generated' number:

For i= 1 to 3
    x=rnd(0)
    msgbox(x)
Next

If the number is not supplied, then Rnd generates 'the next random number in the sequence':

For i= 1 to 3
    x=rnd()
    msgbox(x)
Next

The following formula is used to produce a random number in a given range:

Int((upperbound - lowerbound + 1) * Rnd + lowerbound)

Likewise, Int((6 * Rnd) + 1) generates a random value between 1 and 6.
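The range formula above can be wrapped up as a small self-contained sketch; the bounds 10 and 20 are arbitrary example values of mine, not from the tutorial:

```vbscript
' Sketch: random integer in an arbitrary range [lowerbound, upperbound].
Dim lowerbound, upperbound, n
lowerbound = 10
upperbound = 20
Randomize   ' seed from the system timer so each run differs
n = Int((upperbound - lowerbound + 1) * Rnd + lowerbound)
MsgBox n    ' an integer between 10 and 20, inclusive
```

Because Rnd returns a value in [0, 1), multiplying by the range width plus one and truncating with Int makes both bounds reachable.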
Remember: for any given initial seed, the same number sequence is generated, because each successive call to the Rnd function uses the previous number as a seed for the next number in the sequence. Before calling Rnd, use the Randomize statement without an argument to initialize the random-number generator with a seed based on the system timer.

QTP Test Parameters

This example shows how to declare test parameters and how to access them.

1. Open a new test.
2. Go to File -> Settings; a 'Test Settings' window will open. Go to the 'Parameters' tab.
3. Click on the '+' sign which is on the top right. Enter the Name of the parameter as 'vartest' and its Default Value as 'hello'. The Type of this parameter is string. Click Apply and then OK.
4. Now in the Keyword View right-click on Action1 and select 'Action Properties'. The 'Action Properties' window opens. Go to the 'Parameters' tab.
5. Click on the '+' sign which is on the top right. Enter the Name of the parameter as 'varaction', its Type as string, and no default value. Click OK to come out of that window.
6. Again in the Keyword View right-click on Action1 and select 'Action Call Properties'. The 'Action Call Properties' window opens. Go to the 'Parameter Values' tab. There you will see the 'varaction' action parameter we created earlier.
7. Make a single click under the 'Value' heading; it will show a button like <#>. Just click on this button to open the 'Value Configuration Options' window.
8. Click on the 'Parameter' radio button and select 'Test/action Parameter' from the dropdown.
9. The 'Test Parameters' radio button will be selected by default; under it, select 'vartest' from the Parameter dropdown. [Remember, this vartest is the test parameter we created at the beginning.]
10. When you click OK to come out of the 'Value Configuration Options' window, under 'Value' it will show <vartest>. Click OK.
11. Go to the Expert View and type:

msgbox(parameter("varaction"))

12. Now run the test. While running it will show 'hello' in the msgbox.
Remember (taken from the QTP guide): you can directly access test parameters only when parameterizing the value of a top-level action input parameter, or when specifying the storage location for a top-level output parameter. To use values supplied for test parameters in steps within an action, you must pass the test parameter to the action containing the step. Alternatively, you can enter the parameter name in the Expert View using the Parameter utility object, in the format: Parameter("ParameterName").

QTP Action Parameter

1. Open a new test.
2. In the Keyword View right-click on Action1 and select 'Action Properties'. The 'Action Properties' window opens. Go to the 'Parameters' tab.
3. Click on the '+' sign which is on the top right. Enter the Name of the parameter as 'a' and its Type as Number. Click OK. In the same way create another Number parameter, 'b'.
4. In the General tab, click on the 'Reusable Action' checkbox at the bottom to make the action reusable.
5. In the Expert View of Action1 type:

s1=parameter("a")
s2=parameter("b")
msgbox(s1+s2)

6. Go to Insert (menu) -> Call to New Action. The 'Insert Call to New Action' window opens; just click OK to insert a new action, Action2.
7. In the Expert View of Action2 type:

RunAction "Action1", oneIteration, 2, 2

8. In the Keyword View right-click on Action2 and select 'Run from Step'. It will show you the sum, 4, in the msgbox.

QTP Global & Action Data Sheet Parameters

Example 1: how QTP iterates, for each row in the global data sheet, the rows of an action's data sheet.

1. Go to Start -> All Programs -> QuickTest Professional -> Sample Applications -> Flight.
2. Open a new test.
3. Click on Record.
4. The "Record and Run Settings" window opens up. Go to the "Windows Applications" tab, choose the first option, "Record and run test on any open windows based application," and click OK.
5. Enter the Agent Name as 'mary' and the Password as 'mercury'. Click OK (make sure we click OK with the mouse and do not hit the Return (Enter) key).
6. When the Flight Reservation window is open, go to File -> Exit.
7. Click Stop in order to stop recording the test.
8. In the Keyword View, under the 'Value' column, make a single click on 'mary' (the Agent Name). Click on the Parameter radio button, select Data Table from the dropdown, and let everything else be default. Click OK to close that window. In the Global Data Sheet it will add a new column with 'mary' as its first value.
9. Go to the Global Data Sheet and add more rows below 'mary'. I added 'rama', 'amar' and 'Sumit'.
10. Go to the Expert View of action one and type: msgbox("acton1"). Then, in the General tab of its Action Properties, click on the 'Reusable Action' checkbox to make the action reusable.
11. Go to Insert (menu) -> Call to New Action to insert action2.
12. For this action2, repeat steps 3 to 8 (this time I used the Agent Name 'bill'), with one difference: when you click on the 'Parameter' radio button and select Data Table from the dropdown, make sure you select Current action sheet (local) in the Location in Data Table area. It will add a new column in the action2 (local) Data Sheet with 'bill' as its first row.
13. In the Local Data Sheet (action2) add two more rows to make them a total of 3.
14. Do the same for action2 as in step 10: type msgbox("acton2") in its Expert View and make action2 reusable, as we did for action1.
15. Go to File (menu) -> Settings, go to the Run tab and select the "Run on all rows" radio button.
16. Now my action1 looks like this:

SystemUtil.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4a.exe","","C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\","open"
msgbox("acton1")
Dialog("Login").WinEdit("Agent Name:").Set DataTable("p_Text", dtGlobalSheet)
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4725bcebeea3b6682d186bf7b15ae92cc4e6c0ba"
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").WinMenu("Menu").Select "File;Exit"

17. Now run the test. I have added the msgbox step in both actions just to make you understand how QTP picks up the rows from the Data Sheets; otherwise it runs fast and some people may not be able to follow.

Example 2: this example shows that each action can access data not only from the Global Data Sheet or its own Local Data Sheet, but also from another action's Data Sheet in the same test.

1. Open a new test. Insert two actions.
2. In the Global Data Table, in cell (A,1) type 'Global Data'. In the Action1 Data Table, in cell (A,1) type 'Action1 Data'. In the Action2 Data Table, in cell (A,1) type 'Action2 Data'.
3. In the Expert View of action1 type:

msgbox("I am in action 1")
rc = DataTable.Value("A", dtGlobalSheet) 'accessing data from the Global data sheet from action1
msgbox rc
rc = DataTable.Value("A", 2) 'accessing data from the action1 data sheet from action1
msgbox rc

4. In the Expert View of action2 type:

msgbox("I am in action 2")
rc = DataTable.Value("A", dtGlobalSheet) 'accessing data from the Global data sheet from action2
msgbox rc
rc = DataTable.Value("A", 2) 'accessing data from the action1 data sheet from action2
msgbox rc

5. Now run the test.

QTP Output Values

This is a very small tutorial on output values, just to make you familiar with the process so that you can start on your own. For a complete understanding of output values please see the QTP User Guide.
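Once a value has been captured into the Data Table by an output value step (as the walkthrough below demonstrates), it can be read back later in the test like any other Data Table parameter. A short hedged sketch; the column name follows the walkthrough's example:

```vbscript
' Sketch: reading a captured output value back from the Global sheet.
capturedValue = DataTable("Insert_Order_enabled_Out", dtGlobalSheet)
msgbox capturedValue
```

This is useful when a later step needs to branch on a property that was captured earlier in the run.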
These steps show how to use output values with the Data Table:

1. Open a new test and also open the sample Flight application (Flight Reservation window). Make sure that both are open and visible.
2. Click on Record in order to record a test.
3. Go to Insert (Menu) -> Output Value -> Standard Output Value. QTP will be minimized and the mouse pointer will take the shape of a hand.
4. Click on the 'Insert Order' button in the Flight Reservation window.
5. The "Object Selection - Output Value Properties" window opens with WinButton : Insert Order highlighted. In this window just click OK.
6. The "Output Value Properties" window opens with 'Insert Order' in the Name text field.
7. Click on the first checkbox (which is Property enabled and Value False).
8. In the 'Configure Value' area click on the Modify button. The 'Output Options' window opens. Just click OK. (This creates an Insert_Order_enabled_Out column in the Global sheet of the Data Table, with a value of False in the first row.) It will bring you back to the "Output Value Properties" window.
9. Earlier, under Value it was showing False (see step 7); now it will show Insert_Order_enabled_Out. Again click OK to come out of this "Output Value Properties" window.
10. Click Stop in order to stop the test.
11. Now, whatever the value of the Insert Order button's enabled property is, QTP will show that value in the Data Table (under the Insert_Order_enabled_Out column) at run time. It will also show the captured value in the Results window. Just make the Insert Order button enabled by putting some values in the Flight Reservation window, run the test, and then see that column (Insert_Order_enabled_Out) in the Data Table: it will show a true value in there at run time.

The tutorial below shows how to use output values with environment variables.

1. Open a new test and also open the sample Flight application (Flight Reservation window). Make sure that both are open and visible.
2. Click on Record in order to record a test.
3. Go to Insert (Menu) -> Output Value -> Standard Output Value. QTP will be minimized and the mouse pointer will take the shape of a hand.
4. Click on the 'Insert Order' button in the Flight Reservation window.
5. The "Object Selection - Output Value Properties" window opens with WinButton : Insert Order highlighted. In this window just click OK.
6. The "Output Value Properties" window opens with 'Insert Order' in the Name text field.
7. Click on the first checkbox (which is Property enabled and Value False), in order to highlight it and make the "Configure Value" area enabled.
8. In the 'Configure Value' area click on the Modify button. The 'Output Options' window opens.
9. In the 'Output Options' window, from the 'Output Types' dropdown, select Environment, and click OK. Now Insert_Order_enabled_Out will be a User-Defined internal environment variable. (You can check that environment variable by going to File -> Settings, Environment tab, and choosing User-Defined from the variable type.) This is all we need to do.
10. Click OK to come out of the "Output Value Properties" window, and click Stop in order to stop the recording.
11. Just add the below line in the Expert View at the end, to see the value of the environment variable:

msgbox(environment("Insert_Order_enabled_out"))

12. Now you can run the test. Just make the Insert Order button enabled by putting some values in the Flight Reservation window, and then see that environment variable's value (Insert_Order_enabled_Out): it will show a true value in there at run time. It will also show the captured value in the Results window.

Parameterize a Checkpoint

You can create a checkpoint while recording or editing a test. For this tutorial I will take into account a Text Checkpoint created through a Standard Checkpoint while editing.

1. Open a new test. Go to Start -> All Programs -> QuickTest Professional -> Sample Applications -> Flight.
2. Click on Record. Enter the Agent Name as "Sachin" and the Password as "mercury". Make sure you use the Tab key to move from one text box to another, and hit the Return (Enter) key after entering the password.
3. Click on Stop in order to stop the recording.
4. In the Keyword View go to the row which has "Sachin" under the Value column. Right-click anywhere on that row and choose "Insert Standard Checkpoint." The "Checkpoint Properties" window opens.
5. Make sure only the Text property is checked, which has a value of "Sachin", and that all the other properties are unchecked.
6. Make a single click on the Text property, in order to highlight it and make the "Configure Value" area enabled. In this area click on the Parameter radio button, and click OK to come out of the window. It will add a column in the Global Data Sheet with "Sachin" as its first value.
7. Add two more values in the subsequent rows. I added "aaaa" in the 2nd and "bbbb" in the 3rd.
[On the right-hand side of the Parameter radio button you will see the Parameter Options button (which has a paper-and-pen image on it); you can click on it to see the default values QTP has set for us.]

Now when we run the test and it opens the window where we need to enter the Agent Name and Password, we have to enter the Agent Name all 3 times (just enter the Agent Name; no Tab key or Return key, and we don't need to enter the Password). Remember, this is a Text Checkpoint on the "Agent Name" text field. Any value entered there the first time will be compared against the first row of the Global Data Sheet, which has "Sachin"; any value entered there the second time will be compared against the second row, which has "aaaa"; and so on. Make sure you enter "Sachin" the first time, "aaaa" the second time, and so on; that's it. Just try to enter some other value the second time, like "xxxx": it will run the test but show you "Failed" in the Results window for the second iteration.

QTP Action Input and Output Parameters

Those who are still confused about input parameters to actions and output values from actions: just have a look at these examples. They act as a foundation for action input and output values. (This is ONE of the many ways; of course there can be other ways of doing the things I have done below.)

1) Action output value (value returned by a called action) can be stored in a variable.
2) Action output value can be stored in a data table column.
3) Action output value can be stored in an environment variable.
4) Action output value can be stored in any variable, with no RunAction statement used.
5) Working with four actions.

Example 1: Action output value (value returned by a called action) stored in a variable.

Open a new test. In the Keyword View, right-click on Action1 and choose Action Properties. Go to the Parameters tab and create an input variable in_a1_1 with Type as Number. (To create an input variable, you have to click on the '+' sign which is on the right-hand side of the Input parameters section.) Create another input variable, in_a1_2, in the same way. In the Parameters tab, also create one output variable, out_a1_1, with Type as Number. (To create an output variable, you have to click on the '+' sign which is on the right-hand side of the Output parameters section.) Let everything else be default.

Go to Insert (Menu) -> Call to New Action, to add a new action at the end of the test. Now we have Action1 and Action2 in this test.

In the Expert View of Action2 type the following (when you copy this code into the Expert View of Action2, it may give you a warning that it will make Action1 reusable; just click OK):

RunAction "Action1", oneIteration, 2, 2, var1
msgbox var1
In the Expert view of Action1 type:

s1=parameter("in_a1_1")
s2=parameter("in_a1_2")
parameter("out_a1_1")=s1+s2

In the Global Data Sheet, where the column names are A, B and so on, double-click on column A. The Change Parameter Name box opens; enter the parameter name as Action1_out and click OK.

In the Expert view of Action2 type:

RunAction "Action1", oneIteration, 2, 2, DataTable("Action1_out", dtGlobalSheet)

To run this test, make sure Action2 is selected (highlighted in the Keyword View, or chosen from the dropdown above the Expert View) and then choose Automation (Menu) -> Run Current Action. The sum returned by Action1 is written into the Action1_out column of the Global Data Sheet.

Example 3: Action output value (value returned by a called action) stored in an environment variable

1. Open a new test. By default it will have Action1.
2. Go to Insert (Menu) -> Call to New Action to add a new action at the end of the test. Now we have Action1 and Action2 in this test. (It may give you a warning that this will make Action1 reusable; just click OK.)
3. In the Keyword View, right-click on Action1 and choose Action Properties. In the Parameters tab, create input variables in_a1_1 and in_a1_2 with Type as Number, and one output variable out_a1_1 with Type as Number. Leave everything else at its default.
4. Go to File -> Settings, then the Environment tab. In the 'Variable type' dropdown choose User-defined. Click the '+' sign on the right side; the 'Add New Environment Parameter' window opens. Enter the Name of the parameter as env_var, leave the 'Value' field empty, and click OK.

In the Expert view of Action2 type (when you copy this code into the Expert view, make sure Action2 is the action shown there):

RunAction "Action1", oneIteration, 2, 2, Environment("env_var")
msgbox Environment("env_var")
In the Expert view of Action1 type:

s1=parameter("in_a1_1")
s2=parameter("in_a1_2")
parameter("out_a1_1")=s1+s2

To run this test, make sure Action2 is selected (highlighted in the Keyword View, or chosen from the dropdown above the Expert View) and then choose Automation (Menu) -> Run Current Action.

Example 4: Action output value (value returned by a called action) stored in any variable, without a RunAction statement

1. Open a new test. By default it will have Action1. Go to Insert -> Call to New Action to add Action2.
2. In the Keyword View, right-click on Action1 and choose Action Properties. Go to the Parameters tab and create an output variable out_a1_1 with Type as Any.
3. In the Keyword View, right-click on Action2 and choose Action Properties. Go to the Parameters tab and create an input variable in_a2_1 with Type as Any.
4. In the Keyword View, right-click on Action1 and choose Action Call Properties. Go to the Parameters tab and, in the 'Store In' column, enter var1.
5. In the Keyword View, right-click on Action2 and choose Action Call Properties. Go to the Parameters tab and, in the Value column, enter var1. Let all other things be default.

In the Expert view of Action1 type:

Parameter("out_a1_1") = 23

In the Expert view of Action2 type:

msgbox Parameter("in_a2_1")

Example 5: Working with four actions

To run this test, always go to Action4 and then choose Automation (Menu) -> Run Current Action.

What these actions will do: Action4 will call Action1 with two input values, 2 and 2. Action1 sums those values (2+2=4) and assigns the sum to out_a1_1 (Action1's output parameter). Then Action1 passes the sum (i.e. 4) along with another number (3) to Action2 by calling Action2 in its last line. Action2 multiplies the two values (4, 3) it got from Action1 and passes on the result of the multiplication (12) and another number (5) to Action3, where these passed-on values are added and the result is shown in a message box.
Setting up the four actions:

1. Open a new test. Obviously Action1 will be there by default. Go to Insert -> Call to New Action; when the 'Insert Call to New Action' window opens, just click OK. This adds Action2. Similarly add Action3 and Action4.
2. In the Keyword View, right-click on Action1 and choose Action Properties. The Action Properties window opens; go to the Parameters tab. Click the '+' on the right-hand side of the Input parameters section and add the 1st input variable as in_a1_1 (in means input, a1 is for Action1 and 1 is the 1st variable), keeping its Type as Number. Similarly add a 2nd input variable in_a1_2 and one output variable out_a1_1, also of Number type. Add input and output parameters for Action2 (input variables in_a2_1 & in_a2_2, output variable out_a2_1) and Action3 (input variables in_a3_1 & in_a3_2, output variable out_a3_1) the same way.

In the Expert view of Action1 type:

s1=parameter("in_a1_1")
s2=parameter("in_a1_2")
parameter("out_a1_1")=s1+s2
RunAction "Action2", oneIteration, parameter("out_a1_1"), 3

In the Expert view of Action2 type:

parameter("out_a2_1")= parameter("in_a2_1") * parameter("in_a2_2")
RunAction "Action3", oneIteration, parameter("out_a2_1"), 5

In the Expert view of Action3 type:

parameter("out_a3_1")= parameter("in_a3_1") + parameter("in_a3_2")
msgbox parameter("out_a3_1")

In the Expert view of Action4 type:

RunAction "Action1", oneIteration, 2, 2

QTP SystemUtil vs InvokeApplication

SystemUtil is an object which is used to control applications and processes during a run session. Run is one of the methods of the SystemUtil object (there are other methods too, like CloseProcessByName). The syntax is:

SystemUtil.Run file, [params], [dir], [op], [mode]

The InvokeApplication method can open only executable files and is used primarily for backward compatibility. Its syntax is:

InvokeApplication(Command, [StartIn])
The arguments of SystemUtil.Run:

file: The name of the file you want to run.
params: If the specified file argument is an executable file, use the params argument to specify any parameters to be passed to the application.
dir: The default directory of the application or file.
op: The action to be performed. If this argument is blank (""), the open operation is performed. Other actions can be Edit, Print, etc.
mode: Specifies how the application is displayed when it opens. There are 10 modes, and you can specify one of them. The default is 1, which activates and displays the window. For the complete list of modes, please see the QTP User Guide.

The arguments of InvokeApplication:

Command: The path and command line options of the application to invoke.
StartIn: The working folder to which the Command path refers.
Return value: Boolean. If the function fails to open the application, False is returned.

A SystemUtil.Run statement is automatically added to your test when you run an application from the Start menu or the Run dialog box while recording a test, if you selected the 'Record and run test on any application' check box in the Record and Run Settings dialog box. You can run any application from a specified location using a SystemUtil.Run statement. This is especially useful if your test includes more than one application.

The example below opens a text file foo, which is saved in the C:\ drive, waits for some time and then closes it. You can write this code in a new test in QTP and run it; it should work fine. (Make sure you have a file named foo.txt in the C:\ drive.)

SystemUtil.Run "C:\foo.txt"
wait(3)
window("text:=foo - Notepad").Close

Example using all arguments except params:

SystemUtil.Run "foo.txt", "", "C:\", "Open", "1"
wait(3)
window("text:=foo - Notepad").Close

Example using params (the URL passed as params here is just an illustration):

SystemUtil.Run "iexplore.exe", "http://www.google.com"
Example using the Command argument: the following example uses the InvokeApplication function to open Internet Explorer on my machine. You can type the line below in a new test in QTP and run it.

InvokeApplication "C:\Program Files\Internet Explorer\IEXPLORE.EXE"

QTP Optional Step

By default, QuickTest Professional deems steps that open the following dialog boxes or message boxes as Optional Steps:

Dialog Box / Message Box Title Bar: AutoComplete, File Download, Internet Explorer, Netscape, Enter Network Password, Error, Security Alert, Security Information, Security Warning, Username and Password Required

If the dialog box of an optional step does not open during a run session, QTP skips that step and continues ahead. Remember that the skipped step does not cause the Run to fail; to complete a run session, an optional step is not necessarily required. At the end of the run session, a message is displayed for the step that failed to open the dialog box. However, if QTP does not find an object from the optional step in the Object Repository, then the Run fails with an error message.

You can also add an optional step in the Expert View by adding OptionalStep to the beginning of the VBScript statement. For example:

OptionalStep.Browser("Browser").Dialog("AutoComplete").WinButton("Cancel").Click

A simple example for an Optional Step:

1. Make sure that a new blank test and a blank Internet Explorer window are open.
2. In QTP click on Record in order to start recording.
3. Go to Start -> Programs -> QuickTest Professional -> Sample Applications -> Flight.
4. Enter the Username.
5. Enter the Password.
6. Hit the Enter key.
7. When the Flight application is open, go to File -> Exit.
8. Close the Internet Explorer window as well, using the cross button at the extreme top right.
9. Click Stop to stop the test recording.
10. Follow the steps shown in the original screenshots to mark the 'browser closing' step as an Optional Step.

Now, before you run the test, make sure the Internet Explorer window is NOT open. The idea is that when you run the above test without IE, it will not show any error message or fail; it will just bypass the 'browser closing' step, because we have marked it Optional, and it will ignore any error for the optional step. But it does show you a warning in the test results. Try to run the same test after removing the 'Optional Step' tag from those lines and see that it fails and shows you a Run Error.

QTP Relative Path

For this example, assume that all of the tests are stored in C:\Program Files\Mercury Interactive\QuickTest Professional\Tests.

1. I created a test in QTP with the name "twra" (e.g., for better understanding, "test with reusable action"). It has just one line of code:

Msgbox "I am a reusable action"

In the Keyword View, right-click on Action1 and choose Action Properties, check the 'Reusable action' checkbox in the General tab, and click OK.

2. I created another test in QTP with the name "call twra". At present it also has just one line of code:

Msgbox "I am going to call a reusable action in a test"

3. Now make sure that the "call twra" test is open and go to Insert -> Call to Existing Action.
4. The Select Action window opens. Here you have to click on the "…" button or type the complete path (in the 'From test:' dropdown) to select the test that contains the reusable action. For now, without doing anything in the Select Action window, just close (Cancel) it. Go to Tools -> Options and open the Folders tab. Click on '+' to add a path. The paths that you specify here can be a full path or a relative path; the path of the current test (<current test>) is always there in the search list. In our case we will add C:\Program Files\Mercury Interactive\QuickTest Professional\Tests, because all the tests are stored in the Tests folder. After entering the path, click OK.

5. Now again go to Insert -> Call to Existing Action. The Select Action window opens. In the 'From test:' dropdown just type the name of the test from which you want to call a reusable action (twra in our case) and it will list all of its reusable actions.

[If you enter any path as a relative path (e.g., ..\twra in the 'From test:' dropdown of the Select Action window, or a relative path for a Function Library or an Object Repository), then during the run session QuickTest searches for the file in the folders listed in the Folders pane of the Options dialog box, in the order in which the folders are listed.]

Note: Use of relative paths is possible anywhere in QuickTest Professional.

[Just understand the below text very carefully:] We can also do the above steps like this (suppose the "call twra" test is open): in the 3rd step, when the Select Action window opens, since both "twra" and "call twra" are stored under Tests, we can simply type ..\twra in the 'From test:' dropdown (no need of steps 4 and 5 above) in order to access its reusable actions. The relative path is resolved relative to the location of the test currently being edited,
so to find twra, which is under C:\Program Files\Mercury Interactive\QuickTest Professional\Tests\, we go one step back with ..\ and type twra. So in all we type ..\twra and it will find it. This works because both tests are located in C:\Program Files\Mercury Interactive\QuickTest Professional\Tests and we are currently in C:\Program Files\Mercury Interactive\QuickTest Professional\Tests\call twra.

One more way to understand (still supposing we are in "call twra"): if on your system you go to C:\Program Files\Mercury Interactive\QuickTest Professional\Tests\, cut the folder twra from there and paste it under C:\Program Files\Mercury Interactive\QuickTest Professional\, then in the above situation you have to go two folders back with ..\..\ and then type twra (so in all we have to type ..\..\twra).

PathFinder.Locate

PathFinder is an object which lets you find file paths. Locate is a method of the PathFinder object which returns the full file path that QuickTest uses for the specified relative path, based on the folders specified in the Folders tab search list (Tools -> Options -> Folders tab). You can use a PathFinder.Locate statement in your test to retrieve the complete path that QuickTest will use for a specified relative path. Additionally, you can add the line below to the "call twra" test to find out which path it picked from the search list:

Msgbox (PathFinder.Locate("twra"))

There is also a way to add any path to the search list (Tools -> Options -> Folders tab) through a script.
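The script for adding a path to the search list did not survive in this copy, so here is my own sketch using the QTP Automation Object Model. The Folders collection and its Add method are taken from the QTP Automation Object Model Reference as I remember it; verify the names against your version's reference before relying on them.

```vbscript
' Sketch (assumption: qtApp.Folders is the FoldersCollection from the
' QTP Automation Object Model). Run this as a standalone .vbs file.
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")   ' attach to QTP
qtApp.Launch
qtApp.Visible = True

' Append a folder to the search list (Tools -> Options -> Folders tab)
qtApp.Folders.Add "C:\Program Files\Mercury Interactive\QuickTest Professional\Tests"

MsgBox qtApp.Folders.Count   ' how many folders are now in the search list
Set qtApp = Nothing
```

The same collection also exposes Remove and RemoveAll, which is handy for resetting the list at the start of a batch run.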
QTP Random Variables

First example of Random Numbers: when you define parameters for an action, you can set a parameter's value to a random number. There are many different ways in which you can use random numbers in QTP; let's jump straight to the examples.

Example 1:
1. Open a new test.
2. Go to Insert (Menu) -> Call to New Action. The 'Insert Call to New Action' window opens; click OK to insert a new action. It will show a message that it has made Action1 reusable; just click OK.
3. In the Keyword View right-click on Action1 and select Action Properties. The 'Action Properties' window opens.
4. Go to the 'Parameters' tab.
5. In the 'Input Parameters' area click on the '+' sign and enter the Name of the input parameter as 'a', Type as Number, and Default value as 1. Click OK.
6. Again right-click on Action1 in the Keyword View and select 'Action Call Properties'. The 'Action Call Properties' window opens.
7. Go to the 'Parameter Values' tab.
8. Make a single click under the 'Value' column in the 'Input Parameters' area; it will become a button '<#>'. Click on this button.
9. It will open the 'Value Configuration Options' window. Click on the 'Parameter' radio button and select 'Random Number' from the dropdown.
10. In the Numeric Range enter 0 against From and 100 against To.
11. Click on the Name checkbox and choose arg_a from the dropdown.
12. In the 'Generate New Random Number' area, select the first option, 'For each action iteration'. Click OK, then click OK again to come out of the 'Action Call Properties' window.
13. Go to the Expert view of Action1 and type:

msgbox "action1"
msgbox(parameter("a"))

14. Go to the Expert view of Action2 and type:

For i=1 to 3
RunAction "Action1", oneIteration, RandomNumber("arg_a")
Next
15. Now run the test. (It would be better if you run it with the Expert View active; QTP will show you which step it is currently running by pointing to that particular step with a yellow arrow, so you will be able to understand it in a better way.) You will see that it shows a different value in each msgbox() because we selected 'For each action iteration' in the 'Generate new random number' area. If we select the second option, 'For each test iteration', the message boxes show the same value within a run, but different values if you run the test again, i.e. a different value at each test run.

RandomNumber is an object. Its syntax is:

RandomNumber(ParameterNameOrStartNumber [, EndNumber])

EndNumber is optional.

Second example of Random Numbers: here is another way of generating random numbers. Open a new test, write these lines in the Expert view, and run the test:

For i=1 to 5
var1=RandomNumber (0, 100)
msgbox(var1)
Next

Third example of Random Numbers (this is more or less the same as the first one): one more way is to define a Random Number parameter in the 'Parameter Options' or 'Value Configuration Options' dialog box.

1. Open a new test and insert a call to a new action, as in the first example.
2. In the Keyword View right-click on Action1, select Action Properties, go to the 'Parameters' tab and, in the 'Input Parameters' area, click on the '+' sign and enter the Name of the input parameter as 'a', Type as Number, and Default value as 1. Click OK.
3. Again right-click on Action1 in the Keyword View and select 'Action Call Properties'. Go to the 'Parameter Values' tab, make a single click under the 'Value' column so it becomes the '<#>' button, and click it.
4. In the 'Value Configuration Options' window, click on the 'Parameter' radio button and select 'Random Number' from the dropdown. In the Numeric Range enter 0 against From and 100 against To. Click on the 'Name' checkbox and choose arg_a from the dropdown. In the 'Generate New Random Number' area select the first option, 'For each action iteration'. Click OK, and OK again to come out of the 'Action Call Properties' window.
5. Now in the Expert View of Action1 type:

x=RandomNumber("arg_a")
msgbox(x)
Fourth example of Random Numbers: another VBScript method of generating a random number.

For i= 1 to 3
var1 = int((101*rnd)+0) ' Generate random value between 0 and 100.
MsgBox var1
next

Some light on Rnd: the Rnd function returns a value less than 1 but greater than or equal to 0. The following formula is used to produce a random number in a given range:

Int((upperbound - lowerbound + 1) * Rnd + lowerbound)

likewise

Int((6 * Rnd) + 1) ' Generate random value between 1 and 6.

Let's talk about Randomize and Rnd for some time:

Randomize [number]

We use a number with Randomize to initialize the Rnd function's random-number generator, giving it a new seed value. If the number is omitted, the value returned by the system timer is used as the new seed value. If Randomize is not used, the Rnd function (with no arguments) uses the same number as a seed the first time it is called. In simple terms, Rnd is a function and Randomize is used to initialize this function.

No matter how many times you run the code below, it generates the same values:

For i= 1 to 3
randomize(2)
var1 = Int((6 * Rnd) + 1) ' Generate random value between 1 and 6.
MsgBox var1
next

But if you omit randomize(2) from the above code and instead put only randomize, then at each run it generates different values:

For i= 1 to 3
randomize
var1 = Int((6 * Rnd) + 1) ' Generate random value between 1 and 6.
MsgBox var1
next

Rnd(number): if the number is less than zero (< 0), Rnd generates the same number every time:

For i= 1 to 3
x=rnd(-1)
msgbox(x)
Next

If the number is greater than zero (> 0), Rnd generates the next random number in the sequence:

For i= 1 to 3
x=rnd(1)
msgbox(x)
Next

If the number is equal to zero (= 0), Rnd generates the most recently generated number:

For i= 1 to 3
x=rnd(0)
msgbox(x)
Next
If the number is not supplied, Rnd generates the next random number in the sequence:

For i= 1 to 3
x=rnd()
msgbox(x)
Next

Remember: for any given initial seed, the same number sequence is generated, because each successive call to the Rnd function uses the previous number as a seed for the next number in the sequence. Before calling Rnd, use the Randomize statement without an argument to initialize the random-number generator with a seed based on the system timer.

QTP Crypt Object

The Crypt object is used to encrypt strings. It has an Encrypt method which takes a string (the string to encrypt) as its parameter.

Example 1 of Crypt Object. Type the text below in a new test in QTP and run it:

pwd = "sachin"
e_pwd = Crypt.Encrypt(pwd)
msgbox e_pwd

Example 2 of Crypt Object. Type the text below in a new test in QTP and run it. You can also write the function (Crypt_Pass) in a library and call it from a QTP test.

Function Crypt_Pass(epas)
Crypt_Pass = Crypt.Encrypt(epas)
End Function

pas = "Sachin"
MsgBox Crypt_Pass(pas)
Example 3 of Crypt Object. You can use Encrypt outside of QTP, in plain VBScript. Write the three lines below in Notepad, save the file with a .vbs extension, and run it from the command prompt. For example, I saved it as "a.vbs" under C:\ and ran it from the command prompt by typing just "a" and pressing Enter.

Set a=CreateObject("Mercury.Encrypter")
Msgbox a.Encrypt ("Sachin")
Set a=Nothing

Example 4 of Crypt Object. There is one trick by which you can recover an encrypted password: entering the encrypted text in a non-secured edit box lets you know the original text. For example, type the lines below in a new test in QTP and run them. Here I am entering the encrypted password (in e_pwd) into the "Agent Name" field of the Login dialog box which shows up when you open the Flight application; since "Agent Name" is a plain (non-secured) edit box, the original text becomes visible in it.

pwd = "sachin"
e_pwd = Crypt.Encrypt(pwd)
SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe", "", "C:\Program Files\HP\QuickTest Professional\samples\flight\app\", "open"
Dialog("text:=Login").WinEdit("attached text:=Agent Name:").SetSecure e_pwd
Dialog("text:=Login").WinEdit("attached text:=Agent Name:").Type micTab
Dialog("text:=Login").WinEdit("attached text:=Password:").Set "mercury"
Dialog("text:=Login").WinEdit("attached text:=Password:").Type micReturn

Function Libraries in QTP

If you have repeatable steps in a test or an action, consider using a user-defined function. A user-defined function can then be called from within an action. User-defined functions will make your tests shorter and easier to maintain, read and design.

Advantages of function libraries (functions):
1. Time and resources can be saved by implementing and using user-defined reusable functions.
2. User-defined functions can be stored in a function library or within an action in the test.
3. User-defined functions can be registered as a method for a QTP test object.

[It is advisable not to give user-defined functions the same name as built-in functions. Refer to the built-in functions list in the Step Generator (Insert > Step Generator): in the Step Generator dialog box choose Built-in functions from the Library combo box, and it will show all of the built-in functions in the Operation combo box.]

If the function is stored in a function library, then we have to associate that function library with a test so that the test can call all the public functions listed in that library. Functions in an associated function library are accessible:
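As the article notes, Crypt_Pass can live in a function library instead of the test itself. A minimal sketch of that split (the file name is my own choice, not from the original):

```vbscript
' --- Contents of a function library, e.g. saved as crypt_lib.qfl and
' --- associated with the test via File -> Associate Library with Test:
Function Crypt_Pass(epas)
    ' Crypt is a QTP utility object; Encrypt returns the encrypted string.
    Crypt_Pass = Crypt.Encrypt(epas)
End Function

' --- In the test itself, only the call remains:
' MsgBox Crypt_Pass("Sachin")
```

Keeping the wrapper in a library means every action in every associated test can encrypt strings without duplicating the function.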
For function to appear in the test results tree. You are ready to go. It is easy to create a function library: 1. to begin running a test from a point after method registration was performed in a test step (and not in a function library). Functions directly stored in an action in a test can be called from within that action only making them private from the outside world. 3. (Resources->Associated Function Libraries.Example using private and public functions in function library Example 3 . or c) Can be entered manually in the Expert View. Add content to it (your function).Simple example of a Function Library and test Example 2 .ReportEvent statement to the function code. When we register a function. it applies to an entire test object class and it's not possible to register a method for a specific test object. If a test is open you can view all the function libraries associated with it. QTP does not recognize the method registration because it occurred earlier to the beginning of the current run session and this all is due to the reason that QTP clears all method registrations at the beginning of each run session. A Private function can also be created in a function library and this private function can only be called from within the function library itself in which it is defined. the changes will take effect only after the test is reopened. 8. Let the ‘Documentation’ be empty. 5. the Set method stops using the functionality defined in the MySet2 function.) 3. (Now both new test and function library are open at the same time and we are in function library. We can re-register the same method with different user-defined functions without first unregistering the method. In the Function Definition Generator window. it is strongly recommended to unregister the method at the end of the action (and then re-register it at the beginning of the next action if necessary). When it is unregistered it is reset to its original QTP functionality e. 
After running an UnRegisterUserFunc statement, a method returns to its original QTP functionality. We can also re-register the same method with different user-defined functions without first unregistering it. For example:

RegisterUserFunc "WebEdit", "Set", "MySet"
RegisterUserFunc "WebEdit", "Set", "MySet2"
UnRegisterUserFunc "WebEdit", "Set"

After running the UnRegisterUserFunc statement, the Set method stops using the functionality defined in the MySet2 function (and not the functionality defined in the MySet function) and returns to the original QuickTest Set functionality.

- Always make sure that each function has a unique name. If more than one function with the same name exists in the test script or function library, QTP will always call the last function, because QuickTest searches the test script for the function prior to searching the function libraries.
- If there are two associated function libraries that define the same variable in the global scope using a Dim statement, or define two constants with the same name, the second definition causes a syntax error. If you need to use more than one variable with the same name in the global scope, include the Dim statement only in the last function library (since function libraries are loaded in the reverse order).
- Most important of all: if you register a method within a reusable action, it is strongly recommended to unregister the method at the end of the action (and then re-register it at the beginning of the next action if necessary), so that tests calling your action will not be affected by the method registration.

QTP Function Library - Example 1

1. Open a new test (File -> New -> Test).
2. Open a new function library (File -> New -> Function Library). (Now both the new test and the function library are open at the same time, and we are in the function library.)
3. Go to Insert -> Function Definition Generator.
4. In the Function Definition Generator window, type the name of the function as my_sum.
5. In the Additional Information area, enter the Description as 'addition function'. Let the 'Documentation' field be empty.
6. Click OK to close the Function Definition Generator window. It adds the function to your already open library.
7. Now we have to write the function body (where it says TODO:).
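Only the register/unregister lines for MySet appear in the article; for context, a MySet override could look like the sketch below (the body is my own illustration, not the original author's):

```vbscript
' Replacement for the standard WebEdit Set: logs the old value first,
' then falls through to the built-in behaviour.
Function MySet(obj, x)
    Dim y
    y = obj.GetROProperty("value")                     ' current field content
    Reporter.ReportEvent micDone, "MySet", "previous value: " & y
    MySet = obj.Set(x)                                 ' call the original Set
End Function

RegisterUserFunc "WebEdit", "Set", "MySet"
' ... any WebEdit(...).Set steps here go through MySet ...
UnRegisterUserFunc "WebEdit", "Set"                    ' restore the original Set
```

Note that inside a registered function, calling obj.Set still invokes QTP's original Set, which is what makes this wrap-and-delegate pattern possible.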
Finally it will look like this:

'@Description addition function
Public Function my_sum(var1, var2)
sum=var1+var2
msgbox sum
End Function

8. Save the function library. (Save it by giving it any name, with extension .qfl (the default), .vbs or .txt.)
9. Associate the function library with the test which is open (File -> Associate Library with Test).
10. In the Expert view of the associated test type:

my_sum 1, 2

and run the test.

QTP Function Library - Example 2

Another example, which uses both public and private functions in the function library. One of the functions is public, and we will access it from the test. The other function is private, i.e. it can be accessed from within the function library itself and cannot be accessed from outside of this function library.

1. Open a new test.
2. Open a new function library.
3. Write the two functions below in the function library:

Private Function my_name_tell(name2)
msgbox "Hello " & name2
End Function

Public Function my_name(name1)
msgbox(name1)
my_name_tell(name1)
End Function

4. Save the function library and associate it with the test.
5. In the Expert view of the test type:

my_name("sachin")

6. When you run it, it will show two message boxes: one from the public function and a second from the private function.
QTP Function Library - Example 3: registering a function to a test object by creating a new operation

1. Open a new test. Open a function library. Make sure that we are in the function library.
2. Go to Insert -> Function Definition Generator. Type the name of the function in the 'Function definition' area (I entered function_1). Let Type and Scope be the defaults, i.e. 'Function' and 'Public' respectively.
3. Check the 'Register to a test object' checkbox. From the Test Object dropdown select 'WinEdit', and in the Operation dropdown, instead of selecting one of the values it already shows, just type a new value: 'New_operation_1'.
4. In the 'Additional information' area, type 'my first operation on WinEdit' in the Description text box.
5. Click on '+' in the 'Arguments' area and type 'var1' under Name to create a new variable; let the Pass Mode be the default, i.e. 'By Value'. Click OK. The whole code it generated in the Preview area is copied to the open library.
6. After it is copied to the already open library, just complete the function body. Finally it looks like this:

'@Description my first operation on WinEdit
Public Function function_1(test_object, var1)
' TODO: add function body here
msgbox var1
End Function
RegisterUserFunc "WinEdit", "New_operation_1", "function_1"

7. Save the function library. Once it is saved, go to File -> Associate Library with Test to associate this function library with the already open test.
8. After it is associated, go to the Expert View of the test already open and type:

Window("title:=Flight Reservation").WinEdit("Attached text:=Name:", "height:=20").

As soon as you press . (dot), the list of operations which WinEdit supports is displayed (also called IntelliSense), and New_operation_1 is among them. Just select New_operation_1 from there and supply one argument to it, because while creating the function we provided one argument called var1.
Make a single click under Operation column in that row (click where it shows Set) it will show a dropdown and you will be able to see New_operation_1 in that dropdown.WinEdit("Attached text:=Name:". Now insert another new action in this test. in the Name text box enter any name. go to the row which has 'Name' under item. In a new library file type: Sub Copy (edit) Edit. "Copy". Now if you open any new test and again type Window("title:=Filght Reservation"). It just displays a message box with the value which we have supplied to it at the time of writing the code.Window("title:=Filght Reservation"). Now go to the ‘Keyword View’. But here we are not using any Object repository (we are using Descriptive Programming) so we cannot select any object from the Item list as the Object Repository is empty.Type micCtrlDwn + "c" + micCtrlUp End Sub RegisterUserFunc "WinEdit". Stop recording.SetSelection 0. It does nothing for the WinEdit object or anything special. In the ‘Flight Reservation’ window. but it is not necessary] Just Run the test."Copy" . it will show you New_operation_1 under the Operation column. "height:=20"). If in this same test you go to ‘Keyword View’. Len(Edit. Now QTP will not display New_operation_1unless otherwise you associate the library we created earlier to this new test. In this ‘Keyword View’. getdata("text") msgbox a unRegisterUserFunc "WinEdit".Open a new test and type: Dialog("text:=Login"). 3. can be debugged). We will start with a very simple example and go on to elaborate more on ExecuteFile. I saved it under c:\ as add. Resources pane of Test Settings (File->Settings) dialog (has advantages like files are in global scope –all the actions in a test can use those. In this login window type "sachin" in the Agent Name text field and run the test. .vbs. "Copy" Associate the library with the test as we did earlier.clipboarddata.Copy Set objhtml=Createobject("htmlfile") a=objhtml. 
Now open a new test in QTP and write few lines as below and run the test. It will show the value of a as 5 in message box.vbs script as shown below.com Bottom of Form ExecuteFile ExecuteFile There are two ways (usually) to associate the library file to a test. Create a new . and you cannot debug a file that is called using an ExecuteFile statement). 2. ExecuteFile (Local scope . you can call the functions in the file only from the current action.winedit("attached text:=Agent Name:"). 1.When you run an ExecuteFile statement within an action. Make sure 'Login' window is open (Start-> All Programs -> QuickTest Professional -> Sample Applications ->Flight). qtp.blogspot. Another way (obviously it uses the first one in some way).parentwindow. After including that file we are calling the function from that file and showing its return value in a message box. the execution marker may not be correctly displayed. the ExecuteFile statement executes all global code in the function library making all definitions in the file available from the global scope of the action's script.Important points from QTP Guide.Above we are using ExecuteFile function to include the add. ExecuteFile . The syntax of ExecuteFile: ExecuteFile File Where File is a string . when debugging a test that contains an ExecuteFile statement. you can also call a function contained in any function library (or VBscript file) directly from any action using the ExecuteFile function. When you run an ExecuteFile statement within an action. ExecuteFile .e. The ExecuteFile statement utilizes the VBScript ExecuteGlobal statement.vbs and QTP test are like as shown below.vbs file we created earlier. i. In addition to the functions available in the associated function libraries. You cannot debug a file that is called using an ExecuteFile statement. one more variable z is added. Now what do you think it will show (for z) when the test is run.the absolute or relative path of the file to execute. 
you can call the functions in the file only from the current action. In addition.Try this Now my add. When you run your test. or any of the functions contained in the file. add the file name to the associated function libraries list in the Resources pane of the Test Settings dialog box. . You can also insert ExecuteFile statements within an associated function library. To make the functions in a VBScript file available to your entire test. vbs and QTP test are as below: qtp.com Bottom of Form . And guess what will it show now for the msgbox z if the add.blogspot.On running this QTP test it will show 7 for msgbox z. WinButton("OK").WinEdit("Name:").. 10.WinRadioButton("Business"). GetROProperty. GetROProperty retrieves the current property value of the object in the application during the test run. button. Click stop in order to stop recording and Save the test. delete all the lines. Flight Reservation window opens. GetROProperty. Click Flights. Select value from "Fly To" dropdown. 4. From Class area select Business radio button. 11.. . 5. From the above script which QTP recorded in Expert View.QTP GetTOProperty. Just record a simple test on Flight Reservation application. 1. Flights Table window opens. which sets the Business radio button as shown below.Set We did all the above steps just to enable the radio buttons in the Class area.Set "sach" Window("Flight Reservation"). GetTOProperty retrieves the values of only those properties that are included in the test object description in Object Repository by QTP. GetTOProperty It will Return the value of a particular property for a test object which QTP recorded to identify an object during Run time. Select value from "Fly From" dropdown. GetTOProperties on a radio button object. Below is the Expert View script of above steps: Window("Flight Reservation").WinComboBox("Fly To:").Click Window("Flight Reservation").WinObject("Date of Flight:"). 
GetTOProperties GetTOProperties Returns properties and values which QTP has recorded and will use to identify an object at run time. I will show very easy to understand example of GetTOProperty.WinButton("FLIGHT"). GetROProperty It will Return the current value (run time value) of the test object property from the object in the application. except one.Click Window("Flight Reservation"). 8. Go to Start->All Programs->QuickTest Professional->Sample Applications->Flight 2. 3.Select "Frankfurt" Window("Flight Reservation"). Click on Record in QTP to record a new test. Enter Date of Flight. 6.Dialog("Flights Table"). Click Ok 9. Enter Name. 7.Select "Denver" Window("Flight Reservation").WinComboBox("Fly From:").Type "120908" Window("Flight Reservation"). The value is taken from the Object Repository. set a=Window("Flight Reservation").Value MsgBox Prop_Name & " = " & Prop_Value Next This above code which uses GetTOProperties shows all the properties of Business radio button which QTP recorded in order to identify it.Set Go to Resources (menu)->Object Repository. Now to view all these properties through a script (and use them later somewhere)use GetTOProperties as below: GetTOProperties Convert the remaining one line in the Expert view like this below and add a For Loop.GetTOProperties count_of_prop = a.GetROProperty("checked") msgbox a Select Economy radio button and then run the above code again to see a different value.1 Prop_Name = a(i). Click on Business radio button as shown below It will show all the properties which QTP recorded for Business radio button.WinRadioButton("Business"). For GetROProperty & GetTOProperty you have to specify the property whose value you want to retrieve.WinRadioButton("Business"). GetROProperty a=Window("Flight Reservation").Name Prop_Value = a(i). Object Repository window opens.WinRadioButton("Business"). 
In the same test delete or comment all of the above code (GetTOProperties) and write the below code for GetROProperty and run the test. .Window("Flight Reservation").Count For i = 0 To count_of_prop . I added the following lines afterward.WinRadioButton("Business").com Bottom of Form QTP SetTOProperty QTP SetTOProperty The SetTOProperty method enables you to modify a property value that QuickTest uses to identify an object. then button (+). and do not affect the values stored in the test object repository.GetTOProperty("nativeclass") msgbox a a=Window("Flight Reservation").blogspot.GetTOProperty("text") msgbox a qtp. I clicked on button (7). It recorded the first six lines of the script as seen below. Value Example 1 of SetTOProperty I opened a new test in QTP and opened Calculator (Start -> All Programs -> Accessories>Calculator). Finally I closed the Calculator. . Because QuickTest refers to the temporary version of the test object during the run session. The Object Repository window is read-only during record and run sessions. any changes you make using the SetTOProperty method apply only during the course of the run session. then button (3) and finally button (=). Syntax of SetTOProperty method Object(description). a=Window("Flight Reservation"). I started Recording in QTP.WinRadioButton("Business").SetTOProperty Property. Object Repository shows that the text property of button named "7" has a value of 7. WinButton("7").WinButton("3").html file.Close below statement retrieves a value of the text property of a button named "7" using GetTOProperty from memory. Write the below text in the Notepad and save it as . "seven" below statement retrieves a value of the text property of a button named "7" using GetTOProperty from memory. x=Window("Calculator").WinButton("+").WinButton("7").GetTOProperty("text") msgbox x the following statement would set the button's (named "7") text property value to seven (remember temporarily) Window("Calculator"). 
x=Window("Calculator").Click Window("Calculator").Click Window("Calculator").SetTOProperty "text".GetTOProperty("text") msgbox x After running the above statements the Object Repository will still be the same as it was before running the above statements.Click Window("Calculator").WinButton("7").Activate Window("Calculator").WinButton("7").WinButton("="). Window("Calculator").Click Window("Calculator").html .'QuickTest refers to the temporary version of the test object during the run session. (See screenshot above) Example 2 of SetTOProperty Open a new Notepad. I saved it as a. Click So for this test.When you open the file in IE it will look like as shown below: Make sure that this above file (a.html) is open in IE and QTP is open.Page("Page"). Object Repository contains information only for Link 1 as can be seen below: .Link("Link 1"). It will record the below line of code: Browser("Browser"). Click on Record. While recording click on Link 1. Stop Recording. SetTOProperty "text".blogspot.Back Browser("Browser"). "Link 2" Browser("Browser").Click But if you use SetTOProperty as below.Link("Link 1").com Bottom of Form Descriptive Programming in QTP .Page("Page"). it will click on Link 2 although Link 2 is not in Object Repository.Link("Link 2").Page("Page").Link("Link 1").Page("Page").Page("Page").Click Browser("Browser"). it will show error: Browser("Browser").Click qtp. Browser("Browser").And now if we run this test it will click only Link 1.Link("Link 1"). [Object Repository does not contain information on Link 2] Now if you write the below line in this test. removing the above line (which clicks link 1) and run. and flexibility. You don't know how many check boxes will be there based on the geographical information you provided. efficiency. While running a test. directly. 
So one of the other advantages is you can copy this script and Run this from any other machine (other than on which it was created) and it is supposed to work fine.WinEdit("AttachedText:=Agent Name:"). We will see examples of both static and dynamic type of descriptive programming in QTP. suppose there are 8 check boxes on a web page with names as chk_1. based on the geographical information you provided and then after the email addresses are provided as checkboxes you have to send a rebate letter to them. Static is easier but Dynamic provides more power.Set "mercury" window("Title:=Login"). So in this case. you will better understand it as you read more.winbutton("Text:=OK").Set "sachin" window("Title:=Login"). We can also instruct QTP to perform methods on objects without referring to the Object Repository. Suppose in a web site you have to generate a list of all the customer's email addresses. SystemUtil. So it’s not a good idea to put these in an Object Repository.Whenever you record on any object using QTP. QTP adds the test object to the Object Repository. Descriptive programming can be done in two ways: Static: We provide the set of properties and values.g. chk_2 and so on. QTP can perform methods on those objects. that describe the object. For this time just read the script and move on. [ I have given Example 1a's recorded version (which uses Object Repository)in Example 1b just for your comparison of the two so that you can better understand both ] Example 1a: uses DP We can describe the object directly by specifying property: =value pairs. With the help of Descriptive Programming you can Set these check boxes ON or OFF according to your application needs.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4b.WinEdit("AttachedText:=Password:"). Dynamic: We have to add a collection of properties and values to a description object and then provide the statement with the description object's name. 
If you are dynamically creating test objects during the run session then also Descriptive Programming goes a long way to help you. First let’s take a look at Static: This below example uses Descriptive Programming to open Flight Application and does not use object repository at all. Only after the object is found in the Object Repository. This implies that descriptive programming is very helpful if you want to perform an operation on an object that is not stored in Object Repository. Descriptive Programming is also useful to perform the same operation on several objects with certain matching properties e. who brought iPhone from you.close Examle 1b: uses OR . you can use a Descriptive programming to instruct QTP to perform a Set "ON" method for all objects that fit the description: HTML TAG = input. TYPE = check box. This is possible with the help of Programmatic descriptions or descriptive programming. QTP finds the object in the Object Repository and uses the stored test object’s description to identify the object in your application/website.Click window("Title:=Flight Reservation").exe" window("Title:=Login"). WinEdit("AttachedText:=Agent Name:").Set "mercury" .exe" var.WebEdit("Name:=Author". but cannot locate it in the repository because the parent objects were specified using programmatic descriptions. you cannot use the following statement.winbutton("Text:=OK").WebEdit("Name:=Author".WinEdit("AttachedText:=Password:").Exit" Note: When using programmatic descriptions from a specific point within a test object hierarchy. However.exe".Page("Title:=Mercury Tours").Set "mercury" var.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4b. in the above Example 1a script.Set "sachin" var.Page("Title:=Mercury Tours").WinEdit("Password:"). since it uses programmatic descriptions from a certain point in the description (starting from the Page object description): Browser("Mercury Tours"). 
since it uses programmatic descriptions for the Browser and Page objects but then attempts to use an object repository name for the WebEdit test object: Browser("Title:=Mercury Tours").Page("Title:=Mercury Tours"). QTP cannot identify the object. "Index:=3").close Or We can use 'With & End With' Statement like below: SystemUtil. WebEdit("Author").Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4b.WinEdit("AttachedText:=Password:"). window("Title:=Login") is being used several times so we do this: Set var = window("Title:=Login") SystemUtil.Type micTab Dialog("Login").WinButton("OK"). For example. Page.exe" With window("Title:=Login") . you can use the following statement since it uses programmatic descriptions throughout the entire test object hierarchy: Browser("Title:=Mercury Tours"). If the same programmatic description is being used several times then we can assign the object to a variable: E.WinMenu("Menu"). WebEdit.g. "Index:=3")."C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\".SetSecure "476a9c021bc5a7422cf5a84ad08503823abcbaae" Dialog("Login").Click Window("Flight Reservation").Click window("Title:=Flight Reservation").close Now let’s take a look at the dynamic type: .Select "File.WinEdit("AttachedText:=Agent Name:").winbutton("Text:=OK").Set "sachin" Dialog("Login").WinEdit("Agent Name:"). you must continue to use programmatic descriptions from that point onwards within the same statement. You can also use the statement below. 
If you specify a test object by its object repository name after other objects in the hierarchy have been specified using programmatic descriptions.Click End with window("Title:=Flight Reservation").Set "Sachin" QTP tries to locate the WebEdit object based on its name.""."open" Dialog("Login").Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4a.Set "Sachin" Above line uses Object Repository for Browser object and Descriptive Programming for Page and WebEdit.Set "sachin" .WinEdit("Agent Name:").Set "Sachin" Above line uses Descriptive Programming for all objects like Browser.SystemUtil. Create" statement is used.Create() mydescription("Class Name"). edit.WinEdit("AttachedText:=Password:"). returned properties collection. This is just an example. We use Description object to return a Properties collection object containing a set of Property Objects.Open Order.Set "mercury" window("Title:=Login").count msgbox(a) Just try to understand the above code. For creating Properties collection "Description. can be specified in a statement.Set "mercury" window("Title:=Login").Click window("Title:=Flight Reservation").ChildObjects(mydescription) a=Checkboxes.exe" window("Title:=Login").WinEdit(myvar ).dialog("text:=Open Order").value=20 myvar("width"). Set Myvar = Description.Create() Once Property Object (Myvar) is created.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4b.Programs.Create. In this Flight reservation window go to File.value="Agent Name:" myvar("height").Understand it like this – A Property Object is a property name and value. 
Lets take a complete example of this: [these extra values (height & width) are not important in our example.WinEdit("AttachedText:=Agent Name:".close Retrieving child objects in Descriptive Programming: There is a ChildObjects method which can be used to get all objects located within a specific parent object or only those that match some criteria for programmatic description.value="WinCheckBox" Set Checkboxes = window("text:=FLight Reservation").Set "sachin" window("Title:=Login"). Creating checkpoints programmatically: . remove and retrieve properties and values to or from properties objects can be entered during the run time.WinEdit("AttachedText:=Password:").winbutton("Text:=OK")."width:=119" ).QuickTest ProfessionalSample Applications.value=119 SystemUtil.Set "sachin" window("Title:=Login"). I will straightway show you an example of how to do this: Make sure that Flight Reservation window is open (Start. We will use this childobjects method to count the checkboxes in this 'Open Order' dialogbox. statements to add.close Now modifying the above script using Description.Flight). Set mydescription=Description.Click window("Title:=Flight Reservation"). In short we first of all need to create a description and then use a particular syntax to retrieve all child objects that match that description and manipulate them according to our own wish. I have just added those in order to make you understand this] SystemUtil.exe" window("Title:=Login"). Then only in place of an object name.Create() myvar("AttachedText"). in real life you can use this count in some kind of loop."height:=20".Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4b. Our example can run without height and width properties.winbutton("Text:=OK").In the below script childobjects method is being applied to dialog object and childobjects method uses mydescription property object we created. Set myvar= description. while learning an object. 
I will show a small example here which checks if the "Flights. DP is also useful in case of programming WebElement objects (A WebElement is a general web object which can represent any web object. Index property values are specific to an object and also the value is based on the order in which the object appears in the source code. regardless of type) because WebElement object is general object that applies to all objects.ReportEvent Statement to send the results to the result window of QTP.GetROProperty("enabled") msgbox (a) If a = True Then msgbox ("button is enable") else msgbox ("button is disable") End If In the above script GetROProperty method is being applied to 'Flight.page("title:=Welcome: Mercury Tours"). QTP also.com/) and write the following line in the expert view of new test: browser("title:=Welcome: Mercury Tours"). As an example. just open the website (. Descriptive programming checks are helpful for the object whose properties you want to check but the object is not stored in Object Repository. I have used a message box to show whether it is enable or disable. In the above line if you do Index:=0 then “hello” will be written in the “User Name” text box. Index property Index property is useful to identify a test object uniquely.com/) and make sure the cursor is in the “User Name” text box and write the following line in the Expert View of new test: browser("title:=Welcome: Mercury Tours"). QTP will search for the third object on the page (it can be any.webelement("name:=password".page("title:=Welcome: Mercury Tours"). .. [you can see an object's properties and methods from QTP help.) As an example. For the below script make sure that Flight reservation window is open: a=window("Title:=Flight Reservation"). QTP will search for the second WebEdit object on a web page.demoaut.Set "hello" This will write “hello” in the “Password” text box. If you use Index:=1 with WebEdit test object.demoaut.winbutton("Text:=FLIGHT"). just open the website (.' 
button to check the 'enable' property of the button." button in Flight Reservation window is enable or disable.Run-time value of a specified object property can be compared with expected value of that property by using programmatic description."index:=2"). can assign a value to test object’s index property to uniquely identify it. For all the methods and properties of WebElement object please refer QTP User Guide.Click It will just click the “Password” text box which just highlights that text box and places the mouse cursor in that box. On the other hand if you use Index:=2 to describe a WebElement object. you can use the Report.. Definitely there are other ways also to get these]. The value starts with 0.WebEdit("Index:=1"). .Close For opening the application we can use complete paths also e. window("title:=Untitled . (I saved it as Second.Last but not the least SystemUtil object SystemUtil object allows you to open and close application by writing its code manually in the Expert view of QTP.html extension.blogspot. Open another new blank notepad and type <title>Hello World2</title> And save it with . Open a new blank test in QTP and type the following code: Set myBrowser = Description. Also enter World1 and World2 in Cell A1 and A2 in Global Sheet.Run "C:\Program Files\Internet Explorer\iexplore.html under c:\) 3. qtp.Value = "Hello " & DataTable. systemutil.close 4.Value("A") Browser(myBrowser). Below example shows how to open or close a Notepad using code: systemutil.Notepad").g.CloseProcessByName("Notepad.exe") This example uses Run and CloseProcessByName methods to open and close the application (Notepad).Run "Notepad. we can use the below line also which is mostly used. (I saved it as First.html under c:\) 2. Instead of closing the Notepad with CloseProcessByName method.exe" This opens an Internet explorer.Create() myBrowser("opentitle").com Bottom of Form How we can parameterize Descriptive Programming statements? 
Open a new blank Notepad and type <title>Hello World1</title> And save it with .html extension.exe" wait(3) SystemUtil. There are certain situations when using descriptive programming has its own benefits (with descriptive programming along with other features you also get code portability) while in some other typical situations object repository works like a charm (No need to adjust the script when an object properties change. Running the test will close both the browsers.html in order to open them in Internet Explorer. there can be many more) . (Differences between object repository and descriptive programming are not limited to what is shown below. Below you can find some of the differences between object repository and descriptive programming.blogspot. Make sure that both First and Second are visible and run the test. Double click on First. Highlighting an Object in Your Application etc are couple of features you can’t just resist and of course there are many more). 6.5. qtp. So we have used data table in this example to parameterize the values. Above is a very small example that shows how we can data-drive a property value since the browsers have opentitle property values as “Hello World1” and “Hello World2” respectively.html and Second.com Bottom of Form QTP Object Repository Vs Descriptive Programming There is no specific answer as to which of the two (object repository or descriptive programming) is better. qtp. although you can use object spy to get help in selecting set of property/value pairs. Set myobj = Description. You can write the below example in a new test in QTP and make sure either a new Notepad or a WordPad window is open and run the test. If the mandatory and assistive properties do not uniquely identify an object.Object Repository With object repository QTP automatically resolves which properties and values are required to uniquely identify an object.Create() myobj("regexpwndtitle"). 
QTP can also use Smart Identification (if enabled).WinMenu("menuobjtype:= 2"). Let us take a small example.blogspot. Object repository is considered relatively faster It is considered relatively slower to create and if you take into account the performance for performance wise also in case of large large applications. How to use regular expressions with descriptive programming? Regular expressions can be used with descriptive programming.<item 2="">") Or . Descriptive programming statements need to be Object repository in QTP is created put into operation manually. Descriptive Programming With descriptive programming set of property/value pairs are created by you and all are mandatory. applications... So I have used regular expression for this where first four dots (.. So the Smart Identification mechanism is not used with Descriptive Programming or Programmatic Description. menu of either a new blank Notepad or WordPad which ever is open.. It uses regular expression in the second line where value of regexpwndtitle in case of Notepad is Notepad and in case of WordPad is WordPad. QTP uses an ordinal identifier.) correspond to any four characters and after these four characters there can be capital or lower case p and then ad. This below example clicks on File ->Open.com Bottom of Form QTP Descriptive Programming Questions Can Descriptive Programming be used with Smart Identification? Smart Identification works with the help of Object Repository. In case of Descriptive Programming we bypass the Object Repository. QTP starts with predefined mandatory and assistive properties in that order. Finding a set of automatically (manual creation is also possible) properties to distinctively identify the object as and when you record on the application.[Pp]ad" Window(myobj). time and again can be time consuming. Open… and Ctrl+O Is there a way to use special characters in descriptive programming? 
Let’s understand this with a very simple example: Open a new blank notepad and type <title>Welcome A*</title>And save it with .WinMenu("menuobjtype:=2")..Select "File..Value = ".html under c:\ After it is saved just double click it to open it with Internet Explorer.. Ctrl+O" In this above code there can be issues if there is no proper spacing between File.Create() myobj("regexpwndtitle").Open.Refresh Make sure that “Welcome A*” Internet Explorer window is open. Run the test.Microsoft Internet Explorer"). It will show an error.. . Now in a new test in QTP type: Browser("text:=Welcome A* .Set myobj = Description.[Pp]ad" Window(myobj).html extension as I saved it as sac.. Exist Msgbox a Is it possible to use descriptive programming inside a checkpoint? No it is not possible to use descriptive programming with the checkpoint object as in the below line of code: Browser("Browser").Microsoft Internet Explorer").Page("Page").Now rewrite the above line with a backslash “\” in front of * Browser("text:=Welcome A\* .check checkPoint("text:=sometext") . You can try another example: a=Browser("text:=Welcome A\* .Microsoft Internet Explorer").Refresh It will work fine.
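Since the checkpoint object cannot take a programmatic description, a common workaround is the pattern already used above for the "Flights..." button: read the run-time value with GetROProperty and report the outcome yourself. The sketch below runs only inside QTP (Reporter, micPass and micFail are QTP built-ins); the window and button descriptions are the ones from the Flight Reservation examples, and the expected value is an assumption for illustration.

```vbscript
' A descriptive-programming "checkpoint" without the checkpoint object:
' compare the run-time property value yourself and report the result.
' Runs only inside QTP against the open Flight Reservation window.
expected = True   ' assumed expected state, for illustration only
actual = window("Title:=Flight Reservation").winbutton("Text:=FLIGHT").GetROProperty("enabled")

If actual = expected Then
    Reporter.ReportEvent micPass, "Flights button check", "enabled = " & actual
Else
    Reporter.ReportEvent micFail, "Flights button check", "expected " & expected & " but got " & actual
End If
```

The pass/fail entry appears in the QTP result window just as a built-in checkpoint's would.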
AppFabric has been discontinued by Microsoft, and as a result AppFabric users are looking for caching solutions that are a good alternative to AppFabric so they can migrate their applications to it. NCache, an extremely fast and scalable distributed cache, provides a perfect AppFabric migration platform. NCache provides a wrapper for AppFabric that makes the migration from your AppFabric application to NCache seamless. This NCache AppFabric wrapper lets you migrate your application by simply editing the namespaces and appSettings of your application. Let's go through the steps required for migration. Full documentation is available for the NCache Wrapper, as well as an AppFabric Migration Guide.

Step 1: Remove the Microsoft.ApplicationServer.Caching.Client NuGet package or the references of the following AppFabric libraries from your application's source code:

Step 2: Remove the following namespaces from your project:

Step 3: Download the NCache AppFabric wrapper NuGet package in your application. You can find this NuGet package here.

Step 4: Add the Alachisoft.NCache.Data.Caching namespace in your project.

Step 5: Configure the appSettings section of your App.config as shown below:

<add key="Default" value="name-of-the-default-cache-here"/>
<add key="Expirable" value="True"/>
<add key="TTL" value="hh:mm:ss"/>

Step 6: If you are using the in-process cache that comes preconfigured in config.ncconf, then add the name of the local cache as the cacheName. If you are not using the local in-process cache, then you need to make sure the cache you want to use is created, running, and referenced as the cacheName. Here are the steps required to Create a Cache.

To get a detailed guide on how to migrate from AppFabric to NCache, refer to our documentation on AppFabric to NCache Migration. Listed below are some of the reasons why NCache is an ideal replacement for AppFabric for your .NET/.NET Core application.
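After the namespace change, the application code itself should not need to change: the wrapper exposes the AppFabric cache-client types under the Alachisoft.NCache.Data.Caching namespace. The sketch below is an assumption of what migrated code looks like, based on the standard AppFabric DataCache client API (DataCacheFactory, GetCache, Put, Get) that the wrapper is meant to mirror; the cache name "demoCache" is a hypothetical placeholder for the cacheName you configured in Step 6.

```csharp
// Hedged sketch: the same calls as AppFabric client code, with only the
// namespace swapped after migrating to the NCache AppFabric wrapper.
using Alachisoft.NCache.Data.Caching;

class CacheDemo
{
    static void Main()
    {
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("demoCache"); // hypothetical cache name

        cache.Put("customer:1", "Alice");                 // insert or update an item
        string value = (string)cache.Get("customer:1");   // read it back
    }
}
```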
Every class in Java belongs to a package, and is referenced in code by the import statement or by the fully-qualified class name. A package is a grouping of related classes and may have a sub-package. Packages allow Java class variables to have class-level or package-level access via the access modifiers public, protected, or private. User-defined classes may be grouped to belong to user-defined packages, but not to built-in packages. Classes that don't belong to any package are said to belong to the 'default' package, though the term is not a keyword as defined by the Java language specification. We will see more about these modifiers in the next section, 'class internals'.

Packages are arranged hierarchically, and are rooted in either the java or the javax package, the latter being an extension to the core Java classes originally released as version 1.0. The core packages consist of classes required to perform routine computing tasks such as text manipulation, input/output and networking. Here is an overview of the Java core packages:

java
|____awt
|    |____event
|____lang
|____util
|____io
|____net
|____sql

While we will see the classes in these packages in detail as we go along, it does not hurt to take a peek now. The lang package is available to all classes by default; that is, you don't have to explicitly specify this package in the import statement when you use any classes from it, such as String and StringBuffer. The util package, as the name suggests, consists of utility classes such as the collection classes and the Date class. The collection classes require that you only store objects. So, if you want to store a numeric, you need to convert it into an object first. The lang package has a class called Integer, which you can use to change a number to an object, and vice versa.
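The point about wrapping a number before storing it in a collection can be shown in a few lines. This is a standalone sketch; the class name WrapDemo is ours, not from the text.

```java
import java.util.Hashtable;

// Demonstrates storing a primitive in a collection: wrap it in an
// Integer object on the way in, cast and unwrap it on the way out.
public class WrapDemo {
    public static int roundTrip(int n) {
        Hashtable pairs = new Hashtable();
        pairs.put("count", new Integer(n));           // primitives must be wrapped
        Integer boxed = (Integer) pairs.get("count"); // collections return Object
        return boxed.intValue();                      // unwrap back to a primitive
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42)); // prints 42
    }
}
```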
The io package deals with input and output streams and is by far the largest package in core Java. It has a forest of classes that let you stream almost anything that can be digitized, and they are so convenient that any device from a satellite to a mobile phone can be used if it supports streaming. All the horrible internals are transparent to the developer. The net package, as you might have guessed, has classes that let two systems talk, no matter what those systems are and where, as long as they are connected by a network and understand Java bytecode. This package relies heavily on the io package for communication. Again, as the name suggests, the sql package deals with database connectivity and management of a data store. The awt package contains classes for the user interface; its name expands to Abstract Windowing Toolkit. The toolkit provides display of user interface elements that are platform specific and relies on the underlying GUI toolkit to provide the functionality. It has a sub-package, event, that has classes for transmitting mouse and keyboard events to the underlying system via system calls (totally transparent to the developer). Event handling in Java has evolved quite a bit from its original form, and shall be dealt with in another section.

Take a look at this code snippet:

package com.nanpackage.store;

import java.util.*;

public class JavaStore {

    private static Hashtable pairs = new Hashtable();

    public JavaStore() {
        pairs.put("nan", "1345");
        pairs.put("tan", "5432");
        pairs.put("zan", "3456");
    }

    public static void main(String[] args) {
        new JavaStore();
        Enumeration keys = pairs.keys(); // 'enum' is a reserved word since Java 5
        while (keys.hasMoreElements()) {
            String key = (String) keys.nextElement();
            String val = (String) pairs.get(key);
            System.out.println("Name: " + key + " " + "Number: " + val);
        }
    }
}

Note the import statement near the top of the code. We import the util package so we can use the Hashtable class in it. Hashtable is a class that lets us store name-value pairs.
It is a kind of associative array that holds what are called keys and values associated with those keys. We also use an Enumeration from this package; it is basically an iterator that helps you loop over the contents of a collection. The pairs variable is declared static because we use it in main, which is by definition static. Declaring a variable or a function static has a significance we shall see later. There is a method with the same name as that of the class, JavaStore. It is a constructor; it has no return type, and is used to instantiate objects of this class. We use it to populate the pairs instance of the Hashtable that is declared and initialized to the default capacity before the constructor.

Both the Hashtable and Enumeration classes, like all the collection classes, store and return values as generic objects. This means that if you store Dog objects, you will get back plain objects, which you must cast back to the type you originally stored them as. The compiler will even let you cast a Dog object that you stored to a Cat object on return, but the cast would fail at run time with a ClassCastException. Here we store objects of the String type, so we get back our strings using the type-cast operator (<class-name>).

Finally, coming to the first line of code, it simply means that the class we defined is to be placed in a package called com.nanpackage.store. If a package is declared for a class, the declaration must be the first line of code.

Package convention

A convention followed by Java developers in naming class packages is to reverse the domain name and append the package name to it. We assume that the domain name in our example is nanpackage.com, and so we place the store sub-package under it as com.nanpackage.store. When you compile this class, the compiled code is accessible as com.nanpackage.store.JavaStore.
So you will have a directory hierarchy as under:

com
|___nanpackage
    |___store

The package hierarchy translates into a folder hierarchy in physical terms, and the class is accessed accordingly. So when you compile the source code from, say, a my_src directory, you get a class file known as com.nanpackage.store.JavaStore. But if you try to run it from there, you get an exception:

Exception in thread "main" java.lang.NoClassDefFoundError: com/nanpackage/store/JavaStore

You create the folder hierarchy com/nanpackage/store under my_src and move the class file into store. Then you can run it from the my_src folder. There is a compiler option with javac that automatically creates the necessary folders for you based on the package declaration:

javac -d . JavaStore.java

The -d option stores the class files relative to the directory specified; here the dot following -d means the current directory, where we have our source file. And lo! You have the correct directory structure specified by the package declaration.
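Since the package-to-directory rule is purely mechanical (dots become path separators), here is a quick cross-language illustration in Python; the helper function is hypothetical, not part of the tutorial:

```python
import os.path

def package_to_path(package, class_name):
    """Map a Java package declaration to the compiled class file's
    relative path, e.g. com.nanpackage.store -> com/nanpackage/store."""
    return os.path.join(*package.split("."), class_name + ".class")

# On POSIX systems this prints com/nanpackage/store/JavaStore.class
print(package_to_path("com.nanpackage.store", "JavaStore"))
```

This is exactly the layout javac -d creates for you.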
http://javanook.tripod.com/core/javabook_1_1.html
A pointer is a variable that contains a memory address. It also points to a specific data type. Three operators are commonly used when dealing with pointers.

Example: In this example you will see how a pointer works.

In the figure, c is a variable of char data type holding the value 'A', and the address of variable c is 0x3000. cptr is a pointer variable of char data type which holds the address of the variable c rather than its value. This is the important difference between an ordinary variable and a pointer variable: a pointer variable contains a memory address.

#include <stdio.h>

int main(void)
{
    char c = 'A';
    char *cptr;

    cptr = &c;
    printf("\nThe address of c is\t%p", (void *)&c);
    printf("\nValue of c is\t\t%c", c);
    printf("\n\nThe address of cptr is\t%p", (void *)&cptr);
    printf("\nValue of cptr is\t%p", (void *)cptr);
    printf("\nAccess variable which cptr points to is\t%c", *cptr);
    return 0;
}

From the output of the above example you can see that the addresses of these two variables are different, and that the pointer variable holds the address of the variable it points to.
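As a cross-check, the same experiment can be run from Python with the standard ctypes module. This is only a rough analogue of the C program above, with variable names mirroring the C example:

```python
import ctypes

c = ctypes.c_char(b"A")      # like: char c = 'A';
cptr = ctypes.pointer(c)     # like: char *cptr = &c;

print("address of c:", ctypes.addressof(c))
print("value of c:", c.value)                               # b'A'
print("address of cptr itself:", ctypes.addressof(cptr))
print("address stored in cptr:", ctypes.addressof(cptr.contents))
print("value cptr points to:", cptr.contents.value)         # b'A'
```

As in the C version, the pointer has its own address, distinct from the address it stores.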
http://www.roseindia.net/tutorial/datastructure/pointer.html
base.py with abstract models and models.py with models extending the abstract ones and inheriting all the features. In the specific project you can either use the core app directly, or create a specific app whose models extend from the base abstract models of the core app and additionally introduce new features. This is a quick example skipping all the unrelated parts like settings, urls, and templates:

- core_project
  - apps
    - player
      - base.py

        from django.db import models

        class PlayerBase(models.Model):
            name = models.CharField(max_length=100)

            class Meta:
                abstract = True

      - models.py

        from core_project.apps.player.base import PlayerBase

        class Player(PlayerBase):
            pass

- specific_project
  - apps
    - player
      - models.py

        from django.db import models
        from core_project.apps.player.base import PlayerBase

        class Player(PlayerBase):
            points = models.IntegerField()

The concept works fine until you need to use foreign keys or many-to-many relations in the abstract models. As Josh Smeaton has already noticed, you can't set foreign keys to abstract models, as they have no database tables of their own and they know nothing about the models which will extend them. Let's say we have the following situation: GameBase and MissionBase are abstract models, and the model extending MissionBase should receive a foreign key to the model extending GameBase.

Thanks to the Pro Django book by Marty Alchin, I understood how the models get created in the background. By default, all Python classes are constructed by the type class. But whenever you use the __metaclass__ property for your classes, you can define a different constructor. Django models are classes constructed by the ModelBase class, which extends the type class. In order to solve the problem of foreign keys to the models extending the abstract classes, we can have a custom constructor extending the ModelBase class.
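Stripped of Django, the mechanism relied on here can be sketched in a few lines of plain Python (Python 3 syntax, with made-up class names): a metaclass's __new__ sees every class built with it, so it can register the leaf classes and inject an attribute once both exist, just as the constructor below injects the foreign key:

```python
class Registrar(type):
    """Metaclass that records every class it constructs and wires one
    to the other afterwards -- a toy analogue of Django's ModelBase."""
    registry = {}

    def __new__(cls, name, bases, attrs):
        model = super().__new__(cls, name, bases, attrs)
        cls.registry[name] = model
        # Once both leaf classes exist, inject the link -- the analogue
        # of add_to_class("game", ForeignKey(...)) in the Django version.
        if "Game" in cls.registry and "Mission" in cls.registry:
            cls.registry["Mission"].game_model = cls.registry["Game"]
        return model

class Game(metaclass=Registrar):
    pass

class Mission(metaclass=Registrar):
    pass

print(Mission.game_model is Game)  # True
```

The post's GameMissionCreator does the same dance, except that it registers classes by checking their abstract base's name rather than their own.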
base.py

# -*- coding: utf-8 -*-
from django.db import models
from django.db.models.base import ModelBase
from django.db.models.fields import FieldDoesNotExist

class GameMissionCreator(ModelBase):
    """
    The model extending MissionBase should get a foreign key
    to the model extending GameBase
    """
    GameModel = None
    MissionModel = None

    def __new__(cls, name, bases, attrs):
        model = super(GameMissionCreator, cls).__new__(cls, name, bases, attrs)
        for b in bases:
            if b.__name__ == "GameBase":
                cls.GameModel = model
            elif b.__name__ == "MissionBase":
                cls.MissionModel = model
        if cls.GameModel and cls.MissionModel:
            try:
                cls.MissionModel._meta.get_field("game")
            except FieldDoesNotExist:
                cls.MissionModel.add_to_class(
                    "game",
                    models.ForeignKey(cls.GameModel),
                )
        return model

class GameBase(models.Model):
    __metaclass__ = GameMissionCreator
    title = models.CharField(max_length=100)

    class Meta:
        abstract = True

class MissionBase(models.Model):
    __metaclass__ = GameMissionCreator
    title = models.CharField(max_length=100)

    class Meta:
        abstract = True

models.py

# -*- coding: utf-8 -*-
from base import *

class Game(GameBase):
    pass

class Mission(MissionBase):
    pass

GameMissionCreator is a constructor of the GameBase, MissionBase, Game, and Mission classes. When it creates a class extending GameBase, the game model is registered as a property. When it creates a class extending MissionBase, the mission model is registered as a property. When both models are registered, a foreign key is added dynamically from one model to the other.

One drawback of this constructor-class example is that if there is more than one class extending GameBase or MissionBase, the code won't function correctly. Anyway, the example shown illustrates a possible solution and gives a direction for further development of the idea.

Is there a workaround for the drawback you mention? I can't seem to find a way to do it... I have the same problem but with only one parent ABC.
Another class would contain a ForeignKey to the abstract class. I tried using your example to create a constructor to do so, but I still get a "cannot define relation with abstract class" error. Any idea why?

The idea of the trick was that it didn't create a ForeignKey to an abstract class, but rather a ForeignKey to a leaf class which is created from an abstract class. The constructor checks whether the class being created is created from an abstract class or not. Probably you could even check the abstractness explicitly by model._meta.abstract. If you have multiple models created from the abstract class which will get ForeignKeys from other models, you need a way to define to which non-abstract class the ForeignKey is assigned. That might be done by saving the final non-abstract classes in a dictionary and referencing them by name (let's say, defined in the settings). If this didn't help you, maybe you can send a link to a pasted code snippet so that I could check it and maybe give some advice.

Recently I found out that all this can be achieved much more simply by just giving the app name and model name in the foreign key definition, like this:

class GameBase(models.Model):
    title = models.CharField(max_length=100)

    class Meta:
        abstract = True

class MissionBase(models.Model):
    game = models.ForeignKey("game_app.Game")
    title = models.CharField(max_length=100)

    class Meta:
        abstract = True

I like the "Wish this site were powered by Django" button. What I wish is that all blogspot sites had a "Click here for printable view" button on them. :(
http://djangotricks.blogspot.com/2009/02/abstract-models-and-dynamicly-assigned.html
I had a fun weekend analysing car parking data in Westminster at the Future Cities Hackathon along with

- Amit Nandi
- Bart Baddeley
- Jackie Steinitz
- Ian Ozsvald
- Mateusz Łapsa-Malawski

Apparently, in the world of car parking, where Westminster leads the rest of the UK follows. For example, Westminster is rolling out individual parking bay monitors. Our analysis gained an honourable mention.

Ian has produced a great write-up of our analysis with fine watercolour maps and Bart's time-lapse video of parking behaviour.

We mainly used Python, Pandas and Excel for the actual analysis and QGIS for the maps. I thought it would be an interesting exercise to recreate some of the analysis in Haskell.

A Haskell Implementation

First some pragmas and imports.

> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-orphans #-}
>
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE OverloadedStrings #-}
> {-# LANGUAGE ViewPatterns #-}
> {-# LANGUAGE DeriveTraversable #-}
> {-# LANGUAGE DeriveFoldable #-}
> {-# LANGUAGE DeriveFunctor #-}
>
> module WardsOfLondon ( parkingDia ) where
>
> import Database.Shapefile
>
> import Data.Binary.Get
> import qualified Data.ByteString.Lazy as BL
> import qualified Data.ByteString as B
> import Data.Binary.IEEE754
> import Data.Csv hiding ( decode, lookup )
> import Data.Csv.Streaming
> import qualified Data.Vector as V
> import Data.Time
> import qualified Data.Text as T
> import Data.Char
> import qualified Data.Map.Strict as Map
> import Data.Int ( Int64 )
> import Data.Maybe ( fromJust, isJust )
> import Data.List ( unfoldr )
>
> import Control.Applicative
> import Control.Monad
>
> import Diagrams.Prelude
> import Diagrams.Backend.Cairo.CmdLine
>
> import System.FilePath
> import System.Directory
> import System.Locale
> import System.IO.Unsafe ( unsafePerformIO )
>
> import Data.Traversable ( Traversable )
> import qualified Data.Traversable as Tr
> import Data.Foldable ( Foldable )

A type
synonym to make typing some of our functions a bit more readable (and easier to modify e.g. if we want to use Cairo).

> type Diag = Diagram Cairo R2

The paths to all our data. SHP files are shape files, a fairly old but widespread map data format that was originally produced by a company called ESRI.

The polygons for the outline of the wards in Westminster. Surely there is a better place to get this rather than using tree canopy data.

The polyline data for all the roads (and other stuff) in the UK. We selected out all the roads in a bounding box for London. Even so, plotting these takes about a minute.

The parking data were provided by Westminster Council. The set we consider below was about 4 million lines of cashless parking meter payments (about 1.3G).

> prefix :: FilePath
>
> dataDir :: FilePath
>
> borough :: FilePath
>
> parkingBorough :: FilePath
>
> flGL :: FilePath
> flGL = prefix </> dataDir </> "GreaterLondonRoads.shp"
>
> flParkingCashless :: FilePath
> flParkingCashless = "ParkingCashlessDenorm.csv"

The data for payments are contained in a CSV file so we create a record in which to keep the various fields contained therein.

> data Payment = Payment
>   { _amountPaid      :: LaxDouble
>   , paidDurationMins :: Int
>   , startDate        :: UTCTime
>   , _startDay        :: DayOfTheWeek
>   , _endDate         :: UTCTime
>   , _endDay          :: DayOfTheWeek
>   , _startTime       :: TimeOfDay
>   , _endTime         :: TimeOfDay
>   , _designationType :: T.Text
>   , hoursOfControl   :: T.Text
>   , _tariff          :: T.Text
>   , _maxStay         :: T.Text
>   , spaces           :: Maybe Int
>   , _street          :: T.Text
>   , _xCoordinate     :: Maybe Double
>   , _yCoordinate     :: Maybe Double
>   , latitude         :: Maybe Double
>   , longitude        :: Maybe LaxDouble
>   }
>   deriving Show
>
> data DayOfTheWeek = Monday
>                   | Tuesday
>                   | Wednesday
>                   | Thursday
>                   | Friday
>                   | Saturday
>                   | Sunday
>   deriving (Read, Show, Enum)

We need to be able to parse the day of the week.
> instance FromField DayOfTheWeek where
>   parseField s = read <$> parseField s

The field containing the longitude has values of the form -.1. The CSV parser for Double will reject this so we create our own datatype with a more relaxed parser.

> newtype LaxDouble = LaxDouble { laxDouble :: Double }
>   deriving Show
>
> instance FromField LaxDouble where
>   parseField = fmap LaxDouble . parseField . addLeading
>
>     where
>
>       addLeading :: B.ByteString -> B.ByteString
>       addLeading bytes =
>         case B.uncons bytes of
>           Just (c -> '.', _)    -> B.cons (o '0') bytes
>           Just (c -> '-', rest) -> B.cons (o '-') (addLeading rest)
>           _                     -> bytes
>
>       c = chr . fromIntegral
>       o = fromIntegral . ord

We need to be able to parse dates and times.

> instance FromField UTCTime where
>   parseField s = do
>     f <- parseField s
>     case parseTime defaultTimeLocale "%F %X" f of
>       Nothing -> fail "Unable to parse UTC time"
>       Just g  -> return g
>
> instance FromField TimeOfDay where
>   parseField s = do
>     f <- parseField s
>     case parseTime defaultTimeLocale "%R" f of
>       Nothing -> fail "Unable to parse time of day"
>       Just g  -> return g

Finally we can write a parser for our record.

> instance FromRecord Payment where
>   parseRecord v
>     | V.length v == 18
>       = Payment <$>
>         v .! 0 <*>
>         v .! 1 <*>
>         v .! 2 <*>
>         v .! 3 <*>
>         v .! 4 <*>
>         v .! 5 <*>
>         v .! 6 <*>
>         v .! 7 <*>
>         v .! 8 <*>
>         v .! 9 <*>
>         v .! 10 <*>
>         v .! 11 <*>
>         v .! 12 <*>
>         v .! 13 <*>
>         v .! 14 <*>
>         v .! 15 <*>
>         v .! 16 <*>
>         v .! 17
>     | otherwise = mzero

To make the analysis simpler, we only look at what might be a typical day, a Thursday in February.

> selectedDay :: UTCTime
> selectedDay = case parseTime defaultTimeLocale "%F %X" "2013-02-28 00:00:00" of
>   Nothing -> error "Unable to parse UTC time"
>   Just t  -> t

It turns out that there are a very limited number of different sorts of hours of control, so rather than parse this and calculate the number of control minutes per week, we can just create a simple look-up table by hand.
> hoursOfControlTable :: [(T.Text, [Int])]
> hoursOfControlTable = [
>     ("Mon - Fri 8.30am - 6.30pm"                       , [600, 600, 600, 600, 600, 0, 0])
>   , ("Mon-Fri 10am - 4pm"                              , [360, 360, 360, 360, 360, 0, 0])
>   , ("Mon - Fri 8.30-6.30 Sat 8.30 - 1.30"             , [600, 600, 600, 600, 600, 300, 0])
>   , ("Mon - Sat 8.30am - 6.30pm"                       , [600, 600, 600, 600, 600, 600, 0])
>   , ("Mon-Sat 11am-6.30pm "                            , [450, 450, 450, 450, 450, 450, 0])
>   , ("Mon - Fri 8.00pm - 8.00am"                       , [720, 720, 720, 720, 720, 0, 0])
>   , ("Mon - Fri 8.30am - 6.30pm "                      , [600, 600, 600, 600, 600, 0, 0])
>   , ("Mon - Fri 10.00am - 6.30pm\nSat 8.30am - 6.30pm" , [510, 510, 510, 510, 510, 600, 0])
>   , ("Mon-Sun 10.00am-4.00pm & 7.00pm - Midnight"      , [660, 660, 660, 660, 660, 660, 660])
>   ]

Now we create a record in which to record the statistics in which we are interested:

- Number of times a lot is used.
- Number of usage minutes. In reality this is the amount of minutes purchased; people often leave a bay before their ticket expires, so this is just a proxy.
- The hours of control for the lot.
- The number of bays in the lot.

N.B. The !'s are really important, otherwise we get a space leak. In more detail, these are strictness annotations which force the record to be evaluated rather than be carried around unevaluated (taking up unnecessary space) until needed.

> data LotStats = LotStats { usageCount      :: !Int
>                          , usageMins       :: !Int64
>                          , usageControlTxt :: !T.Text
>                          , usageSpaces     :: !(Maybe Int)
>                          }
>   deriving Show

As we work our way through the data we need to update our statistics.

> updateStats :: LotStats -> LotStats -> LotStats
> updateStats s1 s2 = LotStats { usageCount      = (usageCount s1) + (usageCount s2)
>                              , usageMins       = (usageMins s1) + (usageMins s2)
>                              , usageControlTxt = usageControlTxt s2
>                              , usageSpaces     = usageSpaces s2
>                              }
>
> initBayCountMap :: Map.Map (Pair Double) LotStats
> initBayCountMap = Map.empty

We are going to be working with co-ordinates which are pairs of numbers so we need a data type in which to keep them.
Possibly overkill.

> data Pair a = Pair { xPair :: !a, yPair :: !a }
>   deriving (Show, Eq, Ord, Functor, Foldable, Traversable)

Functions to get bounding boxes.

> getPair :: Get a -> Get (a,a)
> getPair getPart = do
>   x <- getPart
>   y <- getPart
>   return (x,y)
>
> getBBox :: Get a -> Get (BBox a)
> getBBox getPoint = do
>   bbMin <- getPoint
>   bbMax <- getPoint
>   return (BBox bbMin bbMax)
>
> bbox :: Get (BBox (Double, Double))
> bbox = do
>   shpFileBBox <- getBBox (getPair getFloat64le)
>   return shpFileBBox
>
> getBBs :: BL.ByteString -> BBox (Double, Double)
> getBBs = runGet $ do
>   _ <- getShapeType32le
>   bbox
>
> isInBB :: (Ord a, Ord b) => BBox (a, b) -> BBox (a, b) -> Bool
> isInBB bbx bby = ea >= eb && wa <= wb &&
>                  sa >= sb && na <= nb
>   where
>     (ea, sa) = bbMin bbx
>     (wa, na) = bbMax bbx
>     (eb, sb) = bbMin bby
>     (wb, nb) = bbMax bby
>
> combineBBs :: (Ord a, Ord b) => BBox (a, b) -> BBox (a, b) -> BBox (a, b)
> combineBBs bbx bby = BBox { bbMin = (min ea eb, min sa sb)
>                           , bbMax = (max wa wb, max na nb)
>                           }
>   where
>     (ea, sa) = bbMin bbx
>     (wa, na) = bbMax bbx
>     (eb, sb) = bbMin bby
>     (wb, nb) = bbMax bby

A function to get plotting information from the shape file.

> getRecs :: BL.ByteString ->
>            [[(Double, Double)]]
> getRecs = runGet $ do
>   _ <- getShapeType32le
>   _ <- bbox
>   nParts  <- getWord32le
>   nPoints <- getWord32le
>   parts   <- replicateM (fromIntegral nParts) getWord32le
>   points  <- replicateM (fromIntegral nPoints) (getPair getFloat64le)
>   return (getParts (map fromIntegral parts) points)
>
> getParts :: [Int] -> [a] -> [[a]]
> getParts offsets ps = unfoldr g (gaps, ps)
>   where
>     gaps = zipWith (-) (tail offsets) offsets
>     g ( [],   []) = Nothing
>     g ( [],   xs) = Just (xs, ([], []))
>     g (n:ns, xs) = Just (take n xs, (ns, drop n xs))

We need to be able to filter out e.g. roads that are not in a given bounding box.

> recsOfInterest :: BBox (Double, Double) -> [ShpRec] -> [ShpRec]
> recsOfInterest bb = filter (flip isInBB bb . getBBs . shpRecData)

A function to process each ward in Westminster.

> processWard :: [ShpRec] -> FilePath ->
>                IO ([ShpRec], ([[(Double, Double)]], BBox (Double, Double)))
> processWard recDB fileName = do
>   input <- BL.readFile $ prefix </> dataDir </> borough </> fileName
>   let (hdr, recs) = runGet getShpFile input
>       bb = shpFileBBox hdr
>   let ps = head $ map getRecs (map shpRecData recs)
>   return $ (recsOfInterest bb recDB, (ps, bb))

We want to draw roads and ward boundaries.

> colouredLine :: Double -> Colour Double -> [(Double, Double)] -> Diag
> colouredLine thickness lineColour xs = (fromVertices $ map p2 xs) #
>                                        lw thickness #
>                                        lc lineColour

And we want to draw parking lots with the hue varying according to how heavily they are utilised.

> bayDots :: [Pair Double] -> [Double] -> Diag
> bayDots xs bs = position (zip (map p2 $ map toPair xs) dots)
>   where dots = map (\b -> circle 0.0005 # fcA (blend b c1 c2) # lw 0.0) bs
>         toPair p = (xPair p, yPair p)
>         c1 = darkgreen `withOpacity` 0.7
>         c2 = lightgreen `withOpacity` 0.7

Update the statistics until we run out of data.

> processCsv :: Map.Map (Pair Double) LotStats ->
>               Records Payment ->
>               Map.Map (Pair Double) LotStats
> processCsv m rs = case rs of
>   Cons u rest -> case u of
>     Left err  -> error err
>     Right val -> case Tr.sequence $ Pair (laxDouble <$> longitude val) (latitude val) of
>       Nothing -> processCsv m rest
>       Just v  -> if startDate val == selectedDay
>                  then processCsv (Map.insertWith updateStats v delta m) rest
>                  else processCsv m rest
>         where
>           delta = LotStats { usageCount      = 1
>                            , usageMins       = fromIntegral $ paidDurationMins val
>                            , usageControlTxt = hoursOfControl val
>                            , usageSpaces     = spaces val
>                            }
>   Nil mErr x -> if BL.null x
>                 then m
>                 else error $ "Nil: " ++ show mErr ++ " " ++ show x
>
> availableMinsThu :: LotStats -> Maybe Double
> availableMinsThu val =
>   fmap fromIntegral $
>   fmap (!!(fromEnum Thursday)) $
>   flip lookup hoursOfControlTable $
>   usageControlTxt val

Now for the main function.
> parkingDiaM :: IO Diag
> parkingDiaM = do

Read in the 4 million records lazily.

>   parkingCashlessCsv <- BL.readFile $
>     prefix </>
>     dataDir </>
>     parkingBorough </>
>     flParkingCashless

Create our statistics.

>   let bayCountMap = processCsv initBayCountMap (decode False parkingCashlessCsv)
>
>       vals = Map.elems bayCountMap

Calculate the available minutes for each bay.

>       availableMinsThus :: [Maybe Double]
>       availableMinsThus = zipWith f (map availableMinsThu vals)
>                                     (map (fmap fromIntegral . usageSpaces) vals)
>         where
>           f x y = (*) <$> x <*> y

Calculate the actual minutes used for each lot and the usage, which determines the hue of the colour of the dot representing the lot on the map.

>       actualMinsThu :: [Double]
>       actualMinsThu =
>         map fromIntegral $
>         map usageMins vals
>
>       usage :: [Maybe Double]
>       usage = zipWith f actualMinsThu availableMinsThus
>         where
>           f x y = (/) <$> pure x <*> y

We will need the co-ordinates of each lot in order to be able to plot it.

>   let parkBayCoords :: [Pair Double]
>       parkBayCoords = Map.keys bayCountMap

Get the ward shape files.

>   fs <- getDirectoryContents $ prefix </> dataDir </> borough
>   let wardShpFiles = map (uncurry addExtension) $
>                      filter ((==".shp") . snd) $
>                      map splitExtension fs

Get the London roads shape file.

>   inputGL <- BL.readFile flGL
>   let recsGL = snd $ runGet getShpFile inputGL

Get the data we wish to plot from each ward shape file.

>   rps <- mapM (processWard recsGL) wardShpFiles

Get the roads inside the wards.

>   let zs = map (getRecs . shpRecData) $ concat $ map fst rps

And create blue diagram elements for each road.

>       ps :: [[Diag]]
>       ps = map (map (colouredLine 0.0001 blue)) zs

Create diagram elements for each ward boundary.

>       qs :: [[Diag]]
>       qs = map (map (colouredLine 0.0003 navajowhite)) (map (fst . snd) rps)

Westminster is located at about 51 degrees North.
We want to put a background colour on the map, so either we need to move Westminster to be at the origin or create a background rectangle centred on Westminster. We do the former. We create a rectangle which is slightly bigger than the bounding box of Westminster. And we translate everything so that the South West corner of the bounding box of Westminster is the origin.

>   let bbWestminster = foldr combineBBs (BBox (inf, inf) (negInf, negInf)) $
>                       map (snd . snd) rps
>         where
>           inf    = read "Infinity"
>           negInf = read "-Infinity"
>
>   let (ea, sa) = bbMin bbWestminster
>       (wa, na) = bbMax bbWestminster
>       wmHeight = na - sa
>       wmWidth  = wa - ea

Create the background.

>       wmBackground = translateX (wmWidth / 2.0) $
>                      translateY (wmHeight / 2.0) $
>                      scaleX 1.1 $
>                      scaleY 1.1 $
>                      rect wmWidth wmHeight # fcA (yellow `withOpacity` 0.1) # lw 0.0

Plot the streets.

>       wmStreets = translateX (negate ea) $
>                   translateY (negate sa) $
>                   mconcat (mconcat ps)

Plot the parking lots.

>       wmParking = translateX (negate ea) $
>                   translateY (negate sa) $
>                   uncurry bayDots $
>                   unzip $
>                   map (\(x, y) -> (x, fromJust y)) $
>                   filter (isJust . snd) $
>                   zip parkBayCoords usage

Plot the ward boundaries.

>       wmWards = translateX (negate ea) $
>                 translateY (negate sa) $
>                 mconcat (mconcat qs)
>
>   return $ wmBackground <>
>            wmWards <>
>            wmStreets <>
>            wmParking

Sadly we have to use unsafePerformIO in order to be able to create the post using BlogLiteratelyD.

> parkingDia :: Diag
> parkingDia = unsafePerformIO parkingDiaM

And now we can see all the parking lots in Westminster as green dots. The darkness represents how heavily utilised they are. The thick gold lines delineate the wards in Westminster. In case it isn't obvious, the blue lines are the roads. The Thames, Hyde Park and Regent's Park are fairly easy to spot. Less easy to spot but still fairly visible are Buckingham Palace and Green Park.

Observations

- We appear to need to use ghc -O2, otherwise we get a space leak.
- We didn't explicitly need the equivalent of pandas. It would be interesting to go through the Haskell and Python code and see where we used pandas and what the equivalent was in Haskell.
- Python and R seem more forgiving about data formats, e.g. they handle -.1 where Haskell doesn't. Perhaps this should be in the Haskell equivalent of pandas.

2 thoughts on "Parking in Westminster: An Analysis in Haskell"

Pingback: Future Cities Hackathon (@ds_ldn) Oct 2013 on Parking Usage Inefficiencies | Entrepreneurial Geekiness

Pingback: Diagrams 1.0 | blog :: Brent -> [String]
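A footnote on the -.1 observation: Python's float accepts it directly, and the addLeading fixup the Haskell code needed transliterates to a few lines of Python (my transliteration, not code from the post):

```python
def add_leading(s):
    """Insert the leading zero that Haskell's read requires: '-.1' -> '-0.1'."""
    if s.startswith("."):
        return "0" + s
    if s.startswith("-"):
        return "-" + add_leading(s[1:])
    return s

print(add_leading("-.1"))  # -0.1
print(float("-.1"))        # Python parses it without help: -0.1
```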
https://idontgetoutmuch.wordpress.com/2013/10/23/parking-in-westminster-an-analysis-in-haskell/
A Python cffi port of libtcod.

Project description

About

This project's API and documentation have been merged back into python-tdl.

Changelog

5.0.0 - 2017-07-19

Changed
- This project was merged back into python-tdl.

4.0.0 - 2017-06-29

Added
- Console instances can now be pickled.

Changed
- get_cffi_callback renamed to get_tcod_path_ffi and must now return width and height values. All returned values will be kept alive by the caller.

Fixed
- Image.get_pixel now returns a tuple instead of a CData instance.
- SDL is now initialized lazily. It should be easier to import BearLibTerminal alongside this library.

3.0.0 - 2017-06-24

Added
- PyPy v5.7/v5.8 wheels added to PyPI.
- Pickle support for tcod.path classes.
- Added wrapper classes EdgeCostCallback and NodeCostArray to tcod.path.

Changed
- AStar and Dijkstra no longer take the width or height parameters. You now set these parameters via EdgeCostCallback.

Fixed
- Resolved an issue where pip install would clobber NumPyPy.

Removed
- Removed broken tdl-style int/color conversions from Color.

2.5.0 - 2017-05-28

Changed
- Pickle-able objects will have any subclasses pickled correctly now. The new objects can not be unpickled on older versions of libtcod-cffi.
- Updated cdata attribute names in Map, Console, and Random.

2.4.4 - 2017-05-20

Fixed
- Fixed crashes when exiting on some systems.

2.4.3 - 2017-04-10

Fixed
- Fixed signatures for MacOS builds.

2.4.2 - 2017-04-10

Removed
- Dropped support for Python 3.3.

2.4.1 - 2017-04-07

Fixed
- Made sure MacOS dependencies are bundled correctly.

2.4.0 - 2017-04-03

Added
- Renderer regressions fixed; the OpenGL and GLSL renderers are available again.

Changed
- The default renderer is now GLSL.

Removed
- tcod clipboard functions, which were never fully implemented, removed.

2.3.0 - 2017-03-15

Added
- Added support for loading/saving REXPaint files.

Fixed
- Console methods should be safe to use before a root console is initialized.
- Fixed simplex noise artifacts when using negative coordinates.
- Fixed backward-compatible API inconsistencies with color indexes, console truth values, and line_iter missing the starting point.
- The SDL callback should always receive an SDL_Surface.

2.2.1 - 2017-03-12

Fixed
- Fixed Console.print_frame not printing anything.
- Fixed Noise.sample_ogrid alignment issue.
- MacOS builds should work even if the system-installed SDL2 library is old.

2.2.0 - 2017-02-18

Added
- You can now sample very large noise arrays using the Noise.sample_mgrid and Noise.sample_ogrid methods.
- Noise class now supports the pickle and copy modules.

2.1.0 - 2017-02-16

Added
- The root Console instance can now be used as a context manager, closing the graphical window when the context exits.
- Ported libtcod functions: sys_clipboard_get and sys_clipboard_set.

2.0.0 - 2017-02-11

Added
- Random instances can be copied and pickled.
- Map instances can be copied and pickled.
- The Map class now has the transparent, walkable, and fov attributes; you can assign to these as if they were numpy arrays.
- Pathfinders in tcod.path can be given a numpy array as a cost map.

Changed
- Color instances can now be compared with any standard sequence.

Deprecated
- You might see a public cdata attribute on some classes; this attribute may be renamed at any time.

Removed
- Console.print_str is now Console.print_.
- Some Console methods have been merged together.
- All libtcod-cffi classes have been moved to their own submodules.
- Random methods renamed to be more like Python's standard random module.
- Noise class had multiple methods replaced by an implementation attribute.
- libtcod-cffi classes and subpackages are not included in the tcod namespace by default.
- Many redundant methods were removed from the Random class.
- Map methods set_properties, clear, is_in_fov, is_walkable, and is_transparent were removed.
- Pathfinding classmethod constructors are gone already. Now it's just one constructor which accepts multiple kinds of maps.

Fixed
- Python 2 now uses the latin-1 codec when automatically converting to Unicode.

2.0a4 - 2017-01-09

Added
- Console instances now have the fg, bg, and ch attributes. These attributes are numpy arrays with direct access to libtcod console memory.

Changed
- Console default variables are now accessed using properties instead of method calls. Same with width and height.
- Path-finding classes now use special classmethod constructors instead of traditional class instancing.

Removed
- Color-to-string conversion reverted to its original repr behaviour.
- Console.get_char* methods removed in favor of the fg, bg, ch attributes.
- Console.fill removed. This code was redundant with the new additions.
- Console.get_default_*/set_default_* methods removed.
- Console.get_width/height removed.

Fixed
- Dijkstra.get_path fixed.

2.0a3 - 2017-01-02

- The numpy module is now required as a dependency.
- The SDL.h and libtcod_int.h headers are now included in the cffi back-end.
- Added the AStar and Dijkstra classes with simplified behaviour.
- Added the BSP class, which better represents bsp data attributes.
- Added the Image class with methods mimicking libtcodpy behaviour.
- Added the Map class with methods mimicking libtcodpy behaviour.
- Added the Noise class. This class behaves similarly to the tdl Noise class.
- Added the Random class. This class provides a large variety of methods instead of being state based like in libtcodpy.
- Color objects can now be converted into a 3-byte string used in libtcod color control operations.
- heightmap functions can now accept carefully formatted numpy arrays.
- Removed the keyboard repeat functions: console_set_keyboard_repeat and console_disable_keyboard_repeat.

2.0a2 - 2016-10-30

- FrozenColor class removed.
- Color class now uses a properly set up __repr__ method.
- Functions which take the fmt parameter will now escape the '%' symbol before sending the string to a C printf call.
- Now using Google-style docstrings.
- Console class has most of its relevant methods.
- Added the Console.fill function, which needs only 3 numpy arrays instead of the usual 7 to cover all Console data.

2.0a1 - 2016-10-16

- The userData parameter was added back. Functions which use it are marked deprecated.
- Python exceptions will now propagate out of libtcod callbacks.
- Some libtcod object-oriented functions now have Python class methods associated with them (only BSP for now, more will be added later).
- Regression tests were added, focusing on backwards compatibility with libtcodpy. Several neglected functions were fixed during this.
- All libtcod allocations are handled by the Python garbage collector. You'll no longer have to call the delete functions on each object.
- Now generates documentation for Read the Docs. You can find the latest documentation for libtcod-cffi here.
2.0a0 - 2016-10-05 - updated to compile with libtcod-1.6.2 and SDL-2.0.4 1.0 - 2016-09-25 - sub packages have been removed to follow the libtcodpy API more closely - bsp and pathfinding functions which take a callback no longer have the userdata parameter, if you need to pass data then you should use functools, methods, or enclosing scope rules - numpy buffer alignment issues on some 64-bit OS’s fixed 0.3 - 2016-09-24 - switched to using pycparser to compile libtcod headers, this may have included many more functions in tcod’s namespace than before - parser custom listener fixed again, likely for good 0.2.12 - 2016-09-16 - version increment due to how extremely broken the non-Windows builds were (false alarm, this module is just really hard to run integrated tests on) 0.2.11 - 2016-09-16 - SDL is now bundled correctly in all Python wheels 0.2.10 - 2016-09-13 - now using GitHub integrations, gaps in platform support have been filled, there should now be wheels for Mac OSX and 64-bit Python on Windows - the building process was simplified from a linking standpoint, most libraries are now statically linked - parser module is broken again 0.2.9 - 2016-09-01 - Fixed crashes in list and parser modules 0.2.8 - 2016-03-11 - Fixed off by one error in fov buffer 0.2.7 - 2016-01-21 - Re-factored some code to reduce compiler warnings - Instructions on how to solve pip/cffi issues added to the readme - Official support for Python 3.5 0.2.6 - 2015-10-28 - Added requirements.txt to fix a common pip/cffi issue. - Provided SDL headers are now for Windows only. 0.2.5 - 2015-10-28 - Added /usr/include/SDL to include path 0.2.4 - 2015-10-28 - Compiler will now use distribution specific SDL header files before falling back on the included header files. 0.2.3 - 2015-07-13 - better Color performance - parser now works when using a custom listener class - SDL renderer callback now receives a accessible SDL_Surface cdata object. 
0.2.2 - 2015-07-01 - This module can now compile and link properly on Linux 0.2.1 - 2015-06-29 - console_check_for_keypress and console_wait_for_keypress will work now - console_fill_foreground was fixed - console_init_root can now accept a regular string on Python 3 0.2.0 - 2015-06-27 - The library is now backwards compatible with the original libtcod.py module. Everything except libtcod’s cfg parser is supported. 0.1.0 - 2015-06-22 - First version released Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/libtcod-cffi/
IRC log of xproc on 2008-04-24
Timestamps are in UTC.
14:53:48 [RRSAgent] RRSAgent has joined #xproc
14:53:48 [RRSAgent] logging to
14:54:54 [Norm] Meeting: XML Processing Model WG
14:54:54 [Norm] Date: 24 Apr 2008
14:54:54 [Norm] Agenda:
14:54:54 [Norm] Meeting: 109
14:54:54 [Norm] Chair: Norm
14:54:55 [Norm] Scribe: Norm
14:54:57 [Norm] ScribeNick: Norm
14:55:58 [PGrosso] PGrosso has joined #xproc
14:58:14 [richard] richard has joined #xproc
14:58:50 [richard] i'll be phoning in a couple of minutes...
14:59:05 [Vojtech] Vojtech has joined #xproc
14:59:32 [Zakim] XML_PMWG()11:00AM has now started
14:59:33 [Zakim] +Norm
15:00:13 [Zakim] +Vojtech
15:00:15 [Zakim] -Vojtech
15:00:15 [Zakim] +Vojtech
15:00:51 [Zakim] +[ArborText]
15:01:49 [Norm] Regrets: Alessandro, Rui
15:02:01 [alexmilowski] alexmilowski has joined #xproc
15:02:17 [Zakim] +??P31
15:02:19 [richard] zakim, ? isme
15:02:19 [Zakim] I don't understand '? isme', richard
15:02:21 [richard] zakim, ? is me
15:02:21 [Zakim] +richard; got it
15:02:58 [Zakim] +alexmilowski
15:03:48 [AndrewF] AndrewF has joined #xproc
15:04:39 [Zakim] +??P0
15:04:43 [AndrewF] zakim, ? is Andrew
15:04:43 [Zakim] +Andrew; got it
15:04:56 [Norm] Zakim, who's on the phone?
15:04:56 [Zakim] On the phone I see Norm, Vojtech, PGrosso, richard, alexmilowski, Andrew
15:05:07 [Norm] Present: Norm, Vojtech, Paul, Richard, Alex, Andrew
15:05:43 [ht] ht has joined #xproc
15:05:46 [Norm] Topic: Accept this agenda?
15:05:46 [Norm] ->
15:05:57 [ht] zakim, please call ht-781
15:05:57 [Zakim] ok, ht; the call is being made
15:05:58 [Zakim] +Ht
15:06:01 [Norm] Norm: I'd like to add this morning's email threads
15:06:20 [Norm] Topic: Accept minutes from the previous meeting?
15:06:20 [Norm] ->
15:06:30 [Norm] Accepted.
15:06:39 [Norm] Topic: Next meeting: telcon 1 May 2008?
15:06:52 [Norm] Vojtech gives regrets.
15:07:11 [Norm] Topic: Consideration of the proposed next working draft.
15:07:21 [Norm] ->
15:07:42 [Norm] No questions or comments.
15:08:00 [Norm] Henry will provide updated DTDs and W3C XML Schemas before 1 May.
15:08:13 [Norm] Topic: Default context for options and variables
15:08:19 [Norm] Norm attempts to summarize
15:10:38 [Norm] Norm: We could allow a sequence, but on balance I'd rather not.
15:10:55 [Norm] ->
15:11:25 [Norm] Norm: If we leave it an error now, we can always make it not an error later.
15:11:57 [Norm] Norm: Does anyone want to argue for a change?
15:12:03 [Norm] None heard, the status quo prevails.
15:12:24 [Norm] Topic: p:declare-step/p:import in p:declare-step (for atomic steps)
15:12:28 [Norm] Norm summarizes.
15:13:34 [Norm] ->
15:13:34 [MSM] zakim, please call MSM-617
15:13:34 [Zakim] ok, MSM; the call is being made
15:13:36 [Zakim] +MSM
15:13:45 [Norm] Henry: Sounds right to me
15:14:00 [Norm] Present: Norm, Vojtech, Paul, Richard, Alex, Andrew, Michael [xx:13]
15:14:47 [Norm] Proposed: Make the changes Norm suggests.
15:14:54 [Norm] Accepted.
15:15:22 [Norm] Topic: Exclude prefixes on p:inline
15:16:02 [Norm] ->
15:16:12 [Norm] Henry: It's a shameless lift from XSLT 2.0, very lightly edited.
15:16:29 [Norm] ...If we haven't changed our minds about doing this, the only thing that really requires people's attention is the inventory of namespaces
15:16:52 [Norm] ...which are excluded by definition. I chose to exclude the two that might actually appear at the top-level in a pipeline.
15:17:07 [Norm] ...I excluded the error namespace and the instance prefix, because I don't think those are going to occur.
15:18:33 [Norm] Norm: I don't think the .../xproc/1.0 "namespace" is ever going to be bound.
15:18:44 [MSM] What if my pipeline is creating a pipeline?
15:19:40 [Norm] Norm is confused about stripping the namespace.
15:20:06 [Norm] Henry: If you want to use the namespace, you can add it back in another step.
15:20:49 [Norm] Henry: Eliminates any namespace on every node on the tree.
15:21:00 [Norm] Alex: But it gets put *back in* by namespace fixup.
15:21:36 [Norm] Henry: Yeah, I guess that works.
15:22:01 [Norm] Norm: Bah. Do we really need to do this?
15:22:44 [Norm].
15:22:55 [Norm] ...It may be an edge case, but it's a crucial edge case.
15:22:55 [MSM] [I don't understand Henry's argument that you MUST remove it everywhere. Why not just say the child of p:inline doesn't inherit any of the specified bindings, so that if it rebinds them they will be there.
15:23:07 [Norm] Henry: When you need it, you really need it.
15:23:12 [Norm] Alex: And I think it's easy to describe.
15:23:55 [Norm] Alex: Getting p:inline right is real work.
15:24:11 [Norm] Michael: Why do you have to remove them everywhere?
15:24:30 [Norm] Henry: There's no guarantee that the datamodel that you have is efficiently implemented. So removing an in-scope namespace from my parent doesn't remove it from me.
15:24:55 [Norm] Michael: You have to recompute them, but I think it's a mistake to confuse information with APIs.
15:25:27 [Norm] Henry: It appears to only remove it in one place, but that's because if you have a literal XML fragment in your XSLT stylesheet, the removal applies to all of them..
15:25:29 [Norm] s/.././
15:26:37 [Norm] Some discussion of the XSLT case.
15:27:17 [Norm] Richard: The XSLT case is copying nodes from the stylesheet to the result. So they aren't copied.
15:27:40 [Norm] Henry: Right, so it's the same for us. Up until this point, there was no necessity to copy and now there is.
15:29:11 [Norm] Some discussion of whether nodes that are 'eq' to each other can get passed to different steps.
15:30:52 [Norm] Richard: I wonder if there's a whole can of worms addressed here.
15:30:59 [Norm] s/addressed/unaddressed/
15:31:12 [Norm] Henry: I think anyone who uses any kind of stateful data model doesn't have a problem here.
15:32:31 [Norm] Richard: Suppose you have a sequence and the thing you do is count the union of the nodes in the sequence.
15:32:39 [Norm] Henry: We need to have this in the test suite.
15:33:13 [Norm] Richard: The excluding of namespaces seems to amount to a "when necessary".
15:34:54 [Norm] Norm: Anyone against doing this?
15:34:57 [Norm] None heard.
15:35:28 [Norm] Michael: I think being able to trim namespace declarations is extremely useful. This seems unnecessarily complicated.
15:35:52 [Norm] ...I agree that XSLT 2.0 does exactly the same thing. Maybe Alex is right that namespace fixup saves it for those of us who use one of the excluded namespaces.
15:35:59 [Norm] ...Do we have the same sort of namespace fixup rules?
15:36:01 [Norm] Norm: Yes.
15:36:16 [Norm] Richard: The case where namespace prefix doesn't work is when the prefix is used in content. Because then it isn't noticed.
15:37:04 [Norm] Richard: Namespace fixup won't guarantee that you get the right prefix.
15:37:07 [Norm] Norm: True.
15:37:40 [Norm] Vojtech: I think the prefixes are the author's responsibility.
15:37:51 [Norm] Richard: But the excluded namespaces will remove the bindings.
15:38:12 [Norm] Vojtech: If the XProc namespace is removed automatically, that's a problem. But if you remove the prefix, that's your problem.
15:38:27 [Norm] Richard: That's not the way it works in XSLT. You specify it with a prefix, but it suppresses the namespace nodes that that prefix maps to.
15:39:06 [Norm] Henry: So, worst case, you need to use a namespace-rename step.
15:39:25 [Norm] Proposed: We adopt Henry's proposal for the 1 May draft.
15:39:35 [Norm] Accepted.
15:40:42 [Norm] Topic: What happens when @xpath-versions are mixed.
15:41:05 [Norm] Norm: Attempts to summarize from
15:41:38 [Norm] Norm: We allow @name, @psvi-required and @xpath-version on the decl. of atomic steps.
15:42:07 [Norm] Norm: I think they're mostly harmless on atomic steps and we shouldn't worry about it.
15:42:37 [Norm] Norm: What do we want to say about mixed @xpath-versions across calls?
15:43:07 [Norm] Norm: I think the obvious answers are either, ignore the nested ones or it's an error.
15:44:56 [ht] ht has joined #xproc
15:44:59 [Norm] Consider:
15:45:00 [Norm] <p:pipeline
15:45:00 [Norm] <p:declare-step
15:45:00 [Norm] ...
15:45:00 [Norm] <ex:foo/>
15:45:00 [Norm] </p:pipeline>
15:45:20 [Norm] Vojtech: The default is 1.0 so what happens with the base steps.
15:45:28 [Norm] Norm: That's a good point.
15:47:03 [Norm] Norm: I don't think we can expect implementations to do both.
15:47:07 [Norm] Henry: The problem is in libraries.
15:49:33 [Norm] Norm: I think we need to say that an unspecified version is license to use whatever you want and mixing them is a dynamic error.
15:50:15 [Norm] Henry: How do we avoid screwing users unnecessarily. And simultaneously avoid giving them weird results.
15:51:06 [Norm] Norm: Uhm...
15:51:13 [Norm] Henry: What we want is late binding.
15:51:35 [Norm] Vojtech: If the implementation is prepared to switch, then it should work.
15:52:24 [Norm] Norm muses
15:52:32 [Norm] Vojtech: I think the default now is 1.0.
15:53:51 [Norm] Norm: Static analysis should always show what versions could be used, so maybe late binding is possible.
15:54:15 [Norm] ACTION: Norm to propose how @xpath-version should deal with mixed versions.
15:54:48 [Norm] Topic: Any other business?
15:54:56 [Norm] None heard.
15:55:04 [Zakim] -PGrosso
15:55:05 [Zakim] -Andrew
15:55:06 [Zakim] -MSM
15:55:11 [Zakim] -Vojtech
15:55:16 [Norm] Adjourned.
15:55:17 [Zakim] -Norm
15:55:19 [Zakim] -richard
15:55:21 [Zakim] -Ht
15:55:22 [Zakim] -alexmilowski
15:55:22 [Zakim] XML_PMWG()11:00AM has ended
15:55:23 [Zakim] Attendees were Norm, Vojtech, PGrosso, richard, alexmilowski, Andrew, Ht, MSM
16:00:18 [Norm] RRSAgent, set log world-visible
16:00:21 [Norm] RRSAgent, draft minutes
16:00:21 [RRSAgent] I have made the request to generate Norm
17:17:54 [alexmilowski] alexmilowski has joined #xproc
17:53:43 [Norm] Norm has joined #xproc
18:00:14 [Zakim] Zakim has left #xproc
19:12:07 [Norm] Norm has joined #xproc
http://www.w3.org/2008/04/24-xproc-irc
Type: Posts; User: willmotil sorry he's right, i probably said that poorly, though it still denotes the sign but it is a value as well to say no (positive signed integer) number in either will have that first bit set true ... remember that an int and a uint (unsigned) are not the same the maximum positive value in binary for an int is 0111-1111-1111-1111-1111-1111-1111-1111 = 2147483647 but that's Not the maximum value... quoted from the same link posted above below a quote from this msn page found here then see the msn example code at the bottom of... well in that case just for the record it's not that simple anyways see R. Martinho Fernandes' answer to the question here... he means "you gonna get fired" when they realize you're copying/pasting lol though seriously you're not supposed to ask questions for an interview you're supposed to already know the answers using System; namespace GoToTest { public class GoToTest { public static void Main() { string... welcome, i edited it a bit for clarity and so you could experiment with it a bit to just reverse into a character array you would do this but i don't see why anyone would be using a char array... you can index a string like an array it counts from the end of the string to the beginning but adds to the character array from start to end for example pass your string to this method ... make sure you are flushing and closing a reading or writing stream if you are not wrapping the file writing operation with a using statement or setting flags to write and read from individually... in this case you are actually looking to exclude a line e.g. visually it is the anti pattern of any two points next to each other that you will find and exclude all other points can be drawn to... Edit: i was half asleep waiting on a download, when i wrote that out, i should have waited ackk... let me clear it up consider the code at the bottom pseudo code if you don't have a vector2...
Edit: correction i just realized i answered this question wrong misreading what he was asking ArrayList myList; public MainWindow() { // this should work or preferably... you can assign the variable in the for loop for(int row =0 ;...;...){...etc.... you can also simply write row < csv.NumRows to say instead of <= length -1 just write < length this line here... well i liked the article in general the whole point of the article could have been summed up to a rule of thumb don't use implicit operators on references use explicit operators or conversion... your error is occurring because an int's max value is 10 digits a ulong can take about 20... ok so i thought this was an interesting point i was looking for a way to get around this concern reasonably however after reading this old post from 5 years back which addresses nearly all the... for reference dot net perls is a nice site all about arrays gamedev.net is where i would start for simple tutorials and questions it's a dedicated game dev site and forum, it's been around for a long time though more... what exactly are you trying to build do here...? you realize that subInput.Length is actually the number of characters in the string itself not an actual value so that if subInput = "99"; ... i would much more lean towards struct since it's math focused it is by its own definition dealing with values "fractions" alternatively static class or public sealed class / static methods in... you would never want to do this because it can cause real problems later down the road unless you had a very very good reason to. in that case there is probably a better way to do what you want... i would ask that you clarify the question more specifically? often just asking one individual specific question well spoken, is far better than trying to squeeze in a whole set of problems...
if the library is using an older version than .net 4.5 you can probably still use the CLR profiler from microsoft which is standalone to get a better picture of what's going on... i think this is a great idea ... i'm not sure about legal stuff involved in anything but educational purposes you should add run command prompt tools or auto script instructions like a... something is obviously wrong with this line of code cause ToString() should look like that not ToString i don't think an Ordinal would belong in an indexer but without seeing more code i dunno my...
http://forums.codeguru.com/search.php?s=4f01497bfd075257e6b8b09adc0ca27d&searchid=8138487
Hi again. I'm having two issues with this code. The first of them: strlen only counts the characters in the first word of the line I put into the console. The second one is that the console takes the second word put into the console into the next "cin", and of course does not stop after asking for the second text, since it already has it from the first input. Ok, I guess I can solve the input problem by clearing the memory between lines, but what about the strlen issue then? Here is the current code.

#include "targetver.h"
#include <stdio.h>
#include <tchar.h>
#include <cstring>
#include <cstdlib>
#include <iostream>

using namespace std;

int count (char[]);
void strCopy (char[]);

int _tmain(int argc, _TCHAR* argv[])
{
    char pszString[256];
    char pszSource[256];
    int size = 0;
    cout << "Write a text that is under 256 characters = pszString" << endl;
    cin >> pszString;
    size = count(pszString);
    cout << "Your text is " << size << " characters long." << endl;
    cout << "Write another text that is under 256 characters = pszSource" << endl;
    cin >> pszSource;
    strCopy(pszSource);
    return 0;
}

int count ( char pszString[] )
{
    int size;
    size = strlen(pszString);
    return (size);
}

void strCopy(char pszSource[])
{
    char pszDestination[256];
    strcpy (pszDestination,pszSource);
    cout << "The text in pszDestination is now " << pszDestination << " which should be the same you entered for pszSource" << endl;
    system("PAUSE");
}

Please advise.
https://www.daniweb.com/programming/software-development/threads/144469/small-isssue-with-console-bahavior-and-strlen
import "periph.io/x/periph/conn/i2c"

Package i2c defines the API to communicate with devices over the I²C protocol.

As described in, periph.io uses the concepts of Bus, Port and Conn.

In the package i2c, 'Port' is not exposed, since once you know the I²C device address, there's no unconfigured Port to configure. Instead, the package includes the adapter 'Dev' to directly convert an I²C bus 'i2c.Bus' into a connection 'conn.Conn' by only specifying the device I²C address.

See for more information.

Code:

// Make sure periph is initialized.
if _, err := host.Init(); err != nil {
	log.Fatal(err)
}

// Use i2creg I²C bus registry to find the first available I²C bus.
b, err := i2creg.Open("")
if err != nil {
	log.Fatal(err)
}
defer b.Close()

// Dev is a valid conn.Conn.
d := &i2c.Dev{Addr: 23, Bus: b}

// Send a command 0x10 and expect a 5 bytes reply.
write := []byte{0x10}
read := make([]byte, 5)
if err := d.Tx(write, read); err != nil {
	log.Fatal(err)
}
fmt.Printf("%v\n", read)

Well known pin functionality.

Addr is an I²C slave address.

Code:

var addr i2c.Addr
flag.Var(&addr, "addr", "i2c device address")
flag.Parse()

Set sets the Addr to a value represented by the string s. Values may be in decimal or hexadecimal form. Set implements the flag.Value interface.

String returns an i2c.Addr as a string formatted in hexadecimal.

type Bus interface {
	String() string
	// Tx does a transaction at the specified device address.
	//
	// Write is done first, then read. One of 'w' or 'r' can be omitted for a
	// unidirectional operation.
	Tx(addr uint16, w, r []byte) error
	// SetSpeed changes the bus speed, if supported.
	//
	// On linux due to the way the I²C sysfs driver is exposed in userland,
	// calling this function will likely affect *all* I²C buses on the host.
	SetSpeed(f physic.Frequency) error
}

Bus defines the interface a concrete I²C driver must implement. This interface is consumed by a device driver for a device sitting on a bus.
This interface doesn't implement conn.Conn since a device address must be specified. Use i2cdev.Dev as an adapter to get a conn.Conn compatible object.

BusCloser is an I²C bus that can be closed. This interface is meant to be handled by the application and not the device driver. A device driver doesn't "own" a bus, hence it must operate on a Bus, not a BusCloser.

Dev is a device on a I²C bus. It implements conn.Conn. It saves from repeatedly specifying the device address.

Duplex always returns conn.Half for I²C.

Tx does a transaction by adding the device's address to each command. It's a wrapper for Bus.Tx().

Write writes to the I²C bus without reading, implementing io.Writer. It's a wrapper for Tx().

type Pins interface {
	// SCL returns the CLK (clock) pin.
	SCL() gpio.PinIO
	// SDA returns the DATA pin.
	SDA() gpio.PinIO
}

Pins defines the pins that an I²C bus interconnect is using on the host. It is expected that an implementer of Bus also implements Pins, but this is not a requirement.

Code:

// Make sure periph is initialized.
if _, err := host.Init(); err != nil {
	log.Fatal(err)
}

// Use i2creg I²C port registry to find the first available I²C bus.
b, err := i2creg.Open("")
if err != nil {
	log.Fatal(err)
}
defer b.Close()

// Prints out the gpio pin used.
if p, ok := b.(i2c.Pins); ok {
	fmt.Printf("SDA: %s", p.SDA())
	fmt.Printf("SCL: %s", p.SCL())
}

Package i2c imports 7 packages and is imported by 96 packages.
https://godoc.org/periph.io/x/periph/conn/i2c
No stable release is available yet. If you are interested in getting the source code of this project, you can get it from the code repository. There are no experimental releases available at the moment.

These are just the objects included in Miller's Pd compiled as stand-alone libraries based on their source files. It's a quick and dirty hack to strip Pd down to the bare essentials so that the namespace will be fully functional. The lib_x_* files are generated using the included script, generate.sh. They should not be modified directly. Ideally, these would be compiled as individual objects. The files named after the classes are GUI objects that originally had g_ prefixes on the file names.

This stuff is currently here as a proof of concept for turning Pd core into a micro-language. If you want to start modifying these, then we should discuss how these should be maintained along with Miller's changes. This is not a place to fix bugs or add improvements. This library is an exact mirror of the code in Pd-vanilla, warts and all. The aim is 100% compatibility in a libdir form. This way we can have libdirs for each version number, and then choose to use old versions of this library for compatibility (i.e. vanilla-0.42.5, vanilla-0.41.4, vanilla-0.40.3).

puredata.info is hosted and serviced by IEM as a contribution to the Pd-community using Plone, see Impressum.
http://puredata.info/downloads/vanilla
#include <cstdarg> #include <cstdio> #include <cstdlib> #include <cctype> #include <dsnlexer.h> #include <wx/translation.h> Go to the source code of this file. Definition at line 35 of file dsnlexer.cpp. Definition at line 454 of file dsnlexer.cpp. Referenced by isNumber(), and NET_SETTINGS::ParseBusVector(). Return true if the next sequence of text is a number: either an integer, fixed point, or float with exponent. Stops scanning at the first non-number character, even if it is not whitespace. Definition at line 476 of file dsnlexer.cpp. Referenced by DSNLEXER::NextTok(). Definition at line 461 of file dsnlexer.cpp. Referenced by DSNLEXER::NextTok(). Test for whitespace. Our whitespace, by our definition, is a subset of ASCII, i.e. no bytes with MSB on can be considered whitespace, since they are likely part of a multibyte UTF8 character. Definition at line 433 of file dsnlexer.cpp. Referenced by isSep(), and DSNLEXER::NextTok(). Definition at line 121 of file dsnlexer.cpp.
https://docs.kicad.org/doxygen/dsnlexer_8cpp.html
When we talk about Kubernetes, we talk about pods and containers, and there is a difference between the two. When I was first introduced to pods, I mistook them for containers. In this article we will look at the difference between them, what init containers are, and why we need them.

Pod vs Container

A pod can run multiple containers sharing the same network namespace. This means each container can reach the others on localhost, which is great functionality when you want to run two parts of an app that need to communicate very frequently: connections over localhost are very fast and no packet ever leaves the pod. A pod is like a VM running different types of processes, and those processes are like the containers running inside it.

What are init containers?

These are normal containers except that they do some tasks before the main containers start and always exit. This means they are not long-running containers; they are containers with specialized tasks. They run to completion, and only after that do the main containers come up. A single pod can have multiple init containers, which run one by one to complete their tasks. Below is the workflow of how they look.

What are they used for?

These containers are used to prepare the pod to run the main application containers. The tasks can range from changing permissions on some files to setting up specific environment values. They can also be used as a precheck to verify whether the application can run on this pod at all.

There may be some tasks that only the root user can do, and since you never want to run your application as root (it is a security threat), init containers can perform those root-only tasks for you.

That covers what init containers are and why you need them. Subscribe for more articles on Kubernetes, SRE, DevOps, and sysadmin.
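The flow described above can be sketched as a minimal pod manifest. The names, images, and the permission-fixing command here are illustrative, not from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    # Runs first and must exit successfully before the main container starts.
    - name: fix-permissions
      image: busybox:1.36
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    # Started only after every init container has completed.
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
```

If the init container fails, Kubernetes restarts it according to the pod's restart policy, and the main container never starts until it succeeds.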
https://www.learnsteps.com/what-are-init-containers-and-why-do-you-need-them/
Windows Phone 7.8 SDK Update Released

Microsoft recently released the Windows Phone SDK Update for 7.8, which enables you to provide the Windows Phone 8 experience in your Windows Phone 7.5 apps and adds two new emulator images for 256 MB and 512 MB devices to your existing Windows Phone SDK 7.1 or Windows Phone SDK 8.0 installation. The update enables you to test drive how your Windows Phone 7.5-based apps' live tiles will look and behave when they are run on a device running Windows Phone 7.8. It also includes the Windows Phone SDK 7.1.1 update, but you should already be running Windows Phone SDK 7.1 to use it. You will be able to view the minimalistic UI and Windows Phone 7 device skin as soon as you run your app after the installation of this update. It enables you to make your app's tile smaller after pinning it to the start screen and also provides support for wide tiles and the flip tile template. Moreover, secondary tiles can be enabled with support for the Flip, Iconic and Cycle tile templates.

According to official sources, the recently released update is the last ever update for Windows Phone 7 handsets, which have been orphaned by the company's platform shift to Windows Phone 8, which is built on a different kernel. When asked why Microsoft is abandoning WP7 and shifting towards WP8, a Windows Phone spokesperson commented:

While it is true that Windows Phone 8 is a generational shift in technology requiring new hardware, we care deeply about our existing customers and want to keep their phones fresh. We are therefore providing the most iconic feature of Windows Phone 8 - the new Start screen experience - to existing customers via a Windows Phone 7.8 update which will be rolled out to as many devices as possible in early 2013. Existing customers will not lose any functionality and will continue to benefit from services such as SkyDrive, Xbox Live, Office Mobile, Bing and more than 125,000 applications currently available in the Windows Phone Store.
Our OEM partners continue to deliver innovation for existing Windows Phone devices - for example, Nokia offers Lumia customers some exclusive apps like PhotoBeamer, Cinemagraph and others. In addition developers can continue to create applications that will work on both Windows Phone 7.5/7.8 and Windows Phone 8 devices.

When Necroman asked about the possibility of an offline installer, Cliff Simpkins, Product Manager, Windows Phone Developer Experience replied: Unfortunately, we don't have an ISO/offline installer for the update. I'll inquire and see if we can get one spun up, but I wouldn't count on it short term. However, Michael Crump, Program Manager, Telerik has already created an offline installer for the Windows Phone 7.8 SDK and it is available for free.

According to Tron 42, the new emulator for 7.8 is not compatible with Windows 2008 R2 Server. However, LanceMcCarthy suggested checking Joe Healy's blog regarding issues related to Hyper-V and WP emulators.

Dinchy87 commented: I tried this with appextra in my app but i get this error "Warning 1 The element 'Deployment' in namespace 'http://schemas.microsoft.com/windowsphone/2009/deployment' has invalid child element 'AppExtra'. List of possible elements expected: 'App'."
Cliff replied: I've verified (both personally and with the team) that the tooling in VS 2010 will indeed throw an error with the AppExtra element. The issue is happening because there were no changes to the VS2010 environment with this patch.

"When will Microsoft release Windows Phone app templates for IronRuby and F#?" queries mcandre. However, Cliff asked him to post his suggestion on the official feedback site.
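For readers hitting the AppExtra error discussed above, the placement Cliff describes looks roughly like the following sketch of a WMAppManifest.xml; the attribute values here are illustrative and may differ for a given app:

```xml
<Deployment xmlns="http://schemas.microsoft.com/windowsphone/2009/deployment"
            AppPlatformVersion="7.1">
  <!-- AppExtra goes BEFORE the App element -->
  <AppExtra xmlns="" AppPlatformVersion="8.0">
    <Extra Name="Tiles"/>
  </AppExtra>
  <App xmlns="" ProductID="{...}" Title="MyApp" Version="1.0.0.0">
    <!-- existing app definition -->
  </App>
</Deployment>
```

Placing AppExtra after the App element is what produces the "invalid child element 'AppExtra'" warning quoted earlier.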
http://www.infoq.com/news/2013/02/wp-sdk-update-7-8
The device on our current project has two unique software development kits (one for Android and one for iOS). My team wanted to use Xamarin.Forms to create an application for the device so that the core logic of the app could be stored in one place. However, in order to also incorporate the platform-specific SDK methods, we had to export Java code (for Android) and Swift code (for iOS) as libraries and then import those libraries into a Xamarin.Forms project. In this post, I will go through the steps I used to import the functionality from Android into a Xamarin.Forms project. There are three main steps involved in the process:

- Generating an .AAR file in the Android Studio project. The .AAR file contains bundled Java classes, methods, and perhaps Android shared resources.
- Creating a Xamarin.Forms binding project that creates a bindings library (generated as a DLL) for the .AAR file. The DLL can be used by any Xamarin.Forms project.
- Importing this DLL file and then using the functionality the library provides.

I used the imported functionality to implement the Android version of a message interface. Below is an example of exporting a Java class that stores hello and goodbye messages to a DLL that can then be used by a Xamarin.Forms project.

1. Create an .AAR File

Start by creating a new Android Studio project. In this example, mine is called MyApplication. Next, create a new module (File -> New -> New Module -> Android Library). I called mine myarr. Now create a new package (right-click on Java, New -> Package -> in src/main). Use the name message so you'll be able to access the contents of the package in Xamarin.Forms using the syntax message.{method}. Inside of this package, create a class with functionality that you would like to export to Xamarin.Forms. I created a class called MyMessage with the ability to display two custom messages:

package message;

public class MyMessage {
    static String hello = "Hello dear user!";
    static String bye = "Bye for now. See you later!";

    public String helloMessage() {
        return hello;
    }

    public String byeMessage() {
        return bye;
    }
}

To build the .AAR file, right-click on your project name (in my case, messages), then click the Gradle tab on the far left of the screen. Go to Build -> Assemble Release. Your .AAR file will now be located at {project name}/{module name}/build/outputs/aar. For example, mine was located at MyApplication/myarr/build/outputs/aar.

2. Create a Bindings Library for the .AAR File

In this step, the goal is to use the .AAR file containing the compiled Java code to generate a DLL which can then be referenced in the Xamarin.Forms project. Create a new solution in Xamarin.Forms. I called mine ExportAar. Add the .AAR file to the Jars folder included in the template. Right-click on Jars, click Add -> Add Files, and copy the .AAR file generated in Step 1. The .AAR file should now appear in the Jars folder. Check to make sure the Build Action of the .AAR file is set to LibraryProjectZip by right-clicking on the file -> Build Action -> Library Project Zip. Build the project. The DLL file should be in {SolutionName}/bin/Debug/{SolutionName}.dll.

3. Use the DLL in a Xamarin.Forms Project

Finally, import the message-generating library in Xamarin.Forms and generate some messages. Create a new Visual Studio project to use the library. I created a Blank Forms App and called it ImportAar. Next, we'll import the DLL file created in Step 2 into {ProjectName}.Droid. To do this, go to References -> Edit References -> .NET Assembly -> Browse. Find the DLL file generated in Step 2, click OK, and build. The name of the imported .NET assembly should now appear under References. To view the members of the imported project, double-click on the project name. In order to test the library, I created an interface returning hello and goodbye messages and implemented the interface on Android and iOS (for now, iOS simply returns default values).
using System;

namespace ImportAar
{
    public interface IMessage
    {
        String helloMessage();
        String goodbyeMessage();
    }
}

To access the variables/methods of the imported .NET assembly, create an instance of the imported class. The Android implementation of the IMessage interface creates an instance of MyMessage and calls the appropriate methods on it:

using System;
using ImportAar.Droid;

[assembly: Xamarin.Forms.Dependency(typeof(MyMessage))]
namespace ImportAar.Droid
{
    public class MyMessage : IMessage
    {
        public MyMessage() {}

        public String helloMessage()
        {
            Message.MyMessage message = new Message.MyMessage();
            String hello = message.HelloMessage();
            return hello;
        }

        public String goodbyeMessage()
        {
            Message.MyMessage message = new Message.MyMessage();
            String bye = message.ByeMessage();
            return bye;
        }
    }
}

Now that the interface returning messages has been implemented in Android, all we have to do is call it. I adjusted the blank forms project template by naming the default label myLabel. I then used the DependencyService to call the correct implementation of helloMessage for the platform I was on. This allowed me to call the Android implementation of the IMessage interface.

InitializeComponent();
IMessage platform = DependencyService.Get<IMessage>();
myLabel.Text = platform.helloMessage();

When I run the app in Xamarin.Forms, my Android phone displays the hello message specified in the MyMessage Java class created in Android Studio.

Comments

Can you tell me which template you used for the solution used to create the DLL from the AAR? I'm using VS for Mac and I can't find any solution template that gives me a Jars folder. Thanks

The template used was Java Bindings Library. For more details, look at.
https://spin.atomicobject.com/2017/09/25/export-java-xamarin-forms/
In this codelab, you'll learn how to integrate the C++ Firebase Games SDK in a sample Android game, using Google Analytics as an example. You'll be able to add the features you need, integrate some basic analytics logic to measure your players' progress, and share the game with testers to get early feedback.

Walkthrough

If you want to walk through this codelab with the authors, watch this video:

What you'll learn

- How to add Firebase to your Android CMake-based game.
- How to figure out which C++ and Gradle dependencies you need.
- How to log Analytics events.
- How to debug analytics events.
- How to share your game with App Distribution.

What you'll need

- Android Studio
- The sample code
- A test device or emulator with Google Play Services

git clone

Download the Firebase SDK

MacOS/Linux:
sh download.sh

Windows (from PowerShell):
./download.ps1

You may also manually download the SDK. If you do this, the Firebase C++ SDK must be extracted into /third_party such that a folder named firebase_cpp_sdk has the root CMakeLists.txt from the Firebase SDK in it.

First, play the sample game and ensure that everything is working. It's a simple infinite runner with a procedurally generated level and a single button to jump.

- Select File > New > Import Project (or select Import Project from the splash screen)
- Open the proj.android/ folder included in the repository
- [Optional] Open proj.android/gradle.properties and find PROP_APP_ABI. You may remove all but your target architecture to reduce build times. PROP_APP_ABI=x86 will build just for the emulator; PROP_APP_ABI=armeabi-v7a will build for most phones.
- Click the Debug button to build and run the game. This will take time to build the Cocos2dx game engine.

- Create a new project in the Firebase Console.
- Give it a name like "Popsicle Runner"
- Enable Analytics
- Add or create an analytics account
- Add a new Android app to your project
- Add com.firebase.popsiclerunner as your package name.
- Download the google-services.json and copy it into proj.android/app
- Ignore the given instructions for adding the Firebase SDK and click next
- You may click "Skip this step" when asked to verify your installation

Add the Firebase SDK to CMakeLists.txt

Open the root-level CMakeLists.txt. This should have the following code near the top:

CMakeLists.txt
cmake_minimum_required(VERSION 3.6)

set(APP_NAME popsiclerunner)
project(${APP_NAME})

Then add the following lines to the end of that CMakeLists.txt file:

CMakeLists.txt
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/third_party/firebase_cpp_sdk)
target_link_libraries(${APP_NAME} firebase_analytics firebase_app)

- add_subdirectory includes the Firebase C++ SDK and makes it available to this game.
- target_link_libraries hooks the game up with Firebase's C++ libraries built for Android.

Add the Google Services plugin

To hook up the Firebase SDK, you must add the Google Services plugin to your Gradle build script. To do this, open the project-level build.gradle file (this is in the proj.android folder) and add classpath 'com.google.gms:google-services:4.3.3' as a buildscript dependency.

build.gradle
buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1'
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
        classpath 'com.google.gms:google-services:4.3.3' // Google Services plugin
    }
}

Then add the plugin to your module-level build.gradle file (this is in your proj.android/app folder). Add apply plugin: 'com.google.gms.google-services' underneath apply plugin: 'com.android.application':

build.gradle
apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services' // Google Services plugin

Locate the C++ SDK in Gradle

To tell Gradle where to find the Firebase C++ SDK, add the following lines to the bottom of the settings.gradle file.
settings.gradle
gradle.ext.firebase_cpp_sdk_dir = "$settingsDir/../third_party/firebase_cpp_sdk/"
includeBuild "$gradle.ext.firebase_cpp_sdk_dir"

Add the Android dependencies

To hook up the Android dependencies for Firebase, open the module-level Gradle file for popsicle_runner (in proj.android/app/build.gradle) and add the following just before the typical dependencies { section at the end:

build.gradle
apply from: "$gradle.firebase_cpp_sdk_dir/Android/firebase_dependencies.gradle"
firebaseCpp.dependencies {
    analytics
}

AndroidX and Jetifier

Add AndroidX and Jetifier support by opening gradle.properties and adding this to the end:

gradle.properties
android.useAndroidX = true
android.enableJetifier = true

Initialize Firebase in your game

Initialize Firebase in the game by opening up Classes/AppDelegate.cpp. Add the following #include directives to the top:

AppDelegate.cpp
#include <firebase/app.h>
#include <firebase/analytics.h>

Then add App::Create and initialize the Firebase features you need. To do this, find AppDelegate::applicationDidFinishLaunching and add this code before auto scene = MainMenuScene::createScene():

AppDelegate.cpp
{
    using namespace firebase;
    auto app = App::Create(JniHelper::getEnv(), JniHelper::getActivity());
    analytics::Initialize(*app);
}

If you debug the game and refresh the Firebase dashboard, you should see one new user appear after a minute or so. Even early in development, analytics is a useful tool to gauge how beta testers are interacting with the game. There are some analytics that are gathered automatically - such as retention reports - but it's useful to add custom events tailored for your specific game. A good starting point is to log an analytics event when the player starts a level. We can use the number of level-start events to see how often a player might replay the game in a session. We'll also log an event when the player dies with how far they've gotten.
This will let us see how changes we make affect the duration of a single session and will help us determine whether players want a shorter/harder game or a longer/easier one.

Add Analytics Headers

Open Classes/PopsicleScene.cpp and add the Firebase headers to the top so we can make analytics calls.

PopsicleScene.cpp
#include <firebase/analytics.h>
#include <firebase/analytics/event_names.h>

Log a Level Start event

To log an event when this Scene is staged by the Cocos2dx Director, find the stubbed function PopsicleScene::onEnter(). Enter the following code to log the Level Start event here:

PopsicleScene.cpp
using namespace firebase;
analytics::LogEvent(analytics::kEventLevelStart);

Log a Level End event

To see how well a player is doing, let's log a Level End event with how far the player got when they finally died. To do this, find PopsicleScene::gameOver() and add this to the end of the if (!_gameOver) { block, before setting _gameOver = true;:

PopsicleScene.cpp
{
    using namespace firebase;
    analytics::LogEvent(analytics::kEventLevelEnd, "distance", _lastDistance);
}

kEventLevelEnd is the level-end event, while "distance" is an event parameter. We're adding the last recorded distance here, which is a good approximation of how far a player travelled before dying.

You can click Debug now, but it'll take time for any events to get reported on the Analytics dashboard. There are two reasons for this: 1) events are batched and uploaded about once an hour to preserve battery, and 2) reports are generated every 24 hours.

Enabling Debug Mode

It's still possible to debug Analytics events by putting your device into debug mode. First make sure you have the Android Debug Bridge (ADB) installed and set up.
Typing adb devices should show the device you're going to test on:

$ adb devices
List of devices attached
emulator-5554 device

Then run the following adb shell command:

adb shell setprop debug.firebase.analytics.app com.firebase.popsiclerunner

This tells Firebase Analytics to log events immediately, and it will automatically exclude them from your normal reports to avoid polluting your live events when testing. If you want to undo this action later, simply run:

adb shell setprop debug.firebase.analytics.app .none.

Viewing Events

Open the DebugView in your Firebase Console, then click Debug and play the game. You should see new events appearing almost immediately after they occur in the game. If you expand the level_end event, you'll also see the custom "distance" parameter you've logged.

Next you'll want to get eyes on your game, whether they're internal to your studio, among close friends, or from your community. Firebase App Distribution gives you a great way to invite players to play your game.

Building a Standalone Binary

First build a standalone APK to share from Build > Build Bundle(s) / APK(s) > Build APK(s). Android Studio will pop up a dialog box letting you locate the built file. If you miss it, you can click on "Event Log" to get the link again.

Upload to Firebase App Distribution

- Open App Distribution and click "Get Started"
- Drag and drop your .apk file into the box that says "Drag any .apk here to create a new release."
- Enter your email address as the first tester.
- Click Next.
- Add a description and click Distribute

Invite Testers

Rather than having to manually enter every email address, you can create an invite link. When you capture a user with this invite link, you can also add them to a group of testers. This would let you separate internal testers from external testers, for instance.
- Click "Invite links" - Click "New invite link" - Set the group here from the dropdown. - Click "Create Link" - Click "Copy link" and share it out however you wish You've successfully added analytics to your C++ based game, invited some friends to play, and you know how to find and link Firebase libraries in a CMake and Gradle based build system common in Android development. What we've Covered - How to add Firebase to your Android CMake based game. - How to figure out which C++ and Gradle dependencies you need. - How to log Analytics events. - How to debug analytics events. - How to share your game with App Distribution. Next Steps - Try logging in a user anonymously and saving their high score in Realtime Database. - Log Analytics events in your own game. - Try adding analytics to an iOS game. Learn More - See the list of games specific events and consider how they may fit into your own game.
https://codelabs.developers.google.com/codelabs/get-started-with-firebase-cpp/?hl=pt-br
spawnl, spawnle, spawnlp, spawnlpe, spawnv, spawnve, spawnvp, spawnvpe, _wspawnl, _wspawnle, _wspawnlp, _wspawnlpe, _wspawnv, _wspawnve, _wspawnvp, _wspawnvpe

Header File

process.h

Category

Process Control Routines

Prototype

int spawnl(int mode, char *path, char *arg0, arg1, ..., argn, NULL);
int _wspawnl(int mode, wchar_t *path, wchar_t *arg0, arg1, ..., argn, NULL);
int spawnle(int mode, char *path, char *arg0, arg1, ..., argn, NULL, char *envp[]);
int _wspawnle(int mode, wchar_t *path, wchar_t *arg0, arg1, ..., argn, NULL, wchar_t *envp[]);
int spawnlp(int mode, char *path, char *arg0, arg1, ..., argn, NULL);
int _wspawnlp(int mode, wchar_t *path, wchar_t *arg0, arg1, ..., argn, NULL);
int spawnlpe(int mode, char *path, char *arg0, arg1, ..., argn, NULL, char *envp[]);
int _wspawnlpe(int mode, wchar_t *path, wchar_t *arg0, arg1, ..., argn, NULL, wchar_t *envp[]);
int spawnv(int mode, char *path, char *argv[]);
int _wspawnv(int mode, wchar_t *path, wchar_t *argv[]);
int spawnve(int mode, char *path, char *argv[], char *envp[]);
int _wspawnve(int mode, wchar_t *path, wchar_t *argv[], wchar_t *envp[]);
int spawnvp(int mode, char *path, char *argv[]);
int _wspawnvp(int mode, wchar_t *path, wchar_t *argv[]);
int spawnvpe(int mode, char *path, char *argv[], char *envp[]);
int _wspawnvpe(int mode, wchar_t *path, wchar_t *argv[], wchar_t *envp[]);

Note: In spawnle, spawnlpe, spawnv, spawnve, spawnvp, and spawnvpe, the last string must be NULL.

Description

The functions in the spawn... family create and run (execute) other files, known as child processes. There must be sufficient memory available for loading and executing a child process.

The value of mode determines what action the calling function (the parent process) takes after the spawn... call. The possible values of mode include P_WAIT, P_NOWAIT, and P_NOWAITO, described under Return Value below.

path is the file name of the called child process. The spawn...
function calls search for path using the standard operating system search algorithm:

- If there is no extension or no period, they search for an exact file name. If the file is not found, they search for files first with the extension EXE, then COM, and finally BAT.
- If an extension is given, they search only for the exact file name.
- If only a period is given, they search only for the file name with no extension.
- If path does not contain an explicit directory, spawn... functions that have the p suffix search the current directory, then the directories set with the operating system PATH environment variable.

The suffixes p, l, v, and e added to the spawn... "family name" specify that the named function operates with certain capabilities. Each function in the spawn... family must have one of the two argument-specifying suffixes (either l or v). The path-search and environment-inheritance suffixes (p and e) are optional. For example:

- spawnl takes separate arguments, searches only the current directory for the child, and passes on the parent's environment to the child.
- spawnvpe takes an array of argument pointers, incorporates PATH in its search for the child process, and accepts the envp argument for altering the child's environment.

The spawn... functions must pass at least one argument to the child process (arg0 or argv[0]). This argument is, by convention, a copy of path. (Using a different value for this 0 argument won't produce an error.) If you want to pass an empty argument list to the child process, then arg0 or argv[0] must be NULL.

When the l suffix is used, arg0 usually points to path, and arg1, ..., argn point to character strings that form the new list of arguments. A mandatory NULL following argn marks the end of the list.

When the e suffix is used, you pass a list of new environment settings through the argument envp. This environment argument is an array of character pointers.
Each element points to a null-terminated character string of the form:

envvar = value

where envvar is the name of an environment variable, and value is the string value to which envvar is set. The last element in envp[] is NULL. When envp is NULL, the child inherits the parent's environment settings.

The combined length of arg0 + arg1 + ... + argn (or of argv[0] + argv[1] + ... + argv[n]), including the space characters that separate the arguments, must be less than 260 bytes for Windows (128 for DOS). Null-terminators are not counted.

When a spawn... function call is made, any open files remain open in the child process.

Return Value

When successful, the spawn... functions where mode is P_WAIT return the child process's exit status (0 for a normal termination). If the child specifically calls exit with a nonzero argument, its exit status can be set to a nonzero value.

If mode is P_NOWAIT or P_NOWAITO, the spawn... functions return the process ID of the child process. The ID obtained when using P_NOWAIT can be passed to cwait.

On error, the spawn... functions return -1, and the global variable errno is set to indicate the error.

Example

#include <process.h>
#include <stdio.h>
#include <stdlib.h>  /* for exit */

void spawnl_example(void)
{
    int result;

    result = spawnl(P_WAIT, "bcc32.exe", "bcc32.exe", NULL);
    if (result == -1)
    {
        perror("Error from spawnl");
        exit(1);
    }
}

int main(void)
{
    spawnl_example();
    return 0;
}
http://docwiki.embarcadero.com/RADStudio/XE3/en/Spawnl,_spawnle,_spawnlp,_spawnlpe,_spawnv,_spawnve,_spawnvp,_spawnvpe,_wspawnl,_wspawnle,_wspawnlp,_wspawnlpe,_wspawnv,_wspawnve,_wspawnvp,_wspawnvpe
classifier is a JavaScript naive Bayesian classifier with backends for Redis and localStorage:

var classifier = require('classifier');

var bayes = new classifier.Bayesian();
bayes.train("cheap replica watches", "spam");
bayes.train("are we still on for lunch?", "not");
var category = bayes.classify("replica watches"); // "spam"

If you have node you can install with npm:

npm install classifier

Download the latest classifier.js. In the browser you can only use the localStorage and (default) memory backends.

You can store the classifier state in Redis for persisting and training from multiple sources:

var bayes = new classifier.Bayesian({
  backend: {
    type: 'Redis',
    options: {
      hostname: 'localhost', // default
      port: 6379,            // default
      name: 'emailspam'      // namespace for persisting
    }
  }
});
bayes.train("cheap replica watches", "spam");
bayes.classify("replica watches");

You can serialize and load in the classifier's state with JSON:

var json = bayes.toJSON();
bayes.fromJSON(json);

Bayesian() takes an options hash in which you can define these properties:

backend

The backend property takes a type, which is one of 'Redis', 'localStorage', or 'memory' (default). The backend also has an options hash. The Redis backend takes hostname, port, name, db, and error (an error callback) in its options. The localStorage backend takes name for namespacing.

thresholds

Specify the classification thresholds for each category. To classify an item in a category with a threshold of x, the probability that the item is in the category has to be more than x times the probability that it's in any other category. The default value is 1. A common threshold setting for spam is:

thresholds: {
  spam: 3,
  not: 1
}

default

The default category to throw an item in if it can't be classified in any of the categories. The default value of default is "unclassified".
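The threshold rule described under thresholds can be modeled in a few lines of plain JavaScript. This is an illustrative sketch of the decision rule only, not the library's internal code, and the probability numbers are made up:

```javascript
// A category wins only if its probability beats every other category's
// probability by that category's threshold factor; otherwise fall back.
function pick(probs, thresholds, fallback) {
  for (const cat of Object.keys(probs)) {
    const t = thresholds[cat] || 1; // threshold defaults to 1
    const others = Object.keys(probs).filter(c => c !== cat);
    if (others.every(c => probs[cat] > t * probs[c])) return cat;
  }
  return fallback;
}

// Clear winner: 0.9 > 3 * 0.1, so "spam" is chosen.
console.log(pick({spam: 0.9, not: 0.1}, {spam: 3, not: 1}, "unclassified"));

// "spam" leads but not by 3x, and "not" doesn't lead at all,
// so the fallback category is returned.
console.log(pick({spam: 0.2, not: 0.15}, {spam: 3, not: 1}, "unclassified"));
```

A higher spam threshold like this trades recall for precision: borderline items land in "unclassified" rather than being flagged.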
https://www.npmjs.com/package/classifier
Concurrency Abstractions in Elixir

In the previous article, there was a fair amount of boilerplate involved in doing common tasks, such as wanting to keep track of some state within a process or making a blocking call and waiting for a response. Thankfully, Elixir comes with a number of common abstractions to make writing concurrent code even easier. In this article we will be looking at Task, Agent, and GenServer.

Task

The Task module makes doing work concurrently almost effortless. There are only two methods you tend to work with: async to begin asynchronous work done in another process, and await which waits for that work to finish and provides you with the result. Let's look at a very simple example to see it in action, and then we'll look at how it might be used to speed up a map by doing it in parallel.

task = Task.async(fn -> 5 + 5 end)

The above line will process the anonymous function in a separate process. You are free to continue on your way, and as long as you have the reference to task handy, you can access the result of the anonymous function. What the async function returns is basically a reference to the PID of the process running this task, which is needed to fetch the result:

%Task{owner: #PID<0.80.0>, pid: #PID<0.82.0>, ref: #Reference<0.0.7.207>}

Let's check on the pid of the task to see what is happening with that process:

Process.info(task.pid)
nil

Process.alive?(task.pid)
false

As you can see, that process is no longer alive... as soon as the Task has finished its work, that process is shut down. So how do we access the response from it, then? The key is to look in our own mailbox (the calling process):

Process.info(task.owner)[:messages]
[{#Reference<0.0.7.207>, 10},
 {:DOWN, #Reference<0.0.7.207>, :process, #PID<0.82.0>, :normal}]

task.owner happens to be our current process, or self(). Here we can see two messages waiting to be received.
The first is the response to our async task, and the second is that the reference to our Task process has notified us that it has shut down. We could use a receive block to access these messages, but the Task module makes it even easier:

Task.await(task)
10

If we now look back at our current process's messages, we'll see that they have been emptied:

Process.info(task.owner)[:messages]
[]

Parallel Map with Task

Let's take a look at a parallel map example using Task's async and await functions. I used this example in the previous article without using Task, and it turns out to be similar but much simpler and easier to reason about now that the complexity has been hidden from us.

defmodule Statuses do
  def map(urls) do
    urls
    |> Enum.map(&(Task.async(fn -> process(&1) end)))
    |> Enum.map(&(Task.await(&1)))
  end

  def process(url) do
    case HTTPoison.get(url) do
      {:ok, %HTTPoison.Response{status_code: status_code}} ->
        {url, status_code}
      {:error, %HTTPoison.Error{reason: reason}} ->
        {:error, reason}
    end
  end
end

We'll use the same tests from last time. These tests finish in approximately 2.1 seconds, which shows us that it is indeed working, because that is about the length of the slowest HTTP call of two seconds.

Agent

Agents make it easy to store state in a process. This allows us to share state/data across multiple processes without having to pass a "big bag" of data to every function that may need access to it. We'll create a small module that allows us to get and set configuration keys which live in an Agent. The start_link function begins the Agent process, and then get and set access and update the configuration values.

defmodule Configure do
  def start_link(initial \\ %{}) do
    Agent.start_link(fn -> initial end, name: __MODULE__)
  end

  def get(key) do
    Agent.get(__MODULE__, &Map.fetch(&1, key))
  end

  def set(key, value) do
    Agent.update(__MODULE__, &Map.put(&1, key, value))
  end
end
The reason for that is that we've bound this process to the module's name (which means there can only be one). Let's write some simple tests to ensure it is working as expected.

defmodule ConfigureTest do
  use ExUnit.Case

  test "get with initial value" do
    Configure.start_link(%{env: :production})
    assert Configure.get(:env) == {:ok, :production}
  end

  test "error when missing value" do
    Configure.start_link()
    assert Configure.get(:env) == :error
  end

  test "set and then get value" do
    Configure.start_link()
    Configure.set(:env, :production)
    assert Configure.get(:env) == {:ok, :production}
  end
end

Agents store their state in memory, so you should only store data that can be easily rebuilt in some sort of restart strategy (using a Supervisor!). Although we used a Map in the example above, their state can be any Elixir data type.

All processes in Elixir handle requests sequentially. Even though Elixir is highly concurrent, each process handles one request at a time. If a request to the process is slow, it will block everyone else wanting access to the data in this process. This is very useful, though: requests are handled in the order that they come in, so it is very predictable in that sense. Just make sure that "heavy lifting" or slow calculations aren't done in the Agent process itself but rather in the caller process or elsewhere.

If you are looking for something more powerful or flexible, check out ETS. For a comparison between ETS, Agents, and other external tools such as Redis, there is a great article written by Barry Jones about this topic.

GenServer

So far we have looked at Tasks for managing concurrent code execution and Agents for managing state within a process. Next we will look at the GenServer module, which combines state with concurrent code execution. We'll convert the MyLogger example from Part 1 into a module which uses GenServer.
defmodule MyLogger do
  use GenServer

  # Client

  def start_link do
    GenServer.start_link(__MODULE__, 0)
  end

  def log(pid, msg) do
    GenServer.cast(pid, {:log, msg})
  end

  def print_stats(pid) do
    GenServer.cast(pid, {:print_stats})
  end

  def return_stats(pid) do
    GenServer.call(pid, {:return_stats})
  end

  # Server

  def handle_cast({:log, msg}, count) do
    IO.puts msg
    {:noreply, count + 1}
  end

  def handle_cast({:print_stats}, count) do
    IO.puts "I've logged #{count} messages"
    {:noreply, count}
  end

  def handle_call({:return_stats}, _from, count) do
    {:reply, count, count}
  end
end

The first thing you'll notice is that we included the use GenServer line at the top of our module. This allows us to take advantage of the GenServer functionality.

Next up, you'll see the start_link function. What this does is call the GenServer.start_link function, spawning a new process for the current module and providing the initial state, which in this case is 0. It returns a tuple with {:ok, pid}.

GenServers provide both synchronous and asynchronous functionality. Synchronous, where you want to block for an answer before continuing, is called call, whereas asynchronous requests are called cast.

Cast - asynchronous

When we call the line of code GenServer.cast(pid, {:log, msg}) inside of the log function, it will send a message over to our process with {:log, msg} as the arguments. It's up to us to write a function for handling this message. We do that by implementing the handle_cast function, where the first argument should match the incoming arguments being sent over: {:log, msg}, and the second argument is the current state of our process.

Inside of the handle_cast function, we should perform the task that we're handling and then respond with a tuple containing the atom :noreply along with the new state of our process.
def handle_cast({:log, msg}, count) do
  IO.puts msg
  {:noreply, count + 1}
end

Call - synchronous

When we need some sort of response from the process, it's time for us to use the call functionality. GenServer.call(pid, {:return_stats}) sends a message to our process with the argument {:return_stats}. Very much like the cast functionality, it is up to us to write a function called handle_call, which has the job of dealing with that incoming message and providing a response.

def handle_call({:return_stats}, _from, count) do
  {:reply, count, count}
end

The arguments differ slightly from cast to call. handle_call receives an additional from argument, which is the process that made the request. The response is different as well: it should be a tuple made up of three items. The first is the atom :reply, the second is the value we should send back to the caller as the response, and the third is the new state our process should have. In the example above, the return value and the new state are the same, which is why count is repeated twice.

Let's use our module to try it out!

{:ok, pid} = MyLogger.start_link
MyLogger.log(pid, "First message")
MyLogger.log(pid, "Another message")
MyLogger.print_stats(pid)
stats = MyLogger.return_stats(pid)

Conclusion

In this article, we have taken a look at three concurrent code abstractions that build upon the underlying functionality provided by Elixir. The first was Task, which executes code inside of a separate process and optionally waits for a response. The second was Agent, which is a framework for managing state within a process. The third was GenServer, a framework for executing code synchronously or asynchronously and managing state at the same time.

What we're missing is the ability to monitor our processes and react if and when they crash. We can do this ourselves, but if we did, we'd be missing out on some fantastic functionality which comes with Supervisors and OTP.
Essentially they allow us to monitor processes and define what should happen if they crash, automatically restarting the monitored process(es) and their children. We'll explore this in my next article!
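As a closing cross-language aside (this sketch is mine, not part of the article), the call/cast distinction is not Elixir-specific. The same shape can be built in Python with a worker thread that owns the state and a queue as its mailbox; the class and message names below are illustrative:

```python
import queue
import threading

class LoggerActor:
    """A rough Python analogue of the MyLogger GenServer: one worker
    thread owns the count, and all mutation happens via messages."""

    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        count = 0  # process state, like the GenServer's count
        while True:
            tag, payload, reply_to = self._inbox.get()
            if tag == "log":        # like cast: fire-and-forget
                print(payload)
                count += 1
            elif tag == "stats":    # like call: the sender is blocked waiting
                reply_to.put(count)

    def log(self, msg):
        # asynchronous, like GenServer.cast - returns immediately
        self._inbox.put(("log", msg, None))

    def return_stats(self):
        # synchronous, like GenServer.call - blocks until the reply arrives
        reply_to = queue.Queue()
        self._inbox.put(("stats", None, reply_to))
        return reply_to.get(timeout=5)

logger = LoggerActor()
logger.log("First message")
logger.log("Another message")
print(logger.return_stats())  # 2
```

Because the mailbox is a FIFO queue processed by a single thread, the stats request is guaranteed to be handled after the two log messages, just as a GenServer handles its mailbox in order.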
https://www.cloudbees.com/blog/concurrency-abstractions-in-elixir?utm_source=elixirdigest&utm_medium=web&utm_campaign=featured
CC-MAIN-2022-40
refinedweb
1,694
65.12
An accumulator that stops emission if any handler returns a zero value and sets the emission result to it in this case. If all handlers return non-zero values, signal emission is not stopped and the result is returned by the last handler. If there are no handlers at all, the emission result is C{True}.

@note: Whether a value is non-zero is determined as by the built-in C{bool} function.

src/p/y/py-notify-0.3.1/test/signal.py (py-notify)

def test_all_accept_accumulator (self):
    signal = Signal (AbstractSignal.ALL_ACCEPT)
    self.assertEqual (signal.emit (), True)
    signal.connect (lambda: True)
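The ALL_ACCEPT semantics described in that docstring are easy to restate in plain Python. This sketch is mine, not py-notify code, and the function name is illustrative:

```python
def emit_all_accept(handlers):
    """Call each handler in order; stop at the first zero (falsy) result
    and return it. With no handlers at all, the result is True."""
    result = True
    for handler in handlers:
        result = handler()
        if not result:        # "zero value", as judged by bool()
            return result     # emission stops here
    return result             # otherwise: the last handler's value

print(emit_all_accept([]))                                 # True
print(emit_all_accept([lambda: 1, lambda: 0, lambda: 5]))  # 0
```

Note that handlers after the first falsy result are never invoked, which is exactly what "stops emission" means in the docstring.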
http://nullege.com/codes/search/notify.signal.AbstractSignal.ALL_ACCEPT
[rails] Trouble writing a loop that will stop execution

I'm trying to write a function that will accept a Rails URL and see if it matches one of my Rights. Rights are just controller URLs, so I can manage access at the controller level (I don't need action level). What I'm trying to do is take the URL passed and see if it's access controlled. If it is, the method should stop and return true; if not, it will return false. This method is used to check, when someone logs out, whether they can be redirected to where they are or whether they need to be redirected to the home URL; it protects against an infinite login loop.

Code:
# true if this is a protected url that would cause a login loop
def looper_url?(url)
  rights = Right.find(:all)
  for right in rights
    if url_for(right) == url
      return true
      exit
    end
  end
end

--Andrew

Hello Andrew,

How about this:

Code:
def looper_url?(url)
  rights = Right.find(:all)
  return ! rights.detect {|right| url_for(right) == url }.nil?
end

For an isolated code to test the logic:

Code:
def url_for(right)
  return right
end

class Right
  def self.find(param)
    ['aaa','bbb','ccc']
  end
end

def looper_url?(url)
  rights = Right.find(:all)
  return ! rights.detect {|right| url_for(right) == url }.nil?
end

# will return false
puts looper_url?('afaa')

# will return true
puts looper_url?('aaa')

Jean-Marc (aka Myrdhrin)

Code:
def looper_url?(url)
  Right.find(:all).any?{|right| url_for(right) == url}
end
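For comparison across languages (my sketch, not part of the thread), the short-circuiting lookup that Ruby's any? provides is spelled any() in Python. It stops at the first match, which is exactly the early-exit behaviour the question was after; the function and parameter names below are stand-ins for the Rails helpers:

```python
def looper_url(url, rights, url_for=str):
    """True if url matches one of the rights; url_for is a stand-in
    for the Rails URL helper. Stops checking at the first match."""
    return any(url_for(right) == url for right in rights)

rights = ['aaa', 'bbb', 'ccc']
print(looper_url('afaa', rights))  # False
print(looper_url('aaa', rights))   # True
```

Because any() consumes a generator expression, rights after the first match are never passed through url_for, mirroring the lazy detect/any? answers above.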
http://www.sitepoint.com/forums/showthread.php?450076-rails-Trouble-writing-a-loop-that-will-stop-execution&p=3246573
Red Hat Bugzilla – Bug 507828 unable add broker in python qmf with URL which contains password Last modified: 2009-10-06 12:17:37 EDT

Description of problem:
Session.addBroker's target could be "amqp://guest:guest@localhost:5672" according to the documentation, but if I try to run this short example, it doesn't work:

#!/usr/bin/env python
import qpid
from qmf.console import Session

try:
    qmf_session = Session();
    broker = qmf_session.addBroker("amqp://guest:guest@localhost:5672", 10);
except Exception, e:
    raise "Cannot connect to the broker. %s" % e

"amqp://localhost", "amqp://localhost:5672", "amqp://guest@localhost:5672" work fine.

Version-Release number of selected component (if applicable): python-qpid-0.5.752581-3.el5

How reproducible: 100%

Steps to Reproduce: Run example.

Actual results: Example doesn't add broker to session.

Expected results: Example adds broker to session.

The documentation link above refers to Java documentation, not Python. I believe that these URL formats should be the same across languages. Which should we use? Is this a bug in the Python API or the Java documentation?

The format for passwords in the Python API calls for using a slash ('/') between the username and password, not a colon (':') as was tried. The referenced documentation is for the Java API, which unfortunately has some differences, including the URL format. -Ted
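Based on Ted's closing comment that the Python API wants a slash rather than a colon between username and password, a small helper can rewrite the Java-style URL into the Python-style one. The helper itself is illustrative only; it is not part of qmf:

```python
def to_python_broker_url(url):
    """Rewrite 'amqp://user:pass@host:port' (Java-style colon separator)
    into 'amqp://user/pass@host:port' (the slash the Python API expects).
    URLs without credentials are returned unchanged."""
    scheme, sep, rest = url.partition("://")
    if sep and "@" in rest:
        creds, at, host = rest.rpartition("@")
        creds = creds.replace(":", "/", 1)  # only the first colon separates user and password
        return scheme + sep + creds + at + host
    return url

print(to_python_broker_url("amqp://guest:guest@localhost:5672"))
# amqp://guest/guest@localhost:5672
```

With that rewrite, the failing call from the report would become qmf_session.addBroker("amqp://guest/guest@localhost:5672", 10).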
https://bugzilla.redhat.com/show_bug.cgi?id=507828
Version 0.1.0 of RcppExamples, a simple demo package for Rcpp, should appear on CRAN some time tomorrow. As mentioned in the post about release 0.7.8 of Rcpp, Romain and I carved this out of Rcpp itself to provide a cleaner separation of code that implements our R / C++ interfaces (which remain in Rcpp) and code that illustrates how to use it — which is now in RcppExamples. This also provides an easier template for people wanting to use Rcpp in their packages, as it will be easier to wrap one's head around the much smaller RcppExamples package. A simple example (using the newer API) may illustrate this:

#include <Rcpp.h>

RcppExport SEXP newRcppVectorExample(SEXP vector) {
    Rcpp::NumericVector orig(vector);         // keep a copy (as the classic version does)
    Rcpp::NumericVector vec(orig.size());     // create a target vector of the same size

    // we could query size via
    //   int n = vec.size();
    // and loop over the vector, but using the STL is so much nicer
    // so we use a STL transform() algorithm on each element
    std::transform(orig.begin(), orig.end(), vec.begin(), sqrt);

    Rcpp::Pairlist res(Rcpp::Named("result", vec),
                       Rcpp::Named("original", orig));

    return res;
}

With essentially five lines of code, we provide a function that takes any numeric vector and returns both the original vector and a transformed version — here by applying a square root operation. Even the looping along the vector is implicit thanks to the generic programming idioms of the Standard Template Library.
Nicer still, even on misuse, exceptions get caught cleanly and we get returned to the R prompt without any explicit coding on the part of the user:

R> library(RcppExamples)
Loading required package: Rcpp
R> print(RcppVectorExample( 1:5, "new" ))   # select new API
$result
[1] 1.000 1.414 1.732 2.000 2.236

$original
[1] 1 2 3 4 5

R> RcppVectorExample( c("foo", "bar"), "new" )
Error in RcppVectorExample(c("foo", "bar"), "new") :
  not compatible with INTSXP
R>

There is also analogous code for the older API in the package, but it is about three times as long, has to loop over the vector, and needs to set up the exception handling explicitly. As of right now, RcppExamples does not document every class, but it should already provide a fairly decent start for using Rcpp. And many more actual usage examples are … in the over two-hundred unit tests in Rcpp.

Update: Now actually showing new rather than classic...
http://www.r-bloggers.com/rcppexamples-0-1-0-2/
The more I look at *all* of the bits of code in the Monastery, the more frustrated my inner librarian becomes. This place is second only to CPAN as a resource for perlies, but it's such hard work finding what you need! Sorting the bits you need from the mass of code fragments lying around here takes a *lot* of Super Searching and reading. I'm sure even experienced monks have difficulty deciding whether something is small enough for a Snippet, cool enough for a CUFP, crafty enough for Craft or should just be plonked in the Code Catacombs. Even Where Do I Post X? is slightly confused on this - there is no mention of CUFP other than in a 2-year-old reply suggesting it should be added (and for some reason, error with CGI program: Can't locate object method "connect" via package "DBI" is appearing as a child node - looks like a misplaced SOPW) - and it seems to me that the difference between Craft and the other two is simply based on the history of <code> tags. The problem is that these are 'primary' categories. If you're trying the 'drill-down' approach to searching the Monastery for code, you have 4 main categories (cluttering, if I may say so, the main menu) to start off from, none of which are really intuitive or descriptive. What's more, sections have different display and categorisation methods. Craft and CUFP are simply listed like question nodes, with no indexing; Code is categorised (but seems to involve editors manually placing code into categories as and when) and has additional description fields; while Snippets (which also have descriptions) are simply listed by title, along with a however-many-year-old message about "When it gets to be more than one page, we'll categorise them" :) There's a brave attempt by grinder to index these, but that's not much use unless the snippets can be categorised by more than just name and title.
Add to this the amount of duplication and updating, and one realises why the some of the same questions get asked over and over, rather than the petitioner being able to simply find something that fits their needs. Many initial pieces of code, for instance, have been turned into CPAN modules - what remains in the Monastery may be historically important, but potentially confusing to somebody that finds the resource here first, especially if there's been a namespace change etc. and the original author hasn't updated the node or replaced with a 'Moved to CPAN' message. Soooo.....I'm proposing a shake-up. One section to rule them all, one section to find them. Code Catacombs is already the most 'organised' of the lot, with descriptions, categories etc., so I suggest that *all* code should go in there. Posters of CUFP's and Craft generally stick a small description at the top of the code anyway, so there'd be no extra work involved in posting other than self-categorisation, and any of the editors (that have been freed of the responsibility of maintaining four different code sections) can suggest / amend this. If it were felt that the 'previous' categorisations were important, they could be added to the description as an extra field ("Is this a snippet, CUFP or craft?"), but I think those categories would soon become redundant. They could maybe be added as Code sub-categories for now as somewhere to place the existing sections. Hopefully, with enough volunteers (see below), this 'backlog' could get cleared. I also think that some resource needs then to be devoted to the organisation of the Catacombs. I won't repeat my previous ramblings, but they still apply. (OK - maybe not the thing about rep., but there should be *some* 'quality control' somewhere - even if only as a recommendation). Having a DMOZ approach, with lots of editors classifying and (dare I say) rating (after input from the monastery in the form of rep...{g}) would be very handy indeed. 
If this were done, the Monastery would become a more useful, useable and used place. I know it's a major job, and so I'm hereby volunteering myself (to whom, btw? editors? pmdev? gods? I get very confused...) to help with this particular Augean stable, and would urge others to do the same. Maybe if enough volunteered, we could have the job done by Christmas - a nice present to the Perl community. I think my $0.02 have been well-and-truly spent now :)

Cheers,
Ben.

<Readmore> and slight edit per author - dvergin 2003-06-16.
http://www.perlmonks.org/index.pl/jacques?node_id=266180
Issue Links: is related to LUCENE-533 SpanQuery scoring: SpanWeight lacks a recursive traversal of the query tree - Closed

thinking about this one, for this to really work correctly with the current setup (e.g. with SpanOrQuery), this length might have to be in the Spans class... but with LUCENE-2878 we nuke this class, so we can keep the issue open to think about how the slop should be computed for these queries, i think just using the end - start is not the best.

A related problem is that Spans does not have a weight (or whatever factor) of its own. Currently Spans can only be scored at the top level (by SpanScorer) and not when they are nested. In the nested case the only way to affect the score value is via the length. The getLength() method may not be straightforward. Does the getLength() method in SpanQuery also work in the nested case when there is a spanOr over two spanQueries of different length? It may be necessary to add this length to Spans because of this. Some reasons for a negative match length:
- multiple terms indexed at the same position,
- span distance queries with the same subqueries.
I wish I had a good solution for this, but I did not find one yet. Paul

I agree, I think the only way it would work is to be in Spans itself, which is the real 'Scorer' for spanqueries. Because it's wrong for SpanOrQuery to have a getLength() really... just like it would be wrong for BooleanQuery to know anything about phrase slops of its subqueries! We can just leave this issue open and see what happens with LUCENE-2878, and maybe a good solution will then be more obvious.

I subclassed DefaultSimilarity to work around this. Seemed simple enough.
public class LUCENE2880_SloppyFreqDistanceAdjuster {
    private static Logger logger = Logger.getLogger(LUCENE2880_SloppyFreqDistanceAdjuster.class);

    public int distance(int distance) {
        if (distance < 2) {
            logger.warn("distance is < 2, has LUCENE-2880 been resolved?");
            return 0;
        }
        return distance - 2;
    }
}

public class LUCENE2880_DefaultSimilarity extends DefaultSimilarity {
    private static final long serialVersionUID = 1L;
    private static final LUCENE2880_SloppyFreqDistanceAdjuster ADJUSTER =
        new LUCENE2880_SloppyFreqDistanceAdjuster();

    @Override
    public float sloppyFreq(int distance) {
        return super.sloppyFreq(ADJUSTER.distance(distance));
    }
}

Bulk move 4.4 issues to 4.5 and 5.0

Move issue to Lucene 4.9.

Here is an updated patch that moves the method to the Spans class as suggested. SpanTermQuery now scores like TermQuery, and an ordered SpanNearQuery scores like a PhraseQuery where all terms are at consecutive positions (the common case).

+1 Wow this is simpler than I thought it would be, based on the title & description anyway.

+1 Maybe width rather than distance as the method name?

OK for width, I'll commit with distance renamed as width if there are no objections.

Commit 1686301 from Adrien Grand in branch 'dev/trunk': LUCENE-2880: Make span queries score more consistently with regular queries.

Commit 1686308 from Adrien Grand in branch 'dev/branches/branch_5x': LUCENE-2880: Make span queries score more consistently with regular queries.

Commit 1686337 from Adrien Grand in branch 'dev/trunk': LUCENE-2880: Relax assertion: span near and phrase queries don't have the same scores if they wrap twice the same term.

Commit 1686339 from Adrien Grand in branch 'dev/branches/branch_5x': LUCENE-2880: Relax assertion: span near and phrase queries don't have the same scores if they wrap twice the same term.

Bulk close for 5.3.0 release

Here's a quickly hacked up patch (core tests pass, but I didn't go fixing contrib, etc. yet). It's just to get ideas.
The approach I took was for SpanQuery to have a new method: This is called once by the Weight, and passed to SpanScorer. Then SpanScorer computes the slop as: instead of:
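For intuition, the effect of the DefaultSimilarity workaround earlier in this thread can be checked numerically. This sketch is mine, and it makes two assumptions not spelled out in the thread: that sloppyFreq follows DefaultSimilarity's usual 1 / (distance + 1) formula, and that a span match for an exact two-term phrase reports end - start = 2 where a PhraseQuery would report slop 0:

```python
def sloppy_freq(distance):
    # DefaultSimilarity-style sloppyFreq (assumed formula)
    return 1.0 / (distance + 1)

def adjusted_distance(distance):
    # the LUCENE2880_SloppyFreqDistanceAdjuster logic: shift by 2, clamp at 0
    return max(distance - 2, 0)

# exact two-term match: span width 2, phrase slop 0
print(sloppy_freq(2))                      # penalised by the raw span distance
print(sloppy_freq(adjusted_distance(2)))   # scores like an exact phrase: 1.0
```

Under these assumptions, the adjuster makes an exact span match contribute the same sloppy frequency as an exact PhraseQuery match, which is the inconsistency the issue is about.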
https://issues.apache.org/jira/browse/LUCENE-2880
Hi,

If I run the code below, my background does not turn red (nothing turns red). I am probably misusing the 'body' tag, but I do not know how to use it differently. In an example I have seen that the CSS 'body' tag is used.

The python code can be found here: dash-sample-apps/app.py at main · plotly/dash-sample-apps · GitHub
The CSS code here: dash-sample-apps/base.css at main · plotly/dash-sample-apps · GitHub

Nowhere in the Python code is the 'body' tag referred to. Can someone explain to me why my CSS 'body' styling is not applied, while it is applied in the example? Also, I would like to know what I should do to make sure my app uses the styling defined in the CSS 'body' tag (in the case of the example below, the background should turn red).

import dash
import dash_html_components as html
import dash_core_components as dcc

app = dash.Dash()

body = {
    'background-color': 'red'
}

app.layout = html.Div([
    html.Div('Hello World', id='div-search-input')
])

if __name__ == '__main__':
    app.run_server(debug=True, port=8888)
https://community.plotly.com/t/how-to-make-sure-the-css-body-tag-is-used/27427
Blogging on App Engine, part 3: Dependencies Posted by Nick Johnson | Filed under coding, app-engine, tech, bloggart As you can see, get_resource_list() simply returns the path of the post - this is the only resource our PostContentGenerator knows about. get_etag() generates an etag for the post by running the SHA1 algorithm over the encoded contents of the post, as described in efficient model memcaching, thus ensuring that any change at all to the BlogPost entity results in regenerating the page. generate_resource() is almost identical to the render() method we just deleted; the only significant difference is that instead of returning the generated page to the caller, we instead update it in the static serving system ourselves. Finally, we add the new class to the generator_list, to ensure it gets processed. This is part of a series of articles on writing a blogging system on App Engine. An overview of what we're building is here. First, a couple of things of note. Between the last post and this one, I've snuck around behind your back and made a couple of minor changes. Don't worry, none of them are major. The most noticeable of these is that I've implemented a CSS design from the excellent site styleshout; our blog will now look halfway presentable. I've also refactored the existing admin code into a number of smaller modules; if you're browsing the source, you'll notice the code is now split between 'handlers.py' (the webapp.RequestHandlers), 'models.py' (the datastore models), and 'utils.py' (the utility functions such as those to generate content from templates). I'm also pleased to announce that a couple of dedicated coders are following along with the series by writing their own ports of Bloggart. Sylvain is writing 'bloggartornado', a port of Bloggart to the Tornado framework, the source to which is here; a demo can be seen at. Rodrigo Moraes is writing 'bloggartzeug', a port of Bloggart to the werkzeug framework, the source is here. 
This post is going to be a long one. Are you sitting comfortably? Then let's begin.

The biggest challenge when writing a statically-generated blogging system such as ours is figuring out what pages need regenerating, and when. For that, we're going to use a dependency system. Our system will consist of a series of 'ContentGenerator' classes, each of which is responsible for (re)generating some specific part of the blog - such as the posts themselves, the index pages, the RSS feed, and so forth. We'll start by defining an interface for these classes in a new file, 'generators.py':

generator_list = []

class ContentGenerator(object):
  """A class that generates content and dependency lists for blog posts."""

  @classmethod
  def name(cls):
    return cls.__name__

  @classmethod
  def get_resource_list(cls, post):
    raise NotImplementedError()

  @classmethod
  def get_etag(cls, post):
    raise NotImplementedError()

  @classmethod
  def generate_resource(cls, post, resource):
    raise NotImplementedError()

Together, these methods define the interface all our ContentGenerator subclasses will have. Note that they're all class methods - we don't need to create instances of ContentGenerator anywhere, because it has no state to speak of. Let's look at them in order:

- name() is straightforward - it returns a unique name for the ContentGenerator. By default, this is the name of the class.
- get_resource_list() takes a BlogPost and is expected to return a list of strings representing resources this post will appear in. For example, if we were implementing tags (we're not, but we will sooner or later), this would return a list of tags in the post: ["foo", "bar", "baz"]
- get_etag() takes a BlogPost and returns a short string that uniquely identifies the state of the content this generator cares about.
For example, a ContentGenerator for the blog's index page should return an ETag that only depends on the title and summary of the post, while the ContentGenerator for the post itself should return one that depends on the entire post. This lets us figure out if we need to regenerate all the existing resources for a post when it changes.
- generate_resource() takes a BlogPost object and a resource as returned by get_resource_list(); it's expected to generate that resource for that post and update it in the static serving system.

Now we need to make use of this interface to regenerate only the changed content. First, add a new property to our BlogPost model:

class BlogPost(db.Model):
  # ...
  deps = aetycoon.PickleProperty()

Once again we're making use of the extra property classes in aetycoon - here, we're using a PickleProperty to store the dependencies we've previously observed on this BlogPost. Now, replace the publish() method of the BlogPost with this:

def publish(self):
  if not self.path:
    num = 0
    content = None
    while not content:
      path = utils.format_post_path(self, num)
      content = static.add(path, '', config.html_mime_type)
      num += 1
    self.path = path
  if not self.deps:
    self.deps = {}
  self.put()
  for generator_class in generators.generator_list:
    new_deps = set(generator_class.get_resource_list(self))
    new_etag = generator_class.get_etag(self)
    old_deps, old_etag = self.deps.get(generator_class.name(), (set(), None))
    if new_etag != old_etag:
      # If the etag has changed, regenerate everything
      to_regenerate = new_deps | old_deps
    else:
      # Otherwise just regenerate the changes
      to_regenerate = new_deps ^ old_deps
    for dep in to_regenerate:
      generator_class.generate_resource(self, dep)
    self.deps[generator_class.name()] = (new_deps, new_etag)
  self.put()

Starting at the top, we still have the code to find a path for posts that don't yet have one, but now instead of generating and publishing the content, we simply insert a blank page to hold the URL for us.
Next, we check if self.deps is set; if it's not, we set it to an empty dictionary. You may also notice we're calling self.put() twice. This could be optimized down to a single put() call, but it would complicate the code, so for the purpose of demonstration we'll leave it as-is for now.

The next section of code is concerned with finding and regenerating changed dependencies. Iterating over each generator in a list that will be provided by our generators module, it does the following:

- Fetch the current list of resources and etag from the current ContentGenerator
- Fetch the stored list of resources and etag from self.deps
- If the etag has changed, we need to regenerate all resources - so we set to_regenerate to the union of the old and new resources.
- If the etag has not changed, we only need to regenerate added or removed resources - so we set to_regenerate to the symmetric difference of the old and new resources.
- For each resource that needs regenerating, we call generate_resource().
- Finally, we update the BlogPost's list of deps with the new set of resources and etag.

Now that we've seen how the dependency system works in theory, let's see it in action by converting the old rendering code to use the new system.
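The union / symmetric-difference decision above is worth seeing with concrete values. Here's a standalone sketch of it (the resource names are made up for illustration):

```python
old_deps = {"index", "tag/python"}   # resources recorded last time the post was published
new_deps = {"index", "tag/gae"}      # resources the post maps to now

# Etag changed: every resource either set mentions must be rebuilt,
# including "tag/python", which the post no longer belongs to.
regenerate_all = new_deps | old_deps
print(sorted(regenerate_all))   # ['index', 'tag/gae', 'tag/python']

# Etag unchanged: only resources added or removed need attention -
# 'index' appears in both sets and its rendered content hasn't changed.
regenerate_diff = new_deps ^ old_deps
print(sorted(regenerate_diff))  # ['tag/gae', 'tag/python']
```

Either way, resources the post has dropped still get regenerated, which is what removes stale entries from listing pages.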
Remove the render() method from the BlogPost class, and add the following to the end of generators.py:

class PostContentGenerator(ContentGenerator):
  @classmethod
  def get_resource_list(cls, post):
    return [post.path]

  @classmethod
  def get_etag(cls, post):
    return hashlib.sha1(db.model_to_protobuf(post).Encode()).hexdigest()

  @classmethod
  def generate_resource(cls, post, resource):
    assert resource == post.path
    template_vals = {
      'post': post,
    }
    rendered = utils.render_template("post.html", template_vals)
    static.set(post.path, rendered, config.html_mime_type)

generator_list.append(PostContentGenerator)

If you try publishing or updating a blog post now, the system ought to behave exactly as it did before - but we've done too much work to stop when we're merely back where we started. Let's define a simple ContentGenerator to generate and update the index page of the blog, so we can finally have a homepage:

class IndexContentGenerator(ContentGenerator):
  """ContentGenerator for the homepage of the blog and archive pages."""

  @classmethod
  def get_resource_list(cls, post):
    return ["index"]

  @classmethod
  def get_etag(cls, post):
    return hashlib.sha1(post.title + post.summary).hexdigest()

  @classmethod
  def generate_resource(cls, post, resource):
    assert resource == "index"
    import models
    q = models.BlogPost.all().order('-published')
    posts = q.fetch(config.posts_per_page)
    template_vals = {
      'posts': posts,
    }
    rendered = utils.render_template("listing.html", template_vals)
    static.set('/', rendered, config.html_mime_type)

generator_list.append(IndexContentGenerator)

This one is slightly - but only slightly - more complicated than the PostContentGenerator. get_resource_list() always returns the static string 'index', while get_etag() generates an etag that depends only on the title and the summary. Summary is a new property we've added to the BlogPost class; we haven't included it here for brevity, but you can see it in the source - it's very straightforward.
generate_resource() fetches a list of the most recent blog posts - the number of which is determined by a configuration option - and renders a template "listing.html" with them, storing the results to the root URL. Note the use of an import statement inside the method, here:

import models

This is a nasty trick we need to pull because of the way Python handles imports. Because generators.py is imported by models.py, attempting to import models at the top level of generators.py would result in a recursive import, which is not permitted in Python. To work around this, we only import the models module inside methods that need it. Note that we don't even attempt to deal with posts that have scrolled off the bottom of the front page; that and other issues will be the subject of a future blog post.

Finally, we need to define a template for our index page. Create 'listing.html' in the themes/default directory, and enter the following:

{% extends "base.html" %}

{% block title %}{{config.blog_name}}{% endblock %}

{% block body %}
{% for post in posts %}
<h2><a href="{{post.path}}">{{post.title}}</a></h2>
{{post.summary|linebreaks}}
<p class="postmeta">
  <a href="{{post.path}}" class="readmore">Read more</a> |
  <span class="date">{{post.published|date:"d F, Y"}}</span>
</p>
{% endfor %}
{% endblock %}

This is quite straightforward: after extending our base template, we iterate over each post in the list, outputting an h2 with the title, the post's summary, and a little bit of metadata about it.

Try authoring a new post or editing an existing post. Not only should the existing behaviour continue to work as it always has, but you should now see a fancy listing of recent blog posts on the index page (/) of your blog. This is starting to look like a real blogging system! As always, you can see the blog-so-far at, and view the source of this stage here. In the next post, we'll enhance our listing pages, and add Atom and RSS support.
http://blog.notdot.net/2009/10/Blogging-on-App-Engine-part-3-Dependencies
Boost provides a macro, BOOST_FOREACH, that allows us to easily iterate over elements in a container, similar to what we might do in R with sapply. In particular, it frees us from having to deal with iterators as we do with std::for_each and std::transform. The macro is also compatible with the objects exposed by Rcpp.

Side note: C++11 has introduced a similar for-each looping construct of the form for (T &elem : X) { /*do stuff*/ }. However, CRAN does not (at the time of this posting) allow C++11 in uploads, and hence this Boost solution might be preferred if you want to use a for-each construct in a package.

The BOOST_FOREACH macro is exposed when we use #include <boost/foreach.hpp>. Make sure the Boost libraries are in your include path so that they can be found and included easily. Because it's a header-only library, we don't have to worry about external dependencies or linking. We'll use a simple example where we square each element in a vector.

#include <Rcpp.h>
#include <boost/foreach.hpp>
using namespace Rcpp;

// the C-style upper-case macro name is a bit ugly; let's change it
// note: this could cause compiler errors if it conflicts with other includes
#define foreach BOOST_FOREACH

// [[Rcpp::export]]
NumericVector square( NumericVector x ) {
  // elem is a reference to each element in x
  // we can re-assign to these elements as well
  foreach( double& elem, x ) {
    elem = elem*elem;
  }
  return x;
}

square( 1:10 )
 [1]   1   4   9  16  25  36  49  64  81 100

square( matrix(1:16, nrow=4) )
     [,1] [,2] [,3] [,4]
[1,]    1   25   81  169
[2,]    4   36  100  196
[3,]    9   49  121  225
[4,]   16   64  144  256

## we check that the function handles various 'special' values
x <- c(1, 2, NA, 4, NaN, Inf, -Inf)
square(x)
[1]   1   4  NA  16 NaN Inf Inf

And a quick benchmark:

x <- rnorm(1E5)
library(microbenchmark)
microbenchmark( square(x), x^2 )
Unit: microseconds
       expr    min     lq median     uq  max
1 square(x)  71.04  71.64  71.89  74.14 1518
2       x^2 346.36 350.70 359.18 433.33 1842

all.equal( square(x), x^2 )
[1] TRUE

If you are defining your own classes / containers and want them to be compatible with one of these for-each constructs, you will need to define some methods for iteration across these objects. See this post on SO for more details. For more information on BOOST_FOREACH, check the documentation...
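That last point - that your own containers need iteration methods before a for-each construct will accept them - has a direct parallel in Python, sketched here as a cross-language comparison (my example, not from the Boost documentation): a class becomes usable in a for loop once it defines __iter__.

```python
class Squares:
    """A tiny custom 'container' that yields the squares 1..n."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # writing __iter__ as a generator is enough to make the
        # class work with for loops, list(), sum(), and so on
        for i in range(1, self.n + 1):
            yield i * i

for value in Squares(4):
    print(value)          # 1, 4, 9, 16 on separate lines
print(list(Squares(3)))   # [1, 4, 9]
```

The analogy is loose - BOOST_FOREACH wants begin()/end() iterator access rather than a single generator method - but the design pressure is the same: the looping construct only knows a protocol, and any type that implements the protocol gets the loop for free.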
http://www.r-bloggers.com/using-boosts-foreach-macro/
Writing XML in .NET Using XmlTextWriter

XML is a hot topic. A primary reason for it being of interest is the fact that it is simple to understand and simple to use. Any programmer should be able to easily look at an XML file and understand its contents. .NET contains a number of classes that support XML. Many of these classes make working with XML as easy as understanding XML. I'm going to show you an example of one such class here. This is the XmlTextWriter class. The XmlTextWriter class allows you to write XML to a file. This class contains a number of methods and properties that will do a lot of the work for you. To use this class, you create a new XmlTextWriter object. You then add the pieces of XML to the object. There are methods for adding each type of element within the XML file, including WriteStartDocument and WriteEndDocument, WriteStartElement and WriteEndElement, WriteElementString, WriteAttributeString, WriteRaw, and Close.

If you are familiar with XML, then these methods should make sense. You will create a document, add elements, and then close the document. Within elements you can add sub-elements, attributes, and more. The following listing creates a new XML file called titles.xml.

using System;
using System.IO;
using System.Xml;

public class Sample
{
  public static void Main()
  {
    XmlTextWriter writer = new XmlTextWriter("titles.xml", null);

    //Write the root element
    writer.WriteStartElement("items");

    //Write sub-elements
    writer.WriteElementString("title", "Unreal Tournament 2003");
    writer.WriteElementString("title", "C&C: Renegade");
    writer.WriteElementString("title", "Dr. Seuss's ABC");

    // end the root element
    writer.WriteEndElement();

    //Write the XML to file and close the writer
    writer.Close();
  }
}

If you compile and execute this listing, you will create an XML file called titles.xml. This XML file will contain the following, on a single line because no formatting options were set (note that the writer automatically escapes the ampersand in C&C):

<items><title>Unreal Tournament 2003</title><title>C&amp;C: Renegade</title><title>Dr. Seuss's ABC</title></items>

The listing created an XmlTextWriter object called writer.
When it created this object, it associated it to a file called titles.xml. The program then started a root element called items. The call to WriteStartElement created the opening tag for items. This was then followed by three calls to WriteElementString. As you can see, this method creates an element tag using the first parameter (title in this case). The value of the element is the second parameter. Once you are done adding the elements, you need to close the root element. Calling WriteEndElement will close the element that was most recently opened. In this case, that is the root element. With all the data written and the root element closed, you are done sending information to your XmlTextWriter. This means you can close it by calling the Close method. This listing is relatively simple. The following listing includes a lot more functionality by using more of the XmlTextWriter methods. using System;using System.IO;using System.Xml;public class Sample{ public static void Main() { XmlTextWriter writer = new XmlTextWriter("myMedia.xml", null); //Use automatic indentation for readability. 
writer.Formatting = Formatting.Indented; //Write the root element writer.WriteStartElement("items"); //Start an element writer.WriteStartElement("item"); //Add an attribute to the previously created element writer.WriteAttributeString("rating", "R"); //add sub-elements writer.WriteElementString("title", "The Matrix"); writer.WriteElementString("format", "DVD"); //End the item element writer.WriteEndElement(); // end item //Write some white space between nodes writer.WriteWhitespace("\n"); //Write a second element using raw string data writer.WriteRaw("<item>" + "<title>BloodWake</title>" + "<format>XBox</format>" + "</item>"); //Write a third element with formatting in the string writer.WriteRaw("\n <item>\n" + " <title>Unreal Tournament 2003</title>\n" + " <format>CD</format>\n" + " </item>\n"); // end the root element writer.WriteFullEndElement(); //Write the XML to file and close the writer writer.Close(); }} The output for this listing is stored in a file called myMedia.xml: <items> <item rating="R"> <title>The Matrix</title> <format>DVD</format> </item><item><title>BloodWake</title><format>XBox</format></item> <item> <title>Unreal Tournament 2003</title> <format>CD</format> </item></items> The comments within the listing tell you what is happening. One thing to remember is that methods that start something, need to be followed at some point by methods that end what was started. For example, if you call StartElement, you will need to call EndElement. You can start a sub-element within another element. Whenever you call an EndElement method, it will always associate with the last StartElement method that was called. (This works like a stack, not like a queue). Working with the XmlTextWriter is easy. I suggest you play around with the code and the various methods. You'll quickly find that this code is easy to integrate into your applications. You should also remember that the XmlTextWriter is only one of many XML classes available in .NET. 
Like the XmlTextWriter, many of the other classes are also easy to use.
http://www.developer.com/net/net/article.php/1482531/Writing-XML-in-NET-Using-XmlTextWriter.htm
FLOCKFILE(3P) POSIX Programmer's Manual FLOCKFILE(3P)

#include <stdio.h>

void flockfile(FILE *file);
int ftrylockfile(FILE *file);
void funlockfile(FILE *file);

These functions shall provide for explicit application-level locking of stdio (FILE *) objects. These functions can be used by a thread to delineate a sequence of I/O statements that are executed as a unit.

The flockfile() function shall acquire for a thread ownership of a (FILE *) object.

The ftrylockfile() function shall acquire for a thread ownership of a (FILE *) object if the object is available; ftrylockfile() is a non-blocking version of flockfile().

The funlockfile() function shall relinquish the ownership granted to the thread. The behavior is undefined if a thread other than the current owner calls the funlockfile() function.

The functions shall behave as if there is a lock count associated with each (FILE *) object. This count is implicitly initialized to zero when the (FILE *) object is created. The (FILE *) object is unlocked when the count is zero. When the count is positive, a single thread owns the (FILE *) object. When the flockfile() function is called, if the count is zero or if the count is positive and the caller owns the (FILE *) object, the count shall be incremented. Otherwise, the calling thread shall be suspended, waiting for the count to return to zero. Each call to funlockfile() shall decrement the count. This allows matching calls to flockfile() (or successful calls to ftrylockfile()) and funlockfile() to be nested.

All functions that reference (FILE *) objects, except those with names ending in _unlocked, shall behave as if they use flockfile() and funlockfile() internally to obtain ownership of these (FILE *) objects.

None for flockfile() and funlockfile(). The ftrylockfile() function shall return zero for success and non-zero to indicate that the lock cannot be acquired.

No errors are defined.

The following sections are informative.

None.

Applications using these functions may be subject to priority inversion, as discussed in the Base Definitions volume of POSIX.1‐2008, Section 3.287, Priority Inversion.

The flockfile() and funlockfile() functions provide an orthogonal mutual-exclusion lock for each FILE. The ftrylockfile() function provides a non-blocking attempt to acquire a file lock, analogous to pthread_mutex_trylock().
These locks behave as if they are the same as those used internally by stdio for thread-safety. This both provides thread-safety of these functions without requiring a second level of internal locking and allows functions in stdio to be implemented in terms of other stdio functions. Application developers and implementors should be aware that there are potential deadlock problems on FILE objects. For example, the line-buffered flushing semantics of stdio (requested via {_IOLBF}) require that certain input operations sometimes cause the buffered contents of implementation-defined line-buffered output streams to be flushed. If two threads each hold the lock on the other's FILE, deadlock ensues. This type of deadlock can be avoided by acquiring FILE locks in a consistent order. In particular, the line-buffered output stream deadlock can typically be avoided by acquiring locks on input streams before locks on output streams if a thread would be acquiring both. In summary, threads sharing stdio streams with other threads can use flockfile() and funlockfile() to cause sequences of I/O performed by a single thread to be kept bundled. The only case where the use of flockfile() and funlockfile() is required is to provide a scope protecting uses of the *_unlocked functions/macros. This moves the cost/performance tradeoff to the optimal point. None. getc_unlocked(3p) The Base Definitions volume of POSIX.1‐2008, Section 3.287, Priority Inversion, FLOCKFILE(3P) Pages that refer to this page: stdio.h(0p), ftrylockfile(3p), funlockfile(3p), getc_unlocked(3p)
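To illustrate the "sequence of I/O executed as a unit" usage the page describes, here is a small sketch (my own, not part of the specification); it assumes a POSIX system where flockfile() is declared in <stdio.h>:

```cpp
#include <stdio.h>

// Write a two-field record as one unit: while we hold the stream lock,
// no other thread's output can be interleaved between the two fprintf calls.
int write_record(FILE *out, const char *key, const char *value)
{
    flockfile(out);                       // block until we own the stream
    int n = fprintf(out, "%s=", key);     // field one
    n += fprintf(out, "%s\n", value);     // field two, still atomic with field one
    funlockfile(out);                     // relinquish ownership
    return n;                             // total characters written
}
```

Between the flockfile() and funlockfile() calls, other stdio functions on the same stream from other threads block, exactly as if they had called flockfile() themselves.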
http://man7.org/linux/man-pages/man3/flockfile.3p.html
Bugtraq mailing list archives Sorry, I sent the wrong source file. Hopefully aleph1 can catch this in time to only allow this one through. Here is the correct one & sorry for the mix up. Greetings, Perhaps it is time to revisit the content filters on our mail servers before the inevitable exploit is released and until proper resolution can be made. By using sendmail's libmilter, it is possible to reject messages with .dll attachments (see below). I am sure that there are other methods as well (e.g. procmail, etc.). Most places don't have the need to email dll's on a regular basis, and if they legitimately did they should be able to zip them first. Cheers, - Bennett At 02:35 09/20/2000 , Lincoln Yeoh manipulated the electrons to say: ...snip... This is what makes it more dangerous. Being subscribed to Bugtraq is getting rather more hazardous, I sure hope Mr Simard's dll is harmless :). Fortunately my Bugtraq attachment directory is different from my office attachment directory. But in the future we could see something like "binary chemical weapons" where non or sublethal payloads combine to create a lethal payload. This can make detection harder, as the various payloads could come from different sources. And the trigger could be from an innocent party. We probably can't use the "binary" term in this field as it would be confusing and redundant. "Beware of binary dlls" yeah right ;). I am sure there are other cases where things are dumped into the same directory. The windows temp directory comes to mind. Maybe one could be tricked into storing the dll in suitable areas- by setting the MIME content type at the webserver, you should in theory be able to tell the browser it's an image, audio, or even word document. But once it's downloaded it will be treated as a dll due to the extension. Cheerio, Link. 
=== Makefile ===
# Generic Makefile for libmilter filters
CC = gcc -Wall
# point this at your sendmail source tree
SENDMAIL_SOURCE = /usr/local/src/sendmail-8.10.1
IFLAGS = -I$(SENDMAIL_SOURCE)/sendmail -I$(SENDMAIL_SOURCE)/include
FLAGS = -pthread
LIBS = -lmilter -lsmutil
TARGETS = noattach

all: $(TARGETS)

noattach:
	$(CC) $(IFLAGS) -o noattach noattach.c $(LIBS) $(FLAGS)

clean:
	rm -f $(TARGETS)
=== cut ===

=== noattach.c ===
/*
 * noattach.c - libmilter filter to reject incoming messages with
 * specific attachments.
 *
 * Currently rejects VBS, SHS, and DLL attachments.
 */

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sysexits.h>

#include "libmilter/mfapi.h"

static int
bad_extension(SMFICTX *ctx, const char *s1, const char *s2, int len)
{
    int n;
    const char *p, *q;
    char x, y;
    char m[1024];

    sprintf(m, "Sorry, I can't accept this message due to its attachment(s).");

    n = 0;
    for (p = s1, q = s2; *p && *q && n < len; p++, q++) {
        x = (isalpha((int)*p)) ? tolower(*p) : *p;
        y = (isalpha((int)*q)) ? tolower(*q) : *q;
        if (x != y)
            n++;
    }

    if (n == len)
        return (0);

    smfi_setreply(ctx, "554", "5.6.1", m);
    return (1);
}

sfsistat
mlfi_body(SMFICTX *ctx, u_char *bodyp, size_t bodylen)
{
    char *p, *q, *r;

    /* check body block for bad attachment names */
    for (p = (char *)bodyp; p && (p = strstr(p, "Content-Type:")); p++) {
        if ((q = strstr(p, "name=\""))) {
            for (r = q + 6; *r != '\n' && *r != '\0' && *r != '"'; r++)
                ;
            if (*r == '"') {
                /* Filter for bad extensions */
                if (bad_extension(ctx, r - 3, "vbs", 3))
                    return SMFIS_REJECT;
                if (bad_extension(ctx, r - 3, "shs", 3))
                    return SMFIS_REJECT;
                if (bad_extension(ctx, r - 3, "dll", 3))
                    return SMFIS_REJECT;
            }
        }
    }

    /* continue processing */
    return SMFIS_CONTINUE;
}

struct smfiDesc smfilter =
{
    "VBFilter",     /* filter name */
    SMFI_VERSION,   /* version code -- do not change */
    0,              /* flags */
    NULL,           /* connection info filter */
    NULL,           /* SMTP HELO command filter */
    NULL,           /* envelope sender filter */
    NULL,           /* envelope recipient filter */
    NULL,           /* header filter */
    NULL,           /* end of header */
    mlfi_body,      /* body block filter */
    NULL,           /* end of message */
    NULL,           /* message aborted */
    NULL            /* connection cleanup */
};

int
main(int argc, char *argv[])
{
    int c;
    const char *args = "p:";

    /* Process command line options */
    while ((c = getopt(argc, argv, args)) != -1) {
        switch (c) {
        case 'p':
            if (optarg == NULL || *optarg == '\0') {
                (void) fprintf(stderr, "Illegal conn: %s\n", optarg);
                exit(EX_USAGE);
            }
            (void) smfi_setconn(optarg);
            break;
        }
    }

    if (smfi_register(smfilter) == MI_FAILURE) {
        fprintf(stderr, "smfi_register failed\n");
        exit(EX_UNAVAILABLE);
    }

    return smfi_main();
}
=== cut ===
http://seclists.org/bugtraq/2000/Sep/386
Lists in Accordion View are buggy

- RafaŁ Buchner

There is a problem in Accordion View: the lists inside the subviews are not hidden by default. It is true even for the example from the documentation: if I run the script with an accordion view, the lists inside the subview are visible, even if the subview that contains the list is closed.

The lists are covering the rest of the accordion view, so I don't have access to the rest of the "shelfs". In order to gain that access I have to open the shelf where the list is stored, close it, and only then am I able to select the option that I'm looking for. It is a pain.

Here is the screenshot: the arrow next to the subview's label "list panel" indicates that the panel is closed. And still, you can see the list inside of it.

from mojo.UI import AccordionView
from vanilla import *

class AccordionViewExample:

    def __init__(self):
        self.w = FloatingWindow((200, 400))

        self.firstPanel = TextEditor((10, 10, -10, -10))
        self.secondPanel = List((0, 0, -0, -0), ["a", "b", "c"])
        self.thirdPanel = Tabs((10, 10, -10, -10), ["1", "2", "3"])
        self.fourthPanel = Group((0, 0, -0, -0))
        self.fourthPanel.checkBox = CheckBox((10, 10, 100, 22), "CheckBox")
        self.fourthPanel.editText = EditText((10, 40, -10, 22))

        descriptions = [
            dict(label="first panel", view=self.firstPanel, size=200, collapsed=False, canResize=False),
            dict(label="list panel", view=self.secondPanel, minSize=100, size=140, collapsed=True, canResize=True),
            dict(label="third panel", view=self.thirdPanel, minSize=100, size=140, collapsed=True, canResize=False),
            dict(label="fourth panel", view=self.fourthPanel, size=140, collapsed=False, canResize=False),
        ]

        self.w.accordionView = AccordionView((0, 0, -0, -0), descriptions)
        self.w.open()

AccordionViewExample()

hi @RafaŁ-Buchner this bug has already been fixed.
it should work fine if you use the final RF 3.2 release…

- RafaŁ Buchner

Hi @gferreira,
Sorry for being a pain, but it doesn't work for me, and for some other RF users that I'm working with. I was quite sure that I had updated to the last version of RF. Just to make extra sure, I downloaded build 1901211134 again. Still nada.

that's strange… it works fine for me on macOS 10.13.6. which version of macOS are you using? thanks

- RafaŁ Buchner

macOS 10.14.2 (18C54)

This should be fixed in the next beta...
https://forum.robofont.com/topic/577/lists-in-accordion-view-are-buggy
Lewis John McGibbney created INFRA-9079:
-------------------------------------------

Summary: Have any23.org and any23.com resolve to any23-vm.apache.org
Key: INFRA-9079
URL:
Project: Infrastructure
Issue Type: Wish
Components: DNS
Reporter: Lewis John McGibbney

Hi @Infra team,
A while back you guys did a sterling job helping the Any23 project migrate the any23.org and any23.com DNS namespaces as a donation to the foundation. We would like both of the namespaces to resolve to our VM webservice, which resides at any23-vm.apache.org, as opposed to any23.apache.org.
Thanks in advance Infra for any help on this one.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
http://mail-archives.apache.org/mod_mbox/www-infrastructure-issues/201501.mbox/%3CJIRA.12770938.1422519285000.202979.1422519334454@Atlassian.JIRA%3E
timer_create - Allocates a per-process timer

#include <signal.h>
#include <time.h>

int timer_create (
    clockid_t clock_id,
    struct sigevent *evp,
    timer_t *timerid );

Realtime Library (librt.so, librt.a)

clock_id
    The type of clock on which the timer is based. The CLOCK_REALTIME clock is supported.
evp
    A pointer to a sigevent structure, which defines the signal sent to the process on timer expiration.
timerid
    A pointer to the timer ID returned by the call to the timer_create function.

The timer_create function allocates a per-process timer using the specified clock as the timing base. The timer_create function returns timer_id, which identifies the created timer. The evp argument, if non-NULL, points to a sigevent structure that defines the asynchronous notification which occurs when the timer expires. If the sigev_notify member of evp is SIGEV_SIGNAL, the structure must contain the signal number and data value to send to the process when the timer expires. The timer_delete function disarms and deletes a timer.

Upon successful completion, a value of 0 (zero) is returned. The timer_create function also returns, in timerid, a pointer to the timer ID that has been created. An unsuccessful call returns -1, and errno is set to indicate the error type.

The timer_create function fails under the following conditions:

[EAGAIN]
    The system lacks sufficient signal queuing resources to honor the request. The calling process has already created all of the timers it is allowed.
[EINVAL]
    The specified clock ID is not defined.

Functions: clock_getres(3), clock_gettime(3), clock_settime(3), timer_delete(3), timer_gettime(3), timer_settime(3)

Guide to Realtime Programming

timer_create(3)
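As a usage sketch (my own, not part of the reference page): create a timer that delivers SIGALRM when it expires, then delete it. This assumes a POSIX realtime-timers implementation; on older systems you may need to link with -lrt.

```cpp
#include <csignal>
#include <ctime>
#include <cstring>

// Create a per-process timer on CLOCK_REALTIME that sends SIGALRM
// with an application-defined value when it expires.
int make_alarm_timer(timer_t *out)
{
    struct sigevent ev;
    std::memset(&ev, 0, sizeof(ev));   // zero the unused members
    ev.sigev_notify = SIGEV_SIGNAL;    // deliver a signal on expiry
    ev.sigev_signo = SIGALRM;          // which signal to deliver
    ev.sigev_value.sival_int = 7;      // data passed along with the signal
    return timer_create(CLOCK_REALTIME, &ev, out);
}
```

The timer starts disarmed; timer_settime arms it, and timer_delete disarms and releases it.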
https://nixdoc.net/man-pages/Tru64/man3/timer_create.3.html
The weapons I choose for this battle are Modern C++, automated testing, static analysis from the compiler, dynamic analysis tools, fuzzers and patience.

Some people don't like C++ or code C++ like it was plain old C. Sure, those Linux kernel programmers have a quantum computer in their head that lets them simulate all possible program paths at the same time to see where a lock or memory is not released. But I'm just a human and can barely keep track of the socks in my drawer, so I have to find another way, one that is more fool-proof and doesn't require so many socks.

So here are my top n tools and techniques to make my C++ less buggy, crash less often and be more reliable. It's totally not a complete list but rather an introduction, a base level of tooling that everybody should know about but somehow not always does.

1 Deleting objects

C++ doesn't have a garbage collector, so we have to clean up our garbage manually. If we don't, the program will eventually drown in garbage and run out of memory (but not before trashing the hard drive by swapping).

The old and deprecated option is to do a delete obj; manually. This is completely unreliable (because it's manual). You may forget to do this when having multiple exits from a method. Or when a method throws an exception. Or when another method you call throws an exception. Not to mention returning objects that the caller then has to free.

The modern approach is to use RAII, which is a weird name for a simple concept: use the destructor to do all cleanup at the end of scope. If we have a scope with an object on the stack like this:

{
    VictorTheCleaner v;
    ...
}

then the destructor of v will be called when the scope is exited and it can take care of any deleting, releasing and cleaning that's necessary.
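A concrete, runnable sketch of such a cleaner (VictorTheCleaner is the post's own placeholder name; the counter is my addition so the cleanup is observable):

```cpp
#include <cassert>

// A cleaner that increments a counter from its destructor, so we can
// observe that cleanup runs on scope exit - including exit via exception.
class VictorTheCleaner {
public:
    explicit VictorTheCleaner(int &cleanups) : m_cleanups(cleanups) {}
    ~VictorTheCleaner() { ++m_cleanups; }   // the RAII cleanup action
private:
    int &m_cleanups;
};

int demo()
{
    int cleanups = 0;
    {
        VictorTheCleaner v(cleanups);   // cleanup registered
    }                                    // destructor runs here
    try {
        VictorTheCleaner v(cleanups);
        throw 1;                         // scope also exits via exception
    } catch (...) {
    }
    return cleanups;                     // both exits ran the destructor
}
```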
A scope can be exited in several ways:

- normal program flow reaches the } brace
- a statement such as return, break or continue causes program flow to jump out
- an exception is thrown

This kind of guaranteed cleanup is similar to the finally clause in languages such as C# and Python. C++ doesn't have a dedicated keyword; instead we use the destructor.

1.2 unique_ptr

The standard library template class std::unique_ptr<T> is designed to take care of the most common case - when you need to automatically delete an object. Use std::unique_ptr<T>. Use it for function local variables, use it as object members (to be safe if your constructor throws). Use it especially for C land objects that are created with functions like EC_POINT_new() and must be deallocated with EC_POINT_free(). This is how you can set a user defined function as the deleter:

template<typename T, void (*Fn)(T*)>
class function_deleter
{
public:
    void operator()(T *p)
    {
        if (p != NULL)
            Fn(p);
    }
};

template<typename T, void (*Fn)(T*)>
class unique_ptr_ex
{
public:
    typedef std::unique_ptr<T, function_deleter<T, Fn>> type;

    // do not instantiate this class, use unique_ptr_ex<T, Fn>::type
    unique_ptr_ex() = delete;
};

The first class defines a functor (something that has operator()). This is necessary because the second template parameter of unique_ptr is a type. The purpose of the second class is to act as a typedef with a parameter. It's a workaround because not all compilers I'm using support the new template typedef in C++11.

These two classes simplify creating unique_ptr with different cleanup functions:

unique_ptr_ex<BIGNUM, BN_free>::type m_privkey;
unique_ptr_ex<EC_POINT, EC_POINT_free>::type m_ec;

Then just initialize m_privkey with a new BIGNUM that belongs to you and deallocation using the openssl-provided function BN_free is taken care of automagically. Same for m_ec! For details how to use unique_ptr, see the reference. I'll try to give a few basic guides here.
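Before the usage guidelines, here is the deleter pattern in action without OpenSSL, using a stand-in C-style API (widget_new/widget_free are made-up names for illustration):

```cpp
#include <memory>
#include <cstdlib>

// A pretend C-land API, standing in for EC_POINT_new / EC_POINT_free.
struct widget { int id; };
static int g_frees = 0;
widget *widget_new(int id)
{
    widget *w = (widget *)std::malloc(sizeof(widget));
    w->id = id;
    return w;
}
void widget_free(widget *w) { std::free(w); ++g_frees; }

// The same functor-based deleter as in the post.
template<typename T, void (*Fn)(T*)>
struct function_deleter {
    void operator()(T *p) { if (p != NULL) Fn(p); }
};

int demo_deleter()
{
    {
        // widget_free is baked into the pointer's type - no manual call needed
        std::unique_ptr<widget, function_deleter<widget, widget_free>> w(widget_new(1));
    }   // widget_free runs here, at scope exit
    return g_frees;
}
```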
- Use it for local variables that you created in the function and need to clean in the same place.
- Use it for member variables of a class that "owns" these variables. Owning them means that the class is responsible for cleaning them, which typically happens when the class itself is deleted.
- You can also use them as return types for functions that create an object and pass its ownership (responsibility for deleting) to the calling function. No more uncertainty about who should do the cleaning: unique_ptr tells you that you are the responsible owner, and will do it automatically for you too.
- Do not use it for object pointers that do not transfer ownership. If I call a function that uses an existing object, I use a naked pointer or a reference (if it can't be null).
- Do not use it for member variables of a class if the class is not owning that object.

What about shared ownership? That's where shared_ptr would come in. I try to keep things simple and have only one owner for each object so that I can use unique_ptr.

1.3 Other cleanup

Sometimes you need to do more things besides deleting an object upon scope exit. Maybe you need to close a database connection, restore the original value or roll back an object to a previous state. You could create a new class with the appropriate code in the destructor for each of these cases (for example PostgreCloser, PwdRestorer, ...) but that is a little inconvenient. That's why there is ScopeGuard, a class where you can redefine the cleanup code in-place.

2 Program invariants

A short intermission is necessary to explain the term invariant. The term literally means "something that doesn't change", and in programming it is a condition (a logical statement) that must stay true while the program is running, because the code relies on it and things would break otherwise. There can be invariants on particular lines in the code (at the start of a function, in a loop), or they can be related to a data structure or an OOP class.
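As a concrete illustration of a data structure invariant (using the binary search tree example discussed next), the ordering rule can be turned into a mechanically checked predicate; a sketch of my own:

```cpp
#include <climits>

struct Node {
    int value;
    Node *left;
    Node *right;
};

// Verify the ordering invariant: every node in the left subtree is
// smaller than the current node, every node in the right subtree larger.
// lo/hi narrow the allowed range as we descend.
bool is_search_tree(const Node *n, long lo = LONG_MIN, long hi = LONG_MAX)
{
    if (n == nullptr)
        return true;
    if (n->value <= lo || n->value >= hi)
        return false;   // invariant violated somewhere above this node
    return is_search_tree(n->left, lo, n->value) &&
           is_search_tree(n->right, n->value, hi);
}
```

A debug build can assert(is_search_tree(root)) after every mutation, so a broken invariant is reported immediately instead of surfacing later as a wrong search result.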
Invariants are usually not expressed in the programming language itself; they are just something that we keep in mind and use when thinking about how the code will behave. Or at least should keep in mind. Ideally.

For example, a data structure invariant for a binary search tree is that each node has at most 2 children. A more interesting invariant requires that nodes under the left child are all less than the current node and all under the right child are more than the current node. If this condition doesn't hold, then efficient search in a binary search tree will be broken.

A string splitting function will have an invariant at the end of the function (called a post-condition) that says that the two returned parts actually form the original string if reassembled. A splitting function would not otherwise be very useful.

A memory allocator will have an invariant stating that all active allocations are kept track of, so that a further allocation cannot occur in a piece of memory already given to someone else.

Invariants help us describe program behaviour and requirements using logic. At this time, mainstream programming languages don't have any support for working with them. But there are languages in the academic research community that focus on correctness, and they let you write those invariants in the code for formal checking.

3 Error handling

I'm coding a Modern C++ interface around some library from C land and using exceptions to signify any errors, because I don't want to deal with manual propagation of error codes up the stack and I always want to know when an error happens (see here). Manual error code propagation clutters the code, makes it harder to grasp and is prone to human errors. Modern C++ / OOP programming encourages proper object initialization in the constructor (as opposed to an additional Init() method) and exceptions are the only way to report errors from there (also note this Boost article).
Now some people don't like exceptions in C++, but I found that most (not all) of their arguments are based on limitations in old versions of the language or simply ignorance of how exactly the language works. Be sure to get familiar with recent developments in the C++ language, most significantly the RAII pattern.

Of course exceptions have some drawbacks too, and some properties that need to be kept in mind (or your quantum computer brain):

- Throwing and catching exceptions is s……l..o…w. If you expect this to happen often, for example in parsing code, you need error codes (or Expected, see below) - see comparison.
- Throwing an exception in a destructor is very destructive. It will probably crash your program (see here for an example).
- Throwing an exception in the constructor means the object construction was cancelled and the destructor won't be called. That makes sense but also means that a delete m_obj; in the destructor may never be invoked (again, example here) even though you have already new'ed it in the constructor. That is one more reason to use unique_ptr for member variables, since these variables will be protected against a sudden death by exception from the constructor.
- Only one exception can be in flight for a thread at a moment. This means that the exception model cannot support async callback based programming a la Node.js or Python Twisted, and you need to store exceptions manually (mentioned in this talk).
- Exceptions can appear anywhere and there's nothing in the code that will warn you about them.

An Expected type can be returned by functions that are supposed to produce a result but could also fail. It has type parameters for the result value as well as for the error.
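A minimal sketch of such a type (my own; real proposals such as Alexandrescu's Expected<T> can carry exceptions and are considerably richer):

```cpp
#include <cassert>
#include <string>

// Holds either a result value or an error description - the caller
// checks valid() instead of catching an exception.
template<typename T, typename E>
class Expected {
public:
    static Expected success(T v)  { Expected e; e.m_valid = true;  e.m_value = v;  return e; }
    static Expected failure(E er) { Expected e; e.m_valid = false; e.m_error = er; return e; }

    bool valid() const { return m_valid; }
    T value() const { assert(m_valid); return m_value; }    // check valid() first
    E error() const { assert(!m_valid); return m_error; }

private:
    Expected() : m_valid(false), m_value(), m_error() {}
    bool m_valid;
    T m_value;
    E m_error;
};

// Example: parsing without throwing on bad input.
Expected<int, std::string> parseDigit(char c)
{
    if (c < '0' || c > '9')
        return Expected<int, std::string>::failure("not a digit");
    return Expected<int, std::string>::success(c - '0');
}
```

Since failure is just a returned value, a tight parsing loop pays none of the cost of stack unwinding.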
Then (for example) a parsing library could use the following interface:

struct ParseError
{
    int line, col;
    std::string expected;
};

Expected<int, ParseError> parseInt(std::string);
Expected<int, ParseError> parseHex(std::string);
Expected<Url, ParseError> parseUrl(std::string);

The Expected class also seems to support the error monad programming style, but that would be for another day :)

3.1 Error mis-handling

Beginners tend to ignore errors; I still remember I was doing it. I could barely manage to write the code to do what I wanted in the first place. But if we're talking about reliable software, ignoring errors is unacceptable. Quite the opposite: we need to know each and every error that happened, is happening or may happen.

Errors that have happened need to be logged using an appropriate logging framework, and in some cases may even be stored in a database for further analysis or sent to a remote monitoring server. See more in Exception Driven Development.

Errors that are happening right now need to be detected. If calling an external library that doesn't throw exceptions but returns an error code, check the returned code and throw, log or handle (retry?) the problem! From this point of view, exceptions (as opposed to returned error codes) really help because they are not ignored by default, so the risk of forgetting to report an error is lower. But then again, some newbies will very carefully put a catch block around each function so that they can ignore the valuable exception object that bears details about the problem.

How about errors that may happen in the future? Try to anticipate possible problems in the future but don't try to recover, auto-repair or anything like that. Instead, check invariants and assumptions about your data structures' consistency using assertions that report problems immediately.
For example, if you have a class that is not thread-safe and you designated it to be only accessed from a single thread, assert it (this is a trick I found in Chrome source code):

void Gui::UpdateBlinkies()
{
    assert(GetCurrentThread() == MainThread);
    m_blinkie++;
}

Having some hard-to-debug problem that a client reports but you can never see on your machines? Then you should have a log file that provides additional information. If even that doesn't help, rather than spending a week trying to reproduce the problem on your machine, you can ship a debug build to the customer that collects the information that you need, or one that has asserts enabled. If you have used them well, that alone may be able to pinpoint the bug.

3.2 Exception safety

No matter whether you choose exceptions or error codes to handle those unusual unhappy cases, you still need to be careful about exception (or error) safety. This means that you need to 1) release any resources that were acquired before an error and 2) return the application into a consistent state (invariant safety).

Everybody should be pretty familiar with point #1, where doing some new BigObject() must always be followed by a delete even if you get an exception in between. Point #2 is similar except that it is specific to your application invariants. For example, if you are keeping some data in two structures and always need to update both of them on inserting, you need to make sure both things happen (or get rolled back) even if an exception is thrown:

void insert_both(string a, string b)
{
    m_by_name.insert(a, b);
    // OMG, what if an exception happens here?
    DoSomethingElse();
    m_by_addr.insert(b, a);
}

Handling this case could still be easy; just add a catch, remove the item and exit:
void insert_both(string a, string b)
{
    m_by_name.insert(a, b);
    try {
        DoSomethingElse();
        m_by_addr.insert(b, a);
    } catch (...) {
        m_by_name.remove(a, b);
        throw;
    }
}

But if you need to do something like this twice in a function, it starts to get complicated. Fortunately, C++ provides an elegant and convenient way to take care of both requirements. Memory safety has already been described (remember unique_ptr). Invariant safety can be done with a ScopeGuard, which is a more flexible alternative to unique_ptr. There's an implementation in Facebook's folly library. For the above example with two maps, you could use it in the following way:

void insert_both(string a, string b)
{
    m_by_name.insert(a, b);
    ScopeGuard insert_guard = makeGuard([&] {
        m_by_name.remove(a, b);
    });

    DoSomethingElse();
    m_by_addr.insert(b, a);

    insert_guard.dismiss();
}

If any exception occurs in the code, insert_guard will execute the remove operation to restore the original state. If everything goes smoothly to the end of the function, the scope guard will be cancelled by the dismiss() call. This way you can have nice linear code which is easy to understand even if there is more than one rollback. Just imagine the scope guard as "do this cleanup if anything goes wrong down there", as opposed to having nested try-catch clauses with many possible combinations of control flow.

3.2.1 Testing exceptions

When we test our code, we usually focus mostly on the "happy path" where everything goes as planned, and the edge cases receive less attention. But if we want to have truly reliable code, even those error or edge cases deserve some attention. If you use code coverage tools, they will keep flashing their red warnings in the exception handlers at you until you add them to the ignore list (err, I mean, fix them).
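A minimal ScopeGuard along the lines used in insert_both can be sketched in a few lines (my own simplification; folly's real implementation is more careful about copies and exceptions):

```cpp
#include <functional>

// Runs the stored cleanup on scope exit unless dismiss() was called.
class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> fn) : m_fn(fn), m_active(true) {}
    ~ScopeGuard() { if (m_active) m_fn(); }
    void dismiss() { m_active = false; }

    // one guard, one cleanup - copying would run it twice
    ScopeGuard(const ScopeGuard &) = delete;
    ScopeGuard &operator=(const ScopeGuard &) = delete;

private:
    std::function<void()> m_fn;
    bool m_active;
};

int rollbacks = 0;

void risky(bool fail)
{
    ScopeGuard guard([&] { ++rollbacks; });  // the rollback action
    if (fail)
        throw 1;        // guard fires during stack unwinding
    guard.dismiss();    // success path: cancel the rollback
}
```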
Proper unit testing (covered later) of course also requires testing the error and edge cases. You should try to come up with possible incorrect inputs (or problematic program state) and know, for each of them, how the program should handle it. And write this down in a unit test. In this way both the "happy" and failure behaviours of the function are well documented and verified to be correct. This approach to unit-level testing is well established.

On a more coarse scale, there is another technique where we artificially throw exceptions and check that they are handled appropriately. We can throw exceptions at various places in the program and check general properties such as whether it causes memory leaks, memory corruption, or crashes. This is only relevant in languages such as C++ which have those memory issues by default.

To automate this, you would put instrumentation points at interesting places in your program. Then you run your program or test suite over and over, triggering these instrumentation points in sequence. If each run of your program is deterministic (it takes the same path each time), you will have triggered each of the N points in the end, after running the test suite N times.

Since this is a sort of mass exception-injection approach, we cannot test for specific behaviour of specific cases, only for the overall response to exceptions. Memory correctness will be the most typical case. Another one could be ensuring that all those exceptions are properly logged.

This method is very useful if you're creating a binding to another programming language such as Java or Python, or even plain old C. Typically you need to catch exceptions in the C++ world and translate them somehow into exceptions or at least error codes in the target language, without messing up the memory or exception safety. You also need to run this under a memory checker such as Valgrind, Asan or PageHeap, which will inform you if any memory leak or access violation occurred.
If all goes smoothly, you'll know that exceptions can't mess with you. It also probably means that you used RAII and unique_ptr correctly, because without them it's hard to get memory management right in the face of exceptions. This approach has also been described in Exception-Safety in Generic Components. This is how you may implement it:

```cpp
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

using namespace std;

// Once placed in code, it can be redefined to do different types
// of instrumentation such as heap consistency checking.
// NOTE: most likely, this should be disabled in release builds
#define INSTRUMENTATION_POINT ExceptionInstrument::instrument(__FILE__, __LINE__);

class ExceptionInstrument {
public:
    ExceptionInstrument() : m_run_id(0), m_counter(0), m_threw_cnt(0) {}
    static ExceptionInstrument &instance();
    static void dispose();
    bool should_throw();
    void maybe_throw(const std::string &file, int line);
    static void instrument(const std::string &file, int line);
    void next_run();
    void set_run(int no) { m_run_id = no; }

private:
    // singleton
    static ExceptionInstrument *g_instance;

    std::string get_filename_base(const std::string &path) const;

    int m_run_id;
    int m_counter;
    int m_threw_cnt;
};

ExceptionInstrument *ExceptionInstrument::g_instance = NULL;

void ExceptionInstrument::dispose()
{
    if (g_instance != NULL) {
        delete g_instance;
        g_instance = NULL;
    }
}

void ExceptionInstrument::maybe_throw(const std::string &file, int line)
{
    string basename = get_filename_base(file);
    stringstream ss;
    ss << "Instrumentation exception F " << basename << " L " << line;
    if (should_throw())
        throw std::runtime_error(ss.str());
}

void ExceptionInstrument::instrument(const std::string &file, int line)
{
    instance().maybe_throw(file, line);
}

ExceptionInstrument &ExceptionInstrument::instance()
{
    // NOT THREAD SAFE
    if (g_instance == NULL) {
        g_instance = new ExceptionInstrument();
    }
    return *g_instance;
}

std::string ExceptionInstrument::get_filename_base(const std::string &path) const
{
    string::size_type bk_pos = path.rfind('\\');
    string::size_type fw_pos = path.rfind('/');
    string::size_type pos;
    if ((bk_pos != string::npos) && (fw_pos != string::npos)) {
        pos = std::max(bk_pos, fw_pos);
    } else if (bk_pos != string::npos) {
        pos = bk_pos;
    } else if (fw_pos != string::npos) {
        pos = fw_pos;
    } else {
        return path;
    }
    return path.substr(pos);
}

bool ExceptionInstrument::should_throw()
{
    bool res = false;
    if (m_counter == m_run_id) {
        res = true;
        m_threw_cnt += 1;
    }
    m_counter += 1;
    return res;
}

void ExceptionInstrument::next_run()
{
    // this is called from Java, do nothing if instrumentation is disabled
#ifdef ENABLE_INSTRUMENTATION_THROW
    if (m_threw_cnt == 0) {
        cerr << "Instrumentation run " << m_run_id
             << " threw no exceptions." << endl;
    }
#endif
    m_counter = 0;
    m_threw_cnt = 0;
    m_run_id += 1;
}
```

QUESTION: how to handle exceptions in message loops / GUI / … in a way that is debuggable, readable, testable?

4 Automated testing

I use automated unit tests and (more or less) automated integration tests where it makes sense. This is a big and complicated topic and I have an article with a few of my observations in progress. Make sure to keep your interpipes to this blog clean so you don't miss it ;)

5 Tooling for correctness

In no way complete or sufficient, but this is what I use.

5.1 Address Sanitizer (+ more)

If you write in C++ (or god forbid, C), you're going to have memory bugs. Well, unless you are using Address Sanitizer, or Asan. These kinds of bugs can cause your program to crash, which is a little annoying to see. But in some cases a crash can lead to a Remote Code Execution exploit. RCE is basically when a snake whisperer (hacker) convinces your program to start executing some code the hacker offered. So yeah, that's a little less convenient, especially when they use it to steal your money or data.

Asan helps you catch these bugs. Also memory leaks. It can't detect every problem with your code; it only detects problems in code that you actually execute. So you still need a good test suite. It catches every access violation (segfault for Unix folks) and prints beautiful coloured output in the console.
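For a taste of what Asan catches, here is the classic one-byte overflow (the bug is deliberate, for illustration). Compiled with -fsanitize=address, calling the buggy variant aborts with a "heap-buffer-overflow ... WRITE of size 1" report pointing at the exact line; a plain build may just silently corrupt the heap.

```cpp
#include <cstring>

// Deliberate bug: the terminating '\0' needs strlen(s) + 1 bytes.
// Under Asan, the strcpy below is reported as a one-byte heap overflow.
char *dup_buggy(const char *s) {
    char *p = new char[std::strlen(s)];   // off by one: no room for '\0'
    std::strcpy(p, s);                    // writes one byte past the block
    return p;
}

// The fixed variant, for contrast.
char *dup_fixed(const char *s) {
    char *p = new char[std::strlen(s) + 1];
    std::strcpy(p, s);
    return p;
}
```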
And since everybody loves coloured console output (and not having segfaults), I hereby endorse using Asan for everything. Originally developed for the clang compiler, it's now available for gcc as well (sorry for people stuck with gcc 2.x) (I wonder if that gcc version is written on papyrus?). Detailed usage is here, but in short, you will need to create a new build variant for your project that will generate an Asan-instrumented build with the -fsanitize=address flag. This build will be around 2x slower and will abort immediately when an access violation is detected. It does not report false positives; that abort will be something you'll have to fix.

You may have used Valgrind. For memory access violations, Asan is similar but works better. It does require you to recompile the code with Asan enabled, but then it's much more accurate.

5.2 afl-fuzz

So even though it's awesome, Asan won't catch problems in obscure code branches that don't get executed. One thing you can be sure of: hackers will try to find them and run them so that they can pwn your machine and steal your candy. Here come fuzzers, tools that are designed specifically to execute code paths that normally never see the light of day. They do it by running your code, like, a million times, each time with a slightly different input and observing whether it caused some different behaviour in your code. This is mostly suitable for programs that read and parse some input files such as images, videos, PDFs or even antivirus software reading .exe files.

afl-fuzz is one such program, free, open source and pretty good. It's not complete magic though. You need to adjust your project to do nothing but read the input data, and sometimes you need to help the fuzzing process a bit with a hint. afl-fuzz works by inserting instrumentation during compilation that informs the fuzzer where the control flow is going. Based on this instrumentation, it tries to alter the input data to find new control flow paths.
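To make the "do nothing but read the input data" point concrete: a fuzz target usually boils down to one function that takes arbitrary bytes and must never crash or read out of bounds. Here is a sketch of that shape (the length-prefixed record format is invented for illustration; an afl-fuzz driver would read the bytes from stdin or a file and call this function):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy parser for a length-prefixed record. The contract the fuzzer
// hammers on: for ANY input bytes it either parses or returns false --
// it must never read past the end of the buffer.
bool parse_record(const std::vector<uint8_t> &in, std::vector<uint8_t> &payload) {
    if (in.size() < 2)
        return false;                       // too short for the header
    size_t len = (in[0] << 8) | in[1];      // big-endian length prefix
    if (in.size() < 2 + len)
        return false;                       // declared length lies about the buffer
    payload.assign(in.begin() + 2, in.begin() + 2 + len);
    return true;
}
```

Note how both early returns guard the only place the declared length is trusted; a fuzzer combined with Asan is exactly the tool that finds the version of this code where one of those checks is missing.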
And when it finds a control flow branch that crashes your code, it'll happily return the bad input. Programmers will take that sample input to go and fix the bug; hackers will take that sample to develop an exploit.

5.3 Catch

As far as automated testing frameworks for C++ are concerned, there's quite a choice. You've got the one from Google, Boost, even Visual Studio comes with one. I can't compare them all, but I tried Catch and enjoyed it very much. So besides the #1 requirement of coloured console output, it has the benefit of being very light and easy to include in the project (just one header file!) and very easy to use. To assert, you would simply write

```cpp
REQUIRE( factorial(2) == 2 )
```

and it will automatically deconstruct it into the two sides of the == operator, showing expected and actual values if they don't match. Testing this way is much more natural than the classic Assert.AreEqual(factorial(2), 2) or even Assert.That(factorial(2)).Equals(2) or whatever the latest fad in fluent interfaces is.

The BDD style for organizing tests has been the most convenient from what I've seen so far. You can have a hierarchy of test conditions, delimited using SCENARIO, GIVEN, WHEN, THEN, and at each step of the hierarchy you can set up some objects that will be used in the levels below. The test framework will then take this tree and run each path independently.
Let me give an example with a completely imaginary API:

```cpp
SCENARIO("web service test", "[web][http]") {
    WebServiceFake fake;
    RestClient client(fake.url());

    GIVEN("authenticated client") {
        client.user("pete");
        client.password("abcd");

        WHEN("makes request about self") {
            Request r(client.new_request("/user/pete"));
            r.get();

            THEN("gets its data") {
                REQUIRE(r.json().get("salary") == 123);
            }
        }
    }

    GIVEN("guest client") {
        WHEN("makes request about pete") {
            Request r(client.new_request("/user/pete"));
            r.get();

            THEN("gets nothing") {
                REQUIRE(r.json().count() == 0);
            }
            THEN("error is reported") {
                REQUIRE(r.status() == 403);
            }
        }
    }
}
```

In this example, both "authenticated client" and "guest client" will be run separately, with a fresh instance of the client object each time. In all but the most basic unit tests, we have to deal with setting stuff up, and this layered structure is really helpful because it helps avoid duplication while putting the code where it's easy to see.

5.4 PageHeap

While I believe there are plans to make Address Sanitizer available for Windows, at the time of writing that port was not yet ready. PageHeap is a debugging tool built into Windows that can be used to detect buffer overflow errors. It's not as versatile as Asan, but it also helped save my code's neck a few times (it was particularly useful to catch a bug at the boundary of C# and C++ code). It doesn't require you to recompile the code; you just enable it for a particular program using gflags.exe, available with the Windows SDK. It works by putting each allocation at the end of a virtual memory page, which allows the OS to catch any access over the page boundary.

5.5 Other tools

- WinDbg is a very powerful debugger with a bunch of scripts and extensions available. For source-code-based debugging, Visual C++ is pretty sufficient because you can see everything.
But WinDbg sure comes in handy when you don't have the code and need to debug issues outside your own code, or have problems calling closed-source or system libraries. On second thought, you probably don't want to end up digging there unless you enjoy this kind of self-punishment.
- radare2 looks pretty rad for digging in assembly. Sadly I didn't have much time to play around with it. Yes, I seem to enjoy this kind of self-inflicted pain.
- rr is a project from Mozilla that lets you record a program run and then debug the bug out of it by running it over and over and over until you find it.
- F* is the absolute heavyweight here. It lets you write code, prove that it's absolutely correct and then translate it to C/C++. Except for the part where you have to be a genius to prove the correctness of any larger program.

6 That's it?

I've tried to compile my approach to not shooting yourself in the foot while coding in C++. Note that while it's not exactly short, it still doesn't cover everything, for example how not to shoot yourself in the hand, knee or the back of your neck. The story is not over though; perhaps you, dear readers, can reveal some tricks you have up your sleeve? Discuss!
https://blog.rplasil.name/2016/10/
This patchset is a first pass at overhauling the getname/putname interface to use a struct. The idea here is to add a new getname_info struct that allows us to pass around some auxiliary info along with the string that getname() returns. This allows us to do some interesting things:

- no need to walk the list of audit_names in certain cases, since we can store a pointer to the correct audit_name

- we can now call getname() more than once on a userland string. Since we track the original userland pointer, we can avoid doing a second allocation, and can instead fill out the getname_info from the audit_names struct. That makes the ESTALE patchset cleaner, and doesn't explode out the list of getname() callers like the last set.

- eventually we might be able to track the length of the parent portion of the string so the audit code doesn't need to walk it again. I haven't implemented that yet, but it doesn't look too hard to do.

This is based on top of Al's signal.git#execve2 branch, with my most recent audit series on top of that. Al is working on unifying much of the execve code, which will reduce the number of getname callers greatly. This set is still preliminary since Al's set isn't complete yet, and it will probably need to be respun again once he's completed that work. That should shrink patch #4 since we'll have fewer getname callers to deal with at that point.

This set is based on top of my audit overhaul patchset (posted earlier today). I'll also be posting a respun version of my ESTALE retry patchset soon that's based on top of this one.

While this all seems to work correctly, I have my doubts about patch #9 in this series. That was suggested by Al and should make it so that we only need a single allocation per getname() call in most cases. OTOH, it adds a rarely traveled codepath that could be a source of bugs in the future.
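For readers following along, the shape being proposed is roughly the following. This is a C-style sketch only; the field names are guesses inferred from this cover letter, not the patchset's actual definitions.

```cpp
// Illustrative sketch of the getname_info idea: getname() returns a
// struct instead of a bare string, so auxiliary data travels with it.
struct audit_names;                // opaque here; owned by the audit code

struct getname_info {
    const char *name;              // kernel-space copy of the pathname
    const char *uptr;              // original userland pointer (__user in the kernel)
    struct audit_names *aname;     // cached audit entry: no list walk needed
};
```

The cached audit_names pointer is what avoids walking the audit list, and keeping the original userland pointer is what lets a second getname() on the same string skip a second allocation.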
Jeff Layton (10):

 arch/alpha/kernel/osf_sys.c             |  16 +--
 arch/avr32/kernel/process.c             |   4 +-
 arch/blackfin/kernel/process.c          |   4 +-
 arch/c6x/kernel/process.c               |   4 +-
 arch/cris/arch-v10/kernel/process.c     |   4 +-
 arch/cris/arch-v32/kernel/process.c     |   4 +-
 arch/frv/kernel/process.c               |   4 +-
 arch/h8300/kernel/process.c             |   4 +-
 arch/hexagon/kernel/syscall.c           |   4 +-
 arch/ia64/kernel/process.c              |   4 +-
 arch/m32r/kernel/process.c              |   4 +-
 arch/m68k/kernel/process.c              |   4 +-
 arch/microblaze/kernel/sys_microblaze.c |   4 +-
 arch/mips/kernel/linux32.c              |   4 +-
 arch/mips/kernel/syscall.c              |   4 +-
 arch/mn10300/kernel/process.c           |   4 +-
 arch/openrisc/kernel/process.c          |   4 +-
 arch/parisc/hpux/fs.c                   |   4 +-
 arch/parisc/kernel/process.c            |   4 +-
 arch/parisc/kernel/sys_parisc32.c       |   4 +-
 arch/score/kernel/sys_score.c           |   4 +-
 arch/sh/kernel/process_32.c             |   4 +-
 arch/sh/kernel/process_64.c             |   4 +-
 arch/sparc/kernel/process_32.c          |   4 +-
 arch/sparc/kernel/process_64.c          |   4 +-
 arch/sparc/kernel/sys_sparc32.c         |   4 +-
 arch/tile/kernel/process.c              |   8 +-
 arch/unicore32/kernel/sys.c             |   4 +-
 arch/xtensa/kernel/process.c            |   4 +-
 fs/compat.c                             |  12 +-
 fs/exec.c                               |  13 +-
 fs/filesystems.c                        |   4 +-
 fs/internal.h                           |   4 +-
 fs/namei.c                              | 214 +++++++++++++++++++++-----
 fs/namespace.c                          |   6 +-
 fs/open.c                               |  33 ++++-
 fs/quota/quota.c                        |   4 +-
 include/linux/audit.h                   |  26 ++--
 include/linux/fs.h                      |  23 +++-
 init/do_mounts.c                        |   7 +-
 ipc/mqueue.c                            |   9 +-
 kernel/acct.c                           |   6 +-
 kernel/auditsc.c                        | 124 +++++++++++-------
 mm/swapfile.c                           |  11 +-
 44 files changed, 392 insertions(+), 236 deletions(-)

--
1.7.11.4
https://www.redhat.com/archives/linux-audit/2012-September/msg00012.html
ACTIVITY SUMMARY (2021-02-26 - 2021-03-05)
Python tracker at

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    7444 (+14)
  closed 47702 (+66)
  total  55146 (+80)

Open issues with patches: 2959

Issues opened (53)
==================

#43331: [Doc][urllib.request] Explicit the fact that header keys are s  opened by axel3rd
#43332: http/client.py: - uses multiple network writes, possibly causi  opened by zveinn
#43333: utf8 in BytesGenerator  opened by darcy.beurle
#43334: venv does not install libpython  opened by anuppari
#43336: document whether io.TextIOBase.readline(size>0) will always re  opened by calestyo
#43337: export the set newline value on TextIOBase/TextIOWrapper  opened by calestyo
#43338: [feature request] Please provide offical installers for securi  opened by zby1234
#43340: json.load() can raise UnicodeDecodeError, but this is not docu  opened by mattheww
#43341: functools.partial missing __weakref__ descriptor?  opened by bup
#43346: subprocess.run() sometimes ignores timeout in Windows  opened by eryksun
#43347: IDLE crashes in macOS Apple chip, maybe completions  opened by rhettinger
#43348: XMLRPC behaves strangely under pythonw, not under python  opened by tim_magee
#43350: [sqlite3] Active statements are reset twice in _pysqlite_query  opened by erlendaasland
#43351: `RecursionError` during deallocation  opened by andrewvaughanj
#43352: Add a Barrier object in asyncio lib  opened by yduprat
#43353: Document that logging.getLevelName() can return a numeric valu  opened by felixxm
#43354: xmlrpc.client: Fault.faultCode erroneously documented to be a  opened by jugmac00
#43355: __future__.annotations breaks inspect.signature()  opened by 1ace
#43356: PyErr_SetInterrupt should have an equivalent that takes a sign  opened by pitrou
#43357: Python memory cleaning  opened by absvsb
#43358: Bad free in assemble function  opened by alex.henrie
#43359: Dead assignment in Py_UniversalNewlineFgets  opened by alex.henrie
#43360: Dead initialization in parse_abbr function  opened by alex.henrie
#43361: Dead assignment in idna_converter function  opened by alex.henrie
#43362: Bad free in py_sha3_new_impl function  opened by alex.henrie
#43364: Windows: Make UTF-8 mode more accessible  opened by methane
#43365: Operation conflict between time package and file in python 3.8  opened by minaki_2525
#43366: Unclosed bracket bug in code.interact prevents identifying syn  opened by aroberge
#43367: submodule of c-extension module is quirky  opened by mattip
#43371: Mock.assert_has_calls works strange  opened by dmitriy.mironiyk
#43372: ctypes: test_frozentable fails when make regen-frozen  opened by hroncok
#43374: Apple refuses apps written in Python  opened by adigeo
#43377: _PyErr_Display should be available in the CPython-specific API  opened by Maxime Belanger
#43378: Pattern Matching section in tutorial refers to | as or  opened by ramalho
#43379: Pasting multiple lines in the REPL is broken since 3.9  opened by romainfv
#43380: Assigning function parameter to class attribute by the same na  opened by jennydaman
#43381: add small test for frozen module line number table  opened by nascheme
#43382: github CI blocked by the Ubuntu CI with an SSL error  opened by gregory.p.smith
#43384: Include regen-stdlib-module-names in regen-all  opened by nascheme
#43387: Enable pydoc to run as background job  opened by digitaldragon
#43388: shutil._fastcopy_sendfile() makes wrong (?) assumption about s  opened by lhuedepohl
#43389: Cancellation ignored by asyncio.wait_for can hang application  opened by nmatravolgyi
#43391: The comments have invalid license information (broken Python 2  opened by poikilos
#43392: Optimize repeated calls to `__import__()`  opened by Kronuz
#43395: os.path states that bytes can't represent all MBCS paths under  opened by ericzolf
#43397: Incorrect conversion path case with german character  opened by voramva
#43398: [sqlite3] sqlite3.connect() segfaults if given a faulty Connec  opened by erlendaasland
#43399: xml.etree.ElementTree.extend does not work with iterators when  opened by alexprengere
#43400: Improve recipes and howtos for the unittest.mock  opened by eppeters
#43405: DeprecationWarnings in test_unicode  opened by ZackerySpytz
#43406: Possible race condition between signal catching and signal.sig  opened by pitrou
#43407: time.monotonic(): Docs imply comparing call N and call N+M is  opened by Alex.Willmer
#43410: Parser does not handle correctly some errors when parsin from  opened by pablogsal

Most recent 15 issues with no replies (15)
==========================================

#43398: [sqlite3] sqlite3.connect() segfaults if given a faulty Connec
#43397: Incorrect conversion path case with german character
#43392: Optimize repeated calls to `__import__()`
#43388: shutil._fastcopy_sendfile() makes wrong (?) assumption about s
#43387: Enable pydoc to run as background job
#43384: Include regen-stdlib-module-names in regen-all
#43381: add small test for frozen module line number table
#43377: _PyErr_Display should be available in the CPython-specific API
#43371: Mock.assert_has_calls works strange
#43362: Bad free in py_sha3_new_impl function
#43361: Dead assignment in idna_converter function
#43360: Dead initialization in parse_abbr function
#43359: Dead assignment in Py_UniversalNewlineFgets
#43357: Python memory cleaning
#43356: PyErr_SetInterrupt should have an equivalent that takes a sign

Most recent 15 issues waiting for review (15)
=============================================

#43410: Parser does not handle correctly some errors when parsin from
#43407: time.monotonic(): Docs imply comparing call N and call N+M is
#43406: Possible race condition between signal catching and signal.sig
#43405: DeprecationWarnings in test_unicode
#43400: Improve recipes and howtos for the unittest.mock
#43399: xml.etree.ElementTree.extend does not work with iterators when
#43398: [sqlite3] sqlite3.connect() segfaults if given a faulty Connec
#43392: Optimize repeated calls to `__import__()`
#43391: The comments have invalid license information (broken Python 2
#43384: Include regen-stdlib-module-names in regen-all
#43382: github CI blocked by the Ubuntu CI with an SSL error
#43381: add small test for frozen module line number table
#43377: _PyErr_Display should be available in the CPython-specific API
#43372: ctypes: test_frozentable fails when make regen-frozen
#43364: Windows: Make UTF-8 mode more accessible

Top 10 most discussed issues (10)
=================================

#43382: github CI blocked by the Ubuntu CI with an SSL error  19 msgs
#42128: Structural Pattern Matching (PEP 634)  15 msgs
#43374: Apple refuses apps written in Python  8 msgs
#43400: Improve recipes and howtos for the unittest.mock  7 msgs
#43355: __future__.annotations breaks inspect.signature()  6 msgs
#43364: Windows: Make UTF-8 mode more accessible  6 msgs
#43060: Convert _decimal C API from pointer array to struct  5 msgs
#43284: Wrong windows build post version 2004  5 msgs
#43380: Assigning function parameter to class attribute by the same na  5 msgs
#41972: bytes.find consistently hangs in a particular scenario  4 msgs

Issues closed (65)
==================

#11717: conflicting definition of ssize_t in pyconfig.h  closed by pablogsal
#14597: Cannot unload dll in ctypes until script exits  closed by eryksun
#18597: On Windows sys.stdin.readline() doesn't handle Ctrl-C properly  closed by eryksun
#19050: [Windows] fflush called on pointer to potentially closed file  closed by eryksun
#19809: Doc: subprocess should warn uses on race conditions when multi  closed by eryksun
#28462: subprocess pipe can't see EOF from a child in case of a few ch  closed by eryksun
#29561: Interactive mode gives sys.ps2 not sys.ps1 after comment-only  closed by eryksun
#29829: Documentation lacks clear warning of subprocess issue with pyt  closed by eryksun
#32477: Move jumps optimization from the peepholer to the compiler  closed by Mark.Shannon
#32795: subprocess.check_output() with timeout does not exit if child  closed by eryksun
#33105: os.path.isfile returns false on Windows when file path is long  closed by eryksun
#33245: Unable to send CTRL_BREAK_EVENT  closed by eryksun
#34064: subprocess functions with shell=1 pass wrong command to win32  closed by eryksun
#38302: [3.10] __pow__ and __rpow__ are not reached when __ipow__ retu  closed by brett.cannon
#39169: TypeError: 'int' object is not callable if the signal handler  closed by pitrou
#39523: Unnecessary variable assignment and initial loop check in pysq  closed by berker.peksag
#41180: marshal load bypass code.__new__ audit event  closed by tkmk
#42129: Support resources in namespace packages  closed by jaraco
#42246: Implement PEP 626 -- Precise line numbers for debugging  closed by Mark.Shannon
#42603: Tkinter: pkg-config is not used to get location of tcl and tk  closed by ned.deily
#42782: shutil.move creates a new directory even on failure  closed by orsenthil
#42994: Missing MIME types for opus, AAC, 3gpp and 3gpp2  closed by orsenthil
#43049: Use io.IncrementalNewlineDecoder for doctest newline conversio  closed by zach.ware
#43162: Enum regression: AttributeError when accessing class variables  closed by ethan.furman
#43189: <test.support> decorator function run_with_locale() crashes Py  closed by Mark.Shannon
#43190: < test.support > check_free_after_iterating( ) causes core dum  closed by Mark.Shannon
#43211: Python is not responding after running program  closed by zach.ware
#43233: test_os: test_copy_file_range_offset fails on FreeBSD CURRENT  closed by pablogsal
#43251: [sqlite3] sqlite3_column_name() failures should raise MemoryEr  closed by berker.peksag
#43271: AMD64 Windows10 3.x crash with Windows fatal exception: stack  closed by gvanrossum
#43288: test_importlib failure due to missing skip() method  closed by orsenthil
#43289: step bug in turtle's for loop  closed by terry.reedy
#43300: "bisect" module should support reverse-sorted sequences  closed by rhettinger
#43305: A typo in /Modules/_io/bufferedio.c  closed by malin
#43315: Decimal.__str__ has no way to force exact decimal representati  closed by mark.dickinson
#43321: PyArg_ParseTuple() false-returns SUCCESS though SystemError an  closed by methane
#43326: About Zipfile  closed by Fcscanf
#43328: make test errors  closed by asholomitskiy84
#43335: _ctypes/callbacks.c cannot be compiled by gcc 4.4.7 (RHEL6)  closed by corona10
#43339: Could not build the ssl module! | macOS with `CPPFLAGS` and `L  closed by samuelmarks
#43342: Error while using Python C API  closed by eric.smith
#43343: argparse.REMAINDER missing in online documentation for 3.9.x  closed by rhettinger
#43344: RotatingFileHandler breaks file type associations  closed by vinay.sajip
#43345: Add __required_keys__ and __optional_keys__ to TypedDict docum  closed by pbryan
#43349: [doc] incorrect tuning(7) manpage link  closed by benjamin.peterson
#43363: memcpy writes to wrong destination  closed by josh.r
#43368: Empty bytestrings are not longer returned on SQLite.  closed by berker.peksag
#43369: [sqlite3] Handle out-of-memory errors in sqlite3_column_*()  closed by berker.peksag
#43370: thread_time not available on python.org OS X builds  closed by ned.deily
#43373: Tensorflow  closed by mark.dickinson
#43375: memory leak in threading ?  closed by igorvm
#43376: Add PyComplex_FromString  closed by brandtbucher
#43383: imprecise handling of weakref callbacks  closed by konrad.schwarz
#43385: heapq fails to sort tuples by datetime correctly  closed by steven.daprano
#43386: test_ctypes hangs inside Portage build env since 'subprocess:  closed by mgorny
#43390: Set the SA_ONSTACK in PyOS_setsig to play well with other VMs  closed by gregory.p.smith
#43393: Older Python builds are missing a required file on Big Sur  closed by ned.deily
#43394: Compiler warnings on master (-Wstrict-prototypes)  closed by brandtbucher
#43396: Use more descriptive variable names in sqlite3 docs  closed by toreanderson
#43401: dbm module doc page redirects to itself  closed by Numerlor
#43402: IDLE shell adds newline after print even when `end=''` is spec  closed by terry.reedy
#43403: Misleading statement about bytes not being able to represent w  closed by eryksun
#43404: No SSL certificates when using the Mac installer  closed by ned.deily
#43408: about the method: title()  closed by christian.heimes
#43409: [Win] Call subprocess.Popen() twice makes Thread.join() interr  closed by eryksun
https://mail.python.org/archives/list/python-dev@python.org/thread/MJE4YVT2CCXFNK4DNRADNX2C53DLUMBW/
I am attempting to parse an XML file from the NCBI website using Swift. When the file is in my Xcode project, I have the code to parse it. I also have the code to download the file in Swift. My trouble is that once the file is downloaded, how do I call it and parse it from there?

Below is my code for parsing the file that is in my Xcode project:

```swift
import UIKit
import XMLCoder

struct GBSet: Decodable {
    let GBSeq: GBSeq
}

struct GBSeq: Decodable {
    let GBSeq_locus: String
    let GBSeq_length: String
    let GBSeq_strandedness: String
    let GBSeq_moltype: String
    let GBSeq_definition: String
    let GBSeq_topology: String
    let GBSeq_keywords: String
    let GBSeq_source: String
    let GBSeq_organism: String
    let GBSeq_taxonomy: String
    let GBSeq_sequence: String
}

class ViewController: UIViewController, XMLParserDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        let filePath = Bundle.main.path(forResource: "sequence", ofType: "xml")
        let data = try! Data(contentsOf: URL(fileURLWithPath: filePath!))
        let gbSet = try! XMLDecoder().decode(GBSet.self, from: data)
        print(gbSet)
    }
}
```

Below is my code for downloading the XML file via URL:

```swift
class ViewController: UIViewController, XMLParserDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        let seqId = "AB003468"
        let urlString = ""
        let url = URL(string: urlString)!
        URLSession.shared.dataTask(with: url) { (data, response, error) in
            if let error = error {
                print(error.localizedDescription)
                return
            }
            guard let data = data else {
                print("invalid data!")
                return
            }
            print(String(data: data, encoding: .utf8)!)
        }
        .resume()
    }
}
```

How can I connect the two, so that I download the file and then parse it?
https://www.breathinglabs.com/monitoring-feed/genetics/how-to-parse-xml-files-from-a-downloaded-url-file-in-swift/
WCF's fundamental communication mechanism is SOAP-based Web services. Because WCF implements the Web services technologies defined by the WS-* specifications, other software that is based on SOAP and supports the WS-* specifications can communicate with WCF-based applications. To build a cross-platform WCF server, you can use Metro. Metro is a Web Services framework that provides tools and infrastructure to develop Web Services solutions for end users and middleware developers. It is based on the Java programming language. The latest version of Metro is 1.4.

In the development process of SCM Anywhere (an SCM tool with fully integrated Version Control, Issue Tracking and Build Automation, developed with WCF and Metro/WSIT), we found that Metro is NOT as mature as WCF. There are lots of small issues in Metro/WSIT. Luckily, Metro is an open source project and keeps evolving all the time. Our experience is that if you find some features are not working properly in Metro, keep downloading the latest version from Java.net. Several weeks later, you may discover that the features are working properly.

To implement a Java client that communicates with the WCF server, you can follow the steps below:

1. Download Metro/WSIT from the home page of Metro.

2. Download Eclipse. We use Eclipse + Metro to develop Dynamsoft SCM Anywhere.

3. Install Metro by executing the command: java -jar metro-1_4.jar. The installation folder of Metro contains some documents, tools and samples. You can find the documents in the "docs" folder.

4. Use the C# project "WcfService1" (provided in my WCF client and WCF service article) as the WCF server. Go to Properties of the WCF project, and set the server port to one that is not occupied by other services. Here we used 8888 for example. In the "web.config" file, change the string "wsHttpBinding" to "basicHttpBinding".

5. This is the key step. We use the wsimport tool included in Metro to generate the Java client code.
Create a file named "service1.xml" and copy the following code to the file:

```xml
<bindings xmlns:xsd="" xmlns:
  <bindings node="wsdl:definitions">
    <enableAsyncMapping>true</enableAsyncMapping>
  </bindings>
</bindings>
```

The parameter "enableAsyncMapping" means generating the asynchronous methods for communicating with the WCF server. Save this file, and execute the following command:

```
bin\wsimport -extension -keep -Xnocompile -b service1.xml
```

Then you can find two new directories in the Metro folder: "org" and "com". They contain the generated Java code.

6. Open the Eclipse IDE, create a new Java project named "SimpleWCFClient", and copy the two new directories "org" and "com" to the "src" folder of the project. Refresh the project, and you will find the new code files in the project.

7. Create a test class named "WCFTest" and write the following code to the file:

```java
import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.ws.BindingProvider;

import org.tempuri.IService1;
import org.tempuri.Service1;

public class WCFTest {
    public static void main(String[] strArgs) {
        try {
            Service1 service1 = new Service1(new URL(""), new QName("", "Service1"));
            IService1 port = service1.getBasicHttpBindingIService1();
            ((BindingProvider) port).getRequestContext()
                    .put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "");
            int input = 100;
            String strOutput = port.getData(input);
            System.out.println(strOutput);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

8. Four .jar files need to be added to the Java project. You can get these files from the "lib" folder of Metro: webservices-api.jar, webservices-extra.jar, webservices-extra-api.jar, webservices-rt.jar. Go to Properties of the project and add these jars to the project.

9. Compile and run the Java project. If the Eclipse console outputs "You entered: 100", congratulations, you are successful.

You can download the code from here.
Once you are familiar with these steps, you will find it very convenient to write a Java application that communicates with a WCF service.

Links:
Previous article: WCF client and WCF service
Next article: Data types between Java and WCF
WCF & Java Interop series home page: WCF & Java Interop
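Under the hood, the client stub that wsimport generates does little more than POST a SOAP envelope to the endpoint address set via ENDPOINT_ADDRESS_PROPERTY. As a rough, language-neutral illustration of that envelope (this is not the article's code; the tempuri-style namespace and the GetData/value element names are assumptions modeled on the example service, not taken from a real WSDL):

```python
# Sketch of the SOAP 1.1 request body a generated stub would send for
# GetData(100).  SVC_NS is a hypothetical service namespace.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://tempuri.org/"  # assumption: replace with your service namespace

def build_get_data_envelope(value):
    """Return the SOAP envelope for a GetData(value) call as an XML string."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetData")
    arg = ET.SubElement(op, f"{{{SVC_NS}}}value")
    arg.text = str(value)
    return ET.tostring(env, encoding="unicode")

print(build_get_data_envelope(100))
```

Seeing the wire format this way can help when debugging interop problems with a tool like a TCP proxy, since both the WCF service and the Metro client must agree on exactly this envelope shape.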
http://www.codepool.biz/java-client-and-wcf-server.html
CC-MAIN-2017-34
refinedweb
673
58.48
A customer had an issue where they had a cluster of ISAM nodes, and some of the junctions they had configured used LTPA for authentication. They had only copied the LTPA keys onto the nodes in the cluster that hosted the Web Reverse Proxy gateway (aka WebSEAL). When they upgraded from ISAM v8 to v9, both the LTPA keys and the junctions that used those LTPA keys went missing.

Answer by AndyMoore (58) | Jul 26, 2016 at 06:42 AM

This problem only occurs if you have multiple ISAM nodes in a cluster. During the upgrade to v9.0, the cluster primary master node replicates its copy of the LTPA keys to the other nodes. If the primary master node does not have any LTPA keys, it pushes its "empty" keystore to the other nodes in the cluster, removing the LTPA keys from all those nodes as well. The way to fix this is either:
- import the LTPA keys into the cluster primary node before upgrading to v9.0 (if you are reading this in preparation for an upgrade), or
- re-import the LTPA keys into each node after upgrading to v9.0 (if you have already done an upgrade and hit this problem).
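The failure mode in the answer can be sketched as a toy model: the primary's keystore unconditionally overwrites every node's copy, so an empty primary wipes the keys cluster-wide. This is illustrative Python, not ISAM's actual replication code; node names and key contents are made up.

```python
# Toy model of the v9 upgrade behaviour: the cluster primary master pushes
# its keystore to every other node, replacing whatever they held before.

def replicate_keystore(primary_keys, node_names):
    """Overwrite each node's keystore with a copy of the primary's keys."""
    return {name: dict(primary_keys) for name in node_names}

# Only the WebSEAL nodes had the LTPA keys; the primary master did not.
nodes = ["primary", "webseal1", "webseal2"]
after_upgrade = replicate_keystore({}, nodes)  # empty primary wipes every node
after_fix = replicate_keystore({"ltpa": "key-material"}, nodes)  # import first
```

The second call models the recommended fix: get the keys onto the primary before the replication happens, and the same push then distributes them everywhere.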
https://developer.ibm.com/answers/questions/290406/after-an-upgrade-from-isam-v8-to-v9-ltpa-enabled-j.html?smartspace=security-core
jEdit 4.1pre5 is now available from <>. Thanks to Axel Biernat, Chris Petersen, Eric Benoit, Fan Ho Yin, Kris Kopicki, Marco Gotze, Ollie Rutherfurd, and Steve Snider for contributing to this release.

+ Syntax Highlighting Changes:

- Added NQC syntax highlighting (Fan Ho Yin)
- Added Ruby-HTML syntax highlighting (Eric Benoit)
- Added Pike syntax highlighting (Marco Gotze)
- Updated C-Sharp syntax highlighting (Ollie Rutherfurd)
- Updated Perl syntax highlighting (Chris Petersen)
- Updated PL-SQL syntax highlighting (Steve Snider)
- Updated CSS syntax highlighting (Axel Biernat)
- The "More accurate syntax highlighting" option has been removed. When it was on, it would do the following:
  - Cause the buffer to be parsed entirely by the syntax engine when first loaded.
  - When parsing a line for syntax tokens, scan back to the start of the buffer looking for a line with valid syntax info, instead of only looking 100 lines back.
  However, the second made the first unnecessary, and with the first behavior gone, the performance hit is not noticeable. So this option is now effectively always on.

+ Global Options Dialog Changes:

- The tool bar option pane now has an "Edit" button for modifying the currently selected tool bar entry.
- org.gjt.sp.jedit.gui.OptionsDialog is now an abstract class, with a concrete org.gjt.sp.jedit.options.GlobalOptions subclass holding the Global Options-specific code. This allows plugins to create paned dialog boxes similar to Global Options.
- The "Standard go to next/previous word behavior" setting has been removed; instead some new actions have been added which can be bound to C+LEFT, C+RIGHT, CS+LEFT and CS+RIGHT to achieve the behavior of this setting:
  Go to Next Word (Eats Whitespace)
  Go to Previous Word (Eats Whitespace)
  Select Next Word (Eats Whitespace)
  Select Previous Word (Eats Whitespace)

+ Plugin Manager Changes:

- Added "Select All" button to Install and Update Plugins dialogs
- The plugin list is only downloaded once per Plugin Manager dialog box instance

+ Miscellaneous Changes:

- Added an "Unsplit Current" command, bound to C+0 by default. It removes the split pane containing the current edit pane only, as opposed to the "Unsplit All" command (previously "Unsplit", still bound to C+1) which removes all splits from the view.
- Behavior of the "Warn if file is modified on disk by another program" setting is now more intuitive; even if it's off, the write protection status of a buffer is still updated if it changes on disk. Also this setting now controls the modification check when saving; previously it only controlled the check performed when jEdit received focus.
- jEdit now makes sure that windows are within the bounds of the screen when loading saved geometry. This should improve matters for people who use a laptop with a docking station that has a different resolution, etc.
(Kris Kopicki)

+ Bug Fixes:

- Line numbers in the 'Markers' menu were off by one
- On some Java versions, the popup menu code would not work in frames and dialog boxes and print a stream of exceptions
- Fixed exception thrown on MacOS X when attempting to list "Local Drives" in the file system browser (Kris Kopicki)
- Fixed problems if a macro file name had a space in it
- Fixed a number of problems with mode property handling:
  - It was not possible to override a mode's property with a blank value; for example if you no longer wanted objective-c mode to open *.m files (and instead use matlab mode for those files) you had to enter a dummy filename glob in the objective-c settings.
  - Changing the filename or first line glob in the "Mode Specific" pane would not take effect until jEdit was restarted.
- Entering a relative path in the file system browser's "Path" field didn't work
- "Format Paragraph" command would insert extra newlines if a line ended with a space
- The MacOS plugin had a version check that looked for an exact MRJ version match, rather than an equal or newer version. This broke the plugin when running on MacOS X 10.2. (Kris Kopicki)
- If an action caused the creation of a dockable window, the standard variables (view, buffer, textArea, editPane) would be cleared from the action's namespace from that point on. This has been fixed by making dockable creation and action invocation take place in different namespaces.
- Fixed a display problem if a SEQ_REGEXP, SPAN_REGEXP or EOL_SPAN_REGEXP syntax highlighting rule matched a tab.
- Added a workaround for a Java problem where very wide rectangles were not painted properly in the selection painting code.
- The TERMINATE_AT rule was broken; the number of characters to terminate at was taken to be from the start of the file, not from the start of the current line. This broke FORTRAN syntax highlighting, for example.
- The "Close Current Docking Area" command should work now.
Hello jEdit users-

This evening, I have released the latest batch of plugin updates. Both of these plugins work with jEdit 4.0 and 4.1 under JDK 1.3 or higher.

* FastOpen 0.6: fixed a small bug where FastOpen would get an NPE on clicking OK/Cancel in the Global Options box when FastOpen options were not changed; new regular expressions support for finding files; added a new option in the Global Options to toggle Ignorecase when doing a search; documentation for FastOpen has been included in this release, which previously existed only in the form of release notes & change logs; requires jEdit 4.0pre1, ProjectViewer 1.0.2, and JDK 1.3

* JTools 1.1: adds Extend / Implement Wizard tool v1.0; adds Toggle Line Comment & Toggle Range Comment tools v1.0; Check Imports tool v1.1 now combines the functionality of Check Imports v1.0 and Resolve Imports; addresses several issues the old versions didn't (i.e. duplicate imports, ambiguous imports & java.lang imports); Check Imports can now be applied to all buffers or just the current buffer; options now govern whether the tool inserts wildcard resolutions and automatically deletes unwanted imports or flags them with ErrorList; fixed a bug where a RuntimeException was thrown if the buffer was locked; resolved imports are now sorted; requires jEdit 4.0pre1, ErrorList 1.2, and JDK 1.3

-md
http://sourceforge.net/p/jedit/mailman/jedit-announce/?viewmonth=200210
Thorsten Scherler wrote:
> Hi all,
>
> the forrest site target is broken for views. Besides that you *need* to
> copy commons-jxpath-1.2.jar from cocoon-2.2.x into lib/core of forrest.
> That will break the whole forrest site as well for the "old fashion"
> skins.
>
> The cli depends on the linkmap, which cannot be built because of a conflict
> of namespaces. site:bla will be interpreted as a jxpath expression and
> site: as a namespace.

Not only in the 'forrest' cli mode but also in 'forrest run' mode:

-David

> I will have time to look at that on the weekend at the earliest. I am sorry
> for any inconvenience and that views in the trunk are broken. Anyway the
> problem is in our code and we need to fix it, otherwise we will never be
> able to use JX in forrest, which is a showstopper for any release.
>
> salu2
> --
> thorsten
>
> "Together we stand, divided we fall!"
> Hey you (Pink Floyd)
http://mail-archives.apache.org/mod_mbox/forrest-dev/200509.mbox/%3C20050924035640.GD23834@igg.indexgeo.com.au%3E
Projects and Spring: You can easily use this Login application to quick start your... ASAP. These Struts projects will help you jump the hurdle of learning complex... learning easy. Using the Spring framework in your application.

Project in STRUTS: ...of type as error when i m compiling it, can anyone pls help me to correct this error.

public class WriteByteArrayToFile {
    public static void main(String

illegal start type, HELP!:

import java.util.Scanner;
public class Lab6ex9 {
    public static void main(String[] args) {
        Scanner... is " + shipping + " dollars.");
    }
}
}

Illegal start type error? help?

start tomcat automatically by double clicking on exe: Hi, I wanted to start tomcat automatically by double clicking on exe. i made project in jsp... start and simultaneously my first.jsp should start. PLZ help me out.

java.sql.SQLException: Before start of result set - JDBC: java.sql.SQLException: Before start of result set? what do i do? Hi Friend, It seems that there is a problem with the resultSet.next() method: It is used before SQL

SQL get start date and end date result: how to get (15 march 2011) and (15/03/2011) output using SQL
http://roseindia.net/discussion/2219-SPRING-...-A-JUMP-START.html
Important: Please read the Qt Code of Conduct - Qt Quick Dialog focus does not work?

Hi, when I open a dialog and set focus = true, focus does not move to the dialog; it stays on the main page, and the TAB key does not work until I click on a child inside the dialog. How can I automatically move focus to a child when the dialog is opened?

@sardar Dialog does not have a focus property. If you are trying to bring focus to one of the Items inside that Dialog, then you can explicitly set focus by setting it to true for that item.

Thanks. Dialog inherits from Popup, and Popup has a focus property. The Gallery example in Qt 5.8 uses the focus property on Dialog, but it does not work. Sample code from the SDK samples:

Dialog {
    id: settingsDialog
    x: Math.round((window.width - width) / 2)
    y: Math.round(window.height / 6)
    width: Math.round(Math.min(window.width, window.height) / 3 * 2)
    height: settingsColumn.implicitHeight + topPadding + bottomPadding
    modal: true
    focus: true
    standardButtons: Dialog.Ok | Dialog.Cancel
}

What is Popup? Can you point out that QML type?

Dialog is part of Qt Quick Controls 2.1, added in Qt 5.8. You can see it in the docs: Dialog QML Type, Popup QML Type.

@sardar Well, I was not aware of the unreleased version. You're right, it should work; the focus property is meant for that. Btw, what is the root element from which you launch the Dialog? Can you try using ApplicationWindow? As per the docs: "In order to ensure that a popup is displayed above other items in the scene, it is recommended to use ApplicationWindow." So perhaps the newly opened Dialog would get the focus automatically?

Thanks for your answer. I use the Qt samples in the SDK, the Gallery sample. Yes, the Dialog is used in an ApplicationWindow.

@sardar In your first post you said you have a child in the Dialog. You should also try setting focus on that particular child once you open the Dialog.

Looks like a regression caused by another focus-related fix. We'll fix this right away.
As a temporary workaround, you can do for example:

Button {
    onClicked: {
        dialog.open()
        dialog.forceActiveFocus()
    }
}

Sorry for the inconvenience.

Any news on this? To this day, it's still not working. I want to make a Dialog that is a serial shell. In this dialog I have one TextArea to display the serial frames received and sent and one TextInput for writing the frames I want to send. The problem is that when I type my frame and want to send it by pressing Enter, it also quits the dialog, as if pressing the "Close" button. I can't figure out how to force keyboard focus only on the TextInput and not trigger the reject() signal of the dialog when pressing Enter. I tried changing the focus variable but of course, it "cannot assign to non-existent property" for the reasons invoked above in this thread (temporary regression). I tried the forceActiveFocus() command but it says that forceActiveFocus is not a function (sic)... I tried to catch the key-pressed event but it says "Could not attach Keys property to:" on any object.
Here is a curated snippet : import QtQuick 2.7 import QtQuick.Controls 2.0 import QtQuick.Layouts 1.1 import QtQuick.Dialogs 1.2 Dialog { property alias serialExchange: serialExchange id: serialShellRoot title: qsTr("Serial shell") + translator.emptyString modality: Qt.NonModal standardButtons: StandardButton.Close | StandardButton.Reset height: 480 width: 640 ColumnLayout { anchors.fill: parent Rectangle { Layout.fillHeight: true Layout.fillWidth: true border { color: "black" width: 1 } Flickable { anchors.fill: parent TextArea.flickable: TextArea { id: serialExchange wrapMode: TextEdit.WrapAnywhere textFormat: Text.RichText readOnly: true selectByMouse: true font.family: "Courier New" font.pixelSize: 18 color: "white" background: Rectangle { color: "black" } } ScrollBar.vertical: ScrollBar { } } } Rectangle { Layout.fillWidth: true Layout.preferredHeight: 24 border { color: "black" width: 1 } TextInput { id: serialPrompt anchors.fill: parent padding: 5 selectByMouse: true font.family: "Courier New" focus: true onAccepted: { console.log(this.text); } } } } onReset: { serialExchange.remove(0, serialExchange.length); } } This code is located in a separate SerialShell.qml file and is called from the main.qml file which is an ApplicationWindow. Any idea about this particular problem (probably) related to some regressions in the dialog code ? Sorry if this is not the good place for posting this. @Zametuppa You are using the Dialog type from QtQuick.Dialogs 1.2, so this is a different issue than the original poster had with the Dialog type from QtQuick.Controls 2.0. The difference between the two Dialog types is that the former is a top-level window on platforms that support multiple top-level windows, whereas the latter is not a top-level window. @jpnurmi Sorry for this, you are right. I haven't made the switch to Qt5.8 and was looking at the wrong documentation anyway. 
Because some other objects have a Qt Quick and Qt Quick 2 documentation and I use Qt Quick 2, I automatically assumed that I should look at the Qt Quick 2 documentation for the Dialog. Thank you.
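What everyone in this thread wants, focus landing on the dialog's first focusable child as soon as it opens, can be modelled abstractly. The sketch below is plain Python, not Qt API; the Dialog/Item classes and the force_active_focus name are invented for illustration of what the QML forceActiveFocus() workaround achieves.

```python
# Toy model: opening a "dialog" hands focus to its first focusable child.

class Item:
    def __init__(self, name, focusable=True):
        self.name = name
        self.focusable = focusable

class Dialog:
    def __init__(self, children):
        self.children = children
        self.active_focus = None  # no child focused until the dialog opens

    def open(self):
        # Mimic the workaround: explicitly move focus when opening.
        self.force_active_focus()

    def force_active_focus(self):
        for child in self.children:
            if child.focusable:
                self.active_focus = child
                return

dlg = Dialog([Item("label", focusable=False), Item("textInput")])
dlg.open()
# dlg.active_focus is now the "textInput" item, so key events go there
```

The design point is that focus transfer happens as part of opening, not as a separate click, which is exactly what the one-line workaround restores while the regression is unfixed.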
https://forum.qt.io/topic/71840/qt-quick-dialog-focus-not-work/?
How to connect to the Kubernetes cluster from Terraform?

Hi guys, I am new to Terraform. I have a Kubernetes cluster on my system. I want to control the Kubernetes cluster using Terraform code. How can I do that?

Hi @akhtar,

You need to install the plugins using the terraform init command for the Kubernetes cluster. From the plugins you will get the Kubernetes provider. You need to use the provider name in your code as shown below.

provider "kubernetes" {
  config_context = "minikube"
}
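Since a provider block is plain text, the snippet above can also be generated programmatically, for example when templating configurations for several clusters. A small sketch (the "minikube" context name comes from the answer; the helper function is illustrative, not part of Terraform):

```python
# Render a minimal Terraform kubernetes provider block for a given
# kubeconfig context.  This only produces the HCL text; terraform init
# and terraform apply still do the real work.

def kubernetes_provider_block(context):
    return (
        'provider "kubernetes" {\n'
        f'  config_context = "{context}"\n'
        '}\n'
    )

print(kubernetes_provider_block("minikube"))
```

Writing the rendered text to a .tf file and running terraform init in that directory would then download the Kubernetes provider plugin.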
https://www.edureka.co/community/84842/how-to-connect-to-the-kubernetes-cluster-from-terraform?show=84845
Giacomo Pati wrote:
>
> --- Stefano Mazzocchi <stefano@apache.org> wrote:
> > Giacomo Pati wrote:
> > >
> > > Stefano Mazzocchi wrote:
> > > >
> > > > Still to do:
> > > >
> > > > 1) create a fake command line object model... yes, Giacomo, I'll do this
> > > > first as you suggested then we'll see
> > > >
> > > > 2) add link filtering semantics to the sitemap
> > > >
> > > > I'll have a finished CLI for Cocoon2.
> > > >
> > > > In order to totally replace stylebook documentation we still need
> > > >
> > > > 3) internal aggregation (using the cocoon: uri)
> > >
> > > So, please explain me this. I think I need to reorganize my priority
> > > list because my spare time is getting shorter the next weeks :( I think
> > > it would help if other people could help us doing the stuff that is
> > > still needed so much.
> >
> > Ok, I'll try to tackle it myself then (even if my time is getting short
> > as well now that the CLI is working as I wanted)
>
> I asked for explanation mainly because I have no idea what you meant
> with "internal aggregation", sorry to claim your time.

Oh, no problem... actually I have no idea myself of what I mean by
internal aggregation so I'll spend the afternoon to come up with a
proposal of some sort.

> > > I'm still implementing the forthcoming avalon release and it scares me
> > > a lot because there are many things to change and reimplement (ie. the
> > > nuked NamedComponentManager) :(
> >
> > I perfectly understand.
> >
> > > And there are still many other things to do before we can call it beta.
> > > Here is a small list:
> > >
> > > - XalanTransformers problem with namespaces
> > > - XMLSerializers problem with namespaces
> >
> > should we move to Xalan2 to fix these?
>
> +1000

right

> > Is there anyone that wants to help replacing Xalan1 with Xalan2 using
> > the TRaX interfaces?
> +100000 :)

> > > - integration of the ESQL logicsheet doesn't work because of a bug
> > > somewhere in the XSP stuff (it's not the integration itself that
> > > scares me but the fact that it generates uncompilable java code can
> > > cause other logicsheets to fail as well). I've sent it to Ricardo
> > > but didn't receive a valuable answer :(
> >
> > yes, we have to fix this ourselves.
> >
> > > - implementing the Action proposal. This weekend I had the time to
> > > study Struts because someone mentioned it here. I've seen that our
> > > Action proposal is exactly what Struts is all about (more or less).
> > > But I think we have a much more componentized model.
> >
> > hmmmm, I'll be -1 on entering beta state with such a big change in
> > paradigm... don't get me wrong, I've seen your action proposal and I
> > think it's a great addition to the sitemap model, but we haven't
> > played with it directly and we cannot guarantee the semantics/api/schema
> > will remain the same in the near future.
>
> To be honest, I'll need it for own projects. Then I will stay in the
> proposal directory to further develop it!

No, no. Move it to the trunk, blast that proposal directory... I'm not
saying I don't like it, just that it's not stable enough for a beta
release.

> > > - And other things I forgot to write down here.
> >
> > I'd like to have an alpha release soon.
>
> Another alpha or the first beta?

alpha, it will be alpha until the APIs are rock solid, so the code will
solidify on the way thru.

> > The showstopper, for me, it's stylebook presence: I won't release
> > with stylebook-generated docs... but C2 is currently not powerful
> > enough to match stylebook abilities.
> >
> > See next mail.
>
> Ok.

> > But after this, I'll write a small user guide and plan to come up
> > with an alpha release...
>
> Oh, you really mean alpha then.

yes

> > I don't care if things will change or not...
> > but I'm perfectly aware of the fact that without releasing, there
> > will be very few people trying it out and less chances to fix things
> > soon enough to avoid doing harm in the future.
> >
> > So, the day stylebook is matched by C2, I'd release the alpha, what
> > do you say?
>
> I have no problem with that. Release early and often :)

Great.

> > > > Giacomo, why don't you focus on #3 and I focus on #1 and #2? they
> > > > are totally orthogonal things and can be done in parallel.
> > > >
> > > > What do you say? It shouldn't take long, but you know better than me
> > > > where to place the pieces in that (admittedly very hard) sitemap.xsl
> > > > logicsheet (how about adding some comments to it? it is code after all)
> > >
> > > Yes, my friend. I'll do all the commenting as soon as I've finished
> > > all the other implementation stuff and I've disciplined myself to
> > > comment things during coding :)
> >
> > No problem... just reminding you of that.
>
> Yes, I need to be reminded!
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200010.mbox/%3C39D88D0A.6C831487@apache.org%3E
Unifier WSDL for ASP.NET? oweowe Jan 3, 2014 4:02 PM

After the URL changed to for Web Services Integration on 12/22/2013, our Web Services integration program (using ASP.NET C#) couldn't work anymore. This is the error I got when using Visual Studio 2010 to debug:

Unable to import WebService/Schema. Unable to import binding 'mainserviceSoapBinding' from namespace ''. Unable to import operation 'createObject'. The datatype '' is missing.

It appears the WSDL provided in the link: cannot be resolved by ASP.NET. We encountered similar issues before. They were resolved after Skire Technical Support provided a WSDL file compatible with ASP.NET. I am wondering if this is the same situation. Any thoughts on this?

1. Re: Unifier WSDL for ASP.NET? Marc DiNick Jan 4, 2014 5:40 PM (in response to oweowe)
If technical support previously solved a similar issue, I'd consider opening an SR with Oracle, particularly since this is related to the migration.

2. Re: Unifier WSDL for ASP.NET? oweowe Jan 5, 2014 3:11 PM (in response to Marc DiNick)
I think this is a generic issue for all customers using ASP.NET for Web Services integration with an Oracle host. This is why I posted here. 2 weeks have passed already. There is no solution from Oracle to our service request yet....

3. Re: Unifier WSDL for ASP.NET? 1063579 Jan 8, 2014 4:08 PM (in response to oweowe)
Did you get it fixed? We got a new WSDL from Oracle support and that fixed our issue.

4. Re: Unifier WSDL for ASP.NET? cba828a6-8b07-4002-b2f1-26c962c001ac Jan 14, 2014 2:11 AM (in response to oweowe)
We also have the same issue with the WSDL...

5. Re: Unifier WSDL for ASP.NET? user10066206 Jan 15, 2014 9:21 PM (in response to oweowe)
Sorry this didn't copy and paste very well, but it should help you... enjoy -Luther

The Unifier WSDL doesn't appear to conform to WS-I standards. However, with a slight change I've made it work with .NET.
The server move has caused some slight changes in the WSDL and you can't use the old WSDL with the new service, at least it didn't work for me, but following these instructions will allow you to generate a Web or Service reference in either case. This works with the new server for us, so hopefully it will for you too; our new URL is different than yours but I put yours in below for the example.

The only errors generated by the .NET WSDL tool are because of two references in the WSDL to "apachesoap:DataHandler", which is not defined in the WSDL and is proprietary. Fortunately, we don't seem to need these, at least not for getUDRData, which is what we have been using so far. You can download and manually edit the WSDL to make it work with .NET by changing the two references to "apachesoap:DataHandler" to "xsd:anyType". Here are some step-by-step instructions I wrote up after having to do this a couple of times.

- Create a new temp folder to store the WSDL
- Open a "Developer Command Prompt for VS2012" (e.g. Start -> All Programs -> Visual Studio 2012 -> Visual Studio Tools)
- Change to the new folder, e.g. "cd C:\test" from the command prompt
- Run the following command and observe similar output:

C:\test>disco
Microsoft (R) Web Services Discovery Utility
[Microsoft (R) .NET Framework, Version 4.0.30319.17929]
Disco found documents at the following URLs:
The following files hold the content found at the corresponding URLs:
.\mainservice.wsdl <- The file .\results.discomap holds links to each of these files.
C:\test>

- Note that two new files will be created, as follows:

C:\test>dir
 Volume in drive C is OS
 Volume Serial Number is 98C3-7CC5
 Directory of C:\test
01/15/2014 11:11 AM 86,072 mainservice.wsdl
01/15/2014 11:11 AM 431 results.discomap
 2 File(s) 86,503 bytes
C:\test>

- Open the newly created file mainservice.wsdl in your favorite text editor.
- Search for and replace "apachesoap:DataHandler" with "xsd:anyType". Two occurrences should be replaced.
- Save and close.
- You now have the option of manually running the wsdl tool to create the C# code and then adding that to your project, or letting Visual Studio take over from here. I recommend the latter so that you get entries in your app.config, but the following are instructions for both options.
- To have Visual Studio complete the process for you by adding a Service Reference, follow these steps:
  - Open your Visual Studio project. (I used Visual Studio 2012 for this)
  - Right click References and select Add Service Reference
  - In Address, put in the local file path to the WSDL, e.g. "C:\test\mainservice.wsdl", and then hit Go
  - You should see a Service on the left now called "MainService2Service"
  - Adjust the Namespace if desired
  - Click OK to complete the process
  - Observe that you now have a binding and endpoint for mainservice in your app.config.
  - For some reason the HTTPS in the URL gets changed to HTTP in the app.config; fix that and change it back to HTTPS.
  - Furthermore, to support HTTPS as a binding (vs. HTTP), edit the basicHttpBinding in the app.config so that it looks like this:

<basicHttpBinding>
  <binding name="mainserviceSoapBinding">
    <security mode="Transport">
      <transport clientCredentialType="None" />
    </security>
  </binding>
</basicHttpBinding>
…
- You can optionally generate the C# code manually by running the following command to generate the web services code: C:\test>wsdl results.discomap Microsoft (R) Web Services Description Language Utility [Microsoft (R) .NET Framework, Version 4.0.30319.17929] Warning: This web reference does not conform to WS-I Basic Profile v1.1. R2706: A wsdl:binding in a DESCRIPTION MUST use the value of "literal" for the use attribute in all soapbind:body, soapbind:fault, soapbind:header and soapbind:headerfault elements. - Input element soapbind:body of operation 'createObject' on portType 'mainserviceSoapBinding' from namespace ''. - Output element soapbind:body of operation 'createObject' on portType 'mainserviceSoapBinding' from namespace ''. … For more details on the WS-I Basic Profile v1.1, see the specification at. Writing file 'C:\test\MainService2Service.cs'. C:\test> 6. Re: Unifier WSDL for ASP.NET?cba828a6-8b07-4002-b2f1-26c962c001ac Jan 15, 2014 10:09 PM (in response to user10066206) Thanks Luther, that should work great.
https://community.oracle.com/message/11330258
I have found the offending code... he said sheepishly. In the __init__.py file I had the following:

def initialize(context):
    ...
    import traceback; traceback.print_exc()

instead of:

def initialize(context):
    try:
        ...
    except:
        import traceback; traceback.print_exc()

My thanks go out to all who helped.

Later,
Mike

> After some further investigation I have found that a product that I am
> developing is causing the error to appear. The product appears to work
> fine in Zope. I can add it, create objects with it and the objects are
> persistent. Can someone give me some advice on how to go about debugging
> my product to identify the offending code? The error appears in the
> following environments:
>
> SuSE 8.2 -- Zope 2.6.1
> WinXP Pro -- Zope 2.7.x
>
> Thanks,
> Mike
>
> _______________________________________________
> Zope-Dev maillist - [EMAIL PROTECTED]
>
> ** No cross posts or HTML encoding! **
> (Related lists -
> )
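The difference between the two patterns in runnable form: traceback reporting is only meaningful while an exception is actually being handled. Here do_registration is a stand-in for whatever the product's initialize() really does; it is not Zope API.

```python
# Correct pattern from the fix above: capture the traceback only when
# registration actually fails, inside an except block.
import traceback

def do_registration(context):
    # Hypothetical stand-in for the product's real initialization work.
    if context is None:
        raise ValueError("no context")

def initialize_fixed(context):
    """Run registration; return None on success, the traceback text on failure."""
    try:
        do_registration(context)
        return None
    except Exception:
        return traceback.format_exc()
```

Calling traceback.print_exc() unconditionally, as in the broken version, reports whatever stale or empty exception state happens to be around instead of a real failure, which is why it confused the debugging in the first place.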
https://www.mail-archive.com/zope-dev@zope.org/msg13932.html
Hans Kieserman wrote:
> Attached is the diff for a couple of minor lilypond updates.

I'll take a look.

> should I worry about the difference between
> Segment.getStartTime and Segment.getFirstEventTime, and/or
> will Events ever be non-contiguous? At the moment, it
> appears that rests are inserted when necessary.

I think it is reasonable to assume there are no gaps, although this is not guaranteed by the Segment format. All of the file import and editing operations _should_ be careful to preserve rests, if only by calling Segment::normalizeRests after doing some non-preserving modification. This means also that for "normal" editing segments, getStartTime and getFirstEventTime should return the same thing. (All of the above really applies only to segments found in compositions and used for interactive editing; there are also segments used in other contexts such as the clipboard, where the situation could be quite different.)

Note events can however overlap arbitrarily, and this probably has the potential to lead to some rather unexpected rest layouts as well. I'd say if that happens the user should expect to have to do a bit of work to make the meaning clear.

btw, I'd love to see Lilypond _import_ as well... okay, best not get ahead of things.

Chris
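The invariant normalizeRests maintains, no gaps between consecutive events, can be sketched in a toy version. This is illustrative Python with plain integer times; the real Rosegarden Segment/Event types and the actual Segment::normalizeRests implementation are much richer.

```python
# Toy rest normalization: fill every gap between consecutive note events
# with an explicit rest so the segment stays contiguous from start_time.

def normalize_rests(events, start_time):
    """events: list of (onset, duration) tuples, sorted by onset."""
    filled, cursor = [], start_time
    for onset, duration in events:
        if onset > cursor:
            # Gap before this note: insert a rest covering it.
            filled.append(("rest", cursor, onset - cursor))
        filled.append(("note", onset, duration))
        # Overlapping notes must not move the cursor backwards.
        cursor = max(cursor, onset + duration)
    return filled

print(normalize_rests([(4, 2), (8, 2)], 0))
# rests appear at time 0 (length 4) and time 6 (length 2)
```

The max() on the cursor is the detail that matters for the overlap case Chris mentions: overlapping notes keep the cursor at the furthest end reached, so no spurious rest is inserted inside a chord.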
Thanks again,
Hans

Index: lilypondio.cpp
===================================================================
RCS file: /cvsroot/rosegarden/gui/lilypondio.cpp,v
retrieving revision 1.2
diff -u -3 -p -r1.2 lilypondio.cpp
--- lilypondio.cpp	5 Jun 2002 18:36:51 -0000	1.2
+++ lilypondio.cpp	7 Jun 2002 17:54:56 -0000
@@ -1,3 +1,5 @@
+// -*- c-basic-offset: 4 -*-
+
 /*
     Rosegarden-4 v0.1
     A sequencer and musical notation editor.
@@ -59,12 +61,6 @@ LilypondExporter::~LilypondExporter()
     // nothing
 }

-static double
-convertTime(Rosegarden::timeT t)
-{
-    return double(t) / double(Note(Note::Crotchet).getDuration());
-}
-
 void
 LilypondExporter::handleStartingEvents(eventstartlist &eventsToStart, bool &addTie, std::ofstream &str)
 {
     for (eventstartlist::iterator m = eventsToStart.begin();
@@ -82,6 +78,10 @@ LilypondExporter::handleStartingEvents(e
         } else {
             // Not an indication
         }
+        // Incomplete: erase during iteration not guaranteed
+        // This is bad, but can't find docs on return value at end
+        // i.e. I want to increment m with m = events...erase(m), but
+        // what is returned when erase(eventsToStart.end())?
         eventsToStart.erase(m);
     }
     if (addTie) {
@@ -157,18 +157,18 @@ LilypondExporter::write()
 {
     std::ofstream str(m_fileName.c_str(), std::ios::out);
     if (!str) {
-        std::cerr << "LilypondExporter::write() - can't write file" << std::endl;
+        std::cerr << "LilypondExporter::write() - can't write file " << m_fileName << std::endl;
         return false;
     }

     // Lilypond header information
-    str << "\\version \"1.5.55\"\n";
+    str << "\\version \"1.4.10\"\n";
     str << "\\header {\n";
-    str << "\ttitle = \"Lilypond typesetting file\"\n";
+    str << "\ttitle = \"" << m_fileName << "\"\n";
     str << "\tsubtitle = \"Written by Rosegarden-4\"\n";
     if (m_composition->getCopyrightNote() != "") {
         str << "\tcopyright = \""
-//??? Incomplete: need to remove newlines from copyright note?
+            //??? Incomplete: need to remove newlines from copyright note?
             << m_composition->getCopyrightNote() << "\"\n";
     }
     str << "}\n";
@@ -194,15 +194,15 @@ LilypondExporter::write()
     bool isFlatKeySignature = false;
     int lastTrackIndex = -1;

+    // Lilypond remembers the duration of the last note or
+    // rest and reuses it unless explicitly changed.
+    Note::Type lastType = Note::QuarterNote;
+    int lastNumDots = 0;
+
     // Write out all segments for each Track
     for (Composition::iterator i = m_composition->begin();
          i != m_composition->end(); ++i) {

-        // Lilypond remembers the duration of the last note or
-        // rest and reuses it unless explicitly changed.
-        Note::Type lastType = Note::QuarterNote;
-        int lastNumDots = 0;
-
         timeT lastChordTime = m_composition->getStartMarker() - 1;
         bool currentlyWritingChord = false;
@@ -229,26 +229,25 @@ LilypondExporter::write()
         eventendlist eventsInProgress;
         eventstartlist eventsToStart;

-        // If the segment doesn't start at 0, add a "skip" to the start
-        // No worries about overlapping segments, because Voices can overlap
-        str << "\t\t\t\\context Voice {\n";
-        // [Perl|LISP] hackers unite!
-        timeT segmentStart = (*(++((*i)->begin())))->getAbsoluteTime();
-        if (segmentStart > 0) {
-//            Incomplete: Why does Note constructor segfault?
-//            long curNote = long(Note(Note::WholeNote).getDuration());
-//            int wholeNoteDuration = curNote;
-//            while (curNote > 0 && ((int)(segmentStart / curNote)) > 2) {
-//)) {
+        // If the segment doesn't start at 0, add a "skip" to the start
+        // No worries about overlapping segments, because Voices can overlap
+        str << "\t\t\t\\context Voice {\n";
+        // [Perl|LISP] hackers unite!
+        timeT segmentStart = (*i)->getStartTime(); // getFirstEventTime
+        if (segmentStart > 0) {
+            long curNote = long(Note(Note::WholeNote).getDuration());
+            long wholeNoteDuration = curNote;
+            while (curNote > 0 && (segmentStart / curNote) >= 1.0) {
+)) {
                 Note tmpNote = Note::getNearestNote((*j)->getDuration(),
                                                     MAX_DOTS);
@@ -286,8 +285,6 @@ LilypondExporter::write()
                 handleEndingEvents(eventsInProgress, j, str);

                 // Note pitch (need name as well as octave)
-                // Incomplete: Fun hack of the week- convert this
-                // into something smaller using ASCII arithmetic
                 // It is also possible to have "relative" pitches,
                 // but for simplicity we always use absolute pitch
                 // 60 is middle C, one unit is a half-step
@@ -392,7 +389,6 @@ LilypondExporter::write()
                     str << "> ";
                 }

-                // Incomplete: Set which note the clef should center on
                 str << "\n\t\t\t\\key ";
                 Key whichKey(**j);
                 isFlatKeySignature = !whichKey.isSharp();

Index: lilypondio.h
===================================================================
RCS file: /cvsroot/rosegarden/gui/lilypondio.h,v
retrieving revision 1.1
diff -u -3 -p -r1.1 lilypondio.h
--- lilypondio.h	4 Jun 2002 19:18:50 -0000	1.1
+++ lilypondio.h	7 Jun 2002 17:54:56 -0000
@@ -1,3 +1,5 @@
+// -*- c-basic-offset: 4 -*-
+
 /*
     Rosegarden-4 v0.1
     A sequencer and musical notation editor.

--
_______________________________________________

Enrique Robledo Arnuncio wrote:
> Rosegarden 0.1.6 is now running nicely here, compiled for ALSA and
> Jack (a pity this one does nothing yet, though I managed to set it
> up).

JACK output does (well, should) actually work; it's just that there's nothing to edit or add audio sample files with yet. If you try the test file outofspace.rg, though, you should hear some short samples playing (a beep, a kick drum, some hi hats).

R

Guillaume Laurent wrote:
> > move the splash screen by alt-clicking on it (depending on
> > your window manager settings).
>
> And indeed on your window manager.
(Although in any case this problem doesn't occur with all window managers -- I'd never noticed it, but I don't run KDE.)

We can probably find a better solution than we currently have, anyway. The problem is that you need the splashscreen to stay in front of the application window when that appears, but as the splashscreen appears first, it can't be a transient child of the application window, so it must be either always-on-top or else literally raised after the application appears. (In fact we do both, because some window managers ignore the always-on-top hint, but if always-on-top is available it usually provides a smoother effect, because otherwise the splashscreen disappears briefly as the main window appears.)

Either of these may obscure the error dialog, although the latter will only obscure it if it appears before the raise happens. In practice I think the error dialog appears after the explicit raise, so dropping the always-on-top status will probably fix the problem for most users at the expense of a bit of tidiness. A far better solution would be just to remove the splashscreen when an error condition occurs; I wonder how hard that would be?

Chris

On Friday 07 June 2002 10:36, Chris Cannam wrote:
> So I compared the test and release, and it turns out the release
> tarball contains a generated gui/rosegardentransport.h already,
> whereas test tarball didn't.

I really wonder how I managed to pull that one off. All I did was to untar the test tarball, cp the new INSTALL in, retar it. Most puzzling. I'll try replacing the tarball.

--
				Guillaume

Guillaume Laurent wrote:
> You're compiling on KDE2, right? This is a file generated by uic, so I guess
> there's an incompatibility between Qt2 and Qt3 versions.
>
> Since we didn't have any reports about this on 0.1.5, which Rich generated on
> KDE2, this means that we really can't generate a KDE2/KDE3 tarball on a KDE3
> platform. *sigh*.
I found this report rather surprising, since I'd compiled the last test tarball without trouble with KDE2, and I thought all you'd changed since then was the text of the INSTALL. But it's true: the release doesn't build.

So I compared the test and release, and it turns out the release tarball contains a generated gui/rosegardentransport.h already, whereas the test tarball didn't. Hence, to build the release with KDE2 you have to first delete gui/rosegardentransport.h.

Chris

On Friday 07 June 2002 02:16, Enrique Robledo Arnuncio wrote:
> I am sorry about this.

Don't be, it's our fault :-).

> and would not be enough to make a compilable-from-source Debian
> package, which is what I am now trying to do...

To do this it would be better to start from the CVS sources, then follow the indications in docs/howtos/release_build.txt to build your own tarball.

> Thanks for your great work!

You're welcome.

--
				Guillaume

On Friday 07 June 2002 05:49, kennyz@... wrote:
> .

No, you can just move the splash screen by alt-clicking on it (depending on your window manager settings). But you're still right, this is really bad.

--
				Guillaume

Rosegarden v0.1.6, KDE v3.0.1.

I believe error dialogs should be able to cover the splash screen. Otherwise, users cannot respond to these errors, and the startup process is halted. (Sidenote: I can start the application with "rosegarden --nosequencer", which does allow the startup to proceed properly. The splash screen disappears, and the main window opens.)

Sincerely,
Ken Zalewski

I just compiled and installed Rosegarden v0.1.6. The compilation and installation process went without a hitch (I am using KDE 3.0.1). However, the installer places the Rosegarden icons into $KDEDIR/share/icons/medium/..... and $KDEDIR/share/icons/small/..... KDE3 did not find these, because the icon directory structure has changed in KDE3.
I moved the hicolor medium-sized icon to:

    $KDEDIR/share/icons/hicolor/32x32/apps/rosegarden.xpm

and the hicolor small-sized icon to:

    $KDEDIR/share/icons/hicolor/16x16/apps/rosegarden.xpm

The icon was then found by KDE3 with no problems. Therefore, I suggest that you update the installation location of the icons in the Makefiles.

> You're compiling on KDE2, right ? This is a file generated by uic,
> so I guess there's an incompatibility between Qt2 and Qt3 versions.

Yes, I am using KDE2.

> Since we didn't have any reports about this on 0.1.5 which Rich
> generated on KDE2, this means that we really can't generate a
> KDE2/KDE3 tarball on a KDE3 platform. *sigh*.

I did not try 0.1.5... I am sorry about this. It is the only compilation error I had. Once fixed (a really trivial fix), it all went fine. But of course, the fix is applied to the generated file, so that would not be useful for you, and would not be enough to make a compilable-from-source Debian package, which is what I am now trying to do...

Rosegarden 0.1.6 is now running nicely here, compiled for ALSA and Jack (a pity this one does nothing yet, though I managed to set it up). It is really amazing! It seems I am going to spend much more time making noise with my computer from now on...

Thanks for your great work!

Enrique.
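Returning to the getStartTime/getFirstEventTime question from the lilypond thread above: the invariant Chris describes -- any gap, including one before the first event, gets filled with a rest by Segment::normalizeRests, so the two queries agree for normal editing segments -- can be modelled in a few lines. This is a toy Python model of the idea, not Rosegarden's actual C++ API:

```python
def normalize_rests(segment_start, events):
    """Fill gaps between timed events with explicit rests.

    events: list of (start, duration, kind) tuples sorted by start time.
    Returns a new list where every gap is covered by a "rest" entry, so
    the first event's time always equals the segment start -- the
    property that makes getStartTime and getFirstEventTime agree.
    """
    out = []
    cursor = segment_start
    for start, duration, kind in events:
        if start > cursor:
            # Gap before this event: cover it with a rest.
            out.append((cursor, start - cursor, "rest"))
        out.append((start, duration, kind))
        # Overlapping events are allowed; never move the cursor backwards.
        cursor = max(cursor, start + duration)
    return out

# A note entering one whole note (3840 toy time units) into the segment:
normalized = normalize_rests(0, [(3840, 960, "note")])
print(normalized)
# [(0, 3840, 'rest'), (3840, 960, 'note')]
```

After normalizing, the first entry starts at the segment start, which is exactly why an exporter can take either query as "where the music begins" for such segments.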
http://sourceforge.net/p/rosegarden/mailman/rosegarden-devel/?viewmonth=200206&viewday=7
Namespace from server?

My SOAP::Lite server is returning

    <namesp1:sayHelloResponse xmlns:

How can I get access to this element and set the namespace to something more descriptive? (Or does it matter what the namespace is?)

I know how to get rid of "namesp1" on the client: I make the method a SOAP::Data object and set "->attr({xmlns => 'urn:Hello'})". But I don't see how this can be done on the server, where it is apparently generating the "response" element.

Thanks,
Mark
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/3794?var=1