text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
A recent question on programmers.stackexchange.com asked What functionality does dynamic typing allow? I thought one of the best short answers to this was from Mark Ransom: Theoretically there’s nothing you can’t do in either, as long as the languages are Turing Complete. The more interesting question to me is what’s easy or natural in one vs. the other. This post is about providing an example to back that up, and to respond to people who claim that, since you can implement dynamic types in a statically typed language, statically typed languages give you all the benefits of dynamically typed languages. [Edit: to those who think I’m being a language or dynamic typing advocate or engaging in any kind of bashing, please read that last paragraph again, and note especially the use of word ‘all’.] Let’s set up a problem. It’s made up, but it illustrates the point I want to make: Given a file, ‘invoices.yaml’, take the first document in it, extract the ‘bill-to’ field, and save the data in it as JSON in an output file ‘address.json’. You can take it for granted that the contents of that field can be serialised as JSON (e.g. doesn’t contain dates), although that might not be true for the rest of the document. To keep the example focussed and simple, everything will be ASCII. The particular YAML file I used was taken from an example YAML document I found on the web, and then expanded for the sake of illustration: --- invoice: 34843 date : 2001-01-23 bill-to: given : Chris family : Dumars address: lines: | 458 Walkman Dr. Suite #292 city : Royal Oak state : MI postal : 48046 --- invoice: 34844 date : 2001-01-24 bill-to: given : Pete family : Smith address: lines: | 3 Amian Rd city : Royal Oak state : MI postal : 48047 I’ll use Python and Haskell as representatives of dynamic typing and static typing, because I know them and many would consider them to be very good representatives of their camps, and I’m a big fan of both languages. I also think that examining any programming problem in the abstract, or with respect to ideas like ‘dynamic typing’ or ‘static typing’, is not very relevant, because in the real world we have to use real, concrete languages, and they come with a whole set of properties (in terms of the language definition, tool sets, communities and libraries) that make a massive impact on how you actually use them. So I’m going to try to use real libraries that actually exist, ignore solutions that could theoretically exist but don’t, and ignore problems that could theoretically exist but don’t. Python Here is my Python solution: import yaml import json json.dump(list(yaml.load_all(open('invoices.yaml')))[0]['bill-to'], open('address.json', 'w')) Notes: I didn’t have to consult docs once. This isn’t just due to my familiarity with Python — it’s also the fact that I can fire up IPython and go: In [1]: import yaml In [2]: yaml.<TAB> and get a list of likely functions. I can then go: In [3]: yaml.load_all? and get help, or go: In [4]: yaml.load_all?? and get the complete source code of the function/method/class/module, in case I need it. Haskell Now for the Haskell version. First, a disclaimer: I’m much less experienced in Haskell than in Python. I did manage to write my blog software in Haskell at one point, but I don’t use Haskell on anything like a daily basis, and I do use Python that much. I first need to parse YAML. I’ve got a choice of packages. Unlike in Python, for a library like this, the choice you make is likely to have a big impact on the code you write — switching to a different (perhaps faster) package won’t be just a case of changing an import, as we will see. The choice of packages represents the fact that even designing how this thing should work in terms of API and data structures is not straightforward in Haskell, and represents a much bigger commitment, and therefore problem, for the library user. In Python, while there are a few API choices (like supporting streaming or not, potentially), mostly it’s pretty obvious how the library should work. Looking on Hackage, I first find the ‘yaml’ package. The first line of the Data.Yaml API docs reads: A JSON value represented as a Haskell value. (Yes, you read that right). This doesn’t look good. The whole file has stuff about JSON, not YAML, with no indication why I want to be using JSON values, not YAML. But I have a go anyway, perhaps it was deliberate. When trying to use the decodeFile function, I get an error about needing a type signature, due to the way decodeFile is defined: decodeFile :: FromJSON a => FilePath -> IO (Maybe a) There are lots of instances of FromJSON to choose from, but I have to know in advance the type of data. And it looks like I’ve got data that isn’t going to fit into any of those types, because it involves heterogenous collections. [Correction in comments, see below]. I gave up and tried another package - Data.Yaml.Syck. First try: import Data.Yaml.Syck main = do d <- parseYamlFile "invoices.yaml" print d This works - well, I’ve got some kind of parsing going on, at least. It looks like I’ve got some YamlNode datastructure, and the top thing is an EMap (it looks like it has only parsed the first document, which is worrying, but doesn’t matter given my requirements, so I’ll ignore that). But how do I get data out? OK, let’s try yaml-light - it wraps HsSyck and has some easier utility functions, like lookupYL.: lookupYL :: YamlLight -> YamlLight -> Maybe YamlLight That expects the lookup key to be a YamlLight, so I need to create one from a string, somehow. The docs show how to turn a ByteString into a YamlLight node, and I need to pass in a String, which from previous experience requires doing something like pack from Data.ByteString. My program so far: import Data.Yaml.YamlLight import Data.ByteString.Char8 (pack) import Data.Maybe main = do d <- parseYamlFile "invoices.yaml" print $ fromJust $ lookupYL (YStr $ pack "bill-to") d Which gives this output: YMap (fromList [(YStr "bill-to",YMap (fromList [(YStr "address",YMap (fromList [(YStr "city",YStr "Royal Oak"),(YStr "lines",YStr "458 Walkman Dr.\nSuite #292\n"),(YStr "postal",YStr "48046"),(YStr "state",YStr "MI")])),(YStr "family",YStr "Dumars"),(YStr "given",YStr "Chris")])),(YStr "date",YStr "2001-01-23"),(YStr "invoice",YStr "34843")]) Now I have to dump to JSON. From a Python perspective, all I want is a function that can take some ‘native values’ and dump them to JSON, like the Python json.dump function. But every piece of data in my data structure is wrapped in things like YStr and YMap. In addition, though I can see the structure of my data in front of me, the requirements I’ve been given don’t make guarantees that it will stay the same, just that it can be converted to JSON. I need a routine that will convert anything YAML to the equivalent in JSON, where that is possible. It looks like I could create a JSON instance for YamlLight, so that the encode function I want to use (which dumps JSON to a string) could take YamlLight as an input directly. I end up with this: import Data.Yaml.YamlLight (parseYamlFile, lookupYL, YamlLight(..), unStr) import Data.ByteString.Char8 (pack, unpack) import Text.JSON (JSON(..), encode, JSValue(..), toJSString, toJSObject) import Data.Maybe (fromJust) import Data.Map (toList) instance JSON YamlLight where showJSON yml = case yml of YStr bs -> JSString $ toJSString $ unpack bs YMap m -> JSObject $ toJSObject $ map (\(y1, y2) -> (unpack $ fromJust $ unStr y1, showJSON y2)) $ toList m YSeq ymls -> JSArray $ map showJSON ymls YNil -> JSNull main = do d <- parseYamlFile "invoices.yaml" writeFile "address.json" $ encode $ fromJust $ lookupYL (YStr $ pack "bill-to") d This works, and I’m sure there are other solutions. If I were cleverer, and knew Haskell better, I could perhaps write a cleverer, shorter solution, which would also be proportionately more difficult for someone else to understand, so I’m not particularly interested in making this code shorter, as it does the job. But this illustrates why some people like dynamically typed languages. The fact that you can implement a variant data type in Haskell (such as YamlLight or JSValue) doesn’t mean much, because these data types are not used everywhere, and therefore you have multiple competing ones that you’ve got to convert between. If you did have a single variant datatype that was used everywhere… you’d have a dynamically typed language, in effect. The strictness of the type system gave rise to a choice of libraries and APIs that made my life harder, not easier. I then had to write glue code to marshall between the dynamic types used by the two libraries I needed. [Edit: or, as it turned out, I need to know where to find it, possibly in the form of already written type class instances, or how to get the compiler to write it for me] Some people might still prefer the Haskell version. It has some nice properties, like the fact that compiler has checked that it can indeed convert any YAML object into JSON — you’d get a warning if you missed a case. One response to that might be that if the two types didn’t happen to match so well — for instance if the YAML library started supporting date/time objects — this benefit would disappear. If you need to avoid all possible problems up front, Haskell will help you out more. Python, on the other hand, will allow you to avoid spending time thinking about theoretical problems which may never happen in reality. But there are always runtime errors that you could come across, even in Haskell — for example, if you want to convert this to cope with non-ASCII documents, the compiler can’t point out all the places you need to fix, and if you forget one you could still get a runtime exception, or worse, silent data corruption. So, in my opinion, this is a case where dynamic typing shines, and the ability to implement dynamic typing on top of static typing simply doesn’t give you the benefits you get in a language that embraces dynamic typing to its core. There are, incidentally, some interesting developments in Haskell that might allow the possibility of running programs that aren’t quite typed correctly, as long as you don’t encounter the type errors in practice. This could counter some of the points I’ve raised — see this interview with Simon Peyton-Jones , from 27:45 onwards.
https://lukeplant.me.uk/blog/posts/dynamic-typing-in-a-statically-typed-language/
CC-MAIN-2017-39
refinedweb
1,828
68.1
issue. issue. The commission is empowered to penalise up to fifty million rupees or fifteen percent of the annual turnover of the industry which is indulging in illegal practices. There are demands that the fines be increased as these are still not enough to discourage anti-competitive practices. An official of the commission confided to The News On Sunday that CCP has been under pressure from some official and business quarters to prevent it from carrying any investigation against wheat, sugar, cement, pharmaceutical and other industries commonly believed to have been involved in anti-competitive practices. Officials say that CCP's inquiries and its decisions against banks, cement and fertilizer companies involved in cartelisation were creating more and more problems for the CCP, including the non-implementation of the competition law. "This has particularly resulted in government's continued refusal to allocate any budget for the CCP," sources added. In February this year, Chairman CCP Khalid A. Mirza told media that banks were operating like a cartel by charging unfair fees and paying less profit to their depositors, so an investigation was being conducted by the commission and in case banks were found to be involved in promoting cartelisation, action will be taken against them. Now the sources in CCP are claiming that the commission has been stopped from investigating. They say that State Bank of Pakistan wrote a letter to the Ministry of Finance demanding to keep the banks outside the purview of CCP. Similarly, when CCP started a probe into the cement sector, the Ministry of Industries approached the prime minister for a direction to CCP to stop such action for the time being as it would prove anti-productive and discourage industrial sector. Cartels are the most flagrant form of anti-competitive practice in which rival firms agree not to compete with each other and form an alliance aimed at limiting competition by jointly reducing output and raising prices. In this way they harm consumers because of upward movement in prices. Cartel busting is the most important activity of competition authorities around the world. Competition policy provides a well-functioning market for existing companies along with eliminating market entry barriers for nascent businesses. The authorities in the developed world aggressively fight cartels, as they are constitutionally empowered and are not influenced by any lobby trying to affect smooth functioning and implementation of their decisions. Across the globe, cartel activities are penalised. Record fines of more than $500 million have been imposed by competition authorities of the UK and the US on British Airways (BA) for colluding with Virgin on transatlantic flights. There are other airlines too, such as Korean Airlines, which have been fined. BA is also facing action under the EU laws and other jurisdictions. In Pakistan we don't see any such moves. Although Pakistan has had Monopoly Control Authority since 1970 under an anti-monopoly law namely 'Monopolies and Restrictive Trade Practices (Control and Prevention) Ordinance' (MRTPO) 1970. The Monopoly Control Authority (MCA), however, became irrelevant to the emerging corporatisation and consumerism because of its limited capacity. Although it wanted to go after cartels in cement, mineral water and pharmaceutical industry but the ratio of its fines was too low. In fact, on one occasion, the cement industry paid the nominal fine imposed by it and continued with the monopolistic activities as the margin of profit was much more than the amount of fine. The Monopoly Control Authority has been phased out and was replaced by CCP in December last year. But, as is the case in other donor-funded initiatives, CCP was incorporated through an ordinance at a time when emergency was imposed in the country. This short-cut legislation by some individual was carried out at a time when there was a caretaker government. This has deprived the commission of the requisite political strength to take on powerful business houses indulging in malpractices. The commission does not enjoy any backing of the parliament. Similarly the CCP Ordinance is also a weak law. One of its copies, put on the commission's website, does not give the procedure of the appointment of its chairman and members and their tenures. Similarly, it does not say which government body the commission is responsible to. Interestingly CCP's first chairman, a former World Bank employee, has also worked as the last chairman of Monopoly Control Authority for over a year. The CCP has its office in a restricted area in Islamabad's diplomatic enclave. In addition to government's feet-dragging in giving the commission the much-needed push, the commission so far has not made any mark since its establishment. Although the commission has announced that it is going to investigate the role of CNG station owners in the recent fixation of price, its other reports lack substance. A CCP report 'The Recent Cement Price Hike: Cartel or Not' of April this year carried out a study of price hike in 20 kg bag of cement on March 20 this year. But it did not conclude anything and said: "After analysing industry fundamentals the opinion of the commission is that the current price hike in cement could be the result of change in sector fundamentals affecting the demand and supply dynamics and due to commercial reasons. Nevertheless, the Commission cannot completely rule out the possibility that this across-the-board, simultaneous price increase may have arisen from collusive behaviour of the incumbent cement manufacturers." The report further states that cement is a classic product/industry which possess characteristics which tempt manufacturers to indulge in collusive or cartel-like practices. These characteristics are homogeneity and substitutability of product and excess/unutilised capacity. "Further, despite there being 29 cement manufacturers -- a fact which should encourage competition amongst manufacturers for market share -- there is concentration of market power with a few leading companies, namely Lucky Cement, DG Khan Cement, Bestway Cement, Maple Leaf Cement, Askari Cement & Fauji Cement, which control 65% of market between them." When the CCP investigated the price hike in cement it was raised from Rs 240 per bag to Rs 260 per bag. At present the price of per 20 kg bag is between Rs 360 to 370.. 'Shah Sahib,' as everyone respectfully and affectionately called Afridi, was a smiling, pleasant man in his early forties, immaculately dressed in crisp white shalwar kameez. At the make-shift offices of The Frontier Post above a car repair workshop in Lahore's bustling city centre, he was a genial, down-to-earth presence into whose office anyone, from a lowly guard to a young reporter, could enter without an appointment and be offered a cup of tea -- part of the egalitarian tribal code alien to class-conscious urban Pakistan. Shah Sahib countered rumours about his involvement in 'drug smuggling' by pointing out that his clan, the Afridi tribe, was legally engaged in cross-border trade with Afghanistan as part of an old agreement with the former British colonisers. Aziz Siddiqui had by then joined the Human Rights Commission of Pakistan (HRCP) as co-director along with his close friend and fellow journalist I.A. Rehman who was Director of HRCP. The organisation was among those that protested Afridi's arrest in 1999 on what most journalists believe to be trumped up charges of drug trafficking. After a district court on June 27, 2001 condemned Afridi to death by hanging, he spent the next three years on death row. There was sporadic news of him once he was convicted. One of his lawyers told me that he was terribly ill at one point and had lost much weight. The Lahore High Court on June 3, 2004 commuted his death sentence on the grounds that trafficking in hashish is not a capital crime. Still, he remained in Lahore's notorious Kot Lakhpat Jail for nearly a decade, with courts periodically turning down his bail applications, pleas to move him to a prison in Peshawar closer to his family and appeals for proper medical care. He was finally released on bail in May this year. Afridi's case symbolises a larger issue: the regular travesties of the justice system in Pakistan. He had the resources to hire well known, competent lawyers who got his death sentence converted to life imprisonment although even they could not manage to get him paroled or acquitted. Most of the 95,000 detainees crammed into Pakistan's over-crowded prisons have no such resources. Only about a third -- 31,400 or so -- have been convicted. A staggeringly large number of convicts are on death row -- over 7,000, including almost 40 women. Death row inmates "are either involved in lengthy appeals processes or awaiting execution after all appeals have been exhausted," noted the New York-based Human Rights Watch in a letter to the Pakistani prime minister on June 18 of this year. Appeals typically linger on for at least a decade, more often two. The letter urged Pakistan to abolish the death penalty and until then, to at least sign a UN moratorium on any further executions. "The number of persons sentenced to death in Pakistan and executed every year is among the highest in the world, with a sharp increase in executions in recent years" (134 in 2007, up from 82 in 2006, 52 in 2005, and 15 in 2004). Afridi's arrest in Lahore on the night of April 1, 1999 seemed like a bad joke. He was held without charge, beaten and tortured. Nothing surprising about that -- in the absence of proper forensic equipment and training, most police cases rely on witness testimonies and confessions routinely obtained through torture and intimidation, as the HRCP documents in its monitoring reports every year. The prime minister at the time of Afridi's arrest was Nawaz Sharif of whom The Frontier Post and its sister Urdu language publication the daily Maidan had been bitingly critical. The Sharif regime did not have a good track record with the media. They had earlier tried to squeeze the Jang group of newspapers (where I then worked), and prior to that, arrested Najam Sethi, Editor of weekly The Friday Times for making an 'unpatriotic' speech in 'enemy territory' (India). Sharif and his henchman, the all-powerful Saifur Rehman - through whom these actions were taken -- backed down from both cases only after journalists in Pakistan created a major uproar, which was also taken up internationally. The Pakistan Federal Union of Journalists (PFUJ) had also protested Afridi's arrest and held public demonstrations for his release. International organisations like the Reporters Sans Frontiers (RSF), Amnesty International and the Committee to Protect Journalists (CPJ) also took up Afridi's case and appealed to the government for his release. "However, once he was convicted there was little we could have done," reflected Mazhar Abbas, General Secretary of the PFUJ when I asked him why there wasn't more public outrage about the case. He added that the newspaper editors' and owners' bodies had "backed out of a joint struggle because of professional rivalries, since The Frontier Post had the potential to challenge some of their publications. Otherwise, he would have been out of prison much earlier." After army chief General Pervez Musharraf overthrew Sharif in a military coup of October 1999, there were hopes that Afridi would soon be freed. However, there were powerful forces ranged against him, including the Anti-Narcotics Force (ANF) officers who had arrested him (against whom his papers had written for their involvement in drug trafficking). It was not until after a civilian government came to power following the general elections of February 2008 that the provincial interior ministry ordered Afridi to be released on parole on May 24, on the grounds of good behaviour. Faisal Siddiqi, a young advocate at the Sindh High Court in Karachi who often takes on pro bono cases, told me that those who are awarded capital punishment are usually "the poorest of the poor." Most of them are illiterate and have no resources or support. Along with the HRCP's Javed Burki, a grizzled older advocate, Faisal tries to help condemned prisoners in Karachi Central Prison. "In death penalty cases, the absence of an effective private counsel appears to be the difference between whether the death penalty is confirmed or set aside. Prisoners are condemned 'not for the worst crime but for the worst lawyer,'" he says, quoting a 1994 Yale Law Review study. "Poor people lack access to competent counsel at both the trial and appellate stages," according to Human Rights Watch. "According to one study conducted in 2002, 71 percent of condemned prisoners in the NWFP were uneducated and over half (51 percent) had a monthly income below Rs 4,000 ($50 USD). The average fee for an appeal to the High Court in murder cases is around Rs 60,000 (about $900 USD). This creates an unequal system of justice, in which those with financial or political resources are able to obtain better legal services and avoid the death penalty." Sometimes, they don't even get a lawyer. In one recent case, an illiterate army janitor called Zahid Masih was hanged in Peshawar Central Jail after a court martial, having been denied a civilian legal counsel, noted Human Rights Watch in its letter. Three days later, the government announced (on the occasion of the late former Prime Minister Benazir Bhutto's birthday, June 21, 2008) that it was proposing to commute all death sentences to life imprisonment except for terrorists and those convicted of attempting to assassinate President Pervez Musharraf. The federal cabinet approved the proposal on July 2. However, the Supreme Court of Pakistan has since taken suo moto notice of the proposal, perhaps in response to opposition from the right-wing lobby which argues that the move would go against the Constitution of Pakistan as well as the teachings of Islam. Until the matter is decided, the only ray of hope for Pakistan's condemned prisoners is the Prime Minister's proposed commutation. This article was originally published by the Women's International Perspective ()! The English, who are notoriously finicky eaters, can go to a restaurant and spend an hour over a slice of Yorkshire Pudding which is all of a millimetre thick and you can see through it, and what they call "two veg" which is half a dozen peas and four beans. Yet the same people at a country breakfast can start the day tucking into kippers and fried kidneys. The French start very civilized, with a cup of coffee and a croissant, and then the rest of their lives talking about "haute cuisine" which is mainly showmanship and very little substance. And one has watched Italians spend a leisurely three ours over a lunch which starts with a ton of spaghetti and meatballs, meal enough for a dozen people, for soup, and end with juice dripping down a peach. Back home here it is very much a class thing. The chattering classes in the suburbs follow the English tradition and breakfast is a delicate little half-boiled egg eaten with a slice if toasted sandwich bread. Meantime the working classes can tuck into a brimming bowl of trotters, or meat cooked overnight with enough calories to keep the city lit for a week. Only the Americans are consistently healthy eaters and at an eatery they demand value for their money. An American breakfast, at an 'International House of Pancakes' or 'Friendlies' can be an endless feast starting with cranberry juice, and then wading through a sixteen ounce steak, sunny side up eggs and then half a dozen pancakes -- which are really crepes but so thick and juicy the French would kill themselves -- drowned in lashings of maple syrup. The good news is eateries are springing up in Lahore offering full-fledged American breakfasts which is obvious from the ton of hash-brownies that comes with each order of three-egg omelette, or easy-over eggs and coffee and fresh fruit juice. This is not really the US so they say it's a twenty four hour service, but you can't get it after eleven cause they are preparing for lunch. But in other ways we are one step ahead of the yanks. When they talk of steak, they mean beef-steak; here you can get a steak made of free-range chicken! And the pancakes come here with an added assortment of freshly baked muffins in all shapes and sizes and the maple syrup alternates with honey! The trotters can eat their hearts out! Of course in all this hullaballoo, I have forgotten what is the ultimate breakfast, which is Halva+Puree+Bhajee! Puree is the delicate equivalent of Crepe or pancake, only difference is it is then deep fried in oil and fished out dripping and served! And the Halva is the accompanying sweet dish to be eaten with the Puree or by itself as the Maal Puraa -- Hah, you didn't know that is what it was called!. Recently he was on a personal visit to Lahore during which The News on Sunday got a chance to interview him in the backdrop of Pak government's decision to increase imports from India. Excerpts follow: By Shahzada Irfan Ahmed The News on Sunday: The PPP government is being criticised for announcing an 'India-centric' trade policy while many outstanding issues between the two states remain unresolved. What would you say on this? Pradeep S Mehta: First, let me congratulate the Government of Pakistan on this visionary, bold and pro-Pakistan step. However, I would like to make a correction. It was Pakistan, sometime ago, under President Musharraf which first decided to put other bilateral matters on hold and concentrate more on bilateral and regional trade. This means even an army chief had to change his stance and focus his attention on issues related to trade and other fields where the people of the two countries could cooperate. About the second part of the question I would say the trade policy of Pakistan is not at all country specific. I feel the decision has been taken only for the reason that cheap imports from India, especially those of industrial raw materials, are the best option the country has to stay competitive in the international market, and also curb inflation. I don't want to say that the two countries must forget the outstanding issues between them. What I am suggesting is that these countries must not link mutual trade with the resolution of some difficult issues. I am sure peace will follow if trade is allowed to grow freely. You may call it 'peace dividend' if you want to use a pure economic term. This is by no means a utopian thought. Brazil and Argentina, who were once at daggers drawn with each other, had to abandon their nuclear programmes and concentrate mainly on mutual trade. As a result smaller neighbouring countries of that region have also benefited. TNS: Pakistan already has a yawning trade deficit with India which may increase manifold if the said trade policy is implemented. How do you think can Pakistan overcome this worry? PSM: Here I would try to remove a misperception that trade deficit is detrimental to an economy's growth or survival. What if Pakistan has trade deficit with India? It also has a trade deficit with China. The United States has had trade deficit with the rest of the world, forever. Even China has trade deficit with India. It's a fact that out of the $30 billion India-China trade, Indian exports have an overwhelming share. China imports cheap raw material from India and sells the finished goods to the world. I think Pakistan can also benefit from the import of cheap Indian raw materials and minimise the ever-increasing costs of production its industry is burdened with. Another count on which Pakistan can gain a lot is the transportation cost. The cost of transporting goods from the neighbouring India is much cheaper than getting them from distant countries. With global fuel prices constantly on the rise the transport cost factor is becoming more and more significant. About the import of cheap finished goods, it is very much expected that some businesses will shut down for being non-competitive. It happened in India too, when we had to liberalise the import of consumer goods. But at the same time the end consumers will have the opportunity to buy cheaper goods. Everywhere in the world imports are used to control prices and promote healthy competition. TNS: There is a perception among many Pakistanis that the goodwill measures have been one-sided. They say while Pakistan is opening up its market, India has not shunned its habit of imposing non-tariff barriers on imports from Pakistan. PSM: The perception is very much there on both sides of the border. But the fact is that they are mostly the sector lobbies that try to protect their interests by pressurising their respective governments. For example the Indian cement industry opposed cement imports from Pakistan on grounds that the product does not meet international quality standards. But finally the imports were allowed. My point is that if Pakistanis are using the same cement and their buildings are not coming down, there's no harm importing it. Similarly the Pakistani sugar industry kept on propagating for long that Indian sugar has excessive phosphorus content which is harmful to health. Here I would say if Indians can consume the same sugar without getting affected why can't Pakistanis have it. Some imports have often been opposed on religious grounds. In this respect I would say if the Arab states can import packed buffalo meat why can't any other Muslim country. In the overall context, if Pakistan liberalises imports from India, let me assure you that India will not lag behind. With the high growth in India, the market is huge and can absorb many goods from Pakistan, whether intermediates or finished goods. TNS: How helpful can import of fuel from India be for Pakistan? Do you think the two countries can work together to overcome the deepening energy crises especially that faces Pakistan? PSM: I have been asked this question several times over the last couple of days. Here I would like to clarify that India cannot help Pakistan much in terms of prices which are determined internationally. However, Pakistan can save a lot on the transportation cost normally incurred on getting oil goods from some other country. Indian refineries have a growing capacity to produce fuel much beyond India's domestic needs. India can export some of this surplus fuel to Pakistan. Another project that can bring India and Pakistan closer and help meet their energy needs is the proposed gas pipeline that will come to India from Iran and pass through Pakistan. Pakistan will be able to earn a huge rental if this project gets through as well as get its share of the natural gas passing through it. India cannot sell electricity to Pakistan as it is facing energy crisis itself. It is as severe as the one haunting Pakistan. It's a fact that hardly 10 years back India was planning to buy 500 megawatt electricity from Pakistan. It is strange that the same country, which had surplus energy at that time, is unable to meet its domestic needs. Down the line, new investments in the power sector in Pakistan did not happen. That is something which your government should address more seriously. TNS: You seem quite optimistic. But how can you convince those who fear smaller economies in South Asia will collapse due to India's exponential economic growth? PSM: This is another Cassandraic and poorly-argued thought. All economies of the region can ride on the bandwagon of India's huge economic growth and gain. That has been the experience around similar situations. Let me take the example of Vietnam which is growing at 10 percent per annum for long, in spite of the giant China. One critical factor is that the government in Vietnam has kept their economy open, rather than close it to competition. This only goes on to prove my point. Saath chalen gay tu sab ka fayda hoga!. These Pakistani seamen, nineteen in all, working on board Birger Jarl were served termination notices on May 15 with a three-month grace period. Birger Jarl, a cruise ship, is owned by Rederi Allandia shipping company. Rauf Butt has been working on Birger Jarl since 1987 and has been asked to leave. "I am soon going to turn 60, am diabetic and have developed pain in my knees; long time work on board a ship causes such ailments. I think it is due to constant vibrations," he says. He has no idea what he'd do if fired and forced to go back. "Rederi Allandia justifies the redundancies on the pretext that under the new EU legislation on employment, the non-EU workers deprive the ship of monetary support the company is otherwise entitled from EU," says Rauf. Rauf thinks the citing of EU-law as a reason to fire the workers is merely an excuse. "Actually these workers, some working for about twenty-five years, earn very good wages now because of the yearly pay raise. The company wants to fire them and employ new workers from East European countries at low wages." All these Pakistani workers were members of SEKO-Sjefolk (Service and Communication Workers Union-Seamen). Rauf accuses "the union bureaucracy" for "betraying the workers' cause as Pakistani workers rank low on bureaucracy's priority list." The resentment against Seko was strong when this scribe visited these workers at a meeting. One worker, Azhar, had saved in his mobile an SMS he got from Seko shop steward, Tedde Scot. The SMS read: "'No point in calling. We all lose jobs now. Why couldn't you be satisfied with the money you already earned?". "The Seko section on Birger Jarl has been telling these workers that if they fight back for their jobs, the company would go bankrupt. Hence, according to Seko, these Pakistani workers better sacrifice their jobs in order to save the jobs of Swedish workers," says Tariq Nasim, another worker hit by redundancies. Regardless of what these workers called "union betrayal", the workers have not given up the fight. Having lost hope in Seko, affiliated with Social-democrats, they have joined the syndicalist union SAC. Now, the Syndicalists have taken up the cause of these workers and SAC is representing them in this conflict with the company. The SAC has started a blockade since July 8. The blockade has got a lot of media attention. It was covered by national TV, SvT, when it began. On July 14, country's largest daily Dagens Nyheter devoted a full page to this. "The job is a matter of life and death as many of us have now reached their fifties. On return to Pakistan, nobody hopes to find either a job or start a business," says Rauf Butt. Termination for Rauf Butt, like his other Pakistani colleagues, means returning to Pakistan since they are not Swedish citizens even after working for over 20 years. "The company would fix visas for us on annual basis. We would work for three months and return to Pakistan for three months," Rauf Butt explains. The Swedish workers work for one week and are off the next week. The Pakistani workers, according to Rauf Butt, wanted the same privilege but the company would deny them the rights that were enjoyed by the Swedish workers. "I once worked one week and took the next week as off. I kept doing it for two years, but the company refused to get me a visa. Hence, I was forced to do as the other Pakistanis were doing," he says. "The company is now exploiting this visa situation," says Humayun who has his visa expired. "For instance, three workers -- Ishtiaq, Humayun and Azhar -- have had their visas expired. When their visas expired, they were told to leave the ship. Now in a way they are living as illegal citizens in Sweden," says Butt. These three workers had applied for new visas but the company did not send relevant papers to Migrationverket to help them renew their visas. All the workers will have their visas expired latest by September. "The company thinks once that happens, they would not cause any trouble and can easily be dealt through police and Migrationverket," says Rauf Butt. "We are counting on the solidarity of the Swedish workers and fighting back," Butt adds. An international solidarity campaign has also been launched to help save these jobs. All the details and updates are available on birgerjarl.info.. Although Sumar has had her education in the United States and that too in the subject of Political Science, she seems to have no sense of politics. Sumar rightly points out that democracy comes out of the bourgeoisie traditions, French Revolution, and stands for individual rights, but she has herself refused these individuals rights by supporting Musharraf's imposition of emergency. She asks the question: what do the people of Pakistan want? The answer is very simple: individual rights which were curtailed by the president on March 9, 2007 and then in a more pronounced way on Nov 3, 2007. The claim that the president has a democratic mind is little better than a joke. Her statement that Pakistanis are feudal-minded has often been made in the past by people in power who benefit by status quo. A close study of the political history of Western nations reveals that they attained their current political system -- which happens to be democracy -- after a struggle of centuries. How can someone say that ballot box or voting cannot bring democracy in Pakistan? Were most of the political thinkers, who supported democracy as the best form of government, fools? What did the military dictator do in the last eight years? Where are the power generation projects? How much has been spent on education sector? What have we done to solve Fata and Balochistan problems? Who is responsible for the massacre at Lal Masjid? What has our justice-loving president done with the superior judiciary of Pakistan? Who is responsible for the Kargil crises and the humiliating defeat? If Musharraf is so progressive and democratic, why hasn't he he found a political solution to these problems? People of Pakistan have used the ballot box to change their fate. If Nawaz Sharif sent Najam Sethi to jail, Musharraf sent the whole media home. He closed down half a dozen news channels with Geo suffering the most by seeing a ban of almost 100 days by the 'freedom-loving' and 'freedom-giving' president. If Nawaz Sharif had to be thrown out, not only from the government but also from his homeland, on charges that he did not let the aeroplane carrying General Musharraf land, thus endangering the lives of 180 people on board, Musharraf is responsible for all the lives claimed in all the terrorist acts, suicide bombings, and Pakistani and US forces' attacks on FATA, and Lal Masjid massacre. How can we call a person progressive, benevolent and secular-minded if he sacks the entire judiciary of the country to save his unconstitutional rule? What if a person says that I don't like corrupt people, but at the same time seeks support from NABzada politicians? The judges refusing to vacate official residences were taken to task but not a president leaving the Army House. If a woman working in the fields says she would vote for whoever her family says, it means her relatives are peasants of a very powerful landlord who is part of the political elite. How can she or her family dare to caste vote according to their own sweet will and that too without having any education and political socialisation? Does Sabiha Sumar really believe Musharraf has a solution to break the power of this powerful political elite?. The programme was quite different to the usual talk shows that one gets to see on television, not least because it had three women on it who were quite presentable but also because right in the middle of it, a man (who surely was a eunuch, not that one has anything against people of the third gender, but the way this segment was placed seemed awkward at best) suddenly came before the camera and started flapping his feet and arms about. It turns out that this 'man' happened to be Noor's friend, though she did not even have the courtesy to introduce him/her? to the audience. Only later it transpired, after Noor, apparently one of Lollywood's better dancers, tangoed with 'him', that his name was 'Bobby' (why do people of dubious gender in this country usually have such names? -- for instance, take the case of this other queen, Babloo, who apparently is a make-up artist in Lahore). Back to the guests -- first there was Mona Lisa -- of course someone needs to ask her why she is using the same stage name as the famous sixteenth century member of a reputed Italian family when her own accent in Urdu sounds as if she is Sindhi. Ms Lisa spoke at some length on her to-be-released film by Indian director Mahesh Bhatt and his daughter Pooja Bhatt. Since the Bhatts are known for using unknown -- but pretty -- names in his films, who also usually then have roles in which they reveal themselves considerably, the talk also centered on what kind of role Ms Lisa would have -- to her credit, instead of beating about the bush, she said that she did whatever the role demanded and that she wouldn't call such exposure vulgar. As for Meera, she was the last person to enter the show and as she walked in she seemed to, initially at least, greet only Saleem Shaikh. She then went to talk about the film that she and Saleem Shaikh are in but the more interesting thing that she said was when Noor asked her a question about the English language and what she thought of people who believed that she (as in Meera) was not all that proficient in it. And, again (pleasantly) surprisingly, the actress more or less said that those who criticised her for not speaking properly in English need to get basically a life. And one has to agree completely on this point. Those people who make fun of others who, according to them (i.e. the critics), do not speak English with the correct enunciation/pronunciation and intonation need to understand that English is not a Pakistani's mother tongue and that one can and should be able to have a perfectly decent conversation in the language of one's choosing -- which usually is one's own mother tongue. Of course, if one is equally or (as is the case with many English-speakers in Pakistan) more fluent in another language then that is fine also. However, it should not be made the basis for making fun of and ridiculing those who attempt to speak in that other language and make a hash of it. So, if Meera cannot speak properly in English or says many of its words in the 'wrong' way or accent that shouldn't be an issue, as long as she can communicate with other people effectively. In such a case, one would advise such people to stick to speaking and communicating in the language that they are most comfortable in because surely what they are saying is more important than how they are saying it. And this is where the shallowness of those who make fun of others who are deemed not proficient in English comes through -- they are in effect labelling a judging a person, usually without ever having met or coming to know them, on the basis of how they speak a language -- and which isn't even anyone's mother tongue! Lest people call me a hypocrite, I should clarify that yes I do speak and communicate mostly in English and I also think in English (and not so much in Urdu). But the point of what I am trying to say is that gaining proficiency/fluency in English should not and does not give anyone the right to make fun of others who are perceived as not speaking it in the 'proper' manner. In any case, what is this 'correct' way of speaking -- do we expect Pakistanis, most of whom are taught English only from Class VI (the standard route in government schools) to speak like BBC presenters? Far more ridiculous that Meera or Reema speaking English 'badly' is the thought that an entire segment of society makes fun of them given that English is no one's mother tongue or that it is generally not possible to speak it all the time in one's immediate environment. The writer is Editorial Pages Editor of The News. omarq@cyber.net.pk |Home|Daily Jang|The News|Sales & Advt|Contact Us| BACK ISSUES
http://jang.com.pk/thenews/jul2008-weekly/nos-27-07-2008/dia.htm
crawl-002
refinedweb
6,205
58.11
Another rookie question from Bumfluff: Is it possible to use characters in if statemnets? ie if myvariable == Y etc. Thanks Printable View Another rookie question from Bumfluff: Is it possible to use characters in if statemnets? ie if myvariable == Y etc. Thanks Yep. But, a character literal is written: 'Y'. Note the single quotes. Ok, single quotes, got it. At first I thought you said not the single quotes but that would be as I just typed it. Is it possible to use a word or sentence? Use double quotes for strings, which hold 0 or more characters. You should be using the C++ string class for your string variables. If you are, then you can use == like you do with char variables. If you use C style strings, which are just character arrays with a null character to indicate the end of the string, then you cannot use ==, you have to use a function like strcmp to compare the strings. I tried a string in my programme but it doesnt work properly: Code: #include <iostream> #include <string> #include <stdlib.h> using namespace std; int main() { string Question = ""; string LoopStop = ""; do { cout<<"Please Ask The Computer A Question:"<<endl; getline(cin,Question); cout<<endl; cout<<"NO!"<<endl; cout<<" \n"<<flush; cout<<"Would you like to continue? (yes / no *note lower case)"<<endl; getline(cin,Question); cin.ignore(); cout<<" \n"<<flush; system ("CLS"); } while ( LoopStop == "yes" ); return 0; } What doesn't work? You should give that kind of information so we can help better. I did notice that your second call to getline reads into Question, when you probably meant LoopStop. Also, I'm not sure that the cin.ignore() is necessary there (you might want it outside the loop). edit - I just noticed your other thread. The cin.ignore() was required there because you used cin >>. It is not required after getline because getline automatically ignores the newline character at the end of the user input from when the user hits enter. Stupid me! The getline Question was a stupid mistake and thanks for the advice. God I have learned a lot in the two days since I joined this forum.
http://cboard.cprogramming.com/cplusplus-programming/71907-if-char-printable-thread.html
CC-MAIN-2015-35
refinedweb
361
75.5
Abstract This document describes the extension of the latex module for sympy. The python module latex_ex extends the capabilities of the current latex (while preserving the current capabilities) to geometric algebra multivectors, numpy array’s, and extends the ascii formatting of greek symbols, accents, and subscripts and superscripts. Additionally the module is configured to use the print command to generate a LaTeX output file and display it using xdvi in Linux and yap in Windows. To get LaTeX displayed text latex and xdvi must be installed on a Linux system and MikTex on a Windows system. One of the main extensions in latex_ex is the ability to encode complex symbols (multiple greek letters with accents and superscripts and subscripts) is ascii strings containing only letters, numbers, and underscores. These restrictions allow sympy variable names to represent complex symbols. For example if we use the function symbols() as follows: xalpha,Gammavec__1_rho,delta__j_k = symbols('xalpha Gammavec__1_rho delta__j_k') symbols creates the three sympy symbols xalpha, Gammavec__1_rho, and delta__j_k. If these symbols are printed with the latex_ex modules the results are A single underscore denotes a subscript and a double undscore a superscript. In addition to all normal LaTeX accents boldmath is supported so that omegaomegabm\(\rightarrow \Omega\boldsymbol{\Omega}\) so an accent (or boldmath) only applies to the character (characters in the case of the string representing a greek letter) immediately preceeding the accent command. The actual LatexPrinter class is hidden with the helper functions Format(), LaTeX(), and xdvi(). LatexPrinter is setup when Format() is called. In addition to setting format switches in LatexPrinter, Format() does two other critical tasks. Firstly, the sympy function Basic.__str__() is redirected to the LatexPrinter helper function LaTeX(). If nothing more that this were done the python print command would output LaTeX code. Secondly, sys.stdout is redirected to a file until xdvi() is called. This file is then compiled with the latex program (if present) and the dvi output file is displayed with the xdvi program (if present). Thus for LatexPrinter to display the output in a window both latex and xdvi must be installed on your system and in the execution path. One problem for LatexPrinter is determining when equation mode should be use in the output formatting. To allow for printing output not in equation mode the program attemps to determine from the output string context when to use equation mode. The current test is to use equation mode if the string contains an =, _, ^, or \. This is not a foolproof method. The safest thing to do if you wish to print an object, X, in math mode is to use print ‘X =’,X so the = sign triggers math mode. The LatexPrinter class functions are not called directly, but rather are called when print(), LaTeX(), or str() are called. The two new functions for extending the latex module are _print_ndarray() and _print_MV(). Some other functions in LatexPrinter have been modified to increase their utility. _print_ndarray() returns a latex formatted string for the expr equal to a numpy array with elements that can be sympy expressions. The numpy array’s can have up to three dimensions. _print_MV() returns a latex formatted string for the expr equal to a GA multivector. str_basic() returns a string without the latex formatting provided by LatexPrinter. This is needed since LatexPrinter takes over the str() fuction and there are instances when the unformatted string is needed such as during the automatic generation of multivector coefficients and the reduction of multivector coefficients for printing. Format iniailizes LatexPrinter and set the format for sympy symbols, functions, and derivatives and for GA multivectors. The switch are encoded in the text string argument of Format as follows. It is assumed that the text string fmt always contains four integers separated by blanks. LaTeX() returns the latex formatted string for the sympy, GA, or numpy expression expr. This is needed since numpy cannot be subclassed and hence cannot be used with the LatexPrinter modified print() command. Thus is A is a numpy array containing sympy expressions one cannot simply code print A but rather must use print LaTeX(A) xdvi() postprocesses the output of the print statements and generates the latex file with name filename. If the latex and xdvi programs are present on the system they are invoked to display the latex file in a window. If debug=True the associated output of latex is sent to stdout, otherwise it is sent to /dev/null for linux and NUL for Windows. If LatexPrinter has not been initialized xdvi() does nothing. After the .dvi file is generated it is displayed with xdvi for linux (if latex and xdvi are installed ) and yap for Windows (if MikTex is installed). The functions sym_format(), fct_format(), pdiff_format(), and MV_format() allow one to change various formatting aspects of the LatexPrinter. They do not initialize the class and if they are called with the class not initialized they have no effect. These functions and the function xdvi() are designed so that if the LatexPrinter class is not initialized the program output is as if the LatexPrinter class is not used. Thus all one needs to do to get simple ascii output (possibly for program debugging) is to comment out the one function call that initializes the LatexPrinter class. All other latex_ex function calls can remain in the program and have no effect on program output. sym_format() allows one to change the latex format options for sympy symbol output independent of other format switches (see \(1^{st}\) switch in Table I). fct_format() allows one to change the latex format options for sympy function output independent of other format switches (see \(2^{nd}\) switch in Table I). pdiff_format() allows one to change the latex format options for sympy partial derivative output independent of other format switches (see \(3^{rd}\) switch in Table I). MV_format() allows one to change the latex format options for sympy partial derivative output independent of other format switches (see \(3^{rd}\) switch in Table I). latexdemo.py example of using latex_ex with sympy import sys import sympy import sympy.galgebra.latex_ex as tex if __name__ == '__main__': tex.Format() xbm, alpha_1, delta__nugamma_r = sympy.symbols('xbm alpha_1 delta__nugamma_r') x = alpha_1*xbm/delta__nugamma_r print 'x =', x tex.xdvi() Start of Program Output End of Program Output The program latexdemo.py demonstrates the extended symbol naming conventions in latex_ex. the statment Format() starts the LatexPrinter driver with default formatting. Note that on the right hand side of the output that xbm gives \(\boldsymbol{x}\), alpha_1 gives \(\alpha_{1}\) and delta__nugamma_r gives \(\delta^{\nu\gamma}_{r}\). Also the fraction is printed correctly. The statment print 'x =',x sends the string 'x = '+str(x) to the output processor (xdvi()). Because the string contains an \(=\) sign the processor treats the string as an LaTeX equation (unnumbered). If 'x =' was not in the print statment a LaTeX error would be generated. In the case of a GA multivector one does not need the 'x =' if the multivector has been given a name. Maxwell.py example of using latex_ex with GA import sys import sympy import sympy.galgebra.GAsympy as GA import sympy.galgebra.latex_ex as tex if __name__ == '__main__': metric = '1 0 0 0,' \ '0 -1 0 0,' \ '0 0 -1 0,' \ '0 0 0 -1' vars = sympy.symbols('t x y z') gamma_t, gamma_x, gamma_y, gamma_z = GA.MV.setup('gamma_t gamma_x gamma_y gamma_z', metric, True, vars) tex.Format() I = GA.MV(1, 'pseudo') I.convert_to_blades() print '$I$ Pseudo-Scalar' print 'I =', I B = GA.MV('B', 'vector', fct=True) E = GA.MV('E', 'vector', fct=True) B.set_coef(1, 0, 0) E.set_coef(1, 0, 0) B *= gamma_t E *= gamma_t B.convert_to_blades() E.convert_to_blades() J = GA.MV('J', 'vector', fct=True) print '$B$ Magnetic Field Bi-Vector' print 'B = Bvec gamma_0 =', B print '$E$ Electric Field Bi-Vector' print 'E = Evec gamma_0 =', E F = E + I*B print '$E+IB$ Electo-Magnetic Field Bi-Vector' print 'F = E+IB =', F print '$J$ Four Current' print 'J =', J gradF = F.grad() gradF.convert_to_blades() print 'Geometric Derivative of Electo-Magnetic Field Bi-Vector' tex' tex.xdvi(filename='Maxwell.tex') Start of Program Output \(I\) Pseudo-Scalar \(B\) Magnetic Field Bi-Vector \(F\) Electric Field Bi-Vector \(E+IB\) Electo-Magnetic Field Bi-Vector \(J\) Four Current Geometric Derivative of Electo-Magnetic Field Bi-Vector All Maxwell Equations are Div \(E\) and Curl \(H\) Equations Curl \(E\) and Div \(B\) equations End of Program Output The program Maxwell.py demonstrates the use of the LatexPrinter class with the GA module multivector class, MV. The Format() call initializes LatexPrinter. The only other explicit latex_x module formatting statement used is MV_format(3). This statment changes the multivector latex format so that instead of printing the entire multivector on one line, which would run off the page, each multivector base and its coefficient are printed on individual lines using the latex align environment. Another option used is that the printing of function arguments is suppressed since \(E\), \(B\), \(J\), and \(F\) are multivector fields and printing out the argument, \((t,x,y,z)\), for every field component would greatly lengthen the output and make it more difficult to format in a pleasing way.
http://docs.sympy.org/0.7.3/modules/galgebra/latex_ex/latex_ex.html
CC-MAIN-2018-30
refinedweb
1,532
54.12
We are about to switch to a new forum software. Until then we have removed the registration on this forum. Hi there, I am using openCV and face detection to control the transparency of images. The position of the detected face on the x axis controls transparency. What I would like to do is be able to ignore the other faces that get picked up by the webcam. Is there a way to get the value of face 1 and ignore face 2, face 3, face 4 etc., but upon face 1 being killed, make face 2 = face 1, face 3 = face 2 etc., and constantly only use the data from face 1 as the controlling parameter? I have looked into the OpenCV example (Example: Whichface) and I can't seem to wrangle it to what i need it for. Here is my existing code import gab.opencv.*; import processing.video.*; import java.awt.*; Capture video; OpenCV opencv; PImage ed; PImage genie; int offset = 0; float easing = 0.05; void setup() { size(724,960); video = new Capture(this, 640/2, 480/2); opencv = new OpenCV(this, 640/2, 480/2); opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); video.start(); ed = loadImage("ed.jpg"); genie = loadImage("genie.jpg"); } void draw() { scale(1); opencv.loadImage(video); noFill(); stroke(0, 255, 0); strokeWeight(3); Rectangle[] faces = opencv.detect(); println(faces.length); for (int i = 0; i < faces.length; i++) { tint(255, 230); // Display at half opacity image(genie, 0, 0); // Display at full opacity int dx = (faces[i].x-genie.width/2) - offset; offset += dx * easing; tint(255, faces[i].x); // Display at half opacity image(ed, 0, 0); } } void captureEvent(Capture c) { c.read(); } Answers Hi, You could simply say :
https://forum.processing.org/two/discussion/24820/opencv-kill-old-faces-loop
CC-MAIN-2019-43
refinedweb
285
68.26
What gives SBUX its edge over competition? Savvy shoppers know how to get good deals Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more! Investors eyeing a purchase of PLY Gem Holdings Inc (Symbol: PGEM) stock, but tentative about paying the going market price of $11.76/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the November put at the $10 strike, which has a bid at the time of this writing of 65 cents. Collecting that bid as the premium represents a 6.5% return against the $10 commitment, or a 12.9% annualized rate of return (at Stock Options Channel we call this the YieldBoost ). Top YieldBoost Puts?
http://www.nasdaq.com/article/commit-to-buy-ply-gem-holdings-at-10-earn-129-annualized-using-options-cm355627
CC-MAIN-2015-40
refinedweb
133
65.52
How to: Test for Reference Equality (Identity) (C# Programming Guide) You do not have to implement any custom logic to support reference equality comparisons in your types. This functionality is provided for all types by the static Object.ReferenceEquals method. The following example shows how to determine whether two variables have reference equality, which means that they refer to the same object in memory. The example also shows why Object.ReferenceEquals always returns false for value types and why you should not use ReferenceEquals to determine string equality. namespace TestReferenceEquality { struct TestStruct { public int Num { get; private set; } public string Name { get; private set; } public TestStruct(int i, string s) : this() { Num = i; Name = s; } } class TestClass { public int Num { get; set; } public string Name { get; set; } } class Program { static void Main() { // Demonstrate reference equality with reference types. #region ReferenceTypes // Create two reference type instances that have identical values. TestClass tcA = new TestClass() { Num = 1, Name = "New TestClass" }; TestClass tcB = new TestClass() { Num = 1, Name = "New TestClass" }; Console.WriteLine("ReferenceEquals(tcA, tcB) = {0}", Object.ReferenceEquals(tcA, tcB)); // false // After assignment, tcB and tcA refer to the same object. // They now have reference equality. tcB = tcA; Console.WriteLine("After asignment: ReferenceEquals(tcA, tcB) = {0}", Object.ReferenceEquals(tcA, tcB)); // true // Changes made to tcA are reflected in tcB. Therefore, objects // that have reference equality also have value equality. tcA.Num = 42; tcA.Name = "TestClass 42"; Console.WriteLine("tcB.Name = {0} tcB.Num: {1}", tcB.Name, tcB.Num); #endregion // Demonstrate that two value type instances never have reference equality. #region ValueTypes TestStruct tsC = new TestStruct( 1, "TestStruct 1"); // Value types are copied on assignment. tsD and tsC have // the same values but are not the same object. TestStruct tsD = tsC; Console.WriteLine("After asignment: ReferenceEquals(tsC, tsD) = {0}", Object.ReferenceEquals(tsC, tsD)); // false #endregion #region stringRefEquality // Constant strings within the same assembly are always interned by the runtime. // This means they are stored in the same location in memory. Therefore, // the two strings have reference equality although no assignment takes place. string strA = "Hello world!"; string strB = "Hello world!"; Console.WriteLine("ReferenceEquals(strA, strB) = {0}", Object.ReferenceEquals(strA, strB)); // true // After a new string is assigned to strA, strA and strB // are no longer interned and no longer have reference equality. strA = "Goodbye world!"; Console.WriteLine("strA = \"{0}\" strB = \"{1}\"", strA, strB); Console.WriteLine("After strA changes, ReferenceEquals(strA, strB) = {0}", Object.ReferenceEquals(strA, strB)); // false // A string that is created at runtime cannot be interned. StringBuilder sb = new StringBuilder("Hello world!"); string stringC = sb.ToString(); // False: Console.WriteLine("ReferenceEquals(stringC, strB) = {0}", Object.ReferenceEquals(stringC, strB)); // The string class overloads the == operator to perform an equality comparison. Console.WriteLine("stringC == strB = {0}", stringC == strB); // true #endregion // Keep the console open in debug mode. Console.WriteLine("Press any key to exit."); Console.ReadKey(); } } } /* Output: ReferenceEquals(tcA, tcB) = False After asignment: ReferenceEquals(tcA, tcB) = True tcB.Name = TestClass 42 tcB.Num: 42 After asignment: ReferenceEquals(tsC, tsD) = False ReferenceEquals(strA, strB) = True strA = "Goodbye world!" strB = "Hello world!" After strA changes, ReferenceEquals(strA, strB) = False */ The implementation of Equals in the System.Object universal base class also performs a reference equality check, but it is best not to use this because, if a class happens to override the method, the results might not be what you expect. The same is true for the == and != operators. When they are operating on reference types, the default behavior of == and != is to perform a reference equality check. However, derived classes can overload the operator to perform a value equality check. To minimize the potential for error, it is best to always use ReferenceEquals when you have to determine whether two objects have reference equality. Constant strings within the same assembly are always interned by the runtime. That is, only one instance of each unique literal string is maintained. However, the runtime does not guarantee that strings created at runtime are interned, nor does it guarantee that two equal constant strings in different assemblies are interned.
http://msdn.microsoft.com/en-us/library/dd183759.aspx
CC-MAIN-2014-35
refinedweb
668
51.95
in reply to Faulty Control Structures? I tried running the program on the data you provided but it exited instantly and printed no output in output.txt so I assume that data is not representative. I don't exactly understand what the code is doing but I don't see anything that could cause an infinite loop. I see two things that make me suspicious: 1) your range data is all numbers but you're using the character-operators lt and eq to compare them, and 2) these lines: # look to see if the windows don't overlap if ($test lt $window[0]) {return 0;} # look to see if the windows don't overlap elsif ($test2 lt $probe[0]) {return 0;} [download] Update: Second thought - the second part of the code seems, if I understand it, to be designed to take two ranges of numbers and see if they overlap. To do so, it actually expands the ranges into lists and compares them one-by-one. This becomes, in the worst case, an N^2 calculation for what should be a constant operation: sub range_overlap { my ($start1, $end1, $start2, $end2); return ($end1 >= $start2 || $start1 <= $end2) && ($end2 >= $start1 || $start2 <= $end.
http://www.perlmonks.org/?node_id=664769
CC-MAIN-2017-26
refinedweb
199
61.29
Ctrl+P Some extensions for the students. Enjoy! ES Lint - Integrates ESLint into VS Code. npm - Run npm scripts from the command palatte and validate the installed modules defined in package.json. package.json JavaScript (ES6) Snippets - Adds code snippets for JavaScript development in ES6 syntax. React js snippets - Build react faster. NPM IntelliSense - Adds IntelliSense for npm modules in your code. Path IntelliSense - Autocompletes filenames in your code. Rest Client - Help students learn rest stuff without postman. comment todo-highlight - To help students communicate and document code better. bootstrap4 snippets - This may be removed if class selectors works good enough. CoenraadS.bracket-pair-colorizer - Help finds bracket closures. import cost - Helps students be more efficient in their imports this way their js projects can reduce the density of npm black holes... This may be removed because eats ram like crazy. Dracula++ THEME - Because ;-) ...
https://marketplace.visualstudio.com/items?itemName=westmec.westmec-vsc
CC-MAIN-2020-50
refinedweb
144
61.53
jordi_0071 138 Report post Posted June 9, 2005 hey i made this code: #include <iostream> using namespace std; int main() { int CountGames; cout << "How many games do you have: "; cin >> CountGames; if(CountGames > 30 || CountGames == 30) { cout << "You got a lot games. you are a game freak\n\n\n"; } else if(CountGames < 30 && CountGames > 0) { cout << CountGames << " Isn't much but atleast you got some\n\n\n"; } else if(CountGames == 0) { cout << "You got no games. wtf you should get on.\n\n\n"; } else if(CountGames < 0) { while(CountGames <0) { cout << "you cannot have less then 0 games. duh\n"; cout << "How many games do you have: "; cin >> CountGames; } } system("pause"); return 0; } but if you first enter -1 and then if you press the second time higher then 0 it pauses the program. how do i solve this? 0 Share this post Link to post Share on other sites
https://www.gamedev.net/forums/topic/324752-c-while-in-a-if-else/
CC-MAIN-2017-39
refinedweb
153
85.62
Radpath alternatives and similar packages Based on the "Files and Directories" category. Alternatively, view Radpath alternatives based on common mentions on social networks and blogs. arc9.7 0.0 Radpath VS arc:paperclip: Flexible file upload and attachment library for Elixir waffle8.8 4.4 Radpath VS waffleFlexible file upload and attachment library for Elixir fs8.1 1.2 L4 Radpath VS fs📁 FS: Native Filesystem Listeners exfswatch8.0 1.0 Radpath VS exfswatchFilesystem monitor for elixir exfile6.5 0.0 Radpath VS exfileFile upload persistence and processing for Phoenix / Plug ex_guard6.1 1.8 Radpath VS ex_guardExGuard is a mix command to handle events on file system modifications dir_walker5.1 0.0 Radpath VS dir_walkerSimple Elixir file-system directory tree walker. It can handle large filesystems, as the tree is traversed lazily. eye_drops5.1 0.0 Radpath VS eye_dropsConfigurable Elixir mix task to watch file changes and run the corresponding command. librex4.5 0.0 Radpath VS librexElixir library to convert office documents to other formats using LibreOffice. sizeable4.2 0.7 Radpath VS sizeableAn Elixir library to make File Sizes human-readable elixgrep3.9 0.0 Radpath VS elixgrepAn elixir framework to implement concurrent versions of common unix utilities, grep, find, etc.. zarex2.9 0.0 Radpath VS zarexFilename sanitization for Elixir ex_minimatch2.8 0.0 Radpath VS ex_minimatchGlobbing paths without walking the tree!. FormatParser2.8 4.3 Radpath VS FormatParserThe owls are not what they seem sentix2.7 0.0 Radpath VS sentixA cross-platform file watcher for Elixir based on fswatch. cassius1.6 0.0 L4 Radpath VS cassiusNot maintained. -- NIF-based linux file system events fwatch0.8 0.0 Radpath VS fwatchA file watcher for Elixir language Belt- - Radpath VS BeltExtensible file upload library with support for SFTP, S3 and Filesystem storage. Scout APM: A developer's best friend. Try free for 14-days * Code Quality Rankings and insights are calculated and provided by Lumnify. They vary from L1 to L5 with "L5" being the highest. Do you think we are missing an alternative of Radpath or a related project? Popular Comparisons README Radpath A library for dealing with paths in Elixir largely inspired by Python's pathlib. Getting Started To use Radpath, add a dependency in your mix: def deps do [ { :Radpath, github: "lowks/Radpath"}] end then mix deps.get fetches dependencies and compiles Radpath. Status Developed whenever I can find the time. Running Tests Running tests against a stable release of Elixir defined by 'STABLE_ELIXIR_VERSION' in the Makefile: make ci Running tests against your system's Elixir: make Docs (Lite Version) To list down files in a path: Radpath.files("/home/lowks/Documents") or if you wanted to filter out certain files with pdf extensions: Radpath.files("/home/lowks/Documents", "pdf") Listing down only directories: Radpath.dirs("/home/lowks") To create symlink: Radpath.symlink(source, destination) To create tempfile: {status, fd, file_path} = Radpath.mktempfile IO.write fd, "hoho" File.close fd File.read! filepath "hoho" File.rm! filepath This uses all the defaults To customize the location plus the extension: {_, fd, filepath} = Radpath.mktempfile(".log", "/home/lowks/Downloads") IO.write fd, "hoho" File.read! filepath "hoho" File.close! filepath The default is ".log". Checkout the rest of the docs in the docs folder. Run mix docs to generate a nice docs in a local folder or you can read them online: Radpath hexdocs
https://elixir.libhunt.com/radpath-alternatives
CC-MAIN-2021-43
refinedweb
561
51.65
Flex released its new version Flex 4 with huge changes in its architecture. Along with this new release, adobe also released new flex application builder tool named ?Flash Builder? which was formerly known as ?Flex Builder?. New version of flex emphasizes at features like: Flex 4 features: Flex 4 delivers wide variety of new and enhanced features. Here is a list of some important features. 1. New flex 4 components are spark based which are built on top of the existing flex 3 halo components 2. Spark and Halo components can be used together 3. Use of FXG (Adobe Flash XML Graphics) to create shapes like eclipse, rectangle etc. 4. Enhanced States Syntax 5. ASDoc to generate documentation for MXML components 6. Two-way Data Binding 7. New Effects for Components and Graphics 8. Support for 3D effects 9. Support for CSS namespaces 10. Virtualized Layouts and DataGroups 11. Support for Vector to restrict to a single type 12. RSL linking by default to reduce the application size rather than static linking 13. StyleManager for every module 14. Support for FTE and TLF text providing new style text 15. Support for MXML Graphics runtime layout modifications 16. Validation of styles against the theme 17. Support for layout mirroring for languages that are written from right to left instead of left to right List of spark components available in Flex 4+
http://roseindia.net/tutorial/flex/flex4/flex4-features.html
CC-MAIN-2016-18
refinedweb
230
69.38
Next.js gives us CSS Modules by default, providing benefits like scoped styles and focused development in our app. How can we give our Next.js CSS superpowers with Sass? - What are CSS Modules? - What is Sass? - What are we going to build? - Step 0: Creating a new Next.js app - Step 1: Installing Sass in a Next.js app - Step 2: Importing Sass files into a Next.js app - Step 3: Using Sass variables in a Next.js app - Step 4: Using Sass mixins with global imports in Next.js What are CSS Modules? CSS Modules are essentially CSS files that, when imported into JavaScript projects, provide styles that are scoped to that particular part of the project by default. When importing your module, the classes are represented by an object mapped with each class name, allowing you to apply that class right to your project. For instance, if I had a CSS module for the title of my page: .title { color: blueviolet; } And I import that into my React project: import styles from './my-styles.css' I can then apply that title right to an element as if it were a string: <h1 className={styles.title}>My Title</h1> By scoping styles, you no longer have to worry about breaking other parts of the application with cascading styles. It’s also easier to manage smaller chunks of code that pertain to a specific piece of the application. What is Sass? Sass is an extension of the CSS language that provides powerful features like variables, functions, and other operations that allow you to more easily build complex features into your project. As an example, if I wanted to store my color above in a variable so I can easily change it later, I can add: $color-primary: blueviolet; .title { color: $color-primary; } If I wanted to change that color but only in one spot, I can use built-in color functions to change the shade: $color-primary: blueviolet; .title { color: $color-primary; border-bottom: solid 2px darken($color-primary, 10); } One additional benefit is the ability to nest styles. This allows for easier, more logical organization of your CSS. For instance, if I wanted to change only a <strong> element nested in a title, I can add: $color-primary: blueviolet; $color-secondary: cyan; .title { color: $color-primary; border-bottom: solid 2px darken($color-primary, 10); strong { color: $color-secondary; } } What are we going to build? We're going to create a new React app using Next.js. With our new app, we'll learn how to install and configure Sass so that we can take advantage its features inside of Next.js. Once set up Sass, we'll walk through how to use Sass variables and mixins to recreate some of the default elements of the Next.js UI. Want to skip the tutorial and dive into the code? Check out Next.js Sass Starter on GitHub: Step 0: Creating a new Next.js app To get started with a new Next.js app, we can use Create Next App. In your terminal, navigate to where you want to create the new project and run: yarn create next-app my-next-sass-app Note: you can use npm instead of yarn for any examples with installation and package management. Once the installation finishes, you can navigate into the directory, and start your development server: yarn dev Which should start your new Next.js project at! If this is your first time creating a new Next.js app, have a look around! It comes with a basic homepage as well as two CSS files: /styles/globals.css /styles/Home.module.css Here we’ll be focusing on the home file. If you look inside pages/index.js, there, you’ll see that we’re importing the home file, making those styles available. Next.js has CSS Modules built in by default. This means that when we import our home styles file, the CSS classes are added to the styles object, and we apply each of those class names to the React elements from that object, such as: <h1 className={styles.title}> That means that our styles are scoped to that single page. To learn more about CSS Modules or the built-in support in Next.js, you can check out the following resources: - - Step 1: Installing Sass in a Next.js app While Next.js comes with some good built-in CSS support, it doesn’t come with Sass completely built in. Luckily, to get Sass up and running inside of our Next.js app, all we need to do is install the Sass package from npm, which will let Next.js include those files in its pipeline. To install Sass, run the following inside of your project: yarn add sass And if we start back up our development server and reload the page, we’ll actually notice that nothing has happened yet, which is a good thing! But next we’ll learn how to take advantage of our CSS superpowers. Follow along with the commit! Step 2: Importing Sass files into a Next.js app Now that Sass is installed, we’re ready to use it. In order you use any Sass-specific features though, we’ll need to use Sass files with either the .sass or .scss extension. For this walkthrough, we’re going to use the SCSS syntax and the .scss extension. To get started, inside of pages/index.js, change the import of the styles object at the top of the page to: import styles from '../styles/Home.module.scss' And once the page reloads, as we probably expect, the page is actually broken. To fix this, rename the file: /styles/Home.module.css to /styles/Home.module.scss The difference is we’re changing the file extension from .css to .scss. Once the page reloads, we’ll see that our Next.js site is loading and is back ready for action! Note: We’re not going to cover the global styles file here – you can do the same thing by renaming the global styles file and updating the import inside of /pages/_app.js Next, we’ll learn how to use Sass features for our Next.js app. Follow along with the commit! Step 3: Using Sass variables in a Next.js app Now that we’re using Sass in our project, we can start using some of the basic features like variables. To show how this works, we’re going to update the blue inside of our app to my favorite color, purple! At the top of /styles/Home.module.css, add the following: $color-primary: #0070f3; The color #0070f3 is what Next.js uses by default in the app. Next, update each location that uses that color in our home CSS file to our new variables, such as changing: .title a { color: #0070f3; to .title a { color: $color-primary; If we refresh the page, nothing should change. But now because we’re using a variable to define that color, we can easily change it. At the top of the page, change the $color-primary variable to purple or whatever your favorite color is: $color-primary: blueviolet; And when the page reloads, we can see that our colors are now purple! Variables are just the start to the superpowers Sass gives our CSS, but we can see that they allow us to easily manage our colors or other values throughout our application. Follow along with the commit! Step 4: Using Sass mixins with global imports in Next.js One of the other many features of Sass is mixins. They give us the ability to create function-like definitions, allowing us to configure rules that we can repeat and use throughout our app. In our example, we’re going to create a new mixin that allows us to create responsive styles using a media query throughout our app. While we can already do that with a media query alone, using a mixin allows us to use a single definition, keeping that consistent and allowing us to manage that responsive definition from one place. Because this mixin is something we want to use throughout our entire application, we can also use another feature of Sass, which is the ability to import files. To get started, create a new file under the /styles directory: /styles/_mixins.scss We’re using an underscore in front of our filename to denote that it’s a partial. Next, inside of our /styles/Home.module.scss file, let’s import that new file: @import "mixins"; Once the page reloads, we’ll notice nothing changed. If we lookout the bottom of Home.module.scss, we’ll see that we’re using a media query to make the .grid class responsive. We’re going to use that as a basis for our mixin. Inside of _mixins.scss, add the following: @mixin desktop() { @media (max-width: 600px) { @content; } } Note: while we probably can come up with a better name for this mixin than desktop, we’ll use that as the basis of our example. The @content means that any time we use our desktop mixin, it will include the nested content in that location. To test this out, back in our Home.module.css file, let’s update our .grid snippet: @include desktop() { .grid { width: 100%; flex-direction: column; } } If we open our app back up and shrink the browser window, we can see that we still have our responsive styles! We can even take this a step further. Sass allows you to nest styles. For instance, instead of writing: .footer { // Styles } .footer img { margin-left: 0.5rem; } We can include that img definition right inside of the original .footer definition: .footer { // Styles img { margin-left: 0.5rem; } } That img definition will compile to .footer img, just the same as if it was written in standard CSS. So with that in mind, we can use the same concept and move our desktop mixin into our original .grid class: .grid { @include desktop() { width: 100%; flex-direction: column; } } And if you notice, because we’re inside of the .grid class already, we can remove that from inside of the mixin, as it will already be applied. This allows for much easier organization of our responsive styles. Finally, if we look back at our app, we’ll notice that still, nothing has changed, which means we’re successfully using our Sass mixin. Follow along with the commit! What else can we do with Sass and Next.js? We’re only scratching the surface here with Sass. Because our CSS modules now have the power of Sass, we have a ton of capabilities that don’t come by default with CSS. Color Functions Sass has a ton of functions built in that allow us to manipulate colors, mixing and matching shades much more easily. Two that I use often are darken and lighten, that allow you to take a color and change the shade. Learn more about all of the available color functions in Sass. Custom Functions While mixins seem like functions, we can define true functions in Sass that allow us to perform complex operations and produce values based on an input. Learn more about custom functions in Sass. Other Value Types While most of the time with CSS we’re using strings or numbers, we saw that a simple extension of that is the ability to use variables. In addition to variables, Sass gives us more value types like Maps, which function sort of like an object, and Lists, which are kind of like arrays. Learn more about the value types in Sass. More There are a ton of features available in Sass and lots of articles that cover the most used features. Take some time to explore the documentation and find what’s out there!
https://www.freecodecamp.org/news/how-to-use-sass-with-css-modules-in-next-js/
CC-MAIN-2021-04
refinedweb
1,980
74.39
There seems to have been some confusion generated by readers trying to implement dynamic direct bound ports that I wanted to clarify. A dynamic direct bound port is merely a direct bound port whose address is set at runtime. It is not a dynamic send port. Dynamic send ports allow you to determine at runtime which send adapter the message will be routed to which is derived from the prefix of the address of the endpoint you have provided. If you try to set the address of a dynamic send port using the msgbox: prefix you will get a routing exception (transport cannot be resolved) as msgbox: is not an alias for any send adapter. The ability to modify the address of a direct bound port is only available in BizTalk Server 2006. A. Self. Partner direct bound ports provide the capability of having inter-orchestration communication via ports. To configure a partner direct bound port you must choose the orchestration and port for the ‘Partner Orchestration Port’ property. When configuring the two partner ports you must have one side select the orchestration.port it will communicate with and the other side will select its own orchestration.port. This is a little non-intuitive but will be explained in the sections below. Also the port types for both ports must be the same, which implies that the message-types must also be the same. This is one of the places where the Type Modifier property on the port-type matters. To be able to direct bind to a partner port the port-type type modifier must either be internal for orchestrations within the same assembly or public to allow an orchestration from another assembly to bind to it. Finally, the polarities of the ports must be opposite. In other words, if one side is a send port then the other side must be a receive port. There are two communication patterns that can be created; forward partner direct binding and inverse partner direct binding. These two patterns provide explicit inter-orchestration communication. By explicit I mean that there is an intended recipient orchestration (forward partner direct binding) or an intended sender (inverse partner direct binding). You can design implicit partner direct binding by having either the receiver be message box direct bound and create a filter that will accept messages from a particular sending orchestration or have the sender be message box direct bound and promote properties that will match a subscription on the receiving orchestration. This is the typical communication pattern that is used for partner direct binding. Orchestration A has a partner direct bound send port that will send a message to Orchestration B on its partner direct bound receive port. To configure this forward partner direct binding you must have orchestrationA.sendPort1, which is of type portType1, select orchestrationB.receivePort1, which is also of type portType1, as its Partner Orchestration Port. orchestrationB.receivePort1 will select itself, orchestrationB.receivePort1, as its Partner Orchestration Port. Figure 6 Forward Partner Direct Binding Configuration On the sender’s side what this says is, “I will send messages to orchestrationB.receivePort1” and on the receiver’s side it says, “I will receive any messages sent directly to my receivePort1”. Under the covers when messages are sent out of sendPort1 the orchestration engine will set the following properties: BTS.Operation to the operation on the port being used. BTS.PartnerPort to the name of the partner direct bound port configured in the Partner Orchestration Port property BTS.PartnerService to the strong name of the orchestration referenced in the Partner Orchestration Port property Note: the strong name of the partner service will usually look something like: OrchNamespace.OrchTypeName, AssemblyName, Version=1.0.0.0, Culture=neutral, PublicKeyToken=fedcba9876543210 On the receive side the subscription will be (for brevity I will use the BTS namespace instead of as you would see in the actual subscription) BTS.Operation == operation1 And BTS.PartnerPort == receivePort1 And BTS.PartnerService == MyTest.OrchestrationB, MyTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=fedcba9876543210 And BTS.MessageType == Something to note here is that there is a strong binding from the sender orchestration to the receiver orchestration. By strong binding I mean that the sender orchestration is referencing the receiver’s strong name as its partner service. What this means is that if you want to change the receiver’s side or if you change the version of the receiver’s side you must update the design time configuration of the sender’s port. But the receiver has no explicit knowledge of the sender so the senders’ orchestrations can be updated without affecting the receiver. This type of forward binding allows you to have multiple senders bound to the same recipient. Figure 7 N:1 communication Here orchestrationD would be doing some common asynchronous work needed by many different orchestrations. This is not the typical communication pattern that is used for partner direct binding as the direction of binding is inverse of its direction of communication. Orchestration A has a partner direct bound send port will send a message to Orchestration B on its partner direct bound receive port. To configure this inverse partner direct binding you must have orchestrationB.receivePort1, which is of type portType1, select orchestrationA.sendPort1, which is also of type portType1, as its Partner Orchestration Port. orchestrationA.sendPort1 will select itself, orchestrationA.sendPort1, as its Partner Orchestration Port. Figure 8 Inverse Direct Binding Configuration On the sender’s side what this says is, “I will send a message to anyone who is listening for messages from my send port” and on the receiver’s side it says, “I will receive messages sent from orchestrationA.sendPort1”. Under the covers when messages are sent out of sendPort1 the logic for setting properties is still the same as it was for the forward case. The orchestration engine will still set the following properties: On the receive side the subscription will BTS.PartnerPort == sendPort1 And BTS.PartnerService == MyTest.OrchestrationA, MyTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=fedcba9876543210 And In this case the receiver is strongly bound to the sender implying that if you want to change the receiver’s orchestration or update the version then you must update the sender’s port configuration. The sender has no explicit knowledge of the receiver so the receivers’ orchestrations can be updated without affect the sender. This type of inverse binding allows you to have a single sender communicate with multiple receivers. Figure 9 1:N communication Inverse direct bound ports allows for a recipient list pattern. The recipient list is determined by which receive ports are bound to a particular send port and is maintained as part of the orchestration design. Here either all of the recipient orchestrations can consume any message coming from the send port or they can each have a filter to determine which messages each of the recipients should consume from the sender. One thing to be careful about is that if you are using a two-way port type with inverse partner direct binding then you must setup your filters to ensure that only one of the recipients will consume (i.e. subscription will match) the message. This is because a solicit-response port is expecting a single response and if multiple recipients get the message, then it will accept the first response and all subsequent responses would be suspended non-resumable. The engine won’t let that happen and it will instead throw an exception when you try to send the message indicating that there would be multiple recipients for a solicit-response request. Message.. Have you ever wanted to speak to Microsoft developers of a specific feature of BizTalk Server? I am sure your answer was “Yes let me at them”, so the Business Process Integration Division is extending an invitation to all customers to join our key feature developers, program managers, and testers in the following newsgroups: We’ve been working very hard over the past year to connect with folks just like you and want to include you in our community of Most Valuable Professionals (MVP), developers, information technology professionals, chief information officers, chief executive officers, or any other role within large, medium, and small companies that hang out in our online newsgroup communities. We want to have you join in this vibrant online community to ask those questions you always wanted to ask but did not know where to go. Well, now you know where to go, we want you to come on in and join us! If you are new to BizTalk Server, try out the NewUser newsgroup, Microsoft.public.biztalk.newuser.. MSDN unmanaged newsgroups are available to all individuals. Go to the following URL to participate:. These newsgroups are monitored by Microsoft product group members, other customers like you, most valuable professionals, and various other individuals. Questions, suggestions, and direct feedback can be sent to me. James Fort BPI Community Lead mailto:jfort@microsoft.com This. We had a customer scenario such that when a complex business process was executed at the rate of 1 request per second the CPU utilization of the SQL Server hosting the master message box quickly grew up to 100% sustained. Having the master message box’s CPU utilization so high is not desirable as it can: In this scenario, there were two SQL Servers allocated for BizTalk; one held only the master message box with publication turned off and the other had all of the other BizTalk databases including a secondary message box. Both SQL Server machines had 8 hyper-threaded 3.0 GHz processors and 8GB of RAM. The orchestrations consisted of about 4 orchestrations chained via messaging including about 2 called orchestrations. In this scenario a new business process is created per order and there can only be one business process running at any one time for a particular {customer, order} pair. Each order can have updates which can interrupt the currently running business process handling that request. An interruption can only happen at certain points in the business process (i.e. business process atomicity). So if an update has come in and a business process is currently handling a request, then the update will be queued up to a point in the business process where it can check to see if this current business process instance should terminate and then allow the update to start a new business process. To accomplish this in orchestration we used correlations. At certain points in the orchestrations there would be a Listen shape with a Delay of 0 on one branch and a Receive following a correlation on the other branch. So when the orchestration gets to this point in the orchestration, if there is no update then the business process continues until it needs to check again at the next interrupt point. In the design of the orchestration there are several .NET remoting calls made. If the remoting call fails then an exception orchestration is “Called”. Since there are several remoting calls, then there are several of these exception orchestrations called throughout the main orchestration. The exception orchestration includes logic such that it can post a request to an operator to determine whether or not to terminate the instance or to try again. Since there is a blocking call waiting for user input, there can be a significant window of time where the orchestration instance is running. In the meantime an update message could come, which would invalidate this original request. To accomplish this interruption in the exception handling orchestration, the correlation set was passed in as a parameter to the called orchestration so that it can either wait (listen) for the response from the operator or an update message that interrupts the currently running instance. The master message box is responsible for doing subscription matching. If publication is turned off on the master message box, then it will only do subscription matching and the other message boxes will handle message publication and storage. The master message box can only be scaled up but not out. So eventually it can become the limiting factor in how far the message boxes can be scaled. The orchestration engine creates the necessary subscriptions when the orchestration is instantiated. One set of subscriptions it will create include the ones for followed correlations. The orchestration engine will find all of the points where the correlation is followed and create subscriptions for them. If a correlation set is passed to a called orchestration then the engine crawls the called orchestration to create those subscriptions as well. A called orchestration is essentially in-lined code which means that the called orchestration will look like it is part of the orchestration that calls it. A subscription, in this case, will consist of a message type, an orchestration instance, a port operation, and the properties used in the correlation set. So the subscriptions for the called orchestrations will actually have the name of the caller orchestration as its orchestration instance name. By default, when you create a port in the orchestration designer the first operation under the port is defaulted to Operation_1. Unless a developer has an explicit purpose for changing this (for example, if he is exposing an orchestration as a web service this operation will become part of the method name) the developer will typically leave it with the default name. Since the same called exception orchestration is called many times within the main orchestration with the same correlation set with the same port operation name, then identical subscriptions will be created in the master message box. As a general rule of thumb, the master message box can get overwhelmed when trying to match a message to a subscription with more than 20 identical subscriptions (I won’t go into the complexities of what happens when a number of subscriptions are matched for a particular message). So in this case each request is putting a lot of strain on the master message box trying to match the request to a particular subscriber since the message would match the subscription for activating receive as well as all of the receives that are in each call to the exception orchestration. In this case, messages which are destined for the receive points in the exception orchestration are not common, but they are still referenced since the subscriptions are all the same. To alleviate the strain put on the matching processing we changed the names of the port operations to be unique. Since the orchestration engine uses the port operation name as the distinguishing property for a subscription, the subscription for the activating receive is now unique and the master message box doesn’t have to waste CPU cycle trying to figure which of the other subscriptions it needs to match against. In the above described scenario, after making this change, the CPU utilization dropped from 100% to about 20%. Change the operation type name to something unique. Even if the design doesn’t have a called orchestration with a correlation set passed to it, it is a best practice to change these names in case in the future these orchestrations get repurposed. Also it allows you to more easily find the subscription for a particular operation port in the orchestration. Lee talks about this scenario in his blog with some more technical detail under his post on Is there a pub/sub system underneath BizTalk? I
http://blogs.msdn.com/kevin_lam/
crawl-002
refinedweb
2,571
50.77
Hi, I’d like to execute some C++ workload in parallel in multiple threads, and steer everything from python. Below is what I have so far. It seems that threads run sequentially, however. I suppose that’s because PyROOT does not release the GIL? If I run time.sleep(2) instead of the C++ function threads do run concurrently. How can I convince the threads to run concurrently in this scenario? Cheers, Enrico import ROOT from threading import Thread from time import sleep from collections import deque ROOT.gInterpreter.Declare(""" void foo() { for (int i=0; i < 1000000000; ++i) ; std::cout << "done" << std::endl; } """) ROOT.ROOT.EnableImplicitMT() threads = [Thread(target=ROOT.foo) for _ in range(ROOT.ROOT.GetImplicitMTPoolSize())] deque(map(Thread.start, threads)) deque(map(Thread.join, threads)) ROOT Version: master Platform: linux
https://root-forum.cern.ch/t/running-c-functions-in-parallel-with-python-multithreading-does-pyroot-release-the-gil/34527
CC-MAIN-2022-27
refinedweb
134
62.85
CodePlexProject Hosting for Open Source Software I would like to modify the placement of the HTML widget so that the header sits underneath the content. Any ideas? The shape tracing doesn't seem to show a part to manipulate for the header, only the body. I wondered about setting the position of the body element in the Placement, but wasn't sure how to target the widget in the particular zone? Thanks. Not sure I understand, but it seems like you want to change the order of local zones? It seems weird that you would want to put the head at the foot: why not just place the parts that you want into the footer zone? Anyway, if you really want to do that, you can use a widget-specific alternate to the content shape and redefine zones as you see fit. If I understand what you're asking, you can override the Widget.Wrapper.cshtml template (from the Views folder in the Orchard.Widgets module) in your theme. You can then reorganize the html in the wrapper to put the title/header below the body (which is being displayed in the "Child" local zone of the wrapper). I know its a bit of a strange one!! Basically our designer wants the title at the bottom for a block, so I thought I would just try and move the widget title. Thanks for your suggesstion Kevin, however I think this would change the layout for all widgets on the page, I have tried it for a particular zone, but it isn't picked up. Bertrand, I'm not too sure how to do that as the only two shapes in the widget are Parts_Contents_Publish and Parts_Common_Body, neither of which hold the title. The only layout file I could find that dictates the layout is the wrapper that Kevin has pointed out. The alternative widget shape just has @Display(Model.Content) Not too worry too much though, its probably a fair assumption that everyone would want the header at the top, and so its not easily customisable!! I can get around it by just leaving the title blank and getting them to add a h1 to the bottom of the html content. Thanks for both your help. It should be possible to do what you want. See this StackOverflow question: Bertrand commented on that question that you can't really have alternates for wrappers, but you can modify the collection of wrappers for a shape. So for example, you could remove the default wrapper from the collection and add your own HtmlWidgetWrapper when displaying HtmlWidgets. I haven't tried this, but in theory it should work. I went ahead and tried it because it sounded like fun. :-) Here's how you could swap the widget wrapper for all HTML widgets: using Orchard.DisplayManagement.Descriptors; namespace UserSubmissions { public class Shapes : IShapeTableProvider { public void Discover(ShapeTableBuilder builder) { builder.Describe("Widget") .OnDisplaying(displaying => { var widget = displaying.Shape.ContentItem; if (widget != null && widget.ContentType == "HtmlWidget") { displaying.ShapeMetadata.Wrappers.Remove("Widget_Wrapper"); displaying.ShapeMetadata.Wrappers.Add("HtmlWidget_Wrapper"); } }); } } } Put that code in your theme for example (or a custom module if you have one - if you put it in the theme your theme needs to have a project file which includes this code file). You'll need to restart your site in order for this provider to be picked up by Orchard. But this will allow you to have a HtmlWidget.Wrapper.cshtml template in your theme which will be used instead of the Widget.Wrapper template from the Widgets module. So your content editors can still use widgets like normal by entering the title instead of having to know to add an <h1> to the bottom of the body content. You can extend this farther to include the Zone name too if you need. Cheers for the code Kevink, I will give that a try :) If I could give u rep I would! Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://orchard.codeplex.com/discussions/273097
CC-MAIN-2017-26
refinedweb
692
72.16
Avoiding expensive calculations in Python IRC bot I'm using this calculator in a public IRC bot. Given that Python uses arbitrary precision by default, this would allow any user to execute something like calc 10000**10000**10000 or calc factorial(1000000) and effectively "kill" the bot. What I'd like to know is if there is some way of avoiding this. I've tried casting all the terms in the expression to float but float(factorial(1000000) still takes a long time to finish in the Python interpreter, and I'm not sure if a multithreading approach is the right way of doing this. Answers Not really an answer but I'd do it that way. Everything that you are going to run should be ran inside a different process. As far as I know, it's not possible to limit CPU usage or Memory usage of a single thread in a process. That said, you have to create a new process which as with task to execute what the user entered and write it down to a file for exemple. You could do that with fork, create a new file using the PID and the main process will have to check until the child processes dies. Once the process dies, open the file "cool_calculator_[pid].out" and send it back to IRC. Quite simple to do I guess. Then using ulimit or other tools, you can limit the child process or even kill them using the master process. If the file with pid is empty just answer that there was an error or something. I guess you can even write some error like memory exceeded or cpu exceeded etc. It all really depend on how you want to kill the bad processes. In the end, your master process will have for job to spawn childs and kill them if needed and send back answers. It looks like the float() cast was the solution after all. First of all, the inverse trigonometric functions don't take values outside of their domain, so they're completely safe and the exception can be caught. >>> acos(5e100) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: math domain error The same thing happens with the fmod() function. The "normal" trigonometric functions don't seem to have any problem with big values unless they're really big, which makes the function return ValueError again. The rounding functions (ceil(), floor() and round()) work fine and return inf if the value is too big. The same goes for the degrees(), log(), log10(), pow(), sqrt(), fabs(), hypot() and radians() functions. The hyperbolic trigonometric functions and the exp() function throw OverflowErrors or return inf. The atan2() function works perfectly fine with big values. For simple arithmetic operations, the float cast makes the function throw an OverflowError (or an inf) instead of doing the calculation. >>> float(10) ** float(100) ** float(100) ** float(1000) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: (34, 'Numerical result out of range') >>> float(5e500) * float(4e1000) inf Lastly, the problematic factorial() function. All I had to do was redefining the function in an iterative way and adding it to safe_dict. import sys def factorial(n): fact = 1 while (n > 0): fact = float(fact) * float(n) n -= float(1) if float(fact) > sys.float_info.max: return "Too big" return str(fact) print factorial(50e500) While this is a very ugly and grossly inefficient way of calculating a factorial, it's enough for my needs. In fact, I think I added a lot of unnecessary float()s. Now I need to figure out how to put float()s around all the terms in an expression so this happens automatically. Need Your Help New page in print in XSL HTML javascript html css xslt-1.0I need to put text in my XSL, but this text need to display after a page break in print. I know i can hide elements unless printing (media-print), but can I add some spacers until a page break? CSS
http://www.brokencontrollers.com/faq/11486819.shtml
CC-MAIN-2019-35
refinedweb
677
69.92
Hi, One way would be to get access to page itself and then check value of the property: public class SomeSelectionFactory : ISelectionFactory { public IEnumerable<ISelectItem> GetSelections(ExtendedMetadata metadata) { // depending on master language - returning some stuff var masterLanguage = ((PageData)metadata.Parent.Model).MasterLanguage; .... Hi Tahir, That's useful thanks but not quite what I'm after. I need a list to be populated based on the value selected in another property, I kind of have this working but the selection factory is not called if I update the selected value of the other property. Thanks, Mark Hi, You can try and see if it works for you. Disclaimer: I am the creator of this package. I mostly created it to test creating a EPiServer addon, so I cant guarantee that it works in all versions(10.x) of epi. /Peter Sure, go ahead Valdis:) Mark, you might need to create a custom editor if you want the options to be updated without reloading the ui. Maybe can help. I can't remember what works and what doesn't work in 7.x /Peter Hi, I have a block property that uses a selection factory to provide a SelectOne and the data returned from the selection factory needs to be filtered based on the value of another property on the block. Is this possible? If so how? Thanks in advance, Mark
https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2017/6/can-you-pass-a-parameter-to-a-selection-factory/
CC-MAIN-2021-39
refinedweb
229
55.84
Now that the .NET mania is finally settling down and developers are starting to adapt to life with the platform, it's a good time to consider a few less-revolutionary (but very practical) details that can simply your programming life. One of these is the often-overlooked macro engine and extensibility model that's built into Visual Studio .NET. This model provides almost 200 objects that give you unprecedented control over the IDE. This includes the ability to access and manipulate the current project hierarchy, the collection of open windows, and the integrated debugger. Using this model, you can build everything from simple keystroke recordings to advanced add-ins that present their own user interface and interact with the IDE. In this article, we'll look at how you might use macros to automate common code-generation tasks, such as building property procedures for your classes. Macros are only one of the extensibility tools provided by Visual Studio .NET, but they are easy to develop quickly, and they have surprising clout. Unlike most other Microsoft products, Visual Studio .NET macros are built out of full .NET code, which means they can use any part of the .NET class library. This allows macros to write XML files, display forms, and even contact remote Web services as part of their work. (For information about other types of Visual Studio .NET extensibility, and help choosing which one suits a particular problem, you can consult Microsoft's automation whitepaper here.) You can create a basic macro simply by recording your actions in the Visual Studio .NET editor. Just follow these steps: Ctrl-Shift-P. To view the code for a recorded macro, select Select Tool > Macros > Macro Explorer. This window (shown below) shows a tree of macro modules, and the macros they contain. Each macro corresponds to a Visual Basic .NET subroutine. To edit the macro you just created, right-click on the TemporaryMacro subroutine in the RecordingModule and select Edit. A separate IDE will load for editing macro code; it closely resembles the ordinary Visual Studio .NET environment, right down to a Project Explorer and dynamic help. Visual Studio only stores one temporary macro at a time, which is overwritten every time you record a new one. To make a temporary macro permanent, you'll need to cut and paste the code into a different subroutine. As stated earlier, macro code uses ordinary .NET syntax and the class library. However, macro code also has access to a special set of Visual Studio .NET extensibility objects, which you won't recognize. These objects are used to interact with windows, insert and read editor text, and so on. For example, to add a new line into the editor, you would use the following macro code: ' Get the current insertion point (where the cursor is positioned). Dim TS As TextSelection = DTE.ActiveDocument.Selection ' Move to the end of the line. TS.EndOfLine() ' Add a new line (the programmatic equivalent of pressing Enter). TS.NewLine() ' Insert some text. TS.Insert("Sample Text") All of these objects are contained in a special EnvDTE namespace, which is contained in the EnvDTE.dll assembly. This assembly is referenced by default in all macro projects. Imports EnvDTE A good way to start learning about macros is to use the record facility, and then look at the code it generates. Visual Basic 6 included several add-ins that could automate class code, including wizards that would generate a series of property procedures. Unfortunately, Visual Studio .NET has nothing comparable. This problem becomes apparent when developers begin to design a class. Unfortunately, it's far simpler to type the following code statement: Public MyVar As String Than to create a full property procedure that ensures proper encapsulation of private data: Private _MyVar As String Public Property MyVar As String Get Return _MyVar End Get Set(ByVal Value As String) _MyVar = Value End Set End Property The solution, clearly, is to create a macro that generates property procedures automatically as needed. One approach is to create a straightforward macro that examines the text at the current insertion point, and uses it to generate a full property procedure. We'll begin coding this macro by writing a private function (which will be added to the macro module) that takes a line of text representing a private variable declaration, and creates a corresponding property procedure declaration. In this case, the code assumes that private variables will always start with a leading underscore ( _), which will be trimmed from the name of the property procedure. Depending on your conventions, the code may need to be adjusted. Private Function GetInsertion(ByVal text As String) As String Dim Words() As String = text.Trim.Split() If Words.Length < 4 Then ' This line is not a valid variable declaration. Return "" Else Dim Insertion As String Insertion = " Public Property " & Words(1).Trim("_") Insertion &= " As " & Words(3) Insertion &= vbNewLine Insertion &= " Get" Insertion &= vbNewLine Insertion &= " Return " & Words(1) Insertion &= vbNewLine Insertion &= " End Get" Insertion &= vbNewLine Insertion &= " Set(ByVal Value As " & Words(3) & ")" Insertion &= vbNewLine Insertion &= " " & Words(1) & " = Value" Insertion &= vbNewLine Insertion &= " End Set" Insertion &= vbNewLine Insertion &= " End Property" Insertion &= vbNewLine & vbNewLine Return Insertion End If End Function The next step is to create a macro that uses this function. The code below selects the current line, generates a property procedure using GetInsertion(), and adds it after the current The following figure shows this macro in the macro explorer. You can double-click it to run it on the current editor line. It's now easy to extend this macro to work with multiple lines. The ExpandPrivateMembersFromSelection() macro below passes each selected line to the GetInsertion() function, and builds up the returned text (any blank lines will be ignored by GetInsertion() automatically). Then, the full text block is inserted. For Now you simply need to type the following code: Private _MyVar As String And run one of the two macros to generate a corresponding property procedure for any basic data type. At this point, you may be wondering if it's possible to write this code in a language-independent way to support any .NET language. This functionality is called the CodeDOM in the .NET Framework. Using the CodeDOM is a complex operation that is outside of the scope of this article. Macros are a powerful tool in for automating repetitive tasks in any application. However, when used correctly, macros can become much more, and help enforce standards, promote good design, and improve consistency across an entire organization. One way is by using add-ins or macros that generate certain code structures in a standardized fashion. You can download the code for this article at.
http://archive.oreilly.com/lpt/a/2897
CC-MAIN-2015-22
refinedweb
1,112
55.03
Genksyms normally looks for explicit symbol table definitions in the source file, which are recognized by the construct X(symbol) All definitions and declarations of typedef, struct, union and enum will be saved for later expansion. Every global symbol will also be saved, together with pointers that will enable a full expansion later on. When a symbol table is found in the source (also see the -g option) looks like: symbol_R12345678, where 12345678 is the hexadicimal representation of the CRC. There are some obscure (but legal) tricks used with the C preprocessor, that will mark all relevant symbols in the object file, and in any exported symbol tables. I recommend a study of the first part of <linux/module.h> and the two companions: <linux/symtab_begin.h> and <linux/symtab_end.h> The options are as follows: cc -E -D__KERNEL__ -D__GENKSYMS__ -DCONFIG_MODVERSIONS -DEXPORT_SYMTAB /linux/kernel/ksyms.c | genksyms /usr/include/linux/modules This will create the file /usr/include/linux/modules/ksyms.ver, which contains version information for the symbols found in ksyms.c If you want to create your own symbol table in the kernel, or in a module, the skeleton looks like: #include <linux/module.h> ... int my_export; ... static struct symbol_table my_symtab = { #include <linux/symtab_begin.h> X(my_export), #include <linux/symtab_end.h> }; ... routine_init() { ... register_symtab(&my_symtab); ... } That is all there is to it! Just make sure that the call to register_symtab is done in the context of the module initialization, or, if you are calling from a kernel resident function; before any modules have been loaded. The last restriction might be lifted, if I decide to...> Table of Contents
http://www.fiveanddime.net/man-pages/genksyms.8.html
crawl-003
refinedweb
269
57.06
In the first post, I took a brief look at the programming languages ATS, C#, Go, Haskell, OCaml, Python and Rust to try to decide which would be the best language in which to write 0install (which is currently implemented in Python). Now it’s time to eliminate a few candidates and look in more detail at the others. Last time, I converted 4 lines of Python code from 0install into each language. This time I’m converting 576 lines, so this should give a more accurate picture of how each performs for a real-world task. As before, note that I’m a beginner in these languages. Please post corrections or suggestions of better ways of doing things in the comments. Thanks. (this post also appeared on reddit and on Hacker News, where there are more comments) Table of Contents - Conclusions from last time - Test case - Syntax - The “run” module - Data structures - Variants - Using the data structures - Handling XML - Building lists - String processing - Finding resources - API docs - Speed - Conclusions Conclusions from last time Based on the initial evaluation and feedback (thanks everyone!), I’m not going to look any further at these: - ATS - I’m glad I included ATS. Its excellent performance and tiny binary put the other languages into perspective. Still, it’s too hard to use, and makes it too difficult to separate safe code from unsafe code. - C# - Although C# is widely used, has an excellent cross-platform bytecode format and many libraries available, it is too large and too slow to start for 0install (to the people who suggested profiling: when even “Hello World” is too slow, there isn’t much you can do). - Go Although many Go users complained that Go’s score was unfairly low, they didn’t seem to disagree that it was the worst of the candidates for our requirements, only about by how much it was the worst. To summarise the discussion briefly: Go is good because errors are handled where they occur. Maybe, but ATS, Haskell, OCaml and Rust do that too, and they help you get it right. You can write pretty reliable code in Go. No doubt, since you can do it in C too. But maybe I can write even better code with less effort using the other languages? It’s OK to ignore errors silently in some places, because handling all errors would clutter the code up too much. This seems like a trade-off the other languages don’t require me to make. - Rust - Rust has excellent safety and a familiar imperative syntax. It also has excellent support for shared libraries, which I didn’t understand when I wrote the previous post (although I don’t feel too bad about this as it seems that almost no-one else in the Rust community understood it either). Speed is OK but not great, though likely to improve. Rust’s main weakness is its immaturity. The language is likely to change in incompatible ways in the near future and there are few libraries available (for example, there is no XML library for Rust). It will be a few years before this is usable in production (and the developers make no secret of this). So, here are the remaining candidates and a summary of the conclusions from last time: - Haskell (7.6.3) - Haskell is fast, but it has problems with shared libraries: libraries are only compatible when compiled by the exact same version of the compiler. Its pure functional style may make it very difficult to convert the existing code to Haskell. Diagnostics (e.g. getting stack-traces) may be a problem too. - OCaml (4.00.1) - OCaml is also fast and had good safety, but its support for shared libraries seems limited. Getting good diagnostics from production code may be tricky, as enabling stack-traces has a performance cost (OCaml code assumes exceptions are cheap). - Python (2.7.5 and 3.3.2) - Python is much slower than the other candidates, but it does have the advantages of an excellent standard library, easy distribution (no need to make platform-specific binaries), being the language we’re currently using, and being very well known. But it has no static type checking, which means a lot of work writing unit-tests for even trivial code (e.g. testing __repr__methods and logging in obscure error paths to make sure it won’t crash due to a typo). Several people said that D has improved a lot in the last few years. I didn’t have time to look at it carefully again, though. It’s a nice language, but probably too low-level/unsafe for us. For example, it’s trivial to make it segfault by dereferencing a null pointer. Test case 0install collects information about all available versions of a program and its libraries from around the web. Then it runs a solver to determine the best valid combination and writes the results to an XML selections document. For an app, foo, the current selections can be found in ~/.config/0install.net/apps/foo/selections.xml. When it’s time to run an application, we read this XML, set up a suitable environment for the process and then exec it. As before, this should happen as quickly as possible. The test program can be given either the name of an app (e.g. “foo”) or the path to a selections document. This task involves: - Using the XDG basedir spec to find the current selections and the cached software. - Parsing XML into a suitable data structure. - Manipulating pathnames, files, directories and symlinks. - Updating environment variables based on the requested bindings. - String manipulation. - Creating launcher executables (as in the previous test). We can’t go over every line in this post, so I’ll just highlight the most interesting bits. The full version in each language is in this GitHub repository. If you get bored, there’s a summary at the end. Syntax Python I guess most people are familiar with Python. It’s clear, simple and straight-forward. It uses indentation to see when a block ends, which means that the structure of a program is exactly what it looks like. Here’s a sample of the Python main module: Note: The real Python code uses the “optparse” module to handle option parsing and generate help text. However, because the Haskell/OCaml code developed here will be used as a front-end to the real Python, we don’t want any special handling. For example, if invoked with --help they should fall back to the Python version instead of handling it themselves. So proper option parsing isn’t part of this task. OCaml OCaml syntax is very compact, but somewhat prone to confusing syntax errors. Often an error reported at a particular line means you used “;” rather than “;;” 20 or 30 lines earlier. Here’s the OCaml main module: Some useful things to know when reading the examples: let a = b in ...assigns a variable (well, constant), like a = b; ...in Python. let foo a b = ...defines a function, like def foo(a, b): ...in Python. foo (a + 1) bmeans to call foowith two arguments, like foo(a + 1, b)in Python. foo acreates a partially applied function, like functools.partial(foo, a)in Python. ()means no data (you can think of it as an empty tuple). - Module names start with a capital letter (e.g. Run.execute_selections). We assign the result of Run.execute_selections to () just to check that it didn’t return anything (so if it did, we’d get a type error rather than just ignoring it). The syntax tends to emphasise the body of functions while minimising the signature, which I’m not convinced is a good thing. Consider: What is the argument b here? What is the return type? These things are inferred by the compiler. b is an element because ZI.get_attribute_opt takes an element as its last argument. The return type is env_source, because InsertPath and Value are constructors for env_source objects. You can include the types, but generally you don’t and that makes it hard to know the type of a function just by looking at its definition in the source code. Update: ygrek points out that most text editors can tell you the type of an OCaml identifier, e.g. see these instructions for Vim. It also tends to bulk out the code: e.g. Map.find key map rather than the more object-oriented map.find key. OCaml does support objects if you want them, but the normal style seems to be to avoid them. On the other hand, it does make it very easy to see which bit of code is being called, which isn’t so obvious in an object-oriented style. One thing to watch out for in OCaml is that, unlike Python and Haskell, it doesn’t look at your indentation. Consider the following code (this is a simplified version of a mistake I actually made): This code never prints “End”, because it treats that as part of the “many” branch, even with all warnings enabled. It would be nice if it would issue a warning if a single block (e.g. the last match case) contains lines with different levels of indentation. Haskell Like Python, Haskell uses whitespace for indentation and generally avoids semi-colons and braces. Here’s the main function in Haskell: Some things you should know to help you read the examples: ::means “has type”. Functions typically declare their type at the top. myfunc :: String -> Int -> Boolmeans that myFunctakes a Stringargument and then an Intargument and returns a Bool. donotation means that the expressions inside are connected together in some type-specific way (this is quite confusing). Since mainhas the type IO (), these statements are connected by the “IO monad”, which does them one at a time, interacting with the system for each one. If the do block had a different type, it might do something completely different. a $ b $ c dmeans a (b (c d))(without them it means ((a b) c ) d). It helps to avoid having lots of close brackets. The “run” module Once we’ve loaded the selections XML document, the basic steps to execute the program are: - For each selection, get the path to its directory (for packages not provided by the distribution). - Scan the XML and collect all the bindings (things we need to set up). - Ensure the launcher helper utility is installed (see last post). - Set up environment variables requested in the bindings. - Set up any launchers requested in the bindings. - Build the argv for the new process and exec it. OCaml This is all very straight-forward: Haskell This is rather complicated. Haskell functions can’t have side effects (like, say, creating a launcher or execing a process). Instead, the function returns an IO request to main, which returns it to whatever is driving Haskell. The IO request is basically a request to perform an operation and a callback to invoke when done. To avoid this becoming a syntax nightmare, Haskell’s do notation lets you write it in an imperative style: For example, take the first line: origEnv <- getEnvironment. getEnvironment is a request to get the current environment. It has the type IO [(String, String)] - a request for a list of (name, value) mappings. The rest of the do block is effectively a function which takes an argument origEnv, of type [(String, String)] (i.e. an actual list of mappings). When this is returned by main, the system gets the mappings and then calls this function with the results. The same thing then happens with the second line, and so on. There’s one tricky bit: resolvePath takes a single selection and finds where it’s stored (which requires access to the filesystem). It returns an IO request to check whether some paths exist. But when we loop over all the selections, we get a list of IO operations, not an IO operation. So you need to use mapM (monadic map), which turns a list of IO requests into an IO request for a list. That’s a lot of thinking to do something quite simple. Doing IO in Haskell is hard. Here’s another example (reading an environment variable): I want to do a getEnv operation and return Nothing if the variable isn’t set. At first, I tried calling getEnv and catching the exception using the normal exception handing function. That doesn’t work. The reason is that getEnv doesn’t throw an exception; it successfully returns an IO request for an environment variable. You have to use the special tryIOError function to get a request for either an environment variable or an error, and then pattern-match on that. The benefit of all this extra work is that you can instantly see which functions do (or rather request) IO by looking at their type signature. Take the XML parsing function, for example: Just by looking at the type, we can see that it does no IO (and, therefore, isn’t vulnerable to attacks which might try to trick it into loading local files while parsing the DTD). It also, incidentally, suggests that we’re not going the get any useful error message if it doesn’t parse: it will just return Nothing. Data structures We need to store the user’s configured search paths, following the XDG Base Directory Specification. We need a list of directories to search for configuration settings, cached data (e.g. downloads) and “data” (which we use mainly for binaries we’ve compiled). Python For storing general records, Python provides a choice of classes and tuples. Here’s a typical class and an example of creating an instance and accessing a field: When calling any Python function or constructor, you can use the (optional) name = value syntax so you don’t have to remember the order of the arguments. Python also provides named tuples, which saves some boilerplate code when you just want pure data with no methods attached. The syntax isn’t great, though, and you can’t do pattern matching on the names, only by remembering the order: OCaml OCaml provides classes, unnamed tuples and records (essentially named tuples). A record would be the obvious choice here: Curiously, there’s no need to tell it the type of the records you’re building. It works it out from the field names. However, this does mean that you can’t define two different record types with the same field name in the same module. Also, if you want to access a field from a record defined in a different module, you need to qualify the field name, e.g. There are also “polymorphic variants” which allow the same field name to be used in different structures, but I haven’t tried using them. The manual notes that the compiler can do a better job of finding errors with plain variants, however. A big win over Python is the ability to pattern-match on field names. e.g. to extract the cache and config fields: Update: if you compile with warnings on (and you should), it will complain that you’re ignoring the data field. This is a really useful check because if you add a new field later it will tell you all the places that might need updating. You can use data = _ to ignore that field explicitly, or just _ to ignore all remaining fields. Haskell Haskell doesn’t have classes (in the Python / OCaml sense), but it does provide named tuples: Note that the syntax for accessing a field is field record, not record.field as in other languages. Like OCaml, you can’t use the same field name in different structures. Pattern matching works: However, it doesn’t have OCaml’s short-cut for the common case where the field you want to match has the same name as the variable you want to store it in (to use Python terms). In fact, doing that is strongly discouraged, because if you matched with config = config, then that would shadow the config function used to access the record. Variants Sometimes, a value can be one of several sub-types. Bindings in 0install are a good example of this: There are two types of binding: an EnvironmentBindingsets an environment variable to tell a program where some resource is, while an ExecutableBindinggives the program an executable launcher for another program. An EnvironmentBindingcan be used to find a path within the selected component, or to provide a constant value. An EnvironmentBindingcan affect its variable in three different ways: it can append the new value to the end of the old value, prepend it at the start, or replace the old value completely. An ExecutableBindingcan store the launcher’s location in a variable or add the launcher’s directory to the application’s $PATH. Here’s an example of an environment binding which prepends a package’s lib directory to CLASSPATH: The code that parses the XML and generates a list of bindings needs to store different values depending on which kind it is. For example, “append” and “prepend” bindings let you specify optional separator and default values, while “replace” ones don’t. Then, the code that applies the bindings needs to handle each of the separate cases. We’d like to make sure that we didn’t forget to handle any case, and that we don’t try to access a field that’s only defined for a different case. Python The traditional object-oriented way to handle this is with subclasses (e.g. ExtendEnvironment with default and separator fields, and ReplaceEnvironment without; InsertPath and Value, etc). However, that’s a lot of classes to write, so 0install actually does everything in just two classes: Note the use of strings ( PREPEND, APPEND, REPLACE) in place of a proper enum, as Python doesn’t have them. OCaml Here’s a definition in OCaml. Variants use | to separate the various possibilities: That’s far more useful than the Python. It accurately describes the possible combinations, and is clear about the types and which bits are optional. Using varname and filepath as aliases for string doesn’t add any type safety, but it does make the signatures easier to read and gives better error messages. Note that the extra | on the first line after type binding isn’t strictly necessary, but it helps to line things up. Haskell This is essentially the same as the OCaml, except that I used tuples rather than records in Binding, because handling records is more awkward in Haskell due to the pattern matching problems noted above. Using tuples (which I could have done in OCaml too) makes the definitions shorter, because the definition of a tuple can be done in-line instead of with a separate structure. deriving Show causes Haskell to automatically generate code to convert these types to strings, which is handy for debugging (and sadly missing from OCaml). Using the data structures To apply the bindings, a runner module needs to collect all the bindings and then: - Process each EnvironmentBinding, updating the environment. - Process each ExecutableBinding, using the new environment to create the launchers. Here, we’ll look at the code to process an EnvironmentBinding. Python Here’s the Python code for applying an <environment> binding: The code for getting the value to append to is a bit messy. We’re trying to say: - Use the current value of the environment variable. Or, if not set: - Use the defaultfrom the <environment> element. Or, if not set: - Use the built-in default value for this variable. Python often makes this easy with its a or b or c syntax, but had to use the longer if syntax for this case because or treats the both None and the empty string as false, whereas we want to treat an empty string as a valid value. Actually, I noticed while writing this post that I got that wrong in the Python (I used the shorter or in one place) and had to fix it; a common mistake in Python. The two binding types must be handled differently. In the run code, we (rather messily) check the type to decide what to do: This isn’t a very object-oriented way to do it. But it made more sense to put the logic for handling the bindings all together in the run module rather than in the module which defines the data types (which I prefer to keep side-effect free). Also, the rule that we need to process all the EnvironmentBindings before all of the ExecutableBindings can’t easily go in the classes themselves. So, the existing Python code is really pretty poor. We’re using strings to simulate enums (simple variants), a single class with a load of if statements in place of variants for the different ways of setting an environment variable, and messy isinstance checks to let us keep the logic for applying bindings together in the run module. If we add or change binding classes, there’s several places we need to check, and no static checking to help us. Let’s see if the other languages can help us do better… OCaml Here’s the code to apply an EnvironmentBinding in OCaml: It’s not bad, and it’s nice to see all the different cases laid out. By the way let do_env_binding env impls = function ... is a convenient way to pattern match on the last (unnamed) argument. It means the same as let do_env_binding env impls binding = match binding with .... I initially thought that the Python version was easier to read. On reflection, however, I think it’s more subtle. It’s easier to see what the Python version does, but it’s not easy to see that what it does is correct. By contrast, it’s easy to see that the OCaml version handles every case (the compiler checks this), and you can just check that each individual case is handled correctly. As in the Python, getting old_value is a bit messy as there’s no null coalescing operator in OCaml: Haskell And here is the code to apply an EnvironmentBinding in Haskell: That’s a good bit shorter than both the Python and the OCaml, because I was able to use a `mplus` b handle the defaults easily. That’s “monadic plus”, in case you were wondering whether mplus is a bit of a silly name. Handling XML We need to parse the XML into some kind of internal representation. Originally, 0install parsed the XML into custom classes, but it turns out that we often want to write XML back out again, preserving attributes and elements we didn’t understand. So we’ve been slowly moving towards using generic XML trees as the in-memory representation, which saves having to write code to serialise our data structures as XML (which then requires ensuring that they’re consistent with the parsing code). Python Python’s standard library includes a selection of XML parsers: minidom, pulldom, ElementTree, expat and SAX. Disappointingly, the documentation says that none of them is safe with untrusted data. 0install uses the low-level “expat” parser, which isn’t in the vulnerabilities table, so hopefully we’re OK (many of the vulnerabilities are denial of service attacks, which isn’t a big problem for us). We build our own “qdom” tree structure rather than use the more standard minidom module because import xml.dom.minidom takes too long to be used in this speed-critical code (or at least, it did when I wrote the qdom code, I haven’t tested it recently). One of the nice things about writing in the other languages is not having to worry about speed all the time. OCaml OCaml doesn’t include an XML parser in the standard libraries, so I Googled for one. The first one I found was Xml-Light, but it turned out that it didn’t support XML namespaces. Then I tried the PXP parser, which is enormous, and parsed the test document I gave it incorrectly (but they have fixed that now - thanks!). Finally, I tried Xmlm, which is small and works. Xmlm doesn’t generate a tree structure, just events (like SAX), so you have to build your own structure. That could be a problem in some cases (it’s convenient to have a standard data structure in case you want to pass documents between modules). On the other hand, we already use our own document structure (“qdom”) in Python and the standard “DOM” interface is awkward anyway. Xmlm suggests the following structure: However, this is quite annoying to process, because every time someone gives you a tree you have to pattern match and handle the case of them giving you a D (a text node). Instead, I used ElementTree’s trick of attaching text to element nodes: With this, it’s easy to iterate over all the elements and ignore text, and you don’t have to worry about getting two text nodes next to each other. You generally don’t need text_before unless you’re using mixed content, but having it here means we don’t lose any data if we read a document and then write it out again. It was convenient to have the structure be mutable while building it, and in other code we may want to manipulate nodes, so I marked most of the fields as mutable. I made a serious mistake in my first attempt at pattern matching on elements. I wanted to match the elements <arg> or <for-each> in the 0install namespace: This is the worst kind of mistake: the kind that seems to work fine when you test it. I made the same mistake in Haskell, but luckily I decided to enable warnings ( ghc -Wall) and spotted the problem. The code above doesn’t check that the namespace is equal to xmlns_feed. Instead, it creates a new xmlns_feed binding with whatever the namespace actually was. Obvious in hindsight (note: turning on warnings in OCaml also catches the mistake, because it complains that xmlns_feed is unused). So, how can we fix this? Dealing with namespaces all the time when processing XML is annoying even when you get it right, so I decided to make a helper interface for doing queries in a particular namespace. Of course, I’d like this to generalise to other namespaces. I found an OCaml feature called “functors”, which seem to be basically module templates. Here’s how I made a namespace-specific query interface: Then I create a specialised version of this module for our namespace in constants.ml: With this, ZI.map applies a function to all elements in the 0install namespace and with the given name. ZI.tag returns the local name for 0install elements or Nothing for others, etc. Now the original code becomes: Much better! Haskell Again, there’s a choice of library. I went for Text.XML.Light which seems to work well. Dealing with namespaces is fairly painful; I’m not convinced that I’m using it right. Here’s how I find the <runner> element inside a <command>, for example: Using pattern matching doesn’t seem to improve things: I didn’t find any way to create a namespace-specific interface as in OCaml (Haskell “functors” do something different). Testing it on malformed XML, it sometimes just returns Nothing, which is rather unhelpful, and sometimes it successfully parses it anyway! For reference, here is a test document that Haskell loads happily: Python and OCaml, by contrast, both detect the problem and report the location of the error. Building lists We need to walk the XML tree and return all the bindings, in document order. The places where a binding may be found are: <selection> / [binding] <selection> / [dependency] / [binding] <selection> / <command> / [binding] <selection> / <command> / [dependency] / [binding] For bindings in a dependency, we need to know the dependency’s interface. For other bindings, we want the selection’s own interface. Python Python’s easy syntax for sets provides one way to do it, and its append method on lists makes it easy to collect the results: I liked the ZI namespace query thing I made in OCaml so much, I made a Python class to do the same thing. OCaml I originally wrote this in a purely functional style, threading the results list through all the functions using fold_left. That was pretty messy. Then I decided to take advantage of OCaml’s support for imperative programming and just mutate a single list ( bindings), as in the Python, which worked much better. Note the use of named arguments ( process ~deps:true) to avoid confusion between the two boolean arguments. This works like process(deps = true) in Python. A strange aspect of OCaml is that you generally write loops backwards, first declaring the code to handle each item and then calling an iter function to say what you want to loop over. OCaml does have a for loop, but it’s one of those old-fashioned BBC BASIC style ones that just updates a counter in that particular way that programmers never actually want to do. Note also that in OCaml you add to the start of a list, not the end, so we need to reverse it as the last step. Update: You can use the pipe operator ( |>) to make loops easier to write. It lets you write the input to a function first: Haskell Nice and short again, but again it requires some explanation. The do expressions here (unlike the previous one) have the list type, so they do the lines of the block in a way appropriate for lists. For lists, x <- items means to run the rest of the do block once for each item in the list items, producing a list for each one, and then join the resulting lists together end-to-end. So these do blocks are actually nested loops. There are no named arguments, so we just have to remember what the True and False mean here. Also, the | in the matches doesn’t separate alternatives (as in OCaml), but instead gives a condition which must also be true for it to match, like when in OCaml. String processing We need to expand environment variables in arguments. In the XML, it looks like this: Python I guess that’s cheating, since I picked Python’s preferred syntax when I designed the XML format. Here’s how to do it without using Template: Easy. Notice the handy r"..." syntax for raw strings, which avoids the need to escape backslashes everywhere. Oh, and I seem to have written an OCaml-style backwards loop here. Guess they’re not unique to OCaml after all. OCaml The regex is looking a bit ugly, but still pretty good. Oddly, instead of getting some kind of MatchResult object passed to expand, we just get the original string and then pass that to the global match function. I guess that’s just how the underlying library works. Haskell This was hard, but stackoverflow to the rescue! Finding resources When shipping stand-alone binaries or 0install packages, it’s useful if the program can find other resources in its own code directory. In our case, we need to find the launcher program so we can symlink to it. Python In Python, this is really easy. Every module has an attribute __file__ which has the location of that module: OCaml When called as a binary, we can get the path from argv[0] and use that. If we’re being used as a library then we’re either installed as a system package (and can look in some hard-coded path) or being run as a 0install dependency (and can get 0install to set an environment variable for us). Easy enough, but abspath (and realpath) are missing from the standard library. Haskell Wow. I have no idea how this works. API docs All three languages make it easy to generate cross-referenced API docs. I didn’t bother to fill in many strings for the OCaml and Haskell, but the linked sample pages show what it looks like: Python and Haskell require you to document the types in the code if you want them to appear in the docs. The Python syntax for this is a bit awkward. OCaml infers the types automatically. Speed My test case is running 0release (which has the largest selections.xml of all my apps), except that I edited the selections so it runs /bin/echo at the end rather than /usr/bin/python2. Otherwise we’re just measuring the speed of Python. Running echo directly with a trivial (hard-coded) C program takes 2-3 ms, so that’s the theoretical best time. Enabling stack traces in OCaml ( OCAMLRUNPARAM=b) increased the time from 7ms to 8ms. “Python (test)” is a version I wrote for this comparison to get a feel for the number of lines. It doesn’t do as much checking as the real version and lacks some features. “Python (real)” is the regular “0launch” command. The first number is the time with Python 2, the second with Python 3. It’s interesting how much faster the test version is! By the way, if you’re thinking “parsing XML just to set up environment variables is inefficient; I’ll just use a shell script to start my program”, a /bin/sh script which just execs “echo” takes 10 ms. Conclusions All three languages are easy to read, avoiding verbose boilerplate code, and ending up fairly similar in length. No doubt an expert in OCaml or Haskell would write much shorter code than I did. OCaml’s syntax is simple, but the indentation doesn’t necessarily match up with OCaml’s interpretation of the code, which is a concern. In Python and Haskell, the compiler always sees the same structure as the programmer. Haskell provides all kinds of special notations for writing shorter code. This means that reading other people’s Haskell code can be very difficult. Also, the same structure (e.g. do) can mean wildly different things in different contexts. OCaml syntax is easy to learn and easy to understand. Many things that are simple in Python or OCaml become very complex in Haskell. In this example: reading an environment variable that may be unset, doing any kind of IO, reporting errors in XML documents, search-and-replace in strings and reading argv. And although not part of this test, I think the converting 0install’s solver to Haskell would be very difficult. Haskell’s ability to control where IO happens is useful, but other languages (e.g. E and (hopefully) OCaml with Emily) achieve the same thing without all the complexity. Python’s data structures are very limited and error prone. OCaml and Haskell record and variant types significantly improve the code. OCaml makes working with record types easier, while Haskell simplifies debugging by making it easy to convert structures to strings. XML handling was easiest in OCaml, though I did have to make my own tree structure. Python’s XML libraries aren’t safe and Haskell’s doesn’t report (or even detect, in some cases) errors. Walking the XML tree and building a list was easy in all three languages. In Python and OCaml, because they let us mutate a single list while walking the tree, and in Haskell thanks to its List monad which handles everything behind the scenes. A major benefit of OCaml and Haskell is the ease of refactoring. In Python, once the code is working it’s best not to change things, in case you break something that isn’t covered by the unit-tests. In OCaml and Haskell, you can rename a function, delete old code or change a data structure and rely on the compiler to check that everything’s still OK. The static typing in OCaml and Haskell also gives me more confidence that that various untested error reporting code paths will actually work. Untested Python code is buggy code, generally. I’m not sure how stable the OCaml library APIs are (I assume they don’t change much), but it’s clear that Haskell’s APIs change frequently: code samples and documentation I found on the ‘net were frequently out of date and wouldn’t compile without changes. Python is generally pretty good, if you overlook the massive changes in Python 3. In the previous (trivial) test, OCaml and Haskell had very similar performance, but here OCaml clearly moves into the lead. Python is far behind, and only getting slower with the move to Python 3. Being able to write code without worrying about speed all the time is very liberating! The two statically typed languages didn’t require much debugging, except that OCaml’s find function throws an exception if the key isn’t found rather than using an option type (as in Haskell), which can lead to non-obvious errors. Turning on stack-traces in OCaml makes it easy to track these down however, and making my own wrappers for find and similar should mean it won’t be a problem in general. The big surprise for me in these tests was how little you lose going from Python to OCaml. You still have classes, objects, functions, mutable data and low-level access to the OS, all with an easy and concise syntax, but you gain type checking, much better data structures and a huge amount of speed for no additional effort. Why aren’t more people using it? Although Haskell and OCaml look more similar to each other than to Python, this is just syntax. In fact, OCaml and Python are conceptually pretty similar, while Haskell is different in almost every way. Overall, binary compatibility (i.e. difficulty of making cross-platform releases of the library, which is currently easy with Python) is my main remaining concern. But while further testing is required (particularly for networking, threading and GUI support), so far I’d be happy to move to OCaml.
https://roscidus.com/blog/blog/2013/06/20/replacing-python-round-2/
CC-MAIN-2019-22
refinedweb
6,363
69.92
WebDataInterfaceDesign There is now a fair amount of RDF/XML data around, thanks to various SeedApplications and other Semantic Web efforts. How do programmers deal with this in code? What is it like to be an RDF coder? How do RDF toolkits compare to the machinery familiar to XML developers, such as SAX, DOM, XPath, XSLT...? hmm... is this an API design discusion, aiming at convergence, or just a catalog, something for? need better PPR:OpeningStatement See also: RDFAccessProtocol for network protocol ... er... network protocol something... ugh... naming cop-out.. What RDF APIs can I use? - Jena from HP Labs Semantic Web Research - redland API in C and perl, python, Tcl, Java, Ruby, PHP. The latter pages contain links to other language APIs in each of those languages. - rdflib. XML.com: Building Metadata Applications with RDF [Feb. 12, 2003] is a nice intro. - swap/cwm; see CwmTips, toIcal.py -- convert RDF to iCalendar syntax posted by DanC at 2003-04-18 15:18 - RAP - RDF API for PHP V0.6 - Home - Sesame, Java APIs from Aduna - YARS Home language-independent RESTful HTTP API and JDBC-like Java API - mozilla, ... (LinkMe) - see also: RDF API requirements and comparison, SWAD-Europe Deliverable by Jan Grant Started 2003-01-14; revised 2003-02-27. What OWL APIs can I use? - jena has OWL support, per Reynolds/HP 7 May - OWL API (bechover/volz 7 May) - Cerebra from network inference - ... WebOnt WG OWL implementation report may list others over time. Tasks Many RDF toolkits offer similar facilities. In-memory and persistent representations of RDF graphs (often but not always SQL-based in the latter case). Parsers for the RDF/XML syntax, often a serializer too. Often a graph-oriented API, sometimes with richer textual query interfaces. This is an attempt (partial/incomplete) to list some common tasks that occur when building Semantic Web applications, so that we can walk through the way the different RDF tools address these needs. When learning a new RDF package, we might ask ourselves: "How do I...?" - load a graph from a URI - create a new SQL-backed RDF store - stash some RDF there - refresh a named portion of the DB with this graph - serialize to RDF/XML - get a graph that matches (some blanked out statement) - do fancier queries - create a... statement... graph... term... Language integration One pattern is to make APIs computer language independent so that the same libraies can be shared by users of many languages. Another is to make the interface use as many nice features of the language as possible. When using RDF seriously, one tends to need access to the properties of things just as often and just as easily as access to the slots in an object. This suggests a value for a specialized language-specific interface. Also, one needs a neat way of using namespaces. For example, in python, which is object-oriented and allows the object model to be bent in all kinds of ways, namespaces can be used as though they were dictionaries (rdf['type']) or objects (rdf.type) and objects in an RDF store can be represented by python objects. There have been various implementations of this in python, including Mark Nottingham's Sparta, Aaron Schwartz's Tramp, Andrew MK's thoughts and the Namespace class in cwm. To take this to a logical extreme, one makes a new language, a mixture of a declarative RDF langauge (like N3) and a procedural language (like python). Adenine is an example. genesis: Apr 2003 discussion among DanBri, DanC, eikeon.
http://www.w3.org/wiki/WebDataInterfaceDesign
CC-MAIN-2016-18
refinedweb
589
64.71
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. The Asset Store4:53 with Nick Pettit The Unity Asset Store is a marketplace of free and paid game assets. You can download scripts, editor extensions, 3D models, sounds, textures, and even entire projects. Any time you're creating a Unity project, it's a good idea to use the Asset Store as a resource. Resources - The Unity Asset Store - The Unity Asset Store can be viewed directly in the Unity editor, but it's also accessible from a web browser. Unity Documentation - Asset Store - This section of the Unity Manual details the Asset Store, including how to publish your own assets. - 0:00 The Unity Asset Store is a marketplace of free and paid game assets. - 0:06 You can download scripts, editor extensions, 3D models, - 0:09 sounds, textures, and even entire projects. - 0:12 Let's take a look. - 0:14 The Asset Store can be accessed from either a web browser or - 0:18 from directly inside the Unity editor. - 0:21 You can find the asset store in Unity by going to the window menu and - 0:26 choosing asset store. - 0:28 If this is your first time using the Asset Store, - 0:31 you may be prompted to create an account. - 0:35 Just so you can see this a little better, - 0:37 I'm going to tear off this Asset Store window and maximize it. - 0:43 The Asset Store can be an incredibly valuable resource for any Unity project, - 0:49 whether you're on a large team or if you're working by yourself. - 0:52 For example, many game developers don't have the time or - 0:56 the skills necessary to create all the custom art assets, so - 1:00 they'll fill in some gaps in their game by purchasing 3D models. - 1:04 And by contrast, - 1:06 game artists might purchase pieces of code to help speed along production. - 1:11 You can search the Asset Store for - 1:13 anything you might be looking for, or you can use the categories on the right. - 1:19 For example, let's choose 3D models and vegetation. - 1:26 Here, you'll find hundreds of vegetation models including flowers, trees, and more. - 1:33 You can sort them all by using factors like price, - 1:37 popularity, rating, and so forth. - 1:40 Let's look at the top rated assets in vegetation. - 1:45 And then, let's click one of them. - 1:49 When you click an asset listing you'll be taken to a page where you can - 1:53 download the asset. - 1:54 If the asset is paid you'll see a button with a price. - 1:58 If it's free, as is the case with this particular asset, you'll see a button - 2:03 that says download which allows you to download the asset into Unity. - 2:07 I'm going to click this button to download this free asset. - 2:11 In some instances, you might see a window that says, - 2:14 possible incompatibility with Unity 5, and this will let you know - 2:19 if there are any incompatibilities with the current version of Unity. - 2:23 In most cases, this is fine, so I'm just going to click Accept. - 2:28 This will download the asset and open an import window. - 2:31 We can't see it right now because it's behind the asset store window. - 2:35 I'm going to click this button to unmaximize the asset store window. - 2:41 And then we can see the import package window. - 2:44 And I'll click Import, to import this asset. - 2:49 if the asset it not compatible with the latest version of Unity it may say that - 2:54 there is an API update required. - 2:56 This window tells you that you should make the backup of your project, before - 3:01 up grading assets like this, but I'm going to say I made a backup, go ahead. - 3:06 Once the asset has finished importing, you'll find it in the project window. - 3:11 So here we have a folder that says palm trees, and - 3:15 this is the asset we just downloaded. - 3:17 I don't actually want this in my project though, so - 3:20 I'm going to click on the folder, hit the delete key, and then confirm the deletion. - 3:27 Let's go back to the Asset Store. - 3:32 At the top of the Asset Store window there's Forward and Back buttons. - 3:39 And there's also a Home button. - 3:42 So you can navigate the Asset Store just like you can using a web browser. - 3:46 In addition, there's also a shopping cart button, - 3:50 if you have several items that you'd like to purchase. - 3:54 And finally, there's a button to get to the Downloads Manager. - 3:59 Let's click this button and go there. - 4:02 The Downloads Manager shows you the progress of downloads, - 4:05 as well as, assets that you've already purchased. - 4:08 From this screen, you can download assets you've purchased on your account, - 4:13 import downloaded assets into your Unity project, or - 4:17 update assets that you've already imported into your project. - 4:22 That covers the basics of the Asset Store, - 4:25 but as you can imagine, there's a lot more to explore. - 4:28 Anytime you're creating a Unity project, - 4:31 it's a good idea to use the Asset Store as a resource. - 4:34 For example, if you're coding up a new feature, - 4:37 or thinking about modeling a prop that appears in lots of other games, - 4:42 try saving time by checking the Asset Store first. - 4:46 Even if you don't end up downloading or - 4:48 purchasing anything, you may get some inspiration for your own work.
https://teamtreehouse.com/library/unity-basics/assets-and-game-objects/the-asset-store
CC-MAIN-2017-09
refinedweb
1,044
70.33
: > Simon, what you will say about the following plan? > > ghc/win32 currently don't support operations with files with Unicode > filenames, nor it can tell/seek in files for positions larger than 4 > GB. it is because Unix-compatible functions open/fstat/tell/... that > is supported in Mingw32 works only with "char[]" for filenames and > off_t (which is 32 bit) for file sizes/positions > > half year ago i discussed with Simon Marlow how support for unicode > names and large files can be added to GHC. now i implemented my own > library for such files, and got an idea how this can incorporated to > GHC with minimal efforts: > > GHC currently uses CString type to represent C-land filenames and COff > type to represent C-land fileseizes/positions. We need to > systematically change these usages to CFilePath and CFileOffset, > respectively, defined as follows: > > > > and of course change using of withCString/peekCString, where it is > applied to filenames, to withCFilePath/peekCFilePath (this will touch > modules System.Posix.Internals, System.Directory, GHC.Handle) > > the last change needed is to conditionally define all "c_*" functions > in System.Posix.Internals, whose types contain references to filenames > or offsets: > > #ifdef mingw32_HOST_OS > foreign import ccall unsafe "HsBase.h _wrmdir" > c_rmdir :: CFilePath -> IO CInt > .... > #else > foreign import ccall unsafe "HsBase.h rmdir" > c_rmdir :: CFilePath -> IO CInt > .... > #endif > > (note that actual C function used is _wrmdir for Windows and rmdir for > Unix). of course, all such functions defined in HsBase.h, also need to > be defined conditionally, like: > > #ifdef mingw32_HOST_OS > INLINE time_t __hscore_st_mtime ( struct _stati64* st ) { return > st->st_mtime; } #else > INLINE time_t __hscore_st_mtime ( struct stat* st ) { return > st->st_mtime; } #endif > > That's all! of course, this will broke compatibility with current > programs which directly uses these c_* functions (c_open, c_lseek, > c_stat and > so on). this may be issue for some libs. are someone really use these > functions??? of course, we can go in another, fully > backward-compatible way, by adding some "f_*" functions and changing > high-level modules to work with these functions
http://www.haskell.org/pipermail/glasgow-haskell-users/2005-November/009325.html
CC-MAIN-2014-41
refinedweb
333
54.12
. View the Video My article in asp.netPRO Magazine that compliments the video can be viewed here. Thanks to Orcsweb for hosting my website () and for giving me space to host videos! If you're looking for a host I highly recommend them. You won't find better service anywhere. Great video! 30 minutes covers everything. We need more videos like this, not just the Hello World video's using drag 'n drop. I have a couple of questions: 1. Why not store the schema in the Model folder? 2. Why bind the data tier to a webapp by using webconfigurationmanager and httpcontext? 3. Why bother with an array, wouldn't it be better to just pass the generic list? Could you do a video where you have a totally different model from the database? Thanks again! Adding blog to favorites.... Glad you enjoyed the video. As far as your questions, if I stick the schema in the app_code folder it'll automatically generate a strongly-typed DataSet for me. While that's fine, it's not what I wanted to show here. I prefer a pure streaming approach where possible with small objects being passed between layers to keep overhead down to a minimum. As far as question 2, there are certainly different ways to handle that (such as the more generic ConfigurationManager...I probably should've used that actually) but, the goal was to keep it fairly simple. Trace listeners or writing to the event log could certainly be used instead of directly calling Trace.Warn(). Since I knew the video would already be fairly long I went with the quick approach. :-) I typically am working on Web apps or Web Service apps so HttpContext is always available...but to make it "truly" re-useable across maybe desktop apps then I'd certainly choose a different approach for logging errors. Since this was specifically for ASP.NET I went the route shown in the video though. For the List<Model.Customer>, that would be fine to return of course, but I prefer to keep the overhead of objects passed between the layers as minimal as possible. Just a personal choice. If I knew the collection would have to be increased potentially then I'd pass the List<Model.Customer>. Nice..concise and to the point. Helpful for newbies. thnks for you nice post. it's really usefull. Why the video can't be downloaded? You can download it. Once on the video page, right-click on the view in media player link and select Save Target As. That will let you download it to your machine. Thanks for a great video. I'm starting off a small project with this methods right away. A little question though: When creating my .cs-file from the .xsd the .cs-file wants to be set as a partial class below the .xsd (unless I set the output directory). As far as i can see I have used the same parameters as you do in the video (apart from setting a fully qualified namespace which you pointed out). Is there any special setup or any occasions where this is supposed to happen? Hi Dan, This is the best Video I have ever seen in his category. Please post more video in some advance level. Thanks, Mazhar Great video. One question. How would you bind the list of customers from the aspx codebehind to a gridview? I would normally just use the ObjectDataSource to bind the list of customers returned from the business layer. But, you could always write code to assign them to the GridView's DataSource property and then call DataBind(). I'm teaching a C# programming course this week at Interface Technical Training and promised that I'd Pingback from Ian Joyce » Blog Archive » links for 2007-02-11 Pingback from Connections - Page 2 - ASP.Net Development
http://weblogs.asp.net/dwahlin/archive/2007/02/05/video-creating-an-n-layer-asp-net-application.aspx
crawl-002
refinedweb
647
76.42
Data Science - Linear Regression Case Case: Use Duration + Average_Pulse to Predict Calorie_Burnage Create a Linear Regression Table with Average_Pulse and Duration as Explanatory Variables: Example import statsmodels.formula.api as smf full_health_data = pd.read_csv("data.csv", header=0, sep=",") model = smf.ols('Calorie_Burnage ~ Average_Pulse + Duration', data = full_health_data) results = model.fit() print(results.summary()) Example Explained: - Import the library statsmodels.formula.api as smf. Statsmodels is a statistical library in Python. - Use the full_health_data set. - Create a model based on Ordinary Least Squares with smf.ols(). Notice that the explanatory variable must be written first in the parenthesis. Use the full_health_data data set. - By calling .fit(), you obtain the variable results. This holds a lot of information about the regression model. - Call summary() to get the table with the results of linear regression. Output: The linear regression function can be rewritten mathematically as: Rounded to two decimals: Define the Linear Regression Function in Python Define the linear regression function in Python to perform predictions. What is Calorie_Burnage if: - Average pulse is 110 and duration of the training session is 60 minutes? - Average pulse is 140 and duration of the training session is 45 minutes? - Average pulse is 175 and duration of the training session is 20 minutes? Example return(3.1695*Average_Pulse + 5.8434 * Duration - 334.5194) print(Predict_Calorie_Burnage(110,60)) print(Predict_Calorie_Burnage(140,45)) print(Predict_Calorie_Burnage(175,20)) The Answers: - Average pulse is 110 and duration of the training session is 60 minutes = 365 Calories - Average pulse is 140 and duration of the training session is 45 minutes = 372 Calories - Average pulse is 175 and duration of the training session is 20 minutes = 337 Calories Access the Coefficients Look at the coefficients: - Calorie_Burnage increases with 3.17 if Average_Pulse increases by one. - Calorie_Burnage increases with 5.84 if Duration increases by one. Access the P-Value Look at the P-value for each coefficient. - P-value is 0.00 for Average_Pulse, Duration and the Intercept. - The P-value is statistically significant for all of the variables, as it is less than 0.05. So here we can conclude that Average_Pulse and Duration has a relationship with Calorie_Burnage. Adjusted R-Squared There is a problem with R-squared if we have more than one explanatory variable. R-squared will almost always increase if we add more variables, and will never decrease. This is because we are adding more data points around the linear regression function. If we add random variables that does not affect Calorie_Burnage, we risk to falsely conclude that the linear regression function is a good fit. Adjusted R-squared adjusts for this problem. It is therefore better to look at the adjusted R-squared value if we have more than one explanatory variable. The Adjusted R-squared is 0.814.. Conclusion: The model fits the data point well! Congratulations! You have now finished the final module of the data science library.
https://www.w3schools.com/datascience/ds_linear_regression_case.asp
CC-MAIN-2021-17
refinedweb
483
50.94
1791 Modified Files: BUGS version.lisp-expr Log Message: 0.8.12.9: Indentation change to debug-dump.lisp ... resulting from complete failure to find where source info is conditionally dumped on (debug 2). Also log the (SETF VALUES) bug Index: BUGS =================================================================== RCS file: /cvsroot/sbcl/sbcl/BUGS,v retrieving revision 1.405 retrieving revision 1.406 diff -u -d -r1.405 -r1.406 --- BUGS 26 Jun 2004 14:33:42 -0000 1.405 +++ BUGS 29 Jun 2004 12:13:44 -0000 1.406 @@ -1591,3 +1591,13 @@ should signal an invalid-method-error, as the :IGNORE (NUMBER) method is applicable, and yet matches neither of the method group qualifier patterns. + +340: SETF of VALUES using too many values + (reported by Kalle Olavi Niemetalo via the Debian bug system, with + bug id #256764) + + (let ((a t) (b t) (c t) (d t)) + (setf (values (values a b) (values c d)) (values 1 2 3 4)) + (list a b c d)) + should return (1 NIL 2 NIL), but under sbcl-0.8.12.x returns + (1 2 3 4) instead. Index: version.lisp-expr =================================================================== RCS file: /cvsroot/sbcl/sbcl/version.lisp-expr,v retrieving revision 1.1681 retrieving revision 1.1682 diff -u -d -r1.1681 -r1.1682 --- version.lisp-expr 29 Jun 2004 10:02:48 -0000 1.1681 +++ version.lisp-expr 29 Jun 2004 12:13:45 -0000 1.1682 @@ -17,4 +17,4 @@ ;;; checkins which aren't released. (And occasionally for internal ;;; versions, especially for internal versions off the main CVS ;;; branch, it gets hairier, e.g. "0.pre7.14.flaky4.13".) -"0.8.12.8" +"0.8.12.9" I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/sbcl/mailman/message/11212327/
CC-MAIN-2016-30
refinedweb
315
60.92
(For more resources on Plone, see here.) Designing our intranet information architecture No one uses a knowledge system (such as our intranet) if the information stored in it is hard to find or consume. We will have to specially emphasize on thinking about not only a good navigation schema, but also a successful one for our intranet. The definition of success is different for every interested group, organization, enterprise, or any kind of entity our intranet will serve. There are a lot of navigation schemas we may want to implement, but it is our task to find out what will be more suitable for our organization. To achieve this, we will have to use both hierarchy and metadata taxonomy wisely. Obviously, the use of folders and collections will help achieve this endeavor. The first-level folders or sections are very important and we will have to keep an eye on them when designing our intranet. Also, we should not forget the next levels of folders, because they have a key role in a success navigation schema. The use of metadata, and specifically categorization of content, will also play an important role in our intranet. The continuous content cataloging is crucial to achieve a good content search and the users should be made aware of it. An intranet where the search of content is inefficient and difficult is an unsuccessful intranet, and with time, the users will abandon it. At this point, we should analyze the navigation needs of our intranet. Think about how the people will use it, how will they contribute contents to it, and how will they find things stored in it. In this analysis, it is very important to think about security. Navigation and security are closely related because most probably we define security by containers. There are some standard schemas: by organization structure, by process, by product, and so on. By organization is the most usual case. Everybody has a very clear idea of the organizational schema of an enterprise or organization, and this factor makes it easier to implement this type of schema. In this kind of schema, the first-level sections are divided into departments, teams, or main groups of interest. If our intranet is small and dedicated to one or few points of interest, then these must take precedence over the first level section folders. Keep the following things in mind: - Our intranet will be more usable if we can keep our intranet sections clean and clear - Fight against those people who believe that his (or her) department is more important than others and want to assault our intranet sections - Let them know that maintaining a good intranet structure will be more useful and will help contribute to its success Second levels are also very important. They should be perdurable in time, interesting to users of all sections, and they should divide information and contents clearly. Two subsections shouldn't contain elements of the same subject or kind. For example, these might be a typical second level: - Documentation - Meetings - Events - News - Forums, tracker, or some application specific to the current section All of these are very commonly seen in an intranet. It is a good practice to create these second-level sections in advance, so that people can adapt to them. Teach people to categorize content. This will help intranet searches incredibly and will help create collections and manage contents more effectively. If needed, make a well-known set of categories publicly available for people to use. This would prevent the repetition of categories and the rational use of them. Notice that there can be several types of categories: Subject: Terms that describe the subject of the content - Process: Terms that identify the content with the organizational process - Flags: Flags such as Strongly Recommended - Products: Terms from the products, standards, and technology names that describe the subject matter of the resource - Labels: Terms used to ensure that the resource is listed under the appropriate label - Keywords: Terms used to describe the resource - Events: Terms used to identify events which are recurrent with the content There are other metadata also which influence the improvement of the navigation and search abilities of the intranet such as: - Title - Description - URL, the ID of each content Don't forget to teach your users about content contribution best practices before deploying the intranet. We and our intranet users will appreciate it a lot. Once we have settled down on some practices which are best for information architecture, we should know how to use some interesting Plone features that will help us build navigation and sort the information on our intranet. (For more resources on Plone, see here.) Using collections The collection is one of the most powerful content types available in Plone. It's probably the most misused of all Plone's default content types because it is more complex and difficult to understand for non-technical or non-experienced users. A collection is a real-time query for the ZODB (for the Plone portal catalog, to be more precise), and its contents are the results of this query. The query is defined by the user, who can also define how to display the results in the collection view. The query is executed with the rights and context of the current user, and the results are consistent with the user's rights over content. This means that a collection could return different results depending on the current user. Additionally, a collection is a content type, and, like any content type, has permissions and is assigned to a workflow. Thus, a collection can be for both public or restricted use. We can use them to store recurrent queries in the database, for example, the News and Events, Plone's default view, is implemented using a collection. In both cases, we have a recurrent task of displaying all published news or events. The collection will collect all news or events objects published at that precise moment and will display them. In addition, the events collection will provide an additional filter for omitting all events that occurred in the past. I can bet you can think of other useful use cases, such as a collection, that returns all the content authored by yourself to keep track of it easily. Or maybe a collection that returns all content contributed by the members of a team or workgroup in the last month (or year). The possibilities are infinite, of course. It's in your hands to find the right use case suitable for your need. Use collections every time you need a non-hierarchal display of content and access the information in a direct and grouped way. Creating a collection A collection is as easy to create as any other content type, but there are some concepts on fields and configuration that are worth elaborating. There are two places where we can configure a collection: in the edit mode and in the Criteria tab. The edit mode holds all the common content type default fields along with the following: - Number of items: The number of items to be shown in the results. Related to the "Limit Search results" setting. - Limit Search Results: If this is selected, the results will be restricted to the number defined in the Number of items field - Table Columns: Defines the columns to be displayed in the tabular view - Display as Table: Displays the results in tabular way, with the columns defined in the Table columns field But the most important setting of a collection is the Criteria tab. Here, we define the query on the database based on the fields or attributes of the target objects and its values. The criteria form is divided into two sections: the Add New Search Criteria and the Set Sort Order. In the first one, we define the criteria. In order to do so, we need to inform what would be the object's field or attribute to search for and the value that we want to match the criteria with. This is usually done in two steps. First, define the field and the criteria type. For example, in Field name select Categories, and in Criteria type, select one of the three options that will determine how we will define the criteria: select it from a compiled list from all the available categories in the site (Select values from list), type a single category into a text box (Text), or type a list of categories separated by a carriage return (List of values). Once we select one of them, the form will change to add the new criteria. The second step will consist of defining the category (or categories) that the criteria will match with. Almost all the criteria have this two step process. We can add as many criteria as we need. Don't forget to save any change made to the criteria settings. The second part of the form is related to the ordering of the results. This order can be set against a field name and can be specified to be in the reverse order, if desired. Reverse order is useful when ordering according to dates, the most recent on the top of the collection's results. Table of contents This is a useful feature of the page content type. We can enable it through the Settings tab in the Edit mode. It adds a handy table of contents menu to the top right of the default view of the current page content type. It's formatted using the headings of the contents of the page, and is created and updated automatically. It also features relative links to the headings of the page. It's very useful on very large pages, where we want to keep access to contents clear and quick. We can see the result in the following screenshot: Next/previous navigation We can enable horizontal navigation thorough all elements of a folder via this feature. It's useful in case we have a lot of information to show on a single page. We can divide this content into smaller pieces in order to make it more usable, clear, and searchable. Put each section into different pages inside a thematic folder on the subject we are writing (name it conveniently), and check the Enable next previous navigation checkbox in the folder's Settings tab in Edit mode, as shown in the following screenshot: When we access any content in the folder, we will be able see the additional controls to navigate to the next or previous item in the folder. It works with any content type existing in the folder. The title of the next or previous item will also appear on the next/previous controls. Presentation mode Meet one of the most unknown features of Plone 3 and one of the more appreciated features by management staff. This feature enables any page content type to make available a special view called Presentation mode. This view powered by the S5 JavaScript library shows each section of the page in a presentation-like slide. So we can easily create a content page both for documenting and for easily showing its content in a projected presentation, all in the same place as all the information lives in the same piece of content. The sections are delimited by the heading styles included in a page, so we don't need to create a page for each slide. In fact, the heading style in Kupu or TinyMCE will be transformed to a h2 HTML tag that will be used by S5 to format the presentation. All content between h2 tags will be formatted as slides. The content of the h2 tag will be the slide's title and the content located between a h2 tag definition. The next one will be the slide's content. A leading slide will be added automatically with the title, description, and author of the page. The S5 engine will also render basic controls over the presentation itself for navigating around the presentation's slides. Then it will close the presentation mode and return to the normal visualization mode. Enabling the presentation mode We can enable presentation mode by editing any page content type and then clicking on the Settings tab. The Presentation mode checkbox is available there. Once checked, a link will appear in the view of the page for a user to view the page in Presentation mode. Formatting a slide Following the most popular practices on how to construct a presentation slide, the presentation mode will not render text chunks or paragraph text. Slides are meant to display concepts, ideas, and summary information. For this reason, if we want to display content in the Presentation mode, we must format it with a style other than the Normal paragraph style. For example: - Subheading - Definition list - Bulleted list - Numbered list - Literal - Pull quote - Highlight (if not inside a paragraph) We can add images too, if they are not inside a paragraph tag. It's possible that we may want to hack the HTML code in order to achieve the best results if we want to format a complex presentation. We can do this by triggering the Kupu's HTML view button. Remember that all content inside a p tag will not be rendered. If you want to make some previously created content presentation mode ready, it would possibly require some minor adjustments before it will show properly. We can use this feature to add additional support information to the presentation in our page that will not be rendered in Presentation mode. By doing this, with smart usage of page formatting, we can write a page with two purposes: holding the detailed documentation of a particular subject, along with the presentation that consists of the summary and highlights of the subject.<> We can take the default Welcome to Plone page as an example on how to proceed with Presentation mode. The following screenshot is the third slide of Plone's default page in Presentation mode view: (For more resources on Plone, see here.) Third-party content types—best practices We've already learnt about how to use Plone's default content types wisely. Now is the time to talk about third-party content types provided by third-party add-on products. Sooner or later, we will find ourselves browsing the downloads section on the Plone site and will be tempted to try a lot of products that promise wonderful features and incredible content types. We recommend you to follow some rules when dealing with third-party content types—firstly, don't rush into it and be careful. A few golden rules We should observe some golden rules before a third-party product is put in production. They are valid for all types of third-party products and not only for those that provide a new content type, of course. They are very simple and can save us from trouble: - Find out who's the author (or authors) of the product, and how many other contributions they have made to the community - Check out if the product is uploaded to the SVN collective repository (), to make sure that all the members of the community can contribute and improve it - Check how long the life cycle of the product is and if it's a final release - The product must have decent documentation that would lead us to a better understanding of what the product does (and what it doesn't do) - Check if the product has enough test coverage - Always test the product in a development environment, and if possible test it with real data Ordering the "Add new" content type menu By default, all installed content types are shown in the Add new... menu, ordered alphabetically, but we can restrict the types of content that can be added. This menu has two configurable levels. The first level is the menu that unfolds when we click on the Add new... tab and shows the allowed types. The secondary one is a view that can be accessed from the item More... of the first level drop-down menu: This allows us to seperate the most used or preferred content types from the least used or less popular ones. Thus, simplifying the allowed content types drop-down menu and making it more usable at the same time. Believe it or not, this simplification usually provides a better user experience. If we make a proper selection of the more used content types (or those which we think are more appropriate for the user to use), the users will have more possibilities to find the right content type quickly. If we give them many possibilities, the chances of choosing the wrong content type is high. The user's confidence in the intranet will drop due to not finding a suitable content type for the job they require at first glance. In the end, this simplification provides a better user experience. If the user can't find the right content type from the first selection, the user will access the extended one in the secondary types view and continue with the creation of content. The item More... will be displayed when we tell Plone which content types we want to show as primary types and which as secondary types. All users with the Manager role on a folder (except on the site root) will be able to access the item Restrictions.... It will be located in the new content type drop-down menu. This item will lead us to the restriction policy form. In this form, we can choose between Allow the standard types to be added or Specify types manually. The former is the default option; whereas the latter will open an additional form where we can specify which will be the allowed types and which will be the secondary ones. Specify the allowed (and primary) types in the upper part of the form and the secondary (the ones that will appear in the More... view) types in the lower part of the form. Content type superseding To avoid user confusion it is recommended to not maintain two (or more) content types with similar purposes. This includes content types that have similar fields, functionalities, or views. It's a good thing that all intranet users choose the same content type to perform the same kind of job. If we still want to add the new type to our allowed content types, it's recommended to hide the creation option of the superseded type. Of course, all the content already created with the old type will be preserved, but the user will not be able to create more content using it. We can use the portal_types ZMI tool to hide the old content type. To access it, click on the content type we want to hide. Inside the type form, the Implicitly addable? property controls the visibility of the content type item in the drop-down menu Add new.... Once we uncheck it, the content type will not be available to add. It's also a good practice to inform the intranet users about the change, along with the new features of the content type and any significant information they might want to know. Mantaining usability Maintaining the intranet usability is a goal we have to continuously keep in mind. Making a lot of complex content types available in our intranet types, which are hard to create and consume, will lead to an intranet which will be hardly used. Even the more experienced techie users will end up not using it. Think about the interests of the non-technical users and you have a lot gained. Upgrades If we choose the right third-party products for our intranet, it will be easier for us if we want to upgrade the product itself or Plone. If a product has heavy community support, there is a probability that it will have good upgrade options between the versions as the product evolves. The same thing happens with Plone's upgrades. The product will be ported more easily if the product is well supported. Summary We have to be very careful about the content types we make available for our users. Failing to do so will lead to really hard upgrades, product conflicts, and user confusion. In this article, we've covered the following topics: - How to organize information on our intranet - How to get the maximum out of Plone features, such as collections, previous/next navigation, and so on. Further resources on this subject: - Building our own Plone 3 Theme Add-on Product [Article] - Content Rules, Syndication, and Advanced Features of Plone 3 Intranet [Article]
https://www.packtpub.com/books/content/using-content-type-effectively-plone-intranet
CC-MAIN-2018-09
refinedweb
3,450
58.72
Can't call methods on SOAP::Lite service from WSDL description Expand Messages - My SOAP::Lite service works perfectly using a SOAP::Lite client or when I manually put the SOAP XML together from my C# client. When I try to use my WSDL file so that I can just make a web reference, it is imported with no errors but the object does not have the method I have defined. When I use the validator at I get the error "Denied access to method (getDietInfo) in class (main) at /usr/lib/perl5/site_perl/5.8.0/SOAP/Lite.pm line 2509." Some searching suggests to me that the problem is something with my namespace or soapaction, but I can't figure out what I might have incorrect in the WSDL. <a href="">Here is my wsdl</a> The method takes 2 string inputs, both in the form of yyyy/mm/dd One thing I am noticing, though. My SOAP::Lite client is passing the following SOAPAction header, SOAPAction: "", yet if I specify that full header with uri in the SOAPAction in my WSDL, SOAP::Lite returns an error to the client stating that the SOAPAction header should not have a uri, just "#getDietInfo" Any ideas what is wrong? Any WSDL gurus here that can tell me what is wrong with my WSDL (that seems the most likely place of the problem). Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/5619?o=1&d=-1
CC-MAIN-2015-14
refinedweb
245
63.02
NAME getrandom - obtain a series of random bytes SYNOPSIS #include <linux/random.h> int getrandom(void *buf, size_t buflen, unsigned int flags); DESCRIPTION The getrandom() system call fills the buffer pointed to by buf with up to buflen random bytes. These bytes can be used to seed user-space random number generators or for cryptographic purposes._NONBLOCK. getrandom() was introduced in version 3.17 of the Linux kernel. CONFORMING TO This system call is Linux-specific. NOTES Maximum number of bytes returned As of Linux 3.19 the following limits apply: *; } BUGS As of Linux 3.19, the following bug exists: * Depending on CPU load, getrandom() does not react to interrupts before reading all bytes requested. SEE ALSO random(4), urandom(4), signal(7) COLOPHON This page is part of release 4.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
http://manpages.ubuntu.com/manpages/xenial/en/man2/getrandom.2.html
CC-MAIN-2018-13
refinedweb
159
59.19
User:Echo Nolan/Reactive Banana: Straight to the Point From HaskellWiki Revision as of 05:43, 7 October 2012 1 Introduction So I'm writing this tutorial as a means of teaching myself FRP and reactive-banana. It'll probably be full of errors and bad advice, use it at your own risk. All the tutorials on FRP I've read start with a long boring theory section. This is an instant gratification article. For starters, imagine a man attempting to sharpen a banana into a deadly weapon. See? You're gratified already! Here, I'll write some code for playing musical notes on your computer, attach that to reactive-banana and build increasingly complicated and amusing "sequencers" using it. Now for a boring bit: Go install sox: apt-get install sox # Or equivalent for your OS/Distro Get the git repository associated with this tutorial: git clone Install reactive-banana cabal install reactive-banana 2 Musical interlude Cd into the git repo and open rbsttp.hs in GHCi: cd rbsttp ghci rbsttp.hs Now, we can make some beepy noises. Try these: playNote (negate 5) C playNote (negate 5) Fsharp sequence_ . intersperse (threadDelay 1000000) $ map (playNote (negate 5)) [C ..] Play with the value passed to threadDelay a bit for some more interesting noises. It's the time to wait between Notes, expresssed in microseconds. sequence_ . intersperse (threadDelay 500000) $ map (playNote (negate 5)) [C ..] sequence_ . intersperse (threadDelay 250000) $ map (playNote (negate 5)) [C ..] sequence_ . intersperse (threadDelay 125000) $ map (playNote (negate 5)) [C ..] sequence_ . intersperse (threadDelay 62500) $ map (playNote (negate 5)) [C ..] You've probably figured out by now that C and Fsharp are data constructors. Here's the definition for my Note type. -- 12 note chromatic scale starting at middle C. data Note = C | Csharp | D | Dsharp | E | F | Fsharp | G | Gsharp | A | Asharp | B deriving (Show, Enum) playNote is a very hacky synthesizer. It's also asynchronous, which is why mapM_ playNote (negate 5) [C ..] doesn't sound too interesting. Here's playNote's type. -- Play a note with a given gain relative to max volume (this should be -- negative), asynchronously. playNote :: Int -> Note -> IO () 3 Ground yourself, then insert the electrodes into the banana Everything we've done so far is plain old regular Haskell in the IO monad. Try this now: (sendNote, network) <- go1 sendNote ((negate 10), C) sendNote ((negate 10), Fsharp) Congratulations! You just compiled your first EventNetwork and sent your first Events. I know this looks like I just made a excessively complicated version of uncurry playNote, but bear with me for a moment. Let's look at the code for go1: go1 :: IO ((Int, Note) -> IO (), EventNetwork) go1 = do (addH, sendNoteEvent) <- newAddHandler let networkDescription :: forall t. Frameworks t => Moment t () networkDescription = do noteEvent <- fromAddHandler addH reactimate $ fmap (uncurry playNote) noteEvent network <- compile networkDescription actuate network return (sendNoteEvent, network) From it's type we can see that this is an IO action that returns a tuple of what is, yes, just fancy uncurry playNote and something called a EventNetwork. The EventNetwork is the new, interesting bit. The two new important abstractions that reactive-banana introduces are Events and Behaviors. Behaviors, we'll get to a bit later. Events are values that occur at discrete points in time. You can think of an Event t a(ignore the t for now) as a [(Time, a)] with the times monotonically increasing as you walk down the list. go1 has two Events in it. The first is noteEvent :: Event t (Int, Note) the one you send at the ghci prompt. The second is anonymous, but it's type is Event t (IO ()). We build that one using fmap and uncurry playNote. In general, we'll be manipulating Events and Behaviors using fmap, Applicative and some reactive-banana specific combinators. Put the weird type constraint on networkDescription out of your mind for now. The Moment monad is what we use to build network descriptions. I don't understand exactly what's going on with forall Frameworks t. => Moment t (), but it makes GHC happy and probably stops me from writing incorrect code somehow. compile turns a network description into an EventNetwork, and actuate is fancy-FRP-talk for "turn on". 4 Plug a metronome into the banana In general, to get Events from IO we'll need to use fromAddHandler. Unsurprisingly, it wants an addHandler as its argument. Let's take a look at those types: type AddHandler a = (a -> IO ()) -> IO (IO ()) fromAddHandler :: Frameworks t => AddHandler a -> Moment t (Event t a) Reactive-banana makes a pretty strong assumption that you're hooking it up to some callback-based, "event driven programming" library. An AddHandler a takes a function that takes an a and does some IO and "registers the callback" and returns a "cleanup" action. Reactive-banana will hook that callback into FRP, and call the cleanup action whenever we pause our EventNetwork. (You can pause and actuate an EventNetwork as many times as you like.) Since GHC has such great concurrency support, and we were already using threadDelay back in section 2, we're going to use a couple of threads and a Chan () to build and attach our metronome. Here's a function that lets us build AddHandler as out of IO functions that take Chan a as an argument. addHandlerFromThread :: (Chan a -> IO ()) -> AddHandler a addHandlerFromThread writerThread handler = do chan <- newChan tId1 <- forkIO (writerThread chan) tId2 <- forkIO $ forever $ (readChan chan >>= handler) return (killThread tId1 >> killThread tId2) So, basically, we make a new Chan, forkIO the given function, passing the new Chan to it as an argument, create a second thread that triggers the callback handler whenever a new item appears on the Chan and returns a cleanup action that kills both threads. Some version of addHandlerFromThread may or may not become part of reactive-banana in the future, filing a ticket is on my to-do list. On to the actual metronome bit: bpmToAddHandler :: Int -> AddHandler () bpmToAddHandler x = addHandlerFromThread go where go chan = forever $ writeChan chan () >> threadDelay microsecs microsecs :: Int microsecs = round $ (1/(fromIntegral x) * 60 * 1000000) Easy peasy. goBpm is basically the same as go1, with a different event source and fixed gain. goBpm :: Int -> IO EventNetwork goBpm bpm = do let networkDescription :: forall t. Frameworks t => Moment t () networkDescription = do tickEvent <- fromAddHandler (bpmToAddHandler bpm) reactimate $ fmap (const $ playNote (negate 5) Fsharp) tickEvent network <- compile networkDescription actuate network return network Try it out: goBpm 240 -- Wait until you get tired of that noise pause it If you've gotten confused here, it is a special variable only available in GHCi, holding the return value of the last expression, and pause stops the operation of an EventNetwork. 5 Warming things up: Banana, meet Microwave Let's play some chords instead of notes. First, the easy part: -- The last two will sound ugly, but whatever I'm not an actual musician and -- this is a tutorial. chordify :: Note -> [Note] chordify n = let n' = fromEnum n in map (toEnum . (`mod` 12)) [n', n'+1, n'+2] Now how do we hook that up to FRP? We already know fmap, so we can get something of type Event t Note -> Event t [Note] but how do we get a list of Notes to play at the same time? Meet a new combinator: spill :: Event t [a] -> Event t a So, now we can define: chordify' :: Event t Note -> Event t Note chordify' = spill . fmap chordify
https://wiki.haskell.org/index.php?title=User:Echo_Nolan/Reactive_Banana:_Straight_to_the_Point&curid=9712&diff=54208&oldid=54206
CC-MAIN-2016-40
refinedweb
1,237
61.97
Related Tutorial Integrating and Using CSS Frameworks. CSS frameworks are great for many reasons; code is more universally understood, web applications are easier to maintain, and prototyping becomes less of an extra step and more part of the development process. Generally speaking, integrating each framework is generally the same so the installation process will work with either Bootstrap, Bulma, or Foundation. Code examples in this post will be written using Bootstrap 4 as it’s the most widely used. However, best practices apply to all. This is intended to be a general overview and not as a robust guide. Adding a Framework to Vue.js Before you begin downloading a CSS framework, be sure to install and create a new project with the Vue CLI and follow the prompts: $ npm install vue-cli $ vue init webpack project-name Installing Bootstrap 4 After you initialize a new Vue project, download Bootstrap 4 with npm. Since Bootstrap 4’s JavaScript is dependent on jQuery, you will also need to install jQuery. $ npm install bootstrap jquery --save You’ll want to add the Bootstrap dependencies in your main.js file below your Vue imports so it’s available globally to the entire application. import './../node_modules/jquery/dist/jquery.min.js'; import './../node_modules/bootstrap/dist/css/bootstrap.min.css'; import './../node_modules/bootstrap/dist/js/bootstrap.min.js'; If your application fails to build, just install the popper.js dependency. After that, it should build properly. $ npm install --save popper.js Congrats, Bootstrap 4 is installed! Bootstrap’s Docs are a great resource to get you started with the basics like using columns, rows, buttons, and more. Installing Bulma If you haven’t heard of Bulma, you should look into it. It’s a lightweight and flexible CSS framework that’s based on Flexbox. It’s created by Jeremy Thomas and has over 25k stars on GitHub at the time of this writing! Unlike Bootstrap, Bulma only contains CSS so there’s no jQuery or JavaScript dependencies. $ npm install bulma After Bulma is downloaded, open up your main.js and import it. import './../node_modules/bulma/css/bulma.css'; No extra steps. Bulma is ready to use in your Vue.js application! The Bulma Docs are a great resources to get you started. Installing Foundation Foundation is a framework created by the fine folks at Zurb. Foundation has two frameworks; one for email and one for websites. We want the Foundation Sites framework since we’re only concerned with using Foundation in our web app. Install Foundation Sites and import it into your main.js file. The Foundation Docs are an excellent resource for learning the ins-and-outs of Zurb’s Foundation framework. $ npm install foundation-sites --save import './../node_modules/foundation-sites/dist/css/foundation.min.css'; import './../node_modules/foundation-sites/dist/js/foundation.min.js'; Best Practices With Vue Down to their core, these three frameworks are very similar: they all work with rows and columns. These rows and columns create a “grid” that you can leverage to create your user interfaces. This grid lets you easily change the width of your columns by device width just by adding or changing the classes that are appended to an element. As stated before, the examples below are using Bootstrap 4. However, these best practices with row-column based frameworks apply to all. It’s considered best practice to utilize the framework’s classes whenever possible. Each of these frameworks have been carefully crafted for ease-of-use, scalability, and customization. Instead of creating your own button with its own classes, just create a button using Bootstrap, Bulma, or Foundation. <!-- Bootstrap --> <button class="btn btn-primary btn-lg">I'm a large Bootstrap button</button> <!-- Bulma --> <button class="button is-primary is-large">I'm a large Bulma button</button> You can configure each of these so that btn-primary (Bootstrap) or is-primary (Bulma) references your brand’s colors instead of the default blue/green color that gets shipped with Bootstrap and Bulma respectively. If you need to create your own theme with your own brand, you can create a global stylesheet that overrides the framework’s global styles; you do not want to modify the framework directly. Creating Your Own Theme To create your own ‘theme’, create a new CSS file and place it in the assets directory and import it into your App.vue file. It’s important not to scope your App.vue file. import '@/assets/styles.css'; ... Try mapping some default styles that match your brand to some Bootstrap components. /* Buttons --------------------------- */ .btn-primary { background: #00462e; color: #fff; } /* dark green */ .btn-secondary { background: #a1b11a; color: #fff; } /* light green */ .btn-tertiary { background: #00b2e2; color: #fff; } /* blue */ .btn-cta { background: #f7931d; color: #fff; } /* orange */ /* Forms --------------------------- */ .form-control { border-radius: 2px; border: 1px solid #ccc; } .form-control:focus, .form-control:active { outline: none; box-shadow: none; background: #ccc; border: 1px solid #000; } Keep the Reusability of Components in Mind When working with any CSS framework and Vue.js, it’s important to keep the reusability of the component in mind. What do I mean by that? Well, you don’t want to mix layout CSS with the component itself. You’ll want to reuse the component at some point and for that other instance, another layout may be needed. A Bad Example <template> <div class="row"> <div class="col"> <nav> <ul> <li><a href="#">Navigation Item #1</a></li> <li><a href="#">Navigation Item #2</a></li> <li><a href="#">Navigation Item #3</a></li> </ul> </nav> </div> </div> </template/> <template> <div> ... <Navigation/> </div> </template/> This navigation may be intended to be used in both the header and the footer. Both of which should look very different but contain the same information. Let’s strip out the layout HTML and move that to it’s parent/base component. A Better Example <template> <nav> <ul> <li><a href="#">Navigation Item #1</a></li> <li><a href="#">Navigation Item #2</a></li> <li><a href="#">Navigation Item #3</a></li> </ul> </nav> </template/> <template> ... <div class="row"> <div class="col"> <Navigation/> </div> </div> </template/> Conclusion CSS Frameworks make your life as a developer much easier. They make your application’s template code universally understood, consistent, easier to maintain, and easier to write. You can focus more on the app’s functionality and overall design rather than focusing on common tasks like creating a button from scratch. Bootstrap, Bulma, and Foundation are just the three widely used frameworks right now. However, you aren’t limited to just those. There are plenty of other frameworks to out there, ready for you to explore including, Semantic UI and UI Kit.
https://www.digitalocean.com/community/tutorials/vuejs-css-frameworks-vuejs
CC-MAIN-2020-34
refinedweb
1,110
56.45
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab. Project Help and Ideas » morse code hi i was wondering if anyone had tried this program, using the lcd in the kit, i also cleared the timer line in the code, when pressing the button and the lcd shows the characters on line 2, how do you keep the characters continuing onto line 3 then 4? i've been trying to for days. with the counter timer lines cleared( that originally showed on line 3) the characters will start on line 2 then continue onto line 4. can someone help me with this problem? troubled programmer 17 17 - I think when the code for that project was written the Nerdkit shipped with the 2-line LCD and not the 4 line LCD the newer kits have (not positive though, I have the newer kit like yours). I looked through the source code they have posted for the USB project and they are using some raw LCD commands to move to location of the next the character placement. The "lcd.c" and "lcd.h" that are part of the newer kits have functions created to do this more easily. You can move to any location with this function call... lcd_goto_position(0, 12); // 0 is row 1 and 12 is column 13 (zero indexed) You can still use this... lcd_line_two(); lcd_write_string(PSTR(" ")); To clear line 2 for example. I'm not sure how you want the program to behave but the "chars" variable in the program keeps track of how many characters are printed so you can use that to decide where you want to print the next character. Keep in mind the LCD updates it's position every time you print one character so you should only have to worry about moving to new lines. The 4 line LCD's are odd because if you go past column 20 on line one it will increment to column 1 on line 3. Same with rows 2 and 4. hey pcbolt, that is my problem, im using this code as a lab project( we can use anything online) but i'm trying to fix that problem, when line i go past line 2 the lcd increments to line 4 but i want it to continue from end of line 2 to beginning of line 3 and so on to line 4. i'm still not understanding in how to do this using the given code. I just want to make sure you have the same download I was looking at. It will help as a line reference if me or anyone else needs to change something. Here is the downloaded code... // morsedecoder.c // for NerdKits with ATmega168 // mrobbins@mit.edu #define F_CPU 14745600 #include <avr/io.h> #include <avr/interrupt.h> #include <avr/pgmspace.h> #include <util/delay.h> #include <inttypes.h> #include <stdlib.h> #include "../libnerdkits/delay.h" #include "../libnerdkits/lcd.h" #include "../libnerdkits/uart.h" // PIN DEFINITIONS: // // PC5 -- keyer input switch (pulled to ground when pressed) // (was PA7) // PC4 -- LED anode // (was PA1) // PB1 -- piezo element or speaker (OC1A) inline void wait_for_timer_tick() { while(!(TIFR0 & (1<<TOV0))) { // do nothing } // clear timer tick TIFR0 |= (1<<TOV0); } #define INTERSYMBOL_TIMEOUT 200 #define LONGSHORT_CUTOFF 60 void speaker_off() { TCCR1A &= ~(1<<COM1A0); } void speaker_on() { TCCR1A |= (1<<COM1A0); } uint8_t time_next_keypress() { uint16_t counter; // waits until the button is pressed // and then released. (OR timeout) // // returns timer cycles between press&release // or returns 255 on timeout // wait FOREVER until the button is released // (to prevent resets from triggering a new dit or dah) while((PINC & (1<<PC5))==0) { } // turn off LED PORTC &= ~(1<<PC4); // turn off speaker speaker_off(); counter = 0; // wait until pressed while(PINC & (1<<PC5)) { wait_for_timer_tick(); counter++; if(counter == INTERSYMBOL_TIMEOUT) { // timeout #1: key wasn't pressed within X timer cycles. // (should happen between different symbols) return 255; } } // turn on LED PORTC |= (1<<PC4); // turn on speaker speaker_on(); // wait one cycle as a cheap "debouncing" mechanism wait_for_timer_tick(); // wait until released counter = 0; while((PINC & (1<<PC5))==0) { wait_for_timer_tick(); counter++; if(counter == 255) { // timeout #2: key wasn't released within 255 timer cycles. // (should happen only to reset the screen) return 254; } } // turn off LED PORTC &= ~(1<<PC4); // turn off speaker speaker_off(); return counter; } // defining the lookup table. // bits 7 6 5 4 3 2 1 0 // 765 define the length (0 to 7) // 43210 define dits (0) and dahs (1) #define MORSE_SIZE 26 #define MORSE(s, x) ((s<<5) | x) #define DIT(x) (x<<1) #define DAH(x) ((x<<1) | 1) unsigned char morse_coded[MORSE_SIZE] PROGMEM = { MORSE(2, DIT(DAH(0))), //A MORSE(4, DAH(DIT(DIT(DIT(0))))), //B MORSE(4, DAH(DIT(DAH(DIT(0))))), //C MORSE(3, DAH(DIT(DIT(0)))), //D MORSE(1, DIT(0)), //E MORSE(4, DIT(DIT(DAH(DIT(0))))), //F MORSE(3, DAH(DAH(DIT(0)))), //G MORSE(4, DIT(DIT(DIT(DIT(0))))), //H MORSE(2, DIT(DIT(0))), //I MORSE(4, DIT(DAH(DAH(DAH(0))))), //J MORSE(3, DAH(DIT(DAH(0)))), //K MORSE(4, DIT(DAH(DIT(DIT(0))))), //L MORSE(2, DAH(DAH(0))), //M MORSE(2, DAH(DIT(0))), //N MORSE(3, DAH(DAH(DAH(0)))), //O MORSE(4, DIT(DAH(DAH(DIT(0))))), //P MORSE(4, DAH(DAH(DIT(DAH(0))))), //Q MORSE(3, DIT(DAH(DIT(0)))), //R MORSE(3, DIT(DIT(DIT(0)))), //S MORSE(1, DAH(0)), //T MORSE(3, DIT(DIT(DAH(0)))), //U MORSE(4, DIT(DIT(DIT(DAH(0))))), //V MORSE(3, DIT(DAH(DAH(0)))), //W MORSE(4, DAH(DIT(DIT(DAH(0))))), //X MORSE(4, DAH(DIT(DAH(DAH(0))))), //Y MORSE(4, DAH(DAH(DIT(DIT(0))))), //Z }; unsigned char morse_alpha[MORSE_SIZE] PROGMEM = { 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z' }; unsigned char morse_lookup(unsigned char in) { // linearly go through the table (in program memory) // and find the matching one uint8_t i; unsigned char tmp; for(i=0; i<MORSE_SIZE; i++) { tmp = pgm_read_byte(&morse_coded[i]); if(tmp == in) { // matched morse character return pgm_read_byte(&morse_alpha[i]); } } return '?'; } unsigned char bitwise_reverse(unsigned char in, uint8_t max) { // maps bits backwards // i.e. for max=5 // in XXX43210 // out YYY01234 unsigned char b = 0; uint8_t i; for(i=0; i<max; i++) { if(in & (1<<i)) b |= (1<< (max-1-i) ); } return b; } int main() { // enable internal pullup on PC5 (the button) PORTC |= (1<<PC5); // enable LED on PC4 DDRC |= (1<<PC4); // enable piezo out on PB1 DDRB |= (1<<PB1); // use Timer0 as our clock source. // divide the 14.7456MHz by 256, and then wait for overflow (another factor of 256) // so we get one overflow every 4.4ms // set prescale CK/256 TCCR0A = 0; TCCR0B = (1<<CS02); // use Timer1 for buzzer sound. // CTC mode, WGM12 TCCR1B |= (1<<WGM12); // enable output pin to toggle on match //TCCR1A = (1<<COM1A0); // see speaker_on() // toggle on 255 overflow, and get 450Hz OCR1AH = 0; OCR1AL = 255; // divide the 14.7456MHz by 64 TCCR1B |= (1<<CS11) | (1<<CS10); // fire up the LCD lcd_init(); lcd_home(); uint8_t counter=0; uint8_t ditdahs=0; // counts dits and dahs along the top row unsigned char curchar=0, lastchar='_'; uint8_t chars=0; // counts chars along the bottom row uint8_t spacetimes=0; // counts number of intersymbol times (before we call it a space) while(1) { counter = time_next_keypress(); // decide what to do based on the timing if(counter == 254) { // clear everything lcd_home(); lcd_write_string(PSTR(" ")); lcd_line_two(); lcd_write_string(PSTR(" ")); ditdahs = 0; chars = 0; curchar = 0; lastchar = '_'; spacetimes = 0; } else if(counter == 255) { // intersymbol timeout: clear 1st row lcd_home(); lcd_write_string(PSTR(" ")); if(ditdahs > 0) { // lookup the character curchar = MORSE(ditdahs, bitwise_reverse(curchar, ditdahs)); curchar = morse_lookup(curchar); // print it lcd_set_type_command(); lcd_write_byte(0x80 + 0x40 + chars); // move to 2nd line, chars^th column lcd_write_data(curchar); chars++; lastchar = curchar; spacetimes = 0; } else if(lastchar != '_') { spacetimes++; if(spacetimes == 4) { // as long as the last character wasn't a space, print a space // (as an underscore so we can see it) curchar = '_'; // print it lcd_set_type_command(); lcd_write_byte(0x80 + 0x40 + chars); // move to 2nd line, chars^th column lcd_write_data(curchar); chars++; lastchar = curchar; } } curchar = 0; ditdahs = 0; } else { // go to correct position lcd_set_type_command(); lcd_write_byte(0x80 | ditdahs); // move to 2nd line, chars^th column // dit or dah if(counter >= LONGSHORT_CUTOFF) { // dah lcd_write_data('-'); curchar = DAH(curchar); } else { // dit lcd_write_data('.'); curchar = DIT(curchar); } ditdahs++; spacetimes = 0; } // write the last timeout in the top right of the screen lcd_set_type_command(); lcd_write_byte(0x80 | 21); // go to position 21 lcd_write_int16(counter); lcd_write_data(' '); lcd_write_data(' '); } return 0; } I'm going to look over it a little more and see if I can find an answer. (Little late tonight :-) Ok I took a better look at the code and I think I can help. There are three places in the code you need to make some changes. First is right around line 226, comment out four lines and add one so it looks like... // lcd_home(); // lcd_write_string(PSTR(" ")); // lcd_line_two(); // lcd_write_string(PSTR(" ")); lcd_clear_and_home(); Now the next two changes are the same but in two different locations. The first occurs around line 249 and the next at 264. Change that to read... // lcd_set_type_command(); // lcd_write_byte(0x80 + 0x40 + chars); // move to 2nd line, chars^th column lcd_goto_position(((chars / 20) % 3) + 1, chars % 20); lcd_write_data(curchar); hey pc bolt , i just now tried that new line of code, well what happens, is that the characters do continue to the next line, but for some reason at the beginning line 3 the characters will show then disappear. This only occurs in the first four locations then at location 5 of 3rd line the charcters will stay on screen all the way through end of line 4. Also could you explain (((chars/20)%3)+1,chars%20) means? Also in regarding my question about the pushbutton, yes i am trying to use it to clear the screen for the morse program, i want to clear the screen at any time . i entered those code lines while(1){ if((PINC & (1<<PC3))==0){ lcd_clear_and_home(); }} after line 103 return counter, i'm not sure where to enter this while loop, it tried inserting it after line 55 but it just clears the screen with now switched turned, i think it just prevents program from running. please help On line 295, change the value 21 to 15. I'm posting from a phone so I'll add more later on. that part doesnt matter because im not displaying the timer, i noticed that these lines 240 lcd_home(); 241 lcd_write_string(PSTR(" ")); seem to affect line 3 spaces 1-4. if i comment out line 241, line one doesnt clear out but the characters will not disappear on line 3 space 1-4, but then again line 1 doesnt clear to show dits and dahs for each character. i figure that its the same relation as the previous problem with lines 1 and 3 connected somehow, i cant see anywhere in the code how line 1 is affecting the beginning of line 3. i'll be up late so feel free to reply I thought the old LCD screen was 2x20 but it looks like its 2x25. Try shortening the number of spaces between the " " marks on this line (and all like it) to 20 lcd_write_string(PSTR(" ")); In regards to what this does... (((chars/20)%3)+1,chars%20) "chars" is the variable in the program that keeps track of how many characters get printed. "chars/20" (integer division) will be 0 when "chars" is between 0 and 19, 1 when it's between 20 and 39, 2 when it's between 40 and 59 etc, so it's a way of determining which row the character will be printed on (I added 1 so it skips over line one). "chars%20" is the modulus operator and returns the remainder from an integer division. So 1/20 is 0 with a remainder of 1. This is an easy way to convert the character count to a column location. I put the %3 in there so the row numbers will always be between 1 and 3 even when "chars" goes above 60. You may want to clear the screen at this point but that's up to you. As for the PINC code, you shouldn't use a new while loop, just place the code inside the existing while(1) loop (on line 220) ... while(1) { counter = time_next_keypress(); // decide what to do based on the timing if((PINC & (1<<PC3))==0){ lcd_clear_and_home(); } if(counter == 254) { hey thanks for the tips, i do have one last question, when pressing the button or switch to clear the screen, the screen does clear, but the characters resume in the last spot from when cleared. how do you make it to where after a clear occurs, the characters begin at beginning of line 2, ive tried lcd_goto_position(((chars / 20) % 3) + 1, chars % 20); after clear thinking that the chars position will resume at first position i also tried using lcd_goto_position(1,0): please help I think you just need to reset "chars" to zero. So the code from the last post should look like... while(1) { counter = time_next_keypress(); // decide what to do based on the timing if((PINC & (1<<PC3))==0){ lcd_clear_and_home(); chars = 0; } if(counter == 254) { Everything else should be the same. Please log in to post a reply.
http://www.nerdkits.com/forum/thread/2680/
CC-MAIN-2020-29
refinedweb
2,226
62.11
lino.api¶ A series of wrapper modules to encapsulate Lino's core functionalities. They don't define anything on their own but just import things which are commonly used in different contexts. One module for each of the three startup phases used when writing application code: The lino.api.ad module (application design) contains classes and functions that are available already before your Lino application gets initialized. You use it to define your overall application structure (in your settings.pyfiles and in the __init__.pyfiles of your plugins). The lino.api.dd module (database design) is for when you are describing your database schema (in your models.pymodules). The lino.api.rt module (runtime) contains functions and classes which are commonly used "at runtime", i.e. when the Lino application has been initialized. You may import it at the global namespace of a models.pyfile, but you can use most of it only when the startup()function has been called. Recommended usage is to import these modules as follows: from lino.api import ad, dd, rt, _ Another set of modules defined here are for more technical usage in specialized context: (This module's source code is available here.)
https://lino-framework.org/api/lino.api.html
CC-MAIN-2020-16
refinedweb
199
58.28
KDEUI #include <khelpmenu.h> Detailed Description Standard KDE help menu with dialog boxes. This class provides the standard KDE help menu with the default "about" dialog boxes and help entry. This class is used in KMainWindow so normally you don't need to use this class yourself. However, if you need the help menu or any of its dialog boxes in your code that is not subclassed from KMainWindow you should use this class. The usage is simple: or if you just want to open a dialog box: IMPORTANT: The first time you use KHelpMenu::menu(), a KMenu object is allocated. Only one object is created by the class so if you call KHelpMenu::menu() twice or more, the same pointer is returned. The class will destroy the popupmenu in the destructor so do not delete this pointer yourself. The KHelpMenu object will be deleted when its parent is destroyed but you can delete it yourself if you want. The code below will always work. Using your own "about application" dialog box: The standard "about application" dialog box is quite simple. If you need a dialog box with more functionality you must design that one yourself. When you want to display the dialog, you simply need to connect the help menu signal showAboutApplication() to your slot. Definition at line 110 of file khelpmenu.h. Member Enumeration Documentation Definition at line 168 of file khelpmenu.h. Constructor & Destructor Documentation Constructor. - Parameters - Definition at line 107 of file khelpmenu.cpp. Constructor. This alternative constructor is mainly useful if you want to overide the standard actions (aboutApplication(), aboutKDE(), helpContents(), reportBug, and optionally whatsThis). - Parameters - Definition at line 117 of file khelpmenu.cpp. Destructor. Destroys dialogs and the menu pointer retuned by menu Definition at line 142 of file khelpmenu.cpp. Member Function Documentation Opens an application specific dialog box. The method will try to open the about box using the following steps: - If the showAboutApplication() signal is connected, then it will be called. This means there is an application defined aboutBox. - If the aboutData was set in the constructor a KAboutApplicationDialog will be created. - Else a default about box using the aboutAppText from the constructor will be created. Definition at line 270 of file khelpmenu.cpp. Opens the standard "About KDE" dialog box. Definition at line 315 of file khelpmenu.cpp. Returns the QAction * associated with the given parameter Will return NULL pointers if menu() has not been called. - Parameters - Definition at line 232 of file khelpmenu.cpp. Opens the help page for the application. The application name is used as a key to determine what to display and the system will attempt to open <appName>/index.html. Definition at line 264 of file khelpmenu.cpp. Activates What's This help for the application. Definition at line 384 of file khelpmenu.cpp. Returns a popup menu you can use in the menu bar or where you need it. The returned menu is configured with an icon, a title and menu entries. Therefore adding the returned pointer to your menu is enougth to have access to the help menu. Note: This method will only create one instance of the menu. If you call this method twice or more the same pointer is returned. Definition at line 181 of file khelpmenu.cpp. Opens the standard "Report Bugs" dialog box. Definition at line 326 of file khelpmenu.cpp. This signal is emitted from aboutApplication() if no "about application" string has been defined. The standard application specific dialog box that is normally activated in aboutApplication() will not be displayed when this signal is emitted. Opens the changing default application language dialog box. Definition at line 337 of file khelpmenu.
https://api.kde.org/4.x-api/kdelibs-apidocs/kdeui/html/classKHelpMenu.html
CC-MAIN-2019-30
refinedweb
612
58.69
Twitter Mention Mood Light using Arduino Twitter Mention Mood Light — a mood light that alerts you when @username is mentioned on Twitter. This. The Arduino Twitter mood light has been around for a few years. I made one in early 2010 and recently decided to update it for Twitter OAuth protocol (step 5.) This instructable has both the [ Arduino Processing Twitter ] and [ Arduino Python Twitter ] versions. The general idea: You’ve got a LED generating a peaceful glow, cycling through a bunch of colors, and meanwhile you’ve got Processing or Python listening to Twitter for any mention of @yourUsername. When it finds a Twitter status with @yourUsername it writes to Arduino over Serial and Arduino changes the LED from peaceful glow to alert. The circuit board (step 6) has two buttons: Reset and Send. The reset button changes the LED from “alert” back to “peaceful glow.” The send button sends a Twitter status update to your Processing or Python layer, and from there on up to Twitter. When pressing the SEND button, hold it down until you see the flash of WHITE LIGHT (approx 1 second.) This confirms the button push has been read by Arduino. There is some confusion about Twitter Rate Limits . The Rate Limit is the number of times Twitter will let you hit it’s servers per hour, currently 350 hits per hour. Go over that number and Twitter will block you. I made the same provision for this in both Processing and Python: wait 15 seconds between hits. In Processing, in void draw() , see delay(15000); . In the Python, in the while clause, see time.sleep(15). This does not affect the timing of any button pushing or Serial calls. The Arduino code is the same for both Processing and Python except when Arduino listens to Serial. Both Processing and Python pass an integer value of 1 (arduino.write(1) ) but Arduino receives a different value (numeric / ascii) from each, which is why you’ll see these two lines in the Arduino code: if (serialMsg == 1) state = “mention”; // processing if (serialMsg == 49) state = “mention”; // python This is because I’m not sure how to pass a string value of “mention ” over Serial in Python. It’s not critical, so I’ll leave working as above, but if anyone has a solution for sending a string (arduino.write(“mention”) ), particularly in Python, please post in the comments below. The center of this project is the getMentions() function. It’s interesting to compare how they look in Processing and in Python. In both cases the id value received is in “long” format. By converting from “long” to “string” with str() we’re able to compare the value and act on it. I had trouble doing that in long format. Step 2: Arduino twitterMentionMoodLight_arduino for use as is with either: twitterMentionMoodLight_processing twitterMentionMoodLight_python Generate a peaceful glow until someone on twitter mentions you. Requires a circuit with: two buttons and a pwm rgb led light, and 3 resistors at 220 ohm; 2 resistors at 100 ohm; 2 resistors at 10k ohm. Shout out to Tom Igoe, Adafruit, lurkers and msg boards everywhere. learn more at: Install twitter4j (Processing) You’ll need to install the twitter4j library so it can be used by Processing. Get it here: Install it here (or equivalent): C:\Program Files\processing-1.5.1\modes\java\libraries You’re done. Access it here: Processing > Sketch > Import Library… > twitter4j And when you do, it’ll add this to the top of your code: import twitter4j.conf.*; import twitter4j.internal.async.*; import twitter4j.internal.org.json.*; import twitter4j.internal.logging.*; import twitter4j.http.*; import twitter4j.api.*; import twitter4j.util.*; import twitter4j.internal.http.*; import twitter4j.*; Incidentally, you’ll also add Serial I/O from the Sketch > Library, but that’s not important right now. So why do you need twitter4j? The short answer is that it provides you with simple functionality so you don’t have to write a whole bunch of crazy code every time you want to access Twitter. We use twitter4j because it’s awesome and it makes our job easier. Step 6: Circuit Board This is a great board for all-purpose prototyping. Two buttons and a PWM RGB LED. For this configuration, the LED must be PWM to work This Twitter Mention Mood Light is not particularly unique. You’ll find other examples across the internets. But what makes this special is that it’s up to date with Twitter OAuth and you’ve got code comparisons in Processing and Python. If you are new to Arduino Twitter / Arduino Processing Twitter / Arduino Python Twitter / Twitter Mood Light, then please refer to simpleTweet_00_processing and simpleTweet_01_python (and How to install Python packages on Windows 7 ) for a crash course. Major Components in Project TwitterExceptio twitterMentionMoodLight simplejson For more detail: Twitter Mention Mood Light
http://duino4projects.com/twitter-mention-mood-light-using-arduino/
CC-MAIN-2017-26
refinedweb
810
63.19
Yesod for Haskellers The majority of this book is built around giving practical information on how to get common tasks done, without drilling too much into the details of what’s going on under the surface. While the book presumes knowledge of Haskell, it does not follow the typical style of many Haskell libraries introductions. Many seasoned Haskellers are put off by this hiding of implementation details. The purpose of this chapter is to address those concerns. In this chapter, we’ll start off from a bare minimum web application, and build up to more complicated examples, explaining the components and their types along the way. Hello Warp Let’s start off with the most bare minimum application we can think of: {-# LANGUAGE OverloadedStrings #-} import Network.HTTP.Types (status200) import Network.Wai (Application, responseLBS) import Network.Wai.Handler.Warp (run) main :: IO () main = run 3000 app app :: Application app _req sendResponse = sendResponse $ responseLBS status200 [("Content-Type", "text/plain")] "Hello Warp!" Wait a minute, there’s no Yesod in there! Don’t worry, we’ll get there. Remember, we’re building from the ground up, and in Yesod, the ground floor is WAI, the Web Application Interface. WAI sits between a web handler, such as a web server or a test framework, and a web application. In our case, the handler is Warp, a high performance web server, and our application is the app function. What’s this mysterious Application type? It’s a type synonym defined as: type Application = Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived The Request value contains information such as the requested path, query string, request headers, request body, and the IP address of the client. The second argument is the “send response” function. Instead of simply having the application return an IO Response, WAI uses continuation passing style to allow for full exception safety, similar to how the bracket function works. We can use this to do some simple dispatching: {-# LANGUAGE OverloadedStrings #-} import Network.HTTP.Types (status200) import Network.Wai (Application, pathInfo, responseLBS) import Network.Wai.Handler.Warp (run) main :: IO () main = run 3000 app app :: Application app req sendResponse = case pathInfo req of ["foo", "bar"] -> sendResponse $ responseLBS status200 [("Content-Type", "text/plain")] "You requested /foo/bar" _ -> sendResponse $ responseLBS status200 [("Content-Type", "text/plain")] "You requested something else" WAI mandates that the path be split into individual fragments (the stuff between forward slashes) and converted into text. This allows for easy pattern matching. If you need the original, unmodified ByteString, you can use rawPathInfo. For more information on the available fields, please see the WAI Haddocks. That addresses the request side; what about responses? We’ve already seen responseLBS, which is a convenient way of creating a response from a lazy ByteString. That function takes three arguments: the status code, a list of response headers (as key/value pairs), and the body itself. But responseLBS is just a convenience wrapper. Under the surface, WAI uses blaze-builder’s Builder data type to represent the raw bytes. Let’s dig down another level and use that directly: {-# LANGUAGE OverloadedStrings #-} import Blaze.ByteString.Builder (Builder, fromByteString) import Network.HTTP.Types (status200) import Network.Wai (Application, responseBuilder) import Network.Wai.Handler.Warp (run) main :: IO () main = run 3000 app app :: Application app _req sendResponse = sendResponse $ responseBuilder status200 [("Content-Type", "text/plain")] (fromByteString "Hello from blaze-builder!" :: Builder) This opens up some nice opportunities for efficiently building up response bodies, since Builder allows for O(1) append operations. We’re also able to take advantage of blaze-html, which sits on top of blaze-builder. Let’s see our first HTML application. {-# LANGUAGE OverloadedStrings #-} import Network.HTTP.Types (status200) import Network.Wai (Application, responseBuilder) import Network.Wai.Handler.Warp (run) import Text.Blaze.Html.Renderer.Utf8 (renderHtmlBuilder) import Text.Blaze.Html5 (Html, docTypeHtml) import qualified Text.Blaze.Html5 as H main :: IO () main = run 3000 app app :: Application app _req sendResponse = sendResponse $ responseBuilder status200 [("Content-Type", "text/html")] -- yay! (renderHtmlBuilder myPage) myPage :: Html myPage = docTypeHtml $ do H.head $ do H.title "Hello from blaze-html and Warp" H.body $ do H.h1 "Hello from blaze-html and Warp" But there’s a limitation with using a pure Builder value: we need to create the entire response body before returning the Response value. With lazy evaluation, that’s not as bad as it sounds, since not all of the body will live in memory at once. However, if we need to perform some I/O to generate our response body (such as reading data from a database), we’ll be in trouble. To deal with that situation, WAI provides a means for generating streaming response bodies. It also allows explicit control of flushing the stream. Let’s see how this works. {-# LANGUAGE OverloadedStrings #-} import Blaze.ByteString.Builder (Builder, fromByteString) import Blaze.ByteString.Builder.Char.Utf8 (fromShow) import Control.Concurrent (threadDelay) import Control.Monad (forM_) import Control.Monad.Trans.Class (lift) import Data.Monoid ((<>)) import Network.HTTP.Types (status200) import Network.Wai (Application, responseStream) import Network.Wai.Handler.Warp (run) main :: IO () main = run 3000 app app :: Application app _req sendResponse = sendResponse $ responseStream status200 [("Content-Type", "text/plain")] myStream myStream :: (Builder -> IO ()) -> IO () -> IO () myStream send flush = do send $ fromByteString "Starting streaming response.\n" send $ fromByteString "Performing some I/O.\n" flush -- pretend we're performing some I/O threadDelay 1000000 send $ fromByteString "I/O performed, here are some results.\n" forM_ [1..50 :: Int] $ \i -> do send $ fromByteString "Got the value: " <> fromShow i <> fromByteString "\n" Another common requirement when dealing with a streaming response is safely allocating a scarce resource- such as a file handle. By safely, I mean ensuring that the resource will be released, even in the case of some exception. This is where the continuation passing style mentioned above comes into play: {-# LANGUAGE OverloadedStrings #-} import Blaze.ByteString.Builder (fromByteString) import qualified Data.ByteString as S import Data.Conduit (Flush (Chunk), ($=)) import Data.Conduit.Binary (sourceHandle) import qualified Data.Conduit.List as CL import Network.HTTP.Types (status200) import Network.Wai (Application, responseStream) import Network.Wai.Handler.Warp (run) import System.IO (IOMode (ReadMode), withFile) main :: IO () main = run 3000 app app :: Application app _req sendResponse = withFile "index.html" ReadMode $ \handle -> sendResponse $ responseStream status200 [("Content-Type", "text/html")] $ \send _flush -> let loop = do bs <- S.hGet handle 4096 if S.null bs then return () else send (fromByteString bs) >> loop in loop Notice how we’re able to take advantage of existing exception safe functions like withFile to deal with exceptions properly. But in the case of serving files, it’s more efficient to use responseFile, which can use the sendfile system call to avoid unnecessary buffer copies: {-# LANGUAGE OverloadedStrings #-} import Network.HTTP.Types (status200) import Network.Wai (Application, responseFile) import Network.Wai.Handler.Warp (run) main :: IO () main = run 3000 app app :: Application app _req sendResponse = sendResponse $ responseFile status200 [("Content-Type", "text/html")] "index.html" Nothing -- means "serve whole file" -- you can also serve specific ranges in the file There are many aspects of WAI we haven’t covered here. One important topic is WAI middlewares. We also haven’t inspected request bodies at all. But for the purposes of understanding Yesod, we’ve covered enough for the moment. What about Yesod? In all our excitement about WAI and Warp, we still haven’t seen anything about Yesod! Since we just learnt all about WAI, our first question should be: how does Yesod interact with WAI. The answer to that is one very important function: toWaiApp :: YesodDispatch site => site -> IO Application This function takes some site value, which must be an instance of YesodDispatch, and creates an Application. This function lives in the IO monad, since it will likely perform actions like allocating a shared logging buffer. The more interesting question is what this site value is all about. Yesod has a concept of a foundation data type. This is a data type at the core of each application, and is used in three important ways: It can hold onto values that are initialized and shared amongst all aspects of your application, such as an HTTP connection manager, a database connection pool, settings loaded from a file, or some shared mutable state like a counter or cache. Typeclass instances provide even more information about your application. The Yesodtypeclass has various settings, such as what the default template of your app should be, or the maximum allowed request body size. The YesodDispatchclass indicates how incoming requests should be dispatched to handler functions. And there are a number of typeclasses commonly used in Yesod helper libraries, such as RenderMessagefor i18n support or YesodJqueryfor providing the shared location of the jQuery Javascript library. Associated types (i.e., type families) are used to create a related route data type for each application. This is a simple ADT that represents all legal routes in your application. But using this intermediate data type instead of dealing directly with strings, Yesod applications can take advantage of the compiler to prevent creating invalid links. This feature is known as type safe URLs. In keeping with the spirit of this chapter, we’re going to create our first Yesod application the hard way, by writing everything manually. We’ll progressively add more convenience helpers on top as we go along. {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} import Network.HTTP.Types (status200) import Network.Wai (responseBuilder) import Network.Wai.Handler.Warp (run) import Text.Blaze.Html.Renderer.Utf8 (renderHtmlBuilder) import qualified Text.Blaze.Html5 as H import Yesod.Core (Html, RenderRoute (..), Yesod, YesodDispatch (..), toWaiApp) import Yesod.Core.Types (YesodRunnerEnv (..)) -- | Our foundation datatype. data App = App { welcomeMessage :: !Html } instance Yesod App instance RenderRoute App where data Route App = HomeR -- just one accepted URL deriving (Show, Read, Eq, Ord) renderRoute HomeR = ( [] -- empty path info, means "/" , [] -- empty query string ) instance YesodDispatch App where yesodDispatch (YesodRunnerEnv _logger site _sessionBackend _ _) _req sendResponse = sendResponse $ responseBuilder status200 [("Content-Type", "text/html")] (renderHtmlBuilder $ welcomeMessage site) main :: IO () main = do -- We could get this message from a file instead if we wanted. let welcome = H.p "Welcome to Yesod!" waiApp <- toWaiApp App { welcomeMessage = welcome } run 3000 waiApp OK, we’ve added quite a few new pieces here, let’s attack them one at a time. The first thing we’ve done is created a new datatype, App. This is commonly used as the foundation data type name for each application, though you’re free to use whatever name you want. We’ve added one field to this datatype, welcomeMessage, which will hold the content for our homepage. Next we declare our Yesod instance. We just use the default values for all of the methods for this example. More interesting is the RenderRoute typeclass. This is the heart of type-safe URLs. We create an associated data type for App which lists all of our app’s accepted routes. In this case, we have just one: the homepage, which we call HomeR. It’s yet another Yesod naming convention to append R to all of the route data constructors. We also need to create a renderRoute method, which converts each type-safe route value into a tuple of path pieces and query string parameters. We’ll get to more interesting examples later, but for now, our homepage has an empty list for both of those. YesodDispatch determines how our application behaves. It has one method, yesodDispatch, of type: yesodDispatch :: YesodRunnerEnv site -> Application YesodRunnerEnv provides three values: a Logger value for outputting log messages, the foundation datatype value itself, and a session backend, used for storing and retrieving information for the user’s active session. In real Yesod applications, as you’ll see shortly, you don’t need to interact with these values directly, but it’s informative to understand what’s under the surface. The return type of yesodDispatch is Application from WAI. But as we saw earlier, Application is simply a CPSed function from Request to Response. So our implementation of yesodDispatch is able to use everything we learned about WAI above. Notice also how we accessed the welcomeMessage from our foundation data type. Finally, we have the main function. The App value is easy to create and, as you can see, you could just as easily have performed some I/O to acquire the welcome message. We use toWaiApp to obtain a WAI application, and then pass off our application to Warp, just like we did in the past. Congratulations, you’ve now seen your first Yesod application! (Or, at least your first Yesod application in this chapter.) The HandlerT monad transformer While that example was technically using Yesod, it was incredibly uninspiring. There’s no question that Yesod did nothing more than get in our way relative to WAI. And that’s because we haven’t started taking advantage of any of Yesod’s features. Let’s address that, starting with the HandlerT monad transformer. There are many common things you’d want to do when handling a single request, e.g.: Return some HTML. Redirect to a different URL. Return a 404 not found response. Do some logging. To encapsulate all of this common functionality, Yesod provides a HandlerT monad transformer. The vast majority of the code you write in Yesod will live in this transformer, so you should get acquainted with it. Let’s start off by replacing our previous YesodDispatch instance with a new one that takes advantage of HandlerT: {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} import Network.Wai (pathInfo) import Network.Wai.Handler.Warp (run) import qualified Text.Blaze.Html5 as H import Yesod.Core (HandlerT, Html, RenderRoute (..), Yesod, YesodDispatch (..), getYesod, notFound, toWaiApp, yesodRunner) -- | Our foundation datatype. data App = App { welcomeMessage :: !Html } instance Yesod App instance RenderRoute App where data Route App = HomeR -- just one accepted URL deriving (Show, Read, Eq, Ord) renderRoute HomeR = ( [] -- empty path info, means "/" , [] -- empty query string ) getHomeR :: HandlerT App IO Html getHomeR = do site <- getYesod return $ welcomeMessage -- We could get this message from a file instead if we wanted. let welcome = H.p "Welcome to Yesod!" waiApp <- toWaiApp App { welcomeMessage = welcome } run 3000 waiApp getHomeR is our first handler function. (That name is yet another naming convention in the Yesod world: the lower case HTTP request method, followed by the route constructor name.) Notice its signature: HandlerT App IO Html. It’s so common to have the monad stack HandlerT App IO that most applications have a type synonym for it, type Handler = HandlerT App IO. The function is returning some Html. You might be wondering if Yesod is hard-coded to only work with Html values. We’ll explain that detail in a moment. Our function body is short. We use the getYesod function to get the foundation data type value, and then return the welcomeMessage field. We’ll build up more interesting handlers as we continue. The implementation of yesodDispatch is now quite different. The key to it is the yesodRunner function, which is a low-level function for converting HandlerT stacks into WAI Applications. Let’s look at its type signature: yesodRunner :: (ToTypedContent res, Yesod site) => HandlerT site IO res -> YesodRunnerEnv site -> Maybe (Route site) -> Application We’re already familiar with YesodRunnerEnv from our previous example. As you can see in our call to yesodRunner above, we pass that value in unchanged. The Maybe (Route site) is a bit interesting, and gives us more insight into how type-safe URLs work. Until now, we only saw the rendering side of these URLs. But just as important is the parsing side: converting a requested path into a route value. In our example, this code is just a few lines, and we store the result in maybeRoute. Coming back to the parameters to yesodRunner: we’ve now addressed the Maybe (Route site) and YesodRunerEnv site. To get our HandlerT site IO res value, we pattern match on maybeRoute. If it’s Just HomeR, we use getHomeR. Otherwise, we use the notFound function, which is a built-in function that returns a 404 not found response, using your default site template. That template can be overridden in the Yesod typeclass; out of the box, it’s just a boring HTML page. This almost all makes sense, except for one issue: what’s that ToTypedContent typeclass, and what does it have to do with our Html response? Let’s start by answering my question from above: no, Yesod does not in any way hard code support for Html. A handler function can return any value that has an instance of ToTypedContent. This typeclass (which we will examine in a moment) provides both a mime-type and a representation of the data that WAI can consume. yesodRunner then converts that into a WAI response, setting the Content-Type response header to the mime type, using a 200 OK status code, and sending the response body. (To)Content, (To)TypedContent At the very core of Yesod’s content system are the following types: data Content = ContentBuilder !Builder !(Maybe Int) -- ^ The content and optional content length. | ContentSource !(Source (ResourceT IO) (Flush Builder)) | ContentFile !FilePath !(Maybe FilePart) | ContentDontEvaluate !Content type ContentType = ByteString data TypedContent = TypedContent !ContentType !Content Content should remind you a bit of the WAI response types. ContentBuilder is similar to responseBuilder, ContentSource is like responseStream but specialized to conduit, and ContentFile is like responseFile. Unlike their WAI counterparts, none of these constructors contain information on the status code or response headers; that’s handled orthogonally in Yesod. The one completely new constructor is ContentDontEvaluate. By default, when you create a response body in Yesod, Yesod fully evaluates the body before generating the response. The reason for this is to ensure that there are no impure exceptions in your value. Yesod wants to make sure to catch any such exceptions before starting to send your response so that, if there is an exception, Yesod can generate a proper 500 internal server error response instead of simply dying in the middle of sending a non-error response. However, performing this evaluation can cause more memory usage. Therefore, Yesod provides a means of opting out of this protection. TypedContent is then a minor addition to Content: it includes the ContentType as well. Together with a convention that an application returns a 200 OK status unless otherwise specified, we have everything we need from the TypedContent type to create a response. Yesod could have taken the approach of requiring users to always return TypedContent from a handler function, but that would have required manually converting to that type. Instead, Yesod uses a pair of typeclasses for this, appropriately named ToContent and ToTypedContent. They have exactly the definitions you’d expect: class ToContent a where toContent :: a -> Content class ToContent a => ToTypedContent a where toTypedContent :: a -> TypedContent And Yesod provides instances for many common data types, including Text, Html, and aeson’s Value type (containing JSON data). That’s how the getHomeR function was able to return Html: Yesod knows how to convert it to TypedContent, and from there it can be converted into a WAI response. HasContentType and representations This typeclass approach allows for one other nice abstraction. For many types, the type system itself lets us know what the content-type for the content should be. For example, Html will always be served with a text/html content-type. Some requests to a web application can be displayed with various representation. For example, a request for tabular data could be served with: An HTML table A CSV file XML JSON data to be consumed by some client-side Javascript The HTTP spec allows a client to specify its preference of representation via the accept request header. And Yesod allows a handler function to use the selectRep/ provideRep function combo to provide multiple representations, and have the framework automatically choose the appropriate one based on the client headers. The last missing piece to make this all work is the HasContentType typeclass: class ToTypedContent a => HasContentType a where getContentType :: Monad m => m a -> ContentType The parameter m a is just a poor man’s Proxy type. And, in hindsight, we should have used Proxy, but that would now be a breaking change. There are instances for this typeclass for most data types supported by ToTypedContent. Below is our example from above, tweaked just a bit to provide multiple representations of the data: {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} import Data.Text (Text) import Network.Wai (pathInfo) import Network.Wai.Handler.Warp (run) import qualified Text.Blaze.Html5 as H import Yesod.Core (HandlerT, Html, RenderRoute (..), TypedContent, Value, Yesod, YesodDispatch (..), getYesod, notFound, object, provideRep, selectRep, toWaiApp, yesodRunner, (.=)) -- | Our foundation datatype. data App = App { welcomeMessageHtml :: !Html , welcomeMessageText :: !Text , welcomeMessageJson :: !Value } instance Yesod App instance RenderRoute App where data Route App = HomeR -- just one accepted URL deriving (Show, Read, Eq, Ord) renderRoute HomeR = ( [] -- empty path info, means "/" , [] -- empty query string ) getHomeR :: HandlerT App IO TypedContent getHomeR = do site <- getYesod selectRep $ do provideRep $ return $ welcomeMessageHtml site provideRep $ return $ welcomeMessageText site provideRep $ return $ welcomeMessageJson waiApp <- toWaiApp App { welcomeMessageHtml = H.p "Welcome to Yesod!" , welcomeMessageText = "Welcome to Yesod!" , welcomeMessageJson = object ["msg" .= ("Welcome to Yesod!" :: Text)] } run 3000 waiApp Convenience warp function And one minor convenience you’ll see quite a bit in the Yesod world. It’s very common to call toWaiApp to create a WAI Application, and then pass that to Warp’s run function. So Yesod provides a convenience warp wrapper function. We can replace our previous main function with the following: main :: IO () main = warp 3000 App { welcomeMessageHtml = H.p "Welcome to Yesod!" , welcomeMessageText = "Welcome to Yesod!" , welcomeMessageJson = object ["msg" .= ("Welcome to Yesod!" :: Text)] } There’s also a warpEnv function which reads the port number from the PORT environment variable, which is useful for working with platforms such as FP Haskell Center, or deployment tools like Keter. Writing handlers Since the vast majority of your application will end up living in the HandlerT monad transformer, it’s not surprising that there are quite a few functions that work in that context. HandlerT is an instance of many common typeclasses, including MonadIO, MonadTrans, MonadBaseControl, MonadLogger and MonadResource, and so can automatically take advantage of those functionalities. In addition to that standard functionality, the following are some common categories of functions. The only requirement Yesod places on your handler functions is that, ultimately, they return a type which is an instance of ToTypedContent. This section is just a short overview of functionality. For more information, you should either look through the Haddocks, or read the rest of this book. Getting request parameters There are a few pieces of information provided by the client in a request: The requested path. This is usually handled by Yesod’s routing framework, and is not directly queried in a handler function. Query string parameters. This can be queried using lookupGetParam. Request bodies. In the case of URL encoded and multipart bodies, you can use lookupPostParamto get the request parameter. For multipart bodies, there’s also lookupFilefor file parameters. Request headers can be queried via lookupHeader. (And response headers can be set with addHeader.) Yesod parses cookies for you automatically, and they can be queried using lookupCookie. (Cookies can be set via the setCookiefunction.) Finally, Yesod provides a user session framework, where data can be set in a cryptographically secure session and associated with each user. This can be queried and set using the functions lookupSession, setSessionand deleteSession. While you can use these functions directly for such purposes as processing forms, you usually will want to use the yesod-form library, which provides a higher level form abstraction based on applicative functors. Short circuiting In some cases, you’ll want to short circuit the handling of a request. Reasons for doing this would be: Send an HTTP redirect, via the redirectfunction. This will default to using the 303 status code. You can use redirectWithto get more control over this. Return a 404 not found with notFound, or a 405 bad method via badMethod. Indicate some error in the request via notAuthenticated, permissionDenied, or invalidArgs. Send a special response, such as with sendFileor sendResponseStatus(to override the status 200 response code) sendWaiResponseto drop down a level of abstraction, bypass some Yesod abstractions, and use WAI itself. Streaming So far, the examples of ToTypedContent instances I gave all involved non-streaming responses. Html, Text, and Value all get converted into a ContentBuilder constructor. As such, they cannot interleave I/O with sending data to the user. What happens if we want to perform such interleaving? When we encountered this issue in WAI, we introduced the responseSource method of constructing a response. Using sendWaiResponse, we could reuse that same method for creating a streaming response in Yesod. But there’s also a simpler API for doing this: respondSource. respondSource takes two parameters: the content type of the response, and a Source of Flush Builder. Yesod also provides a number of convenience functions for creating that Source, such as sendChunk, sendChunkBS, and sendChunkText. Here’s an example, which just converts our initial responseSource example from WAI to Yesod. {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} import Blaze.ByteString.Builder (fromByteString) import Blaze.ByteString.Builder.Char.Utf8 (fromShow) import Control.Concurrent (threadDelay) import Control.Monad (forM_) import Data.Monoid ((<>)) import Network.Wai (pathInfo) import Yesod.Core (HandlerT, RenderRoute (..), TypedContent, Yesod, YesodDispatch (..), liftIO, notFound, respondSource, sendChunk, sendChunkBS, sendChunkText, sendFlush, warp, yesodRunner) -- | Our foundation datatype. data App = App instance Yesod App instance RenderRoute App where data Route App = HomeR -- just one accepted URL deriving (Show, Read, Eq, Ord) renderRoute HomeR = ( [] -- empty path info, means "/" , [] -- empty query string ) getHomeR :: HandlerT App IO TypedContent getHomeR = respondSource "text/plain" $ do sendChunkBS "Starting streaming response.\n" sendChunkText "Performing some I/O.\n" sendFlush -- pretend we're performing some I/O liftIO $ threadDelay 1000000 sendChunkBS "I/O performed, here are some results.\n" forM_ [1..50 :: Int] $ \i -> do sendChunk $ fromByteString "Got the value: " <> fromShow i <> fromByteString "\n" = warp 3000 App Dynamic parameters Now that we’ve finished our detour into the details of the HandlerT transformer, let’s get back to higher-level Yesod request processing. So far, all of our examples have dealt with a single supported request route. Let’s make this more interesting. We now want to have an application which serves Fibonacci numbers. If you make a request to /fib/5, it will return the fifth Fibonacci number. And if you visit /, it will automatically redirect you to /fib/1. In the Yesod world, the first question to ask is: how do we model our route data type? This is pretty straight-forward: data Route App = HomeR | FibR Int. The question is: how do we want to define our RenderRoute instance? We need to convert the Int to a Text. What function should we use? Before you answer that, realize that we’ll also need to be able to parse back a Text into an Int for dispatch purposes. So we need to make sure that we have a pair of functions with the property fromText . toText == Just. Show/ Read could be a candidate for this, except that: We’d be required to convert through String. The Show/ Readinstances for Textand Stringboth involve extra escaping, which we don’t want to incur. Instead, the approach taken by Yesod is the path-pieces package, and in particular the PathPiece typeclass, defined as: class PathPiece s where fromPathPiece :: Text -> Maybe s toPathPiece :: s -> Text Using this typeclass, we can write parse and render functions for our route datatype: instance RenderRoute App where data Route App = HomeR | FibR Int deriving (Show, Read, Eq, Ord) renderRoute HomeR = ([], []) renderRoute (FibR i) = (["fib", toPathPiece i], []) parseRoute' [] = Just HomeR parseRoute' ["fib", i] = FibR <$> fromPathPiece i parseRoute' _ = Nothing And then we can write our YesodDispatch typeclass instance: instance YesodDispatch App where yesodDispatch yesodRunnerEnv req sendResponse = let maybeRoute = parseRoute' (pathInfo req) handler = case maybeRoute of Nothing -> notFound Just HomeR -> getHomeR Just (FibR i) -> getFibR i in yesodRunner handler yesodRunnerEnv maybeRoute req sendResponse getHomeR = redirect (FibR 1) fibs :: [Int] fibs = 0 : scanl (+) 1 fibs getFibR i = return $ show $ fibs !! i Notice our call to redirect in getHomeR. We’re able to use the route datatype as the parameter to redirect, and Yesod takes advantage of our renderRoute function to create a textual link. Routing with Template Haskell Now let’s suppose we want to add a new route to our previous application. We’d have to make the following changes: Modify the Routedatatype itself. Add a clause to renderRoute. Add a clause to parseRoute', and make sure it corresponds correctly to renderRoute. Add a clause to the case statement in yesodDispatchto call our handler function. Write our handler function. That’s a lot of changes! And lots of manual, boilerplate changes means lots of potential for mistakes. Some of the mistakes can be caught by the compiler if you turn on warnings (forgetting to add a clause in renderRoute or a match in yesodDispatch's case statement), but others cannot (ensuring that renderRoute and parseRoute have the same logic, or adding the parseRoute clause). This is where Template Haskell comes into the Yesod world. Instead of dealing with all of these changes manually, Yesod declares a high level routing syntax. This syntax lets you specify your route syntax, dynamic parameters, constructor names, and accepted request methods, and automatically generates parse, render, and dispatch functions. To get an idea of how much manual coding this saves, have a look at our previous example converted to the Template Haskell version: {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE QuasiQuotes #-} {-# LANGUAGE TemplateHaskell #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE ViewPatterns #-} import Yesod.Core (RenderRoute (..), Yesod, mkYesod, parseRoutes, redirect, warp) -- | Our foundation datatype. data App = App instance Yesod App mkYesod "App" [parseRoutes| / HomeR GET /fib/#Int FibR GET |] getHomeR :: Handler () getHomeR = redirect (FibR 1) fibs :: [Int] fibs = 0 : scanl (+) 1 fibs getFibR :: Int -> Handler String getFibR i = return $ show $ fibs !! i main :: IO () main = warp 3000 App What’s wonderful about this is, as the developer, you can now focus on the important part of your application, and not get involved in the details of writing parsers and renderers. There are of course some downsides to the usage of Template Haskell: Compile times are a bit slower. The details of what’s going on behind the scenes aren’t easily apparent. (Though you can use cabal haddockto see what identifiers have been generated for you.) You don’t have as much fine-grained control. For example, in the Yesod route syntax, each dynamic parameter has to be a separate field in the route constructor, as opposed to bundling fields together. This is a conscious trade-off in Yesod between flexibility and complexity. This usage of Template Haskell is likely the most controversial decision in Yesod. I personally think the benefits definitely justify its usage. But if you’d rather avoid Template Haskell, you’re free to do so. Every example so far in this chapter has done so, and you can follow those techniques. We also have another, simpler approach in the Yesod world: LiteApp. LiteApp LiteApp allows you to throw away type safe URLs and Template Haskell. It uses a simple routing DSL in pure Haskell. Once again, as a simple comparison, let’s rewrite our Fibonacci example to use it. import Data.Text (pack) import Yesod.Core (LiteHandler, dispatchTo, dispatchTo, liteApp, onStatic, redirect, warp, withDynamic) getHomeR :: LiteHandler () getHomeR = redirect "/fib/1" fibs :: [Int] fibs = 0 : scanl (+) 1 fibs getFibR :: Int -> LiteHandler String getFibR i = return $ show $ fibs !! i main :: IO () main = warp 3000 $ liteApp $ do dispatchTo getHomeR onStatic (pack "fib") $ withDynamic $ \i -> dispatchTo (getFibR i) There you go, a simple Yesod app without any language extensions at all! However, even this application still demonstrates some type safety. Yesod will use fromPathPiece to convert the parameter for getFibR from Text to an Int, so any invalid parameter will be got by Yesod itself. It’s just one less piece of checking that you have to perform. Shakespeare While generating plain text pages can be fun, it’s hardly what one normally expects from a web framework. As you’d hope, Yesod comes built in with support for generating HTML, CSS and Javascript as well. Before we get into templating languages, let’s do it the raw, low-level way, and then build up to something a bit more pleasant. import Data.Text (pack) import Yesod.Core getHomeR :: LiteHandler TypedContent getHomeR = return $ TypedContent typeHtml $ toContent "<html><head><title>Hi There!</title>\ \<link rel='stylesheet' href='/style.css'>\ \<script src='/script.js'></script></head>\ \<body><h1>Hello World!</h1></body></html>" We’re just reusing all of the TypedContent stuff we’ve already learnt. We now have three separate routes, providing HTML, CSS and Javascript. We write our content as Strings, convert them to Content using toContent, then wrap them with a TypedContent constructor to give them the appropriate content-type headers. But as usual, we can do better. Dealing with Strings is not very efficient, and it’s tedious to have to manually put in the content type all the time. But we already know the solution to those problems: use the Html datatype from blaze-html. Let’s convert our getHomeR function to use it: import Data.Text (pack) import Text.Blaze.Html5 (toValue, (!)) import qualified Text.Blaze.Html5 as H import qualified Text.Blaze.Html5.Attributes as A import Yesod.Core getHomeR :: LiteHandler Html getHomeR = return $ H.docTypeHtml $ do H.head $ do H.title $ toHtml "Hi There!" H.link ! A.rel (toValue "stylesheet") ! A.href (toValue "/style.css") H.script ! A.src (toValue "/script.js") $ return () H.body $ do H.h1 $ toHtml "Hello World!" Ahh, far nicer. blaze-html provides a convenient combinator library, and will execute far faster in most cases than whatever String concatenation you might attempt. If you’re happy with blaze-html combinators, by all means use them. However, many people like to use a more specialized templating language. Yesod’s standard provider for this is the Shakespearean languages: Hamlet, Lucius, and Julius. You are by all means welcome to use a different system if so desired, the only requirement is that you can get a Content value from the template. Since Shakespearean templates are compile-time checked, their usage requires either quasiquotation or Template Haskell. We’ll go for the former approach here. Please see the Shakespeare chapter in the book for more information. {-# LANGUAGE QuasiQuotes #-} import Data.Text (Text, pack) import Text.Julius (Javascript) import Text.Lucius (Css) import Yesod.Core getHomeR :: LiteHandler Html getHomeR = withUrlRenderer $ [hamlet| $doctype 5 <html> <head> <title>Hi There! <link rel=stylesheet href=/style.css> <script src=/script.js> <body> <h1>Hello World! |] getStyleR :: LiteHandler Css getStyleR = withUrlRenderer [lucius|h1 { color: red }|] getScriptR :: LiteHandler Javascript getScriptR = withUrlRenderer [julius|alert('Yay, Javascript works too!');|] main :: IO () main = warp 3000 $ liteApp $ do dispatchTo getHomeR onStatic (pack "style.css") $ dispatchTo getStyleR onStatic (pack "script.js") $ dispatchTo getScriptR URL rendering function Likely the most confusing part of this is the withUrlRenderer calls. This gets into one of the most powerful features of Yesod: type-safe URLs. If you notice in our HTML, we’re providing links to the CSS and Javascript URLs via strings. This leads to a duplication of that information, as in our main function we have to provide those strings a second time. This is very fragile: our codebase is one refactor away from having broken links. The recommended approach instead would be to use our type-safe URL datatype in our template instead of including explicit strings. As mentioned above, LiteApp doesn’t provide any meaningful type-safe URLs, so we don’t have that option here. But if you use the Template Haskell generators, you get type-safe URLs for free. In any event, the Shakespearean templates all expect to receive a function to handle the rendering of a type-safe URL. Since we don’t actually use any type-safe URLs, just about any function would work here (the function will be ignored entirely), but withUrlRenderer is a convenient way of doing this. As we’ll see next, withUrlRenderer isn’t really needed most of the time, since Widgets end up providing the renderer function for us automatically. Widgets Dealing with HTML, CSS and Javascript as individual components can be nice in many cases. However, when you want to build up reusable components for a page, it can get in the way of composability. If you want more motivation for why widgets are useful, please see the widget chapter. For now, let’s just dig into using them. {-# LANGUAGE QuasiQuotes #-} import Yesod.Core getHomeR :: LiteHandler Html getHomeR = defaultLayout $ do setTitle $ toHtml "Hi There!" [whamlet|<h1>Hello World!|] toWidget [lucius|h1 { color: red }|] toWidget [julius|alert('Yay, Javascript works too!');|] main :: IO () main = warp 3000 $ liteApp $ dispatchTo getHomeR This is the same example as above, but we’ve now condensed it into a single handler. Yesod will automatically handle providing the CSS and Javascript to the HTML. By default, it will place them in style and script tags in the head and body of the page, respectively, but Yesod provides many customization settings to do other things (such as automatically creating temporary static files and linking to them). Widgets also have another advantage. The defaultLayout function is a member of the Yesod typeclass, and can be modified to provide a customized look-and-feel for your website. Many built-in pieces of Yesod, such as error messages, take advantage of the widget system, so by using widgets, you get a consistent feel throughout your site. Details we won’t cover Hopefully this chapter has pulled back enough of the “magic” in Yesod to let you understand what’s going on under the surface. We could of course continue using this approach for analyze the rest of the Yesod ecosystem, but that would be mostly redundant with the rest of this book. Hopefully you can now feel more informed as you read chapters on Persistent, forms, sessions, and subsites.
https://www.yesodweb.com/book/yesod-for-haskellers
CC-MAIN-2021-39
refinedweb
6,301
57.57
Created on 2004-06-04 10:58 by arigo, last changed 2009-04-02 03:56 by brett.cannon. To get to the module object from the body of the module itself, the usual trick is to import it from itself, as in: x.py: import x do_stuff_with(x) This fails strangely if x is in a package: package/x.py: import package.x do_stuff_with(package.x) The last line triggers an AttributeError: 'module' object has no attribute 'x'. In other words, the import succeeds but the expression 'package.x' still isn't valid after it. Logged In: YES user_id=595483 The error seems to be due to the calling sequence of add_submodule and loadmodule in import.c:import_submodule. If load_module(..) is called after add_submodule(...) gets called, the above does not trigger Attribute Error. I made a patch that does it, but there is a problem... Currently, when import produces errors, sys.modules have the damaged module, but the patch does not. (That's why it cannot pass the test_pkgimport.py unittest, I think.) Someone who knows more about import.c could fix the patch to behave like that. The patch is in Logged In: YES user_id=595483 The behavior is definitely due to the calling sequence of add_submodule and load_module in import_submodule. (like following) ... m = load_module(fullname, fp, buf, fdp->type, loader); Py_XDECREF(loader); if (fp) fclose(fp); if (!add_submodule(mod, m, fullname, subname, modules)) { Py_XDECREF(m); m = NULL; } ... For "importing package.x;do_something(package.x)" from within package.x to be possible, add_submodule should be done before load_module, since in load_module, not only the module is load, but also executed by PyImport_ExecCodeModuleEx. So, if we make a module and call add_submodule before load_module, import ing package.x and using it is possible. (like following) m = PyImport_AddModule(fullname); if (!m) { return NULL; } if (!add_submodule(mod, m, fullname, subname, modules)) { Py_XDECREF(m); return NULL; } m = load_module(mod, fullname, fp, buf, fdp->type, loader); Py_XDECREF(loader); but above would make test_importhook fail because in IMP_HOOK case, module object is created by PyObject_CallMethod(... "load_module"..), not by calling PyImport_AddModule. So, we cannot know about the module before that method calling is returned. Thus, in IMP_HOOK case, load_module would not use the already-created module by PyImport_AddModule, but would make a new one, which is not added as submodule to its parent. Anyway, adding another add_submodule after load_module would make import-hook test code to be passed, but it's a lame patch since in IMP_HOOK case, import package.x in package/x.py cannot be done. So, for the behavior to be possible, I think load_module should be explicitly separated into two function - load_module, execute_module. And then we'll load_module, add_submodule itself to its parent and then execute_module. There does not seem to be any hack that touches only limited places, so I think this bug(?) will stay open for quite long time. =)
http://bugs.python.org/issue966431
crawl-002
refinedweb
480
51.55
This post uses Java servlets as an example, but applies more broadly to a situation where a program automatically converts streams of bytes to streams of Unicode characters. The Trick. These char values may be translated back into bytes with a simple cast: Converter.setEncoding("ISO-8859-1"); char[] cs = Converter.getChars(); byte[] bs = new byte[cs.length]; for (int i = 0; i < cs.length; ++i) bs[i] = (byte) cs[i]; Now you have the original bytes back again in the array bs. In Java, char values act as unsigned 16-bit values, whereas byte values are signed 8-bit values. The casting preserves values through the magic of integer arithmetic overflow and twos-complement notation. (I attach a program that’ll verify this works at the end.) You can now use your own character encoding detection or pull out a general solution like the International Components for Unicode (which I highly recommend — it tracks the Unicode standard very closely, performing character encoding detection, fully general and configurable Unicode normalization, and even transliteration). Use in Servlets for Forms I learned this trick from Jason Hunter’s excellent book, Java Servlet Programming, (2nd Edition, O’Reilly). Hunter uses the trick for decoding form data. The problem is that there’s no way in HTML to declare what character encoding is used on a form. What Hunter does is add a hidden field for the value of the char encoding followed by the Latin1 transcoding trick to recover the actual string. Here’s an illustrative code snippet, copied more or less directly from Hunter’s book: public void doGet(HttpServletRequest req, ...) { ... String encoding = req.getParameter("charset"); String text = req.getParameter("text"); text = new String(text.getBytes("ISO-8859-1"), encoding); ... Of course, this assumes that the getParameter() will use Latin1 to do the decoding so that the getBytes("ISO-8859-1") returns the original bytes. According to Hunter, this is typically what happens because browsers insist on submitting forms using ISO-8859-1, no matter what the user has chosen as an encoding. We borrowed this solution for our demos (all the servlet code is in the distribution under $LINGPIPE/demos/generic), though we let the user choose the encoding (which is itself problematic because of the way browsers work, but that’s another story). Testing Transcoding public class Test { public static void main(String[] args) throws Exception { byte[] bs = new byte[256]; for (int i = -128; i < 128; ++i) bs[i+128] = (byte) i; String s = new String(bs,"ISO-8859-1"); char[] cs = s.toCharArray(); byte[] bs2 = s.getBytes("ISO-8859-1"); for (int i = 0; i < 256; ++i) System.out.printf("%d %d %d\n", (int)bs[i],(int)cs[i],(int)bs2[i]); } } which prints out c:\carp\temp>javac Test.java c:\carp\temp>java Test -128 128 -128 -127 129 -127 ... -2 254 -2 -1 255 -1 0 0 0 1 1 1 ... 126 126 126 127 127 127
https://lingpipe-blog.com/2010/07/14/the-latin1-transcoding-trick-for-java-servlet/
CC-MAIN-2019-47
refinedweb
493
56.15
It is quite important for a web framework to integrate testing as seamlessly as possible with the web framework itself. This minimizes the friction developers encounter when coding functional specs and writing tests to validate their work. For Play Java projects, we will be utilizing the popular test framework JUnit. We will be using it to do a simple unit test and to test our model and controller action. For Play Scala projects, we will be using specs2 to do a simple unit test and to test our model, a controller action, and a route mapping. For Java, we need to take the following steps: ProductTest.java, in test/and add the following content: import static org.junit.Assert.*; ... No credit card required
https://www.oreilly.com/library/view/play-framework-cookbook/9781784393137/ch01s18.html
CC-MAIN-2019-43
refinedweb
123
64.81
In "Laziness, Part 1," I explored lazy libraries in Java™. In this installment, I demonstrate how to build a simple lazy list using closures as building blocks, then explore some of the performance benefits of lazy evaluation along with some lazy aspects of Groovy, Scala, and Clojure. Building a lazy list In an early installment of this series, I showed a simple implementation of a lazy list in Groovy. However, I didn't show the derivation of how it works, which is the subject here. As you know from the last installment, languages can be categorized as strict (eagerly evaluating all expressions) or lazy (deferring evaluation) until absolutely needed. Groovy is a strict language by nature, but I can transform a nonlazy list into a lazy one by recursively wrapping a strict list within a closure. This lets me defer evaluation of subsequent values by delaying execution of the closure block. A strict empty list in Groovy is represented by an array, using empty square braces: []. If I wrap it in a closure, it becomes a lazy empty list: {-> [] } If I need to add an a element to the list, I can add it to the front, then make the entire new list lazy again: {-> [ a, {-> [] } ] } The method for adding to the front of the list is traditionally called either prepend or cons. To add more elements, I repeat this operation for each new item; adding three elements ( a, b, and c) to the list yields: {-> [a, {-> [b, {-> [ c, {-> [] } ] } ] } ] } This syntax is clumsy, but once I understand the principle, I can create a class in Groovy that implements a traditional set of methods for a lazy collection, as shown in Listing 1: Listing 1. Building a lazy list in Groovy using closures class PLazyList { private Closure list private PLazyList(list) { this.list = list } static PLazyList nil() { new PLazyList({-> []}) } PLazyList cons(head) { new PLazyList({-> [head, list]}) } def head() { def lst = list.call() lst ? lst[0] : null } def tail() { def lst = list.call() lst ? new PLazyList(lst.tail()[0]) : nil() } boolean isEmpty() { list.call() == [] } def fold(n, acc, f) { n == 0 || isEmpty() ? acc : tail().fold(n - 1, f.call(acc, head()), f) } def foldAll(acc, f) { isEmpty() ? acc : tail().foldAll(f.call(acc, head()), f) } def take(n) { fold(n, []) {acc, item -> acc << item} } def takeAll() { foldAll([]) {acc, item -> acc << item} } def toList() { takeAll() } } In Listing 1, the constructor is private; it's called by starting with an empty list using nil(), which constructs an empty list. The cons() method enables me to add new elements by prepending the passed parameter, then wrapping the result in a closure block. The next three methods enable list traversal. The head() method returns the first element of the list, and tail() returns the sublist of all elements except the first. In both cases, I call() the closure block — known as forcing the evaluation in lazy terms. Because I'm retrieving values, it ceases to be lazy as I harvest the values. Not surprisingly, the isEmpty() method checks to see if any terms are left to resolve. The remaining methods are higher-order functions for performing operations on the list. The fold() and foldAll() methods perform the fold abstraction, also known as reduce — or, in Groovy only, injectAll(). I've shown the use of this family of methods in many previous installments (such as "Thinking Functionally, Part 3"), but this is the first time I've shown a recursive definition written purely in terms of closure blocks. The foldAll() method checks to see if the list is empty and, if it is, returns acc (the accumulator, the seed value for the fold operation). Otherwise, it recursively calls foldAll() on the tail() of the list, passing the accumulator and the head of the list. The function (the f parameter) should accept two parameters and yield a single result; this is the "fold" operation as you fold one element atop the adjacent one. Building a list and manipulating it appears in Listing 2: Listing 2. Exercising lazy lists def lazylist = PLazyList.nil().cons(4).cons(3).cons(2).cons(1) println(lazylist.takeAll()) //[1, 2, 3, 4] println(lazylist.foldAll(0, {i, j -> i + j})) // 10 lazylist = PLazyList.nil().cons(1).cons(2).cons(4).cons(8) println(lazylist.take(2)) //[8, 4] In Listing 2, I create a list by cons()ing values onto an empty list. Notice that when I takeAll() of the elements, they come back in the reverse order of their addition to the list. Remember, cons() is really shorthand for prepend; it adds elements to the front of the list. The foldAll() method enables me to sum the list by providing a transformation code block, {i, j -> i + j}, which uses addition as the fold operation. Last, I use the take() method to force evaluation of only the first two elements. Real-world lazy-list implementations differ from this one, avoiding recursion and adding more-flexible manipulation methods. However, knowing conceptually what's happening inside the implementation aids understanding and use. Benefits of laziness Lazy lists have several benefits. First, you can use them to create infinite sequences. Because the values aren't evaluated until needed, you can model infinite lists using lazy collections. I show an example of this implemented in Groovy in the "Functional Features in Groovy, Part 1" installment. A second benefit is reduced storage size. If — rather than hold an entire collection — I can derive subsequent values, then I can trade storage for execution speed. Choosing to use a lazy collection becomes a trade-off between the expense of storing the values versus calculating new ones. Third, one of the key benefits of lazy collections is that the runtime can generate more-efficient code. Consider the code in Listing 3: Listing 3. Finding palindromes in Groovy def isPalindrome(s) { def sl = s.toLowerCase() sl == sl.reverse() } def findFirstPalindrome(s) { s.tokenize(' ').find {isPalindrome(it)} } s1 = "The quick brown fox jumped over anna the dog"; println(findFirstPalindrome(s1)) // anna s2 = "Bob went to Harrah and gambled with Otto and Steve" println(findFirstPalindrome(s2)) // Bob The isPalindrome() method in Listing 3 normalizes the case of the subject word, then determines if the word has the same characters in reverse. The findFirstPalindrome() method tries to find the first palindrome in the passed string by using Groovy's find() method, which accepts a code block as the filtering mechanism. Suppose I have a huge sequence of characters within which I need to find the first palindrome. During the execution of the findFirstPalindrome() method, the code in Listing 3 first eagerly tokenizes the entire sequence, creating an intermediate data structure, then issues the find() command. Groovy's tokenize() method isn't lazy, so in this case it might build a huge temporary data structure, only to discard most of it. Consider the same code written in Clojure, appearing in Listing 4: Listing 4. Clojure's palindromes (defn palindrome? [s] (let [sl (.toLowerCase s)] (= sl (apply str (reverse sl))))) (defn find-palindromes [s] (filter palindrome? (clojure.string/split s #" "))) (println (find-palindromes "The brown fox jumped over anna.")) ; (anna) (println (find-palindromes "Bob went to Harrah and gambled with Otto")) ; (Bob Harrah Otto) (println (take 1 (find-palindromes "Bob went to Harrah and gambled with Otto"))) ; (Bob) The implementation details in Listing 3 and Listing 4 are the same but use different language constructs. In the Clojure palindrome? function, I make the parameter string lowercase, then check equality with the reversed string. The extra call to apply converts the character sequence returned by reverse back to a string for comparison. The find-palindromes function uses Clojure's filter function, which accepts a function to act as the filter and the collection to be filtered. For the call to the palindrome? function, Clojure provides several alternatives. I can create an anonymous function to call it such as #(palindrome? %), which is syntactic sugar for an anonymous function that accepts a single parameter; the long-hand version would look like: (fn [x] (palindrome? x)) When I have a single parameter, Clojure allows me to avoid declaring the anonymous function and naming the parameter, which I substitute with % in the #(palindrome? %) function call. In Listing 4, I can use the even shorter form of the function name directly; filter is expecting a method that accepts a single parameter and returns a boolean, which matches palindrome?. The translation from Groovy to Clojure entailed more than just syntax. All of Clojure's data structures that can be lazy are lazy, including operations on collections like filter and split. Thus, in the Clojure version, everything is automatically lazy, which manifests in the second example in Listing 4, when I call find-palindromes on the collection with multiples. The return from filter is a lazy collection that is forced as I print it. If I want only the first entry, I must take the number of lazy entrants I need from the list. Scala approaches laziness in a slightly different way. Rather than make everything lazy by default, it offers lazy views on collections. Consider the Scala implementation of the palindrome problem in Listing 5: Listing 5. Scala palindromes def isPalindrome(x: String) = x == x.reverse def findPalidrome(s: Seq[String]) = s find isPalindrome findPalindrome(words take 1000000) In Listing 5, pulling 1 million words from the collection via the take method will be quite inefficient, especially if the goal is to find the first palindrome. To convert the words collection to a lazy one, use the view method: findPalindrome(words.view take 1000000) The view method allows lazy traversal of the collection, making for more-efficient code. Lazy field initialization Before leaving the subject of laziness, I'll mention that two languages have a nice facility to make expensive initializations lazy. By prepending lazy onto the val declaration, you can convert fields in Scala from eager to as-needed evaluation: lazy val x = timeConsumingAndOrSizableComputation() This is basically syntactic sugar for the code in Listing 6: Listing 6. Scala's generated syntactic sugar for lazy fields var _x = None def x = if (_x.isDefined) _x.get else { _x = Some(timeConsumingAndOrSizableComputation()) _x.get } Groovy has a similar facility using an advanced language feature known as Abstract Syntax Tree (AST) Transformations. They enable you to interact with the compiler's generation of the underlying abstract syntax tree, allowing user transformations at a low level. One of the predefined transformations is the @Lazy attribute, shown in action in Listing 7: Listing 7. Lazy fields in Groovy class Person { @Lazy pets = ['Cat', 'Dog', 'Bird'] } def p = new Person() assert !(p.dump().contains('Cat')) assert p.pets.size() == 3 assert p.dump().contains('Cat') In Listing 7, the Person instance p doesn't appear to have a Cat value until the data structure is accessed the first time. Groovy also allows you to use a closure block to initialize the data structure: class Person { @Lazy List pets = { /* complex computation here */ }() } Finally, you can also tell Groovy to use soft references — Java's version of a pointer reference that can be reclaimed if needed — to hold your lazily initialized field: class Person { @Lazy(soft = true) List pets = ['Cat', 'Dog', 'Bird'] } Conclusion In this installment, I delved even deeper into laziness, building a lazy collection from scratch using closures in Groovy. I also discussed why you might want to consider a lazy structure, listing some of the benefits. In particular, the ability for your runtime to optimize resources is a huge win. Finally, I showed some esoteric but useful manifestations of laziness in Scala and Groovy relating to lazily initialized fields. Resources Learn - Lazy lists in Groovy: Thanks to Andrey Paramonov's blog for his perspective on constructing lazy lists using closures. - Scala: Scala is a modern, functional language on the JVM. - Clojure: Clojure is a modern, functional Lisp that runs on the JVM. - Totally Lazy: The Totally Lazy framework adds tons of functional extensions to Java, using an intuitive DSL-like interface. - Functional Java: Functional Java is a framework that adds many functional language constructs to Java. - The busy developer's guide to Scala: Learn more about Scala in this developerWorks series. -.
http://www.ibm.com/developerworks/library/j-ft19/index.html
CC-MAIN-2014-52
refinedweb
2,023
53.41
I need to format SQL query with default let formattedQuery = pgp.as.format('INSERT INTO some_table (a,b,c) VALUES ($(a), $(b), $(c))', object, {default: null}); db.none(formattedQuery); default db.none('INSERT INTO some_table (a,b,c) VALUES ($(a), $(b), $(c))', object, {default: null}) I'm the author of pg-promise. All query methods in pg-promise rely on the default query formatting, for better reliability, i.e. when a query template refers to a property, the property must exist, or else an error is thrown. It is logical to keep it that way, because a query cannot execute correctly while having properties in it that haven't been replaced with values. Internally, the query engine does support advanced query options, via method as.format, such as partial and default. And there are several objects in the library that make use of those options. One in particular that you should use for generating inserts is helpers.insert, which can generate both single-insert and multi-insert queries. That method, along with even more useful helpers.update make use of type ColumnSet, which is highly configurable, supporting default values for missing properties (among other things), via type Column. Using ColumnSet, you can specify a default value either for selective columns or for all of them. For example, let's assume that column c may be missing, in which case we want to set it to null: var pgp = require('pg-promise')({ capSQL: true // to capitalize all generated SQL }); // declaring a reusable ColumnSet object: var csInsert = new pgp.helpers.ColumnSet(['a', 'b', { name: 'c', def: null } ], {table: 'some_table'}); var data = { a: 1, b: 'text' }; // generating our insert query: var insert = pgp.helpers.insert(data, csInsert); //=> INSERT INTO "some_table"("a","b","c") VALUES(1,'text',null) This makes it possible to generate multi-insert queries automatically: var data = [{a:1, b:'text'}, {a:2, b:'hello'}]; // generating a multi-insert query: var insert = pgp.helpers.insert(data, csInsert); //=> INSERT INTO "some_table"("a","b","c") VALUES(1,'text',null),(2,'hello',null) The same approach works nicely for single-update and multi-update queries. In all, to your original question: Is it possible to pass default option directly without pre-formatting the query? No, and neither it should. Instead, you should use the aforementioned methods within the helpers namespace to generate correct queries. They are way more powerful and flexible ;)
https://codedump.io/share/AqUu1Jrrz2Zo/1/39default39-option-in-pgpasformat
CC-MAIN-2017-26
refinedweb
397
56.45
This blog details the implementation and utilization of Dynamic Tile in PeopleSoft Introduction: A tile on a fluid homepage is similar to a pagelet on a classic homepage. It leverages the ability to display dynamic content from PeopleSoft, which includes visual content from pivot grids or other information sources. We can show dynamic data on tile by specifying an application class as the data type in Tile Wizard. In addition to specifying the application class ID, we also specify these options in Tile Wizard: - Tile content type - Live data (on/off) - Badge data (on/off) Configuring Tile Options: We can configure the looks and behavior of tiles in PeopleSoft via ‘Fluid Attribute’ tab of target content reference.Dynamic content on tile is achieved by specifying an application class as the data type in Tile Wizard. Navigation: Menu -> PeopleTools -> Portal -> Structure and Content and drill down to appropriate folder to get the target content reference. Click on Fluid Attribute tab to create and configure the look of the tile which in case on Classic component is usually absent. These options determine what dynamic content and data will appear in the tile. Our implementation of the application class in PeopleCode will determine the specifics of the dynamic content and data. However, our PeopleCode implementation will override the options chosen in Tile Wizard. - Content , Live data and Badge area defined. - Tile Title – The tile’s title appears in the title bar at the top of the tile. The title is defined at Step 1 in Tile Wizard, and cannot be overridden in PeopleCode. - Tile Content -The tile content consumes the majority of the face of the tile, beneath the title bar and above the live data and badge areas on the tile. While the tile content type is specified at Step 2 in Tile Wizard, it can be overridden in PeopleCode. The specific tile data for any tile content type must be defined in PeopleCode. - Live Data – The live data region appears at the bottom of the tile. Whether live data is displayed on a tile is specified at Step 2 in Tile Wizard; this setting can be overridden in PeopleCode. Live data consists of four elements, each of which is optional: Live data value 1 + Trend image + Live data value 2 + Live data value 3. Creating Application Package for the Dynamic Tile: To implement an application class for tile content, do the following in your application class definition: - Import the base class (also referred to as the superclass): import PTGP_APPCLASS_TILE:Tiles:Tile; Note: Make sure the App Package contains a Sub Package. Otherwise, we might run into errors. - In the class declaration, do the following: - Indicate that your class (the subclass) extends the base class. class TileD1 extends PTGP_APPCLASS_TILE:Tiles:Tile - Declare the constructor method for your class. method TileD1(); - Declare the required getTileLiveData method. method getTileLiveData(); - (Optional) Declare any additional private methods and properties required by your implementation. - In the constructor method for your class, instantiate an object of the superclass: method TileD1 %Super = create PTGP_APPCLASS_TILE:Tiles:Tile(); end-method; - In your implementation of the required getTileLiveData method, do the following: - (Optional) Override the content type by invoking one of the optional SetTileContentAs*methods. Type of tile content (*): 1. Chart 2. Chart and 1KPI (key performance indicator) 3. 1 KPI 4. 2 KPIs 5. HTML 6. Image - (Optional) Override whether live data is displayed on the tile by invoking the setTileLiveData method or setting the hasLiveDataDescr property. - (Optional) Override whether badge data is displayed on the tile by invoking the setTileHasCount method or setting the hasLiveDataCount property. - For the tile content type selected, invoke any require methods and set all required properties for that content type. - If live data is enabled for the tile, set all required properties for live data. - If badge data is enabled for the tile, set all required properties for badge data. - (Optional) Implement error handling by setting the hasContent property. - Create a tile definition in Tile Wizard. At Step 2, specify your custom application class. Defining Application Class and Tile Content: Defining the Application Class when the Tile content is Image Invoke the SetTileContentAsImage method within your implementation of the getTileLiveData method only if you wish to dynamically override the tile content type to display an image at runtime. When the tile content is specified to be an image, either setTileImageRef method or the ImageReferenceField property is required. When the user want to use image as tile content, the following methods and property can be used to define the Tile Content. Methods - SetTileContentAsImage()-invoked in getTileLiveData() to dynamically override the tile content type at runtime. - setTileImageRef(image_name) This method is used to specify the image to be displayed when the tile content is hosen as image.it is equalivalent to specifying the value for property ‘ImageReferenceField’. Either of the two can be used when the tile content is specified as image. %This.setTileImageRef(IMG_NAME); Properties - ImageReferenceField –It is used to specify the image field property when the tile content is specified as image. As it is a Field object , we can give value property as follows: %This.ImageReferenceField.Value=Image.IMG_ID; - TileImageReferenceLabel – This property is used to set tooltip (Hovertext) for the image set as tile content.It is visible by the user when the image is hover over. %This.TileImageReferenceLabel(“This is a image hover text”); Dynamic tile with Image as tile content. Application Package PeopleCode. Setting Live Data Live Data is displayed at the left bottom of the tile. 3 Live data can be displayed on the dynamic tile. Live data can be enabled from Step 2 of Tile Wizard or we can also override it through PeopleCode. Live data consists of four elements, each of which is optional: Live data value 1 + Trend image + Live data value 2 + Live data value 3 Method and Properties used to define Live Data hasLiveDataDescr– This property is used to enable the Live Data area. It returns a Boolean value. It is equivalent to setTileLiveData method. %This.hasLiveDataDescr= “True”; setTileLiveData %This.setTileLiveData(“Y”); /* Enables the Live Data Area */ Setting Badge Data It is the data which appear at the botton right of the tile , right of Live Data.It is a simple interger, typically a count of item. First we need to enable the badge area (whether the user want the badge area to be displayed).This can be done using delivered method as well as property. Method - setTileHasCount(badge_display – Y/N) Y – Badge area is enabled. N – Badge area is disabled. We invoke this method in ‘getTileLiveData’ to override whether the badge area should enabled or not. We can enable it using hasLiveDataCount property also. After the badge area is enabled , the bdage data needs to be set through TileTransCount Property. %This.setTileHasCount(“Y”); /* Enables the badge area. */ %This.TileTransCount = MY_REC.COUNT; Properties - hasLiveDataCount – Use the hasLiveDataCount property to set or return a Boolean value indicating whether the badge area is enabled. %This.hasLiveDataCount = True; /* Enables the badge area. */ %This.TileTransCount = MY_REC.COUNT; - TileTransCount –this property is used to set the badge data when the badge area is enabled. PeopleCode setting Live Data for Dynamic tile. Dynamic tile with Male/Female percentage as Live Data. Creating a tile definition in Tile Wizard Navigation : Navigator – > PeopleTools – > Portal – > Tile Wizard Step 1: Give the tile Name and its Title and click on Next. Tile Wizard – Basic Information. Step 2: Specify the Data Source for the Tile Wizard .Select the Application Package , Sub-package and Class in this step. We also specify the tile content (chart , chart&KPI, 1KPI etc.) along with whether the tile has Live Data and Badge or not. - Root Package ID – Select a custom application package that includes the class that implements the PTGP_APPCLASS_TILE:Tiles:Tile base class and one of its required SetTileContentAs methods. - Qualified Package/Class Path -Select the package name or package path to the implemented class. - Application Class ID – Select the class that implements the required getTileLiveData method. Tile Wizard – Selecting Data Source. Tile Wizard – Select Tile Content Options. Step 3: Specify the Content Reference which you want see when the tile is tapped. Choose it through the prompt given beside the field. Give the Owner Id and Parent folder being mandatory. You can provide security to the tile of who can access and who can not by selecting the required Permission & Role for the tile. Tile Wizard – Selecting Target Page information and adding security. Step 4: Specify the width and height of the tile which could be (1:1, 1:2, 2:1, 2:2). An image for it which could be overided through PeopleCode. This width and height of the tile is datatype dependent. - Image – Select an image (in SVG format only) from the database to display a custom static image as the fluid content. Otherwise the default image is displayed. - Tile Refresh Timer (Seconds) -Enter the time in seconds to set an automatic refresh period for dynamic content on a tile. When the timer limit has been reached, the system re-draws the tile so that it displays the current data, such as in the case with chart. Note: If any Event Name is selected this field is disabled. Also the system enforces a 10 second minimum limit. Any value entered less than 10 seconds is ignored and is treated as 10 seconds. The default value of 0 disables any automatic refresh. - Disable Main Hotspot – Select this option to ‘Yes’ to disable displaying the target content defined for the tile when the tile is tapped. However, any links displayed dynamically (dynamic data /content) within the tile are not disabled. - Display In – Control how the content appears once a user taps n the tile. - Cur Window – target content opens up in current window. - Modal – target content opens up in model window. We get extra Modal Parameter to specigy on select this option. - NavBar – target content appears in NavBar only when the tile is added in NavBar else it will open in the current window. - New Window – target content opens up in a new window. - Interactive – Select this option to display the fluid content as interactive, which allows users to enter data or click buttons within the fluid content. It is disabled for Pivot grid tiles and OBIEEE tiles. Tile Wizard – Setting Tile Display Properties. Step 5: Review all the settings given for the tile in the last step and Publish the Tile Wizard.
https://www.kovaion.com/blog/dynamic-tile-in-peoplesoft/
CC-MAIN-2020-40
refinedweb
1,734
56.25
Subject: Re: [boost] namespace boost? From: Dave Abrahams (dave_at_[hidden]) Date: 2011-01-15 23:26:08 At Sat, 15 Jan 2011 14:30:46 -0800, Steven Watanabe wrote: > > AMDG > > On 1/15/2011 2:11 PM, Robert Ramey wrote: > > vicente.botet wrote: > >> favor this approach because it's already common practice. > I don't consider any of the arguments against it that > I've seen sufficient to justify going to something new. Yeah, it's OK by me as well. If there's not going to be a collision with boost/flyweight/xxx.hpp, then boost/flyweight.hpp isn't going to collide either. And if it were to collide, we'd have bigger problems than just a poor choice of header paths :-) -- Dave Abrahams BoostPro Computing Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2011/01/175042.php
CC-MAIN-2019-18
refinedweb
150
69.68
An Odd Way To Square February 26, 2013 I’m not sure where this comes from; it could be an interview question, or some kind of programming puzzle: Write a function that takes a positive integer n and returns n-squared. You may use addition and subtraction but not multiplication or exponentiation. Your task is to write the squaring function described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below. [...] today’s Programming Praxis exercise, our goal is to implement an algorithm to square a number using only [...] My Haskell solution (see for a version with comments): Hm, something went wrong in posting. Let’s try that again: [...] Question is from here. [...] My java solution here. Isn’t it slightly cleaner to just add n directly? The time complexity remains the same. Nobody’s done it this way yet… [...] Pages: 1 2 [...] Python def poor_square(n): l = [n for i in range(n)] return sum(l) n = 12 print poor_square(n) use shift operator, shift and add, see: And in forth: : sqr.add ( n — n*n ) dup 1 > if dup dup 1 do over + loop swap drop then ; An O(log(N)) algorithm exploiting the binary representation of N. Assuming that N=2^k(1) + 2^k(2) + … + 2^k(t) the algorithm will add: N*2^k(1), N*2^k(2), … ,N*2^k(t) Python 3.3 based on the formula (n+1)**2 = n**2 + 2*n + 1 Here’s another O(n) one based on the fact that n*n is the sum of all odd integers less than 2n: [Java] private int squared(int n){ int result = 0; for(int i = 0; i < n; ++i){ result += n; } return result; } Haskell link JavaScript function OddSquare(n) { var result = 0; for (var i = 0; i < n; i++) { result += n; } return result; } Java int square(int n){ int sum = n; for(int i = 1; i < n; i++){ sum += n; } return sum; } Binary arithmetic yielding O(1): Woops the binary arithmetic isnt really o(1), its dependent on position of the MSB that equals 1 of the unsigned int (n2). Worst case would be 32 iterations. To improve on this you could swap n1 with n2 if n1 < n2. Easy to follow Python method:
http://programmingpraxis.com/2013/02/26/an-odd-way-to-square/
CC-MAIN-2014-10
refinedweb
396
70.53
It’s surprisingly easy to use more than one CPU core in Python. You can’t do it with straightforward threads, since the C implementation of Python has a Global Interpreter Lock (GIL) which means there can only ever be one thread performing active calculations at any one time, so threads in Python are generally useful only for waiting on I/O, handling GUIs and servers and such, not actually processing in parallel when you have multiple CPU cores. (The Java implementation has no GIL and really can run on multiple cores in parallel, but I’m assuming you have an existing Python project and want to stick with the C implementation.) But there are now ways of multiprocessing in standard C Python, and they’re not too difficult, even to add in to legacy Python code. Python 3.2 introduced the concurrent.futures module as standard [Python], and there’s a backport for Python 2.7 which can usually be installed on Unix or GNU/Linux via sudo pip install futures (in Debian or Ubuntu you might need sudo apt-get install python-pip first; on a Mac try sudo easy_install pip). One nice thing about this module is it’s quite straightforward to roll your own ‘dummy version’ for when parallelism is not available: see Listing 1. This gives you an object called executor which supports submit(function, arguments) returning an object that will, when asked for its result() later, give you either the result of the calculation or the exception raised by it, as appropriate. (Java programmers should recognise these semantics.) The executor object also has a map(function, iterables) which works like the built-in map(). If you’re on a multi-core machine and the real concurrent.futures is available in its Python installation, then some of the work will be done asynchronously on other CPU cores in between the calls to submit() and result(), so you can parallelise programs simply by looking for situations where independent calculations can be started ahead of when their results are needed, or even just by parallelising a few calls to map() as long as your map functions are not so trivial that the overhead of parallelising them would outweigh any benefit. But if your script is run on an older machine with no concurrent.futures available, it will fall back to the ‘dummy’ code which simply runs the function sequentially when its result is called for. (And if that result turns out not to be required after all and is not asked for, then the function won’t run. So if parallelism is not available then at least you can benefit from lazy evaluation. But this applies only if your algorithm involves speculative computations i.e. ones you start before knowing if you’ll really need them.) I like the idea of ‘if it’s there, use it; if not, do without’: it means users of my scripts don’t have to make sure concurrent.futures is available in their Python installation. If they don’t have whatever it takes to install it, they’ll simply get the sequential version of my script rather than an ImportError ( ImportErrors in your scripts can be bad PR). Note I’m not specifically catching ImportError around the concurrent.futures import, because it’s also possible for this import to succeed but still fail to make ProcessPoolExecutor available. This can be seen by reading __init__.py in the source code of concurrent.futures: if ProcessPoolExecutor cannot be loaded, then the module will just give you ThreadPoolExecutor. But there’s no point using ThreadPoolExecutor for multiprocessing, because ThreadPoolExecutor is subject to the GIL, so we want to verify that ProcessPoolExecutor is available before going ahead. The interface to the ‘dummy’ object is actually quite a far cry from that of the real thing. With the real concurrent.futures, you can’t pass lambda or locally-defined functions to submit() or map(), but the dummy object lets you get away with doing this. Also, the real concurrent.futures has extra functionality, such as add_done_callback and polling for completion status, and does not run a function twice if you call its result() twice. All of this can be worked around by writing a more complex dummy object, but if all you’re going to do anyway is call submit() and result() then there’s not a lot of point making the fallback that complicated: if a few lines of script are supposed to be a ‘poor man’s’ fallback for a large library, then we don’t want to make the substitute so big and complicated that we almost might as well bundle the library itself into our script. Just make sure to test your code at least once with the real concurrent.futures to make sure you haven’t accidentally tried to give it a lambda function or something (the dummy object won’t pick up on this error). You can of course insert print statements into the code to tell you which branch it’s using, to make sure you’re testing the right one; you may even want to leave something in there for the production version (i.e. ‘this script should run faster if you install futures’). Oversized data From this point on, I’ll assume the real concurrent.futures is present on the system and you are doing real multiprocessing. You don’t have to worry about causing too many context switches if too many tasks are launched at once, since ProcessPoolExecutor defaults to queuing up tasks when all CPU cores are already occupied with them. But you might sometimes be worried about what kind of data you are passing in to each task, since serialisation overheads could be a serious slow-down if it has to be large. If you’re on Unix, Python’s underlying ‘multiprocessing’ module will start the new processes with fork(), which means they each get a copy of the parent process’s memory (with copy-on-write semantics if supported by the kernel, so that no copying occurs until a memory-page is actually changed). That means your functions can read module-level global variables that have been set up at runtime before the parallel work started (just don’t try to change these during the parallel work, unless you want to cope with such changes affecting some future calculations but not others depending on which CPU or process ID happens to run what). fork() does, however, mean you’d better be careful if you’re also using threads in the same program, such as for a GUI; there are ways of working around this, but I’d suggest concentrating on making a command-line tool and let somebody else wrap it in a GUI in a different process if they must. But you can’t rely on having fork() if your script might be run on Windows, nor if you might eventually use multiple machines in a cluster using mpi4py.futures (more on this below), SCOOP [SCOOP], or a similar tool that gives you the same API as concurrent.futures. In these cases, it’s likely that your script will be separately imported on each core, so it had better not run unless __name__ == "__main__". You can set up a few module-level variables when that happens; the subprocesses should still have the same sys.argv and os.environ if that’s any help. However, you probably won’t want to repeat a long precalculation when doing this. Since most multiprocessing environments, even across multiple machines in a cluster, assume a shared filesystem, one fairly portable way of sharing such large precalculated data is to do it via the filesystem, as in Listing 2. To avoid the obvious race condition, this must be done before initialising the parallelism. Listing 2 can detect the case where fork() has been used and the data does not need to be read back from the filesystem, although without further low-level inspection it won’t be able to detect when it can avoid writing it to the filesystem at all (but that might not be an issue if you want to write it anyway). There are other ways of passing data to non- fork()ed subprocesses without using the filesystem, but they involve going at a lower level than concurrent.futures (you can’t get away with simply passing the data into a special ‘initialiser’ function to be run on each core, since the concurrent.futures API by itself offers no guarantee that all cores in use will be reached with it). MPI Message Passing Interface (MPI) is a standard traditionally used on high-performance computing (HPC) clusters, and you can access it from Python using a number of libraries for interacting with one of the underlying C implementations of MPI (typically MPICH or OpenMPI). Now that we have concurrent.futures, it’s a good idea to look for libraries supporting that API so we won’t have to write anything MPI-specific (if it’s there, we can use it; if not, we can use something else). mpi4py [MPI] plans to add an mpi4py.futures module in its version 2.1, but, at the time this article was written, version 2.1 was not yet a stable release (and standard pip commands were fetching version 2.0), so if you want to experiment with mpi4py.futures, you’ll have to download the in-development version of mpi4py. On a typical GNU/Linux box, you can do this as follows: become root ( sudo su), make the mpicc command available (on RedHat-based systems that requires typing something like module add mpi/mpich-x86_64 after installing MPICH, or equivalent after installing OpenMPI; Debian/Ubuntu systems make it available by default when one of these packages is installed), make sure the python-dev or python-devel package is installed (apt-get install python-dev or yum install python-devel), and then try: pip install At this point Listing 1 can be changed (after adding extra indentation to each line) by putting Listing 3 before the beginning. Here, we check if we are being run under MPI, and, if so, we use it; otherwise we drop back to the previous Listing 1 behaviour (use concurrent.futures if available, otherwise our ‘dummy’ object). A subtlety is that mpi4py.futures will work only if it is run in a command like this: mpiexec -n 4 python -m mpi4py.futures script.py args... and that in an MPI environment too (i.e. the above module add command will need to have been run in the same shell, if appropriate). Some versions of mpiexec also have options for forwarding standard input and environment variables to processes, but not all do, so you’ll probably have to arrange for the script to run without these. Also, any script that uses sys.stdout.isatty() to determine whether or not output is being redirected will need to be updated for running under MPI, because MPI always redirects the output from the program’s point of view even when it’s still being sent to the terminal. If you want MPI to use other machines in a cluster, then how to do this depends on your MPI version: it may involve extra setup steps before starting your program, as is the case with mpd in older versions of MPICH2 such as version 1.2. But in MPICH2 version 1.5 (the current mpich2 package in Debian Jessie), and in MPICH 3.1 (Jessie’s current mpich package), the default process manager is hydra and you simply create a text file listing the host names (or IP addresses) of the cluster machines, ensure they can all ssh into each other without password and share the filesystem, and pass this text file to mpiexec using the -f parameter or the HYDRA_HOST_FILE environment variable. (In OpenMPI you use the --hostfile parameter.) Modern MPI implementations are also able to checkpoint and restart processes in the event of failure of one or more machines in the cluster; refer to each implementation’s documentation for how to set this up. If our script is run outside of MPI, then our detecting and handling of ‘no MPI’ is a little subtle because mpi4py.futures (if installed) will still successfully import, and it will even let you instantiate an MPIPoolExecutor(), but then will likely crash after you submit a job, and catching that crash from your Python script is very awkward (normal try/ except won’t cut it). So we need to look at the command line to check we’re being run in the right way for MPI first. But we can’t just inspect sys.argv, because that will have been rewritten before control is passed to our script, so we have to get the original command line from the ps command. The ps parameters in Listing 3 were tested on both GNU/Linux and Mac OS X, and if any system does not support them then we should just fall back to the safety of not using MPI. A pattern for moving long-running functions to other CPUs If you have a function that normally runs quite quickly but can take a long time on certain inputs, it might not pay to have every call run on a different CPU, since in fast cases the overheads of doing so would outweigh the savings. But it might be useful if the program could determine for itself whether or not to run this particular call on a different CPU. Since, in Python, any function can be turned into a generator by replacing return x with yield x ; return (giving a generator that yields a single item), the pattern shown in Listing 4 seems natural as a way to refactor existing sequential code into multiprocessing. The part marked ‘first part of function goes here’ will be repeated on the other CPU, which seems wasteful but could be faster than passing variables across if they are large; it is assumed that this part of the function does what is necessary for us to be able to figure out if the function is likely to take a long time, e.g. if the first part of the function shows that we are now generating a LOT of intermediate data (which is why we probably don’t want to pass it all across once we’ve decided we’re better off running in the background). The my_function_wrapped part is necessary because submit() takes only functions not generators. I’m not suggesting writing new programs like Listing 4, but it might be a useful pattern for refactoring legacy sequential code. Avoiding CPU overload The above pattern for moving long-running functions to other CPUs should work as-is on MPI, but with concurrent.futures it will result in one too many processes, because ProcessPoolExecutor defaults to running as many parallel processes as there are CPU cores, on the assumption that the control program won’t need much CPU itself, an assumption that is likely to break down when using this pattern. The Linux and BSD kernels are of course perfectly capable of multiplexing a load that’s greater than the number of available CPU cores, but it might be more efficient to reduce the number of ‘slave’ processes by 1 to allow the master to have a CPU to itself. This can be accomplished using code like that in Listing 5. Evaluation The above methods were used to partially parallelise Annotator Generator [Brown12] resulting in a 15% overall speed increase when using concurrent.futures as compared to the unmodified code. This could almost certainly be improved with more parallelisation (recall Amdahl’s Law: the speedup is limited by the fraction of the program that must be sequential). Only a fraction of a percent was saved by subtracting 1 from the number of CPUs to achieve a more even load. Results using MPI were not so satisfactory. When running with 4 processes on a single quad-core machine using MPI, the program was actually slowed down by 8% compared with running single-core, which in turn was 6% slower than the unmodified code. I believe that 6% represents the overhead of converting functions into generators, and could be eliminated by duplicating and modifying the code for the single-core case, but that would introduce a maintenance issue unless it could somehow be automated. Given Annotator Generator’s desktop usage scenario, the prevalence of multi-core CPUs on desktops, and the speedup using concurrent.futures, it doesn’t seem very high-priority to invest code complexity in saving that 6% in the single-core case. MPI’s poor performance is more worrisome, but I later discovered it was due to the system running low on RAM (and therefore being slowed down by more page faults) while running four separate MPI processes: concurrent.futures was able to share the data structures, but MPI wasn’t (even though it could use shared memory for some message passing). Once I reduced the size of the input, MPI was 14% faster than the single-core case and concurrent.futures was 18% faster than the single-core case. Perhaps MPI would perform better on a real cluster, which I have not yet had an opportunity to test. A cluster of virtual machines with OpenMPI ran 5% faster than the single-core case, but because these machines were virtual and all running on the same actual machine, I do not believe that result to be meaningful other than as a demonstration that the underlying protocols were working. Still, I suspect a greater deal of parallelisation is required to outweigh the overheads of MPI beyond those of concurrent.futures. But as it can now use the same API as concurrent.futures, not to mention SCOOP, it is now possible to write for a single concurrency API and experiment to see which framework gives the best speed improvement to your particular application. References [Brown12] Silas S. Brown. Web Annotation with Modified-Yarowsky and Other Algorithms. Overload issue 112 (December 2012) page 4. The modified code is now at [MPI] MPI for Python [Python] Python library documentation [SCOOP] SCOOP (Scalable COncurrent Operations in Python)
https://accu.org/index.php/articles/2342
CC-MAIN-2019-09
refinedweb
3,043
55.88
0 The copyList function here is not working working. Anyone have any ideas? // header file #ifndef H_orderedLinkedList #define H_orderedLinkedList #include <list> #include <iostream> using namespace std; struct coordinates { int xValue; //variable to hold x coordinate int yValue; //variable to hold y coordinate }; struct node { coordinates info; node *link; }; class orderedLinkedList { //Overload the stream insertion operator. friend ostream& operator<< (ostream&, const orderedLinkedList&); public: void initializeList(); //Function to initialize the list to an empty state. //Postcondition: first = NULL; count = 0 bool isEmptyList(); //Function to determine whether the list is empty. //Postcondition: Returns true if the list is empty; // otherwise, returns false. int length(); //Function to return the number of nodes in the list. //Postcondition: The value of count is returned. void destroyList(); //Function to delete all the nodes of the list. //Postcondition: first = NULL; count = 0 void insertNode(int, int); //Function to insert newItem in the list. //Postcondition: first points to the new list and newItem is // inserted at the proper place in the list, count++ void deleteNode(int, int); //Function to delete deleteItem from the list. //Postcondition: If found, the node containing deleteItem // is deleted from the list; first points to // the first node of the new list, count++ // If deleteItem isn't in the list, an error message displays. orderedLinkedList(); //default constructor //Initializes the list to an empty state. //Postcondition: first = NULL; count = 0 orderedLinkedList(const orderedLinkedList& otherList); //copy constructor ~orderedLinkedList(); //destructor //Deletes all the nodes from the list. //Postcondition: The list object is destroyed. private: void copyList(const orderedLinkedList& otherList); //Function to make a copy of otherList. //Postcondition: A copy of otherList is created // and assigned to this list. int count; //variable ot store the number of elements in the list. node *first; //pointer that points to first element node *other; //pointer to traverse through the list }; #endif //copyList Function// void orderedLinkedList::copyList(const orderedLinkedList &otherList) { node *newNode; //pointer to create a node node *current; //pointer to travel the list if(first != NULL) //if the list is nonempty, make it empty destroyList(); if(otherList.first == NULL) //otherList is empty { first = NULL; other = NULL; count = 0; } else { current = otherList.first; //current points to the //list to be copied count = otherList.count; //copy the first node first = new node; //create the node assert(first != NULL); first->info = current->info; //copy the info first->link = NULL; //set the link field of //the node to NULL other = first; //make last point to the //first node current = current->link; //make current point to //the next node //copy the remaining list while(current != NULL) { newNode = new node; //create a node assert(newNode!= NULL); newNode->info = current->info; //copy the info newNode->link = NULL; //set the link of //newNode to NULL other->link = newNode; //attach newNode after last other = newNode; //make last point to //the actual last node current = current->link; //make current point to //the next node }//end while }//end else }//end copyList
https://www.daniweb.com/programming/software-development/threads/83565/ordered-linked-list-copy-function
CC-MAIN-2016-50
refinedweb
478
62.17
Definition of Object Class in Java Object class is the root class in every inheritance tree that can exist since JDK 1.0. Every class in java inherits the properties and methods of Object class either direct or indirectly that is, for instance, in case a class C1 extends other class C2 than since Object class is a parent of all class, C1 indirectly inherits Object class otherwise by default every class being created in Java extends Object class by default. Thus every data type object in Java is a type of Object class objects thus an object of Object class can hold any type of data inside it. How does Object Class work in Java? Object class in java.lang package is the top in the class hierarchy tree, thus every class is like direct or indirect descendant to this class. This implies, that all classes inherit the instance methods from Object class and can be called using their own objects. But to use the inherited methods, a class needs to override them inside them. Also, Object class act as a container class for all the data types thus in case the data type of one object is unknown, it can be easily referred using object of Object class since parent class object can be easily used to refer the child class objects, known as upcasting. Methods of Object Class in Java Below is the list of instance methods available in Object class: 1. toString() Syntax: public String toString() This function is used to extract the String representation of an object. This String representation can be easily modified according to the need For eg- in case there is a class Office that needs that its head of that branch must be displayed with location whenever one calls this function. Thus representation depends entirely on the object and useful while debugging. 2. hashCode() Syntax: public int hashCode() This function returns the hexadecimal representation of the memory address of the object, which is helpful in case of equating 2 objects as 2 objects are said to equal only if their hashcode matched, as per implementation present in the Object class. But in case one overrides equals() method of Object class, super implementation becomes inactive for objects of that class. 3. equals(Object obj) Syntax: public boolean equals(Object obj) This method is used to override the default implementation of equals method to compare 2 objects using their hashcode and returns true or false accordingly. In case you want to implement your own logic to compare 2 objects of your class, you must override equals method. 4. getClass() Syntax: public final Class getClass() This method returns the reference of a Class object that helps to retrieve various information about the current class. Below are examples of instance methods of Class - getSimpleName(): Returns the name of the class. - getSuperClass(): Returns the reference of the superclass of the specified class. - getInterfaces(): Returns the array of references of Class type for all the interfaces that have been implemented in a specified class. - isAnnotation(): Returns True is the specified class is an annotation otherwise false. - getFields(): Returns the list of fields of the specified class. - getMethods(): Returns the list of instance methods of the class. 5. finalize() Syntax: protected void finalize() throws Throwable This method is called whenever JVM depicts that there is no more reference exit to that object so that garbage collection can be performed. There is nothing defined in finalize function in Object class and returns nothing. Thus the subclasses can implement it to perform some action, but must not rely on when it will be invoked as the thread that invokes it does not hold a user-visible authentication lock. Any exception occurs while finalize() halts its execution, thus it must be handled with precautions. 6. clone() Syntax: protected Object clone() throws CloneNotSupportedException This method of Object class is meant to be overridden in a subclass in case Cloneable Interface is implemented in it and this method is used to clone, ie creating a copy of that object with values in member variables, using aobj.clone() syntax. In case one class does not implement the Cloneable interface, CloneNotSupportedException is thrown. By default, this method checks if Cloneable interface has been implemented in this case but in case you want to override this method according to your logic, you can easily do this using the following syntax:- protected Object clone() throws CloneNotSupportedException or public Object clone() throws CloneNotSupportedException Example: package Try; public class Desk implements Cloneable{ private int id; private String Mid; Desk(int id,String mid){ this.id=id; this.Mid = mid; } @Override public String toString() { return getClass().getName() + "@" + Integer.toHexString(hashCode()); } @Override public int hashCode() { return id; } public boolean equals(Desk dd){ return this.id == dd.id; } @Override protected void finalize() { System.out.println("Lets call Fialize"); } @Override protected Desk clone() { return this; } public static void main(String[] args){ Desk d1=new Desk(123,"344"); Desk d2=new Desk(234,"344"); System.out.println("toString Representation " + d1.toString()); System.out.println("HashCode for this object "+d1.hashCode()); System.out.println("Comparing 2 objects using equals method "+ d1.equals(d2)); Desk d3=d2.clone(); // cloning d2 object System.out.println("Comparing clone object with original "+d2.equals(d3)); d1=d2=d3=null; System.gc(); } } Output: Below three methods of Object class are used in case you need to implement multithreading behavior and need synchronization between different threads. 7. wait() This method is used to put the current thread in the waiting state until any other thread notifies. The time needs to be specified in the function in milliseconds for which you need to resume the thread execution. There are three definitions for this function as given below: - public final void wait() - public final void wait(long timeout) - public final void wait(long timeout, int nanos) Note: InterruptedException is thrown by this method. 8. notify() Syntax: public final void notify() 9. notifyAll() This method is used to wake up all the threads waiting in the waiting queue. Syntax: public final void notifyAll() Conclusion Object class is topmost in the class hierarchy of every class in Java thus every class inherits its instance methods and can use them after overriding them according to the scenarios. Also, its object can be used as a reference variable for an object of every class using the upcasting concept. Recommended Articles This is a guide to Object Class in Java. Here we discuss the definition and how object class works in java along with methods and examples. you may also have a look at the following articles to learn more –
https://www.educba.com/object-class-in-java/?source=leftnav
CC-MAIN-2020-34
refinedweb
1,102
51.38
# Python or R: Which Is A Better Choice For Data Science? ![](https://habrastorage.org/r/w1560/webt/bl/je/0s/blje0syg75allynzgvt0s31r5zy.png) Data science is going to revolutionize this world completely in the coming years. The tough question among data scientists is that which programming language plays the most important role in data science? There are many programming languages used in data science including R, C++, Python. In this blog, we are going to discuss two important programming languages namely Python and R. This will help you choose the best-fit language for your next data science project. Python is an open-source, flexible, [object-oriented](https://en.wikipedia.org/wiki/Object-oriented_programming) and easy-to-use programming language. It has a large community base and consists of a rich set of libraries & tools. It is, in fact, the first choice of every data scientist. On the other hand, R is a very useful programming language for statistical computation & data science. It offers unique technique's viz. nonlinear/linear modeling, clustering, time-series analysis, classical statistical tests, and classification technique. Also Read: [Uses of Google App Engine](https://habr.com/en/post/504814/) **Features of Python** * Dynamically typed language, so the variables are defined automatically. * More readable and uses less code to perform the same task as compared to other programming languages. * Strongly typed. So, developers have to cast types manually. * An interpreted language. This means that the program need not be compiled. * Flexible, portable and can run on any platform easily. It is scalable and can be integrated with other third-party software easily. **R features for data science apps** * Multiple calculations can be done with vectors * Statistical language * You can run your code without any compiler * Data science support Here, I have listed out some domains that are used to differentiate these two programming languages for data science. **1) Data structures** When it comes to data structures, binary trees can be easily implemented in Python but this process is done in R by using list class which is a slow move. Implementation of binary trees in Python is shown below: First, create a node class and assign any value to the node. This will create a tree with a root node. ``` class Node: def __init__(self, data): self.left = None self.right = None self.data = data def PrintTree(self): print(self.data) root = Node(10) root.PrintTree() ``` Output: 10 Now, we need to insert into a tree so we add an insert class & same node class inserted above. ``` class Node: def __init__(self, data): self.left = None self.right = None self.data = data def insert(self, data): # Compare the new value with the parent node if self.data: if data < self.data: if self.left is None: self.left = Node(data) else: self.left.insert(data) elif data > self.data: if self.right is None: self.right = Node(data) else: self.right.insert(data) else: self.data = data # Print the tree def PrintTree(self): if self.left: self.left.PrintTree() print( self.data), if self.right: self.right.PrintTree() # Use the insert method to add nodes root = Node(12) root.insert(6) root.insert(14) root.insert(3) root.PrintTree() ``` Output: 3 6 12 14 **Winning language:** Python **2) Programming language unity** The version change of Python from 2.7 to 3.x will not cause any disruption in the market while changing the version of R into two different dialects is impacting a lot because of RStudio: R & [Tidyverse](https://www.tidyverse.org/). **Winning language:** Python **3) Meta programming & OOP facts** Python programming language has one OOP paradigm while in R, you can print a function to the terminal many times. The meta programming features of R i.e. code that produce code is magical. Hence, it has become the first choice of computer scientists. Though functions are objects in both programming languages R takes it more seriously as that of Python. As a functional programming language, R provides good tools to perform well-structured code generation. Here, a simple function is taking a vector as an argument & returning element which is higher than the threshold. ``` myFun <- function(vec) { numElements <- length(which(vec > threshold)) numElements } ``` For a different threshold value, we will write a function that generates all these functions instead of rewriting the function by hand. Below, we have shown the function that produces many myFun type functions: ``` genMyFuns <- function(thresholds) { ll <- length(thresholds) print("Generating functions:") for(i in 1:ll) { fName <- paste("myFun.", i, sep="") print(fName) assign(fName, eval( substitute( function(vec) { numElements <- length(which(vec > tt)); numElements; }, list(tt=thresholds[i]) ) ), envir=parent.frame() ) } } ``` You can also consider the numeric example on the R CLI session as shown below: ``` > genMyFuns(c(7, 9, 10)) [1] "Generating functions:" [1] "myFun.1" [1] "myFun.2" [1] "myFun.3" > myFun.1(1:20) [1] 13 > myFun.2(1:20) [1] 11 > myFun.3(1:20) [1] 10 > ``` **Winning language:** R **4) Interface to C/C++** To interface with C/C++, R programming language has strong tools as compared to Python language. R's Rcpp is one of the powerful tools which interface to C/C++ and its new ALTREP idea can further enhance performance & usability. On the other hand, Python has tools viz. swig which is not that much power but working the same. Other variants of Python like Cython and PyPy can remove the need for explicit C/C++ interface completely anytime. **Winning language:** R programming **5) Parallel computation** Both programming languages do not provide good support for multicore computation. R comes with a parallel package which is not a good workaround and Python's multiprocessing package is not either. Python has better interfaces for GPUs. However, external libraries supporting cluster computation are good in both the programming languages. **Winning language:** None of the two **6) Statistical issues** R language was written by statisticians for statisticians. Hence there were no statistical issues involved. On the other hand, Python professionals majorly work in machine learning and have a poor understanding of the statistical issues. R is related to the S statistical language commercially available as S-PLUS. R provides numerous statistics functions namely sd(variable), median(variable), min(variable), mean(variable), quantile(variable, level), length(variable), var(variable). T-test is used to determine statistical differences. An example is hown below to perform a t-test: > t.test(var1, var2) Welch Two Sample t-test data: x1 and x2 t = 4.0369, df = 22.343, p-value = 0.0005376 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 2.238967 6.961033 sample estimates: mean of x mean of y 8.733333 4.133333 > However, the classic version of the t-test can be run as shown below: > t.test(var1, var2, var.equal=T) Two Sample t-test data: x1 and x2 t = 4.0369, df = 28, p-value = 0.0003806 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 2.265883 6.934117 sample estimates: mean of x mean of y 8.733333 4.133333 > To run a t-test on paired data, you need to code like below: > t.test(var1, var2, paired=T) Paired t-test data: x1 and x2 t = 4.3246, df = 14, p-value = 0.0006995 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 2.318620 6.881380 sample estimates: mean of the differences 4.6 > **Winning language:** R language **7) AL & ML** Python got huge importance after the arrival of machine learning and artificial intelligence. Python offers a great number of finely-tuned libraries for image recognition like AlexNet. Therefore, R versions can be easily developed. Python powerful libraries come from making certain image-smoothing ops which further can be implemented in [R's Keras wrapper](https://keras.rstudio.com/). Due to which a pure-R version of TensorFlow can be easily developed. However, R's package availability for gradient boosting & random forests is outstanding. **Winning language:** Python **8) Presence of libraries** The Comprehensive R Archive Network (CRAN) has over 12,000 packages while the Python Package Index ([PyPI](https://pypi.org/)) has over 183,000. PyPI is thin on data science as compared to R. ![](https://habrastorage.org/r/w1560/webt/vq/vj/kp/vqvjkpskihhz6_4latdp_ngrvr8.png) **Winning language:** Tie between the two **9) Learning graph** When it comes to becoming proficient in Python, one needs to learn a lot of material including Pandas, NumPy & matplotlib, matrix types while basic graphics are already built-in R. The novice can easily learn R programming language within minutes by doing simple data analysis. However, Python libraries can be tricky for him to configure out. But R packages are out of the box. **Winning language:** R programming language **10) Elegance** Being the last comparison factor, it is actually the most subjective one. Python is more elegant than R programming language as it greatly reduces the use of parentheses & braces while coding and making it more sleek to use by developers. **Winning language:** Python **Final Note:** Both languages are giving a head fight to each other in the world of data science. At some point, Python is winning the race while at some other R language is up. So the end choice between the two above programming languages for data science depends on the following factors: -> Amount of time you invest -> Your project requirements -> Objective of your business Thank you for investing your precious time in reading and I welcome your positive feedback.
https://habr.com/ru/post/482500/
null
null
1,591
50.84
Today, we’re going to begin a dive into the PostgreSQL Data Types. As my colleague Will Leinweber said recently in his talk Constraints: a Developer’s Secret Weapon that he gave at pgDay Paris: database constraints in Postgres are the last line of defense. The most important of those constraints is the data type, or the attribute domain in normalization slang. By declaring an attribute to be of a certain data type, then PostgreSQL ensures that this property is always true, and then implements advanced processing features for each data type, so that you may push the computation to the data, when needed. This article is the first of a series that will go through many of the PostgreSQL data types, and we open the journey with boolean. Table of Contents PostgreSQL Data Types PostgreSQL comes with a long list of data types. The following query limits the types to the ones directly interesting to someone who is an application developer, and still it lists 72 data types: select nspname, typname from pg_type t join pg_namespace n on n.oid = t.typnamespace where nspname = 'pg_catalog' and typname !~ '(^_|^pg_|^reg|_handler$)' order by nspname, typname; Let’s take only a sample of those with the help of the TABLESAMPLE feature of PostgreSQL, documented in the select SQL from page of the documentation: select nspname, typname from pg_type t TABLESAMPLE bernoulli(20) join pg_namespace n on n.oid = t.typnamespace where nspname = 'pg_catalog' and typname !~ '(^_|^pg_|^reg|_handler$)' order by nspname, typname; In this run here’s what I get as a random sample of about 20% of the available PostgreSQL types. If you run the same query again, you will have a different result set: nspname │ typname ════════════╪═══════════════ pg_catalog │ abstime pg_catalog │ anyelement pg_catalog │ bool pg_catalog │ cid pg_catalog │ circle pg_catalog │ date pg_catalog │ event_trigger pg_catalog │ line pg_catalog │ macaddr pg_catalog │ oidvector pg_catalog │ polygon pg_catalog │ record pg_catalog │ timestamptz (13 rows) So, let’s open our journey with the boolean attribute domain. SQL Boolean: Three-Valued Logic SQL introduces a NULL value in the boolean attribute domain, adding it to the usual TRUE and FALSE values. That gives us three-valued logic. Where that’s very different from other languages None or NULL is when comparing values. Let’s have a look at the SQL NULL); As you can see cross join is very useful for producing a truth table. It implements a Cartesian product over our columns, here listing the first value of a (true) with every value of b in order (true, then false, then NULL), then again with the second value of a (false) and then again with the third value of a (NULL). We are using format and coalesce to produce an easier to read results table here. The coalesce function returns the first of its argument which is not null, with the restriction that all of its arguments must be of the same data type, here text. Here’s the nice truth table we get:) We can think of NULL as meaning I don’t know what this is rather than no value here. Say you have in A (left hand) something (hidden) that you don’t know what it is and in B (right hand) something (hidden) that you don’t know what it is. You’re asked if A and B are the same thing. Well, you can’t know that, can you? So in SQL null = null returns NULL, which is the proper answer to the question, but not always the one you expect, or the one that allows you to write your query and have the expected result set. That’s why we have other SQL operators to work with data that might be NULL: they are IS DISTINCT FROM and IS NOT DISTINCT FROM. Those two operators not only have a very long name, they also pretend that NULL is the same thing as NULL. So if you want to pretend that SQL doesn’t implement three-valued logic you can use those operators and forget about Boolean comparisons returning NULL. We can even easily obtain the truth table from a SQL query directly: select a::text as left, b::text as right, (a = b)::text as "=", (a <> b)::text as "<>", (a is distinct from b)::text as "is distinct", (a is not distinct from b)::text as "is not distinct from" from (values(true),(false),(null)) t1(a) cross join (values(true),(false),(null)) t2(b); With this complete result this time: left │ right │ = │ <> │ is distinct │ is not distinct from ═══════╪═══════╪═══════╪═══════╪═════════════╪══════════════════════ true │ true │ true │ false │ false │ true true │ false │ false │ true │ true │ false true │ ¤ │ ¤ │ ¤ │ true │ false false │ true │ false │ true │ true │ false false │ false │ true │ false │ false │ true false │ ¤ │ ¤ │ ¤ │ true │ false ¤ │ true │ ¤ │ ¤ │ true │ false ¤ │ false │ ¤ │ ¤ │ true │ false ¤ │ ¤ │ ¤ │ ¤ │ false │ true (9 rows) You can see that we have not a single NULL in the last two columns. Boolean Aggregates You can have tuple attributes as Booleans too, and PostgreSQL includes specific aggregates for them: select year, format('%s %s', forename, surname) as name, count(*) as ran, count(*) filter(where position = 1) as won, count(*) filter(where position is not null) as finished, sum(points) as points from races join results using(raceid) join drivers using(driverid) group by year, drivers.driverid having bool_and(position = 1) is true order by year, points desc; In this query, we show the bool_and() aggregates that returns true when all the Boolean input values are true. Like every aggregate it silently bypasses NULL by default, so in our expression of bool_and(position = 1) we will filter F1 drivers who won all the races they finished in a specific season. The database used in the next example And here’s the result of our query: year │ name │ ran │ won │ finished │ points ══════╪═════════════════════╪═════╪═════╪══════════╪════════ 1950 │ Juan Fangio │ 7 │ 3 │ 3 │ 27 1950 │ Johnnie Parsons │ 1 │ 1 │ 1 │ 9 1951 │ Lee Wallard │ 1 │ 1 │ 1 │ 9 1952 │ Alberto Ascari │ 7 │ 6 │ 6 │ 53.5 1952 │ Troy Ruttman │ 1 │ 1 │ 1 │ 8 1953 │ Bill Vukovich │ 1 │ 1 │ 1 │ 9 1954 │ Bill Vukovich │ 1 │ 1 │ 1 │ 8 1955 │ Bob Sweikert │ 1 │ 1 │ 1 │ 8 1956 │ Pat Flaherty │ 1 │ 1 │ 1 │ 8 1956 │ Luigi Musso │ 4 │ 1 │ 1 │ 5 1957 │ Sam Hanks │ 1 │ 1 │ 1 │ 8 1958 │ Jimmy Bryan │ 1 │ 1 │ 1 │ 8 1959 │ Rodger Ward │ 2 │ 1 │ 1 │ 8 1960 │ Jim Rathmann │ 1 │ 1 │ 1 │ 8 1961 │ Giancarlo Baghetti │ 3 │ 1 │ 1 │ 9 1966 │ Ludovico Scarfiotti │ 2 │ 1 │ 1 │ 9 1968 │ Jim Clark │ 1 │ 1 │ 1 │ 9 (17 rows) If we want to restrict the results to drivers who finished and won every race they entered in a season we need to then write having bool_and(position is not distinct from 1) is true, and then the result set only contains those drivers who participated in a single race in the season. The main thing about Booleans is the set of operators to use with them: The =operator doesn’t work as you think it would. Use isto test against literal TRUE, FALSE or NULL rather than =. Remember to use the IS DISTINCT FROM and IS NOT DISTINCT FROM operators when you need them. Booleans can be aggregated thanks to bool_and() and bool_or(). The main thing about Booleans in SQL is that they have three possible values: TRUE, FALSE and NULL. Moreover the behavior with NULL is entirely ad-hoc, so either you remember it or you remember to check your assumptions. For more about this topic, you can read What is the deal with NULLs? from PostgreSQL Contributor Jeff Davis. Conclusion!
https://tapoueh.org/blog/2018/04/postgresql-data-types-an-intro/
CC-MAIN-2022-27
refinedweb
1,255
58.76
Asked by: Upstream Release: Microsoft Visual Studio 2015 RTM Question - User13824 posted I will close this discussion now for simplicity because today's (Sep 11, 2015):. Upstream Release: Microsoft Visual Studio 2015 RTM Date published: July 20, 2015 Release notes (from Microsoft): Current best Xamarin version to use with VS 2015 "Stable Release: XamarinVS 3.11.836, Cycle 5 – Service Release 3" or higher. (Xamarin for Visual Studio "Cycle 5 – Service Release 3" includes some specific fixes for Visual Studio 2015 that are not included in the previous "Cycle 5 – Service Release 2" version.) Guidelines for this thread This first post will be updated regularly. Hopefully this thread will help answer "what might break if I update to this release?" If you find a new problem that is specific to Xamarin after you update to Visual Studio 2015, please provide as many details as you can about how to reproduce the problem. You may reply directly in this thread or contact the Xamarin Support Team via email. Please discuss older bugs that are unchanged in this release compared to Visual Studio 2013 Update 4 either with the Xamarin Support Team via email or directly in Bugzilla. Of course for questions and discussions about topics other than bugs, feel free start new forum threads. Known issues for Xamarin, with more common or severe issues near the top Upstream Visual Studio bug here, Xamarin tracking bug here: Bug 32977 - [Visual Studio, upstream] VS 2015 hangs or crashes under certain circumstances when working on Xamarin.Forms projects. This appears to be an upstream bug in VS 2015 itself that is exposed by Xamarin's change to using Visual Studio's built-in IntelliSense for XAML pages starting in VS 2015 + Xamarin.Forms 1.5.0. Candidate fix now available in the Beta channel: the new Beta version switches XamarinVS back to using Xamarin's own Xamarin.Forms XAML IntelliSense extension as a temporary fix while waiting for the next update of VS 2015 from Microsoft. (Old possible partial workaround: disable the XAML IntelliSense (see the post later in the thread for additional details): - Download devenv.pkgundef to C:\Program Files (x86)\Microsoft Visual Studio [VSVERSION]\Common7\IDE\ - Run devenv /updateconfigurationfrom a developer command prompt (for the relevant VS versions). (If you have ReSharper, it might be possible to enable ReSharper's XAML IntelliSense as a replacement for the built-in IntelliSense after these steps.)) Bug 32622 - [XamarinVS] IntelliSense shows errors in VS 2015 when referencing certain types in app projects from PCL projects (due to the new Roslyn-based IntelliSense in VS 2015). (This one is a bug in the XamarinVS extensions themselves. It only affects VS 2015. Candidate fix now available in the Beta channel. After updating you might need to delete the hidden .vsfolder in the solution directory to force an IntelliSense refresh. Also be aware that Bug 32988 (mentioned below) has a different root cause and is not yet fixed. (Old workaround: see Comment 4 on the bug report.) Example error messages: - "The type 'Object' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Runtime, Version=4.0.0.0" - "The type 'Expression<>' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Linq.Expressions, Version=4.0.0.0" Bug 32988 - [Xamarin.Forms] Modifying XAML files causes IntelliSense for references to other PCLs to fail in the .xaml.cscode behind files with errors of the form "are you missing an assembly reference?" (This is a bug in Xamarin.Forms. It affects all versions of Visual Studio.) This seems to be another side effect of the same changes between Xamarin.Forms 1.4.3 and 1.4.4 that caused Bug 32341. Possible workaround: Downgrade to Xamarin.Forms 1.4.3.x (or earlier). Bug 32341 - [Xamarin.Forms] IntelliSense on code behind files for XAML pages is not available for InitializeComponent()or for elements of the XAML page that include the x:Nameattribute. (This is a bug in Xamarin.Forms. It affects all versions of Visual Studio.). Now fixed in Xamarin.Forms 1.4.4.6392 and higher. Also important: If your project contains any Xamarin.Forms XAML pages that still use the outdated MSBuild:CompileCustom Tool, you will also need to update them to use MSBuild:UpdateDesignTimeXaml(see Bug 32987 for additional details). (Old possible workarounds: Downgrade to Xamarin.Forms 1.4.3.x (or earlier), or try changing: <Target Name="UpdateDesignTimeXaml" Condition="'$(UseHostCompilerIfAvailable)' == 'true'" DependsOnTargets="Compile"/> ... to: <Target Name="UpdateDesignTimeXaml" Condition="'$(UseHostCompilerIfAvailable)' == 'true'" DependsOnTargets="PrepareResources; Compile"/> ... in packages\Xamarin.Forms.1.4.4.*\build\portable*\Xamarin.Forms.targets) [Xamarin.Forms] IntelliSense in VS 2015 does not work within XAML pages themselves, as discussed in the first comment on. (This is a bug in Xamarin.Forms caused by a change in the "ownership" of IntelliSense from the XamarinVS extensions into the Xamarin.Forms NuGet package. It only affects VS 2015.) Now fixed in Xamarin.Forms 1.5.0-pre1. (Old workaround: see the two "answer" posts in that thread here and here). EDIT Aug 11: Add a specific bug number for the VS hanging issue: Bug 32977. EDIT Aug 12: Add information about outdated MSBuild:CompileCustom Tool for Bug 32341. Add Bug 32988. EDIT Aug 12: Add upstream VS bug for the hanging and crashing issue, and update wording accordingly. EDIT Aug 16: Add new steps to deactivate XAML IntelliSense for Bug 32977. EDIT Aug 20: Update workaround instructions for Bug 32977. EDIT Aug 25: Candidate fixes for Bug 32977 and Bug 32622 now available in the Beta channel.Monday, August 10, 2015 6:53 PM All replies - User13824 posted Related discussions now closed to help consolidate information on this thread: Monday, August 10, 2015 7:12 PM - - - - - User80438 posted Comment 4 from the bug report does not fix the intellisense errors. Carried out steps 2,3,4 (1 and 5 don't apply). Opened solution, opened a view model that references classes in my core pcl. Build, no errors reported (see 1st screenshot) Open a Xaml file that references classes in my core pcl, made a small change (removed a line). Switch back to View Model, errors appear for every referenced namespace, class or class member. See attached screenshots, before and after.Monday, August 10, 2015 8:01 PM - User80438 posted And Visual Studio has hung/crashed again after doing not much in particular with the usual 'Visual studio is Busy' message. See attached screenshot. Probably happens several times an hour. I really hope this can be fixed soon as it is crippling me.Monday, August 10, 2015 8:16 PM - User80438 posted And again few minutes later. Seems to happen when I switch from a xaml file to a view model cs file. Unable to develop at all really. Please can you make this extremely urgent and endeavour to provide a fix ASAP.Monday, August 10, 2015 8:23 PM - User13824 posted If you get a chance, you can try collecting a stack trace for the hang as described in the first post in the thread: ... collecting a stack trace from the frozen instance of VS... It should at most only take a few minutes to run through the steps to collect the stack trace. See the first post in the thread for the steps.Monday, August 10, 2015 9:02 PM - User80438 posted Will do. I have also just tried this on W10 with latest update above with same results.Monday, August 10, 2015 9:23 PM - User13824 posted For the IntelliSense issue: errors appear for every referenced namespace, class or class member I'd be curious to see the precise IntelliSense error messages reported for these red underlines under the "View -> Error List" window. (You can switch the display mode for the Error List to "IntelliSense Only".) Additional notes that might be informative Monday, August 10, 2015 9:27 PM It is important to replace both the Xamarin.Android and Xamarin.iOS targets files with the new versions, even if you're only currently working on one type of project. (Also affects VS 2013) IntelliSense in Xamarin.Android projects currently behaves differently from Xamarin.iOS projects. For Xamarin.Android projects it is necessary to build the project once and then close and reopen the solution to get IntelliSense to work as expected. See Bug 30844, Comment 6. - User96360 posted I just updated because it still happens...Monday, August 10, 2015 9:44 PM - User80438 posted @BrendanZagaeski I followed the steps to get a stack trace when it hangs. See attached screenshot. Not much help I'm afraid. Nothing reported in CallStack.Monday, August 10, 2015 10:03 PM - User80438 posted @BrendanZagaeski And here is a screenshot of the Intellisense errors, 442 of them !!! I did the workaround for both iOS and Android. Makes no difference at all.Monday, August 10, 2015 10:10 PM - User80438 posted For reference please see my solution structure in the attached screenshot. As you can see I have a Core PCL (Silkwen.Mobile,Core) and an app specific PCL (Silkweb.Mobile.MountainWeather). MountainWeather references Core and classes in MountainWeather use classes in Core. A normal project reference set-up. On loading the solution all is good. After editing a Xaml file that also references Core the errors appear in the code files.Monday, August 10, 2015 10:17 PM - User96360 posted 1.5.0.6396-pre1 + XamarinVS 3.11.836, Cycle 5 + VS2015 is a bit of a disaster on my machine. I deleted all obj and bin folders, but I have intellisense problems and xaml crashing problems, and xaml hanging problems. I tried getting the xaml hanging call stack but it takes too long to attach and I can't get the call stack.Monday, August 10, 2015 11:33 PM - User13824 posted @jonathanyates, Not much help I'm afraid. Nothing reported in CallStack. You can right-click the "[External Code]" and select "Show External Code". I'll update the steps in the gist to mention that trick. And here is a screenshot of the Intellisense errors Thanks. So the "good" news is that those are not the same kind of IntelliSense errors as in Bug 32622 (they don't mention "You must add a reference to assembly"). They look like they might be problems with the generated .g.csfiles not being updated or reloaded into IntelliSense properly. At first pass, that sounds similar to Bug 32341, but it now seems there might be a second problem that closely resembles Bug 32341. As discussed in Bug 32341, Comment 5, the best way to proceed with this remaining issue will be to provide the Xamarin developers with a test case that reproduces the problem. I have filed a preliminary bug report where you can attach the test case if you get a chance:. Thanks!Tuesday, August 11, 2015 6:54 AM - User80438 posted @BrendanZagaeski See attached screenshot of call stack. Sorry I forgot to copy the call stack text before I force quit VS. Hope this is enough.Tuesday, August 11, 2015 7:35 AM - User141005 posted @jonathanyates Just upgraded to latest alpha and can no longer open up my solution. The solutions loads half way and then hangs until the VS2015 crash dialog gets displayed, prompting me to close the solution. As a result, I am unable to continue work because I am unable to open my solution.Tuesday, August 11, 2015 4:46 PM - User80438 posted @ScottNimrod Can you try the Beta.Tuesday, August 11, 2015 5:02 PM - User141005 posted @jonathanyates I'm not sure how to manage nuget packages without my solution being loaded.Tuesday, August 11, 2015 5:12 PM - User80438 posted @ScottNimrod Tools > options > xamarin > switch to beta channelTuesday, August 11, 2015 5:50 PM - User141005 posted @jonathanyates I installed the beta. However, the issue still exists. I'm not quite sure what else to do. I have approximately 35 projects in my solution.Tuesday, August 11, 2015 6:33 PM - User80438 posted Did the solution load?Tuesday, August 11, 2015 6:37 PM - User141005 posted No. It loads a quarter of the way and hangs. I have restarted my machine as well.Tuesday, August 11, 2015 6:38 PM - User141005 posted I deleted the suo file and the solution now loads. Thanks.Tuesday, August 11, 2015 7:06 PM - User80438 posted Just to confirm. I have suspended R# and the problem persists.Tuesday, August 11, 2015 7:26 PM - User80438 posted I have tried to replicate this in a very simple XF project with two PCL's and can not replicate the crashing. So I have no idea why it is occurring.Tuesday, August 11, 2015 7:52 PM - User80438 posted Although intellisense errors are very easy to replicate. All I needed to do was add a namespace declaration in a xaml page to a core namespace. Switch back to view model and voila, intellisense errors everywhere.Tuesday, August 11, 2015 8:10 PM - User80438 posted Another thing worth noting is that I am using xcc conditional compilation so I can have design time intellisense for bindings using d:DesignContext={DesignInstance... I doubt this is the cause but you never know. I am very loathed to try removing this as I have this in every single xaml file so it's removal would be a major task.Tuesday, August 11, 2015 8:37 PM - User13824 posted Thanks for the screenshot of the stack trace for the hang. I have now filed a preliminary bug report for that issue: Bug 32977. As noted on the bug report, the fact that the call stack does not mention any Xamarin assemblies suggests that this hang might be within Visual Studio 2015's XAML language support itself, but I don't know enough about the internals of how the XAML language support works to say for certain that the XamarinVS team will not be able to stop the problem somehow. If you can get ReSharper to completely disable Visual Studio's built-in IntelliSense for XAML, there's a chance that might be sufficient to avoid the problem. For the issue with IntelliSense errors (Bug 32944), I have a new lead for investigation that I will look into shortly: Bug 32439. That said, it still wouldn't be bad to attach a test case on Bug 32944 if you have one handy.Tuesday, August 11, 2015 9:16 PM - User80438 posted @BrendanZagaeski Thanks for the update. It's definitely not r#. I had this disabled and the crash and intellisense errors still occur. Test case to replicate intellisense errors is easy. - Create new XF project and and another Core PCL. - Add some classes to the Core PCL, perhaps ViewModelBase - Create a vm in the project pcl, inherited from ViewModelBase in core. - Create view (ContentPage will do) in project pcl and set binding context to VM. - Normal setup so far. Build and all is ok. - Now in the view add a namespace reference 'xmlns' to core. - Switch to ViewModel and BOOM, errors will appear. As for VS hanging, which is a bigger issue for me, I really haven't a clue. But, this did not happen in 2013.Tuesday, August 11, 2015 9:30 PM - User13824 posted Hang / crash It's definitely not r#. I had this disabled and the crash ... still [occurs]. The guess would in fact be the reverse of that. The thought is that enabling R# and asking R# to disable the built-in Visual Studio IntelliSense might fix the hangs (because it would hopefully avoid the chain of method calls shown in the screenshot you collected). (If it changes the call stack, that might be interesting to know too.) this did not happen in [VS] 2013. That is as expected. To borrow some wording from the bug report: "VS 2013 uses Xamarin's custom Xamarin.Forms XAML IntelliSense extension, rather than Visual Studio's built-in IntelliSense." In contrast, Xamarin.Forms XAML IntelliSense in VS 2015 does use VS's built-in IntelliSense. One possible workaround the XamarinVS developers are contemplating is to enable VS 2015 support for the old custom Xamarin.Forms XAML IntelliSense extension. IntelliSense errors Test case to replicate intellisense errors I had some luck reproducing an issue that might match what you're seeing. (I wasn't seeing the problem when I first tried to follow the steps with a new template app, but then I switched to using the "XLabs.Sample" app based on a tip from Philip.) It looks like the error messages approximately match your screenshot from earlier. The new bug report is here: Bug 32988. I have accordingly updated both the first post in the thread and the old "preliminary" bug report with additional details and possible next steps.Wednesday, August 12, 2015 5:31 PM - User80438 posted @BrendanZagaeski Thanks for the update. I completely isolated my project to my vm c drive and removed R# and still then hanging occurs, regularly. Regarding the intellisense errors, there is no need to use XLabs or whatever. Attached is a fairly vanilla solution which exhibits this behaviour which I use for the above test case. Load and do exactly as instructed above. The errors will occur.Wednesday, August 12, 2015 8:32 PM - User80438 posted @BrendanZagaeski Hanging appears to occur whenever a file transaction occurs, saving, updating, adding etc.Wednesday, August 12, 2015 8:42 PM - User80438 posted Actually, I think that is wrong cos I opened my solution, opened a xaml file and VS hung, again!Wednesday, August 12, 2015 8:55 PM - User13824 posted I have updated the info about the hang/crash in the first post in the thread based on some new information I received from the Xamarin developers.Wednesday, August 12, 2015 9:11 PM - User80438 posted @BrendanZagaeski Have you managed to reproduce the crash at all. I am now at the point where I now have to revert back to VS2013 and refactor all the code that I'd updated to C#6. I have now given up on 2015 as I can not develop with it. This is going to be a bit painful but at least I can progress to release, after which I very much doubt I will ever use Xamarin again. It has to be the worst development experience I have encountered. Way too many problems for the price. We are developers, not Testers!Wednesday, August 12, 2015 9:35 PM - User13824 posted Did you check the link to the upstream bug in the update to the first post that I just mentioned? Or see the same link in the latest comments on the bug report? This is a bug filed by the Xamarin.Forms team against Visual Studio 2015 RTM itself that includes a test case that reproduces a crash 100% of the time. For now I'd recommend contacting the Support Team via email if you have additional points (outside of specific new bugs seen in VS 2015) you'd like to discuss since I'm just one member of the support team (not anyone in charge of release strategy), and my available time to send replies on the forums is limited. See my comment on the Cycle 5 – Baseline release thread from April 29 for the extent of my knowledge about Xamarin's efforts to mitigate the number of regressions in the future. EDIT: Added a little extra clarification. Feel free to continue to report any new issues you see in VS 2015 compared to VS 2013 in this thread.Wednesday, August 12, 2015 10:04 PM - User138433 posted Voted as important. I confirm that I can reproduce it every time as well. I encourage everyone who's affected by this to vote it as important to put more pressure on Microsoft.Thursday, August 13, 2015 12:56 AM - User97337 posted I'm seeing a couple of problems with Visual Studio 2015 Enterprise, with the version of Xamarin that is installed by the VS installer (3.11.816) The first is a 100% reproducible lock up when trying to add a layout to \Resources\layout for Android. The second is that I cannot package any application, build is successful but when I try to create the package I get an error complaining of syntax errors in \obj\Debug\android\src\hubb\native\app\android\R.java. Processing: obj\Debug\res\layout\main.xml Processing: obj\Debug\res\values\strings.xml C:\Program Files (x86)\Java\jdk1.7.0_55\bin\javac.exe -J-Dfile.encoding=UTF8 -d obj\Debug\android\bin\classes -classpath "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\MonoAndroid\v5.0\mono.android.jar" -bootclasspath "C:\Program Files (x86)\Android\android-sdk\platforms\android-21\android.jar" -encoding UTF-8 "@C:\Users\nigel\AppData\Local\Temp\tmp2A95.tmp" error: expected package hubb.native.app.android; error: class, interface, or enum expected package hubb.native.app.android; error: class, interface, or enum expected package hubb.native.app.android; Build FAILED.Thursday, August 13, 2015 3:49 AM - User13824 posted @NigelSampson, The first is a 100% reproducible lock up when trying to add a layout to \Resources\layout for Android. That is Bug 32845, which unfortunately affects all versions of Visual Studio. The second is that I cannot package any application Thanks for the report. I have filed the preliminary description of the problem on Bugzilla for tracking and visibility to the engineers: Bug 33044. I included several questions about additional information to gather, if you get a chance.Thursday, August 13, 2015 4:17 AM - User80438 posted @BrendanZagaeski I have removed all copies of the metadata files Xamarin.Forms.Xaml.Design.dll and Xamarin.Forms.Core.Design.dll from the packages folder and still the hanging/crashing persists. This seems to make no difference at all other than bringing back the blue underlining throughout my Xaml. All I did was edit some stuff, then tried to open a xaml file and BOOM again, Visual Studio hangs!Thursday, August 13, 2015 6:51 AM - User80438 posted And crashed again trying to add a file. Reverting to 2013. I've had enough of this.Thursday, August 13, 2015 7:10 AM - User96360 posted I just up voted the MS bug I see only 3 other votes. Please up vote!Thursday, August 13, 2015 2:42 PM - User80438 posted +1 voteThursday, August 13, 2015 3:19 PM - User47183 posted Just voted!Thursday, August 13, 2015 3:23 PM - User75120 posted Voted (although my vs2015 won't crash with the test project attached to the bug - but there's no intellisense either - go figure!)Thursday, August 13, 2015 4:48 PM - User55830 posted My goodness, will this ever be solved :open_mouth: The blue lines come and go... I don't even care anymore... I'm programming @ ChuckNorris level, like all of you here. Why do we need IS and just-in-time code analysis? remember when you had to use simple text editors like VIM? sarcasm OFF ;)Thursday, August 13, 2015 11:49 PM - User115187 posted This may or may not help.. I also had the problem with VS2015 and XF 1.4 Intellisense in a PCL with iOS & Android projects. My error list had an error for every code-behind reference I made to an XAML element. XAML had blue underlines for every element as well. Code-behind was just a sea of red. However building and running the project still worked. When I initially upgraded to XF 1.5pre1 I couldn't open an XAML file without VS2015 crashing. I gave up and went back to XF1.4 and the result was as above. This morning I removed Xamarin completely from my system, from the registry, rebooted and re-installed it from scratch. Same problem with XF 1.4 despite doing the workarounds at fixing it listed in the first post of this thread. I upgraded to XF 1.5pre1 just now and still the same problem with Intellisense. I have an XAML file open already but if I open a new XAML file, VS still crashes. I have no explanation for this, but while I had that one XAML file opened, I just hit save and all my red and blue lines disappeared and Intellisense returned across the entire project. Now when I open the project I get code behind errors, my error list full of issues and Intellisense is gone, but once I save that XAML file, no more errors. I still can't open a new XAML file though without VS crashing.Friday, August 14, 2015 12:50 AM - User115187 posted As mentioned above, if I had an existing XAML file open - there was no issues, as soon as I opened another XAML file, VS crashed again. I removed the Design folder from the packages\Xamarin.Forms.1.5.0.6396-pre1\lib* directory and was now able to open XAML files but no Intellisense. I created the Design folder in each of those directories and only added in Xamarin.Forms.Xaml.Design.dll. Everything now seems to be working perfectly fine as soon as I save any XAML file after opening the solution <-- this seems to regenerate whichever files are required? No error list, no code behind errors, no blue lines in XAML, I can open XAML files successfully, Intellisense is back in both XAML & code behind. @jonathanyates @ErosStein @Marabunta @BrendanZagaeskiFriday, August 14, 2015 1:46 AM - User97337 posted @BrendanZagaeski Are there any known workarounds for 32845? Given that you said it affects all versions of Visual Studio it sounds like I should given up on any Android development till this has been resolved?Friday, August 14, 2015 9:49 AM - User80438 posted The best workaround for this is to stop using VS 2015 and revert to 2013. I have just done this, although I had to down grade some c#6 code, and all is working fine. In fact it is a joy to be working again without any of these issues what so ever. No blue lines in the xaml, not red lines in my code, xaml intellisense works perfectly and best of all it doesn't crash on me at all. VS 2015 seems a total disaster to me.Friday, August 14, 2015 11:59 AM - User80438 posted Not only that but 2013 just feels so much quicker and more responsive to, and that's with R# installed. The R# Xaml intellisense is excellent.Friday, August 14, 2015 12:19 PM - User55830 posted @jonathanyates Well, I do agree R# is a must, I've read the VS15 would kill it, but no way... But, I don't think our problem relates to VS15, it seems to be Xamarin.VS. I'm using VS15 for my MVC projects and it works just fine. It's snappier than VS13. But I could be wrong, though...Friday, August 14, 2015 12:43 PM - User80438 posted @ErosStein Yeah I meant in the context of Xamarin. For other stuff I'm sure it's probably fine. If only I had the time for other stuff.Friday, August 14, 2015 12:53 PM - User55830 posted @jonathanyates gotcha! and agree. When it's related to Xamarin, it's become a pain...Friday, August 14, 2015 1:04 PM - User48769 posted Here's a definite solution in the meantime: - Download devenv.pkgundef to C:\Program Files (x86)\Microsoft Visual Studio [VSVERSION]\Common7\IDE\ - Run devenv /updateconfigurationfrom a developer command prompt (for the relevant VS versions). You can read more about the issue and how we're addressing in the short and longer term at, August 14, 2015 6:56 PM - User115187 posted @ErosStein @jonathanyates After a full work day yesterday running VS2015 & XF 1.5 with the one DLL file in the Design folders, I've had no problems. XAML & code-behind intellisense work and no crashes in any VS operation. If you still had VS2015 installed it would be interesting to see if only having Xamarin.Forms.Xaml.Design.dllin the Design folders helped. FWIW I've been using VS2015 for some older MVC projects, class libraries, Windows services, etc and it's been performing really well. It feels faster than VS2013 did and the .NET 6 code changes are really nice. Now if only I could get my head around the new MVC project structure! :smile:Friday, August 14, 2015 10:35 PM - User75120 posted @DavidRedmond have tried your suggestion but it makes no difference on my setup (VS2015 ent, 3.11.836, 1.5.0.6396, no R#), thanks anyways. However I don't get lockups (probably because I don't get any intellisense either) so I'm reluctant to fiddle too much in case I change something I shouldn't and start getting hangs, as I don't want to have to remove the c#6 stuff and revert to 2013 as @jonathanyates has. I'm sure the guys at Xamarin know how serious this is for their reputation and, with help from their contacts at Microsoft, will get out a final, stable release soon (please :smile: )Saturday, August 15, 2015 8:04 AM - User80438 posted @kzu @BrendanZagaeski Tried the suggested workaround. Didn't work. After deleting static.14.pkgdef and running devenv /updateconfiguration I opened my solution in VS2015, then opened a cs and then a Xaml file. I then added 2 blank lines to my xaml, then clicked back to the cs file and VS hung.Tuesday, August 18, 2015 7:03 AM - User80438 posted Btw, I got a dialog during the crash. Here is a screenshot and the output. Problem signature: Problem Event Name: APPCRASH Application Name: devenv.exe Application Version: 14.0.23107.0 Application Timestamp: 559b7ead Fault Module Name: clr.dll Fault Module Version: 4.6.81.0 Fault Module Timestamp: 5584e56f Exception Code: c00000fd Exception Offset: 0001ce78 OS Version: 6.3.9600.2.0.0.256.48 Locale ID: 1033 Additional Information 1: 0eef Additional Information 2: 0eefc76a114fd5616afd505338fe00f4 Additional Information 3: 1164 Additional Information 4: 11643620532de8af86b5269ab723c3c0 Read our privacy statement online: If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-GB\erofflps.txtTuesday, August 18, 2015 7:09 AM - User48769 posted Did it also crash after a restart?Tuesday, August 18, 2015 11:03 AM - User80438 posted @kzu YesTuesday, August 18, 2015 7:55 PM - User48769 posted One way to tell the XML-based editor vs XAML-based (crashy) one, is to note the color of the Could you confirm that it's the blue directive, and that doing devenv /setup doesn't remove it either? :(Tuesday, August 18, 2015 10:02 PM - User80438 posted @kzu It's blue. Tried the whole thing again. Didn't work. Running devenv /setup within Developer Command Prompt for VS2015 I get the following message: 'The operation could not be completed. The requested operation requires elevation' See screenshots. Am I doing something wrong?Wednesday, August 19, 2015 7:16 PM - User80438 posted @kzu Btw, I'm running on MBP with Parallels and W8.1Wednesday, August 19, 2015 7:17 PM - User47183 posted @jonathanyates make sure you run devenv.exe /Setup from an elevated (Admin) command prompt, otherwise it won't work.Wednesday, August 19, 2015 7:33 PM - User80438 posted @kzu @VGA Ok, so I ran both devenv.exe /updateconfiguration and devenv.exe /setup running command prompt as administrator (see screenshot). I then open my solution in VS2015, and then opened a xaml file and instantly it hangs. I've not had this before, it now instantly hangs when I open a xaml file. Also theWednesday, August 19, 2015 10:23 PM - User139040 posted Note this is happening in VS2013 as well for me.Thursday, August 20, 2015 2:21 PM - User139040 posted OK I got it to let me edit xaml in 2015 using the /setup command unfortunately it just locks up after about 5 mins again. I am dead in the water now. Programming at a halt till someone gets off their ass and responds to this from Xamarin. EDIT: And if I sounded pissed, I AM!Thursday, August 20, 2015 2:27 PM - User80438 posted DittoThursday, August 20, 2015 2:31 PM - User139040 posted I am going to recommend that Xamarin needs to hire a test and qa team to work out these issues in house rather than having their PAYING customers test this. This is getting ridiculous lately. The old motto "If it compiles, ship it" does not work any longer! Obviously if only ONE person had even tried to use this, it would have been found to be completely broken. I could understand if this was open source and free, but when Xamarin asks for this much money, they could ATLEAST hire one tester.Thursday, August 20, 2015 3:10 PM - User48769 posted Ok, @jonathanyates I've updated the instructions, since VS isn't refreshing the already-created registry entries unless you uninstall the MSI and install the one with the true fix (comming soon to the Alpha channel): - Download devenv.pkgundef to C:\Program Files (x86)\Microsoft Visual Studio [VSVERSION]\Common7\IDE\ - Run devenv /updateconfigurationfrom a developer command prompt (for the relevant VS versions). Please give it a try. It worked on a clean VS with just Xamarin stable bits.Thursday, August 20, 2015 3:22 PM - User80438 posted @kzu Thanks. I'll give it a try when I'm at my mbp. @BradChase.2011 Could you give this a try and see if it works for you?Thursday, August 20, 2015 3:44 PM - User139040 posted If I right click on the xaml file and do a "Open With..." I can set it to XML instead and it will open without crashing right now. I have to change it each time I open VS and then switch it back immediately before editing the XAML and it will work. Ill report back if I have any issues after some coding.Thursday, August 20, 2015 3:46 PM - User139040 posted @jonathanyates Right now the solution I just posted is still working and I definitely dont want to mess it up as we are actually coding again. If that fails then yea Ill try it out, sorry :(.Thursday, August 20, 2015 3:54 PM - User80438 posted @kzu Tried again renamed to pkgundef. Doesn't work. VS2015 still locks up as soon as I open a xaml file and header is still blue.Thursday, August 20, 2015 4:49 PM - User139040 posted @jonathanyates Did you try what I posted? EDIT: Right click on the file and choose Open With... XML and then set as default. I have to change the default each time I open VS but its working so far.Thursday, August 20, 2015 4:52 PM - User48769 posted @jonathanyates updated the instructions. Was missing one step. @BradChase.2011 with that workaround, that shouldn't be needed.Thursday, August 20, 2015 5:27 PM - User139040 posted @kzu, for somereason after recovering my static.14 file, it is now gone. Do I need it to repro those steps or is it ok if its missing?Thursday, August 20, 2015 5:34 PM - User48769 posted Heya, found an even simpler way, just two steps :)Thursday, August 20, 2015 5:58 PM - User139040 posted @kzu, Now your talkin! Dont leave us hanging :) EDIT: NM I see you updated the old post. Trying now.Thursday, August 20, 2015 6:05 PM - User139040 posted @jonathanyates So far @kzu 's latest fix works for disabling intellisense and the errors. I will continue to code and Ill report back if it breaksdown again at any point. Thanks much all!Thursday, August 20, 2015 6:30 PM - User80438 posted @kzu Hurray!!!!! That worked. Xaml file does not crash VS when opening and header is NOT blue :) So far it has not crashed and as I have R# I get all the intellisense as well. Does make me wonder why JetBrains can get it so right and Xamarin's attempt is a big Fail!! Btw, the Red code Intellisense errors still occur as soon as I edit a Xaml file, very irritating. See attached screenshot.Thursday, August 20, 2015 9:02 PM - User138433 posted @jonathanyates R#'s Intellisense is a completely different thing to Xamarin's. R# has its own Intellisense implementation whereas Xamarin is just extending Visual Studio's.Thursday, August 20, 2015 9:23 PM - User48769 posted Thanks @BradChase.2011 and @jonathanyates for your patience. Indeed @jonathanyates, as @opiants noted, we aren't doing any XAML intellisense in VS2015, since VS already has one. R# had the luxury of working for years and years on this stuff, extending VS and building their own infrastructure to make it flexible and powerful. They didn't have to build an IDE (like MS, whose built-in XAML/WPF intellisense isn't remotely as comprehensive as R#'s) or build an entire stack for .NET on mobile like Xamarin in a short few years. Xamarin Forms was announced a little over a year ago only! Bottom-line: we're making every effort to make the platform great. The timing with VS2015 release was extremely tight and there were external dependencies across many products that made it very hard to test it extensively sufficiently ahead of time. But we're learning from our mistakes: if it ain't ready, we won't ship it, no matter how eager we are to get stuff into people's hands. In due time, we hope we'll be on-par with companies that are dedicated exclusively to doing some of this stuff (as long as it applies to mobile development productivity), but for now, we have to pick our battles ;-)Thursday, August 20, 2015 10:31 PM - User80438 posted @kzu That's fair comment. You're all doing a great job. Any eta on the true fix release? Also What's the score with the red underlining errors for any reference to something in another PCL ? I still won't use VS2015 because of this.Friday, August 21, 2015 7:54 AM - User139040 posted @kzu, I fully understand. We would all like to get our latest and greatest out as soon as possible, we want our babies out in the world! Growing pains and timelines arent that much fun either. I still think ramping up internal testing could help out a ton to stem alot of this off and keep more of these issues internal rather than on customers. I hope you guys figure it out and I hope things get better, we all want this :). On a side note for some reason my VS2015 has reverted back and now it locks up every time. I will for now just go back to setting my default to XML as I know that sticks. Second issue is now all my C# has red lines underneath just about everything. Third issue is my solution compiles just fine even if the xaml is completely screwed up. I had worked all day trying to figure out why the hell anytime I added a grid column the xaml would blank out. Well found out that I had typed in ColumDefinition ARGG! I wasted almost an entire day on it. @jonathanyates, Are you seeing any of these issues? So in conclusion, this is extremely unproductive. I really wish there was a better way than what we currently have to work with. Maybe there is?Friday, August 21, 2015 4:11 PM - User80438 posted @BradChase.2011 Yes I am getting red underline errors all over my c# where ever I am referencing something from another PCL although it builds fine. For that reason alone I have reverted to 2013 as I can't tell what is or what isn't a real error. I haven't spent enough time in 2015 since the workaround worked to know if it's stable. Are you using R#? I strongly recommend anyone to use it at this point as it's intellisense for XF is excellent. Well worth the extra pennies. Would probably have resolved your ColumDefinition early on.Friday, August 21, 2015 9:48 PM - User55830 posted @jonathanyates Me again :) I thought those red lines @C# would've been solved by now =/ Well...Tuesday, August 25, 2015 1:45 AM - User75120 posted @ErosStein I wouldn't be surprised if they've already fixed it, but are taking a bit longer this time to check everything else out so they can put out a stable release that sorts intellisense and the red lines once and for all and, most importantly, doesn't break anything else. Hope they do it soon though, 'cause I don't want to install R# again as it bloats the whole thing and slows the editor down (admittedly the R# intellisense is good). I can work with the red lines for now by doing all the XAML work in one run of VS2015, then restarting and leaving XAML alone and doing code only so red lines for PCL refs don't appear. Not ideal but reasonably productive.Tuesday, August 25, 2015 7:03 AM - User55830 posted @Marabunta could be, I guess we just have to wait =/Tuesday, August 25, 2015 12:33 PM - User50849 posted The PCL issue is an msbuild/Roslyn issue (,,). The issue is marked as closed because it's scheduled to be resolved in msbuild (if I understood correctly,), but still affecting released Visual Studio. We did include a workaround in our latest, but it may not have worked correctly. What in my experience absolutely works until MS fixes it is to include this in your csproj (in the first section, before the property group with the Debug condition): <CheckForSystemRuntimeDependency>true</CheckForSystemRuntimeDependency> Hope this helps, jojTuesday, August 25, 2015 2:17 PM - User23660 posted Just to be sure: if I downgrade to VS2013 the problems with intellisense and the red underlining disappears?Friday, August 28, 2015 1:33 PM - User13824 posted There are some IntelliSense issues that are caused by VS 2015 and others that are caused by the Xamarin.Forms NuGet package itself. At least 2 of the issues caused by the Xamarin.Forms NuGet package affect both VS 2013 and VS 2015. See the first post in the thread for additional details.Friday, August 28, 2015 4:50 PM - User26025 posted Is there any news about these issues ? I cannot add an existing file to a XF Droid project. I tried 'Add existing item', drag&drop, and 'Include in project'. VS2015 Enterprise 14.0.23107.0 D14REL VS Hangs forever I tried adding [CheckForSystemRuntimeDependency>true[/CheckForSystemRuntimeDependency> and I downloaded that file and did devenv /updateconfiguration The problems remain.Monday, August 31, 2015 2:58 PM - User13824 posted I cannot add an existing file to a XF Droid project. ... VS Hangs forever There are various possible causes of hangs when adding items to Android projects. So far none of them are known to be specific to VS 2015. It looks like you have a Business license or higher, so the most direct way forward at this point would be to contact Xamarin Support via email to have them take a look at the specific situation on your system.Thursday, September 10, 2015 11:32 PM - User13824 posted I will close this discussion now for simplicity because today's:, September 11, 2015 5:10 PM
https://social.msdn.microsoft.com/Forums/en-US/7ba46e93-a2ee-47f8-9930-ec5c18335c94/upstream-release-microsoft-visual-studio-2015-rtm?forum=xamarinvisualstudio
CC-MAIN-2021-43
refinedweb
7,224
64.81
Control.Concurrent.MState Contents Description MState: A consistent State monad for concurrent applications. Synopsis - data MState t m a - runMState :: Forkable m => MState t m a -> t -> m (a, t) - evalMState :: Forkable m => MState t m a -> t -> m a - execMState :: Forkable m => MState t m a -> t -> m t - mapMState :: (MonadIO m, MonadIO n) => (m (a, t) -> n (b, t)) -> MState t m a -> MState t n b - withMState :: MonadIO m => (t -> t) -> MState t m a -> MState t m a - class MonadIO m => Forkable m where - forkM :: Forkable m => MState t m () -> MState t m ThreadId The MState Monad The MState is an abstract data definition for a State monad which can be used in concurrent applications. It can be accessed with evalMState and execMState. To start a new state thread use forkM. Instances Arguments Evaluate the MState monad with the given initial state, throwing away the final state stored in the MVar. Arguments Execute the MState monad with a given initial state. Returns the value of the final state. mapMState :: (MonadIO m, MonadIO n) => (m (a, t) -> n (b, t)) -> MState t m a -> MState t n bSource Map a stateful computation from one (return value, state) pair to another. See Control.Monad.State.Lazy.mapState for more information. withMState :: MonadIO m => (t -> t) -> MState t m a -> MState t m aSource Apply this function to this state and return the resulting state. Concurrency class MonadIO m => Forkable m whereSource The class which is needed to start new threads in the MState monad. Don't confuse this with forkM which should be used to fork new threads! Instances Arguments Start a new thread, using forkIO. The main process will wait for all child processes to finish. Example Example usage: import Control.Concurrent import Control.Concurrent.MState import Control.Monad.State type MyState a = MState Int IO a -- Expected state value: 2 main = print =<< execMState incTwice 0 incTwice :: MyState () incTwice = do -- First inc inc -- This thread should get killed before it can "inc" our state: kill =<< forkM incDelayed -- This thread should "inc" our state forkM incDelayed return () where inc = get >>= put . (+1) kill = liftIO . killThread incDelayed = do liftIO $ threadDelay 2000000 inc
http://hackage.haskell.org/package/mstate-0.1/docs/Control-Concurrent-MState.html
CC-MAIN-2018-09
refinedweb
363
70.53
On Tue, Apr 14, 2009 at 01:21:09PM -0400, Perrin Harkins wrote: > On Tue, Apr 14, 2009 at 12:48 PM, Roberto C. Sánchez > <roberto@connexer.com> wrote: > > As far as loading the module, I have tried: > > > > - "PerlModule Example::Image" in .htaccess > > - "use Example::Image;" in the main HTML::Mason component > > Either of those should be ok. If you decide to export the sub later, > you'd need to call it from Mason. > I'm not sure what you mean by this. > > As far as calling the function: > > > > - get_image_data('masthead.png', $urls{'img'}, $dirs{'img'}); > > - Example::Image::get_image_data('masthead.png', $urls{'img'}, $dirs{'img'}); > > Ok, I don't see how the first one could work with the code you've > shown. You're defining the sub in a package, so you either have to > export it or call it with the fully-qualified path, like you do in the > second case. > I have used the first syntax and have not noticed a different in the prevalence of the "Undefined subroutine" error between the two. > If you're not familiar with this stuff, read up on exporting in the > perlmod man page or your favorite Perl book. > > If you use the second syntax everywhere, I think this problem will go > away. The issue is with the sub being available in your current > namespace, not with loading the module. > I am currently using the latter call everywhere and it is still generating the "Undefined subroutine" error. > - Perrin Regards, -Roberto -- Roberto C. Sánchez
http://mail-archives.apache.org/mod_mbox/perl-modperl/200904.mbox/%3C20090414173422.GJ12408@connexer.com%3E
CC-MAIN-2019-35
refinedweb
251
73.07
The following error started appearing since running pkg upgrade this morning. The qiime-test script had been working for the past couple weeks and there have been no major changes to the qiime port lately. I can still build and run fortran programs using -Wl,-rpath,/usr/local/lib/gcc49. This looks to me like a numpy link error, maybe failing to use rpath to load the correct libgcc? Thanks, Jason FreeBSD unixdev.ceas bacon ~ 403: ./qiime-test qiime-tests/ Traceback (most recent call last): File "./all_tests.py", line 10, in <module> from qiime.util import (parse_command_line_parameters, get_options_lookup, File "/usr/local/lib/python2.7/site-packages/qiime/util.py", line 35, in <module> from numpy import (array, zeros, shape, vstack, ndarray, asarray,49/libgfortran.so.3 not found Duplicate of bug #207750 A workaround is setting LD_LIBRARY_PATH *** This bug has been marked as a duplicate of bug 207750 ***
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=217968
CC-MAIN-2020-50
refinedweb
148
60.01
On Sun, Jun 17, 2012 at 05:09:28PM +0200, Willy Tarreau wrote:> Hi Kay,> > I was failing to get any 3.5-rc[123] kernel to boot on my dockstar (armv5).> I finally found some time today to bisect it and found that the responsible> commit was :> > From 7ff9554bb578ba02166071d2d487b7fc7d860d62 Mon Sep 17 00:00:00 2001> From: Kay Sievers <kay@vrfy.org>> Date: Thu, 3 May 2012 02:29:13 +0200> Subject: [PATCH] printk: convert byte-buffer to variable-length record buffer> > The symptom is that the kernel loads and hangs during early boot without> displaying anything. My config had CONFIG_EARLY_PRINTK enabled so I tried> without it again just in case it would be related, but it desperately did> not change anything, the kernel still fails to boot.> > I have tried to revert printk changes on top of 3.5-rc3 and confirm that> now the kernel properly boots. Here's the list of what I reverted for> information :> > c313af145b9bc4fb8e8e0c83b8cfc10e1b894a50 printk() - isolate KERN_CONT users from ordinary complete lines> 3ce9a7c0ac28561567fadedf1a99272e4970f740 printk() - restore prefix/timestamp printing for multi-newline str> 1fce677971e29ceaa7c569741fa9c685a7b1052a printk: add stub for prepend_timestamp()> f8450fca6ecdea38b5a882fdf6cd097e3ec8651c printk: correctly align __log_buf> 649e6ee33f73ba1c4f2492c6de9aff2254b540cb printk() - restore timestamp printing at console output> 5c5d5ca51abd728c8de3be43ffd6bb00f977bfcd printk() - do not merge continuation lines of different threads> 7f3a781d6fd81e397c3928c9af33f1fc63232db6 printk - fix compilation for CONFIG_PRINTK=n> 5fc3249068c1ed87c6fd485f42ced24132405629 kmsg: use do_div() to divide 64bit integer> c4e00daaa96d3a0786f1f4fe6456281c60ef9a16 driver-core: extend dev_printk() to pass structured data> e11fea92e13fb91c50bacca799a6131c81929986 kmsg: export printk records to the /dev/kmsg interface> 7ff9554bb578ba02166071d2d487b7fc7d860d62 printk: convert byte-buffer to variable-length record buffer> > I understand that it will be hard to troubleshoot this with that little> information :-/> > I'm not posting the config not to pollute the list, but have it available> if needed. I haven't noticed anything seemingly related on the list, but> if you want me to test a patch or to provide more information, feel free> to suggest !> > I'm still checking if I can spot something.Try the patch below, which is in my set of patches to go to Linus soonand let me know if it works or not.thanks,greg k-hFrom 6ebb017de9d59a18c3ff9648270e8f6abaa93438 Mon Sep 17 00:00:00 2001From: Andrew Lunn <andrew@lunn.ch>Date: Tue, 5 Jun 2012 08:52:34 +0200Subject: printk: Fix alignment of buf causing crash on ARM EABICommit 7ff9554bb578ba02166071d2d487b7fc7d860d62, printk: convertbyte-buffer to variable-length record buffer, causes systems usingEABI to crash very early in the boot cycle. The first entry in structlog is a u64, which for EABI must be 8 byte aligned.Make use of __alignof__() so the compiler to decide the alignment, butallow it to be overridden using CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS,for systems which can perform unaligned access and want to savea few bytes of space.Tested on Orion5x and Kirkwood.Signed-off-by: Andrew Lunn <andrew@lunn.ch>Tested-by: Stephen Warren <swarren@wwwdotorg.org>Acked-by: Stephen Warren <swarren@wwwdotorg.org>Acked-by: Kay Sievers <kay@vrfy.org>Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>--- kernel/printk.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)diff --git a/kernel/printk.c b/kernel/printk.cindex 32462d2..f205c25 100644--- a/kernel/printk.c+++ b/kernel/printk.c@@ -227,10 +227,10 @@ static u32 clear_idx; #define LOG_LINE_MAX 1024 /* record buffer */-#if !defined(CONFIG_64BIT) || defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)+#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) #define LOG_ALIGN 4 #else-#define LOG_ALIGN 8+#define LOG_ALIGN __alignof__(struct log) #endif #define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT) static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);-- 1.7.10.2.565.gbd578b5
http://lkml.org/lkml/2012/6/17/52
CC-MAIN-2017-43
refinedweb
573
52.19
I want to initialize a numpy array of size (n,m) that can only contain zeros or ones. Furthermore, I want to later to np.bitwise_or with the array. For example, If I try: import numpy as np myArray = np.zeros([4,4]) myRow = myArray[1,] myCol = myArray[,1] np.bitwise_or(myRow, myCol) TypeError: ufunc 'bitwise_or' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' np.bitwise_or([0,0,0,0], [0,0,0,0]) By default, np.zeros will use a float dtype, and you can't perform the bitwise operations on floats. np.bitwise_or([0,0,0,0], [0,0,0,0]) acts on integers, which is why it works. If you pass an integer dtype instead when you construct myArray, it'll work too: In [9]: myArray = np.zeros([4,4], dtype=int) In [10]: myRow = myArray[1,:] In [11]: myCol = myArray[:,1] In [12]: np.bitwise_or(myRow, myCol) Out[12]: array([0, 0, 0, 0]) or we could call astype(int): In [14]: myArray = np.array([[1.0,0.0,1.0,0.0], [1.0,1.0,1.0,0.0]]) In [15]: np.bitwise_or(myArray[0].astype(int), myArray[1].astype(int)) Out[15]: array([1, 1, 1, 0]) That said, if you know the array is always going to contain only 0 or 1, you should consider a bool array instead: In [21]: myArray = myArray.astype(bool) In [22]: np.bitwise_or(myArray[0], myArray[1]) Out[22]: array([ True, True, True, False], dtype=bool)
https://codedump.io/share/ESsZ5KjWrVmm/1/npndarray-bitwiseor-operator-on-ndarrays-fails
CC-MAIN-2018-17
refinedweb
266
76.11
If you want users to type something at the keyboard, you should let them know first! The jargon for this is to give them a prompt: Instructions written to the screen, something like Console.Write("Enter your name: "); Then the user should respond. What the user types is automatically shown (echoed) in the terminal or console window. For a program to read what is typed, another function in the Console class is used: Console.ReadLine. Here the data for the function comes from a line typed at the keyboard by the user, so there is no need for a parameter between the parentheses: Console.ReadLine(). The resulting sequence of characters, typed before the user presses the Enter (Return) key, form the string value of the function. Syntactically in C#, when a function with a value is used, it is an expression in the program, and the expression evaluation is the value produced by the function. This is the same as in normal use of functions in math. With any function producing a value, the value is lost unless it is immediately used. Often this is done by immediately assigning the value to a variable like in string name; name = Console.ReadLine(); or in the shorter string name = Console.ReadLine(); Fine point: Notice that in most operating systems you can edit and correct your line before pressing the Return key. This is handy, but it means that the Return key must always be pressed to signal the end of the response. Hence a whole line must be read, and there is no function Console.Read(). Just for completeness we mention that you can read a raw single keystroke immediately (no editing beforehand). If you want to explore that later, see test_readkey/test_readkey.cs. You may well want to have the user supply you with numbers. There is a complication. Suppose you want to get numbers and add them. What happens with this code, in bad_sum/bad_sum.cs? using System; class BadSum { static void Main() { string s, t, sum; Console.Write ( "Enter an integer: "); s = Console.ReadLine(); Console.Write( "Enter another integer: "); t = Console.ReadLine(); sum = s + t; Console.WriteLine("They add up to " + sum); } } Here is a sample run: Enter an integer: 23 Enter another integer: 9 They add up to 239 C# has a type for everything and Console.ReadLine() gives you a string. Adding strings with + is not the same as adding numbers! We must explicitly convert the strings to the proper kind of number. There are functions to do that: int.Parse takes a string parameter that should be the characters in an int, like “123” or “-25”, and produces the corresponding int value, like 123 or -25. In good_sum/good_sum.cs, we changed the names to emphasize the type conversions: using System; class GoodSum { static void Main() { Console.Write ( "Enter an integer: "); string xString = Console.ReadLine(); int x = int.Parse(xString); Console.Write( "Enter another integer: "); string yString = Console.ReadLine(); int y = int.Parse(yString); int sum = x + y; Console.WriteLine("They add up to " + sum); } } Notice that the values calculated by int.Parse for the strings xString and yString are immediately remembered in assignment statements. Be careful of the distinction here. The int.Parse function does not magically change the variable xString into an int: the string xString is unchanged, but the corresponding int value is calculated, and gets assigned to an int variable, x. Note that this would not work if the string represents the wrong kind of number, but there is an alternative: csharp> string s = "34.5"; csharp> int.Parse(s); System.FormatException: Input string was not in the correct format .... csharp> double.Parse(s); 34.5 We omitted the long tail of the error message. There is no decimal point in an int. You see the fix with the corresponding function that returns a double. We have started to refer to whole programs that we have written. You will want to have your own copies to test and modify for related work. All of our examples are set up in a Xamarin Studio solution in our zip file that you can download. The zip file and the folder it unzips to have the long name introcs-csharp-examples-master. We suggest you rename the folder simply examples to match the name of the Xamarin Studio solution it contains. There are various way to access our files. We have one main convention in naming our projects: Most projects are examples of full, functional programs to run. Others are intended to be copied by you as stubs for your solutions, for you to elaborate. These project folders all end with “_stub”, like string_manip_stub. Even the stubs can be compiled immediately, though they may not accomplish anything. A further convention is using “chunk” comments inside example source files: To keep the book and the source code in sync, our Sphinx building routine directly uses excerpts from the exact source code that is in the examples download. We have to mark the limits of the excerpts that we want for the book. Our convention is to have a comment referring to the beginning or the end of an excerpt “chunk”. Hence a comment including “chunk” in a source file is not intended as commentary on the code, but merely a marker for automatically regenerating a revision of the book. If you are just starting Xamarin Studio, and you have not run our solution before: introcs-csharp-examples-master. We will assume you reduce the name of the folder to the much shorter examples. examplessolution. examples/examples.sln. The sln is short for solution. The next time you come to the Welcome screen, our examples should be listed in the Recent Projects, and you can click to open it directly. We strongly encourage you not to modify our examples in place, if you want to keep the changes, because we will make additions and modifications to our source download, and we would not want you to overwrite any of your modified files after downloading a later version of the examples. If you do want to alter our code, we suggest you copy it to a project in your solution (“work”, discussed in the first lab in the Steps). When creating modifications of previous examples, like the exercise below, you can often save time by copying in the related example, particularly avoiding retyping the standard boiler plate code at the top. However, when you are first learning and getting used to new syntax, typing reinforces learning. Perhaps after looking over the related example, you are encouraged to write your version from scratch, to get used to all the parts of the code. Later, when you can produce such text automatically, feel free to switch to just copying from a place that you had it before. Write a program that prompts the user for a name (like Elliot) and a time (like 10AM) and prints out a sentence like: Elliot has an interview at 10AM. If you are having a hard time and want a further example, or want to compare to your work, see our solution, interview/interview.cs. Write a version, add3.cs, that prompts the user for three numbers, not necessarily integers, and lists all three, and their sum, in similar format to good_sum/good_sum.cs. Write a program, quotient.cs, that prompts the user for two integers, and then prints them out in a sentence with an integer division problem like The quotient of 14 and 3 is 4 with a remainder of 2. Review Division and Remainders if you forget the integer division or remainder operator.
http://books.cs.luc.edu/introcs-csharp/data/io.html
CC-MAIN-2019-09
refinedweb
1,272
64.1
Created on 2013-02-19 14:31 by alanh, last changed 2013-05-11 15:16 by pitrou. This issue is now closed. On m68k this assert triggers in Python/unicodeobject.c:4658 because m68k architectures can align on 16bit boundaries. assert(!((size_t) dest & LONG_PTR_MASK)); I'm not sure of the wider implications in Python as to how to rectify. We don't have a m86k test box and I don't think we support this platform either. I'm willing to help fix, but there are m68k emulators out there which I guess suffice for a test box. TBH, I don't think we should support this platform officially. Is that processor still in use (e.g. in embedded systems)? Wikipedia says "derivative processors are still widely used in embedded applications." in m68k article. Freescale (ex-Motorola) ColdFire architecture is alive and doing well. And I know people still running Motorola 68040 Apple machines. The problem is not a m68k emulator, but to build all the needed environment on it: OS, compilers, network, etc. If the original submitter is willing to help... Which branch or version of Python is that? As mention in the versions. It is Python 3.3.0. Ok, could you post the whole C stack trace? (using e.g. gdb) By the way, this is not about the alignment of m68k architectures: x86 can align on a byte boundary and doesn't trigger the assert. It must be about pointer alignment, because that's the whole point of the ASSERT. As for the backtrace, the gdb support on the platform isn't great yet, but here it is.... Breakpoint 1, ascii_decode (start=0x30c5b04 "__len__", end=0x30c5b0b "", dest=0x1a6d592 "�������") at Objects/unicodeobject.c:4658 4658 assert(!((size_t) dest & LONG_PTR_MASK)); (gdb) bt #0 ascii_decode (start=0x30c5b04 "__len__", end=0x30c5b0b "", dest=0x1a6d592 "�������") at Objects/unicodeobject.c:4658 #1 0x030595a6 in .L4737 () at Objects/unicodeobject.c:4741 #2 0x03044dba in .L2648 () at Objects/unicodeobject.c:1806 #3 0x03096f54 in PyUnicode_InternFromString (cp=0x30c5b04 "__len__") at Objects/unicodeobject.c:14284 #4 0x030c69f6 in .L1892 () at Objects/typeobject.c:6090 #5 0x030c6dc8 in add_operators (type=0x33507c8) at Objects/typeobject.c:6244 #6 0x030bfc66 in .L1249 () at Objects/typeobject.c:4182 #7 0x030bfbae in .L1241 () at Objects/typeobject.c:4146 #8 0x02ff62a8 in _Py_ReadyTypes () at Objects/object.c:1576 #9 0x0300e688 in .L60 () at Python/pythonrun.c:301 #10 0x0300ea5c in Py_InitializeEx (install_sigs=1) at Python/pythonrun.c:401 #11 0x0300ea6e in Py_Initialize () at Python/pythonrun.c:407 #12 0x02ff9fca in .L135 () at Modules/main.c:657 #13 0x02ff24be in .L6 () at ./Modules/python.c:90 #14 0x03329d5a in .L76 () #15 0x0331731e in .L69 () What will happened if you just remove this line? > It must be about pointer alignment, because that's the whole point of > the ASSERT. Indeed, it's about pointer alignment, but it's not about the CPU. It's about the compiler (or the platform's C ABI). Apparently the compiler doesn't align struct fields to natural boundaries like most other compilers do, which means the size of the PyASCIIObject struct (in unicodeobject.h) ends up not being a multiple of 4, which in turn means the "dest" pointer (allocated from the end of that structure) is not 4 byte-aligned either. However, you can probably safely remove the assert(), since it is there to warn about misalignment on platforms which *are* alignment-sensitive. There is another assert() of the same kind in unicodeobject.c, which you can remove too. It would be nice if the C source could be improved here, but it's not obvious which rule to enforce exactly. We want to be lenient if the misalignment is a product of the compiler's alignment rules, but not if it's a mistake on our part. Which compiler is it? It's GCC 4.6.3. GCC has the -malign-int but mentions this isn't in best interest of m68k ABI compatibility if it's used. Oh, and as for pointer alignment, I probably just wasn't clear in the initial report. But we agree on m68k C ABI. Ok, so we could simply disable the assert on m68k, if you can confirm it works. Do you want to provide a patch? I don't know what the preprocessor conditional should look like. Perhaps we should disable not only assert, but all optimized code inside #if SIZEOF_LONG <= SIZEOF_VOID_P / #endif block. Benchmarks needed to measure either unaligned writing speedup or slowdown decoding. @skrah: “I don't think we should support this platform officially.” Please don’t break what works. We have almost complete (about three quarters of roughly 10'000 source packages) Debian unstable working on m68k, with several versions of Python in use. Thanks! @pitrou: “x86 can align on a byte boundary and doesn't trigger the assert.” That’s because most compilers on i386 do “natural alignment” – in fact, most compilers on most platforms do. “natural alignment” means 4-byte quantities get aligned on 4-byte boundaries, 8-byte quantities on 8-byte boundaries, etc. On m68k, the lowest alignment for almost all larger-than-a-byte data types is 2 byte, even though that one is strict. This means that, for example, an int is often only 2-byte aligned. @alanh: “GCC has the -malign-int but mentions this isn't in best interest of m68k ABI compatibility if it's used.” Indeed, because that would break the C and kernel/syscall ABI. @all: what _precisely_ is the assertion needed to check for? @pitrou: “since it is there to warn about misalignment on platforms which *are* alignment-sensitive” Well, m68k is. Just the numbers differ. (It also has int, long and pointer at 32 bit.). I can test any trees on my VMs, but that takes a day or two, of course, at 50-200 BogoMIPS. You can do that yourself by running a VM as well, the Debian Wiki has instructions, if anyone is interested. Otherwise, it’ll just get tested as soon as it hits Debian (unstable, usually we don’t build experimental packages except on explicit request by the packagers, due to lack of time / horsepower)… Thanks doko for linking this issue! >. It is not required since, as you say, m68k only requires 2-byte alignment. However, as Serhiy said, it may (or may not) be better for performance. At this point, only people with access to a m68k machine or VM (and motivated enough :-)) can do the necessary tests and propose a way forward. (but, performance notwithstanding, fixing the build should be a simple matter of silencing the assert with an appropriate #if line) mirabilos <report@bugs.python.org> wrote: > Please dont break what works. We have almost complete (about three quarters > of roughly 10'000 source packages) Debian unstable working on m68k, with > several versions of Python in use. Thanks! Are you saying that the complete test suite works on m68k except for this assert? @pitrou: As for performance, 2-byte and 4-byte are the same on m68k, given that they usually have RAM which doesn’t benefit from that kind of alignment, and systems that are structured accordingly. The “best” cpp define I don’t know, but my system defines __m68k__ and Alan would probably be able to say whether this is defined on ColdFire, too. @skrah: No, I specifically did not say that ☺ But it works pretty damn well.')" Serhiy Storchaka dixit: ')" > >---------- > >_______________________________________ >Python tracker <report@bugs.python.org> ><> >_______________________________________ Thanks, will actually do that, just not before the weekend, dayjob’s keeping me busy, and I need to be careful to not burn out myself in the evening too. Which tree should I build? A release (if so, which)? Or some CVS branch? Do note we clock at roughly 1000 pystones for the fastest machines… yes 1000 not 10000. > Which tree should I build? A release (if so, which)? Or > some CVS branch? It doesn't matter. > Do note we clock at roughly 1000 pystones for the fastest > machines… yes 1000 not 10000. It doesn't matter. Only relative values have a meaning. What is faster and on how many percent. And of course run the tests on non-debug builds. Serhiy Storchaka dixit: >> Which tree should I build? A release (if so, which)? Or >> some CVS branch? > >It doesn't matter. Erm, still, which one do I build? Not 3.2 because it obviously works, at least as packaged in Debian. bye, //mirabilos -- Yay for having to rewrite other people's Bash scripts because bash suddenly stopped supporting the bash extensions they make use of -- Tonnerre Lombard in #nosec > Erm, still, which one do I build? Not 3.2 because it obviously > works, at least as packaged in Debian. Any >= 3.3.0. Hi again, sorry for being a bit late in following up to this. Here are the results for the two runs you requested; these were done on the same machine, intermixed (the first one cache-clean, the second and third run subsequently (so that the third run is cache-filled)). I used static builds. For the “without assert”, I get approximately 570 usec per loop consistently: (590,580), (570,570), (560,580), (570,560) Interestingly enough, there seems to be no preference either way (ASCII or UTF-8). For the “without the whole if‥endif block”, I get mixed results: (650,540), (690,530), (520,530), (650,540) Interesting here is that the third run of them both has a lower ASCII than UTF-8 time, the others don’t – but the UTF-8 run is the second in a row, so cache might be of an issue. Repeating the runs, I get (560,590), (540,530) for the second case, and (760,570), (590,660) for the first case breaking its “consistently 570 usec” streak. Error of measurement may be large, thus. Which supports your suspicion that the optimised case may not be needed at all. Matthias asked me to provide links how to set up an Atari VM with Debian unstable and a clean/minimal unstable chroot, since there’s much use of Ubuntu here who ship ARAnyM (I even filed a Sync Request so that the latest release got a fixed version ☺). has got pointers for the first part (setting up an ARAnyM VM), and contains the whole documentation (we cannot currently use d-i but people are working on it). is the output of “debootstrap --variant=buildd” and as such should be usable in either cowbuilder or sbuild. Considering that we *only* have unstable available, you may want to be careful when upgrading ;) but an apt-get update should work out of the box (takes a few minutes). The VMs have 768+14 MiB RAM each in the sample configuration (maxed out), which makes them use a bit less than 1 GiB on the host side. A 3 GHz host CPU is of benefit. > Which supports your suspicion that the optimised case may not be needed > at all. So we can just skip the assert when __m68k__ is defined? Could you please repeat the tests with a larger data (1MB or more)? Antoine: precisely. Serhiy: sure. The times are now in msec per loop. I did three subsequent runs, so the second and third tuple are cache-warm. Without assert: (89,88), (87,87), (89,86) Without block : (79,78), (78,78), (79,78) And in a second run: Without assert: (87,88), (87,88), (92,87) Without block : (111,91), (78,85), (79,79) This means that, again, removing the “optimised” code speeds up operations (on this particular slow architecture. Well, it will be safer to skip this block on such platforms. file30203/ascii_decode_nonaligned.patch is potentially a nop (the struct being a multiple of, in the m68k case 4, bytes is not an indicator of whether to skip it). I think we can be bold and put #if !defined(__m68k__) and #endif around the entire block and, should there ever be another architecture with similar issues, whitelist them there. > I think we can be bold and put #if !defined(__m68k__) and #endif > around the entire block and, should there ever be another architecture > with similar issues, whitelist them there. Agreed. What is sizeof(PyASCIIObject) on m68k? Currently 22; it will increase to 24 if a few more bits are added. That’s why I said it’s “fragile” magic. I’m currently thinking this patch. (Will need another day or so for compiles to finish, though.) + dest is always aligned on common platforms + (if sizeof(PyASCIIObject) is divisible by SIZEOF_LONG). Actually, that’s the part that is not true. On m68k, and perfectly permitted by the C standard, even a 24-byte object has only a guaranteed alignment of 2 bytes (or one, if it’s a char array) in the normal case (contains integers and pointers, nothing special). We patched out this bogus assumption from things like glib already ;). This is a bugfix, please let's keep it simple. Checking for __m68k__ ensures that other architectures aren't affected by mistake. Right, keeping it simple helps in preventing accidents, and the code block looks full of magic enough as-is. Maybe add a comment block that says: /* * m68k is a bit different from most architectures in that objects * do not use "natural alignment" - for example, int and long are * only aligned at 2-byte boundaries. Tests have shown that skipping * the "optimised version" will even speed up m68k, so we #ifdef * for "the odd duck out" here. */ Then we have an in-situ documentation point for why that ifdef is there and why m68k is “the odd duck” and this whitelist method is used. Well, then already too much bikeshedding for such simple fix. Antoine, do you want to commit a fix? New changeset 0f8022ac88ad by Antoine Pitrou in branch '3.3': Issue #17237: Fix crash in the ASCII decoder on m68k. New changeset 201ae2d02328 by Antoine Pitrou in branch 'default': Issue #17237: Fix crash in the ASCII decoder on m68k. Ok, I hope I got the fix right :) Thanks mirabilos for the comment suggestion, I used a modified version. Thanks Antoine! Now, for “finishing up” this… to follow up on Stefan’s comment… is there any way I can run the testsuite from an installed Python (from the Debian packages)? (I build the packages with disabled testsuite, to get the rest of the system running again, since python3.3 was recently made required and we had never built it before.) Otherwise I guess I could run “make test” on one of the earlier trees I used for the timing… but that machine is currently building six Linux kernel flavours from the src:linux package and thus will not be available for the next one and a half week or so… > Now, for “finishing up” this… to follow up on Stefan’s comment… is > there any way I can run the testsuite from an installed Python (from > the Debian packages)? "python -m test" (with any options you might like), but we don't guarantee that all tests pass on an installed Python. But at least you will be able to spot any hard crashes :-) Antoine Pitrou dixit: >"python -m test" (with any options you might like), but we don't No, I tried that (as it was the only thing I could find on the ’net as well) on an i386 system and only get: tglase@tglase:~ $ python2.7 -m test /usr/bin/python2.7: No module named test.__main__; 'test' is a package and cannot be directly executed 1|tglase@tglase:~ $ python3.2 -m test /usr/bin/python3.2: No module named test.__main__; 'test' is a package and cannot be directly executed Same when adding ‘-h’. bye, //mirabilos -- Gast: „Ein Bier, bitte!“ Wirt: „Geht auch alkoholfrei?“ Gast: „Geht auch Spielgeld?“ > >"python -m test" (with any options you might like), but we don't > > No, I tried that (as it was the only thing I could find on the > ’net as well) on an i386 system and only get: Ah, that's because the system Python install doesn't include the test suite. Perhaps you have to install an additional package, python-dev perhaps? (note, on 2.7, it's "python -m test.regrtest") Antoine Pitrou dixit: >(note, on 2.7, it's "python -m test.regrtest") That indeed does things. So I had mistaken them for the same problem. >Ah, that's because the system Python install doesn't include the test >suite. Perhaps you have to install an additional package, python-dev >perhaps? tglase@tglase:~ $ l /usr/lib/python2.7/test/ __init__.py pystone.py* regrtest.py* test_support.py __init__.pyc pystone.pyc regrtest.pyc test_support.pyc tglase@tglase:~ $ l /usr/lib/python3.2/test/ __init__.py __pycache__/ pystone.py* regrtest.py* support.py Maybe it’s just not packaged… these are all I can find, and installing python3.2-dev doesn’t fix it. Oh well, then it’ll just have to wait. Do you have a preferred place where I can submit the test results, as it’s getting very off-topic here? bye, //mirabilos -- "Using Lynx is like wearing a really good pair of shades: cuts out the glare and harmful UV (ultra-vanity), and you feel so-o-o COOL." -- Henry Nelson, March 1999 > Oh well, then it’ll just have to wait. Do you have a preferred > place where I can submit the test results, as it’s getting > very off-topic here? Well, if everything works fine, you don't have to submit them! If you get test failures, you can open issues for the individual test failures.
http://bugs.python.org/issue17237
CC-MAIN-2016-30
refinedweb
2,945
75
Created attachment 601937 [details] test case (click me!) We shouldn't get frames shorter than 16 ms with requestAnimationFrame. The attached testcase shows that we do, especially in the first few frames which can be shorter than 1 ms! Reload a few times, as results vary considerably across runs. Results are not affected by the fact that this animation doesn't paint anything else than the text. I originally got similar results with a WebGL benchmark. Here's a typical result I'm getting (Nightly 13.0a1, Linux x86-64): frame 1, 13 ms after start, frame duration 3 ms, far too short! frame 2, 17 ms after start, frame duration 4 ms, far too short! frame 3, 23 ms after start, frame duration 6 ms, far too short! frame 4, 30 ms after start, frame duration 7 ms, far too short! frame 5, 38 ms after start, frame duration 8 ms, far too short! frame 6, 46 ms after start, frame duration 8 ms, far too short! frame 7, 56 ms after start, frame duration 10 ms, slightly too short frame 8, 65 ms after start, frame duration 9 ms, far too short! frame 9, 76 ms after start, frame duration 11 ms, slightly too short frame 10, 92 ms after start, frame duration 16 ms frame 11, 107 ms after start, frame duration 15 ms, slightly too short frame 12, 124 ms after start, frame duration 17 ms frame 13, 141 ms after start, frame duration 17 ms frame 14, 161 ms after start, frame duration 20 ms frame 15, 174 ms after start, frame duration 13 ms, slightly too short frame 16, 190 ms after start, frame duration 16 ms frame 17, 206 ms after start, frame duration 16 ms frame 18, 223 ms after start, frame duration 17 ms The XPCOM timer implementation is busted. We know it. Not much use filing more bugs; we just need to fix it... :( Benoit, how do things look for you if you change the 1.5 to a 0 on ? With the tweak suggested in comment 1, it's better, but I still get some too-fast frames. Also, the frame durations remain very irregular: frame 1, 45 ms after start, frame duration 18 ms frame 2, 63 ms after start, frame duration 18 ms frame 3, 80 ms after start, frame duration 17 ms frame 4, 97 ms after start, frame duration 17 ms frame 5, 114 ms after start, frame duration 17 ms frame 6, 131 ms after start, frame duration 17 ms frame 7, 149 ms after start, frame duration 18 ms frame 8, 169 ms after start, frame duration 20 ms frame 9, 184 ms after start, frame duration 15 ms, slightly too short frame 10, 207 ms after start, frame duration 23 ms frame 11, 227 ms after start, frame duration 20 ms frame 12, 236 ms after start, frame duration 9 ms, far too short! frame 13, 253 ms after start, frame duration 17 ms frame 14, 270 ms after start, frame duration 17 ms frame 15, 295 ms after start, frame duration 25 ms frame 16, 306 ms after start, frame duration 11 ms, slightly too short frame 17, 323 ms after start, frame duration 17 ms frame 18, 341 ms after start, frame duration 18 ms frame 19, 358 ms after start, frame duration 17 ms frame 20, 381 ms after start, frame duration 23 ms frame 21, 396 ms after start, frame duration 15 ms, slightly too short frame 22, 411 ms after start, frame duration 15 ms, slightly too short frame 23, 428 ms after start, frame duration 17 ms That's interesting. I would not have expected too-short frames in that case... And I can absolutely believe the frames are irregular. Especially on Windows, where if I recall correctly the relevant timer is only accurate to like 16ms at best to start with. Another strange phenomenon: i seem to get more requestAnimationFrame callbacks than I do really composited frames. steps-to-reproduce: 1. run the testcase. 2. as results accumulate, sometimes there is a noticeable pause (maybe a GC pause or something). That's fine. But then, we should not get requestAnimationFrame callbacks, and the next frame we get should show a long duration (like, 100 ms). Instead, when rendering resumes, a bunch of new frames are now reported all at once, with short durations each. Having more requestAnimationFrame callbacks than composited frames seems like a waste of time, and a different bug than the timers bug. Is that with the 1.5 changed to 0 or without? That is with the 1.5 changed to 0. But, the same issue (comment 4) also happens without that 1.5 changed to 0. The only thing is, you need a non-optimized build to get noticeable pauses in this simple test. > But, the same issue (comment 4) also happens without that 1.5 changed to 0. Yes, that's expected. With the 1.5 the timer code tries to "guess" when to fire timers so they'll fire "at the right time" based on previous delays, so it can trigger timers that run way too early. With the 1.5 changed to 0, though, that should not be happening. Can you post a testcase that reliably reproduces the problem for you when the 1.5 is changed to 0? You can trigger a guaranteed 100ms pause by doing something like this: setTimeout(function() { var start = new Date; while (new Date - start < 100) /* do nothing */ ; }, 200); Oh, and are you testing on Linux, not on Windows? In that case you should be getting better accuracy out of your XPCOM timers... I am seeing slightly different data than in the first comments here. There are in fact far-too-short requestAnimationFrame firings, but they happen after delays. So for example if frames are 20ms (just to make it simple), we often fire in patterns like this: 0 20 40 70 - took 30ms, not 20, due to some browser delay like a gc 80 - took 10ms, in order to return to the same rhythm as before 100 - back to normal 120 - etc. So a long frame leads to a short frame after it. The alternative to this would be something like 0 20 40 70 - 30ms frame, like before 90 - wait 20ms after the long frame, unlike last time 110 - continue in a new rhythm, now %20 is 10 and not 0 Short delays after long ones makes sense if you're trying to match some realtime timeline. There are 3 basic options with 1 variant each: 1) Approximate realtime - long delays are followed by short delays until the next quanta. 1a) if "long delay" is > 2 quanta, you can either play the next frame, or skip it to stick to realtime (keep sync with other events) 2) Pausing to the next quanta (frame duplication if you will) will guarantee each frame is seen for at least the specified quanta (long delay, followed by another long delay until the next quanta). 2a) Same, but drop frames if they can't be output "on time" (think vsync tracking - if it doesn't make the deadline, you drop it). 3) Display each frame for >= quanta - if there's a long delay, the nominal next frame will be quanta later, establishing a new rhythm. This means sync to other elements wanders, but all frames are displayed for quanta ms. Basically you ignore that a frame took too long and start a new timer after each frame. 3a) Same, but you drop frames if the accumulated slip pushes you past the expected point for that frame (i.e. frame-drop to maintain sync) Part of the problem is "which is appropriate for the application"? We either need multiple functions, parameters, or choose one and let applications that need others in some manner roll their own given what we provide. Sure. So in comment 10 terms translated to nsITimer, #1 sounds like TYPE_REPEATING_PRECISE_CAN_SKIP (which is what the refresh driver uses). #3 is TYPE_REPEATING_SLACK. I'm not sure we have an equivalent of #2 right now. One other note: other UAs are just tying requestAnimationFrame to vsync. This would be more like #2, in fact. To match vsync behavior, we need our timers trigger on 16.67ms intervals (read: 16 or 17), and no frame can be less than 16ms. If frame 1 takes 20ms, the timer for frame 2 should be 28ms, ideally giving us timer triggers at t=[0,20,48]. Even if frame 2 can hit the t=32ms mark, it should instead wait until t=48ms to fire. This is #2 above. It seems like #3 might be ideal for us, since it basically only serves to limit us to ~60fps while allowing maximum framerate, which are both desired in the absence of vsync. An aside that 16ms frames yields 62.5fps, while 17ms yields ~58.8. We can't assume that when the timer fires JS can draw and gecko can compose a frame instantaneously. If we want to achieve something that synchronized to when we present the frame we will need to tie in composite notification to our internal logic.). (In reply to Benoit Girard (:BenWa) from comment #14) >). That would need to be verified. I rather agree with Jeff here as I expect that the user experience is mostly determined by the worst framerate, not the average framerate, in a certain time window (maybe 1 second, maybe 0.1 second, whatever unit of time our brain works with). So I expect that having one frame shown twice gives the same user experiece as staying at 30 Hz for a certain amount of time (maybe 1 second, maybe 0.1 second). I expect that amount of time to be significantly greater than 0.033 second = 1/(30 Hz) so, if that is confirmed, trying to catch up after a repeated frame would be futile. Experimentation/documentation needed! is a build with one potential approach. It adds a new type of timer, NEVERSHORT, which are never too fast. They ignore the timer system's internal adjustment mechanism (that makes the next firing delay short if the current one was long), and they schedule the next firing time right before actually firing and not before scheduling the firing event (which can get stalled if there are other events). This ensures they are never too early. If they are too late, they just pick up from there, so delays can look like 17, 17, 17, 1000, 17, 17, 17 etc. A main downside of this is reduced frame rates. We have longer frames but no shorter frames, so on average we decrease. Also, even at the fastest we seem to get frames of 17-18 and not 16-17, I suspect because we (purposefully) don't adjust for the average wait times of the event queue. The upside is we no longer get frames that are too short. But, I don't personally see a subjective benefit to this, I'm not that sensitive to this sort of thing though so maybe I am a bad tester. If others can test those builds that would be cool. Builds showing up in Alon's build implements type 3 from my post above. Part of the issue is knowing the purpose of the animation timer. The requirements from a usecase could be any of 1, 2 or 3 (and the 'a' variants). They are of differing levels of frequency of use, and a number of them can be emulated to any degree or another if they don't exist (and the app knows what type it *is* getting). Games in particular have very different needs than something showing a cartoon-ish anim or a video clip. (they want enough time to render the next frame, and preferably have the firing synced to vsync). Items meant to be synchronized to other events want a different approach. And for smooth, 30 or 60Hz playback (as monitors are strongly likely to be 60Hz nowadays), you need consistent 16.6*ms rates, and preferably with vsync. (If you have 2ms of jitter, and the firing is roughly aligned with vsync, you can have a wildly variable framerate visible to the user of 30-60Hz and the firing time jitters to one side or another of vsync with each frame.) The people at the Games Work Week got direct feedback on this issue; I didn't head it all, but check with them (Martin Best, the JS guys who were there, etc) WebKit/Chromium code search links to their implementation: So Randell's insights are right on in my view. When using vsync for games you want to target either 30-60fps and that's 16.667ms. If you do drop down to 30fps, you want to average the idle time for the last few seconds to see if it's greater than 50% to jump decide to jump back up to 60fps to keep it from thrashing back and forth between the two. However, many gamers prefer to have vsync off and live with consequences. In fact most PC games have this as the default setting. This allows for 52 fps for example which provides a more responsive feel to game play at the cost of the occasional tearing artifact. Having frames go shorter than the computation and rendering time seems counter productive. A note that with vsync, there doesn't need to be any decision whether to render at 60 or 30 (or less), as this behavior is just the result of the constraints: When swapping, block until vblank, swap, then return. Or more generally: At most one swap per vblank cycle. Swap must occur at vblank. If the frame takes 20ms, it will naturally overflow the first vblank cycle into the next, taking two vblank cycles to display, yielding 30fps if the base framerate is 60fps. 1 16 25 2 32 3 48 50 4 64 75 5 80 6 96 100 7 112 Here is the issue that the averaging is meant to solve. May not be an issue but just wanted to make sure to discuss. The right column shows when a new frame is ready, in this case every 25ms, the middle column is the number of frames at 60fps and the left column is the 60fps ruff ms count. You see the gaps that are created, this can give a jagged look so you want to stick to 30fps in these cases. You can do this at the games level by simply requesting a larger on requestAnimationFrame. VSync with 25ms frame times looks like: vblank 1 at 16ms frame 1 finishes at 25ms, blocking until... vblank 2 at 32ms, swap frame 1, begin work on frame 2 vblank 3 at 48ms, frame 2 finishes at (frame_2_start + frame_2_time) = (32ms + 25ms) = 57ms, blocks until... vblank 4 at 64ms, swap frame 2, begin work on frame 3 etc. Note that frames are swapped in only every other vblank, since work on the next frame starts at the vblank that swaps the previous frame. Thus, for 25ms frames, they always miss the next frame window, but land part way through the second, and are swapped 32ms after work began. Right. Though if the calculation time is close to 16.6ms it will jitter back and forth between 30 and 60 (and all frame calcs have jitter, and other processes/processing/etc will cause more jitter). This is something game people are used to. For most other stuff (non-games), calculation time isn't often the issue, it's mostly timer jitter and thread pauses - wanting to show something smoothly *or* keep it in sync with a different animation, etc. This is causing a lot of problems for my demos in Firefox. Sometimes they run at 75fps or faster (due to requestAnimationFrame) and it's very difficult to correct for this without dropping frames because of the way the timing seems to work. Is there anything I can do to prevent the requestAnimationFrame timer from running too fast/slow - for example, using requestAnimationFrame to kick off a pushMessage so that the timer doesn't try to compensate for the duration of my frames? My desired behavior for requestAnimationFrame would be: When possible, run requestAnimationFrame callback once for every vertical sync When not possible, run as close to 60hz as possible and then have the frames I generate presented at the next vsync. Of course, at present, I'd be happy to just actually get 60hz :) Chrome doesn't seem to vsync requestAnimationFrame output either (lots of tearing), but it at least manages to hit 60hz very precisely. IE does too, except for the fact that their requestAnimationFrame is broken so I can't use it. If you use setTimeout(15) or so, iirc I got a more consistent trigger rate. The real answer is it needs to be fixed. So this is interesting: Without Alon's modification, I get about ~46fps (21.7ms), and with, I get ~26fps (38.4ms). It seem like the modification is now appending ~16ms to the time the frame takes, as 46fps = ~22ms, 22+16ms = 38ms = ~26fps. This would make sense if each requestAnimationFrame() call sets up a new timer which now always takes 16ms. (and I believe this is the current setup) Really what I think we want is a consistent timer which fires every 16ms, calling the RAF callback if one has been set. Except we don't actually want wakeups every 1ms. And we also want sane handling of cases when a callback takes 128ms. (In reply to Boris Zbarsky (:bz) from comment #28) > Except we don't actually want wakeups every 1ms. And we also want sane > handling of cases when a callback takes 128ms. What do you call sane handling of this case? I hope that doesnn't involve trying to catch up on skipped frames. If a callback takes 128ms, too bad for the skipped frames, I don't see anything to do about that. Honestly, if a frame callback takes longer than (1000 / refreshRate), just schedule the next frame as soon as possible (i.e. maintain UI responsiveness by pumping messages, but don't wait). That's how basically every modern game works, unless it's directly slaved to vertical sync. When you're directly slaved to vertical sync you're basically just running as fast as possible, but then you block your main thread on vsync every time you paint, which ends up locking you to integral factors of the vertical sync rate (60, 30, 15, etc). But for requestAnimationFrame it probably makes the most sense to just do the above - try to run at 60hz, and when the callback duration gets too long, just run as fast as possible. It shouldn't be necessary to do any crazy timer scheduling for this. Also, if you could actually just slave to vsync, that would eliminate the need to do any timer scheduling for 60hz either, but I imagine it's not easy to actually slave to vsync on linux/mac, and I know you can't on windows if you aren't using DirectX. > What do you call sane handling of this case? Something that doesn't involve 8 immediate calls in a row to the callback after the 128ms thing returns, which is what the setup in comment 27 is liable to produce (see also the REPEATING_PRECISE nsITimer variety and the reasons we stopped using it for requestAnimationFrame). (In reply to Boris Zbarsky (:bz) from comment #31) > > What do you call sane handling of this case? > > Something that doesn't involve 8 immediate calls in a row to the callback > after the 128ms thing returns Great, we're on the same page. So, to make sure; what we want (ignoring vsync): - requestAnimationFrame will ideally trigger a frame 16ms from the last time its callback started executing (note, started, not anything else) - if the last time one started executing was > 16ms, then it should trigger the callback immediately Note I'm using 16ms here for 60fps; substitute whatever other value is appropriate for the target frame rate. (Would be nice if we had a way to let the page specify its target fps..) Implementation wise, seems like we should schedule a 16ms timer right at the start of the callback routine. If RAF is called, just set a flag that it was called, we've already got a timer set up, and carry on. If it's not called, then the flag won't be set, and the next time the timer fires we should just ignore it. This approach would handle both cases above -- if the callback takes longer the desired time, the timer will naturally fire as soon as it can afterwards. > Implementation wise, seems like we should schedule a 16ms timer right at the start of > the callback routine. Here's how the current implementation works, for reference. First of all there is only one timer involved. All RAF calls for the tab, and all async reflow and restyling happen off that one timer. So the "set a flag" thing happens automatically. This timer is a TYPE_REPEATING_PRECISE_CAN_SKIP XPCOM timer, so its behavior is as follows. 1) When the XPCOM timer thread triggers and posts the event to the main thread to run the nsITimer's callback, we add 16ms to "now" and record that time in the timer object. 2) When the event posted in step 1 fires on the main thread, we run the website callback. 3) After running the website callback, we schedule a new timer to fire at the time recorded in step 1 (or ASAP if that time has already passed). Note that there can be a noticeable delay between #1 and #2 depending on what else the main thread is doing. Not much we can do to avoid that. So unless I'm missing something, this is in fact very much what comment 33 is asking for, no? Created attachment 630653 [details] IRC chat log bz, kevin, and I chatted about this in irc today; here's the log. there are the beginnings of a plan that someone will summarize soon! Created attachment 630707 [details] Updated test to work in all browsers So that we can compare, I updated the test case to work in all browsers. This falls back to setTimeout() if some version of requestAnimationFrame is not present (ie9). I tested the updated test in Firefox 12 and 15, Chrome 19, Safari 5.1.7, IE9. If someone can test that and make sure it's a valid test. Currently Chrome seems to have the most consistent result. Just tested on Opera a few times. Never dips below 17ms. It's interesting to note here that FF is the only browser that dips below 16ms. It's not that interesting. We know exactly why we dip below 16ms. The only question is what, if anything, should be done about it. (In reply to Boris Zbarsky (:bz) from comment #40) > We know exactly why we dip below 16ms. The only > question is what, if anything, should be done about it. Yes, exactly. There are benefits to dipping below 16ms (keeping frame rates stable). The question is whether there is a downside to doing so. We suspected that rendering might look less smooth that way, but I haven't seen a subjective visual test that showed a difference. mbest also did some testing and told me the same. Just to clarify, the test Alon is referring too ran about 18 fps without dipping and this test had no really impact on the precised quality in my opinion. That isn't to say that it wouldn't look better running closer to the refresh rate nor does it reveal that having a lower frame rate was a good or bad thing. After thinking about it, I'm not sure that having pretty far of the sync rate makes it a test that we can really consider definitive. To talk about the down side for a moment, here are my thoughts on that: I think we need to be careful not to get too comfortable with the idea that 14-18 ms spread when no significant JS work executing is acceptable. We should get as close to 16.667 as possible in the empty loop case. As long as the delta time provided to game developers is in microsecond, they will be able to deal with any reasonable drift but the closer we can get the better. Having animation trigger before the JS work of the current frame is done executing seems to me to provide no value from a games perspective. My understanding is that this will just lead to posting the same frame buffer as last time, will you not? I think that if we are the only ones doing it following the current model, then we should either change to conform to the implied standard or update/clarify the standard to reflect our desired solution. This is pretty fundamental as it provides the core animation (or game) loop to developers. The fracturing standard is the number 2 argument against html5 I run into when talking to developers. Poor and fractured Audio being number 1. Having this inconstant across browsers seems like a very bad idea to help with HTML5 adoption if there is no clear advantage to the current approach. Please see attached IRC discussion. The fact that people are using rAF for both their animation loop and their game loop is in fact a problem that can't be fixed by tweaks to rAF behavior; it needs a different solution. (In reply to Boris Zbarsky (:bz) from comment #43) > Please see attached IRC discussion. The fact that people are using rAF for > both their animation loop and their game loop is in fact a problem that > can't be fixed by tweaks to rAF behavior; it needs a different solution. It shouldn't be a problem to do that, though. This is sort of orthogonal to the discussion of rAF behaving poorly even in the empty loop case. Agreed - though they may want animations to be synced to game state changes - not sure. As for real animations, jitter in timing can cause really annoying variable frame rates if the nominal time is close to vsync (frames will semi-randomly fall on one or the other side of the vsync, causing dups or skips depending if it's on the same side as the previous frame. Even if they're using a separate mechanism for state updates, the same issues apply including jitter vs. vsync. And I haven't read the log (I will), we need to make sure developers have a standard way to do this - introducing new standard mechanisms can be done, but it will be a while(!) before they land and are universal, and game writers will need to have fallback code for current rAF (for example).. An alternative to #2 that might not require #1 as a requirement is improving the delay-line algorithm to somehow ignore outliers. > run the main game loop in a separate callback, presumably a setTimeout(0) callback? Why? How does that help? (In reply to Benoit Jacob [:bjacob] from comment #46) >? Possibly, but only because it's possible to get slightly better input->network latency with this method (basically pumping the input stack faster). For rendering something at 60fps already, it is at-best the same as batching update-and-render, and slightly worse in the simple case. (rendering using positional data which is up to 4ms old) A game which is pushing the envelope timing-wise will likely behave better with split callbacks, but a monolithic callback should work fine for anything that's sub-15ms anyways. > Possibly, but only because it's possible to get slightly better input->network latency Input should be happening off event handlers, not off a timer loop, I would think..... (In reply to Boris Zbarsky (:bz) from comment #47) > >. I think we will still have delays shorter than 16.67 ms. The timer code uses an adjustment to compensate for the lag between posting an event and that event running. So if the event loop is slow for a period of time, we fire events earlier. But when the event loop becomes fast again, we fire some timers earlier than they should. We can fix that too though (one approach, not a good one, is in the build I posted). (In reply to Boris Zbarsky (:bz) from comment #49) > > Possibly, but only because it's possible to get slightly better input->network latency > > Input should be happening off event handlers, not off a timer loop, I would > think..... Input event handlers should only update variables directly representing input (e.g. "up arrow key is pressed"), but the main game engine loop updating characters' positions, doing physics etc, should definitely not run off input event handlers (if only because it will need more frequent callbacks than just the two "key pressed" and "key released" callbacks). Since, in a responsiveness-sensitive game, the game engine loop should also be decoupled from rendering, it shouldn't run off requestAnimationFrame either. So I only see setTimeout(0) i.e. get a callback ASAP. > The timer code uses an adjustment Item 2 in my list of things to do is removing that code. Do I correctly understand that having requestAnimationFrame callbacks being given by a timer is only a fallback/temporary solution for lack of proper vsync handling, and that once we get code to use vsync, we should simply fire a requestAnimationFrame callback on vsync (unless another one is still pending)? Or is there some deeper reason why timers are relevant to requestAnimationFrame even in presence of vsync? (In reply to Boris Zbarsky (:bz) from comment #52) > > The timer code uses an adjustment > > Item 2 in my list of things to do is removing that code. Sorry, I should have read that more carefully. I'll comment in that bug. As far as vsync... there's some discussion about that in the attached log. But yes, if we stop triggering rAF from timers then the XPCOM timer stuff stops being relevant. Created attachment 631040 [details] testcase + buffer output + simulate game logic CPU time Submitted bug 763122 to track rAF discussion outside the scope of the main issue with timers/timing in this bug. Created attachment 634148 [details] updated benchmark, prettified output Updated the testcase and prettified the output. I've been experimenting with this with a build that has bug 590422 applied (removing timer filter/adjusting stuff), and that does timeBeginPeriod(1) around XRE_main. The problem is, with any combination of them, or without any changes, in a fresh build/launch I tend to get 16-17ms reliably basically throughout, with 5-10ms of logic time. Still experimenting more. Created attachment 634155 [details] win32 timer query tool Here's a little tool that can query the current timer resolution and change it. Note that it uses the internal NtQueryTimerResolution/NtSetTimerResolution functions that timeBeginPeriod/timeEndPeriod use instead of using the time* functions; there's no way to query the current resolution with the time* functions. On my laptop, when it's plugged in, I see a resolution of 0.5ms, with a range of 0.5ms-15.6001 ms. I can't get this to budge; I try setting it to 1ms, 5ms, 15ms, and it stays at 0.5ms. I'll try again when it's on laptop power later, but this could explain why I have a hard time reproducing any timer issues. (Just run to query, run with an (optionally floating-point) arg to set resolution in ms.) On my desktop, the range is also 0.5-15.6001, but the resolution happens to be set at 1.0ms. I have a bunch of flash running, which I think bumps it up; would be interesting to look at this on a fresh system boot. However, I think what it's showing is that just a higher resolution timer setting is probably not enough. If it helps, we have two approximate implementations of the requestAnimationFrame in Chromium. The first approach, which we've been using since Chrome 13 up to and including now, uses the following trick: on almost all OS', the first swapbuffers/present call on a double-buffered swapchain swapbuffers/D3D present is nearly instananeous, when the system was previously is idle. But, if you then call present *again* right after that one returns, the second present will almost always block until the next vblank interval. In chrome, we execute these swapbuffers on a secondary thread from javascript. So, effectively, when you first request an animation frame, we fire it immediately. This triggers rendering and sends a swapbuffers to our GL thread which completes immediately. The next request for an animation frame, we also fire immediately. This sends another swap to our GL thread. But, this swap now blocks until the next vsync. The third request animaiton frame that we get, we delay until the previous swap completes. As it turns out, this gives really smooth frame rates, and recovers quite gracefully when, for example, a webgl game is really GPU heavy. In that case, you dont want to keep ticking rAF at 60Hz--- doing so would just cause more and more work to pile up on the driver. Chrome Android uses our v2 scheduling approach. It is based on regular timers, but with error correction to deal with the fact that no OS will ever give you a 16.6666ms callback. The implementation of our timer for this is here, and is activated when chrome is in threaded compositing mode (about:flags): trac.webkit.org/browser/trunk/Source/WebCore/platform/graphics/chromium/cc/CCDelayBasedTimeSource.cpp . As it turns out, this timer has slight +/-1 noise when it runs per frame, but on average, it actually does average out to the correct target frame rate due to this error correction. "Locking to Vsync" is just a matter of starting this timer (or drifting it) such that it its m_timebase is aligned with the hardware vsync timer. Almost all OS' have apis for getting that timebase, so we periodically ask for it, then update the delay based time source to use that new timebase. Our measurements with about:tracing have shown that this is surprisingly effective. We're in the process of switching desktop chrome over to use this same technique. Our tracking bug for this is: Hope this is useful perspective! Hmm, we need to fix this, ideally for fx15. bz's two points that we need to resolve are: > 1) Resolve the 15ms minimal resolution problem on Windows. I propose that we do this by using timeBeginPeriod/timeEndPeriod based on whether there is requestAnimationFrame active, perhaps just in the current tab. (Or a low setInterval?) We can then evaluate the rAF algorithm and see if we need to do the adjustments that chrome does as descried above; I think this will get handled by the timer infrastructure. > 2) Reland the fix in bug 590422. Once we have the above, then look into doing this. We might regress whatever DHTML test, but we should actually look and see what the test is, and see if it's an actual regression or just on paper. (e.g. was it depending on old timer behaviour of firing a bunch of 0-spaced ticks to 'catch up', thus giving it a pile of frames at once? should the tests be rewritten to use rAF?) Any concerns with the above plan? I'm going to make myself responsible for this bug, unless anyone really objects :) >. (In reply to Boris Zbarsky (:bz) from comment #63) > >. Hmm. Do we care, though? I guess it was using setTimeout(0)? I'm going to have to get a separate machine to test... after a while, my laptop ends up at 0.5ms timer resolution mode, and there's no point in testing this unless the system is at the default 15ms. Could be virtualbox; 'powercfg -energy' says that the google talk plugin at least requests 5ms, though not 0.5ms. > Hmm. Do we care, though? That's a good question, yes. Real pages _do_ do this sort of thing. > I guess it was using setTimeout(0)? It might have been using 10, not 0. But something along those lines. (In reply to Boris Zbarsky (:bz) from comment #63) > The user experience the first time > I tried to land that bug was that scrolling using timeouts would take longer > 50% longer... So it wasn't just a paper regression. That makes sensek, if we remove the timer adjustment entirely then events will simply fire later than they should - because currently the timer adjustment is what tries to make them fire at the right time, when there is load. So removing the timer adjustment will make us less snappy. An alternative is to add a few lines of code to the RAF implementation that prevents frames that are too short (if it's too soon, schedule a callback for the right time). This would not regress general responsiveness, and would still prevent frames that are too short."? For testing, I should think we should shoot for roughly 60fps (16ms is 62.5fps) with <15ms of 'logic', with FPS slowly dropping below 60 as we add more 'logic'. (In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #67) >"? I think the success criteria for RAF is exactly 1 frame is produced in every vsync interval of the display. It's possible - deceptively easy, in fact - to have an average framerate that exactly matches the display but miss a ton of frames by producing them just before/just after vsync lines, producing a terrible user experience even though the averages look good. Yep, I think getting it lined up with vsync will be a followup though -- we need to get to solid 16ms first, and then get it aligned correctly. I buy that. You might shoot for a solid 1000/60.0 == 16.6' then, so that your timer code correctly handles the fact that the OS will never give you what you asked for. Created attachment 637954 [details] updated benchmark Added an option to run long, and some additional display to the results (fps, % for each delay time) Created attachment 637957 [details] [diff] [review] patch Here's a first attempt at a patch. - Instead of creating a new timer per RefreshDriver, it just creates two -- one regular, one throttled -- and shares them amongst all RefreshDrivers. - It enters 1ms-precision mode on windows while requestAnimationFrame is active/outstanding, leaves it afterwards. - It tracks the desired interval and similar as doubles with sub-ms precision, and then does explicitly figures out when to schedule the next timer instead of relying on nsITimer's interval mode. - It breaks tests that use advanceTimeAndRefresh. :( I'll fix this later, likely with another RefreshDriver implementation (and making some of the calls virtual). - I noticed that the event tracer often kicks in and throws things for a loop; I added an env var, MOZ_NO_EVENT_TRACER, to test what things are like without it. With this patch, I can get much smoother framerates with the testcase on Windows. Still not as smooth as I'd like, but much better. It doesn't even need the patch to remove the timer filter stuff, though we do see those effects on occasion (which is annoying; we really shouldn't, because we're not doing intervals!). There's an #if 0 in there too; right now, if the amount of work being done exceeds one frame time, we'll still delay the next frame until the next natural frame tick. So if you do 20ms of work, your next frame will show up at the 32ms mark. This is a tossup -- if we had vsync, this is what would happen, but it will also likely "regress" some performance benchmarks. Open to suggestions on which way we should go. Note that with this approach we have the option of creating another refresh timer which uses vsync, or uses its own timers separate from nsTimer; however, they all have to post back to the main thread's event loop for execution. bz: feedback ping! Comment on attachment 637957 [details] [diff] [review] patch I'm really sorry for the lag here. I really needed to sit down and think this stuff through. :( Tick() probably needs to hold refs to the refresh drivers in the array, right? This is no longer doing backoff on background refresh drivers, as far as I can tell. It just ticks them at 1Hz. Is that OK? Generally looks OK, I think.... (In reply to Boris Zbarsky (:bz) from comment #75) > Comment on attachment 637957 [details] [diff] [review] > patch > > I'm really sorry for the lag here. I really needed to sit down and think > this stuff through. :( No problem! > Tick() probably needs to hold refs to the refresh drivers in the array, > right? Hmm.. I convinced myself that it didn't need to, but perhaps erroneously. My thinking was that Tick() will always be called on the main thread, and that you can only remove refresh drivers on the main thread... so unless one Tick() can remove a later refresh driver, it should be safe to not take refs. (Though, now that I think about it -- the callback could close another window or something, which could cause a later refresh driver to be removed maybe?) > This is no longer doing backoff on background refresh drivers, as far as I > can tell. It just ticks them at 1Hz. Is that OK? That code looked a little weird to me, but maybe not: - rate = mThrottled ? DEFAULT_THROTTLED_FRAME_RATE : DEFAULT_FRAME_RATE; - PRInt32 interval = NSToIntRound(1000.0/rate); - if (mThrottled) { - interval = NS_MAX(interval, mLastTimerInterval * 2); - } Was the intent to keep slowing down background refresh drivers? 1s (1Hz from DEFAULT_THROTTLED_FRAME_RATE) -> 2s -> 4s -> 8s -> 16s etc? If so, ticking them along at 1Hz is probably not good, especially for mobile power usage. I can probably do something better there. > the callback could close another window or something Yeah, callbacks can run arbitrary script, including spinning the event loop, etc. Think sync XHR or alert() or opening a modal window in the callback. ;) > Was the intent to keep slowing down background refresh drivers? Yes, precisely. (In reply to Boris Zbarsky (:bz) from comment #77) > > Was the intent to keep slowing down background refresh drivers? > > Yes, precisely. My first thought to do this is to still have them all be driven off the same timer, but have it slow down globally, and reset (globally) to 1s whenever a new one is added. That's probably the simplest; it does mean if you have 50 tabs in the background for a while, and then you background a new one, they'll all start back up again at 1Hz, but I don't think that's that big of a deal. That case is somewhat crappy anyway, now that I think about it -- having *all* your background tabs trigger off the same timer is either good or bad, depending on how you look at it. It's bad in that they'll all trigger at the same time, potentially causing a user visible delay. It's good because they'll all trigger at the same time, letting the system be idle the rest of the time, instead of being interleaved. > and reset (globally) to 1s whenever a new one is added. Yeah, this is probably fine. I'm still moderately interested in just turning off the refresh driver altogether in background tabs, but I haven't been able to sell people on it yet. ;) (In reply to comment #79) > I'm still moderately interested in just turning off the refresh driver > altogether in background tabs, but I haven't been able to sell people on it > yet. ;) What are the arguments of the people who oppose this idea?. (In reply to comment #81) >. Can the script detect that this stuff has actually happened without receiving the event? If yes, then I don't think that's fine (unless you argue that very few people use them in practice...) >.... (In reply to comment #83) > >.... Hmm, OK, then this urges me to fall on your side in this battle! But that's the topic of another bug... Yep, the way I look at it is that you'd see similar behaviour if the user just suspended their laptop. I think we should treat background tabs basically exactly the same way if they haven't been visited in a while. It would be great if we can get a hacks blog post about this when it gets in! Created attachment 657410 [details] [diff] [review] use custom timer for refresh driver Ok, here's an updated patch. Things that are changed: - the RefreshDriverTimer class is now a virtual base class. There are two concrete implementations: -- PreciseRefreshDriverTimer: this implements the core algorithm described here, and is similar to the google approach. It uses one-shot nsITimers. -- InactiveRefreshDriverTimer: this is for inactive pages; it doubles the time of each firing, up to 15min (currently set just in the code), at which point it stops firing anything. It resets back to 1s any time a new timer is made inactive. - the testing functions work and tests pass. With the RefreshDriverTimer class, we should be able to implement vsync in a straightforward way in a followup patch. On windows, a debug build hits 60fps on the benchmark with over 95% of the frames being 16 or 17ms with no nsTimer changes. (MOZ_EVENT_TRACE from the profiler was kicking in occasionally which can cause hurt.) Try server run is in progress at Comment on attachment 657410 [details] [diff] [review] use custom timer for refresh driver Review of attachment 657410 [details] [diff] [review]: ----------------------------------------------------------------- Looks good, but we need some comments describing the requirements and the design. ::: layout/base/nsRefreshDriver.cpp @@ +61,5 @@ > + NS_ASSERTION(!mRefreshDrivers.Contains(aDriver), "AddRefreshDriver for a refresh driver that's already in the list!"); > + mRefreshDrivers.AppendElement(aDriver); > + > + if (mRefreshDrivers.Length() == 1) > + StartTimer(); {} @@ +71,5 @@ > + NS_ASSERTION(mRefreshDrivers.Contains(aDriver), "RemoveRefreshDriver for a refresh driver that's not in the list!"); > + mRefreshDrivers.RemoveElement(aDriver); > + > + if (mRefreshDrivers.Length() == 0) > + StopTimer(); {} @@ +117,5 @@ > + continue; > + > + drivers[i]->Tick(jsnow, now); > + } > + //printf_stderr("(done ticks)\n"); Get rid of these printf_stderrs or convert them to PR_LOG @@ +211,5 @@ > + } > +#endif > + > + // calculate lateness > + TimeDuration lastTickLateness = aNowTime - mTargetTime; Remove this since you're not using it? Or else inline it into the logging call, if you're going to keep that. @@ +265,5 @@ > + uint32_t delay = static_cast<uint32_t>(mNextTickDuration); > + mTimer->InitWithFuncCallback(TimerTick, this, delay, nsITimer::TYPE_ONE_SHOT); > + > + // double the next tick time > + mNextTickDuration += mNextTickDuration; *= 2 reads better @@ +486,5 @@ > > + // we got here because we're either adjusting the time *or* we're > + // starting it for the first time > + if (mActiveTimer) > + StopTimer(); Just call StopTimer() unconditionally. @@ +502,5 @@ > + mActiveTimer->RemoveRefreshDriver(this); > + mActiveTimer = nullptr; > + > + if (mRequestedHighPrecision) > + SetHighPrecisionTimers(false); {} (In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #87) > - the RefreshDriverTimer class is now a virtual base class. It's not a virtual base class, which is good because virtual inheritance is evil :-) Created attachment 658203 [details] [diff] [review] use custom timer for refresh driver (v2) Updated based on review comments. Added more comments, {}'s, and added useful logging (for debug builds, at least, since FORCE_PR_LOGGING is not enabled). Comment on attachment 658203 [details] [diff] [review] use custom timer for refresh driver (v2) Review of attachment 658203 [details] [diff] [review]: ----------------------------------------------------------------- ::: layout/base/nsRefreshDriver.h @@ +269,5 @@ > }; > + > + friend class mozilla::RefreshDriverTimer; > + > + void SetHighPrecisionTimers(bool aEnable); SetHighPrecisionTimersEnabled Comment on attachment 658203 [details] [diff] [review] use custom timer for refresh driver (v2) Nit, you're using inconsistent coding style. Try to use Gecko coding style. >+class RefreshDriverTimer { { should be in the next line >+public: >+ RefreshDriverTimer(double aRate) { ditto. Same also elsewhere with classes and methods. Comment on attachment 658203 [details] [diff] [review] use custom timer for refresh driver (v2) Doesn't this end up running all the inactive drivers in a loop? That could be rather bad, especially if one has several background tabs. Pause times might become significantly long. ;) Comment on attachment 658203 [details] [diff] [review] use custom timer for refresh driver (v2) >@@ -7,11 +7,27 @@ > /* > * Code to notify things that animate before a refresh, at an appropriate > * refresh rate. (Perhaps temporary, until replaced by compositor.) >+ * >+ * Each document has its own RefreshDriver, Btw, this comment is not correct. (In reply to Olli Pettay [:smaug] from comment #94) >. Depends; having multiple background tabs all trigger on their own timers could cause more overall slowness.. I'm happy to implement that if that sounds better. > + ;) Can you explain? > >@@ -7,11 +7,27 @@ > > /* > > * Code to notify things that animate before a refresh, at an appropriate > > * refresh rate. (Perhaps temporary, until replaced by compositor.) > >+ * > >+ * Each document has its own RefreshDriver, > Btw, this comment is not correct. What's the correct version? (In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #96) > Depends; having multiple background tabs all trigger on their own timers > could cause more overall slowness. Well, bg timers run very rarely. And when they run, they just handle one tab. >. This might work, although in my case it would mean about one minute delay immediately when tab goes to background. That is a lot. Also, each bg tab would be handled the same way. I think those tabs which have been in bg longer, should get longer intervals. > > This screams about sg:crit bug ;) > > Can you explain? Make sure you keep stuff alive when calling JS callbacks. > > Btw, this comment is not correct. > > What's the correct version? There is a refreshdriver per top level chrome document, and refresh driver for top level content document. how does this all work with dlbi? RefreshDrivers end up to the timer's list in random(?) order. Should we guarantee that top-level chrome refresh drivers are handled after content drivers? Created attachment 658586 [details] [diff] [review] use custom timer, part 2 Part 2, with smaug's comments incorporated: - Uses a nsTArray<nsRefPtr<nsRefreshDriver> > instead of bare nsRefreshDriver*s - The inactive timer ticks only one driver per tick, keeping the same rate until all of them have been poked once; then the rate doubles as before. This eliminates the possibility that lots of background tabs will cause lots of jank while still poking them on a regular basis -- less frequently based on the number of tabs, which seems like the right thing to do. I can roll part 1 and part 2 into a single patch if that's easier (and will probably do that for eventual landing); wanted to make the followup easier for roc and others who have looked at the original patch. (In reply to Olli Pettay [:smaug] from comment #98) > how does this all work with dlbi? > RefreshDrivers end up to the timer's list in random(?) order. Should we > guarantee that > top-level chrome refresh drivers are handled after content drivers? I don't know -- you tell me if that's a thing we should do, or if we need to do it in a followup :) It seems like they would fire in basically a random order before as well (with each one having its own timer), no? Comment on attachment 658586 [details] [diff] [review] use custom timer, part 2 - if (drivers[i]->mTestControllingRefreshes) + if (drivers[i]->IsTestControllingRefreshesEnabled()) continue; if (expr) { stmt; } >+ static void TickDriver(nsRefreshDriver* driver, int64_t jsnow, TimeStamp now) { aDriver, aJSNow, aNow And { goes to the next line. Though, I don't understand the reason for the method. >+ /* Runs just one driver's tick. */ >+ void TickOne() { { goes to the next line >+ int64_t jsnow = JS_Now(); >+ TimeStamp now = TimeStamp::Now(); >+ >+ ScheduleNextTick(now); >+ >+ mLastFireEpoch = jsnow; >+ mLastFireTime = now; >+ >+ nsTArray<nsRefPtr<nsRefreshDriver> > drivers(mRefreshDrivers); >+ if (mNextDriverIndex < drivers.Length() && >+ !drivers[mNextDriverIndex]->IsTestControllingRefreshesEnabled()) >+ { { should be in the previous line ('if' is a control structure, not method or class). >+ static void TimerTickOne(nsITimer *aTimer, void *aClosure) { * goes with types and { should be in the next line. >+ bool IsTestControllingRefreshesEnabled() const >+ { >+ return mTestControllingRefreshes; 2 space indentation everywhere, please. r=me with nits fixed. Ask roc or mattwoodrow about dlbi Comment on attachment 658586 [details] [diff] [review] use custom timer, part 2 Review of attachment 658586 [details] [diff] [review]: ----------------------------------------------------------------- I don't think this impacts DLBI. ::: layout/base/nsRefreshDriver.cpp @@ +354,5 @@ > + { > + TickDriver(drivers[mNextDriverIndex], jsnow, now); > + } > + > + mNextDriverIndex++; This approach is somewhat weird IMHO. Making the delay depend on the number of background tabs open doesn't seem very well-motivated. Spacing them out does seem like a good idea though. How about we give each background tab an independent timer? (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #103) > > How about we give each background tab an independent timer? That is the current behavior (which is IMO ok).. The constructor for RefreshDriverTtimer should document what aRate is. Is it Hz? Is it ms? Something else? Tick()'s documentation means to say "poking all the refresh drivers"? > It always schedules ticks on multiples of aRate I assume you've checked that this does not regress the testcase in bug 630127? Seems like it should be ok, but worth testing. >+? In the two interval getters, the prefName should probably be a static const char[], now that it's not conditional. > nsRefreshDriver::AdvanceTimeAndRefresh(int64_t aMilliseconds) Why are we adding a test for aMilliseconds > 0 here? Why do we not need to tick in RestoreNormalRefresh? Why is the removal of EnsureTimerStarted from the MostRecentRefresh getters ok? > nsRefreshDriver::EnsureTimerStarted(bool aAdjustingTimer) The comment on the mFrozen || !mPresContext case shouldn't mention "already been started". r=me modulo the concerns smaug had. Comment on attachment 658586 [details] [diff] [review] use custom timer, part 2 I think this is OK. (In reply to Boris Zbarsky (:bz) from comment #105) >. Fixed. > The constructor for RefreshDriverTtimer should document what aRate is. Is > it Hz? Is it ms? Something else? Done. It's ms (as a double). (Maybe aInterval is a better arg name?) > Tick()'s documentation means to say "poking all the refresh drivers"? Yep, fixed. > > It always schedules ticks on multiples of aRate > > I assume you've checked that this does not regress the testcase in bug > 630127? Seems like it should be ok, but worth testing. Yep, seems fine; tested. Even better than without, as the testcase is more consistent 60fps in many cases. > >+? Yeah, I added a multiply-by-double to TimeDuration. Mildly worried about overflow, but oh well. Combined some of the other calculations as well. > In the two interval getters, the prefName should probably be a static const > char[], now that it's not conditional. I just got rid of the separate prefName variables; it was only used once. > > nsRefreshDriver::AdvanceTimeAndRefresh(int64_t aMilliseconds) > > Why are we adding a test for aMilliseconds > 0 here? Sillyness. Earlier bug that left mMostRecentRefresh to be 0. Fixed. > Why do we not need to tick in RestoreNormalRefresh? I didn't think we'd want to -- we start up the normal timer again, and it'll tick when appropriate. > Why is the removal of EnsureTimerStarted from the MostRecentRefresh getters > ok? They're updated in the constructor, in Tick, and in the test function -- the constructor initializes it to "Now", which isn't technically correct; but I think initializing it to 0 would lead to problems elsewhere in code. We don't need the timer to be running to find out the most recent refresh. > > nsRefreshDriver::EnsureTimerStarted(bool aAdjustingTimer) > > The comment on the mFrozen || !mPresContext case shouldn't mention "already > been started". Fixed. Thanks for the review! New patch soon. > I didn't think we'd want to That depends on whether tests measure things immediately after restoring normal refresh, I expect. > We don't need the timer to be running to find out the most recent refresh. Hmm. I think it used to be that we wanted to make sure the timer started when we handed out recent refresh times so things would sync op properly, but with the new timer setup maybe it's ok. Looking forward to updated patch! This is looking good. Created attachment 659787 [details] [diff] [review] updated combined patch Updated, combined patch. Try server results at -- I'm still looking through all the failures. All of them seem to be known intermittent ones, but there's still a lot of them; not sure what to make of that. Builds are at if anyone wants to try them. Comment on attachment 659787 [details] [diff] [review] updated combined patch Review of attachment 659787 [details] [diff] [review]: ----------------------------------------------------------------- ::: layout/base/nsRefreshDriver.cpp @@ +47,4 @@ > > using namespace mozilla; > > +#ifdef PR_LOGGING Please verify that this won't be defined in regular release builds so that we can avoid the logging overhead in the cases where performance matters. @@ +198,5 @@ > +protected: > + > + virtual void StartTimer() > + { > + mTimer = do_CreateInstance(NS_TIMER_CONTRACTID); I think it's best to not create a timer object each time this code is called, can you move this to the ctor perhaps? @@ +379,5 @@ > + > + mNextDriverIndex++; > + } > + > + static void TimerTickOne(nsITimer* aTimer, void* aClosure) It would be good if we could ensure that calls to this function and ScheduleNextTick are always interleaved. ::: xpcom/ds/TimeStamp.h @@ +92,5 @@ > + TimeDuration operator*(const uint32_t aMultiplier) const { > + return TimeDuration::FromTicks(mValue * int64_t(aMultiplier)); > + } > + TimeDuration operator*(const int64_t aMultiplier) const { > + return TimeDuration::FromTicks(mValue * int64_t(aMultiplier)); Nit: this cast seems unnecessary. So, an update -- I'm working through fixing a few of the test issues; the only thing so far that isn't a test issue is a few bidi reftest failures, such as layout/reftests/bidi/779003-1.html . It's almost as if bidi resolution isn't happening correctly; on reload it's a tossup on whether it renders correctly or not (failures always look the same). But looking at the observer callbacks in the refresh driver, I don't see anything different being called between the good and bad cases! nsBidiPresUtils::Resolve is also being called an identical number of times for the two cases. Not really sure how to track it down, any suggestions would be appreciated. Cc'ing smontagu for potential bidi help -- see comment #111. (In reply to comment #112) > Cc'ing smontagu for potential bidi help -- see comment #111. So my ideas that I talked to Vlad about on IRC are: 1) Compare the frametrees in both cases. 2) Compare the mozFlushTypes passed to FlushPendingNotifications. One thing which _could_ be happening here is that we might be hitting a bug in bidi resolution depending on when we exactly reflow. Good news! I just filed bug 793233 for the failing reftest, because it fails in the same way in a nightly build in the face of a trivial dynamic modification. Ehsan's theory is that reflow is being triggered at a different point in time due to my changes and so is showing the problem more aggressively. Created attachment 666731 [details] [diff] [review] final updated patch Final patch; will carry forward bz's and smaug's reviews, but would like roc's final r+ as well. This is clean on the tryserver, with the following patches applied (all will be part of the same push): bug 793233: mark two bidi reftests as random bug 796156: fix test_bug574663 to drive scrolling directly via advanceTimeAndRefresh bug 768838: fix intermittent test_bug549262 failure (also drive scrolling directly via advanceTimeAndRefresh) Fingers crossed. Backed out for causing: ...on all platforms. Was this tested locally before pushing? (Seems like not all platforms & just opt; was all of the mochitest-other at time of posting :-)) Yeah, it had extensive tryserver runs :p The various tests are badly written, and depend on not-guaranteed timing. Working on fixing the tests now. FWIW, this seems to have regressed a bunch of Talos benchmarks, most notably trace malloc allocs increase of about 1% across the board, some ~30% tscroll regressions on 10.6 and 10.7, and others so far. Although it seems to have improved tscroll on Linux, for example... Hm, I'll have to look at how tscroll is written. I don't remember seeing that big percentages on try server runs, but I can see this changing tscroll for sure. Especially since we're doing everything on 16ms steps now... There seems to be a bunch of Ts, Paint regressions as well. Be careful looking at Talos numbers for things like this. Changing the number and timing of animations/paints/etc can make the code do more (or less) work. For example, if an anim was running at 8fps because of this, and the patch fixed it to run at the "proper" 10fps, you'll see a lot of extra paints, allocs, CPU use, etc. That isn't to say it isn't a bug - it might be - but it might not too. Talos may not be a great test for this type of change. Backed out for turning WinXP debug m-oth permaorange (and yes; our WinXP coalescing plus dire mochitest-other intermittent failure situation mean that we didn't notice for 3 days!): Backout: Once more into the breach... (includes crash fix from bug 799242) (In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #126) > Once more into the breach... > > (includes crash fix from bug 799242) Caused bug 797263 again. Am going to need to back this out, but currently cannot push to inbound due to bug 766810 :-( Backout: Fix the root bug (it sounds like a bug here), but please do not hard code a 16ms limiter, when requestAnimFrame(). In Safari and Chrome (GPU accelerated mode), requestAnimationFrame on 120Hz computer monitors (e.g. Acer VG236H or Samsung S23A700D) lasts for about 8.333ms, and rightfully so. Computers running 120Hz monitors are usually i7 computers with powerful GPU's, and the user has intentionally switched to 120Hz mode, so when you synchronize requestAnimationFrame() to VSYNC, _please_ honor the VSYNC. Laptops generally never run at 120Hz, so there's no power concern. Correction: Asus VG236H (not Acer) is the 120Hz model. Correction: Asus VG236H 120Hz or VG278H/VG278HE 144Hz (not Acer) P.S. The DEFAULT_THROTTLED_FRAME_RATE has got to go -- it is mutually exclusive from the future goal of being able to run requestAnimationFrame 120 times on a 120Hz computer monitor (like other VSYNC'd browsers already do). You don't need precise VSYNC, as long as you've got compositing. The frame timings could be +/- 50% of the VSYNC, and the presentation will still be perfectly smooth. Chrome's timer jitter during VSYNC is still approximately 1ms on a fast system. Heck! You could even poll the scan line register to predict when the next VSYNC will occur (on Windows, it's Direct3D "RasterStatus.ScanLine"), and schedule the timer based on that. The prediction will be accurate enough. Actually, just ran a small test on Chrome -- it has a framerate throttler when chrome://flags has "Disable GPU VSync" enabled; the throttler is apparently configured to 250 in Chrome. So, when fixing Mozilla to synchronize to VSYNC, then modify DEFAULT_THROTTLED_FRAME_RATE to a high value well beyond the best computer monitors (240 Hz is probably a future-proof value). Running rAF above 60hz is going to break tons of web games. I don't think you can make a change like that unilaterally without putting it behind a pref or a new API, or at least doing a survey of existing content to see how wide the damage would be. Can you point to any reasonably complex HTML5 games that work with 120hz rAF? A different perspective: How many users are going to run at 120Hz today? Not many right now. Damage will be near zero because requestAnimationFrame will keep running at 60Hz for the vast majority of users (even when the throttle is removed, because now VSYNC is throttling requestAnimationFrame). Web games that don't adjust themselves are the ones that "broken". I agree with Chrome's decision to sync to VSYNC, and web games are slowly adjusting to that. HTML5 games using WebGL, works wonderfully at 120Hz. Nothing needs to be fixed for most of these games, because many WebGL games update 3D game world by the real-world timer. Framerate fluctuates up and down due to performance anyway, so these games are all naturally framerate adaptive, and they all adapted to 120Hz with no modification -- without the game authors knowing it. It's the Canvas2D games that are written. Some of the most fiercely-complaining users are the users of emulators running illegal online ROM's. Other users are of the good HTML5 Canvas2D games are framerate adaptive. Games that adapt to framerate drops due to system performance, almost also always adapt to unexpectedly high framerate. The "60" for requestAnimFrame is an artifical number that came from the "60" of the common refresh rate. Don't we want to honor the original purpose of the number "60"? We don't want to stay as a 60Hz dinosaur, even when we go beyond to 120 Hz. Stop being a 60Hz dinosaur. Again, RISK IS LOW because of the low but gradual adoption rate of 120Hz. :-) Oh, and yes -- there's a lot of 2D games that won't work properly at 120Hz. Provide a fallback for those. The early 120Hz users will often be power users, and those will often know how to re-enable a framerate throttle. However, you're claiming a non-issue. Combined in all browsers syncing to VSYNC, more than 50% of the browser market sync's to VSYNC, and the games that don't adapt to that, are "broken". This bug is not about whether to sync to VSYNC or not, bug 707884 is about that. And before that bug is fixed, 60Hhz will be used. You're already busting out the ad hominems and broken reasoning over *framerate scheduling*. Chill. The rAF spec does not clearly specify anything resembling the behavior you describe as "correct", nor does it specify that the games you describe are "broken". If you want the behavior you describe to be standard, perhaps you should work to make it so. Even if a game happens to run logic based on elapsed time, that does not actually mean it will work correctly at 120hz (or above). Elapsed time based game logic can easily be subject to accumulation of error due to the fact that the numbers in logic computations get smaller at higher framerates. Accumulated error can cause incorrect game behavior, desyncs in multiplayer games, and even crashes. The only way to be sure that a game works at 120hz is to test it, and nobody without a 120hz monitor can realistically test games this way right now. Please note that this is not the same as a game dropping to 30hz (like you mentioned in #134). A change like this would need to come with a feature for letting developers test their games at other update rates. The low precision of Date.now() and other timing mechanisms also present a problem once you go past 60hz and for things to be robust those games would have to already be using the bleeding-edge high precision timing API. Lots of games run with fixed frame rates for design reasons or process things in an event-based fashion. Even if those games work with rAF running at 120hz, running it at 120hz is wasteful if the game logic only runs at, say, 30hz. Your comments about games being wrong because they're written using Canvas or because they're emulators add nothing to the discussion. Please refrain from painting developers who disagree with you as incompetent or criminals. You can write 'RISK IS LOW' in caps and insist that you're correct but given that you continue to provide no data or even examples of games you're wasting your breath. Building HTML5 games that actually run smooth and work right across all browsers is hard enough without introducing more complication, so there really needs to be a demonstratable upside and clear path to improvement. For example, what OS, GPU, and browser version did you use for your Safari and Chrome tests? Were the browsers using GPU accelerated rendering and compositing? Did you have any flash player plugins running in any processes on your computer? Were you on a desktop, a laptop on mains power, or a laptop on battery? Does your PC have power management enabled? All of these factors can affect the scheduling behavior you see for requestAnimationFrame in modern browsers. I can trivially convince Chrome to drop rAF to 30hz on my top-end gaming machine due to bugs in their implementation. Simply saying 'it works in Safari and Chrome' is not adequate here. Wrong place for this discussion, but decent discussion to have. What we have now clearly isn't working; the patch and approach here doesn't make it any worse, and makes it much easier to move to 120hz, 240hz, or whatever. 60Hz is the current target, so that's what it'll be. Thanks, yes, it is a decent discussion. And apologies, if this is the wrong place. And yes, the patch here is an improvement. But it can still be so much better. Answers to your questions: >>Even if those games work with rAF running at 120hz, running it at 120hz is wasteful if the game logic only runs at, say, 30hz. -- Depends. If the parallax logic still smooth scrolls at 120fps, it's worthwhile even if the enemies move only at 30Hz. -- Conversely, if the control logic is sampled during requestAnimFrame, you get less input latency (1/120sec rather than 1/30sec) even if animations run only at 30Hz. -- This is a mountain out of a molehill "nitpick" >>Tests: (several of the most popular ones) and a few apps from Chrome (Angry Birds, Plants vs Zombies, etc). My tests were more than a month ago so I cannot remember all names, but I will endeavour to find time in the next month to start a spreadsheet of observations at 60Hz and 120Hz on one of my systems. >> OS and GPU - I tested Opera, Safari, Chrome on their Mac, iOS, and Windows versions, as well as IE10 on Windows. (IE8 doesn't VSYNC). I have a Linux box, but it's currently headless so I haven't tested that. Chrome also claims VSYNC support on both Android and Linux according to chrome://gpu. >>Flash plugins - Installed but not running. Occasionally, the flash banners were running on some sites. >>2 desktops, 3 laptops, iPod, iPad, 2 Android, PlayBook. Applicable observations: -. >>"I can trivially convince Chrome to drop rAF to 30hz on my top-end gaming machine due to bugs in their implementation. Simply saying 'it works in Safari and Chrome' is not adequate here." I haven't seen that lately. I was only able to do that on the WindowsXP machine and the slower Mac, especially when opening new tabs and windows. I had lots of difficulty making Chrome on Windows 8 (i7, 3770K CPU, SSD, with Radeon HD6850) slow down, even when loading applications in the background and loading other webpages -- it barely went below VSYNC, with only a few frame drops. IE10 behaved the same (window resizing didn't slow down requestAnimFrame and animations were full framerate even during rapid window resizing). Animations in games (whether Angry Birds or the Quake 3 Arena clone) ran blissfully unaware of whatever I was doing on the desktop computer (quad core, good CPU, SSD). On the other hand, I easily got Chrome to stutter on the Windows XP laptop when I opened a new Chrome window while watching animations in existing WebGL games. Also, more the reason that games need to optimize for unexpected rates. Safari ran at only approximately 15Hz on my Windows machines. _____ Some notes: It appears that eventually, a group of us could approach W3C to standardize web-based VSYNC method. Perhaps non-requestAnimationFrame methods -- I'm interested in joining that group. There's a need. If requestAnimationFrame *isn't* it, there'll eventually be another standardized method, or a JavaScript method to turn on/off vsync in requestAnimationFrame. Debate and figure out something that makes you and me and others happy. This is a worthy, and important discussion, due to web browsers recently (as of 2012) suddenly gaining widespread VSYNC behavior. Suddenly, something that was formerly not practical, is now suddenly practical. Maybe you are right, but that does not deny the need of *some* kind of mechanism.. Although not fully relevant here, I should point out it is a best-practices in the video game industry (and even in the professionally made web games, like Angry Birds) to synchronize to real-world. Simple-made Canvas2D web games resembles assumptions that existed in DOS games. (leading DOS games designed for 386's to run excessively fast on Pentium's, etc). Vlad, please re-cc me if you need my feedback here again (and if the noise level dies down)... Created attachment 689051 [details] [diff] [review] newest version, v1 Here is the newest version of this, with a few changes. On windows, we drop the high precision timer only after 90 seconds of no requests that need high precision timers. This avoids us ping-ponging high precision to low precision all the time. Also caught a related bug, in that we were dropping to low precision every frame because we were scheduling a new timer before we actually got notified that we needed high precision timers for the request (because rAF and similar have to be called in the callback). A try server run is at with builds at . Unfortunately, there are still some orange tests, most of them known orange failures but unfortunately not random in some cases. It's really annoying. I've fixed a number of orange tests before, but now there's a new set; I'll look into those tomorrow. Created attachment 690941 [details] [diff] [review] interdiff The interdiff Created attachment 690945 [details] [diff] [review] more context in interdiff Created attachment 690946 [details] [diff] [review] more correct and context in interdiff Comment on attachment 690946 [details] [diff] [review] more correct and context in interdiff Review of attachment 690946 [details] [diff] [review]: ----------------------------------------------------------------- ::: nsRefreshDriver.cpp.old @@ +418,2 @@ > static int32_t sHighPrecisionTimerRequests = 0; > +static nsCOMPtr<nsITimer> sDisableHighPrecisionTimersTimer = nullptr; Please make this a plain pointer, to avoid adding a static constructor. @@ +710,5 @@ > + // after 90 seconds. This is arbitrary, but hopefully good > + // enough. > + NS_ASSERTION(!sDisableHighPrecisionTimersTimer, "We shouldn't have an outstanding disable-high-precision timer !"); > + > + sDisableHighPrecisionTimersTimer = do_CreateInstance(NS_TIMER_CONTRACTID); (Remember to AddRef the plain pointer here.) Fixed and pushed to inbound: Based on 100% green tryserver run (well, before the plain pointer changes): (In reply to Ehsan Akhgari from comment #145) > (From update of attachment 690946 [details] [diff] [review]) > > + sDisableHighPrecisionTimersTimer = do_CreateInstance(NS_TIMER_CONTRACTID); > (Remember to AddRef the plain pointer here.) Vlad improved on this by calling forget on a temporary nsCOMPtr into the plain pointer. (Alternatively CallCreateInstance(NS_TIMER_CONTRACTID, &sDisableHighPrecisionTimersTimer); would also have worked.) I know this is closed bug. But since around 4-5 days, I am seeing frames being dropped while doing tab animation. In a sense, that while dragging, I do not see the complete animation, the tab jumps from place to place sometimes. Windows 7 x64 . 32 bit Nightly build HWA On. NVidia GTX 260 Can you file a new bug for that please? (In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #150) > Can you file a new bug for that please? Sure, filed bug 822694.
https://bugzilla.mozilla.org/show_bug.cgi?id=731974
CC-MAIN-2016-50
refinedweb
12,592
70.84
I have already written a basic TCP client and server in two previous tutorials and would like to build upon those. Until now the server has been able to receive only one client message and terminate. I just picked up my server.c code from the previous tutorial and added a loop while(1) // Just before the print “Waiting for connection” { } // close right after close(conn_desc) statement The intention was, to be able to read from multiple clients. Now I can connect to my server through multiple clients (one after the other) though not in parallel. The problem is that each new client has to keep waiting until the server has finished processing the earlier one and you can’t really tell if a client is going to hold on for long and send data after some time. To accommodate such a situation C language has already provided an API to be used on any type of descriptors whether they are files or sockets. The basic concept of this API is that you can listen on multiple descriptors / sockets at the same time and keep waiting until data is available on ANY one of them. As soon as somebody sends you a message, select will return that descriptor to the program. Note the difference as well as constraints: 1. We are listening to receive data from multiple clients which are all going to send data to server, but we don’t know who would do that first. 2. This does not accommodate the situation when one client is reading and other is writing. You need multiple threads or processes to handle that for sure. 3. Without this, the problem could be that if client one was connected first, is not sending data while client two who wants to get connected and immediately send data, but is waiting for connection until one is serviced by the server. This situation would be resolved by using select since it allows both clients to be connected and server waiting for anyone who sends data first. If both of them send together, the one whose data is received first is serviced first. The output is demonstrated by the image. I have one server running and I connected two clients to it in parallel. While both clients are connected, I can type text in any one of them and it is displayed immediately on the server. Basically server is monitoring both of the descriptors using select and responds when data is available on any of them. Implementation wise, we create a descriptor set structure to which we can add any number of descriptors. Then we pass that descriptor set to select API. This will be a blocking call (unless we have provided a timeout to select) until data is available on any of those descriptors. Then upon return from select, we call accept using the monitored descriptor and receive data. The code below works for any number of descriptors (and hence clients). The reason of calling this server partial is that it only works when you are expecting all clients to be sending data. If one client is sending data and the other is expecting data from you, it requires creating multiple threads or processes. We will be creating those in forth coming tutorials. #include <stdio.h> #include <unistd.h> #include <sys/socket.h> #include <sys/types.h> #include <netinet/in.h> #include <stdlib.h> #include <string.h> #include <sys/select.h> #include <sys/time.h> #define MAX_SIZE 50 int main() { int listen_desc, conn_desc; // main listening descriptor and connected descriptor int maxfd, maxi; // max value descriptor and index in client array int i,j,k; // loop variables fd_set tempset, savedset; // descriptor set to be monitored int client[FD_SETSIZE], numready; // array of client descriptors struct sockaddr_in serv_addr, client_addr; char buff[MAX_SIZE]; listen_desc = socket(AF_INET, SOCK_STREAM, 0); if(listen_desc < 0) printf("Failed creating socket\n"); bzero((char *)&serv_addr, sizeof(serv_addr)); serv_addr.sin_family = AF_INET; serv_addr.sin_addr.s_addr = INADDR_ANY; serv_addr.sin_port = htons(1234); if (bind(listen_desc, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) printf("Failed to bind\n"); listen(listen_desc, 5); maxfd = listen_desc; // Initialize the max descriptor with the first valid one we have maxi = -1; // index in the client connected descriptor array for (i=0; i<FD_SETSIZE; i++) client[i] = -1; // this indicates the entry is available. It will be filled with a valid descriptor FD_ZERO(&savedset); // initialize the descriptor set to be monitored to empty FD_SET(listen_desc, &savedset); // add the current listening descriptor to the monitored set while(1) // main server loop { // assign all currently monitored descriptor set to a local variable. This is needed because select // will overwrite this set and we will lose track of what we originally wanted to monitor. tempset = savedset; numready = select(maxfd+1, &tempset, NULL, NULL, NULL); // pass max descriptor and wait indefinitely until data arrives //printf("Waiting\n"); if(FD_ISSET(listen_desc, &tempset)) // new client connection { printf("new client connection\n"); int size = sizeof(client_addr); conn_desc = accept(listen_desc, (struct sockaddr *)&client_addr, &size); for (j=0; j<FD_SETSIZE; j++) if(client[j] < 0) { client[j] = conn_desc; // save the descriptor break; } FD_SET(conn_desc, &savedset); // add new descriptor to set of monitored ones if(conn_desc > maxfd) maxfd = conn_desc; // max for select if(j > maxi) maxi = j; // max used index in client array } for(k=0; k<=maxi; k++) // check all clients if any received data { if(client[k] > 0) { if(FD_ISSET(client[k], &savedset)) { int num_bytes; if( (num_bytes = read(client[k], buff, MAX_SIZE)) > 0) { buff[num_bytes] = '\0'; printf("Received:- %s", buff); } if(num_bytes == 0) // connection was closed by client { close(client[k]); FD_CLR(client[k], &savedset); client[k] = -1; } if(--numready <=0) // num of monitored descriptors returned by select call break; } } } } // End main listening loop close(listen_desc); return 0; }
http://forum.codecall.net/topic/64205-concurrent-tcp-server-using-select-api-in-linux-c/
CC-MAIN-2015-11
refinedweb
953
50.97
Create an FSLogix profile container for a host pool using Azure NetApp Files We recommend using FSLogix profile containers as a user profile solution for the Windows Virtual Desktop Preview files. You can create FSLogix profile containers using Azure NetApp Files, an easy-to-use Azure native platform service that helps customers quickly and reliably provision enterprise-grade SMB volumes for their Windows Virtual Desktop environments. To learn more about Azure NetApp Files, see What is Azure NetApp Files? This guide will show you how to set up an Azure NetApp Files account and create FSLogix profile containers in Windows Virtual Desktop. This article assumes you already have host pools set up and grouped into one or more tenants in your Windows Virtual Desktop environment. To learn how to set up tenants, see Create a tenant in Windows Virtual Desktop and our Tech Community blog post. The instructions in this guide are specifically for Windows Virtual Desktop users. If you're looking for more general guidance for how to set up Azure NetApp Files and create FSLogix profile containers outside of Windows Virtual Desktop, see the Set up Azure NetApp Files and create an NFS volume quickstart. Note This article doesn't cover best practices for securing access to the Azure NetApp Files share. Prerequisites Before you can create an FSLogix profile container for a host pool, you must: - Set up and configure Windows Virtual Desktop - Provision a Windows Virtual Desktop host pool - Enable your Azure NetApp Files subscription Set up your Azure NetApp Files account To get started, you need to set up an Azure NetApp Files account. Sign in to the Azure portal. Make sure your account has contributor or administrator permissions. Select the Azure Cloud Shell icon to the right of the search bar to open Azure Cloud Shell. Once Azure Cloud Shell is open, select PowerShell. If this is your first time using Azure Cloud Shell, create a storage account in the same subscription you keep your Azure NetApp Files and Windows Virtual Desktop. Once Azure Cloud Shell loads, run the following two cmdlets. az account set --subscription <subscriptionID> az provider register --namespace Microsoft.NetApp --wait In the left side of the window, select All services. Enter Azure NetApp Files into the search box that appears at the top of the menu. Select Azure NetApp Files in the search results, then select Create. Select the Add button. When the New NetApp account blade opens, enter the following values: - For Name, enter your NetApp account name. - For Subscription, select the subscription for the storage account you set up in step 4 from the drop-down menu. - For Resource group, either select an existing resource group from the drop-down menu or create a new one by selecting Create new. - For Location, select the region for your NetApp account from the drop-down menu. This region must be the same region as your session host VMs. Note Azure NetApp Files currently doesn't support mounting of a volume across regions. When you're finished, select Create to create your NetApp account. Create a capacity pool Next, create a new capacity pool: Go to the Azure NetApp Files menu and select your new account. In your account menu, select Capacity pools under Storage service. Select Add pool. When the New capacity pool blade opens, enter the following values: - For Name, enter a name for the new capacity pool. - For Service level, select your desired value from the drop-down menu. We recommend Premium for most environments. Note The Premium setting provides the minimum throughput available for a Premium Service level, which is 256 MBps. You may need to adjust this throughput for a production environment. Final throughput is based on the relationship described in Throughput limits. - For Size (TiB), enter the capacity pool size that best fits your needs. The minimum size is 4 TiB. When you're finished, select OK. Join an Active Directory connection After that, you need to join an Active Directory connection. Select Active Directory connections in the menu on the left side of the page, then select the Join button to open the Join Active Directory page. Enter the following values in the Join Active Directory page to join a connection: - For Primary DNS, enter the IP address of the DNS server in your environment that can resolve the domain name. - For Domain, enter your fully qualified domain name (FQDN). - For SMB Server (Computer Account) Prefix, enter the string you want to append to the computer account name. - For Username, enter the name of the account with permissions to perform domain join. - For Password, enter the account's password. Note It's best practice to confirm that the computer account you created in Join an Active Directory connection has appeared in your domain controller under Computers or your enterprise's relevant OU. Create a new volume Next, you'll need to create a new volume. Select Volumes, then select Add volume. When the Create a volume blade opens, enter the following values: - For Volume name, enter a name for the new volume. - For Capacity pool, select the capacity pool you just created from the drop-down menu. - For Quota (GiB), enter the volume size appropriate for your environment. - For Virtual network, select an existing virtual network that has connectivity to the domain controller from the drop-down menu. - Under Subnet, select Create new. Keep in mind that this subnet will be delegated to Azure NetApp Files. Select Next: Protocol >> to open the Protocol tab and configure your volume access parameters. Configure volume access parameters After you create the volume, configure the volume access parameters. Select SMB as the protocol type. Under Configuration in the Active Directory drop-down menu, select the same directory that you originally connected in Join an Active Directory connection. Keep in mind that there's a limit of one Active Directory per subscription. In the Share name text box, enter the name of the share used by the session host pool and its users. Select Review + create at the bottom of the page. This opens the validation page. After your volume is validated successfully, select Create. At this point, the new volume will start to deploy. Once deployment is complete, you can use the Azure NetApp Files share. To see the mount path, select Go to resource and look for it in the Overview tab. Configure FSLogix on session host virtual machines (VMs) This section is based on Create a profile container for a host pool using a file share. Download the FSLogix agent .zip file while you're still remoted in the session host VM. Unzip the downloaded file. In the file, go to x64 > Releases and run FSLogixAppsSetup.exe. The installation menu will open. If you have a product key, enter it in the Product Key text box. Select the check box next to I agree to the license terms and conditions. Select Install. Navigate to C:\Program Files\FSLogix\Apps to confirm the agent installed. From the Start menu, run RegEdit as administrator. Navigate to Computer\HKEY_LOCAL_MACHINE\software\FSLogix. Create a key named Profiles. Create a value named Enabled with a REG_DWORD type set to a data value of 1. Create a value named VHDLocations with a Multi-String type and set its data value to the URI for the Azure NetApp Files share. Assign users to session host Open PowerShell ISE as administrator and sign in to Windows Virtual Desktop. Run the following cmdlets: Import-Module Microsoft.RdInfra.RdPowershell # (Optional) Install-Module Microsoft.RdInfra.RdPowershell $brokerurl = "" Add-RdsAccount -DeploymentUrl $brokerurl When prompted for credentials, enter the credentials for the user with the Tenant Creator or RDS Owner/RDS Contributor roles on the Windows Virtual Desktop tenant. Run the following cmdlets to assign a user to a Remote Desktop group: $wvdTenant = "<your-wvd-tenant>" $hostPool = "<wvd-pool>" $appGroup = "Desktop Application Group" $user = "<user-principal>" Add-RdsAppGroupUser $wvdTenant $hostPool $appGroup $user Make sure users can access the Azure NetApp File share Open your internet browser and go to. Sign in with the credentials of a user assigned to the Remote Desktop group. Once you've established the user session, sign in to the Azure portal with an administrative account. Open Azure NetApp Files, select your Azure NetApp Files account, and then select Volumes. Once the Volumes menu opens, select the corresponding volume. Go to the Overview tab and confirm that the FSLogix profile container is using space. Connect directly to any VM part of the host pool using Remote Desktop and open the File Explorer. Then navigate to the Mount path (in the following example, the mount path is \\anf-SMB-3863.gt1107.onmicrosoft.com\anf-VOL). Within this folder, there should be a profile VHD (or VHDX) like the one in the following example. Next steps You can use FSLogix profile containers to set up a user profile share. To learn how to create user profile shares with your new containers, see Create a profile container for a host pool using a file share. Feedback
https://docs.microsoft.com/en-us/azure/virtual-desktop/create-fslogix-profile-container
CC-MAIN-2019-43
refinedweb
1,508
56.25
Asked by: How to change the name of C# Project Dear friends my question may sounds little bit different. yes it is how to change the name of C# project. i have made a .dll file using VS2005 C# it contains several class files and windowsforms and referal dll's now after finishing that project i just want to change the name of the whole project. any guide lines to follow. Thnaks and regards RanuFriday, January 12, 2007 3:54 AM Question All replies - To rename the compiled dll *.dll click project then yourproject properties and change the text in the assembly name textbox, save your project and rebuild. To rename the project file *.csproj , right-click the project name in the solution explorer and select rename.Friday, January 12, 2007 3:57 AM both, i have to change the name in my source code and the compiled dll also i mean i have to completely change the name including the source code folder name. please help me out thanks and regards ranuFriday, January 12, 2007 4:02 AM thanks if i change the both *.dll name and *.csproj name is that enough is that changes change the name every where(like name spaces and displayname and all) regards ranuFriday, January 12, 2007 4:19 AM - right click on project > properties > and change the Default namespace to the name of the project you will be changing it to. This should hopefully now be ok.Saturday, January 13, 2007 8:03 PM Change the name and path of a project as well as the name of the solution Unfortunately all information given in Internet is incomplete and rather confusing. After spending some time I figure out the following steps that allow changing all: the name of the project the name of the starting directory as well as that of the solution the name of the solution Here we go... 1. open project "FileName.sln" in Visual Studio 1.1. click on the "Edit" menu. 1.2. move cursor on "Find and Replace" 1.3. select and click on "Replace in Files" 1.4. in the field "Find what:" insert the "FileName" 1.5. in the field "Replace with:" insert "NewName" 1.6. click on "Replace All" 1.7. save the file 1.8. exit Visual Studio. 2. Open the windows explorer. 3. rename the directory of the project to the "NewName" 4. go into the directory "NewName" 5. delete file "NewName.suo" 6. edit "NewName.sln" with a text editor 6.1. replace the "FileName" with the "NewName" 6.2. save the file 6.3. close the editor 7. rename the directory "FileName" to "NewName" 8. get into that directory 9. edit "NewName.csproj" with a text editor 9.1. repeat steps 3.1, 3.2, 3.3 10. edit "MainWindow.g.i.cs" with a text editor 10.1.repeat steps 3.1, 3.2, 3.3 11. edit "MainWindow.g.cs" with a text editor 11.1.repeat steps 3.1, 3.2, 3.3 12. edit "MainWindow.baml" with a text editor 12.1.repeat steps 3.1, 3.2, 3.3 13. delete all files starting with "FileName..." 14. delete file "Properties.Resources.Designer.cs.dll" lacated in .."\obj\x86\Debug\TempPE" 15. delete all files in directory .."\bin\debug" 16. Open project "NewName" 17. rebuild solution 18. DONE!!!Sunday, May 30, 2010 12:16 AM., June 02, 2010 6:32 PM. it doesn't work for me. i have a very simple project (solution?) for a class. it has a half-dozen webforms and a couple C# files. if i do what you said and press Ctrl-F5 (Run without debug), it gives me countless errors because none of the names of tools/controls i created in the ASP files (.aspx) are recognized (for instance, Label1). When I look at a .aspx file, the first line still has code like this: Inherits="OLDNAME.WebForm0". However, manually making that change doesn't get rid of the errors.Friday, May 11, 2012 9:35 PM Hey ApollonZinos ... your step by step worked for me. I didn't have the "MainWindow" files.Thursday, December 06, 2012 8:19 PM
https://social.msdn.microsoft.com/Forums/en-US/33291d71-1266-48ab-a7e6-1716447a893f/how-to-change-the-name-of-c-project?forum=csharpide
CC-MAIN-2015-48
refinedweb
700
78.65
When we review and approve an item submitted through a biz form the new item does not appear on the site until the cache is expired\cleared. Is caching different for BizForm items? Because when regular content is approved and published it instantly appears on the site. Then in the custom query repeater properties in the Performance section, set the Partial Cache Minutes to 0 otherwise it will cache with the rest of the site. Unfortunately, from a performance standpoint this stinks because it will run the query each time the page loads, but if this is what you need then this will do that. Sounds as if you may be doing something a bit different with the bizform data than what comes out of the box is that a correct statement? By default, the bizform results aren't meant to be displayed on the live site. Yes I have a query against the table where the bizform results are stored to display the data on the site. Do you see the results in Forms > YourFormName > RecordedData or when you query your form table directly in SQL? I assume you use some sort API classes from CMS.FormEngine namespace. You might need to use some cache clearing methods like FormHelper.Clear() FormHelper.Clear() Mark, have you specified cache dependency for the web part that presents data on a page? Yes, the results appear immediatly in the RecordedData grid. No I haven't specified any cache dependencies on the repeater that displays the data. Should I add a partial cache on customtableitem? Thanks Brenden, setting the partial cache to 0 on the repeater works! Yeah there will be a hit from running the query each time but the result set is small so it shouldn't be too bad. Please, sign in to be able to submit a new answer.
https://devnet.kentico.com/questions/caching-on-bizform-results
CC-MAIN-2020-50
refinedweb
309
71.34
Every application needs to manage state. In React, we can go a long way using hooks, and in particular useState(), and passing props around. When things get more complicated than that, I like to immediately jump to a state management library. One of my favorites lately is easy-peasy. It builds on top of Redux, and it provides a simpler way to interact with state. I like to keep my code as simple as possible. Simple is understandable. Simple is beautiful. Complexity should be avoided at all costs, and if possible hidden away in libraries that expose a simple interface to us. It’s the case of this library, and that’s why I like it! Install it using: npm install easy-peasy First of all we need to create a store. The store is the place where we’ll store our state, and the functions needed to modify it. Create the store in the file store.js in the root of your project, with this content: store.js import { createStore, action } from 'easy-peasy' export default createStore({}) We’ll add more things to this file later. Now wrap your React app into the StoreProvider component provided by easy-peasy. It depends on what you use. With create-react-app for example, add this to your index.js file: //... import { StoreProvider } from 'easy-peasy' import store from '../store' ReactDOM.render( <React.StrictMode> <StoreProvider store={store}> <App /> </StoreProvider> </React.StrictMode>, document.getElementById('root') ) This operation makes now our store available in every component of the app. Now you’re ready to go in the store.js file and add some state, and some actions to change that state. Let’s do a simple example. We can create a name state, and we create a setName action to change the name: import { createStore, action } from 'easy-peasy' export default createStore({ name: '', setName: action((state, payload) => { state.name = payload }) }) Now inside any component of your app you can import useStoreState and useStoreActions from easy-peasy: import { useStoreState, useStoreActions } from 'easy-peasy' We use useStoreState to access the store state properties: const name = useStoreState((state) => state.name) and useStoreActions to access the actions we defined: const setName = useStoreActions((actions) => actions.setName) Now we can call this action whenever something happens in our app, for example if we click a button: <button onClick={() => { setName('newname') }} > Set name </button> Now any other component that is accessing the state through useStoreState() will see the value updated. This is a simple example but it all starts from this. You can add as many state variables and as many actions you want, and I found that centralizing it all to a store.js file makes the application very easy to scale. Download my free React Handbook!
https://flaviocopes.com/react-easy-peasy/
CC-MAIN-2022-27
refinedweb
457
66.54
89185/how-to-write-a-hdf-file-from-a-dataframe Hi Guys, I am new to HDFS Cluster. I want to integrate Pandas with HDFS. Can anyone tell me how to write a HDF file from a DataFrame? Hi@akhtar, Hierarchical Data Format (HDF) is a set of file formats (HDF4, HDF5) designed to store and organize large amounts of data. You can use the below command to do so. df = pd.DataFrame([[1, 1.0, 'a']], columns=['x', 'y', 'z']) df.to_hdf('./store.h5', 'data') Hi @Mike. First, read both the csv ...READ MORE Using DictWriter there is no need in ...READ MORE You can use the pandas library to ...READ MORE import json from pprint import pprint with open('data.json') as ...READ MORE du command is used for to see ...READ MORE Hadoop put & appendToFile only reads standard ...READ MORE You can use dfsadmin which runs a ...READ MORE hdfs dfsadmin -report This command tells fs ...READ MORE Hi@akhtar, You can use the import keyword to ...READ MORE Hi@akhtar, Python pickle module is used for serializing ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/89185/how-to-write-a-hdf-file-from-a-dataframe
CC-MAIN-2020-50
refinedweb
190
78.85
This is your resource to discuss support topics with your peers, and learn from each other. 02-15-2013 03:40 AM Hello, I´m developing an app where it listens the messages from an email account and it does an specific action. For that, I use this: public class EmailListener implements FolderListener { public void registerEmailListener(boolean register) { ServiceBook sb = ServiceBook.getSB(); ServiceRecord[] srs = sb.findRecordsByCid("CMIME"); if (srs != null) { ServiceRecord sr; ServiceConfiguration sc; /* unregister all email listener to avoid duplicate listeners */ desregisterAllEmailListeners(srs); for (int i = srs.length - 1; i >= 0; --i) { sr = srs[i]; try { sc = new ServiceConfiguration(sr); registerEmail(sc, register); } catch (Exception e) { e.printStackTrace(); } } } } private void registerEmail(ServiceConfiguration sc, boolean register) { Session s = Session.getInstance(sc); if (s != null) { Folder[] folders = s.getStore().list(); for (int foldercnt = folders.length - 1; foldercnt >= 0; --foldercnt) { Folder f = folders[foldercnt]; // If the folder doesn't have the service book for this email, we do not want to register it if (f.getFullName().indexOf(sc.getEmailAddress()) >= 0) { recurse(f, register); } } } } private void recurse(Folder f, boolean add) { if (f.getType() == Folder.INBOX) { f.removeFolderListener(this); if (add) { f.addFolderListener(this); } } Folder[] farray = f.list(); for (int fcnt = farray.length - 1; fcnt >= 0; --fcnt) { recurse(farray[fcnt], add); } } public void desregisterAllEmailListeners(ServiceRecord[] srs) { ServiceRecord sr; ServiceConfiguration sc; for (int i = srs.length - 1; i >= 0; --i) { sr = srs[i]; try { sc = new ServiceConfiguration(sr); registerEmail(sc, false); } catch (Exception e) { e.printStackTrace(); } } } public void messagesAdded(final FolderEvent e) { .... } .... } When an email has come, it executes messageAdded and does the specific action. The problem is that in some devices it works, and in another doesn´t work. It registers the email listener correctly in every case, but in some cases don´t execute messageAdded. What´s the problem? It works in: 9300 OS 6.0 and OS 5.0 and 9220 OS 7.1 It doesn´t work in 9320 OS 7.1, 9380 OS 7.1 and 9800 OS 6.0 Thank you very much. 02-15-2013 06:12 AM "Doesn't work" is not a very useful problem description. Can you describe exactly what happens? I presume you are debugging on the device, can you tell us how far through your code it gets, what exceptions are thrown, etc... I think you need some additional logging in this code so that you can actually catch problems as they occur. WIthout it you are in a difficult situation with problems like this, especially when they occur in the wild. You can't always debug on device. In addition, you will not see anything from this: } catch (Exception e) { e.printStackTrace(); } If you want to see a stack Trace, then you need to catch Throwable, as follows: } catch (Throwable t) { t.printStackTrace(); System.out.printlin("Exception : " + t.toString()); } 02-15-2013 07:39 AM Thank you for reply. In my real code I don´t use t.printStackTrace(); it is a summary of my code. I use a log, but I haven´t shown here. My problem is that, when I do f.addFolderListener(this); it works (doesn´t show me any exception and any of my logs) but when I receive a message to my email account (with this listener) doesn´t show any logs of messageAdded. 02-15-2013 08:19 AM OK, sorry my comment was probably not justified. Ignore that. I see you have based your code on this KB article: My quick review suggests the code is fine too. So I must admit I am at a loss to explain your problem. Do you do something 100% safe right at the start of your listener, say System.out.println(...) to make 100% sure that the listener is never invoked? Given I think the registration code is working, the suspicion then fails on the listener which I don't think you have shown us. 02-18-2013 08:14 AM - edited 02-18-2013 10:43 AM Yes, I have a class Logger where I set EventLoggers. In the begin of messageAdded method I show a Logger. When I receive a message to my email, this log doesn´t appear. (This Logger class works perfecly in others parts of the code) public void messagesAdded(final FolderEvent e) { new Thread(new Runnable() { public void run() { try { logger.logDebug("EmailListener.messagesAdded BEGIN"); Message mail = e.getMessage(); ...... } What´s the optimal setting of emails accounts for this to work, I mean, only works the default account? it works without bis?, etc 02-19-2013 08:11 AM Yes!!, it executes messageAdded....debugging. I don´t know why doesn´t show the logs in some devices.... The moral is: Don´t trust just in logEvent, use this and Debug tool, both!!
https://supportforums.blackberry.com/t5/Java-Development/AddFolderListener-works-in-some-devices-but-in-another-doesn-t/m-p/2166251/highlight/true
CC-MAIN-2016-50
refinedweb
791
68.47
I am trying to keep my balance and make it loop for 5 times then give the current balance. What did I do wrong? Also is this what a five-element one-dimensional array is? I dont have my book with me so I'm running into problems. thanks Code:#include <iostream> using namespace std; int main() { float InitBal; float CheckAmount; float DepositAmount; float CurBal; cout << "Enter Initial Balance of 1st Customer that banks with Bank Of Bryan:"; cin >> InitBal; cout << "Write A Check For A Specified Amount:"; cin >> CheckAmount; cout << "Deposit A Specified Amount Into The Checking Account:"; cin >> DepositAmount; CurBal = InitBal - CheckAmount + DepositAmount; for(int i=0;i<=5;i++) { cout << "The current balance of this customer is:" << CurBal << endl; } return 0; }
http://cboard.cprogramming.com/cplusplus-programming/44143-looping-problem.html
CC-MAIN-2013-48
refinedweb
124
65.05
The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at fast.ai, including "out of the box" support for vision, text, tabular, and collab (collaborative filtering) models. If you're looking for the source code, head over to the fastai repo on GitHub. For brief examples, see the examples folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using resnet18 (from the vision example): path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(1) To install or update fastai, we recommend conda: conda install -c pytorch -c fastai fastai For troubleshooting, and alternative installations (including pip and CPU-only options) see the fastai readme. To get started quickly, click Applications on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules. We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function¶ Types for each parameter, and the return type, are displayed following standard Python type hint syntax. Sometimes for compound types we use type variables. Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking Image in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents. For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by ::. For vision.transforms, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with import *. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding. In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using import * . In general, for interactive computing, to just play around the core modules and the training loop you can do from fastai.basics import * If you want experiment with one of the applications such as vision, then you can do from fastai.vision import * That will give you all the standard external modules you'll need, in their customary namespaces (e.g. pandas as pd, numpy as np, matplotlib.pyplot as plt), plus the core fastai libraries. In addition, the main classes and functions for your application ( fastai.vision, in this case), e.g. creating a DataBunch from an image folder and training a convolutional neural network (with cnn_learner), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use from fastai.basics import *. Of course, you can also just import the specific symbols that you require, without using import *. If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: Learner fastai.basic_train.Learner At the base of everything are the two modules core and torch_core (we're not including the fastai. prefix when naming modules in these docs). They define the basic functions we use in the library; core only relies on general modules, whereas torch_core requires pytorch. Most type-hinting shortcuts are defined there too (at least the one that don't depend on fastai classes defined later). Nearly all modules below import torch_core. Then, there are three modules directly on top of torch_core: basic_data, which contains the class that will take a Datasetor pytorch DataLoaderto wrap it in a DeviceDataLoader(a class that sits on top of a DataLoaderand is in charge of putting the data on the right device as well as applying transforms such as normalization) and regroup then in a DataBunch. layers, which contains basic functions to define custom layers or groups of layers metrics, which contains all the metrics This takes care of the basics, then we regroup a model with some data in a Learner object to take care of training. More specifically: callback(depends on basic_data) defines the basis of callbacks and the CallbackHandler. Those are functions that will be called every step of the way of the training loop and can allow us to customize what is happening there; basic_train(depends on callback) defines Learnerand Recorder(which is a callback that records training stats) and has the training loop; callbacks(depends on basic_train) is a submodule defining various callbacks, such as for mixed precision training or 1cycle annealing; train(depends on callbacks) defines helper functions to invoke the callbacks more easily. From basic_data we can split on one of the four main applications, which each has their own module: vision, text collab, or tabular. Each of those submodules is built in the same way with: - a submodule named transformthat handles the transformations of our data (data augmentation for computer vision, numericalizing and tokenizing for text and preprocessing for tabular) - a submodule named datathat contains the class that will create datasets specific to this application and the helper functions to create DataBunchobjects. - a submodule named modelsthat contains the models specific to this application. - optionally, a submodule named {application}.learnerthat will contain Learnerspecific to the application. Here is a graph of the key module dependencies:
https://docs.fast.ai/
CC-MAIN-2020-05
refinedweb
1,070
52.49
To Whom It May Concern: I am sorry to trouble you with a lenghty forum post, however I was wondering if anyone would be able to assist me in this matter. To reiterate, I am a beginner to Java programming. I am currently reading a book teaching Java game programming (my current interest) and practicing the activities contained in this book. I have hit a roadblock in my quest to complete one of the activities. The book requires (and therefore I am using) Java SE 6 and TextPad. The program is saved as JFrameDemo.java. The code is written as: package jframedemo; import javax.swing.*; import java.awt.*; public class JFrameDemo extends JFrame { public JFrameDemo() { super("JFrameDemo"); setSize(400,400); setVisible(true); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } public void paint(Graphics g) { super.paint(g); g.setColor(Color.WHITE); g.fillRect(0,0,400,400); g.setColor(Color.orange); g.setFont(new Font("Arial", Font.BOLD,18)); g.drawString("Doing graphics with a JFrame!", 60, 200); } public static void main(String[] args) { new JFrameDemo(); } } The error I receive is: Exception in thread "main" java.lang.NoClassDefFoundError: jframedemo/JFrameDemo Caused by: java.lang.ClassNotFoundException: jframedemo.JFrameDemo> Could not find the main class:jframedemo.JFrameDemo. Program will exit. Press any key to continue... For the past few hours prior to signing up for this forum, I had been researching this issue and seen multiple responses on how to remedy this issue. Most involved classpath. The only file I could locate was the "JFrameDemo.java" file which I had saved on My Documents myself. I do not have any knowledge as to how to locate the classpaths of files such as the javax.swing and java.awt files, if that happens to be the remedy. If you could, kindly explain to me the answer to my inquiry, as well as to how to locate the classpaths on my computer and change them if necessary. Once again, I would like to apologize for this rather complicated and tedious question and post. I would appreciate any input you have to offer to what seems to me to be quite an involved process. Thank you very much for your assistance.
http://www.javaprogrammingforums.com/%20exceptions/12574-java-lang-noclassdeffounderror-printingthethread.html
CC-MAIN-2016-07
refinedweb
362
59.4
windowFlags SplashScreen error on exit Qt4.8 PyQt4.9 windows7 Hi everyone! I'm using windowsFlags(QtCore.Qt.SplashScreen) for main window of my project, to not showing app in taskbar. To close my app I rewrite keyPressEvent: def keyPressEvent(self, event): if (event.modifiers() == QtCore.Qt.ControlModifier) & (event.key() == QtCore.Qt.Key_Q): self.close() In Ubuntu it works well, but in windows it's not terminate the main process. When I press Ctrl+Q it close my app, and gui is disappears but the terminal is still busy. The PyCharm tells me that the process is still running. Without flag SplashScreen the app exits well. What I'm doing wrong? How to close the app with SplashScreen flag and terminate main process (thread)? I figured out by myself I use QApplication.quit and app exits well
https://forum.qt.io/topic/66505/windowflags-splashscreen-error-on-exit
CC-MAIN-2018-22
refinedweb
137
70.6
We’ve already learned about pickle, so why do we need another way to (de)serialize Python objects to(from) disk or a network connection? There are three major reasons to prefer JSON over pickle: - When you’re unpickling data, you’re essentially allowing your data source to execute arbitrary Python commands. If the data is trustworthy (say stored in a sufficiently protected directory), that may not be a problem, but it’s often really easy to accidentally leave a file unprotected (or read something from network). In these cases, you want to load data, and not execute potentially malicious Python code! - Pickled data is not easy to read, and virtually impossible to write for humans. For example, the pickled version of {"answer": [42]}looks like this: (dp0 S'answer' p1 (lp2 I42 as. In contrast, the JSON representation of {"answer": [42]} is …. {"answer": [42]}. If you can read Python, you can read JSON; since all JSON is valid Python code! - Pickle is Python-specific. In fact, by default, the bytes generated by Python 3’s pickle cannot be read by a Python 2.x application! JSON can be read by virtually any programming language – just scroll down on the official homepage to see implementations in all major and some minor languages. So how do you get the JSON representation of an object? It’s simple, just call json.dumps: import json obj = {u"answer": [42.2], u"abs": 42} print(json.dumps(obj)) # output: {"answer": [42.2], "abs": 42} Often, you want to write to a file or network stream. In both Python 2.x and 3.x you can call dump to do that, but in 3.x the output must be a character stream, whereas 2.x expects a byte stream. Let’s look how to load what we wrote. Fittingly, the function to load is called loads (to load from a string) / load (to load from a stream): import json obj_json = u'{"answer": [42.2], "abs": 42}' obj = json.loads(obj_json) print(repr(obj)) When the objects we load and store grow larger, we puny humans often need some hints on where a new sub-object starts. To get these, simply pass an indent size, like this: import json obj = {u"answer": [42.2], u"abs": 42} print(json.dumps(obj, indent=4)) Now, the output will be a beautiful { "abs": 42, "answer": [ 42.2 ] } I often use this indentation feature to debug complex data structures. The price of JSON’s interoperability is that we cannot store arbitrary Python objects. In fact, JSON can only store the following objects: - character strings - numbers - booleans ( True/ False) None - lists - dictionaries with character string keys Every object that’s not one of these must be converted – that includes every object of a custom class. Say we have an object alice as follows: class User(object): def __init__(self, name, password): self.name = name self.password = password alice = User('Alice A. Adams', 'secret') then converting this object to JSON will fail: >>> import json >>> json.dumps(alice) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.3/json/__init__.py", line 236, in dumps return _default_encoder.encode(obj) File "/usr/lib/python3.3/json/encoder.py", line 191, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib/python3.3/json/encoder.py", line 249, in iterencode return _iterencode(o, 0) File "/usr/lib/python3.3/json/encoder.py", line 173, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: <__main__.User object at 0x7f2eccc88150> is not JSON serializable Fortunately, there is a simple hook for conversion: Simply define a default method.: def jdefault(o): return o.__dict__ print(json.dumps(alice, default=jdefault)) # outputs: {"password": "secret", "name": "Alice A. Adams"} o.__dict__ is a simple catch-all for user-defined objects, but we can also add support for other objects. For example, let’s add support for sets by treating them like lists: def jdefault(o): if isinstance(o, set): return list(o) return o.__dict__ pets = set([u'Tiger', u'Panther', u'Toad']) print(json.dumps(pets, default=jdefault)) # outputs: ["Tiger", "Panther", "Toad"] For more options and details ( ensure_ascii and sort_keys may be interesting options to set), have a look at the official documentation for JSON. JSON is available by default in Python 2.6 and newer, before that you can use simplejson as a fallback. You might also like : *) The self variable in python explained *) Python socket network programming *) *args and **kwargs in python explained 9 thoughts on “Storing and Loading Data with JSON” A reblogué ceci sur Quantum Post. Nicely explained. I am using with open(‘database.txt’,’a+’) as myfile: json.dump(lst,myfile) to write a list of integers called ist in txt file. And it is in a loop, so it adds every time another list in the file. I would like to make it to write every new list to a new line in file, but don’t know how to do that. Or insert ‘,’ between everyblist in a file, so that when I open the file for read, have readable list of lists for manipulation. How can I do that? By default, json.dump will output just a single line, so you’re already good to go. If you want just one large list, simply read in the file with json.load, overwrite it (with myfile.truncate()), and write your new list out. If you’re worried about data consistency, create a temporary file in the same directory, write into that, and then rename it to ‘database.txt’. This was very very helpful! Thank you! Reblogged this on Yitong Wang and commented: Still trying to figure out this json thing… Hi, since a long time I’m searching for the sample code to store class object (like addresses) to file with json. I want to test your explanation but it does not run on my side! Can you please post the complete example code here ( I want to copy & past it for my understanding). Thx, if your object o is an object of a built-in class like str or list (say an email address “me@example.org”), import json;json.dumps(o) is literally everything you need. It sounds like your question would be better suited to stackoverflow. Feel free to drop me (phihag@phihag.de) a note about your stackoverflow question if it doesn’t get answered. Make sure to include all relevant details, including the definition of what you understand as a “class object” (that would be an object representing a class, which I don’t think you want) and “addresses” (postal? email? memory? inodes? IP?).
https://pythontips.com/2013/08/08/storing-and-loading-data-with-json/
CC-MAIN-2019-30
refinedweb
1,116
66.94
The writer for a MinidumpContextMIPS64 structure in a minidump file. More... #include "minidump/minidump_context_writer.h" The writer for a MinidumpContextMIPS64 structure in a minidump file. Returns a pointer to the context structure that this object will write. constpointer to this object’s private data so that a caller can populate the context structure directly. This is done because providing setter interfaces to each field in the context structure would be unwieldy and cumbersome. Care must be taken to populate the context structure correctly. The context structure must only be modified while this object is in the kStateMutable state. Returns the size of the context structure that this object will write. Implements crashpad::MinidumpContextWriter. Initializes the MinidumpContextMIPS based on context_snapshot. Writes the object’s content. trueon success. falseon error, indicating that the content could not be written to the minidump file. Implements crashpad::internal::MinidumpWritable.
https://crashpad.chromium.org/doxygen/classcrashpad_1_1MinidumpContextMIPS64Writer.html
CC-MAIN-2019-13
refinedweb
144
50.33
Strange... doing C++ style returns me "error: 'flowC' does not name a type'. What is this about? This sounds like a simple problem, but you're not providing enough information. Write up a complete but minimal code example that illustrates the problem you're talking about, and post it here.]]> Is flowC decalred in a different namespace?]]> Strange... doing C++ style returns me "error: 'flowC' does not name a type'. What is this about? Also.. Do you know how to write the buffer only when its full? I saw somewhere I can use something like this... char buffer[2048]; flowC.rdbuff()->pubsetbuf(buffer, 2048); ....... flowC << ValOne <<< ", " << ValTwo << endl; Basic I/O? char buffer[100]; // C++ style ofstream file; file.open ("data.bin", ios::out | ios::binary); file.write(buffer, sizeof buffer); /* C style */ FILE* file2 = fopen("data2.bin", "wb"); fwrite(buffer, sizeof buffer, 1, file2); A lot of people prefer the old C style functions for binary i/o even in C++ (and iostream for formatted i/o). Writing and readong more complex structures requires some castin, in C++ usually done with reinterpret_cast<char*>(someStructure).]]> How can I write output to a file in binary format instead of ascii? Or... if I am using a buffer.. to write a buffer to binary format instead of ascii? I am writing a lot of data to a text file and I would like to reduce the size by writing it all into a binary file instead of ascii?]]>
https://bbs.archlinux.org/extern.php?action=feed&tid=154572&type=atom
CC-MAIN-2016-36
refinedweb
246
76.93
A modifier is used to alter the meaning of the base data type so that, the new data type thus generated can fit into the requirements in a better way. C++ allows some of its basic data types, like the char, int, and double data types to have modifiers preceding them. The data type modifiers allowed in C++ are listed here: - signed - unsigned - long - short When these modifiers are prefixed then, the data storage and the range of the given basic data type is altered. Following modifiers can be applied to the base data types: - signed, unsigned, long, and short can be applied to integer base types. - signed and unsigned can be applied to char - long can be applied to double. - signed and unsigned can also be used as a prefix to long or short modifiers. Syntax: modifier base data type = value; For example: unsigned long int = 556790; C++ also allows a shorthand notation for declaring the unsigned, short, or long integers. One can simply use the word unsigned, short, or long, without writing the data type int. The compiler automatically implies int. For example: unsigned x; unsigned int y; Both of the statements above declare the unsigned int variables, x and y. These two statements are exactly the same, the first one is using the shorthand assignment while the second statement is complete in itself. #include <iostream.h> /* This program shows the difference between * signed and unsigned integers. */ void main() { short int i; // a signed short integer short unsigned int j; // an unsigned short integer j = 50000; i = j; cout << i << "\n " << j; } When this program is run, following is the output: -15536 50000 The above result is because the bit pattern that represents 50,000 as a short unsigned integer is interpreted as -15,536 by a short. Report Error/ Suggestion
https://www.studymite.com/cpp/modifier-types-in-cpp/?utm_source=related_posts&utm_medium=related_posts
CC-MAIN-2020-50
refinedweb
303
58.21
Hi, I have a lot of code written for the SSD1306 that I use with many ESP32 boards. I use to use the regular #include <SSD1306Wire.h> // from GitHub - ThingPulse/esp8266-oled-ssd1306: Driver for the SSD1306 and SH1106 based 128x64, 128x32, 64x48 pixel OLED display running on ESP8266/ESP32 that I use to instantiate with SSD1306Wire display(0x3c, I2C_SDA, I2C_SCL); //OLED 128*64 soldered and use the functions as display.drawString(10, 0, "| Set"); That does however not work with a Heltec ESP_LoRa module that I want to use. The display use the Heltec library #include "heltec.h" is instantiated with Heltec.begin(true /DisplayEnable Enable/, false /LoRa Disable/, true /Serial Enable/); and use the functions as Heltec.display->drawString(10, 0, "| Set"); Do I have a way to use the the classical notation display.drawString(10, 0, "| Set"); with this library? I just want to swap the libraries and eventually the instantiations with #ifdef directives. but keep the display instructions unchanged. Thank you for your help. Laszlo
https://forum.arduino.cc/t/oled-display-ssd1306-heltec-fashion/886645
CC-MAIN-2021-31
refinedweb
169
67.04
Set up and structure a basic App Engine application, test it locally, and deploy the application. This page and sample are part of an extended learning example of a simple blog application where users can upload posts. You should already be familiar with the Go 1.9 programming language and basic web development. To see an overview of all the pages in this example, go to Building an App with Go 1.9. Costs There are no costs associated with running this guide. Running this sample app alone does not exceed your free quota. Before you begin Before developing your application, complete the following steps: Create a new Google Cloud Platform project and App Engine application using the Google Cloud Platform Console: When prompted, select the region where you want your App Engine application located. After your application is created, you will be on the App Engine Dashboard. Download and install Cloud SDK and then initialize the gcloudtool: - Get the required external App Engine library: go get -u google.golang.org/appengine/.... Structuring your application project At the end of this example, you'll have a project named go-app with the following structure: go-app/: Project root directory. app.yaml: Configuration settings for your App Engine application. main.go: Your application code. Creating your app.yaml file app.yaml sets the basic configuration for your App Engine application's settings. To set up your app.yaml file: Create your project root directory: mkdir $GOPATH/src/go-app Change directory to your project root: cd $GOPATH/src/go-app Create app.yamlin your project root directory. Add the following lines to app.yaml: # api_version: go1 handlers: # All URLs are handled by the Go application script - url: /.* script: _go_app When App Engine receives a web request for your application, the server calls the handler script corresponding to the URL, as specified in app.yaml. The handler then calls the application object to perform actions to service the request, prepare a response, and return it as a list of strings. For information about the settings available for app.yaml, see the app.yaml reference documentation. Creating your main.go file This simple sample uses the net/http package to create an HTTP server that prints the title of your application, Gopher Network. To set up your main.go file: Create main.goin your project root directory. Add the package mainstatement to treat your code as an executable program: package main Import the following packages to main.go: import ( "fmt" "net/http" "google.golang.org/appengine" // Required external App Engine library ) The google.golang.org/appenginepackage provides basic functionality for App Engine applications. The net/httppackage provides the HandlerFunctype, which you use to respond to different HTTP requests. Define your HTTP handler in main.go: func indexHandler(w http.ResponseWriter, r *http.Request) { // if statement redirects all invalid URLs to the root homepage. // Ex: if URL is http://[YOUR_PROJECT_ID].appspot.com/FOO, it will be // redirected to http://[YOUR_PROJECT_ID].appspot.com. if r.URL.Path != "/" { http.Redirect(w, r, "/", http.StatusFound) return } fmt.Fprintln(w, "Hello, Gopher Network!") } The http.ResponseWritervalue assembles the HTTP server response; by writing to it, you send data to the browser. The http.Requestis a data structure that represents the incoming HTTP request. Register your HTTP handler in main.go: func main() { http.HandleFunc("/", indexHandler) appengine.Main() // Starts the server to receive requests } The mainfunction is the entry point of your executable program, so it starts the application. It begins with a call to the http.HandleFuncfunction which tells the httppackage to handle all requests to the web root ("/") with the indexHandlerfunction. Running your application locally Run and test your application using the local development server ( dev_appserver.py), which is included with main.goto change "Gopher Network" to something else. Reload to see the change. Deploying your application Deploy your application to App Engine using the following command from the project root directory where the app.yaml file is located: gcloud app deploy Viewing your application To launch your browser and view your application at http://[YOUR_PROJECT_ID].appspot.com, run the following command: gcloud app browse What's next You just set up and deployed your web application to App Engine! Next, learn how to securely serve static content such as HTML pages, CSS files, or images from your application.
https://cloud.google.com/appengine/docs/standard/go/building-app/creating-your-application?hl=ru
CC-MAIN-2019-35
refinedweb
722
52.15
Return the index of the Scene in the Build Settings. Scene.buildIndex varies from zero to the number of Scenes in the Build Settings minus one. This is because indexes start at zero, so the first Scene is at position zero in the buildIndex. For example, five Scenes in the Build Settings have an index from zero to four. Unity ignores any numbers in a Scene name. For example, if you add a Scene called scene15 to a list of five Scenes in the Build Settings, Unity gives it a buildIndex of 5. A Scene that is not added to the Scenes in Build window returns a buildIndex one more than the highest in the list. For example, if you don’t add a Scene to a Scenes in Build window that already has 6 Scenes in it, then Scene.buildIndex returns 6 as its index . If the Scene is loaded through an AssetBundle, Scene.buildIndex returns -1. using UnityEngine; using UnityEngine.SceneManagement; // Show the buildIndex for the current script. // // The Build Settings window shows 5 added Scenes. These have buildIndex values from // 0 to 4. Each Scene has a version of this script applied. // // In the Project, create 5 Scenes called scene1, scene2, scene3, scene4 and scene5. // In each Scene add an empty GameObject and attach this script to it. // // Each Scene randomly switches to a different Scene when the button is clicked. public class ExampleScript : MonoBehaviour { Scene scene; void Start() { scene = SceneManager.GetActiveScene(); Debug.Log("Active Scene name is: " + scene.name + "\nActive Scene index: " + scene.buildIndex); } void OnGUI() { GUI.skin.button.fontSize = 20; if (GUI.Button(new Rect(10, 80, 180, 60), "Change from scene " + scene.buildIndex)) { int nextSceneIndex = Random.Range(0, 4); SceneManager.LoadScene(nextSceneIndex, LoadSceneMode.Single); } } }
https://docs.unity3d.com/es/2019.4/ScriptReference/SceneManagement.Scene-buildIndex.html
CC-MAIN-2021-10
refinedweb
290
78.35
- Advertisement KakyoMember Content Count5 Joined Last visited Community Reputation278 Neutral About Kakyo - RankNewbie Why people say a Sealed Classes cannot be a base class in C# Kakyo replied to kitman20022002's topic in General and Gameplay ProgrammingAs others mentioned he "sealed" keyword is meant to *prevent* inheritance. what you might wanna do is the "abstract" keyword, that do not let the base class be instantiated. so, let's say you have this code: public abstract class BaseClass { } public class ChildClass : BaseClass {} the we have : public void SomeMethod() { new BaseClass(); // compile error new ChildClass(); // totally fine. } Also, you might be tring to hide the class from external sources. you could mess around with *protected*, *internal* and *private*. But with classes in namespace level, you will have some restrictions (and I advise to research a little bit about those first =D ) And, by your code, classes inside another class is one thing I don't like very much Struggling with JSON in text based RPG Kakyo replied to PixcelStudios's topic in General and Gameplay ProgrammingHello Joeb, i guess now i got what you mean, haha. you might be using the wrong function JSON.stringify : converts a JS object into a text file. JSON.parse : converts a text file into a JS object. Watch out for typos in the text file, one wrongly placed quote can be annoying to find out. BTW you can "test" you json in this site : jsoneditoronline.org it'll help you finding possible typos and show you the resulting Text/Object (depending which one you want) The main reason for having the json is : "Separation of concerns". Yes, you can write the entire data into JS, but also Blizzard could code WoW into one file. However , this creates a hell of a code to maintain and this is a receipt to disaster. so the goal of having the JSON is that the *actual data* is in one side and *how to use* this data is in another side. Another good point is, you could "stream" you json *during* the gameplay, so the player loads only what is needed. let's say, for example, you have 1000 locations, if you code all those into one file, this would have, let's say, 100mb. now you got a handful of problems to deal with : - you are increasing the time your user has to wait to *start* playing your game, possibly by several minutes (maybe hours, who knows) - the user will be downloading places that he might not even go yet. - your server will crawl to provide data to everyone. and finally , and usually the main reason why you should probably do it : Team Work this allow other people in your team to create toons of locations (maybe inside a separate tool) , while you go writing other parts of the code. The list of advantages go on, but i'll stop here - text wall already.. lol Learning how to use it, might not be *as useful* right now for this small project, but the concepts certainly will be crucial later on. EDIT : i didn't see your edit before i post this answer, so let me address that part : yes it's totally feasible. you only need to change the strings in the constructor to the parsed json variable. var parsedJSON = JSON.parse(<content of text file here>); for loop { location[i] = new Location(id, parsedJSON.Name, parsedJson.Desc, parsedJson.choices); } Struggling with JSON in text based RPG Kakyo replied to PixcelStudios's topic in General and Gameplay ProgrammingHi Joeb, as this looks like homework, i'll stay very simplistic in the explanation, to allow you some research. well, for Json parsing you can use JSON.parse() & JSON.stringify(), so you don't need to worry about the "parsing" part of you assignment. Unless your teacher specifically asked to manually parse it, which can become a quite huge topic. about the "search in json" part, you have 2 ways of doing it (there are more ways of doing it, but i guess those are simple enough for your task) 1). if you json is *not* yet parsed , you can use any text search mechanism, like indexOf(). 2). if you already parsed your json, you can loop through the object properties with Object.keys(). Now, for the "without store it in JS again", i didn't get what you mean. The Json, being it text or object, will be loaded into JS to be manipulated. Hope that helps, Good luck EDIT : just some notes about the methods i say - the JSON.parse/stringify are safe to use as all major/updated browser supports it. - however watch out for indexOf as some browsers do not support it very well - Object.keys is defined in ECMAScript5, which pretty much all major/updated browser already support RPG, Engines and Frustration Kakyo replied to drpaneas's topic in For Beginners's ForumWell, i'm pretty much on the same boat as you.. and one resource have made me move a LOT faster. And I also find it very VERY well explained It's a guy streaming the programming of *every single thing* (using C/C++) to create a game, by he's definition, very inspired in Zelda. while you'll have to spend a freaking amount of time to catch up (by the time of this post, ep173 with *at least* 1h each), i guess it's a freaking good series. EDIT : besides this go "against" you desire to use an engine , i guess you could watch it in parallel to other learning projects. This is what i'm doing now, "fast" projects in unity, to learn a few things, and a different project to go along with the series Questions from an Aspiring Game Programmer Kakyo replied to LeoLoon's topic in Games Career DevelopmentHi LeoLoon , I create this account just the give you an answer, and I was very much in a similar position as you are now. Before I went to university I dreamed about become a GameDev, and I even make some fun things with RPG Maker, however today I am a "Software Developer" (yep, that's right, not game dev :P) I'm from Brazil and when i was choosing my course at the university, there was no specific game-related course and I had *no ideia* about the game market over here, but "games" are softwares and i took the closest course i could find . I don't have the insight about the "global game market" the explain the minimal details of a lot of things, but i'll try to answer your questions. *Can I work on other country where I can received good salary as a game programmer even If I will go to college in the Philippines? Don't make your choices based on "how much you will earn". If you do what you enjoy, what makes you happy, what gives you pleasure, you will become good at it and you will eventually find a way to earn money doing it (look the e-sports championship, people earn money *playing* games!) With that said, yes, you can work for companies that'll pay you a good salary. Or you can take a chance and try to open you own studio, on you own country, and show to the rest of the world the ideas that came from there. Look at Chroma Squad and Toren (games from Brazil \o/ ) and Never Alone (a game from Alaska), those games were made outside huge companies, by people like you and me, and today they are being played from all around the world and people are liking it! I think Philippine Education is nothing if you compare it to other University on more developed countries. I feel like the fact that I came and studied in the Philippines will have an impact on the way companies will think of my skills. Maybe yes, maybe no. This will tell more about the company that you are trying to join than about yourself. Brazil also don't have a good education and even tho, today i work for a US company. It's not the college who creates a good/bad student. It's the person itself that do not let obstacles and excuses getting in the way of what you want to learn. And let me tell you one thing : Programming is NOT easy, you can be proud of yourself already! *Why does games from our country doesn't go well? this is a pretty big topic and very well talked in the below article. It's from a guy in Turkey with similar problems to you! *Should I just give up and find something new? No, no, no..no..no.. I never see "giving up" as option. It'll certainly be hard and thank God it's hard, otherwise any dumb person could do it and we would not have any fun from "creating our own beast". You may take different paths than a lot of people, but if you keep trying, eventually you will get where you want to be. Thomas Edison tried more than a thousand times to create the "Lamp" and when asked about "why so many tries?" he answered : "I learned more than one thousand times of how NOT to create it." you can read this : It's a motivational topic (about life in general) but i read this article almost every month. It's just amazing! *What college course do I need to take? Just like me, you may not find a specific game-related course, but you can look for "close enough" courses. Any programming course can lead you to "game programming" and any design course can lead you to "game design". Seriously, in the future you may even find people from Law background programming games/softwares, it's a very rich and funny environment. You may also not take a course at all , but it will be a harder path . The courses will never make you an expert in anything, they will only show you the "not so hard" path and, mainly, allow you to meet other people with the same goals as you. What if I just try and learn from books and tutorials THIS. This is so HUGE that i let it to the end. It doesn't matter what course you do, what background you have, which country did you came from. The desire to learn, to do, to understand it's what is really needed. No course in the world can teach you this. i'm in the 6-to-7 years of programming and I still learn from books and tutorials and they teach better then college. Putting what I learn in practice, trying it, is how I can evolve from where the book left me and how i can came up with better solutions for my specific challenges (ones that weren't in any books). You said that you know Html, Java, Flash and drawing. You, sir, already have the desire to learn! Keep feeding it and you will move mountains. Best wishes on your journey. - Advertisement
https://www.gamedev.net/profile/229995-kakyo/
CC-MAIN-2019-22
refinedweb
1,860
67.99
#include <ResourceHandler.h> List of all members. For example textures and meshes. Content the class for the objects we wish to store. Key the class for the identifier used to distinguish the objects. It must be orderable, i.e. it has an well-behaved operator<(Key lvalue, Key rvalue). Definition at line 32 of file ResourceHandler.h. Definition at line 72 of file ResourceHandler.h. Definition at line 77 of file ResourceHandler.h. If a key is not paired with a Content object, create one using the information provided. If the key is already present, nothing happens. Definition at line 95 of file ResourceHandler.h. Get a pointer to the data stored for the given key, or 0 if the key is unused. Definition at line 108 of file ResourceHandler.h. Get an instance to the resource handler. As the class is templatised, there will be exactly one object per combination of Content, Key and ConstructionInfo types: be careful not to mix Key types when you don't want to. For constructors that take multiple parameters, you'll have to make another class for the paramters. If you want to store objects with a different constructor argument signature in the same ResourceHandler, you'll need to make an abstract class like that and redesign the class. Definition at line 88 of file ResourceHandler.h. Definition at line 68 of file ResourceHandler.h. Generated at Mon Sep 6 00:41:17 2010 by Doxygen version 1.4.7 for Racer version svn335.
http://racer.sourceforge.net/classEngine_1_1ResourceHandler.html
CC-MAIN-2017-22
refinedweb
251
60.72
I was wondering when the C++ STL priority_queue sorts itself. I mean does it insert it into a correct place when you push the item in, or does it sort itself and give you the item of highest priority when you peek or pop it out? I'm asking this because my priority_queue<int> will contain an index to an array that may have values update, and I want it to update when I do pq.top();. #include <cstdio> #include <algorithm> #include <queue> using namespace std; int main() { priority_queue<int> pq; pq.push(2); pq.push(5); //is the first element 5 now? or will it update again when I top() or pop() it out? return 0; } Thanks. The work is done during push() and pop(), which call the underlying heap modification functions ( push_heap() and pop_heap()). top() takes constant time. It sorts itself when a new element is inserted or an existing one is removed. This might help. std::priority_queue::push/(and emplace) method causes a efficient reorder in heap.
http://m.dlxedu.com/m/askdetail/3/493799f42926a1d85cc913e857fc9ed8.html
CC-MAIN-2018-30
refinedweb
170
66.44
It is very rare that you learn something that completely changes how you program. Reading this post about the attrs package in Python was a revelation to me. Coming from C++, I am not too big a fan on returning everything as lists and tuples. In many cases, you want to have structure and attributes and the class in Python is a good fit for this. However, creating a proper class with attributes that has all the necessary basic methods is a pain. This is where attrs comes in. Add its decorator to the class and designate the attributes of the class using its methods and it will generate all the necessary dunder methods for you. You can also get some nice type checking and default values for the attributes too. - First, let us get the biggest confusion about this package out of the way! It is called attrs when you install it cause there is already another existing package called attr (the singular). But when you import and use it, then it is called attr. I know it is irritating, but this is the way it is. To install it: $ sudo pip3 install attrs - To decorate the class use attr.s. I read it is as the plural attrs. And to declare the class attributes, use attr.ibmethod. I read it as attribute. @attr.s class Creature: eyes = attr.ib() legs = attr.ib() - Once declared like this, the attributes can be provided while constructing an object of the class: c = Creature(2, 4) - Object of this class can be constructed using keywords too: c = Creature(legs=6, eyes=1000) - Notice that we have not specified any default value for the attributes. So, it will rightfully complain when constructing without values: c = Creature() TypeError: __init__() missing 2 required positional arguments: 'eyes' and 'legs' - Default values can be specified for attributes: @attr.s class Creature: eyes = attr.ib(default=2) legs = attr.ib(default=6) c = Creature() Note that if there are some rules you run up against if you provide default values for some attributes and not to others. - A beautiful __repr__dunder method is automatically generated for your class. So, you can print any object: c = Creature(3, 6) print(c) Creature(eyes=3, legs=6) This is for me the killer feature! This is far more informational than just looking at a bunch of list or dict values. - Attributes can be get or set just like normal class attributes: c = Creature(2, 4) c.eyes = 10 print(c.legs) - Comparison methods are already generated for you, so you can go ahead and compare objects: c1 = Creature(2, 4) c2 = Creature(3, 9) c1 == c2 - You can add some semblance of type checking to attributes by using the instance_ofvalidators provided by the package: @attr.s class Creature: eyes = attr.ib(validator=attr.validators.instance_of(int)) legs = attr.ib() c = Creature(3.14, 6) TypeError: ("'eyes' must be <class 'int'> (got 3.14 that is a <class 'float'>)." - By default, class attributes are stored in a dictionary. You can switch this to use slots by changing the decorator: @attr.s(slots=True) class Creature: eyes = attr.ib() legs = attr.ib() - Are you curious to see the definition of the dunder methods it generates? You can do that using the inspect package: import inspect print(inspect.getsource(Creature.__init__)) print(inspect.getsource(Creature.__eq__)) print(inspect.getsource(Creature.__gt__)) - Want to see what are all the methods and fields the package creates for a class? print(attr.fields(Creature)) (Attribute(>, repr=True, cmp=True, hash=True, init=True, convert=None), Attribute(name='legs', default=NOTHING, validator=None, repr=True, cmp=True, hash=True, init=True, convert=None)) There is a lot more stuff in this awesome must-use package that can be read here Tried with: attrs 16.1.0, Python 3.5.2 and Ubuntu 16.04 One thought on “attrs package in Python” OMG…this is amazing — I had no idea this library existed, and I’ve been aching for something like this ever since I started coding in Python. Thank you so much for highlighting and explaining this.
https://codeyarns.com/2016/09/07/attrs-package-in-python/
CC-MAIN-2019-30
refinedweb
689
66.54
Hi, I am looking for some assistance with regard to creating a label expression using python in ArcMap. I have a number of point features in a map in ArcMap 10.6 that I would need to label based on information from three fields (Test1, Test2 and Test3) with numeric values. The labels from these 3 fields would have to be stacked. And each of these labels would have to appended with a date text. The format of the label would be something like this; Some of the records in these fields contain zero values and so it would be required to loop through these fields and skip the zero values. That is not to have them displayed in the label.Currently I have the following scripts to work with; -script to stack labels -This does not skip through zero values def FindLabel ([Test1] , [Test2] , [Test3]): return "05/10/2018" + " - " + [Test1] + "\r\n" + "05/10/2018" + " - " + [Test2] + "\r\n" + "05/10/2018" + " - " + [Test3] -script to skip "0" values in the fields and stacks the labels, however does not append text string to each attribute value. def FindLabel ( [Test1] , [Test2] , [Test3] ): fields = ([Test1] , [Test2] , [Test3]) label = "" for field in fields: if field == "0": continue else: label = label + field + "\n" return label Is there a way achieve this using a python label expression in ArcMap 10.6. Any help with this would be appreciated. Looking over your second code snippet, it looks like it should work, i.e., skip 0 values and stack labels. What are you seeing with the second code snippet?
https://community.esri.com/thread/214648-question-regarding-displaying-labels-using-a-loop-in-python-expression-in-arcmap-106
CC-MAIN-2019-30
refinedweb
261
77.37
Running Pylint¶ Invoking Pylint¶ Pylint is meant to be called from the command line. The usage is pylint [options] modules_or_packages You should give Pylint the name of a python package or module, or some number of packages or modules. another Python program, thanks to the Run() function in the pylint.lint module (assuming Pylint options are stored in a list of strings pylint_options) as: import pylint.lint pylint_opts = ['--disable=line-too-long', 'myfile.py'] pylint.lint.Run(pylint_opts)) It is also possible to include additional Pylint options in the first argument to py_run: from pylint import epylint as lint (pylint_stdout, pylint_stderr) = lint.py_run('module_name.py --disable C0114', return_std=True) The options --msg-template="{path}:{line}: {category} ({msg_id}, {symbol}, {obj}) {msg}" and --reports=n are set implicitly inside the epylint module. Command line options¶ First of all, we have two basic (but useful) options. - --version show program's version number and exit - -h, --help show help about the command line options Pylint is architected around several checkers. You can disable a specific checker or some of its messages or message pyproject.tomlin the current working directory, providing it has at least one tool.pylint.section. The pyproject.tomlmust prepend section names with tool.pylint., for example [tool.pylint.'MESSAGES CONTROL']. They can also be passed in on the command line. setup.cfgin the current working directory, providing it has at least one pylint.section If the current working directory is in a Python package, Pylint searches up the hierarchy of Python packages until it finds a pylintrcfile. This allows you to specify coding standards on a module-by-module basis. Of course, a directory is judged to be a Python package: - --ignore=<file[,file...]> Files or directories to be skipped. They should be base names, not paths. - --output-format=<format> Select output format (text, json, custom). - --msg-template=<template> Modify text output message template. - --list-msgs Generate pylint's messages. - --list-msgs-enabled Display a list of what messages are enabled and disabled with the given configuration. - --full-documentation Generate pylint's full documentation, in reST format. Parallel execution¶ It is possible to speed up the execution of Pylint. If the running computer has more CPUs than one, then the work for checking all files could be spread across all cores via Pylints's sub-processes. This functionality is exposed via the -j command-line parameter. If the provided number is 0, then the total number of CPUs will be autodetected checking a module is complete. There are some limitations in running checks in parallel in the current implementation. It is not possible to use custom plugins (i.e. --load-plugins option), nor it is not possible to use initialization hooks (i.e. the --init-hook option). Exit codes¶ Pylint returns bit-encoded exit codes. If applicable, the table below lists the related stderr stream message output.
https://pylint.pycqa.org/en/latest/user_guide/run.html
CC-MAIN-2021-49
refinedweb
476
50.63
Previous: Adding another Service to NSS, Up: Extending NSS password database). enum nss_status _nss_database _setdb ent (void) One special case for this function is that it takes an additional argument for some databases (i.e., the interface is int setdb ent (int)). Host Names, which describes the sethostent function. The return value should be NSS_STATUS_SUCCESS or according to the table above in case of an error (see NSS Modules Interface). enum nss_status _nss_database _enddb ent (void) There normally is no return value different to NSS_STATUS_SUCCESS. enum nss_status _nss_database _getdb ent_r (STRUCTURE *result, char *buffer, size_t buflen, int *errnop) the implementation should store the value of the local errno variable in the variable pointed to be errnop. This is important to guarantee the module working in statically linked programs. ent all return value allowed for this function can also be returned here. enum nss_status _nss_DATABASE _getdb byXX _r (PARAMS ,STRUCTURE *result, char *buffer, size_t buflen, int *errnop) The result must be stored in the structure pointed to by result. If there is additional data to return (say strings, where the result structure only contains pointers) the function must use the buffer or length buflen. There must not be any references to non-constant global data. The implementation of this function should honor the stayopen flag set by the setDB ent function and networks database. The return value should as always follow the rules given above (see NSS Modules Interface).
http://www.gnu.org/software/libc/manual/html_node/NSS-Module-Function-Internals.html#NSS-Module-Function-Internals
crawl-003
refinedweb
238
54.32
An Arduino RSS Feed Display Introduction: An Arduino RSS Feed Display This Arduino project will display RSS feed headlines on an LCD via an Arduino and a USB cable. It works quite well, and lets you keep up with the world news while you're sitting at your desk. Many of the values in the code can be changed, and the system can be adapted to display Twitter and other information as well. It uses Python to interface with the Arduino. The project requires very few parts, generally things that most people with Arduinos will have lying around somewhere: The LCD should be wired up as is shown in this picture (given it's a 16x2 LCD that uses the HD44780 driver). Potentiometer controls the contrast. It should also be noted that most LCD's use pins 15 and 16 on the LCD as the +5v and GND for the back). Step 3: Getting the Required Software and Libraries 2 pieces of software and 2 Libraries/Extensions are needed for this project to work. The first library is an Arduino Library called LiquidCrystal440. It is available here:. It is an updated version of the LiquidCrystal library, and helps deal with some issues when it comes to addressing memory that isn't currently visible on the screen. Obviously to use the LiquidCrystal440 Library, you will need the first piece of software: The Arduino coding interface, which I assume all Arduino users have (if not, just check the Arduino.cc website) The second piece of software you will need is Python. Python is an easy to learn programming language for the PC, Linux, or Mac. It is available for free here:. The final thing you need is the extension that will let the Python computer program work with the Arduino itself, via the serial cable. The required extension is Pyserial, available here:. Make sure you get the correct version of Pyserial to work with your version of Python (2.7 to 2.7, 3.1 to 3.1, etc). Step 4: The Arduino Code // This code is for the Arduino RSS feed project, by Fritter // Read the comment lines to figure out how it works int startstring = 0; // recognition of beginning of new string int charcount = 0; // keeps track of total chars on screen #include LiquidCrystal lcd(12, 11, 5, 4, 3, 2); void setup() { Serial.begin(9600); // opens serial port, sets data rate to 9600 bps lcd.begin(16,2); // Initialize the LCD size 16x2. Change if using a larger LCD lcd.setCursor(0,0); // Set cursor position to top left corner pinMode(13, OUTPUT); } void loop() { char incomingByte = 0; // for incoming serial data if (Serial.available() > 0) { // Check for incoming Serial Data digitalWrite(13, HIGH); incomingByte = Serial.read(); if ((incomingByte == '~') && (startstring == 1)){ // Check for the closing '~' to end the printing of serial data startstring = 0; // Set the printing to off delay(5000); // Wait 5 seconds lcd.clear(); // Wipe the screen charcount = 0; // reset the character count to 0 lcd.setCursor(0,0); // reset the cursor to 0,0 } if (startstring == 1){ // check if the string has begun if first '~' has been read if (charcount <= 30){ // check if charcount is under or equal to 30 lcd.print(incomingByte); // Print the current byte in the serial charcount = charcount++; // Increment the charcount by 1 yes I know it's awkward } } if (charcount == 31){ // if the charcount is equal to 31 aka the screen is full delay(500); lcd.clear(); // clear the screen lcd.setCursor(0,0); // set cursor to 0,0 lcd.print(incomingByte); // continue printing data charcount = 1; // set charcount back to 1 } if (incomingByte == '~'){ // Check if byte is marker ~ to start the printing startstring = 1; // start printing } } digitalWrite(13, LOW); delay(10); // 10ms delay for stability } Step 5: The Python Code #import library to do http requests: import urllib2 #import pyserial Library import serial #import time library for delays import time #import xml parser called minidom: from xml.dom.minidom import parseString #Initialize the Serial connection in COM3 or whatever port your arduino uses at 9600 baud rate ser = serial.Serial("\\.\COM3", 9600) i = 1 #delay for stability while connection is achieved time.sleep(5) while i == 1: #download the rss file feel free to put your own rss url in here file = urllib2.urlopen('') #convert to string data = file.read() #close the file file.close() #parse the xml from the string dom = parseString(data) #retrieve the first xml tag (<tag>data</tag>) that the parser finds with name tagName change tags to get different data xmlTag = dom.getElementsByTagName('title')[2].toxml() # the [2] indicates the 3rd title tag it finds will be parsed, counting starts at 0 #strip off the tag (<tag>data</tag> ---> data) xmlData=xmlTag.replace('<title>','').replace('</title>','') #write the marker ~ to serial ser.write('~') time.sleep(5) #split the string into individual words nums = xmlData.split(' ') #loop until all words in string have been printed for num in nums: #write 1 word ser.write(num) # write 1 space ser.write(' ') # THE DELAY IS NECESSARY. It prevents overflow of the arduino buffer. time.sleep(2) # write ~ to close the string and tell arduino information sending is finished ser.write('~') # wait 5 minutes before rechecking RSS and resending data to Arduino time.sleep(300) Step 6: Getting It to Work. Upload the Arduino Code to the Arduino itself. Put the Python code into a .py file. If all goes according to plan, if you run the .py file, you should see the text start appearing after about 10 seconds. Every time a word is outputted, the LED should flash as well. If it doesn't work: Check the port in the python file. Your Arduino may be labeled differently or be numbered differently. Check that the RSS feed doesn't have a ~ in the data. That will throw things out of whack. Try running the .py file from the command line as an administrator. Sometimes the script doesn't have proper permissions to access the COM ports. I've hit a problem: "TypeError: unicode strings are not supported, please encode to bytes: '~' -I'm very new to Arduino & Python but I believe the problem is the above code is Python 2, and I'm using Python 3. Can anyone please help? Thank you. found it. looks like the lcd cannot support unicode (only support ascii) so in python script change line : ser.write(num) to ser.write(num.encode('UTF-8')) and the rss url is also not valid anymore, be carefull. it's just prototype so you have to revise all code based on your needs. did you found that problem? im having same errors I love the combination of Python and the Arduino. So I have created a collection about it. I have added your instructable, you can see the collection at: >>... How would I make this display weather forecast? Is the entire board really necessary, i.e. can't I get the results with a standalone arduino circuit? I got this error when I ran the code in phyton , please help me out! Can this be done with a LED matrix? I made this!! It's so cool here's the photo. How can you use arduino code for a 20x4 LCD? I tried changing the size in the code, but it does not work. When it comes to the second line it clears the lcd and the other part is displayed. Please help. Is there way to use multiple lines? i use this feed: which is a feed of the 911 calls nearby my village. i tried changing the charcount, but im not very good with arduino. PLEASE HELP ME I'm getting this error. What does this mean. Apologies I have never used python before. found a solution. I put this line num=str(num) just before ser.write(num) so python recognises num as a string i also think that the settings for the ports have also been changed i think you need to rewrite the python code because an update meshed urllib2 into urllib, i'm trying to debug it, but this is my first time seeing python, luckily programming is my thing great work!! But there is one problem I have came across after implementing this. What is news contents contains " 's " as part like in " that's " then this Python script is unable to parse it and send it to arduino via serial so any help on this ?? Very cool. Thank you for posting. I have gotten it work but it doesn't use the second line of text, therefore words are not showing up. I think there is an error in the Arduino code because if I use Arduino's example serial display it works but it only prints one word then reprints the whole screen. Help? Please What size is your LCD display? Could it be a problem with the LiquidCrystal Library? I don't exactly understand that part of it. Am I supposed to put the .h file in a certain place? 16x2. Hello World works with the second timer at the bottom. Very nice project. something like this awhile ago using a beta-bright moving message sign, PHP script, and a home made serial cable, works quite well, I use it right now to display weather information and forecasts, but it can be used to do just about anything else that you would like displayed. In fact it works so well that I got myself a 2nd larger sign for the living room (Wife doesn't really like that one thou LOL) - Thought you might be interested thou because it just uses the computer and the beta-bright display protocol. Thanks kd8bxp (And I must admit is sounds like Radio Amateur code), In this case the Arduino just manages the LCD and pieces together incoming words into proper, readable format. If you had a plain LCD with a USB connector and the proper software to interface with it via USB, then you could just use the screen. I used the Arduino because I had it on hand and this was never meant to be a permanent build. I built it for fun, got it to work then took it apart and went onto the next Arduino project. I know about fun, and totally agree with that. I could be wrong, but charcount=charcount++; won't do anything. If you want to increment, you only need this: charcount++; That will increment by one. It will in fact add 1 to the total, but it is redundant. the ++ modifier essentially acts as variable = variable + 1 So in this case it says variable = variable = variable + 1 It works, but should simply say charcount++ for the sake of simplicity and easier reading. Thanks for pointing it out. Technically, the behavior is "undefined" which means it might do what you want, or it might not. The actual behavior is up to your C compiler, and *any* behavior is permitted, including not compiling the code, or producing code with wildly unexpected behavior.: Thanks for the definitive answer. So while I can confirm it would work properly for me, it might not work properly if you use a different compiler. Really is playing a game of chance with your code, I guess. If you get a could not open port error try finding the correct port this way: In terminal type: python -m serial.tools.list_ports Find the port matching the one in the Arduino settings and copy paste into your python script. this is really simple and impressive but the python language , this is the first time i see somebody using it in arduino really really goooood big thank you bro even your project is simple but a new stuff has been learned today This is very cool. I like this a lot. Fixed the code on the instructable. Thanks for pointing it out. no problem I'm glad Icould help, its little things like that that are usually the reason why something doesn't work.. This is Pretty cool, and from the looks of the code it might be just simple, yet interesting enough to get me into python);
http://www.instructables.com/id/Wiring-up-the-LCD-and-the-LED/?comments=all
CC-MAIN-2017-30
refinedweb
2,029
73.07
In this Premium Tutorial, we'll be building a Breakout game; "Brick Breaker" from scratch using Flash and AS3. Step 1: Brief Overview Using the Flash drawing tools we'll create a good looking graphic interface that will be powered by several ActionScript 3 classes. The user will be able to play through a series of levels, you can easily add as many levels as you want! Step 2: Flash Document Settings Open Flash and create a 320 pixels wide, 480 pixels tall document. Set the Frame rate to 24fps. Step 3: Interface A colorful, nice-looking interface will be displayed. It will contain multiple shapes, buttons, bitmaps and more. Let's jump straight into creating this GUI. Step 4: Main Screen This is the Main Screen or view, it will be the first graphic to appear in our game. Step 5: Background Create a 320x480 rectangle and fill it with this radial gradient: #3A9826, #102A07. We're going to give it a little more detail by adding a Photoshop filter, if you don't have Photoshop you can try to add a nice effect using the Flash Tools. Open the image in Photoshop and go to Filters > Texture > Patchwork, use the following settings: You will end up with something like this: This Background will be on stage as well as the paddle, ball and text indicators. Convert the background to a MovieClip and name it bg. Step 6: Title Select the Text Tool (T), select a suitable font and write your game title. I used this format: Akashi, 55pt, #FFCC33. Select the TextField and use the Filters panel to add a Drop Shadow: Duplicate the text (Cmd + D) and move it 3px up to give it some emboss. Convert the graphics to a MovieClip and name it MenuScreen, remember to mark the Export for ActionScript box. You can delete this from stage when finished, as it will be called using AS3. Step 7: Paddle Use the Rectangle Primitive Tool (R) to create a 57x11.5px round rectangle, change the corner radius to 10 and apply this gradient: #4E4E4E, #BABABA, #B0B3BA. Add some detail lines with the Rectangle Tool, use your own style! You can also add some color to your paddle, here is the final result of mine, the color used is: #CC0000. Convert the graphics to a MovieClip and name it paddle. Step 8: Ball To create the ball, select the Oval Tool (O) and use it to create a 12x12px, #CCCCCC circle. Duplicate the circle (Cmd + D) change its size to 10x10px and fill it with this radial gradient: #95D4FF, #0099FF. Lastly, cut the second circle in half and use the Selection Tool (V) to make a curve in the bottom. Change its color to a white linear gradient with alpha 60, 10. Convert the graphics to a MovieClip and name it ball. Step 9: Brick Our Brick will be very simple. Use the Rectangle tool to create a 38x18px rectangle and apply the next gradient: #CC0000, #8E0000, #FF5656. Convert the rectangle to a MovieClip and apply the shadow filter used in the title text to give it a nicer look. Convert again the graphic to a MovieClip and name it Brick, remember to mark the Export for ActionScript box. Step 10: About Screen The About Screen will show the credits, year and copyright of the game. It will be pretty simple to create as we already have all the elements used in it. Convert the graphics to a MovieClip and name it AboutScreen, remember to mark the Export for ActionScript box. Step 11: Game Screen This is the game screen, it will be on stage from the beginning and it will contain the paddle, ball, background and text indicators. (We will add the bricks using code.) The instance names are pretty easy and self explanatory: paddle, ball, bg, scoreTF, livesTF and levelTF. Step 12: Embed Fonts In order to use the custom font dynamically we will need to embed it in the application. Select a dynamic textfield and click the Embed... button in the Properties Panel. Select/add all the necessary characters and click OK. Step 13: Alert Screen This screen will appear when the game has been decided; either you win, lose or you reach game over (winning all the levels or losing all the lives). Two Dynamic TextFields are used in this view, they will display the current game state plus a short message. The TextFields are named titleTF and msgTF. Convert the graphics to a MovieClip and mark the Export for ActionScript box, set the class name to AlertScreen. This ends the graphics part, let the ActionScript begin! Step 14: Tween Nano We'll use a different tween engine from the default included in flash, this will increase performance and be easier to use. You can download Tween Nano from its official website. Step 15: New ActionScript Class Create a new (Cmd + N) ActionScript 3.0 Class and save it as Main.as in your class folder. Step 16: Class Structure Create your basic class structure to begin writing your code. package { import flash.display.Sprite; public class Main extends Sprite { public function Main():void { // constructor code } } } Step 17: Required Classes These are the classes we'll need to import for our class to work, the import directive makes externally defined classes and packages available to your code. import flash.display.Sprite; import flash.ui.Mouse; import flash.events.MouseEvent; import flash.events.KeyboardEvent; import flash.events.Event; import com.greensock.TweenNano; import com.greensock.easing.Circ; Step 18: Variables and Constants These are the variables and constants we'll use, read the comments in the code to discover more about them. private const BRICK_W:int = 39; //brick's width private const BRICK_H:int = 19; //brick's height private const OFFSET:int = 6; //An offset used to center the bricks private const W_LEN:int = 8; //the length of the levels, only 8 horizontal bricks should be created on stage private const SCORE_CONST:int = 100; //the amount to add to the score when a brick is hit private var bricks:Vector.<Sprite> = new Vector.<Sprite>(); //stores all the bricks private var xSpeed:int = 5; private var ySpeed:int = -5; private var xDir:int = 1; //x direction private var yDir:int = 1; private var gameEvent:String = ''; //stores events like win, lose, gameover private var currentLevel:int = 0; private var menuScreen:MenuScreen; //an instance of the menu screen private var aboutScreen:AboutScreen; private var alertScreen:AlertScreen; private var lives:int = 3; private var levels:Array = []; //stores the levels Step 19: Levels All our levels will be stored in multidimensional arrayss. These are arrays containing arrays; you can write them in a single line, but if you align them you can actually see the form that the level will take. private const LEVEL_1:Array = [],]; //this forms a + sign! private const LEVEL_2:Array = [],]; //this forms a number 2! In these levels the 1s are representing the space in the stage where a brick will be placed, and the 0s are just empty space. These levels will be later read by a function that will place the bricks on the stage. You can add as many levels as you want using this class! Step 20: Constructor Code The constructor is a function that runs when an object is created from a class; this code is the first to execute when you make an instance of an object (or runs when the game loads, in the case of a document class). It calls the necessary functions to start the game. Check out these functions in the following steps. public final function Main():void { /* Add Levels */ levels.push(LEVEL_1, LEVEL_2); //we add the levels to the array in order to know how many they are /* Menu Screen, Buttons Listeners */ menuScreen = new MenuScreen(); addChild(menuScreen); menuScreen.startB.addEventListener(MouseEvent.MOUSE_UP, tweenMS); menuScreen.aboutB.addEventListener(MouseEvent.MOUSE_UP, tweenMS); } Step 21: Menu Screen & About View Animation The next lines handle the Menu Screen buttons and tween the Menu or About view depending on the button pressed. private final function tweenMS(e:MouseEvent):void { if(e.target.name == 'startB') //if start button is clicked { TweenNano.to(menuScreen, 0.3, {y: -menuScreen.height, ease: Circ, onComplete: init}); //tween menu screen } else //if about button is clicked { aboutScreen = new AboutScreen();//add about screen addChild(aboutScreen); TweenNano.from(aboutScreen, 0.3, {x: stage.stageWidth, ease: Circ}); //tween about screen aboutScreen.addEventListener(MouseEvent.MOUSE_UP, hideAbout); //add a mouse listener to remove it } } /* Removes About view */ private final function hideAbout(e:MouseEvent):void { TweenNano.to(aboutScreen, 0.3, {x:stage.stageWidth, ease:Circ, onComplete:function rmv():void{ aboutScreen.removeEventListener(MouseEvent.MOUSE_UP, hideAbout); removeChild(aboutScreen); }}); } Step 22: Init Function This function performs the necessary operations to start the game, read the comments in the code to know more about it. private final function init():void { /* Destroy Menu Screen */ menuScreen.startB.removeEventListener(MouseEvent.MOUSE_UP, tweenMS); menuScreen.aboutB.removeEventListener(MouseEvent.MOUSE_UP, tweenMS); removeChild(menuScreen); menuScreen = null; /* Hide Cursor */ Mouse.hide(); /* Build Level Bricks */ buildLevel(LEVEL_1); /* Start Listener */ bg.addEventListener(MouseEvent.MOUSE_UP, startGame); } Step 23: Move Paddle The Paddle will be Mouse Controlled, it will follow the mouse x position . private final function movePaddle(e:MouseEvent):void { /* Follow Mouse */ paddle.x = mouseX; } Step 24: Paddle Border Collision To stop the paddle from leaving the stage, we create invisible boundaries on the sides of the screen. { /* Follow Mouse */ paddle.x = mouseX; /* Borders */ if((paddle.x - paddle.width / 2) < 0) { paddle.x = paddle.width / 2; } else if((paddle.x + paddle.width / 2) > stage.stageWidth) { paddle.x = stage.stageWidth - paddle.width / 2; } } Step 25: Build Level Function The levels will be built by this function. It uses a parameter to obtain the level to build, calculates its size and runs a nested for-loop, with one loop for the height and one for the width. Next, it creates a new Brick instance that is placed according its width, height and the number corresponding to i and j. Lastly, the brick is added to the bricks vector to access it outside this function. private final function buildLevel(level:Array):void { /* Level length, height */ var len:int = level.length; for(var i:int = 0; i < len; i++) { for(var j:int = 0; j < W_LEN; j++) { if(level[i][j] == 1) { var brick:Brick = new Brick(); brick.x = OFFSET + (BRICK_W * j); brick.y = BRICK_H * i; addChild(brick); bricks.push(brick); } } } } Step 26: Game Listeners This function adds or removes the mouse and enter frame listeners. It uses a parameter to determine if the listeners should be added or removed: default is add. private final function gameListeners(action:String = 'add'):void { if(action == 'add') { stage.addEventListener(MouseEvent.MOUSE_MOVE, movePaddle); stage.addEventListener(Event.ENTER_FRAME, update); } else { stage.removeEventListener(MouseEvent.MOUSE_MOVE, movePaddle); stage.removeEventListener(Event.ENTER_FRAME, update); } } Step 27: Start Game Function The next code calls the gameListeners() function to start the game. private final function startGame(e:KeyboardEvent):void { bg.removeEventListener(MouseEvent.MOUSE_UP, startGame); gameListeners(); } Step 28: Ball Movement The ball speed is determined by the xSpeed and ySpeed variables, when the update function is executed, the ball starts moving using theses values every frame. private final function update(e:Event):void { /* Ball Movement */ ball.x += xSpeed; ball.y += ySpeed; Step 29: Wall Collision This code checks for collisions between the ball and the walls. /* Wall Collision */ if(ball.x < 0){ball.x = ball.x + 3;xSpeed = -xSpeed;};//Left if((ball.x + ball.width) > stage.stageWidth){ball.x = ball.x - 3;xSpeed = -xSpeed;};//Right if(ball.y < 0){ySpeed = -ySpeed;};//Up Step 30: Lose Game Event An if-statement is used to check for when the paddle misses the ball. If so, the player loses a life. if(ball.y + ball.height > paddle.y + paddle.height){alert('You Lose', 'Play Again ›');gameEvent = 'lose';lives--;livesTF.text = String(lives);};//down/lose Step 31: Paddle-Ball Collisions When the ball hits the paddle, the ySpeed is set to negative to make the ball go up. We also check in which side of the paddle the ball has hit to choose the side where it will move next. /* Paddle Collision, check the which side of the paddle the ball hits*/ if(paddle.hitTestObject(ball) && (ball.x + ball.width / 2) < paddle.x) { ySpeed = -5; xSpeed = -5; //left } else if(paddle.hitTestObject(ball) && (ball.x + ball.width / 2) >= paddle.x) { ySpeed = -5; xSpeed = 5; //right } Step 32: Brick Collisions We use a for and hitTest to check for bricks collisions, when the ball hits a brick the same technique used in the paddle is used to determine the side the ball will follow. /* Bricks Collision */ for(var i:int = 0; i < bricks.length; i++) { if(ball.hitTestObject(bricks[i])) { /* Check the which side of the brick the ball hits, left, right */ if((ball.x + ball.width / 2) < (bricks[i].x + bricks[i].width / 2)) { xSpeed = -5; } else if((ball.x + ball.width / 2) >= (bricks[i].x + bricks[i].width / 2)) { xSpeed = 5; } Step 33: Change Ball Direction & Remove Brick The following code changes the Y direction of the ball and removes the brick from stage and the vector. /* Change ball y direction */ ySpeed = -ySpeed; removeChild(bricks[i]); bricks.splice(i, 1); If you like, you could change this logic so that the ball's y-speed is only reversed if it hits the top or bottom of a brick, and not when it hits the sides. Try it out and see what you think. Step 34: Add Score and Check Win Each brick hit will add 100 to the score, the score will be taken from the score constant and added to the current score using int and String functions. This code also checks whether there are no more bricks in the Vector and displays an alert if so. /* Score++ */ scoreTF.text = String(int(scoreTF.text) + SCORE_CONST); } } /* Check if all bricks are destroyed */ if(bricks.length < 1) { alert('You Win!', 'Next Level ›'); gameEvent = 'win'; } } Step 35: Alert Screen The Alert Screen shows the player information about the status of the game, it is shown when a game event is reached, such as losing a life or completing a level. Two parameters are used in this function: - t: The alert title - m: A short message private final function alert(t:String, m:String):void { gameListeners('remove'); Mouse.show(); alertScreen = new AlertScreen(); addChild(alertScreen); TweenNano.from(alertScreen.box, 0.3, {scaleX: 0.5, scaleY:0.5, ease:Circ}); alertScreen.box.titleTF.text = t; alertScreen.box.msgTF.text = m; alertScreen.box.boxB.addEventListener(MouseEvent.MOUSE_UP, restart); } Step 36: Restart Function The next function checks the game status (win, lose, finished) and performs an action according to it. private final function restart(e:MouseEvent):void { if(gameEvent == 'win' && levels.length > currentLevel+1) //if level is clear but more levels are left { currentLevel++; changeLevel(levels[currentLevel]);//next level levelTF.text = 'Level ' + String(currentLevel + 1); } else if(gameEvent == 'win' && levels.length <= currentLevel+1) //if level is clear and no more levels are available { alertScreen.box.boxB.removeEventListener(MouseEvent.MOUSE_UP, restart); removeChild(alertScreen); alertScreen = null; alert('Game Over', 'Congratulations!'); gameEvent = 'finished'; } else if(gameEvent == 'lose' && lives > 0) //if level failed but lives > 0 { changeLevel(levels[currentLevel]);//same level } else if(gameEvent == 'lose' && lives <= 0) //if level failed and no more lives left { alertScreen.box.boxB.removeEventListener(MouseEvent.MOUSE_UP, restart); removeChild(alertScreen); alertScreen = null; alert('Game Over', 'Try Again!'); gameEvent = 'finished'; } else if(gameEvent == 'finished') //reached when no more lives or levels are available { /* Add menu screen */ menuScreen = new MenuScreen(); addChild(menuScreen); menuScreen.startB.addEventListener(MouseEvent.MOUSE_UP, tweenMS); menuScreen.aboutB.addEventListener(MouseEvent.MOUSE_UP, tweenMS); TweenNano.from(menuScreen, 0.3, {y: -menuScreen.height, ease: Circ}); /* Reset vars */ currentLevel = 0; lives = 3; livesTF.text = String(lives); scoreTF.text = '0'; levelTF.text = 'Level ' + String(currentLevel + 1); xSpeed = 5; ySpeed = -5; clearLevel(); } } Step 38: Change Level This function changes to the level written in the parameter. private final function changeLevel(level:Array):void { /* Clear */ clearLevel(); /* Redraw Bricks */ buildLevel(level); /* Start */ Mouse.hide(); bg.addEventListener(MouseEvent.MOUSE_UP, startGame); } Step 39: Clear Level A function to clear the remaining bricks and alerts from stage. It will also reset the position of the paddle and ball. private final function clearLevel():void { /* Remove Alert Screen */ alertScreen.box.boxB.removeEventListener(MouseEvent.MOUSE_UP, restart); removeChild(alertScreen); alertScreen = null; /* Clear Level Bricks */ var bricksLen:int = bricks.length; for(var i:int = 0; i < bricksLen; i++) { removeChild(bricks[i]); } bricks.length = 0; /* Reset Ball and Paddle position */ ball.x = (stage.stageWidth / 2) - (ball.width / 2); ball.y = (paddle.y - paddle.height) - (ball.height / 2) -2; paddle.x = stage.stageWidth / 2; } Step 40: Set Main Class We'll make use of the Document Class in this tutorial, if you don't know how to use it or are a bit confused please read this QuickTip. Conclusion The final result is a customizable and entertaining game, try adding your custom graphics and levels! I hope you liked this Active Premium tutorial, thank you for reading! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://gamedevelopment.tutsplus.com/tutorials/build-a-striking-breakout-game-in-flash--active-7523
CC-MAIN-2019-51
refinedweb
2,836
66.33
Red Hat Training A Red Hat training course is available for Managing Containers with Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host 7 Getting Started with Containers Getting Started with Containers Legal Notice containers on Red Hat Enterprise Linux 7 (RHEL 7) Server and Red Hat Enterprise Linux Atomic (based on RHEL 7), type man dockerto learn about the docker command. Then refer to separate man pages for each docker option (for example, type man docker-imageto read about the docker imageoption). Currently, to run the docker command in RHEL 7 and RHEL Atomic Host you must have root privilege. In the procedure, this is indicated by the command prompt appearing as a hash sign (#). Configuring sudo will work, if you prefer not to log in directly to the root user account. 1.3. Getting Docker in RHEL 7 To get an environment where you can develop Docker containers, you can install a Red Hat Enterprise Linux 7 system to act as a development system as well as a container host. The docker package itself is stored in a RHEL Extras repository (see the Red Hat Enterprise Linux Extras Life Cycle article for a description of support policies and life cycle information for the Red Hat Enterprise Linux Extras channel). Using the RHEL 7 subscription model, if you want to create Docker. NOTE: The docker packages and other container-related packages are only available for the RHEL Server and RHEL Atomic Host editions. They are not available for Workstation or other variants of RHEL. - Install RHEL Server edition: If you are ready to begin, you can start by installing a Red Hat Enterprise Linux system (Server edition) as described in the following: Red Hat Enterprise Linux 7 Installation Guide Register RHEL: Once RHEL 7 is installed, register the system using Subscription Management tools and install the docker package. Also enable the software repositories needed. (Replace pool_id with the pool ID of your RHEL 7 subscription.) For example: # subscription-manager register --username=rhnuser --password=rhnpasswd # subscription-manager list --available Find valid RHEL pool ID # subscription-manager attach --pool=pool_id # subscription-manager repos --enable=rhel-7-server-extras-rpms # subscription-manager repos --enable=rhel-7-server-optional-rpms NOTE: For information on the channel names required to get docker packages for Red Hat Satellite 5, refer to Satellite 5 repo to install Docker on Red Hat Enterprise Linux 7. Install Docker: The current release of RHEL and RHEL Atomic Host include two different versions of Docker. Here are the Docker packages you have to choose from: - docker: This package includes the version of Docker that is the default for the current release of RHEL. Install this package if you want a more stable version of Docker that is compatible with the current versions of Kubernetes and OpenShift available with Red Hat Enterprise Linux. docker-latest: This package. To install and use the default docker package (along with a couple of dependent packages if they are not yet installed), type the following: # yum install docker device-mapper-libs device-mapper-event-libs Start docker: # systemctl start docker.service Enable docker: # systemctl enable docker.service Check docker status: # systemctl status docker.service docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Drop-in: /usr/lib/systemd/system/docker.service.d |-flannel.conf Active: active (running) since Thu 2016-05-09 22:398:47 EDT; 14s ago Docs: Main PID: 13495 (sh) CGroup: /system.slice/docker.service └─13495 /bin/sh -c /usr/bin/docker-current daemon $OPTIONS ... With the docker service running, you can obtain some Docker images and use the docker command to begin working with Docker images in RHEL 7. 1.4. Getting Docker in RHEL Atomic Host RHEL Atomic Host is a light-weight Linux operating system distribution that was designed specifically for running containers. It contains two different versions of the docker service, as well as some services that can be used to orchestrate and manage Docker containers, such as Kubernetes. Only one version of the docker service can be running at a time. Because RHEL Atomic Host is more like an appliance than a full-featured Linux system, it is not made for you to install RPM packages or other software on. Software is added to Atomic Host systems by running container images. RHEL Atomic Host has a mechanism for updating existing packages, but not for allowing users to add new packages. Therefore, you should consider using a standard RHEL 7 server system to develop your applications (so you can add a full compliment of development and debugging tools), then use RHEL Atomic Host to deploy your containers into a variety of virtualization and cloud environment. That said, you can install a RHEL Atomic Host system and use it to run, build, stop, start, and otherwise work with containers using the examples shown in this topic. To do that, use the following procedure to get and install RHEL Atomic Host. Get RHEL Atomic Host: RHEL Atomic Host is available from the Red Hat Customer Portal. You have the option of running RHEL Atomic Host as a live image (in .qcow2format) or installing RHEL Atomic Host from an installation medium (in .iso format). You can get RHEL Atomic in those (and other formats) from here: RHEL Atomic Host Downloads Then follow the Red Hat Enterprise Linux Atomic Host Installation and Configuration Guide instructions for setting up Atomic to run in one of several different physical or virtual environments. Register RHEL Atomic Host: Once RHEL Atomic Host is installed, register the system using Subscription Management tools. (This will allow you to run atomic upgrade to upgrade Atomic software, but it won’t let you install additional packages using the yum command.) For example: # subscription-manager register --username=rhnuser \ --password=rhnpasswd --auto-attach IMPORTANT: Running containers with the docker command, as described in this topic, does not specifically require you to register the RHEL Atomic Host system and attach a subscription. However, if you want to run yum install commands within a container, the container must get valid subscription information from the RHEL Atomic Host or it will fail. If you need to enable repositories other than those enabled by default with the RHEL version the host is using, you should edit the /etc/yum.repos.d/redhat.repo file. You can do that manually within the container and set enabled=1 for the repository you want to use. You can also use yum-config-manager, a command-line tool for managing Yum repo files. You can use the following command to enable repos: # yum-config-manager --enable REPOSITORY You can also use yum-config-manager to display Yum global options, add repositories and others. yum-config-manager is documented in detail in the Red Hat Enterprise Linux 7 System Administrator’s Guide. Since redhat.repo is a big file and editing it manually can be error prone, it is recommended to use yum-config-manager. Start using Docker: RHEL Atomic Host comes with the docker package already installed and enabled. So, once you have logged in and subscribed your Atomic system, here is the status of docker and related software: - You can immediately begin running the docker command to work with docker images and containers. - The docker-distribution package is not installed. If you want to be able to pull and push images between your Atomic system and a private registry, you can install the docker-distribution package on a RHEL 7 system (as described next) and access that registry to store your own container images. - A set of kubernetes packages, used to orchestrate Docker containers, are installed on RHEL Atomic Host, but Kubernetes services are not enabled by default. You need to enable and start several Kubernetes-related services to be able to orchestrate containers in RHEL Atomic Host with Kubernetes. 1. - docker daemon settings: Another way to change how the docker service behaves is to changes settings that are passed to the docker daemon in the /etc/sysconfig/docker file. To see a list of options available with docker daemon, type docker daemon --help. The next section shows examples of docker daemon features you might want to change. 1.6. Modifying the docker daemon options (/etc/sysconfig/docker) When the docker daemon starts in RHEL or RHEL Atomic Host, it reads the settings in the /etc/sysconfig/docker file and adds them to the docker daemon command line. See available options by typing the following command: $ docker daemon --help The following are a few options you may want to consider adding to your /etc/sysonfig/docker file so that they are picked up when your docker daemon runs. 1.6.1. Default options The: OPTIONS='--selinux-enabled --log-driver=journald' 1.6.2. Registry options When asked to search for or pull images, the docker command uses the Docker registry (docker.io) to complete those activities. In RHEL and RHEL Atomic Host, this entry in the /etc/sysconfig/docker file causes the Red Hat registry (registry.access.redhat.com) to be used first: ADD_REGISTRY='--add-registry registry.access.redhat.com' If you wanted to add a private registry that you installed yourself, just add another ADD_REGISTRY. For example: ADD_REGISTRY='--add-registry myregistry.example.com' If you want to prevent users from pulling images from the Docker registry, uncomment the BLOCK_REGISTRY entry so it appears as follows: BLOCK_REGISTRY='--block-registry docker.io' To access a registry that uses https protocol for security, but is not set up with certificates for authentication, you can still access that registry by defining it as an insecure registry in the /etc/sysconfig/docker file. For example: INSECURE_REGISTRY='--insecure-registry newregistry.example.com' 1.6. Working with Docker registries A Docker registry provides a place to store and share docker containers that are saved as images that can be shared with other people. While you can build and store container images on your local system without installing a registry, or use the Docker Hub Registry to share your images with the world, installing a private registry lets you share your images with a private group of developers or users. With the registry software available with RHEL and RHEL Atomic Host, you can pull images from the Red Hat Customer Portal and push or pull images to and from your own private registry. You see what images are available to pull from the Red Hat Customer Portal (using docker pull) by searching the Red Hat Container Images Search Page. This section describes how to start up a local registry, load Docker images to your local registry, and use those images to start up docker containers. The version of the Docker Registry that is currently available with Red Hat Enterprise Linux is Docker Registry 2.0. 1.7.1. Creating a private Docker registry (optional) To create a private Docker registry you can use the docker-distribution service. You can install the docker-distribution package in RHEL 7 (it’s not available in Atomic) and enable and start the service as follows: Install docker-distribution: To install the docker-distribution package you must have enabled the rhel-7-server-extras-rpmsrepository (as described earlier). They you can install the package as follows: # yum install -y docker-distribution Enable and start the docker-distribution service: Type the following to enable, start and check the status of the docker-distribution service: # systemctl enable docker-distribution # systemctl start docker-distribution # systemctl status docker-distribution ● docker-distribution.service - v2 Registry server for Docker Loaded: loaded (/usr/lib/systemd/system/docker-distribution.service; disabled; vendor preset: disabled) Active: active (running) since Tue 2016-05-10 06:30:26 EDT; 1min 10s ago Main PID: 8923 (registry) CGroup: /system.slice/docker-distribution.service └─8923 /usr/bin/registry /etc/docker-distribution/registry/config.yml ... Registry firewall issues: The docker-distribution service listens on TCP port 5000, so access to that port must be open to allow clients outside of the local system to be able to use the registry. This applies regardless of whether you are running docker-distribution and docker on the same system or on different systems. You can open TCP port 5000 follows: # firewall-cmd --zone=public --add-port=5000/tcp # firewall-cmd --zone=public --add-port=5000/tcp --permanent # firewall-cmd --zone=public --list-ports 5000/tcp or if have enabled a firewall using iptables firewall rules directly, you could find a way to have the following command run each time you boot your system: iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5000 -j ACCEPT 1.7.2. Getting images from remote Docker registries To get Docker images from a remote registry (such as Red Hat’s own Docker registry) and add them to your local system, use the docker pull command: # docker pull <registry>[:<port>]/[<namespace>/]<name>:<tag> The <registry> is a host that provides the docker-distribution only Docker registry that Red Hat supports at the moment is the one at registry.access.redhat.com. If you have access to a Docker image that is stored as a tarball, you can load that image into your Docker registry from your local file system. docker pull: Use the pull option to pull an image from a remote registry. To pull the rhel base image from the Red Hat registry, type docker pull registry.access.redhat.com/rhel7/rhel. To make sure that the image originates from the Red Hat registry, type the hostname of the registry, a slash, and the image name. The following command demonstrates this and pulls the rhel image for the Red Hat Enterprise Linux 7 release from the Red Hat registry: # docker pull registry.access.redhat.com/rhel7/rhel An image is identified by a repository name (registry.access.redhat.com), a namespace name (rhel7) and the image name (rhel). You could also add a tag (which defaults to :latest if not entered). The repository name rhel, when passed to the dockerhel:latest, lets you choose the image more explicitly. To see the images that resulted from the above docker pull command, along with any other images on your system, type docker images: # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE registry.access.redhat.com/rhel7/rhel latest 95612a3264fc 6 weeks ago 203.3 MB/aep3_beta/aep-docker-registry latest 3c272743b20a 6 weeks ago 478.5 MB registry.access.redhat.com/rhel7/etcd latest c0a7c32e9eb9 9 weeks ago 241.7 MB docker load: If you have a container image stored as a tarball on your local file system, you can load that image tarball so you can run it with the docker command on your local system. Here is how: With the Docker image tarball in your current directory, you can load that tarball to the local system as follows: # docker load -i rhel-server-docker-7.2.x86_64.tar.gz To push that same image to the registry running on your localhost, tag the image with your hostname (or "localhost") plus the port number of the docker-distribution service (TCP port 5000). docker push uses that tag information to push the image to the proper registry: # docker tag bef54b8f8a2f localhost:5000/myrhel7 docker push localhost:5000/myrhel7 The push refers to a repository [localhost:5000/myrhel7] (len: 1) Sending image list Pushing repository localhost:5000/myrhel7 (1 tags) bef54b8f8a2f: Image successfully pushed latest: digest: sha256:7296465ccce190e08a71e6b2cfba56aa8279a1b329827c0f1016b80044c20cb9 size: 5458 ... 1.7.3. Investigating Docker images If images have been pulled or loaded into your local registry, you can use the docker command docker images to view those images. Here’s how to list the images on your local system: # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE/rhel7/rhel latest 95612a3264fc 6 weeks ago 203.3 MB The default option to push an image or repository to the upstream Docker.io registry (docker push) is disabled in the Red Hat version of the docker command. To push an image to a specific registry, identify the registry, its port number, and a tag that you designate in order to identify the image. 1.7.4. Investigating the Docker environment Now that you have the docker and docker-distribution services running, with a few containers available, you can start investigating the Docker environment and looking into what makes up a container. Run docker with the version and info options to get a feel for your Docker environment. docker version: The version option shows which versions of different Docker components are installed. # docker version Client: Version: 1.10.3 API version: 1.22 Package version: docker-common-1.10.3-44.el7.x86_64 Go version: go1.4.2 Git commit: 7ffc8ee-unsupported Built: Fri Jun 17 15:27:21 2016 OS/Arch: linux/amd64 Server: Version: 1.10.3 API version: 1.22 Package version: docker-common-1.10.3-44.el7.x86_64 Go version: go1.4.2 Git commit: 7ffc8ee-unsupported Built: Fri Jun 17 15:27:21 2016 OS/Arch: linux/amd64 docker info: The info option lets you see the locations of different components, such as how many local containers and images there are, as well as information on the size and location of Docker storage areas. # docker info Containers: 9 Images: 25 Server Version: 1.10.3 Storage Driver: devicemapper Pool Name: docker-253:0-21214-pool Pool Blocksize: 65.54 kB Base Device Size: 107.4 GB Backing Filesystem: Data file: /dev/loop0 Metadata file: /dev/loop1 Data Space Used: 5.367 GB Data Space Total: 107.4 GB Data Space Available: 5.791 GB Metadata Space Used: 4.706 MB Metadata Space Total: 2.147 GB Metadata Space Available: 2.143.107-RHEL7 (2015-12-01) Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.10.0-327.18.2.el7.x86_64 Operating System: Red Hat Enterprise Linux Atomic Host 7.2 CPUs: 1 Total Memory: 1.907 GiB Name: atomic-7.2-12 ID: JSDA:MGJV:ALYX:N6RC:YXER:M4OJ:GYR2:GYQK:BPZX:GQOA:F476:WLQY 1.7.5. Working with Docker formatted containers Docker images that are now on your system (whether they have been run or not) can be managed in several ways. The docker run command lets you say which command to run in a container. Once a container is running, you can stop, start, and restart it. You can remove containers you no longer need (in fact you probably want to). Before you run an image, it is a good idea to investigate its contents. Investigate a container image After you pull an image to your local system and before you run it, it is a good idea to investigate that image. Reasons for investigating an image before you run it include: - Understanding what the image does - Checking that the image has the latest security patches - Seeing if the image opens any special privileges to the host system Tools (such as openscap) are being integrated with container tools to allow them to scan a container image before you run it. In the mean time, however, you can use docker inspect to get some docker inspect to see what command is executed when you run the container image, as well as other information. Here are examples of examining the rhel7/rhel and rhel7/rsyslog container images (with only snippets of information shown here): # docker inspect rhel7/rhel ... "Cmd": [ "/usr/bin/bash" ], "Image": "", "Volumes": null, "Entrypoint": null, ... # docker inspect rhel7/rsyslog "INSTALL": "docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/install.sh", "Name": "rhel7/rsyslog", ", "Release": "21", "UNINSTALL": "docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/uninstall.sh", "Vendor": "Red Hat, Inc.", "Version": "7.2", ... The rhel7/rhel container will execute the bash shell, if no other argument is given when you start it with docker run. If an Entrypoint were set, its value would be used instead of the Cmd value (and the value of Cmd would be used as an argument to the Entrypoint command). In the second example, the rhel7/rsyslog container image is meant to be run with the atomic command. The INSTALL, RUN, and UNINSTALL labels show that special privileges are open to the host system and selected volumes are mounted from the host when you do atomic install, atomic run, or atomic uninstall commands. Mount an image: Using the atomic command, mount the image to the host system to further investigate its contents. For example, to mount the rhel7/rhel container image to the /mnt directory locally, type the following: # atomic mount rhel7/rhel /mnt # ls /mnt bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr After the atomic mount, the contents of the rhel7/rhel container are accessible from the /mnt directory on the host. Use ls or other commands to explore the contents of the image. Check the image’s package list: To check the packages installed in the container, you can tell the rpm command to examine the packages installed on the file system you just made available to the /mnt directory: # rpm -qa --root /mnt | less You can step through the packages in the container or search for particular versions that may require updating. When you are done with that, you can browse the image’s file system for other software of interest. Unmount the image: When you are done investigating the image, you can unmount it as follows: # atomic umount /mnt In the near future, look for software scanning features, such as Openscap or Black Duck, to be available for scanning your container images. When they are, you will be able to use the atomic scan command to scan your images. Running Docker containers When you execute a docker run command, you essentially spin up and create a new container from a Docker image. That container consists of the contents of the image, plus features based on any additional options you pass on the docker run command line. The command you pass on the docker run command line sees the inside the container as its running environment so, by default, very little can be seen of the host system. For example, by default, the running applications sees: - The file system provided by the Docker image. - A new process table from inside the container (no processes from the host can be seen). - New network interfaces (by default, a separate docker network interface provides a private IP address to each container via DHCP). If you want to make a directory from the host available to the container, map network ports from the container to the host, limit the amount of memory the container can use, or expand the CPU shares available to the container, you can do those things from the docker run command line. Here are some examples of docker run command lines that enable different features. EXAMPLE #1 (Run a quick command): This docker command runs the ip addr show eth0 command to see address information for the eth0 network interface within a container that is generated from the RHEL image. Because this is a bare-bones container, we mount the /usr/sbin directory from the RHEL 7 host system for this demonstration (mounting is done by the -v option), because it contains the ip command we want to run. After the container runs the command, which shows the IP address (172.17.0.2/16) and other information about eth0, the container stops and is deleted (--rm). # docker run -v /usr/sbin:/usr/sbin \ --rm If you feel that this is a container you wanted to keep around and use again, consider assigning a name to it, so you can start it again later by name. For example, I named this container myipaddr: # docker run -v /usr/sbin:/usr/sbin \ --name=myipaddr # docker start -i myipaddr 22:. # docker run --rm registry.access.redhat.com/rhel7/rsyslog ls /root/buildinfo Dockerfile-rhel7-rsyslog-7.2-21 Now you know what the Dockerfile is called, you can list its contents: # docker run --rm registry.access.redhat.com/rhel7/rsyslog \ cat /root/buildinfo/Dockerfile-rhel7-rsyslog-7.2-21 FROM 6c3a84d798dc449313787502060b6d5b4694d7527d64a7c99ba199e3b2df834e MAINTAINER Red Hat, Inc. ENV container docker RUN yum -y update; yum -y install rsyslog; yum clean all LABEL BZComponent="rsyslog-docker" LABEL Name="rhel7/rsyslog" LABEL Version="7.2" LABEL Release="21" LABEL Architecture="x86_64" ... EXAMPLE #3 (Run a shell inside the container): Using a container to launch a bash shell lets you look inside the container and change the contents. Here, I set: # docker run --name=mybash -it rhel /bin/bash [root@49830c4f9cc4/]# Although there are very few applications available inside the base RHEL image, you can add more software using the yum command. With the shell open inside the container, run the following commands: [root@49830c4f9cc4/]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.2 (Maipo) [root@49830c4f9cc4/]# nmap bash: nmap: command not found [root@49830c4f9cc4/]# yum install -y nmap [root@49830c4f9cc4/]# # nmap 192.168.122.1 Starting Nmap 6.40 ( ) at 2016-05-10 08:55 EDT Nmap scan report for 192.168.122.1 Host is up (0.00042s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 53/tcp open domain 5000/tcp open upnp ... [root@49830c4f9cc4/]# exit Notice that the container is a RHEL 7.2 container. The nmap command is not included in the RHEL base image. However, you can install it with yum as shown above, then run it within that container. To leave the container, type exit. Although the container is no longer running once you exit, the container still exists with the new software package still installed. Use docker ps -a to list the container: # docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 49830c4f9cc4 rhel "/bin/bash" 2 minutes ago Exited (0) 2 seconds ago mybash ... You could start that container again using docker start with the -ai options. For example: # docker start -ai mybash [root@a0aee493a605/]#. # docker run --name="log_test" -v /dev/log:/dev/log --rm rhel logger "Testing logging to the host" # journalctl -b | grep Testing May 10 09:00:32 atomic-7.2-12 logger[15377]: Testing logging to the host Investigating from outside of a Docker container Let’s say you have one or more Docker containers running on your host. To work with containers from the host system, you can open a shell and try some of the following commands. docker ps: The ps option shows all containers that are currently running: # docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 0d8b2ded3af0 rhel:latest "/bin/bash" 10 minutes ago Up 3 minutes mybash If there are containers that are not running, but were not removed (--rm option), the containers are still hanging around and can be restarted. The docker ps -a command shows all containers, running or stopped. # docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92b7ed0c039b rhel:latest /bin/bash 2 days ago Exited (0) 2 days ago agitated_hopper eaa96236afa6 rhel:latest /bin/bash 2 days ago Exited (0) 2 days ago prickly_newton See the section "Working with Docker containers" for information on starting, stopping, and removing containers that exist on your system. docker inspect: To inspect the metadata of an existing container, use the docker inspect command. You can show all metadata or just selected metadata for the container. For example, to show all metadata for a selected container, type: # docker inspect mybash [{ "Labels": { "Architecture": "x86_64", "Authoritative_Registry": "registry.access.redhat.com", "BZComponent": "rhel-server-docker", "Build_Host": "rcm-img03.build.eng.bos.redhat.com", "Name": "rhel7/rhel", "Release": "56", "Vendor": "Red Hat, Inc.", "Version": "7.2" ... docker inspect --format: You can also use inspect to pull out particular pieces of information from a container. The information is stored in a hierarchy. So to see the container’s IP address (IPAddress under NetworkSettings), use the --format option and the identity of the container. For example: # docker inspect --format='{{.NetworkSettings.IPAddress}}' mybash 172.17.0.2 .HostConfig.PortBindings: # docker inspect --format='{{.State.Pid}}' mybash 5007 # docker inspect --format='{{.HostConfig.PortBindings}}' mybash map[8000/tcp:[map[HostIp: HostPort:8000]]] Investigating within a running Docker container To investigate within a running Docker container, you can use the docker exec command. With docker exec, you can run a command (such as /bin/bash) to enter a running Docker container process to investigate that container. The reason for using docker docker exec to look into a running container named myrhel_httpd, then look around inside that container. Launch a container: Launch a container such as the myrhel_httpd container described in Building an image from a Dockerfile or some other Docker container that you want to investigate. Type docker ps to make sure it is running: # docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1cd6aabf33d9 rhel_httpd:latest "/usr/sbin/httpd -DF 6 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp myrhel_httpd Enter the container with docker exec: Use the container ID or name to open a bash shell to access the running container. Then you can investigate the attributes of the container as follows: # docker exec -it myrhel_httpd /bin/bash [root@1cd6aabf33d9 /]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.2 (Maipo) [root@1cd6aabf33d9 /]# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 08:41 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND apache 7 1 0 08:41 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND ... root 12 0 0 08:54 ? 00:00:00 /bin/bash root 35 12 0 08:57 ? 00:00:00 ps -ef [root@1cd6aabf33d9 /]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/docker-253:0-540464... 99G 414M 93G 1% / tmpfs 977M 0 977M 0% /dev tmpfs 977M 0 977M 0% /sys/fs/cgroup tmpfs 977M 320K 977M 1% /run/secrets /dev/mapper/rhelah-root 14G 8.5G 5.2G 63% /etc/hosts shm 64M 0 64M 0% /dev/shm [root@1cd6aabf33d9 /]# uname -r 3.10.0-327.18.2.el7.x86_64 [root@1cd6aabf33d9 /]# rpm -qa | more redhat-release-server-7.2-9.el7.x86_64 filesystem-3.2-20.el7.x86_64 basesystem-10.0-7.el7.noarch ... bash-4.2# free -m total used free shared buff/cache available Mem: 1953 134 354 0 1464 1655 Swap: 1055 0 1055 [root@1cd6aabf33d9 /]# exit The commands just run from the bash shell (running inside the container) show you several things. The container holds a RHEL Server release 7.1 system. The process table (ps -ef) shows that the httpd command is process ID 1 (followed by five other httpd processes), /bin/bash is PID 12 and ps -ef is PID 35. Processes running in the host’s process table cannot be seen from within the container. The container’s file system consumes 414M of the 9.8G available root file system space. There is no separate kernel running in the container (uname -r shows the host system’s kernel: 3.10.0-229.1.2.el7.x86_64).. Starting containers: A docker container that doesn’t need to run interactively can start with only the start option and the container ID or name: # docker. # docker start -a -i agitated_hopper bash-4.2# exit Stopping containers: To stop a running container that is not attached to a terminal session, use the stop option and the container ID or number. For example: # docker stop myrhel_httpd myrhel_httpd The stop option sends a SIGTERM signal to terminate a running container. If the container doesn’t stop after a grace period (10 seconds by default), docker sends a SIGKILL signal. You could also use the docker kill command to kill a container (SIGKILL) or send a different signal to a container. Here’s an example of sending a SIGHUP signal to a container (if supported by the application, a SIGHUP causes the application to re-read its configuration files): # docker kill --signal="SIGHUP" myrhel_httpd Removing containers To see a list of containers that are still hanging around your system, run the docker ps -a command. To remove containers you no longer need, use the docker rm command, with the container ID or name as an option. Here is an example: # docker rm goofy_wozniak You can remove multiple containers on the same command line: # docker rm clever_yonath furious_shockley drunk_newton If you want to clear out all your containers, you could use a command like the following to remove all containers (not images) from your local system (make sure you mean it before you do this!): # docker rm $(docker ps -a -q) 1.7.6. Creating Docker images So far we have grabbed some existing docker container images and worked with them in various ways. To make the process of running the exact container you want less manual, you can create a Docker image from scratch or from a container you ran that combines an existing image with some other content or settings. Creating an image from a container The following procedure describes how to create a new image from an existing image (rhel:latest) and a set of packages you choose (in this case an Apache Web server, httpd). NOTE: For the current release, the default RHEL 7 container image you pull from Red Hat will be able to draw on RHEL 7 entitlements available from the RHEL or RHEL Atomic Host system. So, as long as your Docker host is properly subscribed and the repositories are enabled that you need to get the software you want in your container (and have Internet access from your Docker host), you should be able to install packages from RHEL 7 software repositories. Install httpd on a new container: Assuming you have loaded the rhel image from the Red Hat Customer Portal into your local system, and properly subscribed your host using Red Hat subscription management, the following command will: - Use that image as a base image - Get the latest versions of the currently installed packages (update) - Install the httpd package (along with any dependent packages) Clean out all yum temporary cache files # docker run -i rhel:latest /bin/bash -c "yum clean all; \ yum update -y; yum install -y httpd; yum clean all" Commit the new image: Get the new container’s ID or name (docker ps -l), then commit that container to your local repository. When you commit the container to a new image, you can add a comment (-m) and the author name (-a), along with a new name for the image (rhel_httpd). Then type docker images to see the new image in your list of images. # docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f6832df8da0a redhat/rhel7:0 /bin/bash -c 'yum cl About a minute ago Exited (0) 13 seconds ago backstabbing_ptolemy4 # docker commit -m "RHEL with httpd" -a "Chris Negus" f6832df8da0a rhel_httpd 630bd3ff318b8a5a63f1830e9902fec9a4ab9eade7238835fa6b7338edc988ac # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE rhel_httpd latest 630bd3ff318b 27 seconds ago 170.8 MB redhat/rhel latest e1f5733f050b 4 weeks ago 140.2 MB Run a container from new image: Using the image you just created, run the following docker run command to start the Web server (httpd) you just installed. For example: # docker run -d -p 8080:80 rhel_httpd:latest \ /usr/sbin/httpd -DFOREGROUND In the example just shown, the Apache Web server (httpd) is listening on port 80 on the container, which is mapped to port 8080 on the host. Check that container is working: To make sure the httpd server you just launched is available, you can try to get a file from that server. Either open a Web browser from the host to address or use a command-line utility, such as curl, to access the httpd server: # curl Building an image from a Dockerfile Once you understand how images and containers can be created from the command line, you can try building containers in a more permanent way. Building container images from Dockerfile files is by far the preferred way to create Docker formatted containers, as compared to modifying running containers and committing them to images. The procedure here involves creating a Dockerfile file that includes many of the features illustrated earlier: - Choosing a base image - Installing the packages needed for an Apache Web server (httpd) - Mapping the server’s port (TCP port 80) to a different port on the host (TCP port 8080) - Launching the Web server While many features for setting up a Docker development environment for RHEL 7 are in the works, there are some issues you should be aware of as you build your own docker containers: Entitlements: Here are a few issues associated with Red Hat entitlements as they relate to containers: - If you subscribe your Docker host system using Red Hat subscription manager, when you build a Docker image on that host, the build environment automatically has access to the same Red Hat software repositories you enabled on the host. - To make more repositories available when you build a container, you can enable those repositories on the host or within the container. - Because the subscription-manager command is not supported within a container, enabling a repo inside the /etc/yum.repos.d/redhat.repo file is one way to enable or disable repositories. Installing the yum-utils package in the container and running the yum-config-manager command is another. - If you build a RHEL 6 container on a RHEL 7 host, it will automatically pick up RHEL 6 versions of the repositories enabled on your host. - For more information on Red Hat entitlements within containers, refer to the Docker Entitlements solution. - Updates: Docker containers in Red Hat Enterprise Linux do not automatically include updated software packages. It is your responsibility to rebuild your Docker images on occasion to keep packages up to date or rebuild them immediately when critical updates are needed. The "RUN yum update -y" line shown in the Dockerfile example below is one way to update your packages each time the Docker image is rebuilt. Images: By default, docker build will use the most recent version of the base image you identify from your local cache. You may want to pull (docker pull command) the most recent version of an image from the remote Docker registry before you build your new image. If you want a specific instance of an image, make sure you identify the tag. For example, just asking for the image "centos" will pull the centos:latest image. If you wanted the image for CentOS 6, you should specifically pull the centos:centos6 image. Create project directories: On the host system where you have the docker and docker-distribution services running, create a directory for the project: # mkdir -p httpd-project # cd httpd-project Create the Dockerfile file: Open a file named Dockerfile using any text editor (such as vim Dockerfile). Assuming you have registered and subscribed your host RHEL 7 system, here’s an example of what the Dockerfile file might look like to build a Docker container for an httpd server: # My cool Docker image # Version 1 # If you loaded redhat-rhel-server-7.0-x86_64 to your local registry, uncomment this FROM line instead: # FROM registry.access.redhat.com/rhel # Pull the rhel image from the local registry FROM registry.access.redhat.com/rhel MAINTAINER Chris Negus USER root # Update image RUN yum update -y # Add httpd package. procps and iproute are only added to investigate the image later. RUN yum install httpd procps iproute -y RUN echo container.example.com > /etc/hostname # Create an index.html file RUN bash -c 'echo "Your Web server test is successful." >> /var/www/html/index.html' - Checking the Dockerfile syntax (optional): Red Hat offers a tool for checking a Dockerfile file on the Red Hat Customer Portal. If you like, you can go to the Linter for Dockerfile page and check your Dockerfile file before you build it. Build the image: To build the image from the Dockerfile file, you need to use the build option and identify the location of the Dockerfile file (in this case just a "." for the current directory): NOTE: Consider using the --no-cache option with docker build. Using --no-cache prevents the caching of each build layer, which can cause you to consume excessive disk space. # docker build -t rhel_httpd . Uploading context 2.56 kB Uploading context Step 0 : FROM registry.access.redhat.com/rhel ---> f5f7ddddef7d Step 1 : MAINTAINER Chris Negus ---> Running in 3c605e879c72 ---> 77828ebe8f6f Removing intermediate container 3c605e879c72 Step 2 : RUN yum update -y ---> Running in 9f45bb262dc6 ... ---> Running in f44ea9eb6155 ---> 6a532e340ccf Removing intermediate container f44ea9eb6155 Successfully built 6a532e340ccf Run the httpd server in the image: Use the following command to run the httpd server from the image you just build (named rhel_httpd in this example): # docker run -d -t --name=myrhel_httpd \ -p 80:80 -i rhel_httpd:latest \ /usr/sbin/httpd -DFOREGROUND Check that the server is running: From another terminal on the host, type the following to check that you can get access the httpd server: # netstat -tupln | grep 80 tcp6 0 0 :::80 :::* LISTEN 26137/docker-proxy # curl localhost:80 Your Web server test is successful. Tagging Images You can add names to images to make it more intuitive to understand what they contain. Using the docker tag command, you essentially add an alias to the image, that can consist of several parts. Those parts can include: registryhost/username/NAME:tag You can add just NAME if you like. For example: # docker tag 474ff279782b myrhel7 In the previous example, the rhel7 image had a image ID of 474ff279782b. Using docker tag, the name myrhel7 now also is attached to the image ID. So you could run this container by name (rhel7 or myrhel7) or by image ID. Notice that without adding a :tag to the name, it was assigned :latest as the tag. You could have set the tag to 7.2 as follows: # docker tag 474ff279782b myrhel7:7.2 To the beginning of the name, you can optionally add a user name and/or a registry name. The user name is actually the repository on Docker.io that relates to the user account that owns the repository. Tagging an image with a registry name was shown in the "Tagging Images" section earlier in this document. Here’s an example of adding a user name: # docker tag 474ff279782b cnegus/myrhel7 # docker images | grep 474ff279782b rhel7 latest 474ff279782b 7 months ago 139.6 MB myrhel7 latest 474ff279782b 7 months ago 139.6 MB myrhel7 7.1 474ff279782b 7 months ago 139.6 MB cnegus/myrhel7 latest 474ff279782b 7 months ago 139.6 MB Above, you can see all the image names assigned to the single image ID. Saving and Importing Images If you want to save a Docker image you created, you can use docker save to save the image to a tarball. After that, you can store it or send it to someone else, then reload the image later to reuse it. Here is an example of saving an image as a tarball: # docker save -o myrhel7.tar myrhel7:latest The myrhel7.tar file should now be stored in your current directory. Later, when you ready to reuse the tarball as a container image, you can import it to another docker environment as follows: # cat myrhel7.tar | docker import - cnegus/myrhel7 Removing Images To see a list of images that are on your system, run the docker images command. To remove images you no longer need, use the docker rmi command, with the image ID or name as an option. (You must stop any containers using an image before you can remove the image.) Here is an example: # docker rmi rhel You can remove multiple images on the same command line: # docker rmi rhel fedora If you want to clear out all your images, you could use a command like the following to remove all images from your local registry (make sure you mean it before you do this!): # docker rmi $(docker images -a -q) 1.8. Summary At this point, you should be able to get Red Hat Docker installed with the docker and docker-distribution services working. You should also have one or more Docker images to work with, as well as know how to run containers and build your own images. Chapter 2. Install and Deploy an Apache Web Server Container 2.1. Overview A Web server is one of the most basic examples used to illustrate how containers work. The procedure in this topic does the following: - Builds an Apache (httpd) Web server inside a container - Exposes the service on port 80 of the host - Serves a simple index.html file - Displays data from a backend server (needs additional MariaDB container described later) 2.2. Creating and running the Apache Web Server Container - Install system: Install a RHEL 7 or RHEL Atomic system that includes the docker package and start the docker service. Pull image: Pull the rhel7 image by typing the following: # docker pull rhel7:latest Create Directory to hold Dockerfile: Create a directory (named mywebcontainer) that will hold a file names Dockerfile and another named action. # mkdir ~/mywebcontainer # cd ~/mywebcontainer # touch action Dockerfile Create action CGI script: Create the action file in the ~/mywebcontainer directory, which will be used to get data from the backend database server container. This script assumes that the docker0interface later,: Create registry FROM rhel7:latest USER root MAINTAINER Maintainer_Name # Fix per RUN yum -y install deltarpm yum-utils --disablerepo=*-eus-* --disablerepo=*-htb-* *-sjis-*\ --disablerepo=*-ha-* --disablerepo=*-rt-* --disablerepo=*-lb-* --disablerepo=*-rs-* --disablerepo=*-sap-* RUN yum-config-manager --disable *-eus-* *-htb-* *-ha-* *-rt-* *-lb-* *-rs-* *-sap-* *-sjis* > mkdir /run/httpd ; /usr/sbin/httpd -D FOREGROUND: 2.3. Tips for this container Here are some tips to help you use the Web Server container: - Modify for MariaDB: To use this container with the MariaDB container (described later), 2.4. Attachments Chapter 3. Install and Deploy a MariaDB Container 3.1. Overview Using MariaDB, you can set up a basic database in a container that can be accessed by other applications. The procedure in this topic described later) - Offers tips on how to use and extend this container 3.2. Creating and running the MariaDB Database Server Container - Install system: Install a Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux Atomic Host system that includes the docker package and start the docker service. Pull image: Pull the rhel7 image by typing the following: # docker pull rhel7:latest Get tarball with supporting files: Download the tarball file attached to this article (mariadb_cont_2 Create the Dockerfile: Create the Dockerfile file shown below in the ~/mydbcontainer directory and modify it --disablerepo=*-eus-* --disablerepo=*-htb-* --disablerepo=*sjis* \ --disablerepo=*-ha-* --disablerepo=*-rt-* --disablerepo=*-lb-* \ --disablerepo=*-rs-* --disablerepo=*-sap-* RUN yum-config-manager --disable *-eus-* *-htb-* *-ha-* *-rt-* *-lb-* \ *-rs-* *-sap-* *-sjis-* > /dev/null # Add Mariahdb software RUN yum -y install net-tools mariadb-server # Set up Mariahdb database ADD gss_db.sql /tmp/gss_db.sql RUN /usr/libexec/mariadb-prepare-db-dir RUN test -d /var/run/mariadb || mkdir /var/run/mariadb; \ chmod 0777 /var/run/mariadb; \ test -d /var/run/mariadb || mkdir /var/run/mariadb; chmod 0777 /var/run/mariadb;/usr/bin/mysqld_safe --basedir=/usr - Modify gss_db.sql: Look at the gss_db.sql file in the ~/mydbcontainer directory and modify it as needed:.44-MariaDB?acL3YF31?X?FWbiiTIO2Kd6mysql_native_password Ctrl-C 3.3.. There are several "in ?" in this article. Is this expected? I have removed all instances of "in ?" in this article. I assume that these instances of "in ?" were placeholders that had not been heretofore replaced with coherent content. I replaced each instance of "in ?" with text that makes sense in context and preserves the sense of the surrounding material. internal RH commit information Commit: 2942760464acaa8ac029c90e1f9c2d4ef9354154 Repository: git@gitlab.cee.redhat.com:rhel-atomic-host-documentation/atomic-host-content.git Thanks Zac. Document has now been updated to reflect the changes. Hi, following exactly from this howto, I got an error while starting flanneld, Jan 12 15:36:29 master flanneld: E0112 15:36:29.589235 33367 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured There is a Red Hat extension to its docker daemon ("--add-registry") that is not present on Docker Inc's release. Is it functionally equivalent to the standard "--registry-mirror"? Both seem to "transparently" look for images at a private registry (doesn't matter if this registry is a indeed a mirror or not). I launched the pods but can't access the web from the host I don't see how it would be possible from the host since docker0 in 172.17.0.0/24 network. Under 3.6, step 11 calls for docker run -d [...] dbforweb. This fails with: ...very probably because of Docker Issue 8334 (closed): "Docker lost CMD information after export and import". Perpaps 3.3 step 10 should use docker save/load instead of export/import? EDIT: yes, save/load work much better. And if you're on a m1.tiny instance with low disk space you can probably do: (assumes you have ssh keys in place) db-service.yaml [1] has incorrect selectors in its specification. Instead of "selector: name: db" it should be "selector: app: db" [1] docker pull library/webwithdb:latest Trying to pull repository docker-registry.usersys.redhat.com/library/webwithdb ... failed Trying to pull repository docker.io/library/webwithdb ... not found Error: image library/webwithdb:latest not found Looks like the image is gone. In webserver-rc.yaml use image: "micah/webwithdb" instead of image: "webwithdb" to make work (at least temporary) if "micah/webwithdb" is not made for playing. in Chapter 4.5, Start the kubelet service to launch the Kubernetes service containers I don't know how to run the kubernetes service container, and how to apply the xxx.pod.json file. Can you help me ? Hey Jun, Thanks for this feedback. I will try and get you a response shortly. In the procedure, you create the json files in the /etc/kubernetes/manifest/ directory. By adding the "--config=/etc/kubernetes/manifests/" option to the /etc/kubernetes/kubelet file, when you start the kubelet service, the images identified in those json files should start up. If you did those steps, just typing "docker ps" should show all the kubernetes services running. thanks , I found the kubernetes services which is runing in container failed to run , it is caused by my server failed to pull "pause" image, badlly I'm located in China, so I replace "pause" with rhel7/pod-infrastructure:latest , finnally succeed, the workaroud is as following: cat /etc/kubernetes/kubelet KUBELET_ADDRESS="--address=0.0.0.0" KUBELET_HOSTNAME="--hostname_override=XXX" KUBELET_ARGS="--register-node=true --config=/etc/kubernetes/manifests --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest" KUBELET_API_SERVER="--api_servers="KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest" Great! I'm glad you got it working. Thanks for the feedback. In Section 6. I can't find the local-01.yaml and claim-01.yaml referenced (or where the examples folder is for that matter). Am I missing something? Thanks. Hey David, Thanks for reaching out. I will investigate and get back to you shortly. I haven't tried it yet, but it's possible the following files were used as local-01.yaml and claim-01.yaml. You can see if those work as I investigate further. We'll include the content of those files in this doc once we get them verified:local-01.yaml claim-01.yaml Thanks Chris and thanks for the patience David. The team is looking at this and the associated documentation for Kubernetes. We should have these updated in time for the next release of Atomic Host (7.2.6), around the first week of August. Thank you both for your answers. I will try with Chris suggestion for now and look forward to the updated docs in August. Hey David, Sorry for the delayed response. Although we updated the Kubernetes chapters in the first week of August, we are still testing the Kubernetes storage docs. Updates for that are coming soon. Thanks for your patience! Hey David, I just pushed an update to the Kubernetes Storage section that Chris had been working on. Here is the direct link (). I believe this should address the issues you encountered with the missing yaml files. Please let me know if it doesn't or if you have feedback for any other sections. Thanks for your patience! Hi Vikram, Today I finally got back to looking at the updated section and overall I am quite happy with it, here's a few small changes/suggestions: In section 6.2.3 it now suggests to run "setenforce 0" if there is a forbidden error. A cleaner solution would be to run "chcon -R -t svirt_sandbox_file_t /tmp/data01/" in the section where the /tmp/data01 folder and index.html gets created. In section 6.2.2 the pod.yaml file is copied twice. Cheers, David Thanks David. I will incorporate these fixes soon. Hi, I think there is a mistake in section 4.5.2. Configuring kubernetes services on the master. The guide never installs kubelet service because it is contained in the kubernetes-node package and NOT in the kubernetes-master so the point I think should be Then the example file for /etc/kubernetes/kubelet repeats twice the KUBELET_ARGS (not sure it supported). I will even add the option --register-schedulable=false to avoid this to become a full blown node on which pods are scheduled Final version should be something like this This in fact leads to the kubelet service to be enabled and started and related manifests to be deployed at reboot. Let me know and thanks. Thanks Luca - Going to review with the team and will get back to you next week. Thanks for reporting this. Thanks for the suggested changes, Luca. I made them both. Somehow the install command didn't get corrected when they split the kubernetes package into master and node. As for the KUBELET_ARGS value, you were right. It seems that the first one was ignored and only the second one was picked up. The revised chapter should appear in the next day or two. Thanks Chris. Hey Luca - the fixes described by Chris are now live starting from section 4.5.1. Thanks for letting us know about these issues, and if there are any more issues, please leave a new comment.
https://access.redhat.com/documentation/en/red-hat-enterprise-linux-atomic-host/version-7/getting-started-with-containers/index.html
CC-MAIN-2018-26
refinedweb
9,110
52.49
Argument Bingo!! Excerpted From Things to Say When You're Losing a Technical Argument 2001-01-05 16:42:57. By Mr. Bad That won't scale. That's been proven to be O(N^2) and we need a solution that's O(NlogN). There are, of course, various export limitations on that technology. The syntax is idiosyncratic. Unfortunately, the license would contaminate our standard. What? I don't speak your crazy moon-language. I don't think that's altogether clear. Please write it up in UML for me. Can you generate some USE CASES that would justify the change? It would probably be best if we deferred that until version 2.0. You've obviously ignored the various namespace issues. I don't think you're considering the performance trade-offs. Let's table this for now, and we'll talk about it one-on-one off-line. We need something that my mom can use. OK, but what about internationalization? Yes, but we're standardizing on XML.
http://www.w3.org/Encryption/2001/Minutes/0103-Boston/arguments-humor.html
CC-MAIN-2017-22
refinedweb
171
71.31
I program in SEVERAL, MANY, A-LOT of languages and I am constantly finding my self with no time to finish the project in that lkanguge or this one ect.. should I try to restrict my self to just 3-4 languages? This is a discussion on Restrict myself? within the A Brief History of Cprogramming.com forums, part of the Community Boards category; I program in SEVERAL, MANY, A-LOT of languages and I am constantly finding my self with no time to finish ... I program in SEVERAL, MANY, A-LOT of languages and I am constantly finding my self with no time to finish the project in that lkanguge or this one ect.. should I try to restrict my self to just 3-4 languages? "...since anyone who is anyone knows C..." -Peter Cellik I dont think we can really answer that, as its a matter of your personal opinion. Being that you dont have time, and that you asked, i'm assuming your not having fun. So my answer, yes. See the thing is once you have a concept of basic progmatic logic, you can program in any language. It just becomes a matter of having a reference material to help you along with syntax.See the thing is once you have a concept of basic progmatic logic, you can program in any language. It just becomes a matter of having a reference material to help you along with syntax.Originally posted by KrAzY CrAb I program in SEVERAL, MANY, A-LOT of languages "SEVERAL, MANY, A-LOT" will not impress me, mister. c++->visualc++->directx->opengl->c++; (it should be realized my posts are all in a light hearted manner. And should not be taken offense to.) oh wow! another one juuust like meeee. yah I to know so many languages. Html, javascript, qbasic, c, english ect. I'm sooo special. -------------------------- dude get a grip. even if you actualy do know the santax for multiple actual programming languages that does no mean you know the language. pick a single language. or at least only a couple. then focus your attention on mastering that language. if a contradiction was contradicted would that contradition contradict the origional crontradiction? Like everyone else here, I think you're extremely 1337. Kudos.Like everyone else here, I think you're extremely 1337. Kudos.Originally posted by KrAzY CrAb SEVERAL, MANY, A-LOT Seriously, though - drop pointless languages... C-- would be a start - I can't imagine why you'd use that for any other reason than saying "I know more languages than you, making me the victor." Dissata: "I know... English" "santax" Liar I wasnt bragging.. I'm not like many here who try to put others down. But I know many languages (Why would I lie and say i only know 1?) What is the point of replying to this message if you dont plan to help in anyway? And I use C-- because it makes small exes that are faster than anything you'd ever see, in many cases smaller than what many can do in asm. Last edited by KrAzY CrAb; 03-12-2003 at 10:00 AM. "...since anyone who is anyone knows C..." -Peter Cellik > in many cases smaller than what many can do in asm. How? I'm glad I'm not the only one that searched the boards for his old posts... The whole "C# is a tool of the Great Satan intended to make Sun give up computers" made me wonder. >>> How can a tool that automatically generates assembler code from human instructions possibly be faster than proper human-written assembler code? <<< A modern compiler is optimised to make use of the capabilities of the underlying hardware. It is aware of the cacheing, pipelining, etc. It is frequently the case that a hand written assembler program which follows classical procedural assembler style would be outperformed by a well written high level language program when using a GOOD compiler. Of course, if the writer of the assembler code is fully conversent with the hardware and all the tricks and techniques that the compiler uses, he/she should be able to write code that performs at least as well, and possibly, if he/she is a true expert, a little better. Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream. I didnt mention the "i know lots and lots" and crap line because i thought i'd get flamed....damn missed opportunity! And for the record, html doesn't count :P Code:1)Visual Basic: msgbox "Hello World" 2)C++: #include<iostream.h> #include<conio.h> int main() { cout<<"Hello World"; getch(); } 3)Euphoria: puts(1,"Hello World") 4)Pascal: PROGRAM hola; uses crt; begin clrscr; write('Hello World'); end. 5)HTML: <H1>Hello World</H1> 6)JavaScript: <SCRIPT LANGUAJE="JAVASCRIPT"> document.write("Hello World"); </SCRIPT> 7)QBasic: PRINT "Hello World" 8)VisualBasic Script: <script language="vbscript"> msgbox "Hello World" </script> 9)Java: import java.awt.*; import java.applet.*; public class HelloWorld extends Applet { Label Msg; public void init() { Msg = new Label(); Msg.setText("Hello World"); add(Msg); } } 10)Visual C++: #include "stdafx.h" int main() { printf("Hello World\n"); return 0; } 11)ABC: WRITE "Hello World" 12)Lisp: print "Hello World" 13)Liberty Basic print "Hello World" 14)Berkeley Logo: print "Hello print "World 15)Python: print "Hello World" 16)Perl: #!/usr/bin/perl print "Hello World!", "\n"; 17)Delphi: ShowMessage ('Hello World'); 18)Rapid-Q: PRINT "Hello World" 19)BASEC: PRINT "Hello World" 20)D++: screenput"Hello World"; 21)Chipmunk BASIC: PRINT "Hello World" 22)Johnny's Experimental Language: PRINT "Hello World" 23)Envelop: Label1.Text = "Hello World" 24)InterBASIC: Form() Width:5250 Height:2565 Caption:New Application CreateCtrl Label(1) Left:0 Top:0 Width:1000 Height:200 Visible:True Caption:Hello World 25)BF: >+++++++++[<++++++++>-]<.---.+++++++..+++.>++++++++[<------>-]<+.>+++++++++[<++ ++++>-]<+.--------.+++.------.--------.>+++++++[<----->-]<.[-]++++++++++. 26)TURBO PL: Pout "Hello World". LOL! Go BrainF***!LOL! Go BrainF***!25)BF: >+++++++++[<++++++++>-]<.---.+++++++..+++.>++++++++[<------>-]<+.>+++++++++[<++ ++++>-]<+.--------.+++.------.--------.>+++++++[<----->-]<.[-]++++++++++. AppleSoft Basic ?"Hello! Wow thats alot of different ways to say hello world! But yea just cause you know the basic syntax to a language doesn't mean you know the language. I don't think you know a language until you've actually written a useful program in it like an ftp client, a nice text editor etc. Something that someone out there would actually use. Like me I know the C syntax but all my programs are complete garbage and really just practice so that some day maybe I can make that one program that someone will use. So I can say "yea I know C", but do I really? Nope not really because I haven't made a useful program yet. Anyways I've read that once you really "know" a language well you can pick up other languages in a matter 2-3 weeks and start to make useful stuff in it, is this true? Yes, it is true. The language that actually got me used to programming and allowed me to pick more up was BASEC (BASic Emulation Compiler) its very close to QB in syntax. I learned alo of rules about programming in that language. Also, I dont remeber asking people if they beleived I knew alot of languages. I merely asked if I should restrict myself.
http://cboard.cprogramming.com/brief-history-cprogramming-com/35878-restrict-myself.html
CC-MAIN-2014-23
refinedweb
1,215
65.01
Parameter types substitute the basic types for a more convenient templatised structure when used within a class. Ideally the only way any member variable should be modified is via member functions of the class itself. When you start allowing member variables to be extracted and modified outside a class, you inevitably lose control of the class (and in practice this happens by lazy/tired programmers more often than not) and bugs start creeping in. The usual way to implement this is with get and set methods providing access. However, this can get somewhat verbose for simple classes: class A { public: A() {}; int getCounter() const { return m_counter; } void setCounter (const int& counter) { m_counter = counter; } private: int m_counter; }; This can be simplified with the use of templates and the () operator resulting in the Parameter class defined here. Include the following at the top of any translation unit that requires compilation of class that uses parameters. #include <ecl/utilities.hpp> using ecl::Parameter; Since it is a template class, no linking is required if you are only using this class. The parameter implementation of a class A (compare with the set/get implementation described earlier) looks like: class A { public: A() {}; Parameter<int> counter; }; Here access to the variable is done very similarly (but without the verbosity of set/get). int main() { A a; a.counter(1); // <-- Sets the variable cout << a.counter(); << endl; // <-- Gets the variable } Note that even though Parameter<int> is defined publicly, the contents are still privately allocated, but here they are tucked away in Parameter's internal implementation. There are also convenience methods if the above usage is undesirable. a.counter = 1; int a_copy = a.counter; The difference between these and public variables is that these can never be set outside of the class. The reason - regualar public variables can be hooked by a pointer/reference and then passed off to another function where the variable can be modified...well outside of the control of the original class. In this case here, the only hooks that can be made to the variable are const pointers or const references. Thus, even though the notation is the same, these parameter variables are protected by their constness.
http://docs.ros.org/indigo/api/ecl_utilities/html/parametersGuide.html
CC-MAIN-2020-10
refinedweb
366
51.78
op<-option() on.exit(options(op)) options(abc = newvalue) ... some calculation ... on.exit()There are several problems with this approach: on.exitmechanism applies to a function as a whole. An alternative approach is to adapt the mechanism used in MzScheme. They call their dynamic variables parameters, but that is clearly not a good choice for R. I will call them dynamic variables. This note presents a prototype implementation of dynamic variables. It could easily be merged into the internal R context mechanism to make it more efficient, and I do not think it would be too hard to implement in S-plus. The implementation is available as a package. The interface consists of three functions, dynamic.variable, dynamic.bind, and dynamic.bind.list. dynamic.variable creates a new dynamic variable and gives it an initial binding with the single argument to dynamic.variable as its value. For example, v <- dynamic.variable(1)creates a dynamic variable with initial value 1. Dynamic variable are in fact functions. Their current values are obtained by calling them with no argument, and the value of their current binding is changed by calling them with the new value as a an argument: > v() [1] 1 > v(2) > v() [1] 2 dynamic.bind is called as dynamic.bind(expr, dv1 = a1, dv2 = a2, ...)It evaluates exprand returns the result. During the evaluation, the dynamic variable dv1is bound to the value of a1, dv2to a2, and so on. After the evaluation of exprand before returning the result, the previous bindings of the dynamic variables are restored. For example, > v() [1] 2 > dynamic.bind(v(), v = 3) [1] 3 > v() [1] 2 Changing the value of a dynamic variable within a dynamic.bind only changes the current binding, not any enclosing bindings: > v() [1] 2 > dynamic.bind({cat("v =", v(),"\n"); v(4); cat("v =", v(), "\n")}, v = 3) v = 3 v = 4 > v() [1] 2 dynamic.bind.list is a lower level version of dynamic.bind that allows the variables to use to be computed. It is called as dynamic.bind.list(expr, list.of.variables, lits.of values)For example, > v() [1] 2 > dynamic.bind.list(v(), list(v), list(3)) [1] 3 > v() [1] 2 dynamic.bind.listis most useful for implementing higher level functions like dynamic.bind. By default, dynamic variables are created with a unique identifier that should insure that saving a variable in two different workspaces and restoring it again will produce the same variable from both workspaces. There may be times however, when we want to be able to create the same variable by evaluating two separate expressions. To achieve this, we can give the variable a name when we create it: w <- dynamic.variable(1, name = "fred")evaluating the constructor expression again will create the same variable: > u <- dynamic.variable(1, name = "fred") Warning message: dynamic variable "fred" already exists in: dynamic.variable(1, name = "fred") > w() [1] 1 > u() [1] 1 > u(2) > u() [1] 2 > w() [1] 2 This should be useful for dynamic variables defined in packages. It does raise a problem of possible name conflicts. It may be useful to provide some explicit support for dynamic variables as part of the namespace system; I need to think about that a bit. Until that is resolved, a useful convention would be to define a dynamic variable foo in package bar to have name bar::foo, foo <- dynamic.variable(name = "bar::foo") The .First.lib package initialization routine is given two arguments, libname and pkgname. Occasionally it is useful to also have the environment frame the package is being loaded into. This information can be computed, but this is awkward and would not work with the name space mechanism I proposed. An alternative would be to provide a dynamic variable, perhaps libraryfor running the .First.libfunction to if (exists(".First.lib", envir = env, inherits = FALSE)) { firstlib <- get(".First.lib", envir = env, inherits = FALSE) tt <- try(dynamic.bind(firstlib(which.lib.loc, package), loading.package.frame = env)) ...A .First.libfunction can then be defined as .First.lib <- function(lib, pkg) { frame <- loading.package.frame() ... } user.callback = dynamic.variable() do.callback <- function(x) { fun <- user.callback() fun(x) } myopt <- function(fun, x) x <- as.double(x) n <- as.integer(length(x)) dynamic.bind(.C("myopt", x, n), user.callback = fun)with the C code double callback(double *x, int n) { /* call R function do.callback with argument vector x */ ... } void myopt(double *x, int *n) { ... Copt(callback, x, *n) ... } on.exitsetting to insure that the options are properly reset on error. One way to deal with this would be to represent options as dynamic variables. Thus options(foo=x) would create a new dynamic variable if the option does not exist and set it if one does exist. options() and getOption would retrieve the values of the dynamic variables. Suppose we used this approach and had a function getOptionDynvar for retrieving the variable. Then we could, for example, use showerr <- getOptionDynvar("show.error.messages") dynamic.bind(try(expr), showerr = FALSE)to evaluate a tryexpression with error printing turned off. To make this sort of thing more convenient we could use dynamic.bind.list to define a function with.options to allow the previous example to use with.options(try(expr), show.error.messages = FALSE)Assuming a function get.options.variablesthat returns the dynamic variables corresponding to a vector of opitons names, we could define with.optionssomething like < with.optionsdefinition>= with.options <- function(expr, ...) { values <- list(...) variables <- get.options.variables(names(values)) dynamic.bind.list(expr, variables, values) } Defines with.options(links are to index). Switching to using dynamic variables for options would require us to get rid of the .Options vector or to make it a sort of special object that prints like a vector and has its [, [[, $, and the corresponding assignment mechods defined to use the dynamic variables. I don't think this would be too hard to do, but I have not thought it through completely. One trick for preserving the .Options variable would be to define it something like this: <possible .Optionsdefinition>= \begin{verbatim} .Options <- local({ optfun <- function() { assign(".Options", delay(optfun()), env=NULL) options() } delay(optfun()) }) Defines .Options(links are to index). This installs a promise which reinstalls itself before returning the result of options(). sinkfunction redirects output to a specified connection. In a threaded environment this should only be done for the current thread; similarly for an event handler this should only affect the event handler context. One way to manage the context where output is redirected would be to have dynamic variables representing the standard connections, say input.connection <- dynamic.variable(getConnection(0)) output.connection <- dynamic.variable(getConnection(1)) error.connection <- dynamic.variable(getConnection(2)) sink(file)would then internally do the equivalent of output.connection(file)to change the dynamic binding of the output connection. Code writing output would get the connection to use by con <- output.connection() ... write to con ... sink also manages a stack of redirections. This could again be handled with a dynamic variable, since it would usually make sense for the redirection stack to be specific to a single thread or execution context. In a threaded context, the intent is that dynamic bindings created with dynamic.bind should only be visible in the current thread. It might be useful to be able to mark a dynamic variable as bind-only. That is, changing the value with v(x) is not allowed but creating a new binding for v with dynamic.bind is permitted. When a dynamic variable is stored in a workspace, the value saved is the initial global value. I'm not sure if it would be more appropriate to save the current value, or maybe have a mechanism for choosing, but at the moment this would be difficult to implement. There is also a small possibility that restoring a dynamic variable from a saved workspace will fail. <dynvars.R>= <global variables> <internal functions> <public functions> The corresponding NAMESPACE file would be <NAMESPACE>= export(dynamic.variable, dynamic.bind, dynamic.bind.list) The bindings of dynamic variables are stored in environments. Every dynamic variable has a global binding in the environment dynvars.database. This environment is created with attach/ detach to insure that it is hashed. <global variables>= (<-U) dynvars.database <- local({ env <- attach(NULL); detach(2); env}) Defines dynvars.database(links are to index). Dynamic bindings are created using deep binding by adding new environment frames onto an existing dynamic environment. <internal functions>= (<-U) [D->] new.dynamic.env <- function() eval(quote((function() environment())()), env=get.dynamic.env()) Defines new.dynamic.env(links are to index). When dynamic.bind creates a new dynamic environment, it stores it in a variable with a reasonably unique name in its frame. The current dynamic environment is thus either the value of the first variable by this name found on the frame stack or the global dynamic environment: <internal functions>+= (<-U) [<-D->] get.dynamic.env <- function() { name <- "__DYNVAR_ENV__" n <- sys.nframe() if (n > 1) for (i in (n-1):1) { env <- sys.frame(i) if (exists(name, env = env)) return(get(name, env = env)) } dynvars.database } Defines get.dynamic.env(links are to index). dynamic.bind first forces the evaluation of the ... argument to insure that its expressions are evaluated in the calling dynamic environment. Next, it creates and installs a new dynamic binding frame. It then gets the dynamic variables specified by the names in the ... argument and binds them in the new dynamic environment to the specified values. Then expr is evaluated and its result is returned. On return the new dynamic environment goes out of scope and thus the previous environment is restored. The mechanism for creating new bindings for dynamic variables is explained below. <public functions>= (<-U) [D->] dynamic.bind <- function(expr, ...) { values <- list(...) "__DYNVAR_ENV__" <- denv <- new.dynamic.env() penv <- parent.frame() names <- names(values) for (i in seq(along = names)) get(names[[i]], env = penv)(values[[i]], dynamic.environment = denv) expr } Defines dynamic.bind(links are to index). dynamic.bind.list differs from dynamic.bind only in the way the variables and values are supplied. <public functions>+= (<-U) [<-D->] dynamic.bind.list <- function(expr, variables, values) { variables <- as.list(variables) # forces evaluation in the values <- as.list(values) # caller's dynamic context "__DYNVAR_ENV__" <- denv <- new.dynamic.env() for (i in seq(along = variables)) variables[[i]](values[[i]], dynamic.environment = denv) expr } Defines dynamic.bind.list(links are to index). dynamic.bind can be defined in terms of dynamic.bind.list but this requires allocating a list of variables. An internal implementation should avoid this. <alternate definition of dynamic.bind>= dynamic.bind <- function(expr, ...) { values <- list(...) variables <- lapply(names(values), get, env = parent.frame()) dynamic.bind.list(expr, variables, values) } Defines dynamic.bind(links are to index). Dynamic variables store their values under a name. Unless a name is supplied, a name is chosen that is constructed to be unique. For save/load to work uniqueness should be guaranteed across processes and machines. This is of course not perfectly achievable in any reasonable way, but the accepted way of getting close enough is to use a DCE universally unique identifier (UUID). Most systems have a way of generating these; most current UNIX/Linux system seems to have uuidgen and the libuuid library (FreeBSD and Mac OS X seem to be exceptions, but presumably we could get a libuuid for those). MS Windows may have UUID's directly as well (in fact Cygwin and the MinGW toolkit we use both seem to contain libuuid). Windows does have globally unique identifiers (GUID) which serve the same purpose and may actually be exactly the same, i.e. it may be that a UUID and a GUID are as likely to clash as two UUID's or two GUID's but I'm not sure. So the right way to do this would be something along the lines of <UUID version of make.dynvar.name>= make.dynvar.name <- function() paste("__DYNVAR__UUID__", system("uuidgen", TRUE), sep = "") Defines make.dynvar.name(links are to index). but using a call into libuuid instead of the system call. But all this requires some configuration adjustments and the like, so in the interim I'll use something less reliable but easier to implement: <internal functions>+= (<-U) [<-D->] make.dynvar.name <- function() { ur <- function() floor(runif(1,max=2^32-1)) repeat { name <- paste("__DYNVAR__",ur(), ur(), ur(), ur(), sep = "") if (! exists.dynvar.name(name)) return(name) } } Defines make.dynvar.name(links are to index). Either as part of this primitive implementation or as part of loading a variable from a saved workspace, we need to check if the name exists in the data base: <internal functions>+= (<-U) [<-D->] exists.dynvar.name <- function(name) exists(name, env = dynvars.database) Defines exists.dynvar.name(links are to index). Creating a dynamic variable involves finding a unique name for it (unless a name is supplied), initializing its global binding, and creating its function. A warning is given if a variable by the chosen name already exists. Users should only call the dynamic variable function with zero or one arguments. But dynamic.bind needs a way of getting the dynamic variable to set the value of its new binding, and this is done by allowing an environment to be passed with a named argument. The named argument is placed after ... to insure it will only be matched if supplied explicitly with that name. <public functions>+= (<-U) [<-D] dynamic.variable <- function(init = NULL, name = make.dynvar.name()) { if (exists(name, env = dynvars.database)) warning(paste("dynamic variable \"", name, "\" already exists",sep="")) assign(name, init, env = dynvars.database) f <- function(newval, ..., dynamic.environment) do.dynvar(name, newval, init, dynamic.environment) <hack to work around environment removal in library> f } Defines dynamic.variable(links are to index). Ideally we could just rely on capturing the name and initial values in the environment. Unfortunately library currently eliminates environments in functions when loading a package, so this will not work if a dynamic variable is created in a package. To work around this for now we can replace the body of the function by one where the appropriate values have been inserted with substitute. <hack to work around environment removal in library>= (<-U) body(f) <- substitute(do.dynvar(name, newval, quote(init), dynamic.environment), list(name = name, init = init)) The code portion of a dynamic variable is contained in do.dynvar. If a dynamic variable is restored from a saved workspace then its name will not be registered in the global dynamic environment. Ideally we should deal with this by running some de-serialization code at load time, but we do not yet have a mechanism for this. Instead, every use checks to make sure that there is a global definition available. If there is, then either the variable has already been initialized or there is a clash of the names. I don't think there is currently a sensible way to distinguish these two cases, so we could get a silent error here. Using names based on UUID's or GUID's would essentially eliminate this possibility. If no global definition is available then the variable has been loaded from a saved workspace but not yet initialized, so it is initialized with the initial value supplied to the dynamic.variable call that created the variable. The remainder of the code corresponds to the three types of calls that can be made to the variable: dynamic.bindto initialize the value of a new dynamic binding. <internal functions>+= (<-U) [<-D] do.dynvar <- function(name, value, init, dynamic.environment) { if (! exists.dynvar.name(name)) assign(name, init, env = dynvars.database) if (missing(value)) get(name, env = get.dynamic.env()) else if (missing(dynamic.environment)) assign(name, value, env = get.dynamic.env(), inherits = TRUE) else assign(name, value, env = dynamic.environment, inherits = FALSE) } Defines do.dynvar(links are to index). dynamic.bind>: D1 library>: U1, D2 .Optionsdefinition>: D1 make.dynvar.name>: D1 with.optionsdefinition>: D1
http://homepage.divms.uiowa.edu/~luke/R/exceptions/dynvars.html
CC-MAIN-2017-17
refinedweb
2,665
51.95
19 January 2011 14:25 [Source: ICIS news] LONDON (ICIS)--Dow intends to increase its European polyethylene (PE) prices by €90/tonne in February, with INEOS estimating targeting an increase of around €70/tonne, company sources said on Wednesday. “We will increase [PE] prices by €30/tonne over whatever happens to ethylene,” said an INEOS company source. The February ethylene monomer contract was expected to increase but a very wide price range was mooted by market sources. Some PE producers expected the monomer contract to rise by €30/tonne, while others talked of hikes close to €100/tonne. The January contract rose by €105/tonne, to €1,110/tonne FD (free delivered) NWE (northwest ?xml:namespace> January PE prices have risen in line with the ethylene contract, with some producers managing to also add some extra margin. “We have paid minimum increases of €110/tonne on all our PE in January,” said one PE buyer. Low density PE (LDPE) has increased by a minimum of €105/tonne, with some sellers reporting a rise of €120-130/tonne on top of December levels. Net levels were reported at €1,400-1,450/tonne FD NWE. Buyers said ExxonMobil was offering increases of €150/tonne on a take-it-or-leave-it basis. This was not confirmed by the company itself. PE availability was such that buyers would probably have to grin and bear it again in February. “February will be another done deal, and all this talk of the Chinese economy cooling is ridiculous,” said one of the producers. A large buyer agreed. “At the moment we don’t have much choice but to accept. It’s at least easier getting big increases back than little ones, but it takes time, a couple of months at least.” PE prices were now close to their record high in 2008, when net LDPE prices peaked at €1,500-1,530/tonne FD NWE. If producers got their way, February LDPE levels would not fall far short of this record high. PE is used in packaging, household goods and agricultural sectors. (
http://www.icis.com/Articles/2011/01/19/9427555/Europe-producers-mull-70-90tonne-hikes-for-Feb-PE.html
CC-MAIN-2013-48
refinedweb
347
62.98
This package helps you to reduce boilerplate while composing TEA-based (The Elm Architecture) applications using Cmd.map, Sub.map and Html.map. Glue is just thin abstraction over these functions so it's easy to plug it in and out. It's fair to say that Glue is an alternative to elm-parts, but uses a different approach (no better or worse) for composing isolated pieces/modules together. This package is highly experimental and might change a lot over time. Feedback and contributions to both code and documentation are very welcome. This package is not necessary designed for either code splitting or reuse but rather for state separation. State separation might or might not be important for reusing certain parts of application. Not everything is necessary stateful. For instance many UI parts can be express just by using view function to which you pass msg constructors ( view : msg -> Model -> Html msg for instance) and let consumer to manage its state. On the other hand some things like larger parts of applications or parts containing a lot of self-maintainable stateful logic can benefit from state isolation since it reduces state handling imposed on consumer of that module. Generally it's good rule application in Elm where some modules lives in isolation from others. The goals and features of this package are: updateand initfunctions. init update subscribeand view. Is as you would expect... $ elm-package install turboMaCk/glue The best place to start is probably to have a look at examples. In particular, you can find: TEA is an awesome way to write Html-based apps in Elm. However, not every application can be defined just in terms of single Model and Msg. Basic separation of Html.program is really nice but in some cases these functions and Model and Msg tend to grow pretty quickly in an unmanageable way so you need to start breaking things. There are many ways you can start. In particular rest of this document will focus just on separation of concerns. This technique is useful for isolating parts that really don't need know too much about each other. It helps to reduce number of things particular module is touching and limit number of things programmer has to reason about while adding or changing behaviour of such isolated part of system. In tea this is especially touching Msg type and update function. It's important to understand that init update view and subscriptions are all isolated functions connected via Html.program. In pure functional programming we're "never" really managing state ourselves but are rather composing functions that takes state as a data and produce new version of it ( update function in TEA). Now lets have a look on how we can use Cmd.map, Sub.map and Html.map for separation in TEA based app. We will nest init, update, subscriptions and view one into another and map them from child to parents types. Higher level top-level module only holds the Model of a child module ( SubModule) as a single value, and wraps its Msg inside one of its Msg constructors ( SubModuleMsg). Of course, init, update, and subscriptions also have to know how to work with this part of Model, and there you need Cmd.map, Html.map and Sub. Let's take a look at view and Html.map in action: view : Model -> Html Msg view model = Html.div [] [ ... , Html.map SubModuleMsg <| SubModule.view model.subModuleModel , ... ] You can use Cmd.map inside init as well? update, init, view subscriptionsclean from wiring logic. The most important type that TEA is built around is ( Model, Cmd Msg ). All we're missing is just a tiny abstraction that will make working with this pair easier. This is really the core idea of the whole Glue package. To simplify glueing of things together, the Glue type is introduced by this package. This is simply just a name-space for pure functions that defines interface between modules to which you can then refer by single name. Other functions within the Glue package use the Glue.Glue type as proxy to access these functions. This is how we can construct the Glue type for counter example: import Glue exposing (Glue) -- Counter module import Counter counter : Glue Model Counter.Model Msg Counter.Msg Counter.Msg counter = Glue.simple { msg = CounterMsg , get = .counterModel , set = \subModel model -> { model | counterModel = subModel } , init = \_ -> Counter.init , update = Counter.update , subscriptions = \_ -> Sub.none } All mappings from one types to another ( Model and Msg of parent/child) happens in here. Definition of this interface depends on API of child module ( Counter in this case). With Glue defined, we can go and integrate it with the parent. Based on the Glue type definition, we know we're expecting Model and Msg to be (at least) as follows: type alias Model = { counterModel : Counter.Model } type Msg = CounterMsg Counter.Msg And this is our init, update and view for this example: init : ( Model, Cmd Msg ) init = ( Model, Cmd.none ) |> Glue.init counter update : Msg -> Model -> ( Model, Cmd Msg ) update msg model = case msg of CounterMsg counterMsg -> ( model, Cmd.none ) |> Glue.update counter counterMsg view : Model -> Html Msg view = Glue.view counter Counter.view As you can see we're using just Glue.init, Glue.update and Glue.view in these functions to wire child module. A "polymorphic module" is what I call TEA modules that have to be integrated into some other app. Such a module has usually API like update : Config msg -> Model -> ( Model, Cmd msg ). These types of modules often performs child to parent communication but let's leave this detail for now. Basically these modules are using Cmd.map, Html.map, and Sub.map internally so you don't need to map these types in parent module or Glue type definition. To make Counter "polymorphic" we can start by adding one extra argument to its view function and use Html.map internally. Then we need to change type annotation of init and update to generic Cmd msg. Since both function are using just Cmd.none we don't need to change anything else but that. init : ( Model, Cmd msg ) update : msg -> Model -> ( Model, Cmd msg ) view : (Msg -> msg) -> Model -> Html msg view msg model = Html.map msg <| Html.div [] [ Html.button [ Html.Events.onClick Decrement ] [ Html.text "-" ] , Html.text <| toString model , Html.button [ Html.Events.onClick Increment ] [ Html.text "+" ] ] Note: As you can see view is now taking extra argument - function from Msg to parent's msg. In practice I usually recommend to use record with functions called Config msg which is much more extensible. Now we need to change Glue type definition in parent module to reflect the new API of Counter: counter : Glue Model Counter.Model Msg Counter.Msg Msg counter = Glue.poly { get = .counterModel , set = \subModel model -> { model | counterModel = subModel } , init = \_ -> Counter.init , update = Counter.update , subscriptions = \_ -> Sub.none } As you can see we've switch from Glue.simple constructor to Glue.poly one. Also type annotation of counter has changed. a is now Msg instead of Counter.Msg. This is because view now returns Html Msg rather then Html Counter.Msg. This also means we no longer need to supply msg since Glue.poly doesn't need it (we actually know this should be identity function). We also need to change parent's view since it's using Counter.view which is now changed: match it in parent. Anyway if you need to do such a thing you maybe made a mistake in separation design of state. Do these states really need to be separated? Using Cmd for communication with upper: import Cmd.Extra notify : (Int -> msg) -> Int -> Cmd msg notify msg count = Cmd.Extra.perform <| msg msg = let model = 0 in ( model, notify msg model ) update : (Int -> msg) -> Msg -> Model -> ( Model, Cmd msg ) update parentMsg msg model = let newModel = case msg of Increment -> model + 1 Decrement -> model - 1 in ( newModel, notify parentMsg newModel ) Now both init and update should now send Cmd after Model is updated. This is a breaking change to Counter's API so we need to change its integration as well. Since we want to actually use this message and do something with it let me first update the parent's Model and Msg: type alias Model = { max : Int , counter : Counter.Model } type Msg = CounterMsg Counter.Msg | CountChanged Int Because we've changed Model (added max : Int) we should change init and view of parent to: init : ( Model, Cmd Msg ) init = ( Model 0, Cmd.none ) |> Glue.init counterMsg CountChanged num -> if num > model.max then ( { model | max = num }, Cmd.none ) else ( model, Cmd.none ) As you can see we're setting max to received int if its greater than current value. Since the parent is ready to handle actions from Counter our last step is simply to update the Glue construction for the new APIs: counter : Glue Model Counter.Model Msg Counter.Msg Msg counter = Glue.poly { get = .counter , set = \subModel model -> { model | counter = subModel } , init = \_ -> Counter.init CountChanged , update = Counter.update CountChanged , subscriptions = \_ -> Sub.none } There we simply pass the parent's CountChanged constructor to the update and init functions of the child. See this complete example to learn more. BSD-3-Clause
https://package.frelm.org/repo/926/3.0.0
CC-MAIN-2020-16
refinedweb
1,543
67.76
KVM includes a reduced set of APIs from the Java 2 platform's java.io, java.lang, java.net, and java.util packages. It also includes a package called com.sun.kjava which includes user interface and event handling classes for writing applications. At the moment, it only appears to support the Palm OS. You can write a Palm application by subclassing com.sun.kjava.Spotlet, which provides callbacks for handling events. The KVM manages the event loop and forwards events to the Spotlet, which are handled by methods such as keyDown() and penDown(). A Spotlet first needs to register its event handlers with the register() method before it is able to receive events. This is very different from the traditional AWT/Swing event model. In fact, none of your AWT or Swing code will port to the current incarnation of the KVM. You unregister your event handlers with unregister(). You can compile your KVM apps for the Palm using your regular Java compiler, setting the classpath to use the KVM classes. But you'll need to use the utilities included with the KVM distribution in order to convert your programs into a format you can download to your Palm. These are the palm.database.MakePalmApp program and the palm.database.MakePalmDB program. MakePalmApp will convert a Java program into a Palm .prc file that will automatically load the KVM to run the program when invoked on the Palm. MakePalmDB allows you to add classes to the KVM's class database so that multiple KVM applications may share classes. The code below demonstrates a simple HelloWorld Spotlet which displays some text and an exit button: /*** * To compile adjust the bootclasspath to point to the KVM classes. * javac -bootclasspath kvmDR4.1_bin/api/classes.zip HelloWorld.java * To make a .prc file add the KVM tools to your classpath. * java -classpath kvmDR4.1_bin/tools/classes.zip \ * palm.database.MakePalmApp HelloWorld ***/ import com.sun.kjava.*; public class HelloWorld extends Spotlet { private Button __exitButton; static final String _HELLO_WORLD = "Hello World!"; static final Graphics _GRAPHICS = Graphics.getGraphics(); public HelloWorld() { __exitButton = new Button("Exit", 16, 144); _GRAPHICS.clearScreen(); paint(); } public void paint() { __exitButton.paint(); _GRAPHICS.drawString(_HELLO_WORLD, 48, 72); } public static void main(String[] args) { (new HelloWorld()).register(NO_EVENT_OPTIONS); } public void penDown(int x, int y) { if(__exitButton.pressed(x, y)) System.exit(0); } } Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/getHelpOn/10MinuteSolution/20450
CC-MAIN-2018-43
refinedweb
418
61.02
Hi, Is there a way for me to keep adding the next row of a 2d array to a file via load? (matlab's save had a very useful -append option). I managed to get 2d laod/save working, e.g. import numpy as N import pylab as P import bthom.utils as U # a numpy 2d array a = N.arange(12,dtype=N.float).reshape((3,4)) a[0][0] = N.pi # checking out the matploblib save/load stuff P.save("data.csv", a, fmt="%.4f", delimiter=";") aa = P.load("data.csv", delimiter= ";") x,y,z,w = P.load("data.csv", delimiter=";", unpack=True) The above took me longer than it perhaps should have b/c of advice I'd gotten elsewhere recommending trying to keep numpy and pylab separate when possible (to take advantage of all of numpy's features; it seems numpy doesn't even have the all-to-handy load/save functionality). When I try similar tricks to write one row at a time, I'm hosed in that the shape is gone: # checking out a way to keep appending fname = "data1.csv" U.clobber_file(fname) #this thing just ensures 0 bytes in file f = open(fname,"a") nrows,ncols = a.shape for i in range(nrows) : P.save(f, a[i,:], fmt="%d", delimiter=";") f.close() aaa = P.load("data1.csv", delimiter= ";") in particular: % cat data1.csv 3 1 2 4 <snip> 11 Thanks in advance, --b
https://discourse.matplotlib.org/t/saving-data-to-a-file/6561
CC-MAIN-2021-43
refinedweb
242
77.84
If you don’t use “with”, when does Python close files? The answer is: It depends. One of the first things that Python programmers learn is that you can easily read through the contents of an open file by iterating over it: f = open('/etc/passwd') for line in f: print(line) Note that the above code is possible because our file object “f” is an iterator. In other words, f knows how to behave inside of a loop — or any other iteration context, such as a list comprehension. Most of the students in my Python courses come from other programming languages, in which they are expected to close a file when they’re done using it. It thus doesn’t surprise me when, soon after I introduce them to files in Python, they ask how we’re expected to close them. The simplest answer is that we can explicitly close our file by invoking f.close(). Once we have done that, the object continues to exist — but we can no longer read from it, and the object’s printed representation will also indicate that the file has been closed: >>> f = open('/etc/passwd') >>> f <open file '/etc/passwd', mode 'r' at 0x10f023270> >>> f.read(5) '##\n# ' f.close() >>> f <closed file '/etc/passwd', mode 'r' at 0x10f023270> f.read(5) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-11-ef8add6ff846> in <module>() ----> 1 f.read(5) ValueError: I/O operation on closed file But here’s the thing: When I’m programming in Python, it’s pretty rare for me to explicitly invoke the “close” method on a file. Moreover, the odds are good that you probably don’t want or need to do so, either. The preferred, best-practice way of opening files is with the “with” statement, as in the following: with open('/etc/passwd') as f: for line in f: print(line) The “with” statement invokes what Python calls a “context manager” on f. That is, it assigns f to be the new file instance, pointing to the contents of /etc/passwd. Within the block of code opened by “with”, our file is open, and can be read from freely. However, once Python exits from the “with” block, the file is automatically closed. Trying to read from f after we have exited from the “with” block will result in the same ValueError exception that we saw above. Thus, by using “with”, you avoid the need to explicitly close files. Python does it for you, in a somewhat un-Pythonic way, magically, silently, and behind the scenes. But what if you don’t explicitly close the file? What if you’re a bit lazy, and neither use a “with” block nor invoke f.close()? When is the file closed? When should the file be closed? I ask this, because I have taught Python to many people over the years, and am convinced that trying to teach “with” and/or context managers, while also trying to teach many other topics, is more than students can absorb. While I touch on “with” in my introductory classes, I normally tell them that at this point in their careers, it’s fine to let Python close files, either when the reference count to the file object drops to zero, or when Python exits. In my free e-mail course about working with Python files, I took a similarly with-less view of things, and didn’t use it in all of my proposed solutions. Several people challenged me, saying that not using “with” is showing people a bad practice, and runs the risk of having data not saved to disk. I got enough e-mail on the subject to ask myself: When does Python close files, if we don’t explicitly do so ourselves or use a “with” block? That is, if I let the file close automatically, then what can I expect? My assumption was always that Python closes files when the object’s reference count drops to zero, and thus is garbage collected. This is hard to prove or check when we have opened a file for reading, but it’s trivially easy to check when we open a file for writing. That’s because when you write to a file, the contents aren’t immediately flushed to disk (unless you pass “False” as the third, optional argument to “open”), but are only flushed when the file is closed. I thus decided to conduct some experiments, to better understand what I can (and cannot) expect Python to do for me automatically. My experiment consisted of opening a file, writing some data to it, deleting the reference, and then exiting from Python. I was curious to know when the data would be written, if ever. My experiment looked like this: f = open('/tmp/output', 'w') f.write('abc\n') f.write('def\n') # check contents of /tmp/output (1) del(f) # check contents of /tmp/output (2) # exit from Python # check contents of /tmp/output (3) In my first experiment, conducted with Python 2.7.9 on my Mac, I can report that at stage (1) the file existed but was empty, and at stages (2) and (3), the file contained all of its contents. Thus, it would seem that in CPython 2.7, my original intuition was correct: When a file object is garbage collected, its __del__ (or the equivalent thereof) flushes and closes the file. And indeed, invoking “lsof” on my IPython process showed that the file was closed after the reference was removed. What about Python 3? I ran the above experiment under Python 3.4.2 on my Mac, and got identical results. Removing the final (well, only) reference to the file object resulted in the file being flushed and closed. This is good for 2.7 and 3.4. But what about alternative implementations, such as PyPy and Jython? Perhaps they do things differently. I thus tried the same experiment under PyPy 2.7.8. And this time, I got different results! Deleting the reference to our file object — that is, stage (2), did not result in the file’s contents being flushed to disk. I have to assume that this has to do with differences in the garbage collector, or something else that works differently in PyPy than in CPython. But if you’re running programs in PyPy, then you should definitely not expect files to be flushed and closed, just because the final reference pointing to them has gone out of scope. lsof showed that the file stuck around until the Python process exited. For fun, I decided to try Jython 2.7b3. And Jython exhibited the same behavior as PyPy. That is, exiting from Python did always ensure that the data was flushed from the buffers, and stored to disk. I repeated these experiments, but instead of writing “abc\n” and “def\n”, I wrote “abc\n” * 1000 and “def\n” * 1000. In the case of Python 2.7, nothing was written after the “abc\n” * 1000. But when I wrote “def\n” * 1000, the file contained 4096 bytes — which probably indicates the buffer size. Invoking del(f) to remove the reference to the file object resulted in its being flushed and closed, with a total of 8,000 bytes. So in the case of Python 2.7, the behavior is basically the same regardless of string size; the only difference is that if you exceed the size of the buffer, then some data will be written to disk before the final flush + close. In the case of Python 3, the behavior was different: No data was written after either of the 4,000 byte outputs written with f.write. But as soon as the reference was removed, the file was flushed and closed. This might point to a larger buffer size. But still, it means that removing the final reference to a file causes the file to be flushed and closed. In the case of PyPy and Jython, the behavior with a large file was the same as with a small one: The file was flushed and closed when the PyPy or Jython process exited, not when the last reference to the file object was removed. Just to double check, I also tried these using “with”. In all of these cases, it was easy to predict when the file would be flushed and closed: When the block exited, and the context manager fired the appropriate method behind the scenes. In other words: If you don’t use “with”, then your data isn’t necessarily in danger of disappearing — at least, not in simple simple situations. However, you cannot know for sure when the data will be saved — whether it’s when the final reference is removed, or when the program exits. If you’re assuming that files will be closed when functions return, because the only reference to the file is in a local variable, then you might be in for a surprise. And if you have multiple processes or threads writing to the same file, then you’re really going to want to be careful here. Perhaps this behavior could be specified better, and thus work similarly or identically on different platforms? Perhaps we could even see the start of a Python specification, rather than pointing to CPython and saying, “Yeah, whatever that version does is the right thing.” I still think that “with” and context managers are great. And I still think that it’s hard for newcomers to Python to understand what “with” does. But I also think that I’ll have to start warning new developers that if they decide to use alternative versions of Python, there are all sorts of weird edge cases that might not work identically to CPython, and that might bite them hard if they’re not careful. Enjoyed this article? Join more than 11,000 other developers who receive my free, weekly “Better developers” newsletter. Every Monday, you’ll get an article like this one about software development and Python: I just recently read this article. Thanks, it was helpful. I’m new and novice with Python. I wrote a couple of programs for our community radio station to write data to a file, say every 5 minutes. I used an open, write, and close sequence instead of the ‘with’. These programs have run reliably for months, but there is always reason to improve. RecentIy I noticed that when I opened a file with a text editor, the most recent line didn’t appear. I discovered that my code had “f.close”, not “f.close()”. The interesting thing is, that doesn’t raise an error on a Mac or Windows10 computer. I did a little testing to confirm the behavior Do you have any idea why Idle would be OK with “f.close” and what if anything “f.close” does. In the end you’ve got me converted, I changed over to “with”. Arrived here while myself trying to determine how best to approach this with beginner-intermediate students; my conclusion is that (a rarity for Python) the open() syntax / context managers etc is just confusing and might best be avoided altogether if possible unless the students really need the given knowledge. My approach will be to primarily teach without using `with` but will mention it briefly as a structure they may come across and briefly give the reasoning […] Reading entire file in Python The Python Tutorial – Input and Output If you don’t use “with”, when does Python close files? The answer is: It depends. […] Hi, I realized that one of the limitations of using the “with” is that you have to be careful if you have different pointer objects to the same filename and they are in different functions. In the above scenario, you actually have to explicitly close the file or else when you write to the file, it won’t be how you had intended for it to be, even despite having your print statement sequentially correct. However, this maybe due to the fact that I was using it in “a+” mode. […] 本网站用的阿里云ECS,推荐大家用。自己搞个学习研究也不错 本文由 伯乐在线 – 美洲豹 翻译,艾凌风 校稿。未经许可,禁止转载!英文出处:blog.lerner.co.il。欢迎加入翻译组。 […] The undefined nature of the result otherwise, is a perfect argument for using “with”. It would also constrain implementations unnecessarily and add to the baggage of the language, to specify what happens when you don’t explicitly close a file, or when it should be garbage collected. The simple answer is: “Don’t do that. If you want defined results, just close the file already.” Oh, and your captcha timeout is too short, it timed out while reading your article and writing this reply, and I nearly lost my reply. (Luckily my browser had saved it locally, so when I clicked on the back button, it was still there.) Have you thought about using Disqus instead of rolling your own comment system? A good point. As for the comment system, I’m just using what is built into WordPress, along with a plugin that does Captchas. I took a long time to include those, because I hate them so much, but after getting incredibly amounts of comment spam, I threw up my hands and decided to do that. Using Disqus isn’t a bad idea, but for now, the number of comments is low enough that I don’t see it as a major issue. (You could argue that this is why I have so few comments, of course…) Hello ! Very interesting article, thanks. I actually have a argument in favour of using `with` : if ever the program gets interrupted in an abnormal way, say by a system signal under Linux, the file won’t ever be written to disk. You can test this very easily: import os, signal open(‘/var/tmp/out.txt’, ‘w+’).write(‘WRITTEN\n’) os.kill(os.getpid(), signal.SIGTERM) Then check : ‘/var/tmp/out.txt’ will be empty. Regards. IronPython behaves the same as Jython and PyPy – the file is closed much later. This caused some problems for me, so I needed to either use a context manager or manually invoke close.
https://lerner.co.il/2015/01/18/dont-use-python-close-files-answer-depends/
CC-MAIN-2021-39
refinedweb
2,368
70.02
Adding React Components to a Laravel Application It's no secret that V. Want to learn how to use React? Check out our 4-part tutorial series which walks you through building a GIF search engine using React and Firebase! We're going to be building a very simple Laravel app to demonstrate how to use Elixir + Gulp (Laravel 5.3) or Mix + Webpack (Laravel 5.4) to pull in a React component. You can find the source code for the demo application here. To begin, we need to set up our application. You can follow the steps found in the official documentation or use Lambo to create a brand new Laravel project. You will also want to run php artisan make:auth to scaffold out an authentication system and php artisan migrate to migrate the database (after updating your .env file with proper credentials, of course). Setting up with Elixir To use Elixir, we need to install a few NPM dependencies; to do this, run npm install --save-dev react react-dom laravel-elixir-webpack-react. Next, head into the gulpfile.js at the root of your project and replace require('laravel-elixir-vue'); with require('laravel-elixir-webpack-react');. This package will use Webpack to transpile the ES2015 JSX used to build your React components into a single app.js file that can be served by your application. Setting up with Mix While Laravel Mix is set up to use Vue by default in Laravel 5.4, there is a mix.react() method that makes it very easy to get going with React in your projects. To install React and ReactDOM, run yarn add --dev react react-dom. Then, to compile those files using the React/JSX Webpack presets instead of the Vue ones, go into webpack.mix.js in the root of your file and change the mix.js('resources/assets/js/app.js', 'public/js') line to mix.react('resources/assets/js/app.js', 'public/js'). Finally, you'll need to go into resources/views/layouts/app.blade.php and update the app.css and app.js files to use the mix() method instead of asset(): <link href="{{ mix('css/app.css') }}" rel="stylesheet"> <script src="{{ mix('js/app.js') }}"></script> Building our application Let's build a simple React component just to make sure that our asset compilation is working. You can delete the Example.vue file in resources/assets/js/components and replace it with one called Example.js: resources/assets/js/components/Example.js import React, { Component } from 'react'; import ReactDOM from 'react-dom'; class Example extends Component { render() { return ( <div> <h1>Cool, it's working</h1> </div> ); } } export default Example; // We only want to try to render our component on pages that have a div with an ID // of "example"; otherwise, we will see an error in our console if (document.getElementById('example')) { ReactDOM.render(<Example />, document.getElementById('example')); } We will also need to update our Webpack entry point — namely, app.js — to import our new component. You can get rid of all the Vue-related code in that file and replace it with this: require('./bootstrap'); import Example from './components/Example'; And, finally, in resources/views/home.blade.php, replace the text "You are logged in!" inside the div with a class of panel-body with the following: <div id="example"></div> Now, you should be able to run gulp watch (Elixir) or yarn run start (Mix) to compile the assets. To finish your Laravel set up, register a user at http://<your-app-url>/register — we will be using this user data in our React component and placing it on the home.blade.php page, which (if you have run the Laravel auth scaffolding command) is only available if you have logged in. If you visit the /home page, you should see: For every main/parent React component you want to render on your page, you need to call ReactDOM.render() and then add an empty div with a matching id for it to live inside. Each of these will function like a mini-single page application within your Blade template file. Of course, it's not very useful to have a React component that only displays a <h1>. However, because your Laravel app is not aware of the React component — only the empty div that it lives within — you can't simply pass data into your JavaScript the way you would into your Blade template. If you want to get data into your React component, you have two options: 1) Use PHP to add variables to the global window object, which is accessible by your JavaScript code 2) Make a GET request to an API endpoint within your application We're going to be talking about both of these methods in this article, but let's start with the first one. A package that makes this very easy is Jeffrey Way's PHP-Vars-to-Js-Transformer; you can install it by running composer require laracasts/utilities. You will need to add the package's service provider to your config/app.php file and publish the default configuration using php artisan vendor:publish: config/app.php 'providers' => [ '...', Laracasts\Utilities\JavaScript\JavaScriptServiceProvider::class, ]; You will also need to create an empty footer.blade.php view within resources/views and include it in your app.blade.php layout file: resources/views/layouts/app.blade.php @yield('content') <!-- Scripts --> @include ('footer') <script src="/js/app.js"></script> </body> </html> Now, we can use a JavaScript facade to pass variables from our PHP to our JavaScript, and the PHP-Vars-to-Js-Transformer package will automatically inject them into our footer.blade.php. To test that it's working, let's pass our name along in the HomeController.php file: app/Http/Controllers/HomeController.php use JavaScript; class HomeController extends Controller { public function index() { JavaScript::put([ 'name' => Auth::user()->name ]); return view('home'); } } At this point, if we go into our Example React component, we can reference this global variable within our render() method: resources/assets/js/components/Example.js class Example extends Component { render() { return ( <div> <h1>Hey, { window.name }</h1> </div> ); } } If you run gulp watch (Elixir) or yarn run start (Mix) and visit the site, you should see the following: When you're using this method in a real application, you will probably want to go into config/javascript.php to update the location of your footer.blade.php file, and to namespace your global JavaScript variable (so that you're referencing, say, Tighten.user.name instead of window.name). As an alternative to this approach, we can make an API call from our React application to fetch a list of users. To do this, you'll first want to register another handful of users at http://<your-app-url>/register so we have actual data to pull. Next, add a basic endpoint in your API routes file that just returns all users: routes/api.php Route::get('/users', function () { return User::all(); }); Finally, we'll want to update our Example component to make an API call to fetch this data. While there are various HTTP request libraries you can pull in (such as superagent, which we used in our tutorial series), we are just going to use the basic fetch API that is now included in JavaScript for newer browsers: resources/assets/js/components/Example.js import React, { Component } from 'react'; import ReactDOM from 'react-dom'; class Example extends Component { constructor(props) { super(props); this.state = { users: [] } } componentDidMount() { fetch('/api/users') .then(response => { return response.json(); }) .then(users => { this.setState({ users }); }); } renderUsers() { return this.state.users.map(user => { return ( <tr> <td>{ user.id }</td> <td>{ user.name }</td> <td>{ user.email }</td> </tr> ); }) } render() { return ( <div> <h2>Hey, { window.name }</h2> <p>Here are the people using your application...</p> <table className="table"> <thead> <tr key={ user.id }> <th>ID</th> <th>Name</th> <th>Email</th> </tr> </thead> <tbody> { this.renderUsers() } </tbody> </table> </div> ); } } export default Example; if (document.getElementById('example')) { ReactDOM.render(<Example />, document.getElementById('example')); } Here, we're setting users on state to an empty array in our constructor and, once the component mounts, fetching the list of users from our /users API endpoint. Once this is complete and our state is updated, our component will render the users into a basic table. Now, we should see this: And that's it! Once you've included your parent component in your Blade template, you can treat it like any other React component, passing down props to embedded child components or making POST/PATCH/DELETE requests to manipulate data. If you want to include any other container parent components in your Blade templates, all you need to do is call ReactDOM.render() again, and you can include as many as you'd like. Want us to get more in-depth about anything we talked about in this article? Let us know on Twitter.
https://tighten.co/blog/adding-react-components-to-a-laravel-app
CC-MAIN-2018-47
refinedweb
1,496
56.76
How to use Web Components in React In this tutorial, you will learn how to use Web Components, alias Custom Elements, in React. If you want to get started to build your own Web Components before, check out this tutorial: Web Components Tutorial. Otherwise, we will install an external Web Component in this tutorial to use it in React. You will learn how to pass props as attributes/properties to your Custom Element and how to add event listeners for your Custom Element’s events in a React component. In the first step, you will pass props manually, however, afterward I will show you how to use a custom React Hook to automate this process. The custom React Hook is a library to bridge Web Components to React effortlessly. From React Components to Web Components: Attributes, Properties and Events Let’s say we wanted to use a premade Web Component which represents a Dropdown Component in a React component. We can import this Web Component and render it within our React component. import React from 'react'; import 'road-dropdown'; const Dropdown = props => { return <road-dropdown />; }; You can install the Web Component via npm install road-dropdown. So far, the React Component is only rendering the Custom Element, but no props are passed to it. It isn’t as simple as passing the props as attributes the following way, because you need to pass objects, arrays, and functions in a different way to Custom Elements. import React from 'react'; import 'road-dropdown'; const Dropdown = props => { // doesn't work for objects/arrays/functions return <road-dropdown {...props} />; }; Let’s see how our React component would be used in our React application to get to know the props that we need to pass to our Web Component: const props = { label: 'Label', option: 'option1', options: { option1: { label: 'Option 1' }, option2: { label: 'Option 2' }, }, onChange: value => console.log(value), }; return <Dropdown {...props} />; Passing the label and option property unchanged as attributes to our Web Components is fine: import React from 'react'; import 'road-dropdown'; const Dropdown = ({ label, option, options, onChange }) => { return ( <road-dropdown label={label} option={option} /> ); }; However, we need to do something about the options object and the onChange function, because they need to be adjusted and cannot be passed simply as attributes. Let’s start with the object: Similar to arrays, the object needs to be passed as JSON formatted string to the Web Component instead of a JavaScript object: import React from 'react'; import 'road-dropdown'; const Dropdown = ({ label, option, options, onChange }) => { return ( <road-dropdown label={label} option={option} options={JSON.stringify(options)} /> ); };: import React from 'react'; import 'road-dropdown'; const Dropdown = ({ label, option, options, onChange }) => { const ref = React.createRef(); React.useLayoutEffect(() => { const { current } = ref; current.addEventListener('onChange', customEvent => onChange(customEvent.detail) ); }, [ref]); return ( <road-dropdown ref={ref} label={label} option={option} options={JSON.stringify(options)} /> ); }; We are creating a reference for our Custom Element – which is passed as ref attribute to the Custom Element – to add an event listener in our React hook. Since we are dispatching a custom event from the custom dropdown element, we can register on this onChange event and propagate the information up with our own onChange handler from the props. A custom event comes with a detail property to send an optional payload with it. Note: If you would have a built-in DOM event (e.g. click or change event) in your Web Component, you could also register to this event. However, this Web Component already dispatches a custom event which matches the naming convention of React components. An improvement would be to extract the event listener callback function in order to remove the listener when the component unmounts. import React from 'react'; import 'road-dropdown'; const Dropdown = ({ label, option, options, onChange }) => { const ref = React.createRef(); React.useLayoutEffect(() => { const handleChange = customEvent => onChange(customEvent.detail); const { current } = ref; current.addEventListener('onChange', handleChange); return () => current.removeEventListener('onChange', handleChange); }, [ref]); return ( <road-dropdown ref={ref} label={label} option={option} options={JSON.stringify(options)} /> ); }; That’s it for adding an event listener for our callback function that is passed as prop to our Dropdown Component. Therefore, we used a reference attached to the Custom Element to register this event listener. All other properties are passed as attributes to the Custom Element. The option and label properties are passed without modification. In addition, we passed the options object as stringified JSON format. In the end, you should be able to use this Web Component in React now. React to Web Components Library The previous section has shown you how to wire Web Components into React Components yourself. However, this process could be automated with a wrapper that takes care about formatting objects and arrays to JSON and registering functions as event listeners. Let’s see how this works with the useCustomElement React Hook which can be installed via npm install use-custom-element: import React from 'react'; import 'road-dropdown'; import useCustomElement from 'use-custom-element'; const Dropdown = props => { const [customElementProps, ref] = useCustomElement(props); return <road-dropdown {...customElementProps} ref={ref} />; }; The custom hook gives us all the properties in a custom format by formatting all arrays and objects to JSON, keeping the strings, integers, and booleans intact, and removing the functions from the custom props. Instead, the functions will be registered as event listeners within the hook. Don’t forget to pass the ref attribute to your Web Component as well, because as you have seen before, it is needed to register all callback functions to the Web Component. If you want to know more about this custom hook to integrate Web Components in React, check out its documentation. There you can also see how to create a custom mapping for props to custom props, because you may want to map an onClick callback function from the props to a built-in click event in the Web Component. Also, if you have any feedback regarding this hook, let me know about it. In the end, if you are using this Web Components hook for your projects, support it by giving it a star. You have seen that it isn’t difficult to use Web Components in React Components. You only need to take care about the JSON formatting and the registering of event listeners. Afterward, everything should work out of the box. If you don’t want to do this tedious process yourself, you can use the custom hook for it. Let me know what you think about it in the
https://www.robinwieruch.de/react-web-components/
CC-MAIN-2019-35
refinedweb
1,081
52.19
Revision history for XML::Genx. 0.21 Wed Jun 7 07:45:55 BST 2006 - Add a missing definition in the header file in order to avoid a warning under -Wall. - Cope with non-UTF8 strings on input. If we see something that doesn't have the UTF-8 flag set, convert to UTF-8 before passing to genx. 0.20 Fri Feb 3 20:41:13 GMT 2006 - Fix compile warnings under gcc 4. Thanks to Daniel Jalkut for showing me the way to go. 0.19 Sun Oct 16 22:47:27 BST 2005 -.. 0.18 Mon Oct 3 22:56:09 BST 2005 - Add test for astral characters. - Test non-xml characters. 0.17 Fri Sep 2 20:34:46 BST 2005 - Correct MANIFEST. - Add POD coverage tests. And POD to XML::Genx::SAXWriter to satisfy them. - Fix a bug whereby an uninitialized status code would be used if you attempted to register the same element / namespace / attribute twice. 0.16 Wed Aug 31 20:55:45 BST 2005 - Add POD checking test. 0.15 Sun Jul 31 00:57:05 BST 2005 - Obfuscate my email address in an attempt to curtail spam. 0.14 Wed Mar 16 22:12:34 GMT 2005 - Correctly test for undef values. - Implement XML::Genx::SAXWriter, to enable genx to be used at the end of a SAX chain. 0.13 Wed Mar 2 09:52:44 GMT 2005 - [INCOMPATIBLE CHANGE] Return genx's status directly instead of storing it after the last error. Now it can only be relied upon until the next method call. - Ensure that Namespace, Element and Attribute objects report their errors correctly. - Warn about unknown build status under Win32. 0.12 Tue Mar 1 20:53:18 GMT 2005 - [INCOMPATIBLE CHANGE] Get rid of the dual valued scalar exception idea. The way it was implemented meant that you didn't get the line number information about where the exception actually happened. Instead, add a LastErrorCode() call. - Allude to Win32 compiler problems in README. - Remove some compiler warnings. - Cope with either a glob or a reference to a glob in StartDocFile(). 0.11 Sat Feb 19 18:17:12 GMT 2005 - Add ScrubText() wrapping. Thanks to A. Pagaltzis for requesting this feature. - Internal reorganisation to avoid global variables. 0.10 Thu Feb 17 20:40:03 GMT 2005 - Add a small benchmark. - Implement StartDocString() for the common case of outputting to a string. Thanks to A. Pagaltzis for suggesting this. 0.09 Tue Dec 14 22:51:25 GMT 2004 - Ensure that we take a reference to the filehandle being passed in to StartDocFile() so that it doesn't get closed behind our backs. 0.08 Sat Dec 4 22:52:56 GMT 2004 - Implement XML::Genx::Constants. - Make the thrown exceptionn be a dual valued scalar. - Cleanup declared attributes in XML::Genx::Simple. 0.07 Sat Dec 4 19:19:59 GMT 2004 - Add XML::Genx::Simple. - Make Declare*() die as well. - Make the sub-objects die correctly as well. 0.06 Fri Dec 3 14:10:35 GMT 2004 - On Windows, require the next version of Module::Build, which has a bug fix that we need in it. Unfortunately, that means that we won't be available on Windows until that's released. - Make a missing or undef prefix argument mean auto-create a prefix instead. - Add missing AddNamespace() on to namespace object. 0.05 Thu Dec 2 01:13:22 GMT 2004 - Fix the tests on systems where we get back a filehandle with an invalid file descriptor instead of NULL from the T_STDIO input typemap. 0.04 Tue Nov 30 23:14:44 GMT 2004 - Added example directory, and small demo script. - Make namespace optional on Declare{Element,Attribute}(). - Fix up the docs to mention AddAttributeLiteral(). - Make the namespace parameter in StartElementLiteral() optional. Ditto for AddAttributeLiteral(). 0.03 Tue Nov 30 13:44:28 GMT 2004 - Everything now dies by default instead of returning a status code. 0.02 Tue Nov 30 08:49:13 GMT 2004 - Added StartDocSender(), so you don't always have to output to a filehandle. - Fixed compilation warnings about "cast to pointer from integer of different size". 0.01 Sat Nov 27 09:03:47 GMT 2004 - Original version.
https://metacpan.org/changes/release/HDM/XML-Genx-0.21
CC-MAIN-2019-13
refinedweb
711
68.67
On Mon, Feb 5, 2018 at 7:28 AM, Waldemar Kozaczuk <jwkozac...@gmail.com> wrote: Advertising > This patch implements separate syscall call stack needed > when runtimes like Golang use SYSCALL instruction to make > system call. > > More specifically each application thread pre-alllocates > "tiny" (1024 bytes deep) syscall call stack. When SYSCALL > instruction is called the call stack is switched to the > tiny stack and the "large" stack is allocated if has not > been alllocated yet (first call on given thread). In the end > SYSCALL ends up being exexuted on the "large" stack. > > This patch is based on the original patch provided by @Hawxchen. > > Fixes #808 > > Signed-off-by: Waldemar Kozaczuk <jwkozac...@gmail.com> > --- > arch/x64/arch-switch.hh | 61 +++++++++++++++++++++++++++++++++++++ > arch/x64/arch-tls.hh | 9 ++++++ > arch/x64/entry.S | 81 ++++++++++++++++++++++++++++++ > +++---------------- > include/osv/sched.hh | 7 +++++ > linux.cc | 5 +++ > 5 files changed, 137 insertions(+), 26 deletions(-) > > diff --git a/arch/x64/arch-switch.hh b/arch/x64/arch-switch.hh > index d1a039a..f291a26 100644 > --- a/arch/x64/arch-switch.hh > +++ b/arch/x64/arch-switch.hh > @@ -11,6 +11,7 @@ > #include "msr.hh" > #include <osv/barrier.hh> > #include <string.h> > +#include <osv/mmu.hh> > Maybe this is not needed any more? > > extern "C" { > void thread_main(void); > @@ -114,6 +115,7 @@ void thread::init_stack() > _state.rip = reinterpret_cast<void*>(thread_main); > _state.rsp = stacktop; > _state.exception_stack = _arch.exception_stack + > sizeof(_arch.exception_stack); > + debug("***-> %d, Stack top: %#x\n", this->_id, stacktop); > Please remove this from the final version. > } > > void thread::setup_tcb() > @@ -146,6 +148,50 @@ void thread::setup_tcb() > _tcb = static_cast<thread_control_block*>(p + tls.size + > user_tls_size); > _tcb->self = _tcb; > _tcb->tls_base = p + user_tls_size; > + > + if(is_app()) { > + // > + // Allocate tiny syscall call stack > + auto& tiny_stack = _attr._tiny_syscall_stack; > I doubt we'll ever want to configure tiny_stack's size or location, so I don't think it needs to be configurable in _attr. Do you think it should? > + > + if(!tiny_stack.begin) { > + tiny_stack.size = 1024; //512 seems to be too small sometimes > + tiny_stack.begin = malloc(tiny_stack.size); > + tiny_stack.deleter = tiny_stack.default_deleter; > + } > + > + _tcb->tiny_syscall_stack_addr = tiny_stack.begin + > tiny_stack.size; > + _tcb->large_syscall_stack_addr = nullptr; > I think we don't really need two entries in the TCB here - we can have just one, which starts with the pointer to the tiny buffer and then replaced by the pointer to the bigger one after that is allocated. > + debug("***-> %d, Tiny syscall stack top: %#x with size: %d > bytes\n", this->_id, > + _tcb->tiny_syscall_stack_addr, tiny_stack.size); > Another debug to remove. > + } > +} > + > +void thread::setup_large_syscall_stack() > +{ > + auto& large_stack = _attr._large_syscall_stack; > + if(large_stack.begin) { > + return; > + } > Again, not sure we need this in _attr, would anybody need to configure it? (it's supposed to depend on OSv's needs, not the application's needs). > + > + large_stack.size = 16 * PAGE_SIZE; > + > + /* > + * Mmapping (without mprotect()) seems to work but occasionally leads > to crashes ( > + large_stack.begin = mmu::map_anon(nullptr, large_stack.size, > mmu::mmap_populate, mmu::perm_rw); > + //mmu::mprotect(large_stack.begin, PAGE_SIZE, 0); //THIS DOES NOT > WORK at all -> results in crash > Why would protect work? Doesn't it mean the page is not at all readable or writable, making it crash when used? I wonder why mmap didn't work. Should have... Unless someone tries to execute the stack? Or the mmap() call itself simply needs a bigger stack than 1024 bytes? > + large_stack.deleter = free_large_syscall_stack; > + */ > + > + large_stack.begin = malloc(large_stack.size); > + large_stack.deleter = large_stack.default_deleter; > + > + _tcb->large_syscall_stack_addr = large_stack.begin + large_stack.size; > Theoretically, we could free the old stack after setting up a new one. In practice, it will be ugly to do (how can a thread free the stack it is currently using?) so maybe not worth it. > +} > + > +void thread::free_large_syscall_stack(sched::thread::stack_info si) > +{ > + mmu::munmap(si.begin, si.size); > } > This function is not used (because the mmap() version is commented) and should be removed, right? > > void thread::free_tcb() > @@ -156,6 +202,21 @@ void thread::free_tcb() > } else { > free(_tcb->tls_base); > } > + > + if(is_app()) { > + auto& tiny_stack = _attr._tiny_syscall_stack; > + > + assert(tiny_stack.begin); > + > + if(tiny_stack.deleter) { > + tiny_stack.deleter(tiny_stack); > + } > + > + auto& large_stack = _attr._large_syscall_stack; > + if(large_stack.begin && large_stack.deleter) { > + large_stack.deleter(large_stack); > + } > + } > I think this could have been simpler - no need to check is_app(), just check if the syscall stack or stacks are set, and if they are, free them. Maybe can just check the _tcb for their address and you won't need it in _attr as well. > } > > void thread_main_c(thread* t) > diff --git a/arch/x64/arch-tls.hh b/arch/x64/arch-tls.hh > index 1bf86fd..64a1788 100644 > --- a/arch/x64/arch-tls.hh > +++ b/arch/x64/arch-tls.hh > @@ -8,9 +8,18 @@ > #ifndef ARCH_TLS_HH > #define ARCH_TLS_HH > > +// Don't change the declaration sequence of all existing members'. > +// Please add new members from the last. >; > Again, I think one would be enough, but then we need to remember the tiny one for freeing later if we don't do it immediately. You'll also need to know if the large stack was already created, e.g., perhaps by storing a flag in the beginning of the stack or something (e.g., the tiny stack could start in 0, the first word of the large stack will be set to 1 on every return from the system call, Or some other trick...). > }; > > #endif /* ARCH_TLS_HH */ > diff --git a/arch/x64/entry.S b/arch/x64/entry.S > index 04d809d..87f37b8 100644 > --- a/arch/x64/entry.S > +++ b/arch/x64/entry.S > @@ -170,25 +170,57 @@ syscall_entry: > .cfi_register rflags, r11 # r11 took previous rflags value > # There is no ring transition and rflags are left unchanged. > > - # Skip the "red zone" allowed by the AMD64 ABI (the caller used a > - # SYSCALL instruction and doesn't know he called a function): > - subq $128, %rsp > - > - # Align the stack to 16 bytes. We align it now because of limitations > of > - # the CFI language, but need to ensure it is still aligned before we > call > - # syscall_wrapper(), so must ensure that the number of pushes below > are > - # even. > Note that the above comment, on number of pushes, is still relevant. > - # An additional complication is that we need to restore %rsp later > without > - # knowing how it was previously aligned. In the following trick, > without > - # using an additional register, the two pushes leave the stack with > the > - # same alignment it had originally, and a copy of the original %rsp at > - # (%rsp) and 8(%rsp). The andq then aligns the stack - if it was > already > - # 16 byte aligned nothing changes, if it was 8 byte aligned then it > - # subtracts 8 from %rsp, meaning that the original %rsp is now at > 8(%rsp) > - # and 16(%rsp). In both cases we can restore it below from 8(%rsp). > - pushq %rsp > - pushq (%rsp) > - andq $-16, %rsp > + # Switch stack to "tiny" syscall stack that should be large > + # enough to setup "large" syscall stack (only when first SYSCALL on > this thread) > + xchgq %rsp, %fs:16 > This xchg (instead of saving to one offset and copying from another offset) looks scary, but I think that since system calls will always return, and exchange back, it's safe... > + > + # Skip setup large stack if has already happenned > + pushq %rcx > + movq %fs:24, %rcx > + cmpq $0, %rcx > I had a hard time understand why you singled out rcx here pushing it earlier. Do we really need to use rcx for this comparison? Can't cmpq work directly on the %fs:24 operand? If it can then you won't need to do this special treatment for rcx. > + jne large_stack_has_been_setup > I'm repeating things similar to what I said above, but if we already have a larger stack, if it was on %fs:16, we could have used that immediately, instead of having to switch stacks again (it's not a huge difference, but it feels unnecessary) > + > + # Save all registers (maybe it saves to many registers) > I don't think it's too many - we don't know what setup_large_syscall_stack() will use. > + pushq %rbp > I see you're not bothering with the CFI stuff here or in the rest of the new code, and I guess this is fine - it will only cause problems to debug inside setup_large_syscall_stack and that is less important. > + pushq %rbx > + pushq %rax > + pushq %rdx > + pushq %rsi > + pushq %rdi > + pushq %r8 > + pushq %r9 > + pushq %r10 > + pushq %r11 # contains rflags before syscall instruction > + pushq %r12 > + pushq %r13 > + pushq %r14 > + pushq %r15 > Maybe I miscounted, but didn't you push here (including rcx) an odd number of items, making the stack improperly aligned? This will not always cause problems (so it's easy to miss), but can cause problems if the called code uses optimizations like using SIMD instructions to speed up initialization of a lot of zeros, etc... > + > + # Call setup_large_syscall_stack to prepare large call stack > + # This function does not take any arguments nor returns > + # It it ends up allocating large stack it will store its address in > tcb > + callq setup_large_syscall_stack > + > + # Restore all registers > + popq %r15 > + popq %r14 > + popq %r13 > + popq %r12 > + popq %r11 > + popq %r10 > + popq %r9 > + popq %r8 > + popq %rdi > + popq %rsi > + popq %rdx > + popq %rax > + popq %rbx > + popq %rbp > + > +large_stack_has_been_setup: > + popq %rcx > + # Switch stack to "large" syscall stack > + movq %fs:24, %rsp > > .cfi_def_cfa %rsp, 0 > > @@ -204,9 +236,8 @@ syscall_entry: > # We do this just so we can refer to it with CFI and help gdb's DWARF > # stack unwinding. This saving not otherwise needed for correct > operation > # (we anyway restore it below by undoing all our modifications). > - movq 24(%rsp), %rbp > - addq $128, %rbp > - pushq %rbp > + pushq %fs:24 > Unless I'm misremembering, I think this is wrong - what needs to be pushed here is the original %rsp, before you overridden it (several times) above. This is so the following CFI command will work. Isn't the original user's stack pointer saved in %fs:16? Getting this wrong will ruin the possibility of gdb to backtrace through a system call. (you're right that if gdb has other problems with "backward" jumping in the stack then it won't work anyway, but I wouldn't want to ruin things that were already working...). + > .cfi_adjust_cfa_offset 8 > .cfi_rel_offset %rsp, 0 > > @@ -275,16 +306,14 @@ syscall_entry: > popq_cfi %rbp > popq_cfi %rcx > > - movq 8(%rsp), %rsp # undo alignment (as explained above) > - > # restore rflags > # push the rflag state syscall saved in r11 to the stack > pushq %r11 > # pop the stack value in flag register > popfq > > - #undo red-zone skip without altering restored flags > - lea 128(%rsp), %rsp > + # Restore original stack pointer > + xchgq %rsp, %fs:16 > I'm a bit confused here. Aren't we taking the current %rsp, which is the *large* stack, and saving it back to %fs:16, which supposed to be the small stack? So maybe after all, %fs:16 is always the latest stack (large one, after one system call) and not always the tiny? > # jump to rcx where the syscall instruction put rip > # (sysret would leave rxc cloberred so we have nothing to do to > restore it) > diff --git a/include/osv/sched.hh b/include/osv/sched.hh > index dada8f5..dfbd3bb 100644 > --- a/include/osv/sched.hh > +++ b/include/osv/sched.hh > @@ -338,6 +338,10 @@ public: > }; > struct attr { > stack_info _stack; > + // These stacks are used only for application threads during > SYSCALL instruction. > + // See issue #808 for why it's needed. > + stack_info _tiny_syscall_stack{}; // Initialized with zero since > C++11. > + stack_info _large_syscall_stack{}; // Initialized with zero since > C++11. > Did the "{}" syntax even exist before C++11? > cpu *_pinned_cpu; > bool _detached; > std::array<char, 16> _name = {}; > @@ -592,6 +596,7 @@ public: > * which could cause the whole system to block. So use at your own > peril. > */ > bool unsafe_stop(); > + void setup_large_syscall_stack(); > private: > static void wake_impl(detached_state* st, > unsigned allowed_initial_states_mask = 1 << > unsigned(status::waiting)); > @@ -618,6 +623,8 @@ private: > friend void start_early_threads(); > void* do_remote_thread_local_var(void* var); > thread_handle handle(); > + static void free_large_syscall_stack(sched::thread::stack_info si); > + > public: > template <typename T> > T& remote_thread_local_var(T& var) > diff --git a/linux.cc b/linux.cc > index 25d28ee..3edd598 100644 > --- a/linux.cc > +++ b/linux.cc > @@ -430,3 +430,8 @@ extern "C" long syscall_wrapper(long number, long p1, > long p2, long p3, long p4, > } > return ret; > } > + > +extern "C" void setup_large_syscall_stack() > +{ > + sched::thread::current()->setup_large_syscall_stack(); > +} > -- >.
https://www.mail-archive.com/osv-dev@googlegroups.com/msg02717.html
CC-MAIN-2018-34
refinedweb
1,985
64.2