text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Here is O(n) compute time version without precompute public class NumArray { int[] nums; public NumArray(int[] nums) { this.nums=nums; } public int sumRange(int i, int j) { int sum = 0; for (int k=i;k<=j;k++){ sum = sum+nums[k]; } return sum; } } Here is O(1) compute time version with precompute public class NumArray { int[] sum; public NumArray(int[] nums) { sum = new int[nums.length+1]; for (int i=0;i<nums.length;i++){ sum[i+1] = nums[i]+sum[i]; } } public int sumRange(int i, int j) { return sum[j+1]-sum[i]; } } The problem says "There are many calls to sumRange function", let's say the length n of nums[] is one million, and there are one million requests to call sumRange(0, 999,999), each time when the sumRange function gets called, your function needs O(n) time which is time consuming. We can preprocess the nums[] and bring the time complexity of sumRange down to O(1) constant time. Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/29186/simple-java-solution
CC-MAIN-2017-51
refinedweb
181
63.83
table of contents NAME¶SSL_read - read bytes from a TLS/SSL connection SYNOPSIS¶ #include <openssl/ssl.h> int SSL_read(SSL *ssl, void *buf, int num); DESCRIPTION¶SSL_read() tries to read num bytes from the specified ssl into the buffer buf. NOTES¶If¶When an SSL_read() operation has to be repeated because of SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE, it must be repeated with the same arguments. RETURN VALUES¶The¶SSL) Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <>.
https://manpages.debian.org/stretch/libssl-doc/SSL_read.3ssl.en.html
CC-MAIN-2021-49
refinedweb
101
56.05
Tony Blair $18.89 By Alan Abramowitz $40 By Susie Linfield (Page 3)? By Benny Morris Yale University Press, 256 pages. More Below the Ad Advertisement Get truth delivered to your inbox every week. If you have trouble leaving a comment, review this help page. Still having problems? Let us know. If you find yourself moderated, take a moment to review our comment policy. By PatrickHenry, November 2, 2009 at 5:09 pm Link to this comment By firefly, November 2 at 8:02 pm # I agree wholeheartedly. I can support a democratic, pluristic Israel, however a jews only homeland is apartheid as any other “#### only” nation state and thats plain wrong in my view. By firefly, November 2, 2009 at 12:02 pm Link to this comment Firstly, I’d like know what defines a homeland? nomadic sheep herders who had been there for generations, but somehow aren’t entitled to be there because they aren’t Jewish. Which brings me to my second point. Countries like Iran and Saudi Arabia that declare an exclusive Islamic state where non-Muslims are seen as second class citizens are generally viewed with aversion by democratic countries. However, the age of exclusivity and elitism has past and in most modern societies, are a harsh reminder of the days of apartheid, or slavery or classism. Most people in the west now accept the world as a multicultural, multi-religious global entity, with all people sharing universal rights as laid down in the declaration of Human Rights. So, I wonder, on what basis do Jews deserve an exclusive land? Do Jews view themselves as a separate species? No. They are human beings; flesh and blood like all other human beings. They are not better, or more exceptional, they have not been granted a special status over other people and therefore should not be entitled to a land that is exclusively theirs. I just don’t agree with it. If an exclusively Jewish homeland is acceptable, then by the same token, why shouldn’t America claim to be an exclusively white Christian homeland, or Zimbabwe and South Africa an exclusively black homeland, or China an exclusively non-religious, Communist homeland, etc etc. Most people would know the answers to those questions. You get my point? By firefly, November 2, 2009 at 11:56 am Link to this comment Palestinian sheep herders who had been there for generations, but somehow aren’t therefore entitled to be there. By stcfarms, August 9, 2009 at 3:45 pm Link to this comment Inherit The Wind, Let’s kick the Europeans and Africans and Asians out of North America and give it back to the Indians. A better solution would be to return all of the land held by the federal, state and local governments and corporations and let all Americans keep up to 10 acres each (taken from corporate land, not from national parks!). Half of my relatives are French and it would be a pain in the ass to have to visit them there. By Apostolos, July 8, 2009 at 3:07 pm Link to this comment (Unregistered commenter) Nazionism is like a cancer that can’t be cured. The invadors of Palestine will not be satisfied until the have eliminated the indigenous people of an established country. Benji has claimed that the growing population needs an expansion of the settlements for natural growth. Has he ever seen the skyscrapers and high towered condos in the US? Why not add floors to the existing illegal homes? Our President Obama has insisted that the expansion of settlements must cease but he was spate on and told him, in so many words, that to mind his own business. This comes from a government that has been given over a trillion dollars in military aid since the early fifties - and that does not count the economic assistance.. By Inherit The Wind, June 14, 2009 at 11:09 am Link to this comment Anarcissie, June 12 at 12:34 am # ItW, I believe you are guilty of excessive irony. Please try to control yourself. ***************************** Moi? Ironic? Why, that would be like Limbaugh (Mr. Bouncy) making sense! By Anarcissie, June 11, 2009 at 9:34 pm Link to this comment ItW, I believe you are guilty of excessive irony. Please try to control yourself. By PatrickHenry, June 11, 2009 at 1:52 pm Link to this comment By Inherit The Wind, June 11 at 12:06 pm # When you state “lets kick the”...etc) you must have been impling yourself and anyone else who would be stupid enough to follow you. By Inherit The Wind, June 11, 2009 at 5:06 am Link to this comment PatrickHenry, June 10 at 5:32 pm # By Inherit The Wind, June 10 at 1:21 pm # You had better hope they don’t kick back. ************************************** You got that wrong—what else is new? YOU better hope they don’t kick back! By ardee, June 11, 2009 at 4:09 am Link to this comment Inherit The Wind, June 10 at 1:21 pm Please cease and desist from posting common sense and the realities of the situation forthwith! Do you not understand by now that we are obligated to post emotionalism, knee jerk reaction and the minutiae rather than see the overall picture? Why do you continue to fly in the face of convention and let yourself be on the bottom of the hill like this? Do you not understand what flows down that hill? By PatrickHenry, June 10, 2009 at 2:32 pm Link to this comment You had better hope they don’t kick back. By Inherit The Wind, June 10, 2009 at 10:21 am Link to this comment Let’s kick the Russians out of Eestern Poland and give it back to the Poles. Let’s kick the Poles out of East Prussia and Danzig (now called Gdansk) and give it back to the Germans. After all that all happened just after WWII too. But let’s not stop there. Let’s kick all the Prostestant Northern Irish out of Ireland and send them back to Scotland. Let’s kick the Europeans and Africans and Asians out of North America and give it back to the Indians. Let’s kick the Spanish out of Central America and give it back to the Mayans. Let’s kick the Turks out of Europe and Asia Minor and send them back to Central Asia. Let’s kick the Spanish and Portugese out of Iberia and give it back to the Moors. Let’s kick the Norse out of Iceland and give it back to the seals and marmots. Or is it only Jews that should be kicked out? By Anarcissie, June 9, 2009 at 4:33 pm Link to this comment If there is a coherent theory about who has a right to which land, I haven’t seen it, except of course for the religious theories, which are popular with more than one party. In all other cases the theory seems to be that land may be taken by force, and then suddenly it can’t be taken by force any more. By PatrickHenry, June 9, 2009 at 3:45 pm Link to this comment No justice no peace. Read some of the comments. The Israelis are trying to rid Palestine one Palestinian at a time and have been doing so for along time. By ardee, June 9, 2009 at 2:32 pm Link to this comment “The Palestinians are more native to Palestine (now Israel) than most of the Jews now living there.” In my opinion this is a thoughtless phrase that resolves nothing and adds only fuel to the fire. Considering that Israel was created in 1948 how many still living were born or once resided in “Palestine”? Further the question is not which people have the greater right to that region, both claim ancestral rights going back thousands of years. No, sorry, the question is how to stop the bloodshed and the eternal violence, how to bring peace and equity to both sides of this damned struggle. By PatrickHenry, June 9, 2009 at 2:12 pm Link to this comment By Howard, June 9 at 3:12 pm # Egyptian stock sounds like your making a soup. The Palestinians are more native to Palestine (now Israel) than most of the jews now living there. By KDelphi, June 9, 2009 at 12:52 pm Link to this comment Howard—“egyptian ‘stock’”??? What “stock” are you? By Howard, June 9, 2009 at 12:12 pm Link to this comment Let Jordan and Egypt absorb the Pal’s. They are mainly egyptian stock, anyway. Jordan is 45% palestenian already. Maybe Syria will take some. And Lebanon. By Jaded Prole, June 9, 2009 at 4:30 am Link to this comment Unfortunately it appears that Morris has been sucked into the racism and xenophobia of larger Israeli society. How can such different peoples cohabit a single country!? South Africa might be a good example. I’ve yet to hear an argument against the one state solution that the Boers did not make in supporting apartheid. Israel has done everything in its considerable power to make a two-state solution an impossibility on the ground. Had Israel offered the PLO ALL of the west bank up to the green line as a completely independent state they would have accepted it but no one will accept a partitioned bantustan run by the IDF with its finances controlled by Israel. It is Israel that is the problem and, having the power, it is Israel that can create a solution. If Israelis are unable to be anything but arrogant, racist militants who are incapable of living with their neighbors than the Zionist experiment is an utter failure. By Chris Horton, June 8, 2009 at 10:21 pm Link to this comment I read as far as ” ... it cloaks the suggested extermination of an extant country ...” and then skipped the rest of what was sure to be insufferable drivel. By PatrickHenry, June 8, 2009 at 6:26 pm Link to this comment USS Liberty day Lest you forget. By P. T., June 8, 2009 at 5:27 pm Link to this comment A factor that mainstream media largely ignore, for obvious reasons, is the role public policy plays in determining incomes. To wit: U.S. manufacturing workers are placed in competition with third-world workers via trade agreements. On the other hand, workers in the professions are, to a significant extent, shielded from competition by restrictions on immigration. The limits placed during the Clinton administration on the number of foreign doctors allowed in the country is a case in point. The effect is to redistribute income upward. By Inherit The Wind, June 8, 2009 at 11:54 am Link to this comment Did any of you geniuses calling me names actually READ Kristoff’s article and the stats it quotes? At least KDelphi did—you have my thanks. I think you might have expressed Kristoff’s point better than I did. Yes, it’s NURTURE over Nature. The point Kristoff was making was that a genetic basis for these groups being more prosperous does NOT exist. Jews and Asians and West Indians aren’t smarter—they just have cultural traditions that are taught from an early age that instill discipline and a hunger for learning. It’s not genetics—it’s teaching handed down from generations. I don’t believe any race or ethnic is or CAN be genetically superior to another, and especially not in intelligence. By P. T., June 8, 2009 at 8:44 am Link to this comment The Zionist expansionists need not worry. Any pressures on Israel to conform to the Road Map will be “largely symbolic,” so the New York Times reported (Helene Cooper, June 1). By tropicgirl, June 8, 2009 at 7:27 am Link to this comment (Unregistered commenter) ITW writes of “3 groups that do better EVEN WHEN THEY HAVE LOWER IQs than their WASPy lazier counterparts.”... However, from my observation, these three groups suffer more neurotic, psychological problems than others. Its all in what you think success is. By Ed Harges, June 8, 2009 at 6:40 am Link to this comment ITW writes of “3 groups that do better EVEN WHEN THEY HAVE LOWER IQs than their WASPy lazier counterparts.” Gee, not prejudiced much, eh ITW? It’s nice to know that if I go to a job interview conducted by a “liberal” Jew like ITW, then as soon as I walk in the door, ITW has me labeled “lazy WASP”. And also: to the extent that members of these 3 groups “do better” even when stupider, I wonder what role nepotism, as well as hatred of “WASPs”, may have to do with that? When members of these groups get into positions of power, do they use that power to take revenge on the “WASPs” whom they have always resented? Do they consciously seek to promote members of their own group, and to dilute the power of the WASP group? By P. T., June 7, 2009 at 8:28 pm Link to this comment What is a WASP anyway? Pretty broad category. On average, white Baptists are going to have quite different incomes than white Episcopalians or Presbyterians. However, I do not think Baptists are any lazier. This is all beside the point anyway. The point is the tail (Israel) cannot be allowed to wag the dog (the U.S.). By P. T., June 7, 2009 at 8:08 pm Link to this comment “He points out 3 groups that do better EVEN WHEN THEY HAVE LOWER IQs than their WASPy lazier counterparts.” No, no, no. The way to make the big bucks is to be a capitalist so you can live off of other people’s work. By KDelphi, June 7, 2009 at 8:04 pm Link to this comment Nick Kristoff’s column doesnt exactly say that… He talks alot about environment and the value of nurture over nature. It seems that he thinks that it is more of a cvultural phenomenon, as indicated by his quoting the book “Intelligence and how to Get It” ..” Id have to see the book but ,thats rather a sweeping statement—what is a “black child raised on welfare”?? He also doesnt state whether he includes Jews as “whites”...but, its just an op-ed, I never read him. Id be the last one to stick up for WASPS, who have certainly been the source of as much agony as Jews…I’m not pleasing anyone here, am I? OH, well, not my problem… Psychologists have known for a very long time that IQ tests are more a function of class than of innate intelligence…they are really just not sure what iQ Tests measure, other than being middle class and good at test-taking. How well you do on an IQ tests correlates about .71 with how well you will do on future tests, and, thats about it. By Ilene, June 7, 2009 at 7:46 pm Link to this comment (Unregistered commenter) I’m with Patrick Henry, the only solution to this problem is to isolate the pariah state in the same manner as was done to South Africa. Really surprised Truthdig published this tripe defending the apartheid state. By Inherit The Wind, June 7, 2009 at 7:39 pm Link to this comment Perhaps you anti-semitic idiots should read today’s op-ed by Nick Kristoff. He points out 3 groups that do better EVEN WHEN THEY HAVE LOWER IQs than their WASPy lazier counterparts. The 3 groups? Jews (Naturally…) Asian-Americans (No, they are NOT the “Lost Tribes” of Israel) West-Indies Blacks, who do significantly better than other Black Americans, on average. Anybody whose kids go to school with Asian kids know they work far harder than most of their non-Asian classmates and, for the same level of intelligence, do better. Lots of people curse them for it, but it’s no different than cursing “those smart Jews”. I say: If you can out-compete your European-descended peers, all power to you! As people move up, certain businesses change. Computer shops and on-line dealers were all once Asian. Now many are Russian-owned. Even diners here in NJ are now mainly Russian-owned as the Greeks have moved up and out of that business. But when you hit the top professions, why move out? But, naturally, the anti-semites (usually those who can’t keep up, or have to work harder to stay up) look for ANY reason other than the obvious one: The Jews, the Asians and the West Indians simply work harder on average. By KDelphi, June 7, 2009 at 7:29 pm Link to this comment Just saw Benny Morris on Fareed Zakaira—-he’s crazy. By P. T., June 7, 2009 at 6:24 pm Link to this comment Professor Marc Lynch, of George Washington University, on the Bolton/Morris/Linfield/neocon Zombie Idea Bolton’s Zombie Idea Mon, 01/05/2009 - 11:05am)? (Continued below) By P. T., June 7, 2009 at 6:21 pm Link to this comment (Continued from above)? By PatrickHenry, June 7, 2009 at 5:26 pm Link to this comment Actually alot of American jews attend marva’s in Israel as out of country reserves. They come from affluent communities whose parents Lobby “for” Israel and are agents of influence for that country in the arts and media, print and otherwise. Thousand of years of being denied the right to own anything, hardly. The natural avenue for building wealth are building assets, banks and usuary which have gone on for centuries within worldwide jewry. Check out the website and imagine if any WASP’s sit on those boards. Jews have been in usuary and banking, trading and retailing since before there were WASP’s. It was estimated that in the mid-1800’s the Rothchild banking cartel contolled over half the currency in Europe. The Oppenheimers of Debeers fame have done well in controlling the diamond monoply to this day, however their grip is loosening. I hope the 12 rabbis of the kaballah bestow brains. By ardee, June 7, 2009 at 4:23 pm Link to this comment Folktruther, June 7 at 5:29 pm # It is becoming clearer that the major roadblock to Mideast peace are American Jews supported by the American ruling class. ............................... While I understand that AIPAC casts a long shadow I believe you exaggerate greatly as to the influence of American Jews over Israeli policies. It was not America who elected Netanyahu thus supporting his expansionist policies, nor was it their influence that led to Israel’s invasion of Lebanon, the brutality of the actions in Gaza or much of the history of Israeli callousness towards those refugees they themselves created in ‘48. Perhaps right wing Christians see the increasing strife in the Middle East as the advancement of the end times but Jews may send money or write their Congressmen, but they do not cause the actions of the IDF or our own State Dept.. As to your earlier screed: .” I will not make the obvious charge you might expect, or possibly even deserve, but I will say you simply fail miserably to understand the history of the Jew that leads them to such careers. Thousands of years of being denied the right to own property has created a natural avenue to business management and the arts. Where you get your estimates of how many Jews are in the “ruling class” is not known to me but I do know that Jews are not allowed even yet in the clubs or boardrooms of those who make the decisions that make our politicians obey. No, sorry, but this nation is ruled solely by the white Anglo Saxon Protestants,and always has been. I would explain further but I am late for an appointment with one of the Twelve Rabbis who secretly rule the world…and you dont want to keep them waiting, trust me. By KDelphi, June 7, 2009 at 4:19 pm Link to this comment I cannot read all of these posts, but it seems to me that the writer falls all over herself, admittedly coming up with hugely absurd comparisons ,like “how far should we turn the clock back”. Of course, no one can answer that question. The Palestinians would probably like to turn it back to before 1930…the Zionist Israelis, apparently to —what—-0 BC? The crux of the matter, it seems to me, is that Palestinians have nowwhere to live, and, it has been that way for generations. I have no idea how to achieve the “goal”, or, from this article, even what the goal IS. It is disengenuous to say, “doesnt everyone have the right of return”—although I did learn something from this article, it was so slanted as to almost make it useless… Can someone explain to me what this “Jordanian” “solution” is?? I dont get it. If “no one on the Left” will “consider it anymore”, maybe its wrong… By sharonsj, June 7, 2009 at 3:37 pm Link to this comment (Unregistered commenter) When Palestinians call for a two state solution, they don’t really mean it. They could have had a state decades ago but Arafat always screwed it up because he never wanted two states. He wanted one state in which there were either no Jews or the Jews were subordinate to Muslims. Meanwhile, no one ever says a word about Jewish refugees. I don’t mean the people who fled the Holocaust, I mean the Jews who fled when five Arab countries attacked Israel in 1948. There were just as many Jewish refugees as Palestinians, but you don’t hear about them because they were welcomed into Israel while the Palestinians were used as pawns by their so-called brethren and thrown into camps. By PatrickHenry, June 7, 2009 at 3:35 pm Link to this comment There is a movement, I encourage everyone (including the zealots) to participate. www dot bdsmovement dot net For some reason “Truthdig” has blacklisted this site. I find it harmless considering the sites they do allow. By Folktruther, June 7, 2009 at 2:29 pm Link to this comment It is becoming clearer that the major roadblock to Mideast peace are American Jews supported by the American ruling class. Sepharad, Inherit and the Zionist lemmings on TC are actually to the left of the young Jewish Ziofascists being indoctrinated by Aipac, etc. Of course TC Zionists support them in practice, and ziofascists like Benny Morris, so it is largely a distinction without a difference. As is the policies of Obama and Bush. Obama is pursing the Bushite Roadmap to Peace, and arguing about the marginal issue of expanding settlements when they are already an half million settlers on Palestine land. And an Israel wall to implement apartheid. Consequently it is necessary to hold the American power structure responsible publically for Palestinian oppression, since in fact they are. These young Jewish louts are in Jerusulem only with the support of US power, and support war, racism and ethnic cleansing with US backing. Israel encourages them and bars people like Finklestein only with US tacit support. So the world boycott, etc of Isarael should emphasize US power in supporting Israeli oppression. By omop, June 7, 2009 at 11:02 am Link to this comment Thanks PH. By PatrickHenry, June 7, 2009 at 10:22 am Link to this comment Omop, Try here. By omop, June 7, 2009 at 9:55 am Link to this comment To Fadel. That footage has already been blocked. By omop, June 7, 2009 at 9:36 am Link to this comment Israelis/zionists are trapped in a “Dorian Gray World” ** ** Google, The Picture of Dorian Gray by Oscar Wilde. Their future lies in 3 and only 3 worlds. a) Comply with all the UN Resolutions that created it and the Palestenian State. b) Continue what its has been doing for the past 60 years and expect hostilities with increasing potentials for a military defeat within the next 5/10 years. c) Re-invent itself with the Palestenians as a non-jewish state. By Fadel Abdallah, June 7, 2009 at 7:19 am Link to this comment Interesting footage from Israel; a must see by everyone, including Obama! There is a possibility that this footage has already been blocked! -to-obama-speaking-to-arabs.html By Folktruther, June 7, 2009 at 1:36 am Link to this comment Damn ritht, Virgina. By Virginia777, June 6, 2009 at 7:01 pm Link to this comment any “man” who calls another Race “wild animals” that have to be “locked in a cage”, and advocates for bombs to be dropped on Iran, (bombs that he claims will “stave off war”) deserves any and all criticism he (or his books) gets. By Inherit The Wind, June 6, 2009 at 5:17 pm Link to this comment When you burn someone in effigy it’s fun to stab the dummy and make loud boasts about how brave and righteous you are… So all of sit there and tap away, not one actually CONSIDERING the points made because it’s some much easier to attack the man rather than the ideas expressed. Red meat for “The Contingent”. Useless thread. TD is getting just like one of Fecal’s extremist web sites. By dojero, June 6, 2009 at 2:28 pm Link to this comment Richard_east, we are agreed that the world is god-crazy and so unlikely to be able to resolve its problems. Folktruther, I’m not sure we agree. I don’t believe that Jews have undue influence in the US or elsewhere, and I’d caution that we need to distinguish between Jewish people and Zionists. Judaism is a religion and I cannot support any religion because they are all based on the fallacy of a belief in a god (or gods). But I don’t assume that any such erroneous belief carries with it inherently bad political or social values. Zionism is a political movement that insists that Jews are entitled to a state located in what they believe is the land designated in their bible as the promised land. That seems to me unsupportable. The guilt that the Western societies of the United States and Europe have over the Nazi extermination of the European Jews blinds them to the irrationality of the Zionist thesis. That guilt is, of course, amplified by the fact that antisemitism was rampant in the victors in WWII; better to put the remaining Jews in Israel than allow them to stay in Europe or the US. To return to the richard_east theme, I don’t believe that any religion is entitled to a state of its own. I don’t think that Muslim nations have any inherent right to exist, nor Jewish states, nor Catholic states (see Vatican City). This article on Truthdig depends on the contrary thesis: that somehow the Zionist state is sanctimonious. By carl moore, June 6, 2009 at 2:00 pm Link to this comment ”I have been assured by a very knowing American of my acquaintance in London, that a young healthy child well nursed is at a year old a most delicious, nourishing, and wholesome food, whether stewed, roasted, baked, or boiled ...” Netanyahu quoting from Jonathan Swift’s “A Modest Proposal” as a method for how to deal with Palestinians after implementation of the two-steak resolution. By Virginia777, June 6, 2009 at 12:37 pm Link to this comment I will NEVER forgive the New York Times for publishing Benny Morris’s inflammatory, warmongering editorial - bluntly demanding that the U.S. start bombing Iran! - this link works to Morris’ editorial “Using Bombs to stave off war”: Here again is Benny Morris, from a 2004 interview: [Referring to Sharon’s Security Wall] “Something like a cage has to be built for them. I know that sounds terrible. It is really cruel. But there is no choice. There is a wild animal there that has to be locked up in one way or another.”. See also Justin Raimondo’s excellent piece on Morris’ editorial here: A Brazen Evil from Antiwar.com By omop, June 6, 2009 at 9:27 am Link to this comment How about Americans especially, thinking about the Israeli-Palestenian Conflict along these lines. Arming both sides with an equal amount of military hardware and training so that by early 2012 they would be ready. Any American sympathetic to the Israeli side can volunteer and any non American sympathetic to the Palestenian side may be allowed to also volunteer. All those in favor of this kind of fair and somewhat more democratic way of thinking raise your hand. All those in favor of thinking otherwise think Benny. By Robert, June 6, 2009 at 9:14 am Link to this comment June 5 -7, 2009 Don’t Carp, Organize Our Convoy to Gaza By GEORGE GALLOWAY “Where is the ummah; where is this Arab world they tell us about in school.” “Those words will forever remain etched on my brain. They were spoken by a 10 year old girl in a bombed out ruin in Gaza in March. She had lost her.” By Folktruther, June 6, 2009 at 8:52 am Link to this comment It is true, as Dojero says, that truthdig publishes another rightwing review of a right wing war monger as they continue to do on a supposedly Progressive website. And as Rcih East says,it doesn’t make any moral or leagal sense But it makes financial and political sense.. So the ruling class uses Zionism the way it uses right wing Christianity, to support neoliberal policies. this is doen by both the Gops and Dems, who agree on financial policies but argue about cultural and identity policies. More important, the truth media helps promote this truth consensus and is itself influenced by it. Including Truthdig. You will notice the Zionist ads every once in while on truthdig and, more important, its attempt to stay in the center of the power truth conensus, which is far to the right of a people truth consensus. Media must serve the interests of power, which uses and identifies with Zionism, while appealing to the values of the truthers of the population. So truthdig panders to right wing interests to produce specialized pieces that serve the population occasionally. That is why it is essential for the population to develop its own media and legitimate its own truth consensns. The population will always be divided and confused by the mainstream truth, including the Progressive truth. That is its historical function. By PatrickHenry, June 6, 2009 at 7:03 am Link to this comment I don’t have time enough to read good books yet alone bad ones. By richard east, June 6, 2009 at 6:50 am Link to this comment dojero-I’m as dumbfounded by Zionism as you are (it’s true that it doesn’t make any sense whatsoever from a legal or moral viewpoint). “People do not get to establish states because they have an imagined (biblical) imperative.” -In an ideal world, yes. In the real, God-crazy world, it has already happened. But “the Lord works in mysterious ways,” right? By dojero, June 6, 2009 at 6:03 am Link to this comment Many here have rightly challenged Truthdig for publishing a right-wing review of a right-wing warmonger on its website. But this isn’t the first time TD has done this kind of thing. Perhaps it is to incite. Perhaps it is because they seem to think that if a writer has even the remotest liberal credentials, his or her work should appear on the site. The problem is that this kind of tripe and propaganda can be found all over the web. Readers of TD look for something better. If TD can’t deliver, then perhaps we should look somewhere else. One small comment I haven’t seen yet: why do Zionists get a free pass on the very concept of a partitioned people? The Arab people lived in Palestine. The number of Zionists was minimal. The West decided that Zionism was a good thing only after the Nazis exterminated the Jews. Zionism NEVER made any sense from a legal or moral standpoint. People do not get to establish states because they have an imagined (biblical) imperative. By prole, June 6, 2009 at 4:08 am Link to this comment You don’t have to be an anarchist - although it would be a good idea - to view nation states in concept as problmatic. Add to that, the unusual aggression of particular ones such as Israel and the U.S., and yes, “democracy, secularism, equality and justice” does not seem such an unsavory option at all. “But if you believe that an end to the decades of horrific bloodshed in Israel-Palestine can’t possibly be accomplished by wiping out an established” population of Palestine, i.e. ‘ethnic cleansing’ as many in the Jewish state have advocated, then “the one-state strategy is no strategy at all”. “Enter the historian Benny Morris” who himself publicaly lamented the “‘non-completion’ of the expulsions of Arabs” during the Nakba. Which is why.“Morris’ political trajectory is important because it is shared by so many Israeli leftist” militants, as well as a few rancid book reviews. Morris “believes in truth” - ergo, anyone who disagrees with him disagrees with “the truth” as handed down to Morris from Yahweh himself on Mt.Sinai. Another in a long line of old-fashioned truth-telling Jewish historians from the author of Genesis to Joan Peters. Linfield is right - albeit with unconcious irony - that “anyone who talks of “the secular, educated, mini-skirted women of Tel Aviv and the masked men of Gaza ... is either extremely deluded or playing a very cruel game” i.e. she herself For this is the kind of derogatory imagery used to portray Jews as progressive and enlightened and Arabs as suspicious and sinister. Perhaps we should ask how Palestinian Christian and Muslim women would get along with the Tel Aviv pimps in Israel, one of the world’s leading centers of ‘white slavery’. Worse still, she avers, “since such a state would, inevitably and fairly quickly, become demographically dominated by Palestinian Arabs, why would anyone imagine that the rights, the freedoms and the cultural integrity of the Jewish minority in this ‘binationa’ society would be protected?” Once again, we’re to assume, according to Linfield’s leftism, that Arabs can’t be trusted and that a binational state would automatically mean that Jews would be treated badly. Or worse. that they might be treated as badly themselves as they have treated Arab citizens of the Jewish state. These are the kinds of crude stereotypes that gives the game away in Linfield’s none-too-subtle anti-Arab backlash. The rest follows in a steady train, e.g..“the destruction of Israel, far more than the building of a Palestinian state, has been the holy grail”, “[in 2000], when Israel (and the United States) offered it, the Palestinians turned it down” etc., etc. Then too if there is an “essential bankruptcy of the concept of restoration” than why did the ‘free world’ go to war over Kuwait? But if Linfield is so anxious in “forging a workable, good-enough, resilient solution for the future rather than seeking to eradicate the humiliations of the past through presumably glorious, and apparently unending, battles of redemption” then maybe we can at last tear down all those tacky Shoa memorials and discard all the rest of the tiresome Holacaust industry bric-a-brac. “How, then, to merge moral imperative with political, economic and social reality? Why dust off the old ‘transfer’ proposal and herd Palestinians off to ‘Transjordan’ - in cattle cars presumably. Still if it “is impossible to imagine any leftist writing such a book today” like ‘This Is Israel’ that was “a celebration of the founding of the new state”, then we’re at last making some small progress. The same could be said about Morris’ book, only someone as confused as Linfield could possibly imagine it to be ‘leftist’. “As a description of the larger conflict her [review] is, to be blunt, utter hogwash, and anyone who is genuinely interested in solving the suffering and statelessness of the Palestinians would best ignore this kind of” stupidity. By SINGLE PAYER, June 5, 2009 at 8:14 pm Link to this comment REF: POPPYCOCK AND BS Resist all this bs and the “news” distraction about mesmerizing tripe. It’s the economy, stupid. The nation has been robbed, everything has been transferred to the banksters and they remain in power. But for the distractions and lies of mainstream media, these SOBs would already be indicted and awaiting sentencing. Wait, do nothing, don’t work, don’t give them anything. Don’t buy, don’t feed the beast. Month my month they will grow desperate. Those with nothing can wait. Just wait and they will come to justice and be hanged. Time is not on the side of these gangsters. By geronimo, June 5, 2009 at 7:57 pm Link to this comment (Unregistered commenter) One more Zionist apologist trying to justify the existence of the Zionist entity, Israel. But no matter how favorably reviewers treat this latest apologia, the settler-state (not its people) is doomed, the reason being that old fashioned colonialism (as per the genocide of Native Americans) has fallen our of favor. How to resolve the conflict? One way is for Jewish colonizer and colonized Palestinians to sit down together and work things out on the basis of one equals one, with liberty and justice for all, with those Zionists who refuse to deal, being able to immigrate to any country in the world that will accept them, paid for by Israel’s stanchest supporters, the U.S.A. and Great Britain with contributions from Saudi Arabia and the other oil states. But wouldn’t such an outcome be unfair to the Jewish settlers? Unfair? What’s unfair is having European settlers barge into the Palestinian homeland, uninvited, after which they proceed to take over and expell most of the natives, not to mention turning Gaza into the Warsaw Ghetto. By Ed Harges, June 5, 2009 at 6:03 pm Link to this comment re: By Virginia777, June 5 at 8:39 pm: Thanks, Virginia, you sharpie, for reminding us how Morris arrogantly ordered America, from the NY Times op-ed page, to go to war against Iran for Israel. Benny Morris has done exactly one good thing in his otherwise obnoxious little life: he helped to write an honest historical record of Israel’s violent ethnic cleansing of Palestine, debunking a lot of pretty Zionist myths about how Israel came into being. But he also has made it clear that he thinks this ethnic cleansing was a good thing, because Jews were doing it, and they’re special - unlike bad ethnic cleansers such as the Nazis or the Serbian ethnic nationalists. By Virginia777, June 5, 2009 at 5:39 pm Link to this comment here is Benny Morris from the New York Times:.” “Which leaves the world with only one option if it wishes to halt Iran’s march toward nuclear weaponry: the military option, meaning an aerial assault by either the United States or Israel.” By Virginia777, June 5, 2009 at 5:04 pm Link to this comment Benny Morris?? NO THANKS, I’ll pass on his book he’s a warmonger By Sallyport, June 5, 2009 at 4:48 pm Link to this comment (Unregistered commenter) Why is it that, Israel ALWAYS balks over the necessity for whoever is representing Palestine to acknowledge Israel’s right to exist, while never offering such an assurance to the Palestinians ? And why do we keep falling for it? (Rhetorical question, of course.) By richard east, June 5, 2009 at 4:28 pm Link to this comment Like other truthdiggers, I am quite surprised this review was posted on this site. I think WriterOnTheStorm hit the nail on the head with: “I’m not quite sure if TD posted this review as a provocation to stir legitimate debate, or if it’s simply one of those red meat pieces designed to increase the site’s ad revenue.” As for the two-state solution, it’s quite obvious where Israel stands on the issue: ship them to Jordan! By Ed Harges, June 5, 2009 at 4:05 pm Link to this comment hippie4ever writes: ”...this “two state solution” is very close historically to the orginal proposal of 1948. Had they only accepted it (meaning the Arabs) back then, imagine the carnage prevented.” Hippie4ever, two quick responses: (1) Propagandized Americans are not aware, but the Arabs have always been aware, that Israel never intended to stay inside the 1948 borders, but was viciously, ruthlessly expansionist from the beginning, and so there would have been lots more war and bloodshed in any case. (2) And besides, Americans would never accept being dispossessed by a bunch of ethnic nationalist fanatics in this manner, even if the invaders promised to stay inside the new borders, and so we have no business wishing that Arabs had been so accepting. By hippie4ever, June 5, 2009 at 2:54 pm Link to this comment Firefly, the irony is that this “two state solution” is very close historically to the orginal proposal of 1948. Had they only accepted it (meaning the Arabs) back then, imagine the carnage prevented. Everyone would be better off than today, even the fascist Israelis in their Med villas. Of course the Palestinians were unhappy being thrown out of their country by the Zionists. This attitude of displacing entire nations, by the way, echoes the American experience. Ask any native American; it isn’t a happy tale full of justice or democracy. By Ed Harges, June 5, 2009 at 1:33 pm Link to this comment Linfield writes that the one-state solution “...cloaks the suggested extermination of an extant country, which would ordinarily be regarded as a fairly unsavory project, in attractive words like democracy…” Oh please, that is just garbage. The end of apartheid in South Africa didn’t “exterminate” South Africa. And besides, Linfield’s use of the word “exterminate” is calculated to push our buttons, to suggest that advocates for a one-state solution are calling for a mass murder of human beings. And by the way, Benny Morris openly approves of violent ethnic cleansing through forced expulsion, terrorism, and murder. He’s a racist creep, and so is Linfield. By tropicgirl, June 5, 2009 at 12:54 pm Link to this comment To Firefly— Just a bit of an explanation here… The two state solution will never happen. The time is long past to have accomplished that, but the Israelis have made that impossible. This has painted Israel into a corner. Because what we have here, is a so-called civilized country stealing the land of another country and removing those inhabitants to an area with a designation of another state. So now, it is basically a sort of two state solution with one state under seige by the other. The reason why there must be a ONE state solution is this: 1. That is what originally existed until the invasion of the Israelis. 2. In reality, that is what makes sense. 3. The Israelis will have to accept Palestinian freedom to travel and to return (because it is ONE state, ONE citizenship). 4. The Palestinians MUST be granted full rights of citizenship. Otherwise it is apartheid. Which is what it, basically is, today. THIS INCLUDES THE RIGHT TO RUN FOR POLITICAL OFFICE. 5. It is impossible to the civilized world to accept apartheid in another supposedly “civilized” country. And now, because of South AFrica, we have mechanisms to break it down. 6. The Palestinians would ultimately be in the majority, which in reality, they already are. They are having children at the rate of sound while Israel is literally “petering out” (no pun intended). 7. Palestine would then emerge as the majority government and that will be the end of Israel as a state. This is the alternative to which we have come. NO THINKING PERSON WHO HAS BEEN PAYING ATTENTION WANTS A TWO STATE SOLUTION ANYMORE. THAT MAY HAVE WORKED BACK IN THE 40’S BUT NOT TODAY. If you don’t hear this from your Palestinian news sources then you should question the sources. Anyone who really wants a solution has come to this conclusion. There is NO MORE appetite for a two state solution and Israel never had that intention. 8. Then I’d like to see Israel threaten the Muslim world with an A-bomb. That would be interesting. By firefly, June 5, 2009 at 12:02 pm Link to this comment Why is it that all the arguments against a two state solution come from Israel who say that it is the Palestinians (Arabs) who are fundamentally against it. And yet, publically at least, all Palestinian leaders, commentators, writers etc, consistently claim that that is what they wish for. I’ve never actually heard a Palestinian say that they do not want the two-state solution. By WriterOnTheStorm, June 5, 2009 at 11:54 am Link to this comment I’m not quite sure if TD posted this review as a provocation to stir legitimate debate, or if it’s simply one of those red meat pieces designed to increase the site’s ad revenue. Be that as it may, the one-state, two-state debate (sounds like a Dr Seuss title) in an interesting one. Some of the arguments used to make Morris’ case are princely in their hypocrisy and lack of integrity. Take the attack on the Palestinian right of return as an example. Do Linfield and Morris really want to pretend that centuries-dead descendants, and living Palestinians should be entitled the same moral rights and considerations? And how can they ignore the cold fact the entire Zionist re-conquest of Canaan is predicated on the very same moral entitlement they seek to deny in others through violence if necessary? Indeed, the pretense that a Jewish Israel was ever something other than a homeland taken from others by blunt force is a perverse denial of history. Go ask the Amalekites. If you can find one. Another weak argument is that the one-state solution can’t work because the Palestinians would impose Sharia law. This is yet another manifestation of the old canard that Israelis make decisions based on rationality, while the Palestinians are fanatical horde of religious monsters. But the truth is that Israel has plenty of religious monsters of its own. The Shas party yearns to impose halakha on the entire country. And the sad irony is that halakha and sharia have much more commonality than difference - at least to this outlier’s eye. The curdled pragmatism of the two-staters leaves me unmoved. I’m an atheist, in America I live surrounded by people whose religious ideas I find silly and abhorrent. But do I dream of pushing the creationists into the sea? No. I treat them with the same respect that I would want them to extend to me. We make it work. If this is good enough for me, why should it not suffice for others? In truth I’m not interested in how hard a single-state is for the Israelis or the Palestinians. Things are tough all over. I care more about what kind of political philosophy my government supports. I care about avoiding more September-elevens. I care about the principals of tolerance and humanism. A single secular democracy is the way to go. Supporting anything less is a step back toward the primordial sludge. By Jack, June 5, 2009 at 11:11 am Link to this comment (Unregistered commenter) It is amazing to think that anyone would publish an article by and author so blissfully unaware of the reality of the Middle-East. Jordan will never accept jurisdiction over the two and a half million Palestinains now in the West Bank and certainly not accept the other four million of the diaspora. Case closed. That leaves the two state solution and the one state solution. The one state solution is clearly the end of the Zionist enterprise and Israel will never accept it. A real two state solution would still be a ticking demographic time bomb for Israel. Immediately upon accepting the Palestinian diaspora in the new state, there would be twice as many Palestinians living on less than 20% of the land compared to less than half as many Jews living on the other 80+ % of the land. And, as Benny Morris points out, there would be real questions about the economic and political viability of such a small, crowded entity, especially since Israel has already claimed most of the water and useful land. Unlike Susie, the Israelis have recognized that their only feasible option for the short term is apartheid since transfer (ethnic cleansing) is not politically feasible at this time. That is the solution they have settled on and continue to push through their actions on the ground. The Israeli vision of two states seems to be that Israel controls all of the land and that Palestinians politically belong to a “Palestinian State”. This allows them to vote for a meaningless Palestinian “government” and forbids them from any say in the real government of Israel which would continue to exercise all real control. There is no real solution; the issue can only be temporarily managed. The apartheid solution kicks the problem down the road for perhaps another 20 to fifty years. Eventually, there will be another military conflict and many more years of the losing side calling for justice. If this is the holy land, God certainly is a practical joker. By Fadel Abdallah, June 5, 2009 at 11:01 am Link to this comment It seems that this lengthy book review by Susie Linfield about Denny Morris’s is going to generate more than just one comment from me as I make the time to read it thoroughly. First, the reviewer states, “Morris is an old-fashioned historian: He believes in truth, and he believes in documents.” It goes without saying that every writer, commentator or scholar lays the claim that he / she is believing in the relative truth of what they say and thus goes selectively about quoting the documents that support his / her relative view. This I call “half truths”, and in my book “half truths” are by default the half-sister of falsehoods. This certainly applies to both Morris and his reviewer. On several occasions before I have warned about the myth of so-called left-wing Zionism as being more benign and peaceful than right-wing Zionism as being more fascist and fanatic. Minor differences might exist between left-wing and right-wing Israeli internal politics related to social, economic and other internal issues. However, when it comes to the external politics of colonialism, occupation, and racism one cannot tell the difference, except, of course, at the level of rhetoric! As I repeated several times on these threads, most of Israeli wars of terror, expansion, and colonialism were initiated and carried out by the so-called left Labor-controlled governments. And unless people with very short memories consider the latest Israeli terrorism in Gaza as forgotten ancient history, this war of terror was initiated by the so-called reformed Kadima with strong backing and major roles played by the so-called leftist foreign minister and the leftist minister of war (i.e. defense). Does any body remember whom I am referring to here?! This is just a little quiz! Morris might be relatively labeled as a leftist, but he, like most Israelis, is a mentally sick Zionist who embraced Zionism as a higher level of religion over traditional Judaism. We have two notorious posters on TD that belong to the same category! Can any one guess who are they?! By Folktruther, June 5, 2009 at 10:55 am Link to this comment this is a racist Zionist defense to the increasing left articulation against Zionist war and ethnic cleansing. It seems to imply the old racist argument of shoving Palestinians into Jordan and anexing Palestine. It is conceivable, however, that the Palestians could overthrow the Western puppet-king of Jordan as US power decays, and create one Palestinian state which included the Gaza, the West Bank and Jordan. And this could include some of the Jewish settlers, devoid of Zionist aspirations. According to Pfaff in his current article, a third of these are orthodox American Jews, promoting US imperialism as well as a racist Zionism. It is therefore conceivable that a two state solution might work out historically, although not of course on Zionist terms. The traditional American Zionist media, which includes truthdig, prevents the mainstream discussion of this, and other, historical possiblities. By P. T., June 5, 2009 at 10:49 am Link to this comment Israel would accept removing the settlers from the occupied territories before it would accept a one-state solution. And the Palestinians also prefer a two-state solution. Compensation could be offered to Palestinians who had their property confiscated (in lieu of a right of return), as with Jews who had their property confiscated by Germany. Jews forced from Arab countries would be entitled to the same. All the solutions are bad and difficult solutions. And Israel does not believe it has to choose—that it can continue as it has been. Israel does not believe it has painted itself into a corner. By hippie4ever, June 5, 2009 at 10:40 am Link to this comment I had to wipe my screen clean after reading this drivel. Another apologist stance attempting to obscure the historical record, and all thanks to . Imperialism always prefers a one-state solution: see how well it worked for the British in India? Oh, now there are three nations and remaining territorial disputes? Whoops! And all thanks to “Morris” who “is an old-fashioned historian: He believes in truth, and he believes in documents.” I’m so impressed. By Anarcissie, June 5, 2009 at 10:13 am Link to this comment PSmith: ‘“ONE STATE - NIR ROSEN” How long will that take?’ It already exists—the Settlers and their enablers have ensured that. Now the question is how long it will take people to understand that it exists, and determine what kind of state it will be. I doubt if the South African model will prove any more viable in Palestine than it did in South Africa, but that seems to be the next move. By P. T., June 5, 2009 at 8:50 am Link to this comment What Benny Morris and Susie Linfield seem to be up to is figuring out a way for Israel to keep the land and water stolen in the West Bank and East Jerusalem. They want the land and settlements but not the Palestinians. The solution: Take those parts of the West Bank that Israel doesn’t want and turn them over to a U.S. client—the Jordanian monarch. The Israeli right-wing has long been attracted to some variation of that plan: the so-called Jordanian Option. It would moot the issue of the Palestinians being left with non-viable, non-contiguous, small pieces of land and preclude self-determination. Now, what reasonable indigenous Palestinian could oppose that? Such a deal! By Anarcissie, June 5, 2009 at 7:34 am Link to this comment Inherit The Wind: ‘As usual, “TC” will ignore the facts cited, misinterpret the analysis based on those facts, and heap insults on the author, the reviewer and anyone who dares defend either. ...’ I have to suspect any author who ritually flogs the Serbs in best mainstream-media-and-politics practice as being some sort of shill, although possibly a victim rather than a conscious promoter of the necessary lies. Beyond that, though, while I think Benny Morris may have made some valid points, the inescapable problem for the Israelis is that they have won, and they have what is in fact a single state in Palestine, called Israel. Victory is a trap, and often a fatal one. The sort of tribalism which is intrinsic to all national states, and is particularly acute in the founding of new ones, is bound to excite violent responses of the same type among all upon whom it impinges. Unfortunately, the situation has been greatly exacerbated by the Israelis’ yielding to the temptation offered by the American empire to be America’s junkyard dog in the Middle East, because this posture precludes the only possible route to long-term survival of the Israeli project, which does not lie in kicking the Arabs around forever. But why go on? Back to your ritual denunciations. By tropicgirl, June 5, 2009 at 6:32 am Link to this comment Oh, and it is way too late for a two state solution. Waaay too late. By tropicgirl, June 5, 2009 at 6:31 am Link to this comment Benny Morris is a sick man. He has to resist the draft to obtain his “impeccable liberal credentials”. From Benny… “the fatal inability to forge any sort of national unity or create any national institutions; the lack of military prowess and, even, military willingness; and the incompetence and sheer opportunism of the surrounding Arab states… Your speaking about Palestinians that had their land, food, water, self-respect, homes, children, fathers, brothers, mothers destroyed by Israel. What the F do you think happens to a people you destroy? And you fault them for not being “statesmen?” You are a very sick man, Benny. By omop, June 5, 2009 at 6:19 am Link to this comment The “Israeli-Palestenian Conflict” per se is basically a conflict between a certain group of people who have been [excuse my french] shafted by another group of people and driven off their homes and land they lived on for centuries. And were replaced by peoples from a variety of countries who claimed a God given grant to it. ( The inference being that their God promised the land to one of Adam and Eve’s many descendants. If this conflict were to be legally and ethically resolved. The one side claiming a “grant from God” as their right would lose since “hearsay” cannot be legally binding. The only way that it can be enforced is by ‘Force’. Mr. Benny Morris’s “chutzpah” to “grant Palestenian’s lands and homes” to others (Jordan and Egypt) and keep their homes and lands for the descednats of Adam and Eve are attempts at playing the “God” card. In time and with added more “chutzpah” the logic would be that every peson of the Jewish faith is an Israeli no matter where they were born or live with attendant questionable loyalties. One could ascribe a Hitlerian-like stigma reminiscent of “Duetchland uber alles” or in the extreme a copy cat ” aryan complex” based on the mythology of “God’s chosen people”. By JP, June 5, 2009 at 5:12 am Link to this comment (Unregistered commenter) I am very surprise of the level of knowledge that Susie Linfield has about the Israel Palestinian conflict, it is really low to say the least. This is really a low point in the normal high standards of truthdig. As another commentator point out before, “pathetic”. By ardee, June 5, 2009 at 4:54 am Link to this comment This article is a blatant apology for the inexcusible actions of the new Jewish State towards those who occupied that territory. It also contains a blatant fraud when it states that the expulsion of over 700,000 Arab residents was an “unfortunate result of war”. In the late 1930’s David Ben Gurion and eleven other future leaders of the coming State of Israel lived in a house in Haifa and planned for the future with regard to governance. Ben Gurion himself stated that expulsion of a million Arabs was going to be necessary to prevent the overwhelming of the emigrating Jews in a democratic state. That they failed to achieve their goal by a quarter million is not the point; the real point being that there is much propaganda generated by Israeli apologists, this article being one such. I understand that a two state solution is a political necessity, caused by the hatred and violence of both sides for a very long time. But a single state solution, which would be possible if the Israeli govt was a true democracy and fairly represented the entirety of that community, would be so very beneficial to both Arab and Jew, Israelis all. By Bubba, June 5, 2009 at 4:35 am Link to this comment Zionist/Israeli apologetics don’t work any more. They’ve not been working for some time. Especially Benny’s. Come into present time, Susie. By Inherit The Wind, June 5, 2009 at 4:32 am Link to this comment As usual, “TC” will ignore the facts cited, misinterpret the analysis based on those facts, and heap insults on the author, the reviewer and anyone who dares defend either. In fact “TC” are the very deluded Lefists Morris and the reviewer denounce, so OF COURSE they’ll strike back in anger and personal attacks. What else is new? By Herman Edward Schmidt, June 5, 2009 at 4:19 am Link to this comment (Unregistered commenter) By the reasoning of the reviewer and Mr. Morris, injustice cannot be addressed because the consequences are too dire. In this kind of thinking, there is the fear of the jailer of the consequences once the prisoners are set free. That fear is irrational if it is heartfelt, but in fact it is not. It is just another roadblock thrown up by the Israeli Jews and their fellow travelers to reaching any solution that requires the Jews to give up their priviledges as masters of another group of people and dominion over Palestine. It is clear at this point in time the Jews in Israel and those who support them are unwilling to concede anything and a solution must be imposed on them. Given their power in the West and their zeal, that is no small task. It is foolhardy to try to persuade them that a solution which equalizes the rights and privileges of both Jews and Palestinians is in their own interest, which it is. By thebeerdoctor, June 5, 2009 at 3:27 am Link to this comment The analysis in this book review is utterly pathetic. By doublestandards/glasshouses, June 5, 2009 at 3:02 am Link to this comment (Unregistered commenter) Oh the “fog” of war. It worked for McNamara. By P. T., June 5, 2009 at 2:17 am Link to this comment The book review is disingenuous. The Zionists never abandoned their expansionist plans, and the Arabs knew (and know) it. Expansion and ethnic cleansing continue no matter what political party is in power in Israel. The right of return of the ethnically cleansed, indigenous Palestinian people is grounded in international law. It is odd to see Jews who have never before set foot in Palestine claim a right of return that they oppose for the indigenous people—talk about bad faith! By Allan Siegel, June 5, 2009 at 12:40 am Link to this comment (Unregistered commenter) Is this supposed to be a book review or a polemic? Or a mash-up of both? Either way it is so sloppy, so careless in the way it skips back and forth between its paean to Morris and Linfield’s skewed sense of history that one wonders really why this is here.The problem with such reviews is that they act more like a garbage compactor pressing together events and people in what appears to be some chronological logic. Like most forms of compression, what results is various types of distortion and misplaced conclusions. The fact of the matter is that a two-state vs. one-state solution is not at all a superficial debate and is part of an ongoing process to resolve the conflict. And, in her glib manner, Ms. Linfield’s understanding of the creation of the Warsaw Ghetto and its liquidation is pitiful. Posted on Nov 24, 2015 A Progressive Journal of News and Opinion Publisher Zuade Kaufman | Editor Robert Scheer About Us | Contact Us | Advertise on Truthdig | User Agreement | Privacy Policy | Comment Policy Website development by Hop Studios
http://www.truthdig.com/arts_culture/page3/20090604_susie_linfield_on_how_to_think_about_the_israeli-palestinian_confl
CC-MAIN-2015-48
refinedweb
10,842
68.5
The opinions expressed herein are my own personal opinions. They are not necessarily fact or sactioned by any other person or organization. If you disagree that's your right. It's also my right to not care. In one of the mailing lists I'm on today, someone noted that on their Windows Mobile device the performance of GDI is much better in portrait mode than it is in landscape mode and was wondering why. The reason is actually pretty simple. Think of the image on the screen as just a contiguous stream of bytes (which is probably is). The physical display needs to get that data and "paint" it. Displays are engineered to essentially take in the data in a linear fashion, left to right, top to bottom. So it's a pretty simple operation to just send out the framebuffer data to the display. Now consider what happens if you want the display rotated 90 degrees. Remember, the display requires the data top to bottom, left to right, as it will show on the screen. To get it into that state you have 2 options. You can either A) rotate the data as you take it out of the frame buffer and send it to the display or B) rotate all calls to draw into the framebuffer. In either case you have to do a matrix transform, which is expensive and gets worse the larger your display gets (so a VGA device will be more affected than a QVGA device). Of course some devices have hardware acceleration that can greatly improve things by having matrix functions right in the silicon, but that still requires that the OEM actually modifies the driver to use that function (you'd be surprised how many don't). So how bad is the performance penalty? Well, since I like to actually quantify stuff I decided to resurrect an old GDI test app I had and rework it for this. The results of my testing, as well as tests sent in from other people (if you want to add a test to the table, send me your results): As you can see, there's a significant price to running rotated. The test application (source and ARMv4I binary) is available here GDIPerf.zip (10.46 KB) If you've ever needed to automatically launch an application under Windows CE then you probably know that one of the most common ways is to put an entry into the device registry under HKLM\Init.M\Init, it is responsible for calling SignalStarted once it is running. This allows any other items launching from HKLM\InitM\Init by simply having the gate application launch first, and then having the CF app depend on the gate. So in the registry, it would look like this: [HKEY_LOCAL_MACHINE\Init] "Launch90"="gateapp.exe" "Depend90"=hex:1e,00 ; depend on GWES, which is at 30 "Launch91"="MyCFApp.exe" "Depend91"M\Init\r\n"))); } return 0;} If).); } } }} Since the Compact Framework doesn't have support for App.Config file, we created our own implementation that follows the full framework model. It requires that the config file be named “MyApp.exe.config” (which is how it works on the desktop) with a subset of the desktop functionality and it's worked well for some time. Until recently that is. Yesterday I set out to create some unit tests for some of our OpenNETCF.Rss namespace objects. Well the FeedEngine object requires information from an app.config file on construction, so I figured it would be simple - I'd just add an app.config file to the test project and mark it as a DeploymentItem. After a little investigation I found that the calling assembly for a device unit test is SmartDeviceTestHost.exe, which by default runs out of \Program Files\SmartDeviceTest on the target device. That means that to use your own app config file from a unit test it would need to be named SmartDeviceTestHost.exe.config. Interestingly (or frustratingly, depending on when you asked me yesterday), this test host deploys *its own* version of a config file with the same name with some info on what framework it’s running against. The test framework just heavy-handedly overwrites any existing file rather than merging its contents into the existing one, and it overwrites *after* it deploys all of the test pieces, so you can’t just merge its contents into your own app config and use it. As a workaround I actually modified the OpenNETCF implementation for app config files. I didn't really want to, but the only other solution I could think of was to write code that would open the MS-deployed version and do a manual merge in the unit test code before the test is run, and that seemed like a much uglier route. The OpenNETCF Configuration implementation now looks for a file named MyApp.exe.config.unittest before looking for MyApp.exe.config and uses the "unittest"-suffixed version if it’s there. I then modified my TestBase class (from which all of my unit tests derive) to add this: [TestInitialize] public virtual void TestInitialize() { CopyTestConfigFile(); } private void CopyTestConfigFile() // copy the config file to the test host folder string src = Path.Combine(TestContext.TestDeploymentDir, "SmartDeviceTestHost.exe.config"); string dest = Path.Combine(TestHostFolder, "SmartDeviceTestHost.exe.config.unittest"); if ((File.Exists(src)) && (!File.Exists(dest))) { File.Copy(src, dest); } public string TestHostFolder get return Path.GetDirectoryName( Path.GetDirectoryName( Assembly.GetCallingAssembly().GetName().CodeBase))); Now I simply add SmartDeviceTestHost.exe.config to the unit test project, mark it as a deployment item and voila - it works as expected. Just how it should have yesterday morning when I set out to write a couple simple tests. And for the record - the current "solution" for debugging device unit tests (which involves putting in a Debugger.Break() call in the unit test and then doing an "attach to process" from another instance of Studio) is an unweildy pain in the ass. It takes no less than a minute just te get a unit test running and in a state that you can step through code. That might not sound like a lot, but try this: put a breakpoint in your code and when the debugger hit is, wait a full minute before you step or look at the Locals window. Now do this every time you want to debug. This morning OpenNETCF announced a new web initiative - our Community Web Site. I'll spare you explaining it in detail here, as it's on the front page of the site, but we've got articles and white papers, a public SVN server for shared-source projects and a monthly coding competition. This month we're giving away a copy of Studio 2005 professional and a Windows Mobile 5.0 device of your choice. To come are Forums and a Wiki. Let us know what you think.). I was doing a search for some info just now and came across a really well done article describing drivers in Windows CE. Check it out. We came across this article while doing some research. It's well worth the read for anyone doing any kind of development. Ever want to make all CAB file installs on your device be silent? Simply add the following registry key to your platform: [HKEY_CLASSES_ROOT\cabfile\Shell\Open\Command] @="wceload.exe \"%1\" /nodelete /noaskdest /noui" OK here is what I have learned so far that is necessary for PocketAccess synchronization under Windows Mobile 5/ActiveSync 4:The files adoce30.dll and adocedb30.dll must be on the device in theWindows folder. The file adosync.dll must be on the device andregistered. Then, on the desktop, remove the following registry key:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows CEServices\SpecialDefaults\PocketPC04\Services\Synchronization\Objects\~MicrosoftTableYou may have to disconnect and reconnect the device before you see the"Pocket Access" sync enabled. That seems to be all there is to it,except for a percentage of users who have been reporting an "AccessDenied" error when they try the transfer of a database file. I haven'tfigured out yet why some users are getting this message, so if anyonehas any suggestions on that please let me know. If.: I'm trying to debug an error in some managed code related to timezones when a P/Invoke is throwing a native access violation exception. To make sure it's not something with the platform itself I decided to do the same in native code first to ensure that it's not some platform limitation or bug. Surprisingly I didn't readily find any sample code for doing it, so here's my contribution to the community at large. It uses dynamic loading (and it doesn't make sure that the function loads succeed, so you might want to add error checking if you actually intend to use this): #include <windows.h>typedef void (*INITCITYDB)(void);typedef void (*UNINITCITYDB)(void);typedef void (*LOADTZDATA)(void);typedef void (*FREETZDATA)(void);typedef int (*GETNUMZONES)(void);typedef void * (*GETTZDATABYOFFSET)(int, int*);typedef void * (*GETTZDATA)(int);struct TZData{ TCHAR *Name; TCHAR *ShortName; TCHAR *DSTName; int GMTOffset; int DSTOffset;};int _tmain(int argc, _TCHAR* argv[]){ TZData *pTZ = NULL; int index; // load the library HINSTANCE hLib = LoadLibrary(_T("CityDB.dll")); // load the CityDB functions INITCITYDB InitCityDB = (INITCITYDB)GetProcAddress( hLib, _T("InitCityDb")); UNINITCITYDB UninitCityDB = (UNINITCITYDB)GetProcAddress( hLib, _T("UninitCityDb")); LOADTZDATA ClockLoadAllTimeZoneData = (LOADTZDATA)GetProcAddress( hLib, _T("ClockLoadAllTimeZoneData")); FREETZDATA ClockFreeAllTimeZoneData = (FREETZDATA)GetProcAddress( hLib, _T("ClockFreeAllTimeZoneData")); GETNUMZONES ClockGetNumTimezones = (GETNUMZONES)GetProcAddress( hLib, _T("ClockGetNumTimezones")); GETTZDATABYOFFSET ClockGetTimeZoneDataByOffset = (GETTZDATABYOFFSET)GetProcAddress(hLib, _T("ClockGetTimeZoneDataByOffset")); GETTZDATA ClockGetTimeZoneData = (GETTZDATA)GetProcAddress( hLib, _T("ClockGetTimeZoneData")); // Init the library InitCityDB(); // load the TZ data ClockLoadAllTimeZoneData(); // find out how many zones are defined int zoneCount = ClockGetNumTimezones(); // interate through them all for(int zone = 0 ; zone < zoneCount ; zone++) { // these are pointers to a timezone data struct pTZ = (TZData*)ClockGetTimeZoneDataByOffset(zone, &index); } // unload the TZ data ClockFreeAllTimeZoneData(); // uninit the library UninitCityDB(); return 0;} You. If you're a Windows CE device developer and you've got a device with a persistent registry, you're probably already aware that the CF Registry class doesn't help much for saving the registry, restoring branches from a file or creating volatile keys. Once again the SDF is here to help with CreateVolatileSubkey, RestoreHiveBasedKey, RestoreRamBasedRegistry, SaveHiveBasedKey and SaveRamBasedRegistry. One note - the doc says they're in the Registry2 class - that's already been changed to RegistryHelper. Making apps power aware, especially through the device Power Manager works, but the code is kind of ugly and a pain to implement. Once again SDF 2.0 simplifies things: using System;using System.Windows.Forms;using OpenNETCF.WindowsCE;namespace WindowsCETest{ public partial class MyPowerAwareClass { public MyPowerAwareClass() { DeviceManagement.DeviceWake += new DeviceNotification(DeviceManagement_DeviceWake); PowerManagement.PowerUp += new DeviceNotification(PowerManagement_PowerUp); } void PowerManagement_PowerUp() { MessageBox.Show("The Power Manager says I'm awake!"); } void DeviceManagement_DeviceWake() { MessageBox.Show("Device notifications say I'm awake!"); } }} We're extremely close to a beta release of SDF 2.0. To give you a taste of what's in it, we've posted the online documentation. Any feedback is appreciated. Enjoy. A couple new articles from OpenNETCF have finally gone public: Device Debugging and Emulation in Visual Studio 2005 from Alex FeinmanUsing Visual Studio 2005 to Design User Interfaces and Data for Device Applications from Maarten Struys I just got word today from the Group Program Manager of the Compact Framework team that Compact Famework 2.0 Service Pack 1 (due out next year - dates not finalized, so I won't speculate) will have the following support for general Windows CE 4.2: The picture for integration into Studio 2005, debugging, emulators and all of that is still fuzzy, so for now I'd plan pessimistically and assume that the two points above are all you'll get, but at least it lets you move forward. If you've not heard here are a few bits: Yesterday I was working on a project using C++ for a device under Visual Studio '05 and for some reason it stopped linking with the following error: error LNK2019: unresolved external symbol __security_check_cookie referenced in function "int __cdecl RegisterAndActivate(void)" (?RegisterAndActivate@@YAHXZ) No idea even what the error meant, so I messed with some settings, rolled back code - all the usual things to try to find it. Nothing. I decided to bag it for the night - maybe fresh eyes today would help. I loaded the project again today. Same error. I removed the project from the solution, created a new one, and re-added my code. Same error. So I turned to my fellow coders to see if anyone else had seen this. Jeff Abraham of Microsoft replied as follows:. Sure, enough, I added secchk.lib to the Additional Dependencies line under the Project's Linker | Input section and it's now building again. Why the error occured in the first place I'm not too concerned with - the release of Studio is only weeks away and I'll be uninstalling soon, but Googling on this error turned up nothing helpful at all. Hopefully this blog entry will remedy that if someone else is looking.
http://blog.opennetcf.com/ctacke/CategoryView,category,CE%20Device%20Development.aspx
crawl-002
refinedweb
2,180
53.61
Setting Up a React Environment In the last article, we learned about ref which helps us communicate with DOM elements when used in React components. At this stage, if you’ve gone through all the articles in the series and practiced a bit, you should be pretty familiar with how we create components and applications. As such, I think the time has come to build a real React environment. Until now, we’ve used React and Babel Scripts. React ran the scripts and Babel happily parsed the JSX. Everything looked great except… that’s fine for learning but won’t do for production. The environment we used until now is great for learning and experimenting, but it won’t work well if we want to build a real program. To do things right, we’ll need to do a build and we’ll need to do some real transpiling. Then we’ll take the JavaScript we get back and run various processes on it. We might need routing, lazy load, compression, aggregation. We’ll use Node.js to carry out the build. You won’t need to have a deep understanding of Node—we’ll just be using it for the build. In this article, I’ll explain how to install an environment that can support a site based on React from the ground up. No more of this kinda thing: <script src=''></script> <script src=''></script> <script src=''></script> So how do we start? First, we need to clear a few things up about the build. Until now, we more or less wrote JSX. Browsers don’t know how to handle JSX so we’ve been using Babel to convert the JSX to something a browser can process. That was when we used this line: <script src=''></script> We made the folder called babel perform the conversion from JSX to JavaScript code that the browser can handle. This process is called transpiling, which by the way, is pretty expensive when it comes to time and resources. When we were learning about components, it didn’t matter if we did that at runtime, but in a production environment on a real project, we would get some pretty lousy results if we called Babel like that. So what we’ll do of course is take care of it beforehand. And on the way, we can also do some syntax checking, minifying, and aggregation. This process is what we call the build and we’ll do it using Node.js. If you’re thinking that this sounds complicated, don’t worry—it’s easier than you might think. First off, install Node.js which is easy whether you’re on Mac, Linux, or Windows. Once you’ve got Node installed, open the command line or console and run node -v. If you see the version number then you’re ready to start working. Go to the folder with your documents or whatever folder is comfortable to work from and type in the interface: npx create-react-app my-app This will make my-app the name of the React application, but you can of course use any name. Directly after running this command, the application will install itself using Node. Yes, that’s right, you don’t really need to know anything about Node to use it here. This installation might take a while, but it will eventually finish. You can fire it up on your local machine using the following command: npm start This will immediately open our React application! Yep, right on localhost:3000 Behind this application, there’s quite a bit of technology: along with Node.js you’ve got, Webpack, Babel, and of course all the React folders. So at this point, the only thing left for us to do is to start coding. It’s really that quick and easy. Now that we’re playing in the big leagues, we’ll need to write the syntax of our components a bit different. Remember our React components? Let’s have a look at how they work. First we’ll make a components folder in the src directory. Inside of it, we’ll make another folder called MyComponent and inside that, we’ll create a file called MyComponent.jsx. Here’s what’s in the file: import React from 'react'; class MyComponent extends React.Component { render() { return <p>Hello world!</p> } } export default MyComponent; There are two essential changes here that we should have a closer look at. The first is import React from 'react'; which is syntax from web component (the components that helps us ‘import’ what we need). In this case, what we’re importing is React. If we would have needed to use another component, we would need to import it the same way. The second is the export default MyComponent; which exports the component. That’s about it. In order to use the component, go to app.js and make sure that we’re importing it. import MyComponent from './components/MyComponent/MyComponent.jsx'; <MyComponent/> And you thought it would be difficult. From this point foreward, we’re working on a real React application. There are a million things we didn’t cover in create react app which comes with tons of good stuff like webapp, SASS right out of the box, static code analysis, and much more. But I’ll try to focus on React here and not on the infrastructure. In the next article we’ll have a look at routing, debugging, and a few other important subjects.
http://www.discoversdk.com/blog/setting-up-a-react-environment
CC-MAIN-2019-26
refinedweb
924
73.68
- - - I'm having the same problem as well. Poll still triggers after setting excluded messages/users It's kinda important to us and probably others because we use maven-release-plugin, as shown in the example in the snippet generator. Jesse Glick Any comment on this issue? I can't imagine being the only one with this problem. This is really the only thing preventing us to move to a pipeline/multi-branch workflow. I know that pipelines are the future of Jenkins so we really want to make the switch so we can benefit from all the recent improvements in Jenkins. +1 For me, it's the polling ignores that are configured in the job itself and not the Jenkinsfile. Preston Jennings you're describing a condition which has a very different root case compared to the conditions which cause this bug report. Please find an existing bug report which describes cases where a job ignores polling exclusion rules which are defined within the job, or submit a new bug. Mark Waite Understood. I've created a separate defect as I couldn't find anything specific to my case: I'm not sure how it would honor the polling settings inside the Jenkinsfile in the repository. This is automatic, based on the checkout done in the last build. Not sure offhand, should work, since Pipeline supports workspace-based polling, but perhaps there is some bug, Git-specific or not, and missing test coverage. Generally speaking, polling exclusions are discouraged for Git since the plugin does not currently support them without reverting to workspace-based polling, which is always bad. AbstractGitSCMSource does define a caching system which could be used to fix that, but non-multibranch parts of the Git plugin do not currently take advantage of it. Not to be confused with JENKINS-35988: multibranch indexing does not use SCM.compareRemoteRevisionWith at all. Jesse Glick Thanks for chiming in. I'm not sure I understand all the comments other than it should work. I use polling but in a sneaky way: I use Bitbucket server with an add-on that triggers Jenkins job git polling. The job is set to poll but on an infinite schedule (never). So I get job triggering only when the git repository is updated but with all the git polling rules being observed. I use the git caching too to speed up the data transfers (I fetch updates to the repository once daily using a Jenkins job). Mark Waite Jesse has chimed in. Any comments on his remarks? No comments from me without a detailed investigation. There are higher priority bugs in the git plugin which will get my attention before this bug, and several pull requests which will get my attention before this bug. My apologies, but I won't do a detailed investigation of this bug report for a long time. If it is crucial to you, you're encouraged to fork the git plugin source code, write a test which shows the failure, write the fix, and then submit the pull request. Mark Waite, this has been a fairly bothersome bug for my team. Most of our pipelines are set up to do maven releases, which can be a bit chatty with Git commit/push; one for the release version commit, a second for the dependency version bump. The two extra concurrent builds can create heavy load on the machine, sometimes causing the release build to fail which can leave our SCM and Nexus repos in a weird limbo state. There are two potential workarounds we've identified to getting this down to a single extra build. Either we restrict the maven push til the very end or attempt to filter out webhook calls from our repo service. Both are a bit of a hack, and we were trying to gauge this bug, ultimately filtering out certain commits with particular messages/users. Could you make a recommendation on any other potential solutions, or point me in the right direction of which projects/code would likely need to change to get this fixed? We are using multi-branch for this, so that may complicate our position a bit further as mentioned above, it doesn't use workspace polling. Matt Traynham the git-plugin is the most likely place that needs the change. The AbstractGitSCMSource class is the one that is used by the pipeline code for its git operations. That seems like the first place to start looking. There are various guides to getting started as a Jenkins plugin developer, including plugin tutorial and Extending Jenkins. The jenkins-dev mailing list is also a good place for help. Most of our pipelines are set up to do maven releases, which can be a bit chatty with Git commit/push; one for the release version commit, a second for the dependency version bump. The two extra concurrent builds can create heavy load Ultimately the Maven release plugin is not really compatible with CI workflows. We are using multi-branch for this, so that may complicate our position This bug is about regular polling, which is supposed to support polling rules. Multibranch scanning definitely does not support polling rules. See JENKINS-35988. Can this functionality be moved to later stages? So some steps can be skipped in case there are no appropriate changes as described in JENKINS-37978. At least Included/Excluded regions actually works ok if you also specify Disable Remote Poll, e.g.: checkout( scm: [ ... , extensions: [[$class: 'DisableRemotePoll'], [$class: 'PathRestriction', excludedRegions: '', includedRegions: 'foo/.*']], ... ]) Yes. polling REGION inclusion/exclusion is working. The USER inclusion/exclusion is not working. We too have the issue of automatic releases where the release is tagged and the version is bumped by Jenkins. This results in a builds after the push. In Jenkins 1.x this was easily remedied using the user exclusions. Hoping a fix is in the works. No fix is in the works as far as I know. This particular use case is quite far down my priority list (behind large file support, submodule bug fixes, and authentication issues). Can we also make it possible to pass properties (like excludedUsers) from git step to checkout step, as currently git step allows only for url, branch and credentialsId. It would make life a lot easier, as we wouldn't have to switch from git to checkout (which is less readable) just to pass some property. Krzysztof Wolny the git step is only for the most simplistic use cases. Use checkout. To be clear, from the description I am assuming this is a bug not a feature. Would require someone to take time to do the analysis. Offhand I have no idea where the problem lies. Jesse Glick I know that, and I think it should be easy to pass properties to "parent". But that's the point of using git - simplicity. If you go this way you proposed further, we can remove git - and use checkout, right? I have an even trickier problem. We use standardized jenkinsfiles from a seperate repository to facilitate 200+ java projects. And these Jenkinsfiles are fed with parameters in the job, with properties like gitUrl and branchName. These values are used in the jenkinsfiles in a checkout step. So the 'pipeline script from SCM' bit in the job is not the git repository that contains the code. It does pick up commit changes in the parameterized git repository, but exclusions are not honored. So we have git repository 'A' containing jenkinsfiles. These are used in the pipelinescript from SCM bit in the Job. And we have git repository 'B' containing sourcecode to be built. The git url of this repository is fed to the jenkinsfile from repository 'A' The problem now is that we have an SCM trigger on the pipeline job to run every night, but the [maven-release-plugin] triggers itself because of version update commits. So we want to exclude either the jenkins user commits or commits containing [maven-release-plugin]. I've tried adding exclusions to the pipeline script from SCM additional behaviours as well as in the jenkinsfiles checkout step. We really need this for our pipeline implementation Just wanted to share our workaround for the polling issue. We came up with a solution using the groovy script. It is not perfect but it helps us by preventing unwanted releases triggered by the maven-release-plugin. In one of the first stages we check the "git log" for the comments of the last day using: def gitComments = sh(returnStdout: true, script: "git log --pretty=format:\"%cd %h %s\" --date=short --since=\"last day\"") We then parse this list to see if it only contains [maven-release-plugin] commits. If so we stop the Jenkins job using the following lines: currentBuild.result = "NOT_BUILT" error "Pipeline not built. No new user commits found" It is a bit ugly but the job status shows up as NOT_BUILT and that is enough for us. It's a bummer to have something given to you by the Pipeline Syntax page, only to figure out it is not working properly after some trial and error. If it's not working properly - Pipeline Syntax page should not allow to configure it, and if someone tries to - it should at least generate a warning during runtime (as long as it is a known limitation and not an undiscovered bug). Mark Waite When that happens, the question that remains is whether this issue would be considered a bug or RFE... I prefer to see it as a bug. We work around this by checking the commit authors post checkout and abort the build if only the build user committed. Soon we will switch to post-receive hook and have no need for this but its still worth doing. I consider this a request for enhancement, rather than a bug. The git plugin special polling rules have been consciously chosen to not apply in the pipeline case. They are not a general purpose technique across all the SCM systems supported in pipeline. Mark Waite : I understand the decision made, but it's not obvious from UI. UI clearly shows those options, and it's confusing why they are not working. Slawomir Demichowicz the UI confusion is resolved or will be resolved with the release of git plugin 3.4.0 and the SCMNavigator improvements from Stephen Connolly. I believe this still should remain open as an enhancement request for those users who want pipeline to be able to ignore certain users or certain regions. I think this would be best implemented as a Filtering trait if people want to ignore branches from certain users. If people want the branches to remain but ignore specific commits, the best bet IMHO is to have the pipeline abort in a defined way from the Jenkinsfile parse so that Jenkins is happy that the commit has been built and everyone else is happy that the build did nothing Not sure if this is the place to post this, but it may be useful to someone. I have it implement this way (outside of `step` block) if (stash.isJenkinsLastAuthor() && !params.SKIP_CHANGELOG_CHECK) { echo "No more new changes, stopping pipeline" currentBuild.result = "SUCCESS" return } setting SKIP_CHANGELOG_CHECK - this allows me to force build ( in case I do want the pipeline to run regardless of who was the last one to commit ) Then it looks like this in the UI As I have noted several times here, there are two completely related cases. For a standalone Pipeline job, all SCM features including exclusions are supposed to work exactly as they do for freestyle projects. If they do not, that is a bug. I just do not know how to reproduce it. For a branch project, there is no support for commit exclusions, nor should any such be offered in the recently amended UX. If we want to add support, I think it should just work as people expect: branch changes with only insignificant changes should not trigger new builds. In either case, you always have the workaround of inspecting currentBuild.changeSets from a script and returning early if the changes look boring. JENKINS-27092 would allow you to do this with the error step so you could get the natural NOT_BUILT status (and not have to deal with a return statement, which does not work inside a library, closure, etc.). +1 to have the "Polling ignores commits from certain users" option available for Multibranch Pipeline Plugins. I would like to ignore the commits from the Jenkins-User. The Reason for that: We are doing java development and using Maven with the official maven-release-plugin. Thus the call of "mvn release:prepare release:perform" results in a commit on the current branch. And this commit results then in a new build. I can check for the workaround mentioned by Jesse, but at least i want to have at less scripting as possible in our Jenkinsfiles. So having the option in the Jenkins GIT-Plugin would be much helpful. //edit: The Option "Polling ignores commits from certain users" is available in the old plugin version 3.3.2 Stephan Watermeyer please reread my above comments. You cannot use commit exclusions in multibranch Pipeline jobs. They are available in standalone jobs. This is by design. If you want a job doing Maven releases, it should be a standalone job, not a branch job. Thanks Jesse Glick for your reply. Understood. So we have to create two jobs per component, one job to make the ci-builds for each branch and another job to release the component. So i've to create/maintain a second Jenkinsfile in each component. Would this be the recommended way to go? That would be nearly the way how jenkins-plugins are developed but i am still unsure if Jenkins should be used in that way. Further: I am still thinking about an option to extend my Jenkinsfile to update the "scm-revision-hash.xml" file after a mvn release. My idea is to put in something in my pipeline like: ... stage('Release') { when { branch 'master' } steps { sh 'mvn release:prepare release:perform' // get the latest commit id sh '$LASTCOMMITID=git log -1 --pretty=format:"%H";' // Search and Replace the Hash in the scm-revision-hash.xml file with $LASTCOMMITID sh 'search/replace in jobs/<projectname>/<branchname>/scm-revision-hash.xml and replace hash with $LASTCOMMITID' } } ... What do you think? What would be a nice & clean way to update the scm-revision-hash.xml file? Or is this just crap? Stephan Watermeyer yes you should have a separate job definition for release builds, based on some Pipeline script from SCM, but not multibranch. I do not advise trying to touch scm-revision-hash.xml. This is not supported, and is also incompatible with a secured Jenkins instance. Good day. I have been watching this item for a while and thought I would chime in. We have a non-multi branch pipeline project that does the maven release steps and during the release process it will check the pom back in with updates to <version> and we encountered this issue where the polling exclusion doesn't work to ignore changes made by the build user, so we would get stuck in an infinite build loop until we killed the job. The work around I came up with was to use the 'ci-skip' feature of the GitLab CI project trigger and I modified our maven check-in comments to prepend '[ci-skip]' to any of the check-in commit messages performed by our maven release process. Jesse Glick Let me dig up my test jobs and version info to be able to pass on some information about reproducing this. I had encountered this a while ago and haven't tested it with recent versions, so it will be good to make sure I have the details of the problem again. Thanks for all your work. Ron ron swanson it is expected that a Maven release will trigger a new CI build by default, since sources have indeed changed (even if only trivially). There should be no “loop” since a job with an SCM trigger should never be performing releases! A release job should have only a manual trigger. Please keep it to the user’s list. This issue is about a claimed bug which I do not know how to reproduce: that in a standalone, not multibranch, job using an SCM trigger and an SCM checkout which specifies polling exclusions using whatever SCM-specific configuration (reported here for Git), that ignorable commits nonetheless trigger builds. No other comments are relevant in this issue. Jesse Glick Sorry, i thought I was talking about what was relevant. I have a non-multi branch project which attempts to use excludedUsers to ignore checkins that were made by the user running the build. I DONT want to get into a discussion about what maven should or shouldn't be doing, but I think I am trying to stay on what I think the topic is, which to me is, the excludedUsers doesn't work in the scm checkout to ignore checkouts by specific users. Am I still off-base? Thanks Ron the excludedUsers doesn't work in the scm checkout to ignore checkouts by specific users If reproducible (using a minimal self-contained test case etc.), that is a bug. Jesse Glick Thanks.... and I have to ask as I am new to this, do you mean a test case like this? Thanks Ron Well an actual JUnit test case reproducing the bug (for bonus points, in a PR with an initial @Ignore annotation) is great, since then a developer can get straight to work analyzing and fixing it. But detailed manual steps suffice: install Jenkins 2.xx, install such-and-such plugins, create a job with the following script, run the following Git commands in a shell, whatever. AFAICT neither this nor this currently test polling exclusions explicitly. This call is where Pipeline passes control to the SCM to determine whether there are changes, implemented for example here. Thanks for that detailed response, that gives me some good stuff to look at. I have an environment that i just tested this out in, but i will have to recreate to make sure I get the details right and I can post it back. Doing a PR is a bit out of my wheelhouse, but I do want to learn how to write Jenkins plugins, so this is good. Thanks Ron Same problem here, using: - Jenkins ver. 2.60.3 - v 2.5 I have tried to ignore poll SCM with Excluded Users and Excluded Messages, none worked. I am at Jenkins ver 2.46.2 1.4.7 2.5 My test case was: -Pipeline job that checked out single Jenkinsfile -"Poll SCM" was checked, but schedule was empty -"Triggered builds remotely (eg from scripts)" and I set a Authentication token -Gitlab has this trigger for my job: -Tried the exclude logic from the job config in UI, as well as from in Jenkinsfile using "checkout scm". -My Jenkinsfile calls these maven commands to simulate a release: versions:set, versions:commit, scm:add and scm:checkin, which just updates the pom.xml file with the next version. I tested every combination I could think of. Setting different user.name at checkout time using additional behaviour, changing displayname as defined in /etc/passwd, adding multiple users to exclude list, trying logic at job level, and in pipeline using 'checkout SCM'. Something to note is Jenkins reports the job was triggered by the user 'jenkins' as that is the user Jenkins is running as, but the user that checks into GIT is what is defined as the display name from /etc/passwd file. So I am not exactly sure what user it SHOULD be expecting to try to exclude, but I have tried to exclude all the different users, individually as well as in a multiple list. This is from the "Changes" link in a particular build and shows that 'jenkins' did it, as 'jenkins' is the unix user that Jenkins is running under. *Commit f3b85jh3jh4k3j4h5kj3h45kj2k3j2bfc9c99727c566ba718c2 by *jenkins [ci-skip] updating pom to 2017.08.427 But in GITLAB, the checkin comment shows the user that is defined as the display name in the /etc/passwd file for the user. I have also tried changing that displayname to match the actual username, so it is the same value on both sides, to no avail. [ci-skip] updating pom to 2017.08.427 CMJenkins committed 2 minutes ago In my pom i use these GIT connection urls, so I have an SSH key setup between the user jenkins is running as, and the user we are connecting to GIT with, which is a different user, (and I've tried to exclude that one too) <connection>scm:git:ssh://git@myGITLABserver/myGROUP/myPROJECT.git</connection><developerConnection>scm:git:ssh://git@myGITLABserver/myGROUP/myPROJECT.git</developerConnection> It doesn't mean much, but I have a similar setup using 1.X Jenkins interacting with GitHub and things work as they should. Thanks for everything. Ron Do we have any way to git with include / exclude path for polling in pipeline? Because from pipeline syntax, I could not able to add exclude / include path behavior. Is this functionality is missing in pipeline? Even in simple scripted pipeline? Michael Andrews: Did you able to test this with pipeline with git? How did you configure it? Can you share your working pipeline syntax? Nikhil Sidhaye It is not missing. You just need to use the full SCM step, and then use the extensions attribute. $class: 'GitSCM', branches: [ [ name: gitBranch ] ], doGenerateSubmoduleConfigurations: false, extensions: [ [ $class: 'UserExclusion', excludedUsers: """${scmPollingUserExclusions}""" ], [ $class: 'PathRestriction', excludedRegions: scmPollingPathExclusions, includedRegions: scmPollingPathInclusions ] ], ... Where, scmPollingUserExclusions, scmPollingPathExclusions, and scmPollingPathInclusions are pipeline parameters. Using v3.9.1 version of this plugin, I tried Michael Andrews suggestion about using pipeline parameters (scmPollingPathInclusions, "$scmPollingPathInclusions", "${scmPollingPathInclusions}", '$scmPollingPathInclusions'), and no luck so far. Also tried adding [$class: 'DisableRemotePoll'], job was still triggered on a region outside of the specified IncludedRegion. However, the region inclusion/exclusion works in freestyle jobs. Any plan to have this fixed soon? Pipeline - does not work [$class: 'PathRestriction', [$class: 'PathRestriction', excludedRegions: '', includedRegions: 'src/.*']] Freestyle - works Xiaohui Cui no plans from me to fix it soon. There were discussions between Jesse Glick and Stephen Connolly of techniques that might work to add region exclusion to the SCM API and then allow that region exclusion to be used by the git plugin and other SCM plugins. However, there has been no implementation of those discussions and I don't expect any implementation of those discussions in the near future. You need to use regular expressions - not glob style pattern matching. E.g., ^foo\/bar\/.*$ for all files in the /foo/bar directory. ...and be sure to escape the separator as shown in the previous comment. Mark Waite and Michael Andrews, thanks for the reply. FYI, I did more testing and seems like the regular expression does not make any difference. To simplify the paths to include, I limited to the README.md in the root folder. Again, the freestyle job was triggered correctly but pipeline job did not trigger correctly (as in changing to any file in the repo would trigger). Works: Does not work: [$class: 'PathRestriction', excludedRegions: '', includedRegions: 'README.md'] [$class: 'PathRestriction', excludedRegions: '', includedRegions: "README.md"] [$class: 'PathRestriction', excludedRegions: '', includedRegions: '^README.md\$'] [$class: 'PathRestriction', excludedRegions: '', includedRegions: "^README.md\$"] I even tried to use the pipeline syntax function on jenkins to make sure that I got the syntax correct. And I got exactly this line which did not work. [$class: 'PathRestriction', excludedRegions: '', includedRegions: 'README.md'] No luck finding a workaround yet... You should use an online regex tester to make sure your expressions is correct before attempting to use the pattern in Jenkins. Need to double escape the backslash. Just tried the following and all triggered by change to another file file in the repo. 'README\\.md' "README\\.md" '^README\\.md$' '^README\\.md\$' Michael Andrews, and thanks for the reminder about the dot. But the dot in regex would match "." anyways? What is causing the problem is that when the Git CLI client calls the "git log" command giving it a range of revisions to filter by (last build's most recent revision to the latest committed revision), all files in the repo/tree are returned instead of just the newly committed file(s). Example: I committed a file named "dir2/README8", and when Jenkins polled the SCM, it invoked: git log --full-history --no-abbrev --format=raw -M -m --raw 645221fb9fb8b752db7736d40c4c311e032482bb..d36267d8d88bdd085c894f35b1585a9f1ee9d755 # timeout=10 Which resulted in the following output: 576c717174a2f49675f4b57429424e3d54cf2a07 A README.md :000000 100644 0000000000000000000000000000000000000000 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 A dir1/README2 :000000 100644 0000000000000000000000000000000000000000 e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 A dir2/README8 Notice that other, unchanged files, also appear in the output. However, When I ran the same command not from Jenkins, I got: {{}} e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 A dir2/README8 {{}} Which is the expected result. This is turn causes the PathRestriction extension to iterate all these files instead of only the relevant ones, which means that if at least one of these files matches the "included" regions pattern (I made it so only the "dir1/.*" path is included), it will return PollingResult.SIGNIFICANT instead of PollingResult.NO_CHANGES. I verified that Jenkins is using the same git client I used in the command line, which is the one found in the path. I can't think of any explanation for this. Any ideas?? I was able to reproduce this easily and also to find a workaround. Reproduce I have the repo "user/repoA" that have the following Jenkinsfile: node { deleteDir() dir("repoA") { checkout scm: [$class: 'GitSCM', branches: [[name: 'master']], extensions: [[$class: 'UserExclusion', excludedUsers: 'jenkinsbot']], userRemoteConfigs: [[credentialsId: 'mycredentials', url: '']]] } dir("repoB") { checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: 'master']], extensions: [[$class: 'IgnoreNotifyCommit']], userRemoteConfigs: [[credentialsId: 'mycredentials', url: '']] ] } dir('repoB') { try { sh "git checkout master" sh "git config --global user.name 'jenkinsbot'" sh "git config --global user.email 'jenkinsbot@example.com'" sh "echo ${JOB_NAME}/${BUILD_NUMBER} > test.txt" sh "git add ." sh "git commit -m 'Automated: Updating test.txt during build'" sh "git push" } catch (err) { echo "No change in test.txt" } } } The Pipeline jobs looks like the following: Expected Behavior Achange to "repoA" and "repoB" should trigger polling. Polling should ignore "repoA" commits from the "jenkinsbot" user. Polling should ignore "repoB". Actual Behavior So a change to "repoA" and "repoB" should trigger polling. Polling should ignore "repoA" commits from the "jenkinsbot" user. Polling should ignore "repoB". In that particular case, because the user exclusion doe snot work the job builds in loop... Workaround The workaround is to set the User Exclusion in the Pipeline Definition SCM instead of inside the Jenkinsfile. Doing the following fix this scenario: Note: This is not required but in this case it makes sense anyway to use checkout scm in the Jenkinsfile. **** So it seems that the problem come from polling rules of the SCMs defined in checkout steps (vs. SCM definition in the Pipeline job). This workaround works in a scenario where the the commits that must be ignored are from the repository holding the Jenkinsfile. I have not tested a scenario where the polling rules are set to "repoB" with "poll: true". I am hoping this helps to find the root cause here. Are there plans to address this? Working with a scripted pipeline it's possible to define polling as follows properties( [ pipelineTriggers([ [$class: "SCMTrigger", scmpoll_spec: "H/5 * * * *"], ]) ] ) It would be very handy to be able to specify, includes / excludes at this level There have been some brief discussions about it, but there are no plans to address? Since the git plugin doesn't read the contents of Jenkinsfile when performing polling, I'm not sure how it would honor the polling settings inside the Jenkinsfile in the repository. It seems like it would be even more challenging in a repository with more than one branch, since the polling settings might differ between branches within the same repository. I wonder if Jesse Glick has any ideas how to handle this case and JENKINS-35988(and others like it).
https://issues.jenkins-ci.org/browse/JENKINS-36195?focusedCommentId=268447&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2020-05
refinedweb
4,699
62.98
In this article we will have a look at implementing a custom web control. This web control will inherit from the TextBox web control, and will automatically add a required field validator at run-time. This way, when we need a required field text box, we don't need to mess about with the required field validator, but instead we just define an instance of our custom web control. The class definition for our web control is quite straightforward. Since we want to create a TextBox control, we will inherit from the TextBox class. This means we will still be able to use all of the existing attributes which are defined for the TexBox - after all, we want a texbox as the basis with a required field validator 'embedded'. Let's have a look at our code-behind file for our RequiredTextBox control. using System; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace WimdowsControls.Web.UI { public class RequiredTextBox : TextBox { private RequiredFieldValidator req; public string InvalidMessage; public string ClientScript="true"; protected override void OnInit(EventArgs e) { req = new RequiredFieldValidator(); req.ControlToValidate = this.ID; req.ErrorMessage = this.InvalidMessage; req.EnableClientScript = (this.ClientScript.ToLower()!="false"); Controls.Add(req); } protected override void Render(HtmlTextWriter w) { base.Render(w); req.RenderControl(w); } } } As we can see, we define the class RequiredTextBox which inherits from the System.Web.UI.WebControls.TextBox class. Since our text box does not have any attributes like the ErrorMessage attribute of the RequiredFieldValidator, we need to define a public property for that. Note that we don’t necessarily need the get and set methods for the property here. Also, we define a ClientScript attribute which will map to the corresponding EnableClientScript attribute of the validator control. The OnInit method is a method we need to override, so we can explicitly add (by using the Controls.Add method) the required field validator control to our TextBox control. Also, we will have to override the Render method which renders the HTML to the browser. By calling base.Render() method, we call the Render method on our base class, in our case the TextBox class. In addition to that we need to render the HTML code for our RequiredFieldValidator control by calling its RenderControl method. We now need to compile this into an assembly. Using the C# compiler, we use the following command (assuming the directory is our virtual directory where our .aspx and code behind file are located): RequiredTextBox System.Web.UI.WebControls.TextBox ErrorMessage RequiredFieldValidator ClientScript EnableClientScript OnInit Controls.Add TextBox base.Render() RenderControl csc /out:bin\wimdowscontrols.dll /target:library /r:system.dll wimdowscontrols.cs Now, let's see what our cool ASP.NET page looks like: <%@ Register TagPrefix="Wimdows" Namespace="WimdowsControls.Web.UI" Assembly="WimdowsControls" %> That's it! We need do need the Register directive to register our component, almost in the same way as we do with a user control. Whereas the user control would point to a Src .ascx file, we specifiy our namespace in which we declared our custom control and the physical assembly, the DLL. By using any TagPrefix, we can refer to our new custom control as <TagPrefix:ClassName>. Of course we need the runat=server attribute, and we specify some other attributes that our control supports. We're ready to roll! Let's give it a go! Our control should not use client validation, since we specified the ClientScript=false attribute. Here's the screenshot: TagPrefix <TagPrefix:ClassName> runat=server ClientScript=false You will notice that on the submit of the form, it will always perform a server round-trip, because we explicitly specified the ClientScript attribute value to false. In this article we've briefly seen how beneficial it can be to create your own composite custom control. Of course you can extend the sample by adding more controls, and more attributes - but I will leave that up to you. You know the basics.
https://www.codeproject.com/articles/2748/building-an-asp-net-custom-web-control-a-textbox-a
CC-MAIN-2017-09
refinedweb
657
58.18
Awfully simple module to check a date for obsoletion Project description Really simple module to decide wether a a date string is considered “obsolete” or not. Given a time-string like 2010-01-01 it will return True if this date should not be retained, or False if it should. Other time strings can be used by specifying a different format string (see strftime and strptime for the detailed syntax) Example Usage: from retaindate import is_obsolete from datetime import datetime, timedelta for i in range(10*365, -1, -1): date = datetime.today() - timedelta(i) if not is_obsolete(date): print date See the module docs (or the source) for more details Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/retaindate/
CC-MAIN-2019-51
refinedweb
137
53.95
[cross-posting this to boost.threads.devel as I tried putting out a request for support there as well] Peter Dimov wrote: [snip] > > It seems that the static destructors are being run after > on_thread_exit for the main thread; that's pretty odd. I see some > logic in tss.cpp that looks like it should handle the case of > ordering/race issues between ~thread_specific_ptr and the thread exit > cleanup, but the code is too complicated to tell. Maybe you need to > instrument tss.cpp and tss_hooks.cpp to see what is being called and > when. As I couldn't find any problem with my own code, and as the problem seemed to spread to all of my Windows machines, I finally had to give in and instrumented some of the tss code. As you said - the static thread_specific_ptr<> destructors are run after on_thread_exit for the main thread (actually, some of them before, and one of them after, which is even more confusing). Reading the comments in tss_pe.cpp, it seems to be a deliberate decision to run on_thread_exit from the main thread before the static destructors are run. Perhaps to be able to use static objects safely during "normal" thread exits? Anyway, the real problem seems to be that the value of the native TSS slot is not reset to 0 (zero) after the pointed-to data has been destroyed (tss_slots) inside cleanup_slots. This is not a problem in general, as this is called when the actual thread is ending. But - in the context of the main thread cleanup_slots is called prematurely - this leaves a dangling pointer which is later erroneously dereferenced. In my case in this particular code segment (some comments added): --- tss.cpp --- void* tss::get() const { tss_slots* slots = get_slots(false); // slots can actually reference a dangling pointer when called // from the main thread during static object destruction, as // on_thread_exit might have been called previously for // this thread // if (!slots) return 0; // // The statement slots->size() causes an access violation // occasionally, when slots refers to a dangling pointer // if (m_slot >= slots->size()) return 0; return (*slots)[m_slot]; } -------------- My suggestion is to modify on_thread_exit (Win32 specific) to cleanup the native TSS slot after cleanup_slots is called: ------- tss.cpp ------- void __cdecl tss_thread_exit() { tss_slots* slots = get_slots(false); if (slots) { cleanup_slots(slots); ::TlsSetValue(tss_native_data_key, NULL); // New code } } -------------- The above fixes my problems (at least I've been able to run my testsuite 100 consecutive times without further crashes). Now if it would only be possible to get the above fix into the 1.34 release. All tests under libs/thread/test passes on my SMP+HT Win XP machine after the change. A related idea: on_thread_exit should perhaps instead be scheduled for invocation after the user defined static objects destructors have been run, but before the library provided static objects destructor have been run. IIRC it should be possible under MSVC, as suggested by the "pragma init_seg" documentation. If possible, it would probably make more sense ... or? / Johan
http://article.gmane.org/gmane.comp.lib.boost.user/26591
crawl-002
refinedweb
495
60.14
>> this article. Military might is a relic of the distant cold war. Today's wars are fought on the economic front. I suspect that the wasted billions the US spends on its military come from a few key reasons. First of all, the corporations that build and sell these weapons, planes, ships, etc. have a LOT of clout politically and each state that has a robust defense industry fights to keep this welfare system going. Secondly, conservatives cling to an old fashioned ideology that "might makes right" and will never vote to curtail military strength, even though the military itself is saying that it doesn't need the weapons. It is also tragic that having all this military strength at the ready enabled the debacle of the last 10 years in Iraq and Afghanistan. After all, if all these weapons and soldiers are just sitting around, let's deploy them and flex our muscle. What a waste of money, lives and resources. In an age of nuclear arsenals what is hegemony? ` Seems it comes down to a consortium of nations limiting traditional forms of military conflict between significant countries. ` Much as FDR anticipated, in an unexpected way, roughly matching the UN security council (Russia, China, the UK, France and the US; and then throw in India). ` Of course their are some exceptions, as there always is in life (Pakistan, N. Korea, Israel, maybe Iran in the future). ` And then their are regional conflicts, unconventional conflict, dirty wars and violence pursued by non-state actors at the local level that seem to persist regardless of big power arrangements. Interesting article and idea. However, the costs and complexity of maintaining today´s system seem to be gravely underestimated. Today´s system depends on the (near) absence of interstate conflicts. Sure, one or two wouldn´t gravely undermine today´s open world, but once the incentive system changes, there might be many more than that. There are so few interstate conflicts today because American power reduces the profitability or even feasibility of these to a large degree, acting as a large disincentive for war (you can´t win against us) and a large incentive for peace (play by our rules and get rich like us). The incentive for peace might be enough on its own, but just as likely it might not, as the outbreak of WWI despite the trade links and inter-dependencies at that time demonstrated. To maintain today´s system and with it the absence of most interstate conflicts, the US needs to be and needs to be seen as the most powerful nation, so that it can act as a disincentive even to the second most powerful nation. It is not the American power, but the failures of it in Vietnam and Iraq, serve powerful historic lessons for peace. "the failures of it in Vietnam and Iraq, serve powerful historic lessons for peace." So there is a Baath party run dictatorship operating in Iraq currently? ` Or an Al Qaeda/Taliban inspired emirate? ` So how do you define failure? ` Otherwise, seems there are people pressing for an ill-conceived intervention in Syria. There IS a real point to America's military dominance. Many countries have security concerns centered around their neighbors or armed groups. America is able to help in all kinds of military ways, provided, of course, that the country will help support the dollar. (Of course, such deal making can never be made public. E.g. the secret agreement with West Germany only came to light after the Cold War.) Protecting the dollar as the global reserve currency is essentially protecting America's power to print money. There is no need to prove that this is beneficial to America, whatever the cost of its military budget. This morning a currency trader and investor said on the Bloomberg that USD has not been a global reserve currency awhile. People like Euro because EU did not print irresponsibly like the USD. USD as a reserve currency is a function of America's financial and economic strength, not its military strength. If what you say is true, then why are there Eurozone members in NATO (France, Germany, Spain, etc)? Surely they would benefit more from a Euro(the currency)-dominated world than whatever protection they receive from a military alliance. Sorry, can't agree at all. Eurozone countries stay in NATO for whatever benefits they receive from the club. They can't change the fact that America will still dominate the world militarily if they leave NATO, so they might as well stay in. On the merits of US financial and economic strength, the dollar as the world reserve currency would have collapsed long ago. Only the use of US military and political domination (in combination with the US economic strength) has propped it up. For example, the only reason the US protects certain undemocratic Arab governments is that these countries refuse to sell oil for anything other than dollars, and then invest their profits in dollar assets. The "petro-dollar" system is one of the main supports for the dollar. The real reason why the US is "befriending" certain SE Asian countries at the risk of angering China is the same. In exchange for US protection, these countries will support the dollar as their economies grow. The huge US military-industrial complex is one of the major arms of what is essentially an imperial system which taxes the rest of the world by printing dollars. Otherwise, there is no justification of this expense in the US. "USD has not been a global reserve currency awhile." ` I guess it depends on the definition of "awhile" but suspect the currency trader is talking through his hat or playing some BS game if he believes that. ` The US had been the largest economy since something like 1913. Was twice as large as the next country as recently as 2006, even when including PPP guestimates (see page 26 of The Economist "Pocket World in Figures" 2009 edition. ` I think there is a state on composition of reserves by currency, and that the dollar makes up a clear majority of breakouts still (like 60 plus percent). ` And if you hadn't noticed, the EU has taken a hit on its credibility/prestige dealing with the fiscal crises dating from 2008. And its still does not operate as one cohesive market yet. ` And, there can be multiple reserve currencies. Right now the US dollar and Euro are held in reserve by many. The British Pound and Yen still make the list. The Swiss Franc seems to kick around a little, and you have more attention being given to the Yuan with arrangements being set up over the past year with countries like Japan. ` And who knows, maybe in the future countries of the world will team up to set up a SDR system as a substitute for any one nation currency (or what Keynes had envisioned)? "On the merits of US financial and economic strength, the dollar as the world reserve currency would have collapsed long ago." ` Depends on your definition of "long ago" but as recently as 2006 the US economy was twice the size of the next national economy, even when calculated in PPP. ` As in 13.164 trillion to China's 6.092 trillion USD. At market exchange rates, it was 13.164 trillion to Japan's 4.368 trillion USD, and to Germany's 2,897 trillion USD (China was in fourth place then). ` As mentioned in the other post, that s from page 26of the Economist's 2009 Edition of "Pocket World in Figures". ` Now you could argue that times are changing, and willing to hear you out. ` But what share of reserves around the world does the dollar now comprise? Like 60%? Higher maybe? ` And that comes down to domination? I doubt it. Heck I would put incumbency as a bigger reason first (just like the British Pound was the premier currency right up to WWII, long after losing outright economic leadership to the US and facing challenges from both Germany and a Soviet Union that renewed rapid industrialization). It's not just the size of the economy that underpins reserve currency status. (And in any case the figures are based on current exchange rates which are artificially in favor of reserve currencies.) The ultimate criteria is investor confidence (including that of central banks.) And that is based to a large extent, if not mainly, on how much paper asset is outstanding in that currency relative to the productivity of the home economy. Based on this criterion, the dollar was looking to collapse in the mid-60s. More and more trading partners converted their dollar reserves to gold and eventually the fixed exchange rate system between US and Europe collapsed. If it were not for the painful monetary tightening in the late Carter/early Reagan years, the dollar would have been in trouble. And that play was possible only because the US had enough fiscal room to dull the pain by having big budget deficits. I would agree that incumbency can be a big factor, assuming there is a lack of alternative candidates (a condition which seems to hold now.) But military power is probably a necessary part of the mix. In this age it's not acceptable for one country to dominate another overtly. Domination is expressed in the name of "help." Why else would the US insert itself politically and militarily into so many disputes around the world? Why else would it maintain such an expensive military system? It's not convincing to chalk it up to willingness to shoulder the lion's share of the world's security needs for altruism. "The ultimate criteria is investor confidence (including that of central banks.)" ` So are you saying business confidence didn't exist with respect to the dollar in the 1980s, 1990s or first half of the 2000s? ` Why the Plaza Accord? ` And compared to what? ` A currency that dates from 2002 that covers region versus a single nation state? And bore the brunt of wild speculation of countries exiting the currency (which I thought was a little over-the-top)? ` Another currency of a opaque communist dictatorship? ` Trying to hark back to the gold crisis of 1968 and the Nixon Shock of 1971 is interesting, but the dollar remained the reserve currency afterwards. Counterfactuals are interesting but the fact is, the US remained the reserve currency in the 1970s in inspite of everything. ` No offense, but messing around Korea, Vietnam, Panama, the Balkans, etc. didn't exactly hit areas that impacted flows and use of US currency. Now you could maybe contrive a quid pro quo of the US defenestrating Iraq from Kuwait (for the Saudis abiding with the arrangement of using dollars for energy trading). Maybe the same happened with invading Iraq in 2003? But that is all speculation... "So are you saying business confidence didn't exist with respect to the dollar in the 1980s, 1990s or first half of the 2000s?" I'm not sure what you are asking. What I meant was that although business confidence has always been an important support for reserve currency status, investor confidence also depends on other factors, such as military and political power. The proof that economic and financial position alone don't support the status was the dollar crisis starting in the mid-60s. Since then, there are even more paper dollars (plus related assets like bonds) in existence compared to US goods and services at current prices. (This condition never seems to get better with time.) So why would investors put their savings in such a currency? Certainly part of it is the reasoning that other investors haven't bailed yet (though this condition itself is a scary bubble.) A major part must be the demonstrated willingness of the US to support the dollar through geopolitical and military intervention. The old guns vs butter debate, with a twist of the end of history thrown in. No-one (well maybe some hard core neo-cons might) would dispute that there is fat to be cut from the US defence (another misnomer?) budget. But I think MS is missing a point with his simplistic historical segmentation. A primary purpose of a large military is not that it is profitable per se, but that the deterrant effect of it, coupled with astute deployments at times of crisis, can save the (far greater) money, uncertainty and horror of actual wars. Imagine if the US had not built up its forces during the Cold War and deployed them to fight for non communist forces across the world. It worked and the Soviet Union was bankrupted by the effort to keep up. That is a good example of the successful use of a large military. Similarly one could say that the lack of a credible military deterrant to either Nazi Germany or Imperial Japan was an underlying cause of WW2. Neither the US nor the UK were willing to spend on a deterrant military coupled with astute deployments. They were both far too willing to accept the paper promises of these regimes. It is when aggressive regimes believe they have a chance that things get nasty. For them the economic benefits of peace and trade hold less weight. Looking around the world today, the US has built up a solid network of alliances with most of the other big military spenders. The two obvious exceptions being China and Russia. I support US policies which maintain a credible military deterrant to these. USA was the biggest military leading to the WWII and it did not deter Nazi Germany and Fascist Japan’s aggression at all. During the Cold War USA and its allies never dare to confront USSR on the battlefield, and Kennedy needed to bribe Khrushchev in order to avoid Armageddon despite USA had largest military. It proves that US’ large military is a deterrent that benefiting peace is a myth. The USA was defeated time and again during the Cold War despite its large and advanced military. The only purpose of USA’s large military is to bomb and kill hapless nations into total destruction for its imperialist purposes. USSR’s failure is due to its ruling class detaching from people, it was not caused by the USA’s large military, stealing Russian people’s success to justify USA’s draconian militarism is a sign of Smiley Fascism. Very well spoken:) The USA never dared to risk an open conflict wit the USSR or any other communist country after their painful experiences in Korea, Vietnam an Cuba:) In regard of the USSR they would have faced their final Waterloo. And the results of American interventions the best examples now are the Iraqu and Afghanistan. No, the US was not the biggest military power before WWII. The navy was ok, but the army was pitifully small. It had less than 200,000 soldiers in 1939, plus reserves. To put that into perspective: Germany invaded Poland with 1,500,000 soldiers, while Poland defended with about 1 million. Germany was not impressed by American power at all, nor was Japan. In the run-up to WW2 the US did not militarise its economy as Germany and Japan did. It's foreign policies were isolationist. It did not deploy its navy to deter Japanese aggression. It left them as sitting ducks in Pearl Harbour. My point about the Cold War was that a very different US policy kept it (just) from going hot. Aggressive deployments around the world and the numerous smaller proxy wars made it clear to the Soviet Union that it would have to fight for any gain. Remember that this was a state idealogically committed to global communism. Both sides lost some won some in these proxy wars. S Korea was successfully defended, Vietnam was lost, Cuba was lost but prevented from becoming a Soviet missile base. A catastrophic WW3 was deterred. Without a significant military the US and its allies could not have done this. How come its the same ruling class today in Russia? Or maybe you've not been? "USA was the biggest military leading to the WWII" ` When do you date the beginning of WWII? 1937 with China, 1939 with Europe? ` As for defense expenditures the US was spending less than Japan, Germany, the USSR and UK in 1938. (page 296 of Paul Kennedy's "The Rise and Fall of the Great Powers") ` Not surprising since at least up to 1937 the US was spending only 1.5 percent or so of its national income on defense. ` There are other indices, like the fact that the US produced less aircraft than Germany, Japan, the UK or USSR in 1939 (page 324 of "Paul Kennedy's "The Rise and Fall of the Great Powers"). ` The US military was small in 1940, with something like 270,000 men (possibly smaller), before the first modern peace time draft was instituted, and the authorizations were given to expand the military drastically. I believe the air force was under the army at this time too. ` The Army didexceed 1 million men sometime in the summer of 1941, but that is still pretty small compared to the other great powers who were already mobilized and/or actively engaged in combat. ` Considering Khruschev lost his job in 1964, I don't think the Cuban missile crisis was such a win for the USSR. ` Otherwise, all sides pretty much said deterrence worked when it came to conflict/tensions between the USSR and US during the Cold War. ` Not sure S. Korea was a defeat - the country is still around in spite of invasions by N. Korea and China, and its economy began to boom well before the Cold War came to a close in the 1989/1991 timeframe. The US did not have the biggest military prior to WWII, the US military prior to the Cold War was always reduced to a minimum after a war. The US could have killed the USSR, but bringing on world-wide nuclear Armageddon was not attractive nor a reasonable choice. At most Korea was a draw. The NK, China & Russia aggression was halted and then pushed back. I don't agree that stopping aggression is a failure. These and the other statements are merely propaganda to soothe the losers. Losers you are because the US is still standing and the USSR is dead and unlamented [at least by neighbors like Lithuania, Poland, Czech Republic, and many more]. To offer some caution, we demilitarized to some degree following world war two and after the korean war. That influenced the evolution of the American military in Vietnam and the policy following the war. I believe there's some cost to be able to engage and execute a war quickly and competently. This level of dominance might be ridiculous but there still could be a case for heavy scaleable investment. One of the points omitted in the article is the commercial benefit obtained by China from the positive externality created by America's enforcement of freedom of navigation is quite similar to the benefit obtained by Imperial Germany by the Royal Navy's enforcement of freedom of navigation a century ago. The author has said in the article that the USA has no interest in enforcing freedom of navigation; it is the ASEAN nations are doing the hard work to maintain freedom of navigation. "The author has said in the article that the USA has no interest in enforcing freedom of navigation; " ` The US has stated time and again that enforcing freedom of navigation was a priority, over decades. With respect to its own parochial interests, think of war of 1812 and impressment, or dealing with the Barbary pirates at around the same time. ` Don't believe the author said "the USA has no interest in enforcing freedom of navigation" - seems the author doubts the rationale when it comes to justifying expenditures on a large military. ` I think further clarification is warranted. No one argues that America should disarm. Clearly we will always need a capable, dynamic and lethal military. But my argument is that there will be no more Trafalgars, no more Barbarossas, no more Pearl Harbors, and no more D Days. Large nation states, democratic or not, have far more to lose than they could ever gain by war in an era of globalization where human productivity has begun to eclipse raw materials as the principle source of prosperity. Will there be conflict? Always. I believe it will generally draw upon some combination of the following sources: 1 - Ethno/religious strife - Too many of the world's current borders were drawn centuries ago by remote colonial powers for purely colonial reasons. Many others were formed or hardened during the cold war. Whenever these borders do not accurately circumscribe populations who would prefer to self-aggregate around their principle value systems, there will be conflict. Pashtunistan/Balochistan, Kurdistan, Palestine, Latakia, Northern Nigeria, Kashmir, NE India, various Burmese provinces, and yes, Tibet do not today exist as states, though sizable populations within them may greatly desire it. The states which control all or parts of those populations are forced to engage in long term mollification or oppression, neither of which is desirable for either party. In other situations, ethnicities may have been separated by ideological conflicts which are now less resonant, and they may ultimately seek some level of reunification or anschluss. The separation of Koreans and Han across the DMZ and the Taiwan strait originates in Cold War orthodoxy and took the parties on highly divergent paths. As those ideological differences fade, their political trajectories will presumably become more convergent. The tension and conflict created by all these situations is real and persistent. But China is patient. She did not invade Hong Kong, and she will not invade Taiwan. Simply put, such a thing would not be good for business. The Kim regime remains a wild card for which our military must retain credible deterrence. But I still think an eventual grand bargain with China over Taiwan can pave the way for North Korean reform and reunification. The sunni/shia conflict playing out today from Beirut to Kabul is grave, but it remains regional and containable. Even in its worst manifestation, all out war between Iran/Iraq and S Arabia/Turkey, a fraction of American forces acting with NATO and the Sunnis could significantly degrade Shia military power within 30 days and restore a serviceable level of oil deliveries (since that's all we care about, right?). 2 - Border/resource skirmishes - eternal, but minor. Take the devil du jour, China. They will try to take as much oil and gas from the South and East China Sea as they can. So what? We would too. Again, we can cut a deal. We'll demilitarize the area as long as they drill jointly with Vietnam and the Philippines. 3 - Modernism versus traditionalism. This is how I see the problem with Islam and terrorism. Anyone shackling themselves to a medieval ideology is going to have conflicts with the modern world. Military might can play a temporary tactical role in containing this conflict, but it will never solve it. That will only come with culture change, and culture change will happen through young Muslims, particularly women. So I think the best way to deal with the clusterfuck known as Afghanistan is to largely decamp but offer refugee status to anyone there who belongs to a non-violent oppressed group. That would include most of her young women. We could educate the lot of them here in the states for less than we spent there on air conditioning. The takeaway message here is that Churchill or FDR would not recognize these conflicts as existential threats or real wars. We will need deterrence, peacekeeping, and the occasional quick hard strike, so we should pay for those capabilities. Beyond that, smart, practical statecraft will earn us the lasting prosperity and security that no destructive power can ever provide. Pax Americana will not be defeated by another empire. It will fall apart, suffering the death of a thousand cuts. And it will be the fault of the American people. It's not hard to picture. Just imagine a Pax Europa, led by the people of the EU. Would they be willing to do what it took to maintain a world system of peace and trade? Not likely. A lot of Americans wish America was more like Europe. Hopefully not in my lifetime. "Large nation states, democratic or not, have far more to lose than they could ever gain by war in an era of globalization" In fairness, this is exactly what smart people were saying in 1913 or so. War isn't exactly an enterprise driven by rational cost-benefit analysis. It may be a lose-lose proposition, but sometimes we (whoever "we" are) just want our opponents to *really* lose. We = unapologetic war criminal Japanese + smiley fascist USA My opinion is that up until the end of the colonial era, prosperity and power could be accumulated by conquering and colonizing territories and populations, essentially to appropriate their resources and labor. France and England fought endlessly over banana republics basically to steal the bananas. But now, we have crossed an inflection point into a world where greater wealth is to be found in unleashing the productivity and purchasing power of the former banana farmers. Why go to the trouble and expense of conquering and oppressing them for some squishy fruit when you can make more selling them smartphones, dataplans, and dodgy financial products? In this world, a massive domineering military is doubly useless. It can't be profitably employed in conquest, and since no one else is out to conquer much either, there isn't that much policing to do. Note: I will for now draw the line at the Kuwait War. That was (as far as we can tell) the last classic war of international conquest, and was initiated by a regime foolishly relying on recently expired cold war pretenses. And that was the last war where the full spectrum of American military might could be justifiably deployed. Enjoy the memory. It will not happen again. Sadly, border skirmishes, civil wars, ethnic strife, and the occasional Third World Napoleons will go on for many years (until colonial era borders hardened during the cold war are finally redrawn to reflect actual ethnic preferences), but I do not see these rising to the level of international war of the type and scale we saw in the 20th century. And while nations will continue to compete over resources like oil, they will do so not with gunboats but by offering to build pipelines and ports to secure favorable supply contracts. Basically, the modern message is this: "You want my goods? Build me an airport, don't send me your ridiculous aircraft carrier." Consequently, junius brutus is entirely wrong with his comment; the reason why states are no longer seeking to conquer each other is precisely because conquest by anyone of anyone is no longer a viable business model. Hence, America's excessive expenditures on hard power are an exceedingly poor investment, and the opportunity costs represent a net reduction in our overall long term competitiveness. We all want a stronger America. We'll get it when we spend more on infrastructure meaningful in the 21st century, and less on the ancient infrastructure of force. Don't be so sure about that inflection point. The Roman empire was far more productive and wealthy before the barbarians broke it apart, but they did it anyway, because it suited their immediate interests. A free, liberal democracy makes the people more money, but not necessarily their leaders. A weakened Pax Americana could fall apart, to be ruled by a varied set of despots, with some pockets of liberalism holding out like Byzantium. History can and has moved backwards before. Progress is not inevitable. That's why I hate that Europe is so militarily weak. There is no stopgap if America loses its edge. “History can and has moved backwards before. Progress is not inevitable.” You are right, USA has been bombing and killing all over the world, and creating confrontation here and there on the fabricated accusations in order to prevent progress of humanity moves forward. Unapologetic war criminal Japan under the leadership of history denier and atrocity denier Abe backed up by its smiley fascist master is inciting war again so that they can benefit immediately by burning, killing, looting and sacking. Rome analogies don't work. There are big differences in communications, technology, culture and institutions. ` Aside from no telegraph, phone, internet or iPad, there was also rarely an established mechanism for succession. ` So what you got were frequent outbreaks of all against all civil war amongst generals (or at least between two large landlords). ` Throw in hybrid warfare and the fact that the Romans faced a serious powers to the east (first the Parthians with defensive war, then the more aggressive Sassanids), you got a regional power that could easily find itself in trouble. ` Particularly sucks when dependent on agricultural - disruption or loss of land and people subtracts from power of the empire and contributes to a downward spiral in the face of enemies. Frankly I think it's naive to view America's worldwide and complete military dominance in a vacuum. Wars would very much happen more frequently and cost us economically if the US was not so dominant, and depending on the participants could very much affect our economy. Just because it's difficult to measure when wars did not happen because of the US strength, doesn't mean it's not an effective deterrent. Tin-pot dictators the world over are deathly afraid of the wrong sort of attention from the US. Perhaps M.S., on this issue and many others, needs to pick up a modern history book with some military analysis in it, because even with the immediate threat of massive retaliation from the US, countries *still* invade each other for "fun and profit" purposes. There's many good examples, but we'll just stick with Iraq invading Kuwait. Or perhaps M.S. thinks a genocidal dictator controlling even more of the world's oil supply is good for the global (read, American) economy? Perhaps M.S. would've let Saddam continue on to invade Saudi, or try again with Iran? Would M.S. seriously allow the new Empire of Iraq to threaten us with economic ruin by turning off the oil on a whim? Why even let them get close? "Well, *they* should've formed a regional security pact now that *our* economy is ruined" will sound pretty silly. Does M.S. believe Taiwan would still exist today if not for American naval dominance of the Pacific, despite decades of China openly threatening and preparing for invasion? Who would guarantee freedom of navigation through the Straight of Hormuz if the Middle East exploded into regional war? Do you seriously think they wouldn't attack neutral shipping? May seem like other people's problems at the moment, but these sorts of things tend to come around and bite us in the ass down the line. We're just no longer stupid enough to let anybody get away with it anymore. Seems a few of us still are though. And scale back our navy just as China is rising? Insanity. Do you think they are massively building up and modernizing their military and building a blue water navy just so their citizens can take a tour on a Chinese aircraft carrier? They have staked out claims and are clearly signalling their intent to militarily control the south china sea and other areas, against international law and norms, to the active detriment of other nations, some of which depend on our active support to avoid immediate invasion. They're still in the process of integrating their last conquest, Tibet. China is not a liberal democracy, it's a communist dictatorship, and we're right to be skeptical of their intent and prepared for war. They aren't building supersonic area-denial anti-ship missiles to fight Thailand. They are preparing for a possible conflict with the US. Just because it seems silly today that China would start a war doesn't mean the same will be true in twenty years. And the excuse of "well, it seemed silly at the time" after they've gained de-facto control of SE Asia and we've scrapped too many ships to stop them will sound so ignorant it'll make you blush as you say it. Like or not, being the biggest and strongest nation on the planet carries moral and ethical obligations that we owe to our fellow man. Leaving entire regions to just fend for themselves invites and encourages aggressive expansionists, who will eventually threaten us economically. It's a win-win to stay a step ahead and have overwhelming force. The American system is so dominant that people think it will continue forever no matter the hegemon. Not so. The American system, emphasizing freeish trade, free and fair commerce, and promoting liberal democracy, is new to most of the world, and not older than 2 centuries anywhere. Even the British did not fully buy into it in their days of colonial expansion. The alternative is extractive economic schemes where elites milk the masses while keeping under repressive political control. The American system is more productive overall, but the extractive system makes the rulers richer (particularly relative to their subjects). Given a chance, despots will upend the American system. Given sufficient weakness, the American system will shrink, and despots will take over.? As American strength decreases, more and more despots with extractive economies become established. How many is too many? Also part of the problem is that American dominance has gone on for so long. Which means that, for a substantial part of the population, there is no memory of the world being any other way. And that, in turn, means that only by applying imagination and knowledge of history (neither exceptionally widespread) can they see how the world might be different. What you said is so true! I second Ash. That's probably a big part of the problem, When we try to imagine a world without complete American military supremacy, nobody younger than 30 has any precedent to inform the image. Natural, then, that the boogie man, 7 of Borg and Zombie Bin Ladin are what our imagination supplies. And nobody under about 85 has any memory at all of a world where America wasn't clearly the biggest power on the planet. Even when Americans agonized over how powerful the USSR was getting, there wasn't any real doubt that we were more powerful -- it was just a matter whether they were getting uncomfortably close. You complimented me. I only knew enough about American history to recognize what makes sense and what doesn't when someone says something about it. You amplify the point. :) I was awfully young to remember all that much about the Cuban Missile crisis but I remember weird yellow and black posters on the wall which meant you had to get under your desk in case of a nuclear bomb. I think that was before or possibly after I remember my parents sitting in the living room crying because of a scary funeral on a black and white television console. Ash! Dear woman! Please tell us where you lived before you moved to America! Enlighten us further on what you haven't learned yet about the American Experience in medieval 20th-century history! Or post-modern 21st-century. If I hear one more time how much ashbird doesn't know about anything I'll presume her to be an idiot. Why do you assume that Ash didn't grow up here? than an American born-and-educated to know something about American history. (And to have their facts straight.) %$&@#! editor. That second sentence is supposed to read: "On my experience, someone educated abroad is more likely than an American born-and-educated to know something about American history." "? " The answer is that it probably depends on if America starts thinking again in terms of spheres of influence. The big problem with post-Cold War thinking is that it generally assumes that the US should be able to fight and win any type of conflict,any time, anywhere in the world. The US in its current strategic thinking isn't really comfortable with the idea of other powers having a large say in the running of other parts of the globe with little US input (we can condemn actions, but we don't really oppose something as long as its not too destabilizing). The problem is that it has blurred historic red lines, whether that was the Western Hemisphere a la the Monroe Doctrine. jouris, . Thanks for this sentence: "On my experience, someone educated abroad is more likely than an American born-and-educated to know something about American history." . ** On occasions, they may even be less rude and vulgar and petty. :) . BTW, I only claim I know something when I am able to cite facts and/or precedents and/or authority to back it up (with the exception of cases of strict personal opinions as in I like color blue versus color grey), whether this position is a popular one in a common public forum or not. My culture teaches me to be humble. I assume other people to know more than I do, until and/or unless absolutely proven otherwise (This "proven otherwise" appears to be the instant case with a blogger who unfailing chases after me to shower me with fault-finding, fight-picking ad hominen attention for reasons no one can fathom except to say it is weird). Humility is too fine a point for some raised differently, either to practice or appreciate. I cannot say being a co-American, I would brag to the world about this feature as "standard American". The good news is I have discovered it isn't, either on this blog or the world outside this blog. . Again, thank you for your sentence. . Meatime, Kochevnik and Ohio have a great exchange on substantive ideas which I'd hate to miss. I also would like to thank both of them for not being distracted. Thank you for your clarifcation to the %$&@#! editor. Time to get some return on our investment and annex Canada? You mean Canada isn't already part of the United States?!?!? Seriously, a big part of the population (of the US) is already unclear on that. Military hegemony is the opposite of virginity. You commit to not losing it long before you know why. You should put together a little book of your best maxims. I will totally buy it. (^_^) A century from now, Pascover may rank with Confucius. . Especially if we can figure out a way to print his maxims on little bits of paper baked inside cookies. ;-) Why thank you but I think that little book is available online as "Doug Pascover's Comments" minus 99.9% of them. The price is in curating and it's pretty high. Yeah, this one is particularly good. M.S writes good posts; this is not one of them, in my opinion. junius brutus is entirely right with his comment; the reason why states are no longer seeking to conquer each other is precisely because of the presence of the global policeman, i.e. America. Everyone is reasonably sure that a significant violation of a state of even middling importance will trigger military action by the Americans, and no one thinks they can beat the US in open field. The phrase ‘Pax Americana’ is not there just for superficial reference to the Roman times. But don’t take my word for it, just look at history. In the 19th century, when there were more great powers than you can count in your hand and no one really held a commanding military superiority over the others, wars were a frequent occurrence and a natural way of settling disputes amongst the states. It culminated in two World Wars. After 1945, when the number of great powers dwindled down to two (US and USSR), peace and security was enhanced but was nevertheless ruptured by bouts of proxy wars. It was in good timing too, because humanity just acquired nuclear weapons and outright war amongst the major powers would have been catastrophic. Ever since 1991, we now live in a world with only one great power. We still do not have complete peace, because the Iraqi War showed that it is still vulnerable to a deranged warcamp taking control of the United States, the only true autonomous actor in the world. But overall, the last two decades have been remarkably tranquil and prosperous. So expect this era to end when America takes up the advice offered by M.S and starts going for a mere military parity with China and God knows who else. Two chief casualties will be trade and the sovereignty of smaller nations. Maybe even a nuclear war will flare up somewhere. But then, at the end of the day, there really isn’t anything that the US can do except accept M.S’s suggestion; it will be forced to. No great power is eternal, and there is no chance that China won’t surpass the US in its economic output. America should probably work out some sort of joint global-policing arrangement with China, and make sure it behaves. 2 things. First, just because there happens to be fewer wars does not make the world more peaceful. There is no simple dichotomy between "war" and "peace," but a continuum of conflict which is termed "war" towards one end and "peace" to another. The fact that America is powerful enough to intimidate many countries from engaging in outright aggression does not mean that America's strength has brought "peace." If anything, the power of the hegemon has likely driven violent conflict into a newer form, some of which America itself experienced in September of 2001. The last two decades, Im afraid, have not been really more peaceful (though arguably more prosperous). Second, Pax Americana is actually, in my mind, rather poorly policed if the premise of the "peace" is constant and overpowering American intervention. Massive, traumatic conflicts like the Rwandan genocide and the subsequent bloodletting in the eastern Congo, "dirty wars" in Myanmar/Burma, and the latest conflagration in Syria are all remarkably un-policed. It seems to me that far from being a "policeman," America is more of a vigilante who acts selectively, and not necessarily from a sacrosanct list of rules. SO, instead of guaranteeing stability, Pax Americana seeks to maintain an "acceptable" level of stability in select areas close to the hegemon's heart while tolerating absurd levels of violence in other areas, provided there is no "spillover." American hegemony then becomes a system of great disparaty in wealth, safety, power, and influence between different areas of the globe, thereby spawning anger and violent resistance to that order. The rise of another potential hegemon, China, may ease this vicious cycle by forcing America to devote real resources and attention to previously neglected places like Africa (finally), or by offering a real political and economic partner independent of American influence against which the US will have to compete. Alternatively, America can spend itself into oblivion like its Soviet opponents before it against a country with a bigger GDP. I think it is time someone in the world exerted some positive influence upon America. I recommended your post not because I like it, but because I don't. You see through the glass darkly, but the outlines are too terrible and potentially real to be ignored. Leapin' lizards! From Monkey Cage all the way to Christian Science Monitor! How about a detour through Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. [Liz Cheney] said on the video. . It's like M.S. has Dr. Strangelove's uncontrollably- spastic prosthetic hand. How else can "Military Primacy Doesn't Pay (Nearly As Much As You Think)" explode into a war on Cheney's 2014 Wyoming Senate bid in a one-sentence conclusion? "But why on earth would China do anything to threaten the security of the sea lanes in South-East Asia? America is China's largest trading partner; the country's entire economic model is based on exporting things to America and Europe." M.S., you need to read up more on what, specifically, the Chinese are claiming. China is not merely trying to claim an exclusive economic zone, as many others in the region are. They are separately advancing the claim - unsupported by any previous international law, and also not supported by a majority of coastal countries - that they have a legal right to restrict all foreign military activity in their EEZ. This is in part what has caused the various US Navy run-ins with Chinese forces in 2001, 2002, and 2009. This has obvious strategic implications if their claims in the nine-dashed line become firm. Their legal innovation on this issue is also worthwhile to note, since it's also worthwhile to ask how much the Chinese will be willing to respect the current rules-based international order. Yes, wags might joke about international law but there is enough respect out there for international institutions that countries still expect others to behave within their bounds. Broadly speaking, China is using superior military and economic power to openly defy clearly written treaty law and to assert its "right" to unilaterally re-open settled disputes and change their terms without the consent of other parties. Hell yes this is a security problem. The US defense establishment is bloated and overfed, but let's not let that distract us from actual strategic threats. This paper from the Congressional Research Service is worth reading in its entirety: I agree entirely that the Chinese cow-tongue map violates international law of the sea. The question is, at what point does that start to make any difference to Americans? I completely understand why *as a Vietnamese person* I would be furious at Chinese efforts to claim fishing rights and mineral rights throughout the East Sea. But what American interest is involved here? Will Chinese start to harass non-Chinese carriers of international freight through those waters for rent-seeking purposes? Is that the claim? Then make the claim. Would China really risk the WTO consequences? How much would this cost importers? Why won't it just shift demand to countries that can avoid those waters (Philippines, Mexico) and be as self-defeating as any other tariff? This basically seems like a really good reason for Vietnam, the Philippines, Indonesia, Thailand and Malaysia to enter into a naval alliance against China; I'm not sure what clear interest this is of America's. My emphasis was on the legal argument that they have a right to restrict all military activity in areas that aren't their territorial waters. So what strategic interests are there for the US? American security agreements with Japan and the Philippines, for one. For another, reinforcing and maintaining outposts in Southeast Asia (try getting to Singapore outside of the nine-dashed line without going through someone's territorial waters. You'd more or less have to go via Australia.) Still more, the ability to transit any military units in Chinese EEZ waters during any future military conflict, where they may not particularly like us but won't get involved with force. It may also be used to promote economic nationalism because any US-flagged ship can be argued to be a military vessel because of status as a reserve ship in the merchant marine. The U.S. would like to maintain freedom of navigation in accordance with customary international law. In order to counter Chinese claims that the South China Sea is "Chinese," it's useful to occassionally send naval ships through the area without receiving prior approval from Beijing. It's not just the South China Sea, though, as the U.S. conducts similar freedom of navigation operations in the Middle East. So the U.S. interest here is pressuring China to respect international maritime law as it becomes an ever growing power (which would be easier to do if the U.S. Senate would ratify the UN Convention on the Law of the Sea...). Another rationale is to keep countries in the region calmer. Asia is already outspending the rest of world in the defense sector as China modernizes its military and everyone else reacts by doing the same. However, no country in the region has the confidence to deter potential Chinese military action (including harassment of their civilian ships, like several incidents in 2011). So the U.S. presence, while odious to Beijing, is nonetheless helping to keep nerves calm in capitals from Tokyo to Hanoi. ...at what point does that [China's violation of the international law of the sea] start to make any difference to Americans? . In itself, and in that location, there isn't much impact on America. But the problem with ignoring the flouting of the law in one place is that it makes the law weaker elsewhere -- in places where there is a serious impact on American interests. You can't have an effective regime of the rule of law when you only apply it to the places that you care about. (Think about what would happen if you wrote a law that said theft is only illegal if it occurs outside the city limits of cities with over 500,000 population.) China is actually stepping back from that particular legal stance, as they realized benefits of unfettered naval activities around Japan. The claim that Chinese South China Sea claims impact America because it would prevent American access to that sea is a false argument. While China would like that outcome to occur, no thinking person believes that it could be achieved by political claims. The real security problem in the SOuth China Sea is China's perception that piquant states like Vietnam and the Philippines could, if they possessed sovereign control of the same areas China was claiming, exploit the energy resources there at China's expense and possibly then prevent Chinese shipping from being able to transit through them. Hence China's insistence on it having sovereign control. Such a serious threat, which amounts to a potential blockade of China's shipping and loss of access to the Middle East energy corridor has never been addressed or even acknowledged by V. and P., or by America, and serves to undermine the so called "rules-based-order" by demonstrating to China, a major stakeholder in the world, the inability of that order to provide security. Why should China or anyone else for that matter follow these so called "rules" if they can be bent to another's benefit provided they are best friends with the hegemon? Why does this "rules-based-order" only take seriously the security of America, followed by its chosen lackeys, without so much as a pretense at acknowledging the security interests of other states? I maintain that there is no flouting of international law by China, since what China's activities amount to is a maintaining of its sovereign claim, which cannot be determined by "international law." The law is weak not because China is undermining it, but because the "law" is being used as an amorphous term used to describe any Chinese behaviour which Vietnam doesnt like. Without a clear settlement of territorial claims agreeable to all sides, there is no party to the dispute which can claim legal authority with which accusations of lawbreaking in the SCS can be made. Your argument makes precisely zero sense. Do you even know about what an EEZ is? Sovereign claims only extend 12 miles beyond a recognized coastline. Chinese territorial disputes are and have been about EEZ rights, which don't grant sovereign control over anything but mineral and fishery rights (not that China cares, given its desire to regulate military activity). China's perception of the South China Sea is based on fear of Vietnam and the Philippines exercising sovereign control? Your argument depends both on Vietnam and the Philippines arguing for territorial water rights in their EEZs (they aren't) and China acquiescing by not doing so in return (it isn't and won't). In fact, I'd wager that you don't have any idea what the dispute is about - blockades? Are you kidding? Fishing rights and oil exploration, maybe, but a blockade in international waters, even within an EEZ, is and always has been an act of war. China's control over the South China Sea wouldn't change anybody deciding to go to war with them, it would only change the legality of peacetime operations. Similarly, your comments about "cutting off access" make no sense unless you assume that "security" for China means territorial water rights for China and everyone else has to play along as a lesser partner. What kind of security do you want? International waters, even within an EEZ, have the right of free navigation. If someone interdicts shipping in international waters it's an act of piracy, if not war - yes, that means for the US, too. In fact, international law does none of the things you say it does and your interpretation of it only serves to show your lack of knowledge and anti-American conspiratorial mindedness. The current international law order is based on decades of jurisprudence and mutual agreement starting in the 1950s between the US and the USSR, and continued on by all relevant parties after the fall. It reflects a state of balance irrelevant to any one dominant power and enforceable by the parties involved. On reflection, I think I'd cede you the point that this is a genuine US interest, but I think it's mostly a hypothetical one. I see a huge mismatch between the threat, which is basically a potential inconvenience to the US economy several steps down the road if things develop badly, and the size of the resources the US devotes to it. If the US slashed its fleet in half to 5 carrier groups, what would it save, $60bn a year? Could the threat over the course of 5 years possibly amount to $300bn plus interest? I can't see it. Add to this that the naval conflict with China is the *only* conflict in the world that poses a reasonable prospect of the kind of peer-to-peer combat the US Navy is really designed to fight. More or less my sentiments exactly, M.S., though I'd add that the Russian fleets might still hold that kind of potential, at least until the oil runs out. I think this is on-point. The UK, when it was the global hegemon of the 19th century, had a "two-power standard", ie that it be able to match the combined fleets of the next two largest navies. If you looked at the issue in terms of military expenditures today, the US could slash its defense spending by 25% and still spend more than *two times* the next two biggest military budgets (Russia and China). So I'd agree: if anything it's less that the US (and the US-centered global economy) doesn't benefit from US military hegemony, but that the US is overspending to get that hegemony, and consequently not getting a good bang for its buck (no pun intended). It seems you have wasted time on trigger happy Anglo-Saxon, your effort to save the USA from walking the path of USSR is admirable, and your way of exiting a pointless bickering is graceful. Looking back to history, Anglo-Saxon is short circuited in the brain to bombing and killing, they do not have the capacity to appreciate your kind of cool approach to issues. I should say Chinese should never let the Anglo-Saxon get their hands on gun-powder in the first place. " I should say Chinese should never let the Anglo-Saxon get their hands on gun-powder in the first place" ` Who said the Chinese did? ` I heard that some believe the Mongols gave Europeans the first whiff of gun powder. Otherwise, the Turks demonstrated the value of artillery when it took Constantinople. The principal reasons spending on the the military remains sacrosanct are 1) the vestigial jingoism of significant segments of the American public ("thank you for your service" is the new "Dominus vobiscum"), 2) defense spending is a reliable source of jobs in many Congressional districts (but wait a minute, the government can't create jobs, can it?), and 3) defense contractors make large campaign donations. But actually using this military is another matter. Outside of the Beltway, people appear to be pretty tired of foreign military adventures - at least for now. The main counter-argument I would consider runs: there is actually no domestic demand for defence workers (military or civilian) to do any other jobs. DoD is a full employment programme; aircraft carriers play the role in our economy that pyramid-building did in the Egyptian one. And for political-philosophical reasons, the US lacks willingness to put people to work on useful public projects like high-speed rail, a smart electric grid, teaching high-school music and art, etc. So shrinking the military budget will only result in more unemployment. But a first-best approach would be to convince people to fund more education spending etc rather than resign oneself to useless military spending. MS -- fair warning, I'm going to repeatedly plagiarize your comparison of aircraft carriers to pyramids. Between the military as a driver of technological development and an employer that instills broadly valuable work habits and skills, I think it's an uphill battle to argue that it offers zero, let alone insignificant, non-defense value. For example, DoD's research on powering networks of far flung operating bases with renewable energy will likely shape the distributed generation practices of public utilities providers, aka smart grid. I would actually be very surprised if by this point DARPA doesn't tilt its funding to prioritize research that has knock-on value advancing civic technology when possible. You will get no argument from me on what our spending priorities should be, but just try to sell that to Congress. With respect to the pyramids, I like to compare the cost of an aircraft carrier group to the cost of building Hagia Sophia (also a very sophisticated engineering achievement). Converting the cost of the former to grams of gold, I think the costs work out to be about the same. The difference being that one has been around for 1500 years, and the other may last at most 50. Et cum F-35 tuo. Mariable dictu. On the other hand, we could choose to fund tchnological development outside the military context. We do some of this already (NIH, etc.) -- but those efforts are constantly under harsh attack by the same people who want to spend vast sums on the military. Technological development outside a military context?! Maybe if we can win wars with unicorns and rainbows! Could it be that, unlike high-speed rail (which serves a particular region) or education (which serves a particular class, namely the ones that don't pay to go to private schools), Members of Congress could argue simultaneously and strenuously that defense spending is good for U.S. citizens in their districts/states (via job creation) AND good for all U.S. citizens? Whether it's true or not is another matter, and with budgetary strains, it's becoming harder for Members to argue for defense spending that is too oriented towards a particular district or region (e.g., the East Coast Ballistic Missile Shield so beloved by our New England Senators). Well, you don't have to put your technological development funds into "creation science" and the like. M.S. I would actually take issue with your assertion that DoD provides valuable employment to people who would otherwise be unemployable, at least without further education. The military-industrial complex is swollen with highly educated and highly skilled workers, of both the white collar (engineers, accountants, supply chain managers, etc) and blue collar (journeymen welders, electricians, etc) varieties. Almost every business in America (including my own) is aching for skilled labor at the moment, while in the meantime the DoD and its civilian contractors/suppliers are crowding out invaluable human capital from the private marketplace. The sad truth of the matter though is that if we demobilize military units, close bases, and defund shipyards and aircraft assembly plants, it will take time for people who work there to find new jobs in the civilian sector, which is to say for labor to be reallocated by the market, and in the mean time no congressman or governor wants hordes of angry, recently unemployed workers in his district or state complaining about why they just lost their jobs, even if a private sector employer is already making plans to pick up the slack at the same time. I think you and M.S. are actually saying the same thing. "I don't think most Americans are interested in a second helping of that stuff." Think again. We Americans eat that stuff up like reinstated Twinkies! U-S-A! U-S-A! U-S-A! Need I go on? We Americans eat that stuff up like reinstated Twinkies! What 9.5% inflation? "But the fans aren't mistaken - for much of 2012, and in earlier years, a box of 10 Twinkies weighed 15 ounces. The boxes on store shelves now weigh 13.58 ounces." NPWFTL Regards Well, if the market will bear it, why not? Or, on a related note, the consumer must ask himself, "What would I do for a Klondike Bar (or twinkie)?" Pay the same price for 90% product? Pony up another 10% for the same product? Clearly the marketing material suggests these are mere inconveniences to the multitude of well-neigh Herculian labors one might perform to obtain such a novelty. I'm not worried about whether the market can or will bear it. What we constantly see/hear/read about here is that there is no inflation. Sure, prices are stable but the size of the product is falling. Back in the 1970's, the size of the product was taken into account. Pony up another 10% for the same product? No, just slap it onto the credit card. Back in the 1970's, credit cards were not given out to just about anybody. NPWFTL Regards Um, I'm trying to remember when the overwhelming hegemon won a war. Grenada, hell of a victory! WW2, if you think it was a great war. The alternative use for the resources proposed by MS and Drezner, that of the government building up the domestic economy doesn't work either, tho no doubt, they will label me a bigot for disagreeing with them on this issue. M.S. didn't specify an alternative use. The ideal alternative is to perhaps return the money to the taxpayer or pay down the national debt. In all fairness, the US and coalition executed a very successful war against Iraq in 1990-91. GHW Bush also stayed true to his promise: "No more Viet-nams." Regrettably, wisdom is not passed automatically from one generation to the next. Concurrence, Ah Beng-- in a somewhat more perfect world, the US would currently be having heated debates on how best to spend the tax surplus; instead, we fund a bloated MIC and will be paying medical bills (which we should) for veterans wounded in wars of questionable origin. "I just don't see the rationale for preserving a military that can defeat any other militaries anywhere in the world twice at the same time in an age when states are no longer seeking to conquer other states for fun and profit, as they were during the struggle of liberal democracies against totalitarianism. " It seems fairly obvious to me that states 'are no longer seeking to conquer' due to the *existence* of that military. (Which is not at all to say I agree with the size of the military budget, I'm just noting that failing to address the possible causal relationship leaves a hole the size of an aircraft carrier in your argument) No not really; Iraq had no problem invading Kuwait at the height of American post-Cold-War euphoria. Russia had no problem invading Georgia in 2008. The US military is not so much deterring other states as intimidating them. The problem with that is, some states will be cowed while others become defiantly combative. You think the Pakistanis or Chinese are so scared of American invasions that they will roll over for Uncle Sam? Although arguably one of the three strongest countries in the world, China is quite fragile. A major war would threaten the domestic stability [not as strong as presented], this is especially true if the war stopped the export machine. True, the many newly unemployed could be employed in the military, but that would also raise questions. The US has no rebellious provinces or country folk, and does not [despite numerous spurious claims] have a dictatorship. If attacked or engaged in a war that seems vital to US interests the people tend to unite behind the government. The proof is that the initial support for spurious war in Iraq became increasingly unpopular as the lies were exposed [not, however among NeoCons].
http://www.economist.com/comment/2092178
CC-MAIN-2014-52
refinedweb
11,017
61.06
Ive been at it for quite a while...possibly because my understand of C++ is not very good or maybe i just suck at it. Well here it goes, Problem: Write the following function, including a main() function, in index.cpp: vector<int> noDupeIndex(vector<int> indexlist) which function returns the integers in indexlist, with duplicate indices removed. For example, if the vector indexlist is: 1 4 9 16 9 7 4 9 11, then the function noDupeIndex would return the vector: 1 4 9 16 7 11. Solution: This is what i have so far... #include <iostream> #include <vector> using namespace std; vector<int> noDupeIndex(vector<int> indexlist) { vector<int> b(indexlist.size()); //Defines the size of the new vector int i = 0; //Variable used to count the entries in vector indexlist int j = 0; //Variable used to count the entries in vector B //While i is less than the total number of entries in vector indexlist the following loop will assign entry n from vector indexlist to entry n of //vector B. Each time it does this it addes 1 to the entry counters for the respective vectors indexlist and B. for (int i = 0; i < indexlist.size(); i++) { b[j] = indexlist[i]; j++; } return b; } int main() { //This variable is a counting variable used later to manage a loop. int x; //Defining vector size and entries for indexlist vector<int> indexlist(9); indexlist[0] = 1; indexlist[1] = 4; indexlist[2] = 9; indexlist[3] = 16; indexlist[4] = 9; indexlist[5] = 7; indexlist[6] = 4; indexlist[7] = 9; indexlist[8] = 11; vector<int> b = noDupeIndex(indexlist); cout << "Result of no duplicate index is: "; // While the counting variable is less than the total length of vector B, the following loop displays that corresponding entry. //Everytime the loop is run, a integer of 1 is added to the counter. Thus each proceeding time, a new corresponding entry is displayed //until there are no more entries left. for (x = 0; x < b.size(); x++) cout << b[x] << " "; cout << "\n"; return 0; }
http://www.dreamincode.net/forums/topic/54495-removing-duplicate-indices-within-a-vector-then-printing-the-new-vect/
CC-MAIN-2017-43
refinedweb
339
68.7
Provides information about a file. Standard C Library (libc.a) #include <sys/stat.h> int stat (Path, Buffer) const char *Path; struct stat *Buffer; int lstat (Path, Buffer) const char *Path; struct stat *Buffer; int fstat (FileDescriptor, Buffer) int FileDescriptor; struct stat *Buffer; int statx (Path, Buffer, Length, Command) char *Path; struct stat *Buffer; int Length; int Command; int fstatx (FileDescriptor, Buffer, Length, Command) int FileDescriptor; struct stat *Buffer; int Length; int Command; #include <sys/fullstat.h> int fullstat (Path, Command, Buffer) struct fullstat *Buffer; char *Path; int Command; int ffullstat (FileDescriptor, Command, Buffer) struct fullstat *Buffer; int FileDescriptor; int Command; Note: The stat64, lstat64, and fstat64 subroutines apply to Version 4.2 and later releases. int stat64 (Path, Buffer) const char *Path; struct stat64 *Buffer; int lstat64 (Path, Buffer) const char *Path; struct stat64 *Buffer; int fstat64 (FileDescriptor, Buffer) int FileDescriptor; struct stat64 *Buffer; Note: The stat64, lstat64, and fstat64 subroutines apply to Version 4.2 and later releases. The stat subroutine obtains information about the file named by the Path parameter. Read, write, or execute permission for the named file is not required, but all directories listed in the path leading to the file must be searchable. The file information, which is a subset of the stat structure, is written to the area specified by the Buffer parameter. The lstat subroutine obtains information about a file that is a symbolic link. The lstat subroutine returns information about the link, while the stat subroutine returns information about the file referenced by the link. The fstat subroutine obtains information about the open file referenced by the FileDescriptor parameter. The fstatx subroutine obtains information about the open file referenced by the FileDescriptor parameter, as in the fstat subroutine. The st_mode, st_dev, st_uid, st_gid, st_atime, st_ctime, and st_mtime fields of the stat structure have meaningful values for all file types. The statx, stat, lstat, fstatx, fstat, fullstat, or ffullstat subroutine sets the st_nlink field to a value equal to the number of links to the file. The statx subroutine obtains a greater set of file information than the stat subroutine. The Path parameter is processed differently, depending on the contents of the Command parameter. The Command parameter provides the ability to collect information about symbolic links (as with the lstat subroutine) as well as information about mount points and hidden directories. The statx subroutine returns the amount of information specified by the Length parameter. The fullstat and ffullstat subroutines are interfaces maintained for backward compatibility. With the exception of some field names, the fullstat structure is identical to the stat structure. The stat64, lstat64, and fstat64 subroutines are similar to the stat, lstat, fstat subroutines except that they return file information in a stat64 structure instead of a stat structure. The information is identical except that the st_size field is defined to be a 64-bit size. This allows stat64, lstat64, and fstat64 to return file sizes which are greater than OFF_MAX (2 gigbytes minus 1). In the large file enabled programming environment, stat is redefined to be stat64, lstat is redefined to be lstat64 and fstat is redefined to be fstat64. Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and the errno global variable is set to indicate the error. The stat, lstat, statx, and fullstat subroutines are unsuccessful if one or more of the following are true: The stat, lstat, statx, and fullstat subroutines can be unsuccessful for other reasons. See "Base Operating System Error Codes for Services that Require Path-Name Resolution" on page A-1 for a list of additional errors. The fstat, fstatx, and ffullstat subroutines fail if one or more of the following are true: The statx and fstatx subroutines are unsuccessful if one or more of the following are true: These subroutines are part of Base Operating System (BOS) Runtime. The chmod subroutine, chown subroutine, link subroutine, mknod subroutine, mount subroutine, openx, open, or creat subroutine, pipe subroutine, symlink subroutine, vtimes subroutine. Files, Directories, and File Systems for Programmers in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.
https://sites.ualberta.ca/dept/chemeng/AIX-43/share/man/info/C/a_doc_lib/libs/basetrf2/statx.htm
CC-MAIN-2022-40
refinedweb
684
51.99
We have a Chinese version (SAP River (一):SAP River概述) of this blog. Introduction The most common architecture of SAP HANA application is like below(From SAP River Datasheet): Figure 1: Traditional SAP HANA application architecture With the traditional architecture shown in Figure 1, application developer is responsible for both creating data model in database level and implementing control logic in XS level. SQL and SQL Script are required to create data model in SAP HANA database, while xsjs needed to implement control logic in XS level. Therefore, the developer of SAP HANA application at least needs to grasp two technologies to finish a SAP HANA application. Sometimes, developing a SAP HANA application needs the cooperation of two or more developers. SAP River is an option to void this problem. SAP River is a brand new method to develop SAP HANA application. SAP River consists of a programming language, a programming model and a suit of development tools. With help of SAP River, developer can concentrate on the design of business intent, ignoring how is it implemented and optimized in SAP HANA database. SAP River only exists in design-time. When SAP River objects activated, all SAP River code is compiled into SQL Script or XSJS code and then transferred to XS engine indexserver for execution. Figure 2: Function model of SAP River The function model is shown in Figure 2 (From SAP River Datasheet). SAP River integrates all the segments involved in development of SAP HANA application, including data modeling, control logic and access control, which makes developer capable of accomplishing a SAP HANA application grasping only one single technology. Firstly, developer can design the data model for SAP HANA application with SAP River language. During compilation, SAP River will create corresponding database objects for the data model you designed using SAP River language. For example, the entity in SAP River program will be mapped to a table in SAP HANA database. Secondly, using SAP River language, developer can define methods for data objects, which implements business logic. Last but not least, SAP River provides developer a way to design the access control for data and method. SAP River Language As a new generation of SAP HANA application development method, SAP River provides a strongly typed and declarative programing language. With this programming language, even a non-computer-major developer can also easily develop a SAP HANA application. In addition, SAP River also supports embedded SQL Script and XSJS code in SAP River program, which is useful for some complicated logic. SAP River Language mainly includes: - Application: Application is the largest object in SAP River Language, all other objects must be included in an application. Objects in an application can be exposed to external applications via some ways, such as OData. - Entity: Entity is used to store data in database. Usually, one entity is mapped to a table in SAP HANA Database. An entity consists of one or more elements. Usually, one element is mapped to a column of a table. Each element has its own type, the type here can be fundamental type such as integer, string, and can also be a custom type or entity. - Types: Type defined the size and validity of data. SAP River supports different kinds of types, including fundamental type, structured type and stream type. Each entity will automatically define a corresponding implicit type, which has same name with the entity. - Action: Action is similar to the function or method in other programming languages. Usually, SAP River define the business logic of the application in the actions. - View: With the help of view, you can create a data stream using select statement, this data stream is a subset of target data set. The data of this data stream will be dynamically extracted from target data set when the data stream is used. - Role: Role can be created in SAP River code, and assigned some privileges to this role. In this way, access control is implemented. Figure 3. SAP River example program: Hello World As a programming language, SAP River also provides some libraries, which contains many common functions. These libraries can be divided to some categories according to their functionality: - Math Functions: mathematic calculation functions, such as absolute, ceil, floor, etc. - Integer Functions: process the data of integer type, such as toDecimalFloat, toString, etc. - String Functions: deal with strings, such as length, substring, etc. - Date Functions: date functions, such as day, month, etc. - Session Functions: session functions, such as getUserName. - Logging Functions: log function, such log. - Utility Functions: some utility functions, such as parseDecimalFloat, parseInt, etc. - HTTP Calls: Http library is used to send Http request in SAP River code, such as OData or REST services. OData Calls SAP River can expose the data and business logic to client via OData protocol. SAP River can expose data to client either in application level or in namespace level via OData. If exposed in namespace level, then every entities, views, and actions in this namespace are exposed to client. If exposed in application level, then only the objects that are tagged for exposure are exposed to client. Here, let’s take application level as example to illustrate how to expose data via OData. Figure 4. Expose SAP River application via OData ① when expose object in application level, you must use the keyword export to specify which objects are to be exposed to client. Such as Employee in TestSyntax here. ② There are more than one ways to expose SAP River object, OData is one of them. So it is necessary to add a notation “@OData” to declare the way of exposure. SAP River will create corresponding OData service for the application or namespace. ③ By Default, SAP River object is private. To let specified users to access certain object, you need to use the keyword “accessible by” to tell which role or privileges are required to access the object. If every users are allowed to access, you can just use “accessible by sap.hana.All”. How to learn SAP River SAP HANA begin to support SAP River from SPS07, we have many materials to study SAP River. Here are some materials for studying SAP River: - SAP River Datasheet - SAP River Help Docs - Videos and PPTs introducing SAP River - Series of videos introducing SAP River in SAP Academy Conclusion In this blog, we talked about functionality of SAP River, advantage and structure, including the advantages of SAP River over traditional development framework, main features of SAP River and how to expose data to client via OData. The Chinese version is great!! Would it make more sense to simply add a link to the Chinese version on the top of the original document (SAP River Datasheet) instead of creating a second document? Daniel Hi, Daniel. Thanks a lot for your great work in SAP River Datasheet, it helps me a lot. Yes, it is great idea to add a link. As i plan to write a series of SAP River, this is just a part of it. Thanks a lot again for your help!!! 🙂 And sorry for forgetting to add reference for the two Figures, added already. 🙂
https://blogs.sap.com/2014/06/02/sap-river1-sap-river-overview/
CC-MAIN-2018-22
refinedweb
1,191
54.12
I've very new to programming so sorry if this is a stupid question, but I'm trying to make a program with multiple functions, but whenever I attempt to define one it comes up with an error. def startUp(): promptName() def promptName(): name = input("Hello. Please enter your name: ") startUp() SyntaxError: invalid syntax I'd bet you're trying to paste the entire thing into a Python interpreter session. The command line interpreter needs things entered one block at a time, so try pasting the startUp function, hit enter, then promptName and enter, and then run the whole thing with the last line. Alternatively, save it all as a .py file and run the file.
https://codedump.io/share/uPRMJbRFhjJI/1/syntax-error-when-defining-2-functions-in-python
CC-MAIN-2017-34
refinedweb
116
69.62
I am looping through my file structure with a C program. It I hit a new folder, i appending it's path to a linked list so I can look through all subdirectories iteratively. The program consists of: main function, calling the iterative function (which loops through the files) When i loop through all the files once everything works fine. However when i have a while loop in my main function to call the iterative function more often, it always fails on the second time due to segmenatation error. So i was investigating a bit and it seems that one element of the linked list is having an invalid address. All addresses of my elements have this format and length: 0x2398ff0 or 0x2398ee0 However the illegal pointer has an address from 0x7f3770304c58 Does anyone have any thoughts why this address is so long? I have checked throught printf("%p", element) every new address that gets added to the linked list, and this address never appear anywhere before in the code. It like magically appears. I was thinking about a wild pointer maybe, but after i free any pointer i set it to NULL to, which should prevent this right? Thanks for any tip. I haven't posted the code right now cause it is very long and thought maybe there are obvious things i just dont see. EDIT: the entire code, including main function #include <stdio.h> #include <stdlib.h> #include <dirent.h> #include <string.h> #include <errno.h> #include <unistd.h> void iterativefile(FILE *f, char** field, int looper){ DIR *d; struct dirent *dir; typedef struct nextpath { // Define element type of linked list char *thispath; struct nextpath *next; }nextpath; nextpath *startpath = malloc(sizeof(nextpath)); char * beginning = (char *) malloc(2); //create first element in linked list, starting on root node "." strcpy(beginning, "."); startpath->thispath = beginning; int found = 0; nextpath *currentzeiger = startpath; nextpath *firstelement = startpath; char *newdir, *currentfile, *currentpath; do { currentpath = currentzeiger->thispath; d = opendir(currentpath); if (!d){ //IF the path is invalid or cannot be opened firstelement = currentzeiger->next; free(currentzeiger); currentzeiger = firstelement; continue; } while((dir = readdir(d)) != NULL){ if (dir->d_type != DT_REG){ // current element is a directory -> add it to linked list if (strcmp(dir->d_name, ".") != 0 && strcmp(dir->d_name, "..") != 0){ newdir = (char *) malloc(2+strlen(currentpath) + strlen(dir->d_name)); strcpy(newdir, currentpath); strcat(newdir, "/"); strcat(newdir, dir->d_name); nextpath *new = malloc(sizeof(nextpath)); // add new folder to linked list new->thispath = NULL; new->thispath = strdup(newdir); new->next = currentzeiger->next; currentzeiger->next = new; free(newdir); newdir = NULL; } } else { // current element is a file -> check if already included in list, if not, add it currentfile = (char *) malloc(2+strlen(currentpath)+strlen(dir->d_name)); strcpy(currentfile, currentpath); strcat(currentfile, "/"); strcat(currentfile, dir->d_name); found = 0; if (field != NULL) { for (int z = 0; z < looper; z++){ if (field[z] != NULL){ if(strcmp(currentfile,field[z]) == 0){ found = 1; free(field[z]); field[z] = NULL; } } } } if (found == 0){ char *renamefile = (char *) malloc(strlen(currentpath) + 6); strcpy(renamefile, currentpath); strcat(renamefile, ".cbsm"); free(renamefile); renamefile = NULL; } free(currentfile); currentfile = NULL; } } firstelement = currentzeiger->next; free(currentzeiger->thispath); currentzeiger->thispath = NULL; free(currentzeiger); currentzeiger = firstelement; closedir(d); }while(currentzeiger != NULL); } int main() { int counterofwhile = 1; while(1){ printf("Loop number: %d\n", counterofwhile); counterofwhile++; FILE *fp = fopen("datasyn.txt", "rw+"); if (fp == NULL) { printf("FILE ERROR"); FILE *fp = fopen("datasyn.txt", "ab+"); iterativefile(fp, NULL, 0); } else { int lines = 0; int ch = 0; int len = 0; int max_len = 0; while((ch = fgetc(fp)) != EOF){ ++len; if (ch == '\n'){ if(max_len < len) max_len = len; ++lines; len = 0; } } if (len) ++lines; fprintf(stderr, "%d lines\n", lines); if (lines > 0){ int numProgs = 0; char *programs[lines]; char line[max_len + 1]; rewind(fp); while(fgets(line, sizeof(line), fp)){ int new_line = strlen(line) - 1; if (line[new_line] == '\n') line[new_line] = '\0'; programs[numProgs++] = strdup(line); } iterativefile(fp, programs, numProgs); for (int j = 0; j < numProgs; j++){ free(programs[j]); } } else { iterativefile(fp, NULL, 0); } sleep(1); printf("Done\n"); fclose(fp); } } return 0; } In the function iterativefile(), you don't use calloc() to allocate startpath and you don't set startpath->next to null. The memory returned by malloc() is not necessarily zeroed. When you subsequently use startpath->next, all hell breaks loose. You also don't use the file pointer passed into iterativefile(). When you remove the parameter from the definition, you change the calls, and you've got a shadowed fp in main() (in the if (fp == NULL) block, you create a new FILE *fp which is really not needed). It really isn't clear what else is meant to happen; you've not given clear instructions on what the program is meant to be doing. I don't have the datasyn.txt file, but it shouldn't matter since the file stream is not used. I got lots of lines like FILE ERRORLoop number: 280 from the code, but no crash where previously I was getting a crash. Compile with more warning options. I called the file fp17.c and compiled my hacked version using: gcc -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes \ -Wold-style-definition fp17.c -o fp17 With a few other simple changes ( static before the function; int main(void)), the code compiled cleanly (and would have needed -Wshadow to spot the shadowing if it hadn't been for an 'unused variable' warning that pointed me to it).
https://codedump.io/share/w11JjGV6YDiw/1/c-returns-invalid-address-in-loop
CC-MAIN-2017-34
refinedweb
900
61.26
Edited by mike_2000_17: Fixed formatting Hello everybody! I have to Questions to ask please: 1) How can I find the maximum value in a single linked-list recursively? this is what I tried to do: int findMax(int key){ Node max=head; while (max != null){ if (max < max.getKey()) return(max.getNext()); } } it ends up to an error.. :( 2) How can check palindrome in a single linked-list recursively? Thank you for All ^ ^ Edited by mike_2000_17: Fixed formatting The solution you have provided for the first question is not recursive. You can have a recursion that will stop when max.next() is null, and otherwise will return the max value between the current node and the next node. int findMax(Node node) { if(node.next() == null) { return node.getKey(); } else { return max(node.getKey(), findMax(node.next())); } } And you will need a method that will call the recursion with head int findMax() { if(head == null) { return null; } else { return findMax(head); } } Regarding the second question - how would you check for a palindrome in a non-recursive manner? Try to do this and maybe the recursion will seem more clear. Good luck! The comparison is done in this line: return max(node.getKey(), findMax(node.next())); Where you return the max of the current result and the result of the next recursion call. Try to visualize it... can you see it? :) I understand that it is difficult to understand recursive when you learn about it the first time. (I did have difficulty understanding it as well when I first learned about it.) One way to be more clear about it is to follow its iteration and write it down in a paper. Write all variables you use in the function call before a call. Then, write them all again after the first call, and keep doing it until you hit the base case. Try to make the limit the call to at most 3 call before it hits the base case, so you would get an idea how it works. // i.e. // aList: 5 -> 10 -> 4 -> null // // findMax() 1st iteration // current node is 5, next node is 10 // 10 (next node) is not null, comparing 5 with whatever comes back ** // findMax() 2nd iteration // current node is 10, next node is 4 // 4 is not null, comparing 10 with whatever comes back ** // findMax() 3rd iteration // current node is 4, next node is null // next node is null, return 4 // now it is time to come back to each of the recursive call // compare 4 with 10, 10 is max so return 10 // compare 5 with 10, 10 is max so return 10 // done Edited by Taywin: n/a I agree with Taywin, recursions can be hard at the beginning. Let us try and review our code for a linked list of length 4, containing [22]->[43]->[107]->[45]->[null]. First, we call findMax(), it will call findMax(Node node) where node == head: //Iteration 1 int findMax(Node node) // node == head, and head.getKey() == 22 { if(head.next() == null) //we know that head.next() != null, so we are going to the else clause. { return node.getKey(); } else { return max(node.getKey(), findMax(node.next())); //meaning we need to return the max between 22 and the result of findMax(head.next()). } } The method then needs to call to findMax(head.next()) in order to determine which one is bigger: //Iteration 2 int findMax(Node node) // node == head.next(), and head.next()getKey() == 43 if(head.next() == null) //we know that node.next() != null, so we are going to the else clause. { return node.getKey(); } else { return max(node.getKey(), findMax(node.next())); //meaning we need to return the max between 43 and the result of findMax(head.next()). } } And again, the method needs to call to findMax(head.next().next()) in order to determine which one is bigger: //Iteration 3 int findMax(Node node) // node == head.next().next(), and head.next().next().getKey() == 11 if(head.next() == null) //we know that node.next() != null, so we are going to the else clause. { return node.getKey(); } else { return max(node.getKey(), findMax(node.next())); //meaning we need to return the max between 107 and the result of findMax(head.next()). } } And yet again, the method has to call the findMax(head.next().next().next()) in order to determine which one is bigger, 107, the value of the current node checked, or the next node: //Iteration 4 int findMax(Node node) // node == head.next().next().next(), and head.next().next().next().getKey() == 45 if(head.next() == null) //this node is the last one, and its next points to null (head.next().next().next().next() == null), therefore we are inside the if clause. { return node.getKey(); //simply return 45. } else { return max(node.getKey(), findMax(node.next())); //meaning we need to return the max between 45 and the result of findMax(head.next()). } } Now, all the results are coming back to the calling methods. Iteration 4 returned the value of 45 to the calling method, which is the method shown in iteration 3. Meaning that iteration 3 now needs to compute: return max(107, 45) That means that iteration 3 returns 107 to the calling method, iteration 2. The method in iteration 2 will also need to compute and return its value: return max(43, 107) Meaning iteration 2 will return 107 to its calling method, iteration 1. Iteration 1 will also calculate and return: return max(22, 107) Meaning that the final result that will return to the user is 107, which is indeed the highest number stored in the linked list. I hope that this step-by-step dive into the recursion helped you to understand the concept. Don't worry, even though it's hard to grasp at the beginning, it becomes easier as you practice it :) Thaaaaaank you for your great help, I really appreciate your time and effort. Now I see the code more clearer as tracing.. but still not creating a new algorithm ^^ I run the code in my program, but the keyword ( max ) in the last line was not initialized! I try to omit the ( max ) and edit the code like this: int findMax(Node node){ if(node.getNext() == null) { return node.getKey(); } else { int m = findMax(node.getNext()); if (node.getKey()> m) return (node.getKey()); return m; } } but it always return the first element.. ! Edited by mike_2000_17: Fixed formatting You can't omit the max, since this is the actual line that does the comparison between the two numbers - the number in the current node, and the number returned from the following nodes in the recursion. You can implement a max method of your own, or simply use the max(int a, int b) provided in the Math class. I did the following max method: int max (int a, int b){ Node node; int maximam; if (node.getKey() > node.getNext()) maximam = node.getKey(); return maximam; } but there is an error : operator > cannot be applied to int,Node Sorry for annoying you all Edited by mike_2000_17: Fixed formatting Ok... First, the method is accepting two integers, a and b - and then you trying to compare the nodes instead of a and b in question. The method should accept a and b and return the higher value of the two. public int max(int a, int b) { if(a > b) { return a; } return b; } Can you see the difference between this and what you wrote? Second, regarding the complication error - you are trying to compare an int (node.getKey()) and a node (node.getNext()). Perhaps you want to compare it with node.getNext().getKey() aha, so the comparison doesn't take parameters of the class type ( Node ), only int. ..... Now the program compiles with no error, but it still returns only the first value ( key ), I captured the code that I did to create a linked list, to find out the problem: the main program the class Node Is the way that I create the list right? Please post the code as text and not as a picture... import java.util.*; import java.util.Scanner; class Node{ int key; Node next; Node head; public Node (){ this.key=key; next = null; } public Node (int a){ key = a; next = null; head = null; } public void setNext(Node n){ next = n; } public Node getNext(){ return next; } public int getKey(){ return key; } public void newNode(int n){ key = n; Node i ; if (head == null) head = new Node(key); else { i = head; while (i.getNext() != null) i = i.getNext(); } } public int max(int a, int b){ if(a > b) { return a; } return b; } int findMax(Node node){ if(node.getNext() == null) { return node.getKey(); } else { int m = findMax(node.getNext()); return max(node.getKey(), findMax(node.getNext())); } } void findMax(){ System.out.println(" The maximum value is: " + findMax(head)); } } public class maxRecursion { public static void main(String[] args) { Node myList = new Node(); Scanner input=new Scanner(System.in); System.out.println("Enter a set of integers, press y to continue and x to stop: "); int n; char exit = ' '; do{ System.out.print("Enter an integer: "); n = input.nextInt(); myList.newNode(n); System.out.print("continue? "); exit = input.next().charAt(0); }while (exit != 'x' ); myList.findMax(); } } Edited by CLina: n/a Your newNode(int n) method only inserts the new node if (head == null), but does nothing afterwards except to iterate all the way to the end. public void newNode(int n) { key = n; Node i ; if (head == null) { head = new Node(key); // Great, we will have a new node here. } else { i = head; while (i.getNext() != null) { i = i.getNext(); } // ok, i is the last node here... now you need to insert the node. } } Your list only has the first node inserted, which can explain your results... public void newNode(int n) { key = n; Node i ; if (head == null) { head = new Node(key); // Great, we will have a new node here. } else { i = head; while (i.getNext() != null) { i = i.getNext(); } i.getNext()= new Node(key); // ok, i is the last node here... now you need to insert the node. } } Th insertion state is unexpected type, I tried with: i.getNext()= new Node(key); i.getNext()= key; How about the simple old fashioned i.next = new Node(key); :) Yeeeeeh!,, it's reeaally working... :) This is my first try to create a linked list.. I need to get more trainings on that.. Thank you so much for all ^ ^ ....... for the second question ( palindrome ) I prefer to create another thread to be a whole complete reference for other.. Glad to hear that it's working! Try to work on the palindrome with the techniques you have learned :) Please mark the thread as solved.
https://www.daniweb.com/programming/software-development/threads/323475/how-to-find-max-recursively-in-linked-list
CC-MAIN-2018-30
refinedweb
1,783
75.1
- Tutorials - 2D UFO tutorial - Introduction to 2D UFO Project Introduction to 2D UFO Project Checked with version: 5.2 - Difficulty: Beginner An introduction to the 2D UFO project in which we set up the project and download the required assets. Introduction to 2D UFO Project Beginner 2D UFO tutorial Transcripts - 00:03 - 00:05 In this beginner assignment we're going to make a - 00:05 - 00:08 very simple but playable 2D game - 00:08 - 00:10 to use many of the basic concepts - 00:10 - 00:12 from the beginner tutorial modules. - 00:12 - 00:14 We'll be making a 2D UFO game - 00:14 - 00:16 where we'll be collecting gold nuggets. - 00:18 - 00:20 We'll see how to create new game objects, - 00:20 - 00:22 add components to these game objects, - 00:22 - 00:25 set the values on their properties, - 00:25 - 00:27 and position these game objects in the scene - 00:27 - 00:29 to create a game. - 00:30 - 00:32 In our game the player will control - 00:32 - 00:34 a UFO flying around the game board in a - 00:34 - 00:36 top-down 2D view. - 00:36 - 00:39 We'll move the UFO using physics and forces. - 00:39 - 00:41 We'll look at the input from the player - 00:41 - 00:43 through the keyboard and we'll use those inputs - 00:43 - 00:44 to apply forces to the UFO, - 00:44 - 00:46 making it move in our scene. - 00:47 - 00:49 We'll see how to detect contact between - 00:49 - 00:52 the UFO and the pickup game objects - 00:52 - 00:54 and use these events to collect - 00:54 - 00:56 the pickup game objects. - 00:56 - 00:58 When we've done we'll have made a simple - 00:58 - 01:00 UFO game where the player controls the - 01:00 - 01:03 UFO with the keyboard, picks up and counts - 01:03 - 01:05 special collectable objects, - 01:05 - 01:07 displays the current count, - 01:07 - 01:09 and ends the game when all of the - 01:09 - 01:11 game objects have been picked up. - 01:13 - 01:15 To complete this project we'll use custom - 01:15 - 01:17 created 2D art assets - 01:17 - 01:19 which can be downloaded from the - 01:19 - 01:21 Unity Asset Store. - 01:21 - 01:23 Let's begin by creating a new project. - 01:24 - 01:26 First open the Unity editor - 01:26 - 01:28 if you haven't done that already. - 01:28 - 01:30 You can create a new project by - 01:30 - 01:33 choosing File - New Project. - 01:36 - 01:38 This will bring us to the home screen. - 01:39 - 01:41 On the home screen you can create a - 01:41 - 01:43 new project either by hitting the - 01:43 - 01:45 blue Create New Project button - 01:45 - 01:47 in the centre of the window, - 01:47 - 01:49 or by clicking the New button - 01:49 - 01:51 at the top of the window. - 01:53 - 01:55 The first thing that we need to do is - 01:55 - 01:56 to give our project a name, - 01:56 - 01:58 I'm going to call the project UFO Game. - 02:01 - 02:03 The next thing we need to do is set the - 02:03 - 02:05 destination, or path, to our new project. - 02:06 - 02:08 I'm going to put this new project on my desktop. - 02:16 - 02:18 You'll see we have a choice - 02:18 - 02:20 to set the project preferences - 02:20 - 02:24 to either 3D or 2D mode. - 02:24 - 02:25 Since we're working in 2D - 02:25 - 02:27 I'm going to click 2D. - 02:30 - 02:32 This will set the preferences for the Unity editor - 02:32 - 02:35 to useful defaults for creating 2D game.s - 02:36 - 02:39 For more information on working in 2D, - 02:39 - 02:41 please see the information linked below. - 02:43 - 02:47 Now click the Create Project button - 02:47 - 02:49 to create our new project. - 02:49 - 02:51 Unity will create a new empty scene - 02:51 - 02:53 for us to work in. - 02:54 - 02:56 The first thing we'll need to do is to download - 02:56 - 02:58 and import our art assets - 02:58 - 03:00 from the Unity Asset Store. - 03:01 - 03:03 To access the Asset Store choose - 03:03 - 03:05 Window - Asset Store. - 03:05 - 03:07 Or use the keyboard shortcut - 03:07 - 03:09 command + 9 on mac, - 03:09 - 03:12 or control + 9 on windows. - 03:13 - 03:15 The Asset Store is a service that - 03:15 - 03:17 Unity Technologies provides - 03:17 - 03:19 where creators can both buy and sell - 03:19 - 03:21 assets to make games. - 03:21 - 03:25 These include art, music, scripts, effects, - 03:25 - 03:28 all the way up to complete games and projects. - 03:30 - 03:32 Unity also publishes content - 03:32 - 03:34 to the Asset Store, including teaching - 03:34 - 03:36 projects like this one to help - 03:36 - 03:38 you learn how to make games. - 03:39 - 03:41 By default the Asset Store is - 03:41 - 03:43 opened as a docked tab. - 03:44 - 03:46 Let's undock the tab so that we can - 03:46 - 03:48 expand the window. - 03:58 - 04:00 In the upper-right corner of - 04:00 - 04:02 the expanded Asset Store window - 04:02 - 04:05 we'll see a listing of asset categories. - 04:05 - 04:07 At the bottom we should find - 04:07 - 04:09 Unity Essentials. - 04:09 - 04:11 Expand this category and we'll find - 04:11 - 04:15 a subcategory called Sample Projects. - 04:16 - 04:18 Within Sample Projects we'll find - 04:18 - 04:21 an item called 2D UFO tutorial. - 04:22 - 04:25 Click on the item title to open it. - 04:26 - 04:28 Once the page loads we'll see a - 04:28 - 04:31 download button in the upper-left corner. - 04:31 - 04:33 Click on the Download button to begin - 04:33 - 04:35 downloading the assets. - 04:35 - 04:37 The Download button will be replaced - 04:37 - 04:39 by the percentage of the download completed - 04:39 - 04:41 once we click on it. - 04:42 - 04:44 Once the download is complete we'll see - 04:44 - 04:47 a dialogue labelled Importing Complete Project. - 04:48 - 04:50 This warns us that importing a - 04:50 - 04:52 complete project will overwrite - 04:52 - 04:54 our current project settings. - 04:55 - 04:57 In this case, because we've just created a - 04:57 - 04:59 new project this is fine. - 05:00 - 05:02 Click the Import button to continue. - 05:03 - 05:05 Next we'll be given a choice - 05:05 - 05:07 of which assets we would like to - 05:07 - 05:09 import in to our project. - 05:09 - 05:12 The default is All, which is what we want, - 05:12 - 05:14 so go ahead and click Import. - 05:16 - 05:18 Close the Asset Store window. - 05:19 - 05:22 We now have our new project with our assets imported. - 05:22 - 05:24 And the default new empty scene open. - 05:25 - 05:27 Before creating anything in the new scene - 05:27 - 05:29 we need to save our scene. - 05:29 - 05:31 We can save our scene by choosing - 05:31 - 05:33 File - Save Scene, - 05:33 - 05:36 or by using the keyboard shortcut - 05:36 - 05:38 command + S on mac, - 05:38 - 05:40 or control + S on windows. - 05:42 - 05:44 I'm going to save this scene - 05:44 - 05:46 in the Assets directory in the - 05:46 - 05:48 folder called Scenes. - 05:53 - 05:55 I'm going to call the scene Main. - 06:00 - 06:02 We can now see in our Scenes folder - 06:02 - 06:04 the scene called Main. - 06:05 - 06:07 It's worth noting that the - 06:07 - 06:11 Completed, Prefabs, Scenes, Scripts - 06:11 - 06:13 and Sprites folders - 06:13 - 06:17 were all created when we imported our asset package. - 06:19 - 06:21 The Completed folder contains - 06:21 - 06:24 a completed version of the project, - 06:24 - 06:27 which you can refer to if you get stuck. - 06:29 - 06:31 Great, that's the end of our first lesson. - 06:32 - 06:35 In our next lesson we're going to lay out our play field.
https://unity3d.com/learn/tutorials/projects/2d-ufo-tutorial/introduction-2d-ufo-project
CC-MAIN-2018-34
refinedweb
1,524
76.15
fredrikj.net / blog / Hypergeometric series with SymPy July 7, 2008 SymPy 0.6.0 is out, go get it! (It will be required for running the following code.) Here is a nice example of what SymPy can be used for (I got the idea to play around with it today): automated generation of code for efficient numerical summation of hypergeometric series. A rational hypergeometric series is a series (generally infinite) where the quotient between successive terms, R(n) = T(n+1)/T(n), is a rational function of n with integer (or equivalently rational) coefficients. The general term of such a series is a product or quotient of polynomials of n, integers raised to the power of An+B, factorials (An+B)!, binomial coefficients C(An+B, Cn+D), etc. The Chudnovsky series for π, mentioned previously on this blog, is a beautiful example: Although this series converges quickly (adding 14 digits per term), it is not efficient to sum it term by term as written. It is slow to do so because the factorials quickly grow huge; the series converges only because the denominator factorials grow even quicker than the numerator factorials. A much better approach is to take advantage of the fact that each (n+1)’th term can be computed from the n‘th by simply evaluating R(n). Given the expression for the general term T(n), finding R(n) in simplified form is a straightforward but very tedious exercise. This is where SymPy comes in. To demonstrate, let’s pick a slightly simpler series than the Chudnovsky series: The SymPy function hypersimp calculates R given T (this function, by the way, was implemented by Mateusz Paprocki who did a GSoC project for SymPy last year): >>> from sympy import hypersimp, var, factorial >>> var('n') n >>> pprint(hypersimp(factorial(n)**2 / factorial(2*n), n)) 1 + n ------- 2 + 4*n So to compute the next term during the summation of this series, we just need to multiply the preceding term by (n+1) and divide it by (4n+2). This is very easy to do using fixed-point math with big integers. Now, it is not difficult to write some code to automate this process and perform the summation. Here is a first attempt: from sympy import hypersimp, lambdify from sympy.mpmath.lib import MP_BASE, from_man_exp from sympy.mpmath import mpf, mp def hypsum(expr, n, start=0): """ Sum a rapidly convergent infinite hypergeometric series with given general term, e.g. e = hypsum(1/factorial(n), n). The quotient between successive terms must be a quotient of integer polynomials. """ expr = expr.subs(n, n+start) num, den = hypersimp(expr, n).as_numer_denom() func1 = lambdify(n, num) func2 = lambdify(n, den) prec = mp.prec + 20 one = MP_BASE(1) << prec term = expr.subs(n, 0) term = (MP_BASE(term.p) << prec) // term.q s = term k = 1 while abs(term) > 5: term *= MP_BASE(func1(k-1)) term //= MP_BASE(func2(k-1)) s += term k += 1 return mpf(from_man_exp(s, -prec)) And now a couple of test cases. First some setup code: from sympy import factorial, var, Rational, binomial from sympy.mpmath import sqrt var('n') fac = factorial Q = Rational mp.dps = 1000 # sum to 1000 digit accuracy Some formulas for e (source): print hypsum(1/fac(n), n) print 1/hypsum((1-2*n)/fac(2*n), n) print hypsum((2*n+1)/fac(2*n), n) print hypsum((4*n+3)/2**(2*n+1)/fac(2*n+1), n)**2 Ramanujan series for π (source): print 9801/sqrt(8)/hypsum(fac(4*n)*(1103+26390*n)/fac(n)**4/396**(4*n), n) print 1/hypsum(binomial(2*n,n)**3 * (42*n+5)/2**(12*n+4), n) Machin’s formula for π: print 16*hypsum((-1)**n/(2*n+1)/5**(2*n+1), n) - \ 4*hypsum((-1)**n/(2*n+1)/239**(2*n+1), n) A series for √2 (Taylor series for √(1+x), accelerated with Euler transformation): print hypsum(fac(2*n+1)/fac(n)**2/2**(3*n+1), n) Catalan’s constant: print 1./64*hypsum((-1)**(n-1)*2**(8*n)*(40*n**2-24*n+3)*fac(2*n)**3*\ fac(n)**2/n**3/(2*n-1)/fac(4*n)**2, n, start=1) Some formulas for ζ(3) (source): print hypsum(Q(5,2)*(-1)**(n-1)*fac(n)**2 / n**3 / fac(2*n), n, start=1) print hypsum(Q(1,4)*(-1)**(n-1)*(56*n**2-32*n+5) / \ (2*n-1)**2 * fac(n-1)**3 / fac(3*n), n, start=1) print hypsum((-1)**n * (205*n**2 + 250*n + 77)/64 * \ fac(n)**10 / fac(2*n+1)**5, n) P = 126392*n**5 + 412708*n**4 + 531578*n**3 + 336367*n**2 + 104000*n + 12463 print hypsum((-1)**n * P / 24 * (fac(2*n+1)*fac(2*n)*fac(n))**3 / \ fac(3*n+2) / fac(4*n+3)**3, n) All of these calculations finish in less than a second on my computer (with gmpy installed). The generated code for the Catalan’s constant series and the third series for ζ(3) are actually almost equivalent to the code used by mpmath for computing these constants. (If I had written hypsum earlier, I could have saved myself the trouble of implementing them by hand!) This code was written very quickly can certainly be improved. For one thing, it should do some error detection (if the input is not actually hypergeometric, or if hypersimp fails). It would also be better to generate code for summing the series using binary splitting than using repeated division. To perform binary splitting, one must know the number of terms in advance. Finding out how many terms must be included to obtain a accuracy of p digits can be done generally by numerically solving the equation T(n) = 10-p (for example with mpmath). If the series converges at a purely geometric rate (and this is often the case), the rate of convergence can also be computed symbolically. Returning to the Chudnovsky series, for example, we have >>> from sympy import * >>> fac = factorial >>> var('n') n >>> P = fac(6*n)*(13591409+545140134*n) >>> Q = fac(3*n)*fac(n)**3*(-640320)**(3*n) >>> R = hypersimp(P/Q,n) >>> abs(1/limit(R, n, oo)) 151931373056000 >>> log(_,10).evalf() 14.1816474627255 So the series adds 14.18165 digits per term. With some more work, this should be able to make it into SymPy. The goal should be that if you type in a sum, and ask for a high precision value, like this: >>> S = Sum(2**n/factorial(n), (n, 0, oo)) >>> S.evalf(1000) then Sum.evalf should be able to automatically figure out that the sum is of the rational hypergeometric type and calculate it using the optimal method.
http://fredrikj.net/blog/2008/07/hypergeometric-series-with-sympy/
CC-MAIN-2017-13
refinedweb
1,144
52.29
Exception: type 'sage.rings.real_mpfr.RealLiteral' is not a valid type for a Constant value. I added cvxpy and ecos to my Sage installation. When I run the following code for the first time, I get the exception in the title. However, the second time the code runs fine. RealNumber=float Integer=int import numpy from pylab import * import math from cvxopt import matrix as m from cvxpy import * x = Variable(1) y = Variable(1) constraints constraints = [ x+y >= -1.0, x+y <= 10.0] objective objective = Maximize(x+y) p = Problem(objective, constraints) result = p.solve() print result The optimal value print x.value print y.value add a comment
https://ask.sagemath.org/question/10898/exception-type-sageringsreal_mpfrrealliteral-is-not-a-valid-type-for-a-constant-value/
CC-MAIN-2018-13
refinedweb
111
62.85
In the world of browser-based development, interoperability is king. Unfortunately, interoperability can be at the expense of performance. With support for bi-directional, full-duplex messaging simply out of the reach of the HTTP protocol, real time messaging support in browser-based applications can be severely limited. Fortunately, standards groups including the W3C and IETF have been hard at work over the last few years on a standard specification called the WebSocket protocol which aims to bring these much needed capabilities to the masses. As with most industry standards, the WebSocket protocol has seen a significant amount of volatility over the last couple of years, but the good news is that the dust has settled making this a great time to start considering how WebSockets fits into your solution architectures. Why WebSockets? Before I explain what the WebSocket protocol is and how it works, let me talk about some of the challenges that it aims to solve. Today’s applications demand real-time, low-latency messaging. Whether you are building a web, mobile or composite application, users expect to be able to interact with data as close to real-time as possible while minimizing the impact to their overall user experience. The key to enabling real-time, immersive experiences is to connect as closely as possible to the source of the events of interest so that when an event takes place, your application, service, or user experience is notified as quickly as possible. This isn’t that big of a problem for applications and services within the enterprise that can leverage socket-based messaging like TCP or UDP, but more and more applications are being built for the web and devices that rely entirely on the Internet for connectivity. Even enterprise applications today are becoming more and more dependent on services hosted by external vendors/partners and commercial cloud providers bringing hybrid solutions into the mainstream. There are many applications you likely interact with today that require this type of connectivity and yet deliver real-time or near-real-time user experiences. If you’ve interacted with Twitter or Facebook, you’ve experienced near-real-time streams of activity constantly being updated all over the web, on your mobile device or browser without needing to hit a refresh button. Periodic Polling via XHR One common approach for delivering near-real-time messaging is commonly known as AJAX or XML Http Request (XHR). This works by polling an endpoint at a given interval and returning data when it is available. Since there are no page refreshes or post backs happening, this gives the user the illusion of a dedicated connection, but the reality is that XHR is both latent and consumes resources and bandwidth with each subsequent poll. Long Polling An alternative to XHR is long polling. With long polling, the server/endpoint holds on to the HTTP request until there is data/payload available and then returns to the client. The client then follows up with another request and waits until new data or event/payload is available. While both of these approaches have powered a vast number of innovative solutions, they aren’t without liabilities. In addition to the high latency of both models, the programming model for each isn’t the most intuitive. If you’ve worked with AJAX/XHR or long polling, you may have gotten the feeling that there’s a bit of “magic” happening under the hood because there is. In addition, scalability can become a problem as the number of clients increase. In addition, consuming endpoints that reside outside of application’s domain requires clever hacks like JSONP or the use of non-interoperable adapters. The WebSocket Protocol The WebSocket protocol addresses many if not most of these issues by enabling fast, secure, two-way communication between a client and a remote host without relying on multiple HTTP connections. WebSockets support full-duplex, bi-directional messaging, which is great for real-time, low-latency messaging scenarios. Best of all, WebSockets is fully interoperable and cross-platform at the browser level, natively supporting ports 80/443 as well as cross-domain connections. There are two parts to the WebSocket protocol: client and server. The W3C owns the JavaScript client API and the IETF Hypertext Bidirectional (HyBi) Working Group is responsible for guidelines governing how the server-side protocol should be implemented. Browser vendors like Google, Microsoft and Mozilla natively support the WebSocket protocol by implementing the client API directly in their browsers. This means that if Chrome or IE support WebSockets, the API is native to the browser and you can start programming against WebSocket endpoints right away. On the server side, platform vendors implement the IETF specification providing middleware tooling that enables you to expose your back-end services over the WebSocket protocol. How it Works As shown in Figure 3, when a WebSocket-enabled browser/client establishes a connection to a WebSocket endpoint, it conducts a handshake over port 80/443 and requests an upgrade to the WebSocket protocol. If the server supports the WebSocket protocol, and the protocol version of the client and server match, the web server will accept the upgrade request and upgrade the request. From this point on, the client and server have a direct, socket connection and can freely exchange messages. Let’s take a look at what the HTTP request and response from step 2 above looks like: GET ws://localhost/TweetStreamService/ TwitterStreamService.svc Connection:Upgrade Host:localhost:80 Origin:null Sec-WebSocket-Key:GleUPijQdfEAxSyUGqNugw== Sec-WebSocket-Version:13 Upgrade:websocket The first thing you’ll notice is that this is just an HTTP GET request. However, the URI is using a protocol scheme of ws. All WebSocket endpoints must be addressed in this manner. Everything after the prefix is that of a typical URI, including the server and resource path. In this case, I’m addressing a WCF service that implements the WebSocket protocol, but this is just one implementation option we’ll explore along with others. The Origin request header is optional and can be used to demarcate the domain in which the communication will take place. Using this header, the server can decide to only serve requests originating from a given domain and reject all others. It is in this way that cross-domain connections are implicitly supported but can be constrained as needed. Next, notice that the Connection request header is set to Upgrade. This is to indicate that the client is requesting an upgrade to the WebSocket protocol, specified in the Upgrade header if the server supports it. The Sec-WebSocket-Key header consists of a hash which prevents an intermediary from impersonating the server (more on this in a bit). Finally, the Sec-WebSocket-Version header indicates the protocol version which the client supports, which maps to the hybi server reference, in this case IETF RFC 6455. If the server supports WebSockets and the versions are compatible, the server will respond as follows: HTTP/1.1 101 Switching Protocols Connection:Upgrade Sec-WebSocket-Accept:tR1jreg0cIPCg0/jekUcTubHV5M= Upgrade:websocket Notice the first line in the response along with the Connection and Upgrade response headers. This indicates that the HTTP request has been upgraded to WebSockets and from this point forward, the client and server can exchange messages in a fully duplex bi-directional manner! As I mentioned, the Sec-WebSocket-Key request header is used to prevent malicious scripts from fooling a WebSocket server into accepting non-WebSocket payloads. The way this works is that the server takes the value of the Sec-WebSocket-Key and concatenates it with a key, computes an SHA1 hash and returns that value to the client in the Sec-WebSocket-Accept response header. As a result, only the original WebSocket server that accepted the upgrade request can communicate with the client. WebSockets Today It is very important that the client version and server match (Sec-WebSocket-Version), and this wasn’t always easy to guarantee in the early days when the protocol was being standardized. Imagine writing and shipping an application written for Google Chrome only to find that a month later, the client API was revised. If the server vendor didn’t update their stack to the new specification in a timely manner, or the browser vendor lagged in updating their API behind the IETF standard, you could be stuck in Sec-WebSocket-Version version hell. Fortunately, most of the standardization dust has settled and both the client and server standards are much more stable so you are unlikely to encounter the same pitfalls early WebSocket pioneers encountered. Even better, some server implementations like Socket.IO and WebSocket.IO are kind enough to support older client API versions and silently bridge the versions for you. The latest version of the IETF server standard is RFC 6455. This is expected to be the final standard. On the client side, the latest W3C specification is Editor’s Draft 26 June 2012. You can find links to both of these specifications in the sidebars. IETF RFC 6455 requires that the value of the Sec-WebSocket-Version is set to 13. A W3C browser that is compatible with RFC 6455 will automatically include this version number. Although the W3C specification continues to be refined, many browser vendors and server stack providers have invested in RFC 6455. Microsoft has invested significantly in the WebSocket protocol, both by contributing time and talent to the standards bodies and including support for WebSockets on the Microsoft platform including browser, client and server. On the server side, ASP.NET and WCF 4.5 supports the latest version, RFC 6455, bringing the power and flexibility of WebSockets to the server-side developer’s fingertips and providing the option to use whichever programming model you are most comfortable with. Let’s take a look at what the current state of support is on both the client and the server. Browser Support As of this writing, all major browsers including IE 10, Chrome 16, FireFox 11 and Safari support RFC 6455. The easiest way to ensure that your browser supports WebSockets is to use a JavaScript library called Modernizr which allows you to verify support for several emerging browser features (see the “Bullet Proof your Web Apps with Modernizr” sidebar for more information). In addition, you can attempt to create a WebSocket connection and inspect the request/response headers using a tool like Fiddler or F12 developer tools. Figure 4 shows a test I conducted with Google Chrome. I’m consuming a WebSocket service that I’ll cover in detail shortly, but as you can see, the service supports protocol version 13 which indicates that RFC 6455 is supported. Server Support on the Microsoft Platform As I’ve shared, WebSockets support is a feature in .NET 4.5, which has not been released as of this writing. However, that should not stop you from starting to experiment with WebSockets today. To take advantage of WebSockets in ASP.NET and/or WCF, you’ll need the .NET 4.5 Release Candidate (RC) which I recommend acquiring with Visual Studio 2012 RC. These pre-release bits provide a great opportunity to get your hands on the bits early and the RC is very stable. In addition, WebSockets is only supported on Windows 8 and Windows Server 2012. This means that you can develop on Windows 7 or earlier OS versions, but your ASP.NET or WCF WebSocket services will only work on Windows 8 or Windows Server 2012. Paul Batum, Program Manager on the WebSockets team at Microsoft, has a great post that walks through the steps for configuring Windows 8/Server 2012 for WebSockets so I won’t repeat them. See Paul’s post here:. While unfortunate, believe it or not, this is not a ploy by Microsoft to get developers to migrate to the latest OS. The problem is with the way that HTTP.sys works in Windows 7/Server 2008/R2 and earlier OS versions. Rather than issue a patch that could result in large regressions for existing customers that rely on the current behavior of HTTP.sys, the newest versions of Windows include a modification to HTTP.sys that enables WebSockets. If you are using the Release Preview of Windows 8, I encourage you to install Visual Studio 2012 RC to get the most out of WebSockets today. One Common Core, Multiple Choices Support for WebSockets is provided via the System.Net.WebSockets namespace which provides powerful low-level APIs for leveraging WebSockets natively if you so choose. This namespace is common to both ASP.NET and WCF so regardless of your preference, you share a common base which ensures a consistent API. While you are free to start programming right away against System.Net.WebSockets, Microsoft has developed some higher level abstractions for ASP.NET and WCF that make it very easy to get up and running via the Microsoft.WebSockets namespace, which is available via NuGet, and I will focus on these types in this article. When you install the NuGet package, an assembly called Microsoft.WebSockets.dll (the latest version is 0.2.3 as of this writing) will be added to your project which contains two higher level classes for working with ASP.NET or WCF. ASP.NET 4.5 WebSocketHandler The WebSocketHandler class provides common methods that you’ll work with when programming WebSockets. These methods wrap the lower-level APIs in System.Net.WebSockets. Fundamentally, you’ll work with three main methods when programming WebSockets. Take a look at the snippet below which represents a simple, yet fully functional WebSocket service: public class ServerEventHandler : WebSocketHandler { public override void OnOpen() { base.Send("You connected to a WebSocket!")); } public override void OnMessage(string message) { // Echo message base.OnMessage(message); } public override void OnClose() { // Free resources, close connections, etc. base.OnClose(); } } As you can see, with ASP.NET you simply inherit from WebSocketHandler and override the OnOpen method which fires when a client establishes a WebSocket connection to your service. At this point, you can do nothing or simply send back a friendly greeting to indicate that the connection was successful using the Send method. When your service receives a message, the OnMessage method is called and the payload of the message is provided as a parameter. WebSockets support text, XML and binary payloads. Here you would do some work, blocking or asynchronous, and send a response back to the client. The OnClose method provides an opportunity to clean up resources such as closing connections, etc. It fires when the client explicitly sends a close request or automatically when a user closes a browser tab or the browser itself. Simple Eventing with ASP.NET WebSocketHandler Let’s take a look at an example of using the WebSocketHandler to build a simple demo. While the canonical WebSocket example is a chat application (search the web to find a plethora of examples), I want to distill the power of WebSockets by showing you a simple event-driven sample that simply echoes the date in time at a given interval. I have an HTML5 page consisting of some JavaScript references. The implementation of the form consists of an input button, a paragraph element and an unordered list element: <body> <h2>Simple Event Sample</h2> <input type="button" id="connect" value="Connect" class="btn" /> <p id="Status"></p> <ul id="messages"/> </body> Next, take a look at the JavaScript in Listing 2. I’m using some simple JQuery to wire up the simple HTML. First, I use Modernizr to detect if the browser supports WebSockets. This is very important for the reasons previously covered. If the browser supports WebSockets, I just append a string indicating so to the paragraph element. From there, when I click on the Connect button, I initialize the host variable to the URI for my WebSocket service: var host = "ws://localhost/SimpleEventingHandler/ SimpleEventingService.ashx"; Next, since WebSockets is supported, I instantiate a native WebSocket class and set the instance to the connection variable: connection = new WebSocket(host); The WebSocket class is native to the browser, meaning that if the browser supports the W3C WebSocket API, the class will be available to you. From there, I call onopen on the WebSocket instance and if the call is successful, I turn my button’s text green to provide a visual indication that I’ve connected: connection.onopen = function () { $(".btn").css("color", "green"); } At this point, I’m waiting for messages. As soon as a message arrives, I’ll convert to a string and append to my unordered list: connection.onmessage = function (message) { var data = window.JSON.parse(message.data); $("<li/>").html(data).appendTo($('#messages')); }; Now let’s take a look at the ASP.NET implementation of the server using the WebSocketHandler that is available in Microsoft.WebSocket.dll. Listing 3 and 4 provide the full code sample which consists of an implementation of an IHTTPHandler which will host the server as an .ashx and a WebSocketHandler which contains the implementation of the service. First, as shown in Listing 3, I implement a simple IHTTPHandler which defines two methods, ProcessRequest and IsReusable. ProcessRequest is called when the HTTP request arrives at the handler and is passed to my code via an instance of HttpContext: public void ProcessRequest(HttpContext context) { if (context.IsWebSocketRequest) { context.AcceptWebSocketRequest(new SimpleEventingWebSocketHandler()); } } If the request is a WebSocket request, I call AcceptWebSocketRequest to transition to the WebSocket protocol and pass an instance of a WebSocketHandler implementation which serves the initial and future requests. As shown in Listing 4, the code for the WebSocketHandler is very simple. Upon establishing the client connection, a JSON message is returned along with the date and time every second: public override void OnOpen() { JavaScriptSerializer serializer = new JavaScriptSerializer(); base.Send(serializer.Serialize("Connected!")); while (true) { base.Send(serializer.Serialize("Time now is: " + DateTime.Now)); System.Threading.Thread.Sleep(1000); } } Figure 4 shows the sample running in IE 10, which natively supports IETF RFC 6455. While this is a very simple example, it shows the power of WebSockets in a simple way. Each second represents an event that the client cares about, and each time a second ticks, a message is sent to the browser over the same socket connection negating the need to poll at the client. Of course, this could be a stock ticker, a Twitter stream, or a price change from a pricing engine. The possibilities are endless. WCF 4.5 WebSocketService As I mentioned earlier, the Microsoft.WebSockets.dll also contains support for WCF. Since both the ASP.NET and WCF implementations are based on the same System.Net.WebSockets, the APIs are very similar. As shown in Listing 5, after implementing the WebSocketService class, I simply override the OnOpen method and the implementation is otherwise identical: public class SimpleEventingService : Microsoft.ServiceModel.WebSockets.WebSocketService { public override void OnOpen() { JavaScriptSerializer serializer = new JavaScriptSerializer(); base.Send(serializer.Serialize("Connected!")); while (true) { base.Send(serializer.Serialize("Time now is: " + DateTime.Now)); System.Threading.Thread.Sleep(1000); } } To host the WebSocketService, all of the options for hosting WCF apply. You can self-host, host as an NT Service or host in IIS/Windows Server AppFabric/AppFabric 1.1 for Windows Server. The Microsoft.WebSocket.dll includes a WebSocketHost which simplifies the process (I recommend using AppFabric 1.1 for Windows Server, a free extension to IIS as a host when possible as it provides a number of benefits around management and hosting of WCF services). To host a WCF WebSocket service in IIS, you must create a service host factory as shown in Figure 5. Within the ServiceHost class, you use WebSocketHost to create a binding that is compatible with WebSockets. The CreateWebSocketBinding method on the WebSocketHost builds up a custom binding consisting of a ByteStreamMessageEncodingBindingElement and an HttpTransportBindingElement which results in a simple channel stack for supporting WebSockets natively from browser clients: var host = new WebSocketHost(serviceType, new ServiceThrottlingBehavior { MaxConcurrentSessions = int.MaxValue, MaxConcurrentCalls = 20 }, baseAddresses); var binding = WebSocketHost.CreateWebSocketBinding(https: false, sendBufferSize: 2048, receiveBufferSize: 2048); binding.SendTimeout = TimeSpan.FromMilliseconds(500); binding.OpenTimeout = TimeSpan.FromDays(1); host.AddWebSocketEndpoint(binding); Since we’re using a custom factory, be sure to add a reference to the factory in your .svc file: <%@ ServiceHost Language="C#" Debug="true" Service ="Service.SimpleEventingService" CodeBehind ="SimpleEventingService.svc.cs" Factory ="Microsoft.ServiceModel.WebSockets .Util.WebSocketServiceHostFactory" %> Now, go back to the client-side JavaScript we reviewed in the first sample, and change the host variable from: var host = "ws://localhost/SimpleEventingHandler/ SimpleEventingService.ashx"; To: var host = "ws://localhost/SimpleEventingService/ SimpleEventingService.svc"; If you refresh (or restart) the HTML page in Listing 1, the client will work in exactly the same way as when it was using the HTTP hander with ASP.NET. WCF NetHttpBinding While support for WebSockets in browser clients is very powerful, this isn’t the only scenario where WebSockets is useful. The WCF NetHttpBinding brings the flexibility of WebSockets to the WCF programming model for duplexing. By using the WebSocket protocol, the reach of WCF services is greatly extended due to the firewall-friendly nature of the WebSocket protocol and Microsoft reports that performance of the NetHttpBinding is close to that of NetTcp. The beauty of the WCF programming model is that taking advantage of a new channel is as simple as swapping out the binding, so if you are using duplexing today, I encourage you to consider NetHttpBinding. While the focus of this article is on WebSockets for the browser, my friend and fellow CSD MVP, Damir Dobric, has a great blog post that shows you how to take advantage of the NetHttpBinding in non-browser scenarios:. In addition, Paul Batum has published a great sample that demonstrates how to consume a WebSocket service from Metro style clients in Windows 8. Please see the sidebar “Native WebSockets and Windows 8” for more information. Building a Streaming Twitter Client with the WCF WebSocketService In the previous example, I showed you how to expose a WebSocket service to a browser using both ASP.NET and WCF. We simulated the “real-time” aspect of the service by sending the date and time every second. Obviously, this sample isn’t very useful, but it does demonstrate a very simple way of exposing real-time events to the client. Now, let’s take a look at a more realistic scenario. Someone recently said that Twitter demos have become the “hello world” of the modern web, which speaks to how pervasive Twitter has become. What makes Twitter interesting for a WebSocket application is that it not only epitomizes a real-time app, but also exposes an API that allows you to interact with it in a variety of fun and interesting ways. I’m not going to turn this article into a primer on Twitter development as there are plenty of resources out there to help get you started. It is worth noting, however, that Twitter supports two APIs for consuming events: The REST Search API and the Streaming API. Both APIs are useful and provide varying features, and for this sample, I’m going to use the REST Search API to demonstrate one approach for providing real-time updates to your browser clients. The sidebar includes a link to the Twitter docs for the REST Search API, but to summarize, the API is exposed via a URI with varying query string parameters. For example, the following URI would be used to search all updates that include the term “AZ”, with a maximum of 5 status updates per page and including any images or other media resources if available: include_entities=true If you paste the above URL into your browser and are signed in to Twitter, you will get a JSON result resembling something like: {"completed_in":0.12,"max_id":219118633002078208,"max_ id_str":"219118633002078208","next_page":"?page=2&max_ id=219118633002078208&q=mtb&rpp=5&include_entities=1" ,"page":1,"query":"mtb","refresh_url":"?since_id=219118 633002078208&q=mtb&include_entities=1","results":[{ "created_at":"Sat, 30 Jun 2012 17:22:07 +0000" ,"entities":{"hashtags":[],"urls":[{"url":"http:\/\/t. co\/Kr2ZC3IZ","expanded_url":"http:\/\/youtu.be\/MDaSZR 7siwo?a","display_url":"youtu.be\/MDaSZR7siwo?a","indic es":[36,56]}],"user_mentions":[{"screen_name":"YouTube" ,"name":"YouTube","id":10228272,"id_str":"10228272","in dices": … From here you would serialize the results into an array of literals and if you wanted to page ahead, or see if there are any more tweets that match your parameters, you could call the REST API in some kind of a loop, or leverage XHR, both of which will definitely work. The opportunity is to push some of that messaging, looping, logic to the back end where it belongs (at least in my opinion) and this is where WebSockets comes in. By wrapping the APIs in a service, the client can focus on presentation and validation and not have to worry about tying up browser threads, managing polling intervals, etc., providing for a much cleaner separation of concerns. For the service, you could use any server-side technology that supports WebSockets (or even build your own server by implementing RFC 6455) and in the steps that follow, I’ll show you how to write a WCF service that leverages the Microsoft.WebSockets WebSocketService helper along with a great tool by fellow MVP Joe Mayo called Linq2Twitter, available on NuGet, which is exactly what you think it is: a Linq client library that handles all the heavy lifting of authenticating and interacting with the REST APIs. (You can learn more about Linq2Twitter by referring to the sidebar.) Building the StatusStreamService with WebSocketService Since I’ve already covered the fundamentals of how WebSocketService works, let’s jump right in. Listing 6 provides the full implementation of the StatusStreamService which, along with all the other code I reference in this article, is available for download on code-magazine.com. If you’ve read this far, the code should be pretty familiar as it includes the OnOpen method and a Send method that I demonstrated in the previous WCF and ASP.NET examples. I’ve kept the service implementation pretty light, delegating the interaction with the Twitter API (via Linq2Twitter) to a StreamManager class that I’ll review shortly. The StatusStreamService issues calls to the REST API given a hash tag (which for the purposes of this sample is hardcoded) in a loop, given a polling interval. Since there are API quotas that Twitter will hold you to in order to ensure that they can scale and ward off denial of service attacks, you want to be mindful and throttle your request accordingly. One of the problems I’ve had with the REST Search API is that it is quite possible for duplicate tweets to come back. Even though you’ve already sent the status to the browser, if the tweet is in the search window, it will come back in the results again and again and there’s really no way I’ve found to reliably avoid this. The solution is a simple hack which caches the last StatusId, which is a unique identifier for each tweet and compares it with the latest retrieved tweet. If they match, I simply ignore the tweet; otherwise I send the status to the client. That’s really all there is to it, and I could have just as easily opted for the ASP.NET WebSocketHandler which would resemble the same approach. As you can see in Listing 7, the StreamManager exposes a method called GetSimpleStatus which returns an instance of SimpleStatus which is custom DTO I created to simplify the JSON that is returned as I am really only interested in a few fields: while (true) { SimpleStatus simpleStatus = mgr.GetSimpleStatus(hashTag); //Dedupuping hack } I serialize the SimpleStatus DTO to JSON to make it easy for the browser to work with, focusing only on the properties that I need to reduce bandwidth and complexity: public class SimpleStatus { public string User; public string UserId; public string StatusId; public string Status; public string ProfileUrl; public string MediaUrl; } The GetSimpleStatus method, shown in full in Listing 6, starts with a call to a private method I wrote for encapsulating the authentication goo so that it is reusable. Linq2Twitter makes this very simple by providing an abstraction of the authorization token with the SingleUserAuthorizer class and once I have an instance of it, I can pass it into a new TwitterContext, which provides most of the API knobs I need to assemble my request: var auth = GetAuthorizer(); TwitterContext ctx = new TwitterContext(auth, "", ""); var tweets = from tweet in ctx.Search where tweet.Type == SearchType.Search && tweet.Hashtag == hashTag && tweet.ResultType == ResultType.Recent select tweet; Each of the properties are well documented on the Twitter docs, but in a nutshell, I’m telling Linq2Twitter to prepare a REST API call using the Search API and include any tweets that have the hashtag that is passed in from the WebSocketService code. From there, I simply enumerate the projection, grab the top tweet and return it to the StatusStreamService. The StatusStreamingClient The client is very similar to the one I demonstrated earlier, but I’ve added some bells and whistles (borrowed from designers far more talented at Web UI than me) to show what is possible by combining the modern user experience features of HTML5 and JQuery with the power of WebSockets. The structure of the status streaming client is simple. As shown in Listing 8, I’m leveraging just a few HTML elements as placeholders for some assets that will be added via CSS. Each of the divs below act as a container, and when my StatusStreamService has a tweet to share, the status will appear in the div element with the class name of “quote”: <body> <div id="wrapper"> <div id="content"> <div id="quotebox"> <p class="quotemark leftquote">‘‘</p> <div class="quote"></div> <p class="quotemark rightquote">’’</p> </div> </div> </div> </body> As with the previous simple eventing sample, the client is wired up via JavaScript using the native WebSocket support in IE 10, Chrome 13, etc. $(document).ready(function() { var connection; var host = "ws://localhost/StatusStreamService /StatusStreamService.svc"; connection = new WebSocket(host); connection.onmessage = function (message) { var simpleStatus = window.JSON.parse(message.data); startAnimation(simpleStatus.User, simpleStatus.Status); } }); Since I chose WCF for this implementation, I’ve hosted the service in IIS/AppFabric using the same custom factory I demonstrated in the initial WCF WebSocketService example above, and the host variable is accordingly set to the URI of the service in IIS of my dev machine. Once the handshake is complete and the connection is established, I’m ready to start receiving messages. Each message will be a JSON serialized instance of SimpleStatus which I parse into a literal and pass it into a JavaScript function called “startAnimation”. This is where a lot of the magic happens, including nice animation and fade effects of each status as it arrives on the browser. I’d like to give credit to Marco Kuiper (marcofolio.net) who provided the initial design theme and Tory Douglas, Principal Consultant at Neudesic, who helped me bend the JQuery to my will. I won’t cover the details of the rest of the JavaScript/JQuery here, but the full code is available in the samples download so please feel free to dig in. If I browse to the Default.html page, within seconds, I start seeing status updates for the given hashtag (in this case #winrt) as shown in Figure 6. As you can see, WebSockets brings a whole new way of providing real-time messaging to your browser-based applications and as I’ve demonstrated, has broad support in browsers like IE 10, Chrome 13, etc. There is a ton more that you can do with WebSockets such as stream binary data, XML, as well as send messages from the client to the browser, of course. The Microsoft.WebSockets namespace makes it easy to get up and running, and if you ever need more control over your implementation, you can always dig deeper into the System.Net.WebSockets library which is used by both WebSocketHandler and WebSocketService under the hood. Bonus: Fun with WebSockets and Node.js So you may be thinking, “I want the WebSocket server sauce, but I’m not yet running Windows 8 or won’t be for a while.” One of the great things about working with standards-based technology is that even though it takes time for the standardization dust to settle sometimes, a well-defined, public specification makes it possible for anyone, be it a corporation or an individual, to implement the specification. Just as Microsoft has done in .NET 4.5, there are a number of other options available for building solutions with WebSockets regardless of platform. In fact, there are some really smart guys at Microsoft building a framework called SingalR that enables you to build real-time browser apps that will automatically use XHR, long polling and WebSockets depending on the capabilities of the client (browser) and the service you are consuming. Node.js has recently gained tremendous popularity among web developers for a number of reasons. In addition to brining JavaScript, the world’s most popular scripting language, to the server side, it is very well suited for implementing a WebSocket server due to its de-facto asynchronous programming model, which is highly optimized for IO operations. Node.js is built on a single threaded, asynchronous programming model which leverages OS resources for completing IO operations. You could write an entire book on the topic of Node.js alone, and I’ve provided some good references in the sidebar if you’d like to learn more about Node.js. For the purposes of this article, I want to show you what an alternate implementation in Node.js looks like and how, regardless of the back-end, the client API, courtesy of the W3C, is always consistent. To do so, I’d like to take the StatusStreamingService sample further by applying the same concepts, but using Node.js as a WebSocket server and switching out the Twitter Search API with the Streaming API. In addition, I’ll bring in any photos that are posted with a query term/hashtag along with the corresponding status updates to create a live stream of Polaroids. Exploring PolaroidStream with Node.js and WebSocket.IO One of the reasons Node.js has surged in popularity, in addition to the characteristics that will whet any nerd’s appetite, is the incredibly rich community of developers that are building libraries (called modules in Node.js parlance) for Node. You can go out to the nodejs.org website and GitHub and find thousands of modules for building MVC-style web apps to providing low-level messaging support. One of my favorite Node.js modules is Socket.IO. Written by Guillermo Rauch, Socket.IO provides a very flexible way of taking advantage of XHR, long polling and WebSockets in your applications. The module provides both a server-side and client-side API for building robust back-end services that greatly simplify delivering real-time messaging within your solutions. Guillermo has also written a module that is fully compatible with the W3C WebSockets API called WebSocket.IO that allows you to build back-end services on WebSockets just as we have with ASP.NET and WCF 4.5. In order to further showcase the strength of WebSockets, I want to briefly compare and contrast the Twitter REST Search API with the Twitter Streaming API. As we saw in the StatusStreamingService/Client sample, the biggest benefit that WebSockets provides is that it enables true, real-time communication by upgrading an HTTP/S request to a direct connection without resorting to polling patterns and thus allowing you to push event aggregation to the server. On the server side, we still need a way to aggregate the events, and perhaps in a perfect world, all endpoints are connected via WebSockets, but until then, middleware is the key. In the StatusStreamingService sample, we leveraged polling in a while loop to make a REST API call every five seconds. Some polls result in a status being returned while others might not. You could hook into these events of interest in a number of ways. One approach might be to have some other middleware component poll the REST API and publish each tweet to a broker to which our StatusStreamService subscribes. Another approach might be Twitter notifying the StatusStreamService that it has new tweets and only then, issuing a call to the Search API. All of these approaches have their strengths and weaknesses depending on goals such as throughput, latency, scalability, etc. One option we haven’t discussed in detail is streaming. Streaming implementations vary depending on the transport protocol, but Twitter leverages HTTP to support streaming. By taking advantage of HTTP chunking, as a Twitter API subscriber, you can issue an initial request for tweets that match some filter and Twitter will hold on to that connection as long as there are tweets to send. From an HTTP perspective, this is actually one request that consists of several parts, or chunks. This is very powerful, because unlike the REST Search API, which by its very nature is somewhat latent, the Streaming API is essentially a fire hose, delivering each tweet in a series of chunks as close to the same time that the Twitter servers receive each tweet. Figure 7 demonstrates how the Streaming API works. The Twitter Streaming API supports three types of streams as defined on dev.twitter.com: - Public streams: Streams of the public data flowing through Twitter. Suitable for following specific users or topics, and data mining. - User streams: Single-user streams, containing roughly all of the data corresponding with a single user’s view of Twitter. - Site streams: The multi-user version of user streams. Site streams are intended for servers which must connect to Twitter on behalf of many users. Now, let’s have a little fun with WebSockets and Node.js to see how all of this fits together. Node Server My goal with the service is to grab as many tweets as are available given a hash tag or filter and also bring back any photos that were posted with the tweet. The URI below shows what a request on the Public stream for all tweets with the term MTB and twitpic: =MTB&track=twitpic -u USERNAME:PASSWORD To get a sense of how busy this stream is, you can test the request by using a tool like Fiddler or cURL. Figure 8 shows the result of issuing the request above with cURL: If it was possible to show an animated GIF in this magazine, you would see constant scrolling action as each tweet is firehosed to the client. This is a good example of where having a broker like Azure Service Bus or RabbitMQ might make sense so that you don’t bring down your client. With our URL tested and ready to go, let’s take a look at the Node.js WebSocket server I wrote in Listing 9. The first thing I do is register the WebSocket.IO and HTTP modules with Node.js: var ws = require('websocket.io'), server = ws.listen(8080), http = require('https'); The HTTPS module is required to support connecting to the Twitter Streaming API which requires SSL. When the server starts and establishes a WebSocket connection with the browser, Node prepares the request to Twitter using the same URL as above by initializing the options variable: server.on('connection', function (socket) { var options = { host: 'stream.twitter.com', port: 443, path: "/1/statuses/filter.json?track=VAZ303% 20twitpic&track=VAZ303%20photo &track=devconnections% 20twitpic&track=devconnections%20photo", method: 'GET', headers:{ "Authorization":"Basic (CREDS)" } }; (More Code) Next, it makes the request to the Streaming API, passing in the options variable along with a callback that handles the response (res). From there, I set the encoding on the response to utf8 and log the HTTP response code and the headers: var req = http.request(options, function(res) { res.setEncoding('utf8'); console.log('STATUS: ' + res.statusCode); console.log('HEADERS: ' + JSON.stringify(res.headers)); Next, for each chunk in the response, I check to see how many browsers have established a WebSocket connection to me and multicast each chunk to each browser: res.on('data', function (chunk) { console.log('BODY: ' + chunk); var s; for(i=0;i<sockets.length;i++) { sockets[i].send(chunk); } }); That’s all there is to it. With just a few lines of JavaScript, we have a full blown WebSockets server that leverages the Twitter Streaming API. To start your Node server, simply open a command prompt and type: node server.js As shown in Figure 9, the node.js server running WebSocket.IO is up and running. HTML5 Client With the service done, let’s review the client. The client takes advantage of some nice HTML5 features such as CSS3 rotate and box shadow. Again, I tip my hat to marcofolio.net for providing the CSS and JavaScript that served as the foundation for this client sample. Listing 10 shows the Default.html page, and once again, we’re using HTML only for the structure: <body> <input id="buttonAddPolaroid" value="Connect" type="button" class="btn" /> <div id="polaroidContainer"/> </body> The button is there to allow me to establish the connection to the Node.js WebSocket service and again, I’m using a div element as a container that will hold the images and tweet status for every tweet that is received. The idea is to provide a user experience similar to sitting at a coffee table and going through a shoe box of Polaroid pictures. You grab a stack and start tossing them on the table as you go. The entire implementation is provided in the download on code-magazine.com, and for the purposes of this sample, I’ll highlight the code for connecting to and processing each chunk that is returned by the Node.js server. As with the previous samples, the first thing I want to do is ensure that my browser supports WebSockets: $(document).ready(function () { if (Modernizr.webjsockets) { $("#messages").append("WebSockets is supported!"); } If it does, then I can confidently click the Connect button to establish a connection to the WebsSocket server. I’ve set a host variable to the URL for my node.js server and created a connection: //wire up button event $('#buttonAddPolaroid').click(function () { var count; var connection; var host = "ws://localhost:8080/"; connection = new WebSocket(host); If the connection succeeds, I change the color of the button text to green to provide an indication that all is well: connection.onopen = function () { $(".btn").css("color", "green"); }; From there, I wait for the first message and convert it into a literal so I can easily check the fields to determine if the tweet has an image: connection.onmessage = function (message) { tweet = window.JSON.parse(message.data); var imgUrl; if(tweet.entities.urls.length !==0){ imgUrl= GetImgUrl(tweet,"twitpic"); } else { // Naive assumption that image service must // be native twitter. imgUrl= GetImgUrl(tweet,"twitter"); } Since each image service handles URLs different, I have a helper that figures out the render URL which I’ll use next. Of course, to truly cover the breadth of possible images, I’d want to support more than just TwitPic and Twitter’s photo service. I have another helper method called addNewPolaroid that takes the image’s URL and corresponding status from the tweet and renders it to the div tag: addNewPolaroid(imgUrl, 'Status from ' + tweet.user.screen_name, '@' + tweet.user.screen_name + ':' + tweet.text); The implementation of the addNewPolaroid function is pretty straightforward: $('#polaroidContainer').append('<div class="polaroid"><img src="' + src + '" alt="' + alt + '" /><p>' + title + '</p></div>'); I’m using JQuery to simply append an image and the text from the tweet. However, from there, CSS formats the image and text into what resembles a really cool Polaroid picture and from there some more JQuery is applied to support the ability to select any Polaroid and bring it into focus as shown in Figure 8. Conclusion As you can see, the WebSocket protocol opens up a ton of new opportunities for web, mobile and enterprise developers. While the focus of this article has been on programming WebSockets for the browser, WebSockets is just as applicable in native application development be it WPF, Metro or composite services. WebSockets is an evolution for web applications because it improves upon contemporary approaches for delivering near-real-time messaging such as XHR-style polling or long polling, enabling true, bi-directional duplex communication between the client and server. What makes WebSockets so compelling is the reach, both in terms of browser support and inherent firewall friendliness. Developers and organizations hosting HTTP/S services on a DMZ or hosting provider can apply the same approach to hosting WebSocket-enabled services. As you saw in the examples, writing a WebSocket service is very straightforward thanks to the investments Microsoft has made in the System.Net.WebSockets namespace and corresponding APIs provided in ASP.NET and WCF. You also saw that with most of the standards churn over, WebSockets support can now be enjoyed across all modern browsers including IE 10, Chrome 13, Safari and FireFox 11. While taking advantage of WebSockets is a .NET 4.5 and Windows 8/Server 2012 only feature, it’s a great time to dive into these pre-release technologies and start building apps and services with WebSockets. If you’re not ready to upgrade to the latest version of Windows just yet, there are a number of options available for experimentation, including Node.js. As with any new, emerging technology, this fledgling new protocol is not immune to controversy. Some web developers posit that WebSockets is not “webby” at all because it only leverages HTTP/S initially and from there is really a TCP-like socket, which might be a liability from a scalability and state management perspective. As with all technologies, we must understand the benefits and liabilities of each of our tools and choose the appropriate tool for the job. Personally, I believe that for real-time apps, gaming, B2B and composite apps, WebSockets offers a great alternative to traditional methods for exchanging real-time information, particularly on the Web, delivering a simpler programming model, lower latency and a variety of implementation options. I hope this article and corresponding samples will help jump start your discovery of WebSockets today.
https://www.codemag.com/article/1210051
CC-MAIN-2019-13
refinedweb
7,738
51.99
Can somone help me understand this I have give up trying understand how this even works and cant find anything to help me understand can anyone help me step by step please 😔 Does isEqual() method store the value in a field or somthing? I'm not getting how test() returns the boolean how is it able to see what's passed to equals method? 🤯 18 AnswersNew Answer the value passed in Predicate.isEqual is stored inside the returned object (assigned to 'str')... Predicate.isEqual is a constructor (or call a constructor) and return a new object wich is holding the string passed as argument, and (at least) one method 'test'... inside 'test' method of Predicate.isEqual object, the value passed as argument is available, so it's checked against 'test' argument... boolean is returned by 'test' method ^^ Simple way to think about it. When you call the isEqual(var) method It returns the following s->var.equals(s) This lambda expression is the body for test(s) boolean test(s){ return var.equals(s) ^ '------- var here is the string you passed to the IsEquals method } visph woah I think my brain died 😆 is it possible to create an example code that does this so I can see how it works. In my mind I'm thinking how is test which is the abstract method of interface getting hold of the argument passed to a static method isequal method are there fields involved in the interface because I can see any visph is the annoumous class the final class that implemented isEquals? or did test become isEquals? there's no 'anonymous' classes... all the more hidden classes ^^ the final class is the one used to build the object returned by Predicate.isEqual, so the object have a 'test' method implemented... visph Are you able to create an example code I'm so confused because I thought there was an inner annomus object created which was stored to str OOP: class = blueprint object = instance of class the class is stored outside of instances (but is linked and shared betweens instance objects) visph I know the difference, I know that you cant create an instance of interfaces which made me think that an annoumous inner object is created and stored in str variable str store a Predicate object with a 'test' method wich use the value passed in Predicate object: there's no needs to have a hidden inner object, but it could be implemented in that way (even there's no real reasons to do so) ^^ visph so when I called the static isEqual on the predicate interface did that implemt the test method ? I'm just trying to figure out how test was able to compare both objects.. public interface Predicate<T> { boolean test(T t); static <T> Predicate<T> isEqual(Object target) { return object -> target.equals(object); } } class IsItJava implements Predicate<String> { public boolean test(String s) { return "Java".equals(s); } } public class Program { public static void main(String[] args) { // Predicate<String> str = object -> "Java".equals(object); IsItJava str = new IsItJava(); System.out.println(str.test("Java") ); } } very simple you just create an object of predicate and give it the value. the object remembers this value you passed to when creating it. when you envoke test method. then you are using a method of that object which you created before. the method test will do the job for you so you can define target value once and test it how many times you need. Compare the string you passed with function u called Can anyone please tell me why range slider is not working Help me!
https://www.sololearn.com/Discuss/2715087/can-somone-help-me-understand-this/
CC-MAIN-2021-17
refinedweb
606
67.89
Popularity 2.2 Stable Activity 0.0 Declining 28 1 2 Programming language: Go License: MIT License tome alternatives and similar packages Based on the "Utilities" category. Alternatively, view tome alternatives based on common mentions on social networks and blogs. fzf10.0 8.5 tome VS fzf:cherry_blossom: A command-line fuzzy finder ngrok9.9 0.0 tome VS ngrokIntrospected tunnels to localhost hub9.9 3.4 tome VS hubA command-line tool that makes git easier to use with GitHub. delve9.9 9.4 tome VS delveDelve is a debugger for the Go programming language. ctop9.7 6.1 tome VS ctopTop-like interface for container metrics excelize9.7 8.8 tome VS excelizeGolang library for reading and writing Microsoft Excel™ (XLSX) files. go-torch9.7 0.0 tome VS go-torchStochastic flame graph profiler for Go programs. goreleaser9.6 9.6 tome VS goreleaserDeliver Go binaries as fast and easily as possible GJSON9.6 7.4 tome VS GJSONGet JSON values quickly - JSON parser for Go wuzz9.6 1.2 tome VS wuzzInteractive cli tool for HTTP inspection usql9.4 9.2 tome VS usqlUniversal command-line interface for SQL databases xlsx9.4 6.2 tome VS xlsxGo (golang) library for reading and writing XLSX files. peco9.4 4.9 tome VS pecoSimplistic interactive filtering tool resty9.3 6.6 tome VS restySimple HTTP and REST client library for Go godropbox9.2 0.0 tome VS godropboxCommon libraries for writing Go services/applications. Task9.0 8.7 tome VS TaskA task runner / simpler Make alternative written in Go godotenv9.0 2.4 tome VS godotenvA Go port of Ruby's dotenv library (Loads environment variables from `.env`.) hystrix-go9.0 0.0 tome VS hystrix-goNetflix's Hystrix latency and fault tolerance library, for Go gorequest8.9 0.0 tome VS gorequestGoRequest -- Simplified HTTP client ( inspired by nodejs SuperAgent ) goreporter8.8 0.0 tome VS goreporterA Golang tool that does static analysis, unit testing, code review and generate code quality report. minify8.7 8.6 tome VS minifyGo minifiers for web formats go-funk8.7 6.2 tome VS go-funkA modern Go utility library which provides helpers (map, find, contains, filter, ...) mc8.6 8.9 tome VS mcMinIO Client is a replacement for ls, cp, mkdir, diff and rsync commands for filesystems and object storage. panicparse8.6 6.2 tome VS panicparseCrash your app in style (Golang) gojson8.6 0.0 tome VS gojsonAutomatically generate Go (golang) struct definitions from example JSON mergo8.3 3.1 tome VS mergoMergo: merging Go structs and maps since 2013. grequests8.2 0.0 tome VS grequestsA Go "clone" of the great and famous Requests library mole8.0 7.5 tome VS moleCLI application to create ssh tunnels focused on resiliency and user experience. filetype8.0 4.0 tome VS filetypeFast, dependency-free Go package to infer binary file types based on the magic numbers header signature spinner8.0 4.8 tome VS spinnerGo (golang) package with 80 configurable terminal spinner/progress indicators. sling7.9 1.1 tome VS slingA Go HTTP client library for creating and sending API requests mmake7.9 0.0 tome VS mmakeModern Make boilr7.9 0.0 tome VS boilr:zap: boilerplate template manager that generates files or directories from template repositories beaver7.7 4.4 tome VS beaver💨 A real time messaging system to build a scalable in-app notifications, multiplayer games, chat apps in web and mobile apps. coop7.7 0.0 tome VS coopCheat sheet for some of the common concurrent flows in Go go-underscore7.7 0.0 tome VS go-underscoreHelpfully Functional Go - A useful collection of Go utilities. Designed for programmer happiness. jump7.7 4.1 tome VS jumpJump helps you navigate faster by learning your habits. ✌️ circuitbreaker7.6 0.0 tome VS circuitbreakerCircuit Breakers in Go JobRunner7.5 0.0 tome VS JobRunnerFramework for performing work asynchronously, outside of the request flow create-go-app7.5 8.8 tome VS create-go-app✨ Create a new production-ready project with backend, frontend and deploy automation by running one CLI command! gentleman7.4 2.8 tome VS gentlemanPlugin-driven, extensible HTTP client toolkit for Go git-time-metric7.4 0.0 tome VS git-time-metricSimple, seamless, lightweight time tracking for Git gron7.4 0.0 tome VS grongron, Cron Jobs in Go. goreq7.3 0.0 tome VS goreqMinimal and simple request library for Go language. immortal7.2 0.0 tome VS immortal⭕ A *nix cross-platform (OS agnostic) supervisor csvtk7.1 7.0 tome VS csvtkA cross-platform, efficient and practical CSV/TSV toolkit in Golang mimetype7.0 8.0 tome VS mimetypeA fast Golang library for media type and file extension detection, based on magic numbers pester6.9 0.0 tome VS pesterGo (golang) http calls with retries and backoff godaemon6.8 1.5 tome VS godaemonDaemonize Go applications deviously. httpcontrol6.7 0.0 tome VS httpcontrolPackage httpcontrol allows for HTTP transport level control around timeouts and retries. Do you think we are missing an alternative of tome or a related project? Popular Comparisons README Package tome was designed to paginate simple RESTful APIs. Installation go get -u github.com/cyruzin/tome Usage To get started, import the tome package and initiate the pagination: import "github.com/cyruzin/tome" // Post type is a struct for a single post. type Post struct { Title string `json:"title"` Body string `json:"body"` } // Posts type is a struct for multiple posts. type Posts []*Post // Result type is a struct of posts with pagination. type Result struct { Data *Posts `json:"data"` *tome.Chapter } // GetPosts gets the latest 10 posts with pagination. func GetPosts(w http.ResponseWriter, r *http.Request) { // Creating a tome chapter with links. chapter := &tome.Chapter{ // Setting base URL. BaseURL: "", // Enabling link results. Links: true, // Page that you captured in params inside you handler. NewPage: 2, // Total of pages, this usually comes from a SQL query total rows result. TotalResults: model.GetPostsTotalResults(), } // Paginating the results. if err := chapter.Paginate(); err != nil { w.WriteHeader(http.StatusUnprocessableEntity) // Setting status 422. json.NewEncoder(w).Encode(err) // Returning JSON with an error. return } // Here you pass the offset and limit. database, err := model.GetPosts(chapter.Offset, chapter.Limit) if err != nil { w.WriteHeader(http.StatusUnprocessableEntity) // Setting status 422. json.NewEncoder(w).Encode(err) // Returning JSON with an error. return } // Mocking results with pagination. res := &Result{Data: database, Chapter: chapter} w.WriteHeader(http.StatusOK) // Setting status 200. json.NewEncoder(w).Encode(res) // Returning success JSON. } Output: { "data": [ { "title": "What is Lorem Ipsum?", "body": "Lorem Ipsum is simply dummy text of the printing and..." }, { "title": "Why do we use it?", "body": "It is a long established fact that a reader will be..." } ], "base_url": "", "next_url": "", "prev_url": "", "per_page": 10, "current_page": 2, "last_page": 30, "total_results": 300 } Performance Without links: With links: *Note that all licence references and agreements mentioned in the tome README section above are relevant to that project's source code only.
https://go.libhunt.com/tome-alternatives
CC-MAIN-2021-43
refinedweb
1,155
51.75
Apps (applications) are packaged standalone executables that are versioned. They reside in a global app namespace, and multiple versions can be created for a single app name. Developers control the list of authorized users that can find and run the app. Developers can build a new app version, and publish it. Publishing a version makes that version available (visible) to the authorized users, and restricts further modifications to it. Users can choose to run the most recently published version, or any other previously published version. The following table summarizes the differences between applets and apps. During early development, applets are a convenient way to experiment with analyses inside own projects. Once iterations are over and applets need to be locked down (and perhaps disseminated to a wider audience), transitioning to apps is an attractive option: Once published, apps cannot be modified. They can only be updated by newer versions, but the system keeps all previous versions (and these are still accessible by default, unless the app author decides to deprecate some versions). Apps can carry their own assets in a private container (a kind of read-only project). This enhances reproducibility and minimizes risks. Apps can be instantly shared across a list of users. In combination with being self-sufficient by storing their own assets, and being locked down, this makes it a more convenient choice for sharing them with less sophisticated users (who only need to run them). To transition an applet into an app, follow these steps: Think of a name for your app. Since some app names (for example: "bwa", "fastqc", etc.) are already taken, avoid polluting the global namespace further, by introducing an additional prefix of your own as a naming convention; for example, the fictitious "Center for Cancer Informatics" could name its apps "cci-bwa", "cci-fastqc", etc. The invocation would then look like dx run app-cci-bwa/1.2, etc. Edit the name in dxapp.json. Think of an initial version number for your app. Choosing a versioning convention is very important for you to perform meaningful app updates later. DNAnexus strongly suggests the Semantic Versioning conventions. Under those conventions, your first production version should be "1.0.0". In dxapp.json, add a key called "version" with a string value equal to the version you chose ("1.0.0"). For a trivial update (such as a bugfix), you should later increase it to "1.0.1". For a minor update which is backwards compatible (such as updating the underlying software version of bwa), you should increase it to "1.1.0". For a major upgrade such as a backwards-incompatible change in the input/output spec, you should increase it to "2.0.0", etc. Decide if your app will be open source, i.e. if you want users to be able to retrieve the shell script and any other resources associated with the app. In dxapp.json, add a key called "openSource" with a boolean value of true or false. Decide who is the app author. Just like applets, apps require storage for their resources and assets, and for that storage they will be billed on a monthly basis to the billing account of whomever is the original author of each app. Additional authors (users who can update the app by rebuilding a newer version) can be added or removed at any time, but it is the billing account of the original author that will always be associated with billing of any storage for this app. Do not author apps under a trial account. Whoever you decide to be the original app author, ask them to run the very first dx build of the app (see Building the App below). Decide which users will be allowed to access your app. The app author always has access to the app. If you intend to run the apps under the same user as the one authoring them, you do not need to do anything. Otherwise, in dxapp.json, add a key called "authorizedUsers" with a value being an array of strings, corresponding to the entities allowed to run the app. Entities are encoded as user-username for single users, or org-orgname for organizations (i.e. "authorizedUsers": ["user-george", "org-cci"]). You can also manage the list of authorized users at any point via dx list users, dx add users, and dx remove users. If you prefer to use dx, then omit the "authorizedUsers" key from dxapp.json altogether. (The list of authorized users is common to all versions of the app). If the app requires assets, i.e. files that reside in a project (and not inside the "./resources/" folder at dx build time), perform the following changes: In the bash script, when fetching assets, replace $DX_PROJECT_CONTEXT_ID with $DX_RESOURCES_ID, so that your script fetches the assets from the app's private container and not from inside whatever parent project it may be run. NOTE: If you want for the same script to be able to function both as an applet and as an app, then add code like the following (which introduces a new variable), and use $DX_ASSETS_ID when fetching assets: if [[ "$DX_RESOURCES_ID" != "" ]]; then DX_ASSETS_ID="$DX_RESOURCES_ID" else DX_ASSETS_ID="$DX_PROJECT_CONTEXT_ID" fi dx download "$DX_ASSETS_ID:/assets/hs37d5.fasta" Create a project with the exact structure that you want your assets to have, i.e. for the example above, create a project with a folder "/assets/" and place hs37d5.fasta inside that folder. In dxapp.json, add a key called "resources" with a string value equal to the project id of the project with the assets, i.e. "resources": "project-BBF4Jp80vVky55Vvgkb0028u" When building the app, the system will create a separate read-only copy of the project and associate it with the specific app version. You can safely modify or delete the original project, as it will not affect the app. Note that if your assets don't change across newer versions, you are welcome to keep the original project around and reuse the same project id in dxapp.json when rebuilding the app later; but the system will still create a separate read-only copy for each version. Unlike an applet, an app can be run in multiple regions, which are denoted by the enabled regions of the app. For a given app version, the set of enabled regions is immutable. To specify the enabled regions of an app, please specify the regionalOptions key in dxapp.json, according to the dxapp.json specification. To build the app, run the following command: $ dx build --app --publish The first time you run this command, the system will reserve the app-name in the global namespace, and create the first ever version of the app according to the "version" key in dxapp.json. Note also that you can still build this as an applet, by omitting the --app and --publish options. To perform subsequent updates, after make any changes to the code, increment the version in dxapp.json, and rerun the aforementioned dx build --app --publish command. In the examples above, you will notice that new app versions are built and published at the same time, by supplying the --publish option to dx build. Publishing an app version makes it available to authorized users and ensures that it can no longer be modified. DNAnexus suggests following that workflow, but it is not a requirement. You can build an app version without publishing it; simply omit the --publish option. In that case, the generated app version will only be accessible by the app developers (that is, the original app author and any users added via dx add developers), and not by authorized users. You can use that flow if you want to first test the new app version before you publish it. During testing, if you find that you must make further changes, you can update the same unpublished app version in-place, without having to generate a new version. Simply use dx build --app again, and if the version in dxapp.json matches an unpublished existing version, that version will be overwritten. Once you are happy with the app version, you can publish it by repeating the dx build command with the --publish argument. Similarly to applets, each app version receives a unique id of the form app-B80kp7j0x75Xf7V25v300QGp. It is also accessible via the "app-name/version" scheme, i.e. app-cci-bwa/1.0.0, which cannot be changed once that app version is published. Moreover, apps have the concept of a default version. If an app has multiple published versions, there is exactly one version at all times which is the default. When you run dx build --app --publish, the published app version becomes the new default. Therefore, typically the default version is whatever version was most recently published. Users refer to the default version when using "app-name" with no version qualifier, i.e. when doing dx run app-cci-bwa. Therefore, users who launch apps have a choice of whether to use the default version (by doing dx run app-cci-bwa) or to use a specific version (by doing dx run app-cci-bwa/1.0.0). If you have published a new app version (say, app-cci-bwa/1.0.1), and later realize that you have made a mistake and want to "roll back" the update, you have two options: You can create an even newer app version (say, app-cci-bwa/1.0.2) that does not have the problem and publish that. Publishing will make it the new default, so those who do "dx run app-cci-bwa" will run a working version. Mark a previous version (say, app-cci-bwa/1.0.0) as the default. You can do that with the following command: dx api app-cci-bwa/1.0.0 addTags '{"tags":["default"]}' In addition, you can deprecate an app version. As mentioned in the introduction, apps cannot be modified once published, but the system offers a feature to deprecate an app version. Deprecation of an app version means that the particular version will no longer be allowed to run. You will still be able to get some basic information for any deleted app version via dx describe, but the input/output spec, actual code and asset project of the deprecated app version will no longer be available. (See also the last paragraph of the introduction). To deprecate an app version, that version must not be the default. If you need to deprecate the default version of an app, you must first either publish another version and make it default, or mark some other version as the default (as discussed above). You can deprecate an app version with the following command: dx api app-cci-bwa/1.0.0 delete '{}' Last edited by George Asimenos (george), 2017-05-09 02:03:36
https://wiki.dnanexus.com/Developer-Tutorials/Transitioning-from-Applets-to-Apps
CC-MAIN-2018-22
refinedweb
1,808
62.98
As I said a couple posts ago, there are other topics I’d like to consider during this attempt at the 180Days challenge. One of them is the idea of Test Driven Development, or in other words, ‘test before you publish’. Getting into a good habit of testing everything before it goes out the door is important for a number of reasons, but I won’t get into those here. Instead, I’ll mention what I’m planning to use for testing all of the Django code I’m writing as part of this Udemy class- Selenium with Python. Selenium is a browser test framework, though coupled with Python, it becomes a robust test ‘gate’, or shall I say, an ‘all tests must pass before publishing it’ tool. Here’s a small sample that tests if you have Selenium working on your system: from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Chrome() driver.get("") assert "Python" in driver.title elem = driver.find_element_by_name("q") elem.send_keys("pycon") elem.send_keys(Keys.RETURN) assert "No results found." not in driver.page_source driver.close() “Test, you must…” –Yoda Jenkins, Lead QA Tester
https://jasondotstar.com/Day-6-Toying-With-TDD.html
CC-MAIN-2020-29
refinedweb
193
65.42
Subject: [Boost-users] [gil] new io extension From: Christian Henning (chhenning_at_[hidden]) Date: 2008-09-23 13:56:29 Hi there, I've been busy working on the next version of gil's IO extension. The last iteration was very slow to compile especially when reading or writing tiff images. This and other problems have been fixed. As a result compilation is a lot faster but also a lot more tiff formats can now be read or written. Please grab the latest from: This download is a little bigger than to usual since there are test images included. Please see my original announcement below for documentation. The goal of this extension is to substitute the original io lib which is part of gil, and though shipped with boost. To do this a review is probably needed. I'll work on a proper documentation as the next step. Can anyone tell me where to find a tutorial for creating quickdoc? There is one problem I cannot fix and would like to ask the experts on this list. Please read the readme.txt in libs\gil\io_new\unit_test which explains the scenario. This problem has something to do with erroneously choosing the wrong functions by the compiler. The enable_if metafunction doesn't work correctly here. As usual any feedback is welcome. Thanks, Christian ------------------------------------------------------- Hi there, over the past few months I have been working on a new version of gil's io extension. In this release I've added support for: * more image formats, most importantly bit_aligned images ( like 1bit images ), * reading of subimages like one row at a time for huge images, * reading image information, only. * having a unified interface for all sorts of devices, subimages, and converter objects. So far, png, jpeg, and tiff images are supported. In the not so distant future there will be bmp and pnm images added. You can grab the current version via subversion from: If desired I can supply a zip file to the vault. The images libraries can be grabbed from: ( isn't responding ) It's highly advised to built your own version of the image lib you need. I installed the GNU's binaries for Windows, but unfortunately, I only had problems with them. The library is still header-only. When using boost::filesystem's paths, of course, you add a dependency to the filesystem lib. The Read Interface ------------------------ Reading an image can be done in multiple ways depending on what your need is. Like the old io lib there are 4 different ways of reading images. As there are: * read_image: Read image, the right amount of memory will be allocated beforehand. The supplied image type needs to be compatible with actual image file. * read_view: Read image, the image's memory must be already allocated. The supplied image type needs to be compatible with actual image file. * read_and_convert_image: Same as read_image, but the user can specify an color converter. * read_and_convert_view: Same as read_view, but the user can specify an color converter. There is a new function for reading image information. Since every image format has it's own set of tags, the library defines seperate image_read_info<format_tag> structures. Please see, for instance, the jpeg_tags.hpp header. As for the function parameters the following sections in the same order. 1. String for file name ( std::string, std::wstring, char*, boost::filesystem::path ) OR devices ( FILE*, TIFF*, std::ifstream ) 2. Image or view type 3. Optional: Subimage definition ( top_left corner + dimensions ) 4. Optional: Color converter. This parameter is only valid for converting functions. The default parameter is gil's default_color_converter 5. Format Tag. Reading the image information only takes parameter 1 and 5. Here are some examples for reading images: #include <boost/gil/extension/io_new/jpeg_read.hpp> // Read image information std::string filename( "..\\test_images\\jpg\\found online\\test.jpg" ); image_read_info< jpeg_tag > info = read_image_info( filename, tag_t() ); // Read image FILE* file = fopen( filename.c_str(), "rb" ); rgb8_image_t img; read_image( file, img, tag_t() ); // Read image ifstream in( filename.c_str(), ios::in | ios::binary ); rgb8_image_t img( 136, 98 ); read_view( in, view( img ), tag_t() ); // Read and convert image, use gil's default converter. rgb8_image_t img; read_and_convert_image( filename, img, tag_t() ); // Read a 10x10 subimage, starting from the top_left corner. rgb8_image_t img; read_image( filename, img, point_t( 0,0 ), point_t( 10, 10 ), tag_t() ); I have created some header files to support the old read interface. For example: #include <boost/gil/extension/io_new/jpeg_io_old.hpp> std::string filename( "..\\test_images\\jpg\\found online\\test.jpg" ); rgb8_image_t img( 136, 98 ); jpeg_read_view( filename, view( img ) ); The Write Interface --------------------------- The write interface is a much simpler than the read interface. Here, we only have write_view as the entry point. As for parameters the following sections are supported: 1. String for file name ( std::string, std::wstring, char*, boost::filesystem::path ) or devices ( FILE*, TIFF*, std::ofstream ) 2. View type 3. Optional: image_write_info< FormatTag > 4. Format type The image_write_info<...> structure defines a format specific properties which can be used when writing an image. A good example is jpeg's image quality value. Examples: #include <boost/gil/extension/io_new/tiff_write.hpp> // Write image string filename( "..\\test\\tiff\\test1.tif" ); gray8_image_t img( 320, 240 ); write_view( filename, view( img ), tiff_tag() ); // Write image string filename( "..\\test\\tiff\\test2.tif" ); TIFF* file = TIFFOpen( filename.c_str(), "w" ); rgb8_image_t img( 320, 240 ); write_view( file, view( img ), tag_t() ); I have created some header files to support the old write interface. For example: #include <boost/gil/extension/io_new/tiff_io_old.hpp> string filename( "..\\test\\tiff\\test3.tif" ); gray8_image_t img( 320, 240 ); tiff_write_view( filename, view( img ) ); The primary goal of this new version is to eventually replace the old io library which is part of boost's distribution. Do I need to apply for a review? I'm not sure how to go from here. I know there is still some work to do before a possible review. Documentation is lacking and support for dynamic image needs to be added. But what I want for now is to get some feedback on design and implementation. I would like to thank Lubomir Bourdev and Andreas Pokorny for their most valueable input.
http://lists.boost.org/boost-users/2008/09/40737.php
CC-MAIN-2013-20
refinedweb
1,010
59.3
.\" RE 1" .TH PERLRE 1 "2004-11-05" "perl v5.8.6" "Perl Programmers Reference Guide" .SH "NAME" perlre \- Perl regular expressions .SH "DESCRIPTION" .IX Header "DESCRIPTION" This page describes the syntax of regular expressions in Perl. .PP If you haven't used regular expressions before, a quick-start introduction is available in perlrequick, and a longer tutorial introduction is available in perlretut. .PP For reference on how regular expressions are used in matching operations, plus various examples of the same, see discussions of \&\f(CW\*(C`m//\*(C'\fR, \f(CW\*(C`s///\*(C'\fR, \f(CW\*(C`qr//\*(C'\fR and \f(CW\*(C`??\*(C'\fR in \*(L"Regexp Quote-Like Operators\*(R" in perlop. .PP Matching operations can have various modifiers. Modifiers that relate to the interpretation of the regular expression inside are listed below. Modifiers that alter the way a regular expression is used by Perl are detailed in \*(L"Regexp Quote-Like Operators\*(R" in perlop and \&\*(L"Gory details of parsing quoted constructs\*(R" in perlop. .IP "i" 4 .IX Item "i" Do case-insensitive pattern matching. .Sp If \f(CW\*(C`use locale\*(C'\fR is in effect, the case map is taken from the current locale. See perllocale. .IP "m" 4 .IX Item "m" Treat string as multiple lines. That is, change \*(L"^\*(R" and \*(L"$\*(R" from matching the start or end of the string to matching the start or end of any line anywhere within the string. .IP "s" 4 .IX Item "s" Treat string as single line. That is, change \*(L".\*(R" to match any character whatsoever, even a newline, which normally it would not match. .Sp The \f(CW\*(C`/s\*(C'\fR and \f(CW\*(C`/m\*(C'\fR modifiers both override the \f(CW$*\fR setting. That is, no matter what \f(CW$*\fR contains, \f(CW\*(C`/s\*(C'\fR without \f(CW\*(C`/m\*(C'\fR will force \&\*(L"^\*(R" to match only at the beginning of the string and \*(L"$\*(R" to match only at the end (or just before a newline at the end) of the string. Together, as /ms, they let the \*(L".\*(R" match any character whatsoever, while still allowing \*(L"^\*(R" and \*(L"$\*(R" to match, respectively, just after and just before newlines within the string. .IP "x" 4 .IX Item "x" Extend your pattern's legibility by permitting whitespace and comments. .PP These are usually written as "the \f(CW\*(C`/x\*(C'\fR modifier", even though the delimiter in question might not really be a slash. Any of these modifiers may also be embedded within the regular expression itself using the \f(CW\*(C`(?...)\*(C'\fR construct. See below. .PP The \f(CW\*(C`/x\*(C'\fR modifier itself needs a little more explanation. It tells the regular expression parser to ignore whitespace that is neither backslashed nor within a character class. You can use this to break up your regular expression into (slightly) more readable parts. The \f(CW\*(C`#\*(C'\fR character is also treated as a metacharacter introducing a comment, just as in ordinary Perl code. This also means that if you want real whitespace or \f(CW\*(C`#\*(C'\fR characters in the pattern (outside a character class, where they are unaffected by \f(CW\*(C`/x\*(C'\fR),. .Sh "Regular Expressions" .IX Subsection "Regular Expressions" The patterns used in Perl pattern matching derive from supplied in the Version 8 regex routines. (The routines are derived (distantly) from Henry Spencer's freely redistributable reimplementation of the V8 routines.) See \*(L"Version 8 Regular Expressions\*(R" for details. .PP In particular the following metacharacters have their standard \fIegrep\fR\-ish meanings: .PP .Vb 7 \& \e Quote the next metacharacter \& ^ Match the beginning of the line \& . Match any character (except newline) \& $ Match the end of the line (or before newline at the end) \& | Alternation \& () Grouping \& [] Character class .Ve .PP By default, the \*(L"^\*(R" character is guaranteed to match only the beginning of the string, the \*(L"$\*(R" character only the end (or before the newline at the end), and Perl does certain optimizations with the assumption that the string contains only one line. Embedded newlines will not be matched by \*(L"^\*(R" or \*(L"$\*(R". You may, however, wish to treat a string as a multi-line buffer, such that the \*(L"^\*(R" will match after any newline within the string, and \*(L"$\*(R" will match before any newline. At the cost of a little more overhead, you can do this by using the /m modifier on the pattern match operator. (Older programs did this by setting \f(CW$*\fR, but this practice is now deprecated.) .PP To simplify multi-line substitutions, the \*(L".\*(R" character never matches a newline unless you use the \f(CW\*(C`/s\*(C'\fR modifier, which in effect tells Perl to pretend the string is a single line\*(--even if it isn't. The \f(CW\*(C`/s\*(C'\fR modifier also overrides the setting of \f(CW$*\fR, in case you have some (badly behaved) older code that sets it in another module. .PP The following standard quantifiers are recognized: .PP .Vb 6 \& * Match 0 or more times \& + Match 1 or more times \& ? Match 1 or 0 times \& {n} Match exactly n times \& {n,} Match at least n times \& {n,m} Match at least n but not more than m times .Ve .PP (If a curly bracket occurs in any other context, it is treated as a regular character. In particular, the lower bound is not optional.) The \*(L"*\*(R" modifier is equivalent to \f(CW\*(C`{0,}\*(C'\fR, the \*(L"+\*(R" modifier to \f(CW\*(C`{1,}\*(C'\fR, and the \*(L"?\*(R" modifier to \f(CW\*(C`{0,1}\*(C'\fR. n and m are limited to integral values less than a preset limit defined when perl is built. This is usually 32766 on the most common platforms. The actual limit can be seen in the error message generated by code such as this: .PP .Vb 1 \& $_ **= $_ , / {$_} / for 2 .. 42; .Ve .PP By default, a quantified subpattern is \*(L"greedy\*(R", that is, it will match as many times as possible (given a particular starting location) while still allowing the rest of the pattern to match. If you want it to match the minimum number of times possible, follow the quantifier with a \*(L"?\*(R". Note that the meanings don't change, just the \*(L"greediness\*(R": .PP .Vb 6 \& *? Match 0 or more times \& +? Match 1 or more times \& ?? Match 0 or 1 time \& {n}? Match exactly n times \& {n,}? Match at least n times \& {n,m}? Match at least n but not more than m times .Ve .PP Because patterns are processed as double quoted strings, the following also work: .PP .Vb 17 \& \et tab (HT, TAB) \& \en newline (LF, NL) \& \er return (CR) \& \ef form feed (FF) \& \ea alarm (bell) (BEL) \& \ee escape (think troff) (ESC) \& \e033 octal char (think of a PDP-11) \& \ex1B hex char \& \ex{263a} wide hex char (Unicode SMILEY) \& \ec[ control char \& \eN{name} named char \& \el lowercase next char (think vi) \& \eu uppercase next char (think vi) \& \eL lowercase till \eE (think vi) \& \eU uppercase till \eE (think vi) \& \eE end case modification (think vi) \& \eQ quote (disable) pattern metacharacters till \eE .Ve .PP If \f(CW\*(C`use locale\*(C'\fR is in effect, the case map used by \f(CW\*(C`\el\*(C'\fR, \f(CW\*(C`\eL\*(C'\fR, \f(CW\*(C`\eu\*(C'\fR and \f(CW\*(C`\eU\*(C'\fR is taken from the current locale. See perllocale. For documentation of \f(CW\*(C`\eN{name}\*(C'\fR, see charnames. matched. You'll need to write something like \f(CW\*(C`m/\eQuser\eE\e@\eQhost/\*(C'\fR. .PP In addition, Perl defines the following: .PP .Vb 14 \& \ew Match a "word" character (alphanumeric plus "_") \& \eW Match a non-"word" character \& \es Match a whitespace character \& \eS Match a non-whitespace character \& \ed Match a digit character \& \eD Match a non-digit character \& \epP Match P, named property. Use \ep{Prop} for longer names. \& \ePP Match non-P \& \eX Match eXtended Unicode "combining character sequence", \& equivalent to (?:\ePM\epM*) \& \eC Match a single C char (octet) even under Unicode. \& NOTE: breaks up characters into their UTF-8 bytes, \& so you may end up with malformed pieces of UTF-8. \& Unsupported in lookbehind. .Ve .PP A \f(CW\*(C`\ew\*(C'\fR matches a single alphanumeric character (an alphabetic character, or a decimal digit) or \f(CW\*(C`_\*(C'\fR, not a whole word. Use \f(CW\*(C`\ew+\*(C'\fR to match a string of Perl-identifier characters (which isn't the same as matching an English word). If \f(CW\*(C`use locale\*(C'\fR is in effect, the list of alphabetic characters generated by \f(CW\*(C`\ew\*(C'\fR is taken from the current locale. See perllocale. You may use \f(CW\*(C`\ew\*(C'\fR, \f(CW\*(C`\eW\*(C'\fR, \f(CW\*(C`\es\*(C'\fR, \f(CW\*(C`\eS\*(C'\fR, \&\f(CW\*(C`\ed\*(C'\fR, and \f(CW\*(C`\eD\*(C'\fR within character classes, but if you try to use them as endpoints of a range, that's not a range, the \*(L"\-\*(R" is understood literally. If Unicode is in effect, \f(CW\*(C`\es\*(C'\fR matches also \*(L"\ex{85}\*(R", \&\*(L"\ex{2028}, and \*(R"\ex{2029}", see perlunicode for more details about \&\f(CW\*(C`\epP\*(C'\fR, \f(CW\*(C`\ePP\*(C'\fR, and \f(CW\*(C`\eX\*(C'\fR, and perluniintro about Unicode in general. You can define your own \f(CW\*(C`\ep\*(C'\fR and \f(CW\*(C`\eP\*(C'\fR properties, see perlunicode. .PP The \s-1POSIX\s0 character class syntax .PP .Vb 1 \& [:class:] .Ve .PP is also available. The available classes and their backslash equivalents (if available) are as follows: .PP .Vb 14 \& alpha \& alnum \& ascii \& blank [1] \& cntrl \& digit \ed \& graph \& lower \& print \& punct \& space \es [2] \& upper \& word \ew [3] \& xdigit .Ve .IP "[1]" 4 .IX Item "[1]" A \s-1GNU\s0 extension equivalent to \f(CW\*(C`[ \et]\*(C'\fR, `all horizontal whitespace'. .IP "[2]" 4 .IX Item "[2]" Not exactly equivalent to \f(CW\*(C`\es\*(C'\fR since the \f(CW\*(C`[[:space:]]\*(C'\fR includes also the (very rare) `vertical tabulator', \*(L"\eck\*(R", chr(11). .IP "[3]" 4 .IX Item "[3]" A Perl extension, see above. .PP For example use \f(CW\*(C`[:upper:]\*(C'\fR to match all the uppercase characters. Note that the \f(CW\*(C`[]\*(C'\fR are part of the \f(CW\*(C`[::]\*(C'\fR construct, not part of the whole character class. For example: .PP .Vb 1 \& [01[:alpha:]%] .Ve .PP matches zero, one, any alphabetic character, and the percentage sign. .PP The following equivalences to Unicode \ep{} constructs and equivalent backslash character classes (if available), will hold: .PP .Vb 1 \& [:...:] \ep{...} backslash .Ve .PP .Vb 15 \& alpha IsAlpha \& alnum IsAlnum \& ascii IsASCII \& blank IsSpace \& cntrl IsCntrl \& digit IsDigit \ed \& graph IsGraph \& lower IsLower \& print IsPrint \& punct IsPunct \& space IsSpace \& IsSpacePerl \es \& upper IsUpper \& word IsWord \& xdigit IsXDigit .Ve .PP For example \f(CW\*(C`[:lower:]\*(C'\fR and \f(CW\*(C`\ep{IsLower}\*(C'\fR are equivalent. .PP If the \f(CW\*(C`utf8\*(C'\fR pragma is not used but the \f(CW\*(C`locale\*(C'\fR pragma is, the classes correlate with the usual \fIisalpha\fR\|(3) interface (except for `word' and `blank'). .PP The assumedly non-obviously named classes are: .IP "cntrl" 4 .IX Item "cntrl" Any control character. Usually characters that don't produce output as such but instead control the terminal somehow: for example newline and backspace are control characters. All characters with \fIord()\fR less than 32 are most often classified as control characters (assuming \s-1ASCII\s0, the \s-1ISO\s0 Latin character sets, and Unicode), as is the character with the \fIord()\fR value of 127 (\f(CW\*(C`DEL\*(C'\fR). .IP "graph" 4 .IX Item "graph" Any alphanumeric or punctuation (special) character. .IP "print" 4 .IX Item "print" Any alphanumeric or punctuation (special) character or the space character. .IP "punct" 4 .IX Item "punct" Any punctuation (special) character. .IP "xdigit" 4 .IX Item "xdigit" Any hexadecimal digit. Though this may feel silly ([0\-9A\-Fa\-f] would work just fine) it is included for completeness. .PP You can negate the [::] character classes by prefixing the class name with a '^'. This is a Perl extension. For example: .PP .Vb 1 \& POSIX traditional Unicode .Ve .PP .Vb 3 \& [:^digit:] \eD \eP{IsDigit} \& [:^space:] \eS \eP{IsSpace} \& [:^word:] \eW \eP{IsWord} .Ve .PP Perl respects the \s-1POSIX\s0 standard in that \s-1POSIX\s0 character classes are only supported within a character class. The \s-1POSIX\s0 character classes [.cc.] and [=cc=] are recognized but \fBnot\fR supported and trying to use them will cause an error. .PP Perl defines the following zero-width assertions: .PP .Vb 7 \& \eb Match a word boundary \& \eB Match a non-(word boundary) \& \eA Match only at beginning of string \& \eZ Match only at end of string, or before newline at the end \& \ez Match only at end of string \& \eG Match only at pos() (e.g. at the end-of-match position \& of prior m//g) .Ve .PP A word boundary (\f(CW\*(C`\eb\*(C'\fR) is a spot between two characters that has a \f(CW\*(C`\ew\*(C'\fR on one side of it and a \f(CW\*(C`\eW\*(C'\fR on the other side of it (in either order), counting the imaginary characters off the beginning and end of the string as matching a \f(CW\*(C`\eW\*(C'\fR. (Within character classes \f(CW\*(C`\eb\*(C'\fR represents backspace rather than a word boundary, just as it normally does in any double-quoted string.) The \f(CW\*(C`\eA\*(C'\fR and \f(CW\*(C`\eZ\*(C'\fR are just like \*(L"^\*(R" and \*(L"$\*(R", except that they won't match multiple times when the \f(CW\*(C`/m\*(C'\fR modifier is used, while \&\*(L"^\*(R" and \*(L"$\*(R" will match at every internal line boundary. To match the actual end of the string and not ignore an optional trailing newline, use \f(CW\*(C`\ez\*(C'\fR. .PP The \f(CW\*(C`\eG\*(C'\fR assertion can be used to chain global matches (using \&\f(CW\*(C`m//g\*(C'\fR), as described in \*(L"Regexp Quote-Like Operators\*(R" in perlop. It is also useful when writing \f(CW\*(C`lex\*(C'\fR\-like scanners, when you have several patterns that you want to match against consequent substrings of your string, see the previous reference. The actual location where \f(CW\*(C`\eG\*(C'\fR will match can also be influenced by using \f(CW\*(C`pos()\*(C'\fR as an lvalue: see \*(L"pos\*(R" in perlfunc. Currently \f(CW\*(C`\eG\*(C'\fR is only fully supported when anchored to the start of the pattern; while it is permitted to use it elsewhere, as in \f(CW\*(C`/(?<=\eG..)./g\*(C'\fR, some such uses (\f(CW\*(C`/.\eG/g\*(C'\fR, for example) currently cause problems, and it is recommended that you avoid such usage for now. .PP The bracketing construct \f(CW\*(C`( ... )\*(C'\fR creates capture buffers. To refer to the digit'th buffer use \e within the match. Outside the match use \*(L"$\*(R" instead of \*(L"\e\*(R". (The \&\e notation works in certain circumstances outside the match. See the warning below about \e1 vs \f(CW$1\fR for details.) Referring back to another part of the match is called a \&\fIbackreference\fR. .PP There is no limit to the number of captured substrings that you may use. However Perl also uses \e10, \e11, etc. as aliases for \e010, \&\e011, etc. (Recall that 0 means octal, so \e011 is the character at number 9 in your coded character set; which would be the 10th character, a horizontal tab under \s-1ASCII\s0.) Perl resolves this ambiguity by interpreting \e10 as a backreference only if at least 10 left parentheses have opened before it. Likewise \e11 is a backreference only if at least 11 left parentheses have opened before it. And so on. \e1 through \e9 are always interpreted as backreferences. .PP Examples: .PP .Vb 1 \& s/^([^ ]*) *([^ ]*)/$2 $1/; # swap first two words .Ve .PP .Vb 3 \& if (/(.)\e1/) { # find first doubled char \& print "'$1' is the first doubled character\en"; \& } .Ve .PP .Vb 5 \& if (/Time: (..):(..):(..)/) { # parse out values \& $hours = $1; \& $minutes = $2; \& $seconds = $3; \& } .Ve .PP Several special variables also refer back to portions of the previous match. \f(CW$+\fR returns whatever the last bracket match matched. \&\f(CW$&\fR returns the entire matched string. (At one point \f(CW$0\fR did also, but now it returns the name of the program.) \f(CW$`\fR returns everything before the matched string. \f(CW$'\fR returns everything after the matched string. And \f(CW$^N\fR contains whatever was matched by the most-recently closed group (submatch). \f(CW$^N\fR can be used in extended patterns (see below), for example to assign a submatch to a variable. .PP The numbered match variables ($1, \f(CW$2\fR, \f(CW$3\fR, etc.) and the related punctuation set (\f(CW$+\fR, \f(CW$&\fR, \f(CW$`\fR, \f(CW$'\fR, and \f(CW$^N\fR) are all dynamically scoped until the end of the enclosing block or until the next successful match, whichever comes first. (See \*(L"Compound Statements\*(R" in perlsyn.) .PP \&\fB\s-1NOTE\s0\fR: failed matches in Perl do not reset the match variables, which makes easier to write code that tests for a series of more specific cases and remembers the best match. .PP \&\fB\s-1WARNING\s0\fR: Once Perl sees that you need one of \f(CW$&\fR, \f(CW$`\fR, or \&\f(CW$'\fR anywhere in the program, it has to provide them for every pattern match. This may substantially slow your program. Perl uses the same mechanism to produce \f(CW$1\fR, \f(CW$2\fR, etc, so you also pay a price for each pattern that contains capturing parentheses. (To avoid this cost while retaining the grouping behaviour, use the extended regular expression \f(CW\*(C`(?: ... )\*(C'\fR instead.) But if you never use \f(CW$&\fR, \f(CW$`\fR or \f(CW$'\fR, then patterns \fIwithout\fR capturing parentheses will not be penalized. So avoid \f(CW$&\fR, \f(CW$'\fR, and \f(CW$`\fR if you can, but if you can't (and some algorithms really appreciate them), once you've used them once, use them at will, because you've already paid the price. As of 5.005, \f(CW$&\fR is not so costly as the other two. .PP Backslashed metacharacters in Perl are alphanumeric, such as \f(CW\*(C`\eb\*(C'\fR, \&\f(CW\*(C`\ew\*(C'\fR, \f(CW\*(C`\en\*(C'\fR. Unlike some other regular expression languages, there are no backslashed symbols that aren't alphanumeric. So anything that looks like \e\e, \e(, \e), \e<, \e>, \e{, or \-\*(L"word\*(R" characters: .PP .Vb 1 \& $pattern =~ s/(\eW)/\e\e$1/g; .Ve .PP (If \f(CW\*(C`use locale\*(C'\fR is set, then this depends on the current locale.) Today it is more common to use the \fIquotemeta()\fR function or the \f(CW\*(C`\eQ\*(C'\fR metaquoting escape sequence to disable all metacharacters' special meanings like this: .PP .Vb 1 \& /$unquoted\eQ$quoted\eE$unquoted/ .Ve .PP Beware that if you put literal backslashes (those not inside interpolated variables) between \f(CW\*(C`\eQ\*(C'\fR and \f(CW\*(C`\eE\*(C'\fR, double-quotish backslash interpolation may lead to confusing results. If you \&\fIneed\fR to use literal backslashes within \f(CW\*(C`\eQ...\eE\*(C'\fR, consult \*(L"Gory details of parsing quoted constructs\*(R" in perlop. .Sh "Extended Patterns" .IX Subsection "Extended Patterns" Perl also defines a consistent extension syntax for features not found in standard tools like \fBawk\fR and \fBlex\fR. The syntax is a pair of parentheses with a question mark as the first thing within the parentheses. The character after the question mark indicates the extension. .PP The stability of these extensions varies widely. Some have been part of the core language for many years. Others are experimental and may change without warning or be completely removed. Check the documentation on an individual feature to verify its current status. .PP A question mark was chosen for this and for the minimal-matching construct because 1) question marks are rare in older regular expressions, and 2) whenever you see one, you should stop and \&\*(L"question\*(R" exactly what is going on. That's psychology... .ie n .IP """(?#text)""" 10 .el .IP "\f(CW(?#text)\fR" 10 .IX Item "(?#text)" A comment. The text is ignored. If the \f(CW\*(C`/x\*(C'\fR modifier enables whitespace formatting, a simple \f(CW\*(C`#\*(C'\fR will suffice. Note that Perl closes the comment as soon as it sees a \f(CW\*(C`)\*(C'\fR, so there is no way to put a literal \&\f(CW\*(C`)\*(C'\fR in the comment. .ie n .IP """(?imsx\-imsx)""" 10 .el .IP "\f(CW(?imsx\-imsx)\fR" 10 .IX Item "(?imsx-imsx)" One or more embedded pattern-match modifiers, to be turned on (or turned off, if preceded by \f(CW\*(C`\-\*(C'\fR) \f(CW\*(C`(?i)\*(C'\fR at the front of the pattern. For example: .Sp .Vb 2 \& $pattern = "foobar"; \& if ( /$pattern/i ) { } .Ve .Sp .Vb 1 \& # more flexible: .Ve .Sp .Vb 2 \& $pattern = "(?i)foobar"; \& if ( /$pattern/ ) { } .Ve .Sp These modifiers are restored at the end of the enclosing group. For example, .Sp .Vb 1 \& ( (?i) blah ) \es+ \e1 .Ve .Sp will match a repeated (\fIincluding the case\fR!) word \f(CW\*(C`blah\*(C'\fR in any case, assuming \f(CW\*(C`x\*(C'\fR modifier, and no \f(CW\*(C`i\*(C'\fR modifier outside this group. .ie n .IP """(?:pattern)""" 10 .el .IP "\f(CW(?:pattern)\fR" 10 .IX Item "(?:pattern)" .PD 0 .ie n .IP """(?imsx\-imsx:pattern)""" 10 .el .IP "\f(CW(?imsx\-imsx:pattern)\fR" 10 .IX Item "(?imsx-imsx:pattern)" .PD This is for clustering, not capturing; it groups subexpressions like \&\*(L"()\*(R", but doesn't make backreferences as \*(L"()\*(R" does. So .Sp .Vb 1 \& @fields = split(/\eb(?:a|b|c)\eb/) .Ve .Sp is like .Sp .Vb 1 \& @fields = split(/\eb(a|b|c)\eb/) .Ve .Sp but doesn't spit out extra fields. It's also cheaper not to capture characters if you don't need to. .Sp Any letters between \f(CW\*(C`?\*(C'\fR and \f(CW\*(C`:\*(C'\fR act as flags modifiers as with \&\f(CW\*(C`(?imsx\-imsx)\*(C'\fR. For example, .Sp .Vb 1 \& /(?s-i:more.*than).*million/i .Ve .Sp is equivalent to the more verbose .Sp .Vb 1 \& /(?:(?s-i)more.*than).*million/i .Ve .ie n .IP """(?=pattern)""" 10 .el .IP "\f(CW(?=pattern)\fR" 10 .IX Item "(?=pattern)" A zero-width positive look-ahead assertion. For example, \f(CW\*(C`/\ew+(?=\et)/\*(C'\fR matches a word followed by a tab, without including the tab in \f(CW$&\fR. .ie n .IP """(?!pattern)""" 10 .el .IP "\f(CW(?!pattern)\fR" 10 .IX Item "(?!pattern)" A zero-width negative look-ahead assertion. For example \f(CW\*(C`/foo(?!bar)/\*(C'\fR matches any occurrence of \*(L"foo\*(R" that isn't followed by \*(L"bar\*(R". Note however that look-ahead and look-behind are \s-1NOT\s0 the same thing. You cannot use this for look\-behind. .Sp If you are looking for a \*(L"bar\*(R" that isn't preceded by a \*(L"foo\*(R", \f(CW\*(C`/(?!foo)bar/\*(C'\fR will not do what you want. That's because the \f(CW\*(C`(?!foo)\*(C'\fR is just saying that the next thing cannot be \*(L"foo\*(R"\-\-and it's not, it's a \*(L"bar\*(R", so \*(L"foobar\*(R" will match. You would have to do something like \f(CW\*(C`/(?!foo)...bar/\*(C'\fR for that. We say \*(L"like\*(R" because there's the case of your \*(L"bar\*(R" not having three characters before it. You could cover that this way: \f(CW\*(C`/(?:(?!foo)...|^.{0,2})bar/\*(C'\fR. Sometimes it's still easier just to say: .Sp .Vb 1 \& if (/bar/ && $` !~ /foo$/) .Ve .Sp For look-behind see below. .ie n .IP """(?<=pattern)""" 10 .el .IP "\f(CW(?<=pattern)\fR" 10 .IX Item "(?<=pattern)" A zero-width positive look-behind assertion. For example, \f(CW\*(C`/(?<=\et)\ew+/\*(C'\fR matches a word that follows a tab, without including the tab in \f(CW$&\fR. Works only for fixed-width look\-behind. .ie n .IP """(?<!pattern)""" 10 .el .IP "\f(CW(?<!pattern)\fR" 10 .IX Item "(?<!pattern)" A zero-width negative look-behind assertion. For example \f(CW\*(C`/(?<!bar)foo/\*(C'\fR matches any occurrence of \*(L"foo\*(R" that does not follow \*(L"bar\*(R". Works only for fixed-width look\-behind. .ie n .IP """(?{ code })""" 10 .el .IP "\f(CW(?{ code })\fR" 10 .IX Item "(?{ code })" \&\fB\s-1WARNING\s0\fR: This extended regular expression feature is considered highly experimental, and may be changed or deleted without notice. .Sp This zero-width assertion evaluates any embedded Perl code. It always succeeds, and its \f(CW\*(C`code\*(C'\fR is not interpolated. Currently, the rules to determine where the \f(CW\*(C`code\*(C'\fR ends are somewhat convoluted. .Sp This feature can be used together with the special variable \f(CW$^N\fR to capture the results of submatches in variables without having to keep track of the number of nested parentheses. For example: .Sp .Vb 3 \& $_ = "The brown fox jumps over the lazy dog"; \& /the (\eS+)(?{ $color = $^N }) (\eS+)(?{ $animal = $^N })/i; \& print "color = $color, animal = $animal\en"; .Ve .Sp Inside the \f(CW\*(C`(?{...})\*(C'\fR block, \f(CW$_\fR refers to the string the regular expression is matching against. You can also use \f(CW\*(C`pos()\*(C'\fR to know what is the current position of matching within this string. .Sp The \f(CW\*(C`code\*(C'\fR is properly scoped in the following sense: If the assertion is backtracked (compare \*(L"Backtracking\*(R"), all changes introduced after \&\f(CW\*(C`local\*(C'\fRization are undone, so that .Sp .Vb 13 \& $_ = 'a' x 8; \& m< \& (?{ $cnt = 0 }) # Initialize $cnt. \& ( \& a \& (?{ \& local $cnt = $cnt + 1; # Update $cnt, backtracking-safe. \& }) \& )* \& aaaa \& (?{ $res = $cnt }) # On success copy to non-localized \& # location. \& >x; .Ve .Sp will set \f(CW\*(C`$res = 4\*(C'\fR. Note that after the match, \f(CW$cnt\fR returns to the globally introduced value, because the scopes that restrict \f(CW\*(C`local\*(C'\fR operators are unwound. .Sp This assertion may be used as a \f(CW\*(C`(?(condition)yes\-pattern|no\-pattern)\*(C'\fR switch. If \fInot\fR used in this way, the result of evaluation of \&\f(CW\*(C`code\*(C'\fR is put into the special variable \f(CW$^R\fR. This happens immediately, so \f(CW$^R\fR can be used from other \f(CW\*(C`(?{ code })\*(C'\fR assertions inside the same regular expression. .Sp The assignment to \f(CW$^R\fR above is properly localized, so the old value of \f(CW$^R\fR is restored if the assertion is backtracked; compare \&\*(L"Backtracking\*(R". .Sp For reasons of security, this construct is forbidden if the regular expression involves run-time interpolation of variables, unless the perilous \f(CW\*(C`use re 'eval'\*(C'\fR pragma has been used (see re), or the variables contain results of \f(CW\*(C`qr//\*(C'\fR operator (see \&\*(L"qr/STRING/imosx\*(R" in perlop). .Sp This restriction is because of the wide-spread and remarkably convenient custom of using run-time determined strings as patterns. For example: .Sp .Vb 3 \& $re = <>; \& chomp $re; \& $string =~ /$re/; .Ve .Sp Before Perl knew how to execute interpolated code within a pattern, this operation was completely safe from a security point of view, although it could raise an exception from an illegal pattern. If you turn on the \f(CW\*(C`use re 'eval'\*(C'\fR, though, it is no longer secure, so you should only do so if you are also using taint checking. Better yet, use the carefully constrained evaluation within a Safe compartment. See perlsec for details about both these mechanisms. .ie n .IP """(??{ code })""" 10 .el .IP "\f(CW(??{ code })\fR" 10 .IX Item "(??{ code })" \&\fB\s-1WARNING\s0\fR: This extended regular expression feature is considered highly experimental, and may be changed or deleted without notice. A simplified version of the syntax may be introduced for commonly used idioms. .Sp This is a \*(L"postponed\*(R" regular subexpression. The \f(CW\*(C`code\*(C'\fR is evaluated at run time, at the moment this subexpression may match. The result of evaluation is considered as a regular expression and matched as if it were inserted instead of this construct. .Sp The \f(CW\*(C`code\*(C'\fR is not interpolated. As before, the rules to determine where the \f(CW\*(C`code\*(C'\fR ends are currently somewhat convoluted. .Sp The following pattern matches a parenthesized group: .Sp .Vb 9 \& $re = qr{ \& \e( \& (?: \& (?> [^()]+ ) # Non-parens without backtracking \& | \& (??{ $re }) # Group with matching parens \& )* \& \e) \& }x; .Ve .ie n .IP """(?>pattern)""" 10 .el .IP "\f(CW(?>pattern)\fR" 10 .IX Item "(?>pattern)" \&\fB\s-1WARNING\s0\fR: This extended regular expression feature is considered highly experimental, and may be changed or deleted without notice. .Sp An \*(L"independent\*(R" subexpression, one which matches the substring that a \fIstandalone\fR \f(CW\*(C`pattern\*(C'\fR would match if anchored at the given position, and it matches \fInothing other than this substring\fR. This construct is useful for optimizations of what would otherwise be \&\*(L"eternal\*(R" matches, because it will not backtrack (see \*(L"Backtracking\*(R"). It may also be useful in places where the \*(L"grab all you can, and do not give anything back\*(R" semantic is desirable. .Sp For example: \f(CW\*(C`^(?>a*)ab\*(C'\fR will never match, since \f(CW\*(C`(?>a*)\*(C'\fR (anchored at the beginning of string, as above) will match \fIall\fR characters \f(CW\*(C`a\*(C'\fR at the beginning of string, leaving no \f(CW\*(C`a\*(C'\fR for \&\f(CW\*(C`ab\*(C'\fR to match. In contrast, \f(CW\*(C`a*ab\*(C'\fR will match the same as \f(CW\*(C`a+b\*(C'\fR, since the match of the subgroup \f(CW\*(C`a*\*(C'\fR is influenced by the following group \f(CW\*(C`ab\*(C'\fR (see \*(L"Backtracking\*(R"). In particular, \f(CW\*(C`a*\*(C'\fR inside \&\f(CW\*(C`a*ab\*(C'\fR will match fewer characters than a standalone \f(CW\*(C`a*\*(C'\fR, since this makes the tail match. .Sp An effect similar to \f(CW\*(C`(?>pattern)\*(C'\fR may be achieved by writing \&\f(CW\*(C`(?=(pattern))\e1\*(C'\fR. This matches the same substring as a standalone \&\f(CW\*(C`a+\*(C'\fR, and the following \f(CW\*(C`\e1\*(C'\fR eats the matched string; it therefore makes a zero-length assertion into an analogue of \f(CW\*(C`(?>...)\*(C'\fR. (The difference between these two constructs is that the second one uses a capturing group, thus shifting ordinals of backreferences in the rest of a regular expression.) .Sp Consider this pattern: .Sp .Vb 8 \& m{ \e( \& ( \& [^()]+ # x+ \& | \& \e( [^()]* \e) \& )+ \& \e) \& }x .Ve .Sp \f(CW\*(C`(.+)+\*(C'\fR is doing, and \f(CW\*(C`(.+)+\*(C'\fR is similar to a subpattern of the above pattern. Consider how the pattern above detects no-match on \f(CW\*(C`((()aaaaaaaaaaaaaaaaaa\*(C'\fR in several seconds, but that each extra letter doubles this time. This exponential performance will make it appear that your program has hung. However, a tiny change to this pattern .Sp .Vb 8 \& m{ \e( \& ( \& (?> [^()]+ ) # change x+ above to (?> x+ ) \& | \& \e( [^()]* \e) \& )+ \& \e) \& }x .Ve .Sp which uses \f(CW\*(C`(?>...)\*(C'\fR matches exactly when the one above does (verifying this yourself would be a productive exercise), but finishes in a fourth the time when used on a similar string with 1000000 \f(CW\*(C`a\*(C'\fRs. Be aware, however, that this pattern currently triggers a warning message under the \f(CW\*(C`use warnings\*(C'\fR pragma or \fB\-w\fR switch saying it \&\f(CW"matches null string many times in regex"\fR. .Sp On simple groups, such as the pattern \f(CW\*(C`(?> [^()]+ )\*(C'\fR, a comparable effect may be achieved by negative look\-ahead, as in \f(CW\*(C`[^()]+ (?! [^()] )\*(C'\fR. This was only 4 times slower on a string with 1000000 \f(CW\*(C`a\*(C'\fRs. .Sp The \*(L"grab all you can, and do not give anything back\*(R" semantic is desirable in many situations where on the first sight a simple \f(CW\*(C`()*\*(C'\fR looks like the correct solution. Suppose we parse text with comments being delimited by \f(CW\*(C`#\*(C'\fR followed by some optional (horizontal) whitespace. Contrary to its appearance, \f(CW\*(C`#[ \et]*\*(C'\fR \fIis not\fR the correct subexpression to match the comment delimiter, because it may \*(L"give up\*(R" some whitespace if the remainder of the pattern can be made to match that way. The correct answer is either one of these: .Sp .Vb 2 \& (?>#[ \et]*) \& #[ \et]*(?![ \et]) .Ve .Sp For example, to grab non-empty comments into \f(CW$1\fR, one should use either one of these: .Sp .Vb 2 \& / (?> \e# [ \et]* ) ( .+ ) /x; \& / \e# [ \et]* ( [^ \et] .* ) /x; .Ve .Sp Which one you pick depends on which of these expressions better reflects the above specification of comments. .ie n .IP """(?(condition)yes\-pattern|no\-pattern)""" 10 .el .IP "\f(CW(?(condition)yes\-pattern|no\-pattern)\fR" 10 .IX Item "(?(condition)yes-pattern|no-pattern)" .PD 0 .ie n .IP """(?(condition)yes\-pattern)""" 10 .el .IP "\f(CW(?(condition)yes\-pattern)\fR" 10 .IX Item "(?(condition)yes-pattern)" .PD \&\fB\s-1WARNING\s0\fR: This extended regular expression feature is considered highly experimental, and may be changed or deleted without notice. .Sp Conditional expression. \f(CW\*(C`(condition)\*(C'\fR should be either an integer in parentheses (which is valid if the corresponding pair of parentheses matched), or look\-ahead/look\-behind/evaluate zero-width assertion. .Sp For example: .Sp .Vb 4 \& m{ ( \e( )? \& [^()]+ \& (?(1) \e) ) \& }x .Ve .Sp matches a chunk of non\-parentheses, possibly included in parentheses themselves. .Sh "Backtracking" .IX Subsection "Backtracking" \&\s-1NOTE:\s0 This section presents an abstract approximation of regular expression behavior. For a more rigorous (and complicated) view of the rules involved in selecting a match among possible alternatives, see \*(L"Combining pieces together\*(R". .PP A fundamental feature of regular expression matching involves the notion called \fIbacktracking\fR, which is currently used (when needed) by all regular expression quantifiers, namely \f(CW\*(C`*\*(C'\fR, \f(CW\*(C`*?\*(C'\fR, \f(CW\*(C`+\*(C'\fR, \&\f(CW\*(C`+?\*(C'\fR, \f(CW\*(C`{n,m}\*(C'\fR, and \f(CW\*(C`{n,m}?\*(C'\fR. Backtracking is often optimized internally, but the general principle outlined here is valid. .PP For a regular expression to match, the \fIentire\fR. .PP Here is an example of backtracking: Let's say you want to find the word following \*(L"foo\*(R" in the string \*(L"Food is on the foo table.\*(R": .PP .Vb 4 \& $_ = "Food is on the foo table."; \& if ( /\eb(foo)\es+(\ew+)/i ) { \& print "$2 follows $1.\en"; \& } .Ve .PP When the match runs, the first part of the regular expression (\f(CW\*(C`\eb(foo)\*(C'\fR) finds a possible match right at the beginning of the string, and loads up \&\f(CW$1\fR with \*(L"Foo\*(R". However, as soon as the matching engine sees that there's no whitespace following the \*(L"Foo\*(R" that it had saved in \f(CW$1\fR, it realizes its mistake and starts over again one character after where it had the tentative match. This time it goes all the way until the next occurrence of \*(L"foo\*(R". The complete regular expression matches this time, and you get the expected output of \*(L"table follows foo.\*(R" .PP Sometimes minimal matching can help a lot. Imagine you'd like to match everything between \*(L"foo\*(R" and \*(L"bar\*(R". Initially, you write something like this: .PP .Vb 4 \& $_ = "The food is under the bar in the barn."; \& if ( /foo(.*)bar/ ) { \& print "got <$1>\en"; \& } .Ve .PP Which perhaps unexpectedly yields: .PP .Vb 1 \& got .Ve .PP That's because \f(CW\*(C`.*\*(C'\fR was greedy, so you get everything between the \&\fIfirst\fR \*(L"foo\*(R" and the \fIlast\fR \*(L"bar\*(R". Here it's more effective to use minimal matching to make sure you get the text between a \*(L"foo\*(R" and the first \*(L"bar\*(R" thereafter. .PP .Vb 2 \& if ( /foo(.*?)bar/ ) { print "got <$1>\en" } \& got .Ve .PP Here's another example: let's say you'd like to match a number at the end of a string, and you also want to keep the preceding part of the match. So you write this: .PP .Vb 4 \& $_ = "I have 2 numbers: 53147"; \& if ( /(.*)(\ed*)/ ) { # Wrong! \& print "Beginning is <$1>, number is <$2>.\en"; \& } .Ve .PP That won't work at all, because \f(CW\*(C`.*\*(C'\fR was greedy and gobbled up the whole string. As \f(CW\*(C`\ed*\*(C'\fR can match on an empty string the complete regular expression matched successfully. .PP .Vb 1 \& Beginning is , number is <>. .Ve .PP Here are some variants, most of which don't work: .PP .Vb 11 \& $_ = "I have 2 numbers: 53147"; \& @pats = qw{ \& (.*)(\ed*) \& (.*)(\ed+) \& (.*?)(\ed*) \& (.*?)(\ed+) \& (.*)(\ed+)$ \& (.*?)(\ed+)$ \& (.*)\eb(\ed+)$ \& (.*\eD)(\ed+)$ \& }; .Ve .PP .Vb 8 \& for $pat (@pats) { \& printf "%-12s ", $pat; \& if ( /$pat/ ) { \& print "<$1> <$2>\en"; \& } else { \& print "FAIL\en"; \& } \& } .Ve .PP That will print out: .PP .Vb 8 \& (.*)(\ed*) <> \& (.*)(\ed+) <7> \& (.*?)(\ed*) <> <> \& (.*?)(\ed+) <2> \& (.*)(\ed+)$ <7> \& (.*?)(\ed+)$ <53147> \& (.*)\eb(\ed+)$ <53147> \& (.*\eD)(\ed+)$ <53147> .Ve .PP. .PP When using look-ahead assertions and negations, this can all get even trickier. Imagine you'd like to find a sequence of non-digits not followed by \*(L"123\*(R". You might try to write that as .PP .Vb 4 \& $_ = "ABC123"; \& if ( /^\eD*(?!123)/ ) { # Wrong! \& print "Yup, no 123 in $_\en"; \& } .Ve .PP But that isn't going to match; at least, not the way you're hoping. It claims that there is no 123 in the string. Here's a clearer picture of why that pattern matches, contrary to popular expectations: .PP .Vb 2 \& $x = 'ABC123' ; \& $y = 'ABC445' ; .Ve .PP .Vb 2 \& print "1: got $1\en" if $x =~ /^(ABC)(?!123)/ ; \& print "2: got $1\en" if $y =~ /^(ABC)(?!123)/ ; .Ve .PP .Vb 2 \& print "3: got $1\en" if $x =~ /^(\eD*)(?!123)/ ; \& print "4: got $1\en" if $y =~ /^(\eD*)(?!123)/ ; .Ve .PP This prints .PP .Vb 3 \& 2: got ABC \& 3: got AB \& 4: got ABC .Ve .PP You might have expected test 3 to fail because it seems to a more general purpose version of test 1. The important difference between them is that test 3 contains a quantifier (\f(CW\*(C`\eD*\*(C'\fR) and so can use backtracking, whereas test 1 will not. What's happening is that you've asked \*(L"Is it true that at the start of \f(CW$x\fR, following 0 or more non\-digits, you have something that's not 123?\*(R" If the pattern matcher had let \f(CW\*(C`\eD*\*(C'\fR expand to \*(L"\s-1ABC\s0\*(R", this would have caused the whole pattern to fail. .PP The search engine will initially match \f(CW\*(C`\eD*\*(C'\fR with \*(L"\s-1ABC\s0\*(R". Then it will try to match \f(CW\*(C`(?!123\*(C'\fR with \*(L"123\*(R", which fails. But because a quantifier (\f(CW\*(C`\eD*\*(C'\fR) has been used in the regular expression, the search engine can backtrack and retry the match differently in the hope of matching the complete regular expression. .PP The pattern really, \fIreally\fR wants to succeed, so it uses the standard pattern back-off-and-retry and lets \f(CW\*(C`\eD*\*(C'\fR expand to just \*(L"\s-1AB\s0\*(R" this time. Now there's indeed something following \*(L"\s-1AB\s0\*(R" that is not \&\*(L"123\*(R". It's \*(L"C123\*(R", which suffices. .PP We can deal with this by using both an assertion and a negation. We'll say that the first part in \f(CW$1\fR must be followed both by a digit and by something that's not \*(L"123\*(R". Remember that the look-aheads are zero-width expressions\*(--they only look, but don't consume any of the string in their match. So rewriting this way produces what you'd expect; that is, case 5 will fail, but case 6 succeeds: .PP .Vb 2 \& print "5: got $1\en" if $x =~ /^(\eD*)(?=\ed)(?!123)/ ; \& print "6: got $1\en" if $y =~ /^(\eD*)(?=\ed)(?!123)/ ; .Ve .PP .Vb 1 \& 6: got ABC .Ve .PP In other words, the two zero-width assertions next to each other work as though they're ANDed together, just as you'd use any built-in assertions: \f(CW\*(C`/^$/\*(C'\fR matches only if you're at the beginning of the line \s-1AND\s0 the end of the line simultaneously. The deeper underlying truth is that juxtaposition in regular expressions always means \s-1AND\s0, except when you write an explicit \s-1OR\s0 using the vertical bar. \f(CW\*(C`/ab/\*(C'\fR means match \*(L"a\*(R" \s-1AND\s0 (then) match \*(L"b\*(R", although the attempted matches are made at different positions because \*(L"a\*(R" is not a zero-width assertion, but a one-width assertion. .PP \&\fB\s-1WARNING\s0\fR: particularly complicated regular expressions can take exponential time to solve because of the immense number of possible ways they can use backtracking to try match. For example, without internal optimizations done by the regular expression engine, this will take a painfully long time to run: .PP .Vb 1 \& 'aaaaaaaaaaaa' =~ /((a{0,5}){0,5})*[c]/ .Ve .PP And if you used \f(CW\*(C`*\*(C'\fR's in the internal groups instead of limiting them to 0 through 5 matches, then it would take forever\*(--or until you ran out of stack space. Moreover, these internal optimizations are not always applicable. For example, if you put \f(CW\*(C`{0,5}\*(C'\fR instead of \f(CW\*(C`*\*(C'\fR on the external group, no current optimization is applicable, and the match takes a long time to finish. .PP A powerful tool for optimizing such beasts is what is known as an \&\*(L"independent group\*(R", which does not backtrack (see "\f(CW\*(C`(?>pattern)\*(C'\fR"). Note also that zero-length look\-ahead/look\-behind assertions will not backtrack to make the tail match, since they are in \*(L"logical\*(R" context: only whether they match is considered relevant. For an example where side-effects of look-ahead \fImight\fR have influenced the following match, see "\f(CW\*(C`(?>pattern)\*(C'\fR". .Sh "Version 8 Regular Expressions" .IX Subsection "Version 8 Regular Expressions" In case you're not familiar with the \*(L"regular\*(R" Version 8 regex routines, here are the pattern-matching rules not described above. .PP Any single character matches itself, unless it is a \fImetacharacter\fR with a special meaning described here or above. You can cause characters that normally function as metacharacters to be interpreted literally by prefixing them with a \*(L"\e\*(R" (e.g., \*(L"\e.\*(R" matches a \*(L".\*(R", not any character; \*(L"\e\e\*(R" matches a \*(L"\e\*(R"). A series of characters matches that series of characters in the target string, so the pattern \f(CW\*(C`blurfl\*(C'\fR would match \*(L"blurfl\*(R" in the target string. .PP You can specify a character class, by enclosing a list of characters in \f(CW\*(C`[]\*(C'\fR, which will match any one character from the list. If the first character after the \*(L"[\*(R" is \*(L"^\*(R", the class matches any character not in the list. Within a list, the \*(L"\-\*(R" character specifies a range, so that \f(CW\*(C`a\-z\*(C'\fR represents all characters between \*(L"a\*(R" and \*(L"z\*(R", inclusive. If you want either \*(L"\-\*(R" or \*(L"]\*(R" itself to be a member of a class, put it at the start of the list (possibly after a \*(L"^\*(R"), or escape it with a backslash. \*(L"\-\*(R" is also taken literally when it is at the end of the list, just before the closing \*(L"]\*(R". (The following all specify the same class of three characters: \f(CW\*(C`[\-az]\*(C'\fR, \&\f(CW\*(C`[az\-]\*(C'\fR, and \f(CW\*(C`[a\e\-z]\*(C'\fR. All are different from \f(CW\*(C`[a\-z]\*(C'\fR, which specifies a class containing twenty-six characters, even on \s-1EBCDIC\s0 based coded character sets.) Also, if you try to use the character classes \f(CW\*(C`\ew\*(C'\fR, \f(CW\*(C`\eW\*(C'\fR, \f(CW\*(C`\es\*(C'\fR, \f(CW\*(C`\eS\*(C'\fR, \f(CW\*(C`\ed\*(C'\fR, or \f(CW\*(C`\eD\*(C'\fR as endpoints of a range, that's not a range, the \*(L"\-\*(R" is understood literally. .PP. .PP Characters may be specified using a metacharacter syntax much like that used in C: \*(L"\en\*(R" matches a newline, \*(L"\et\*(R" a tab, \*(L"\er\*(R" a carriage return, \&\*(L"\ef\*(R" a form feed, etc. More generally, \e\fInnn\fR, where \fInnn\fR is a string of octal digits, matches the character whose coded character set value is \fInnn\fR. Similarly, \ex\fInn\fR, where \fInn\fR are hexadecimal digits, matches the character whose numeric value is \fInn\fR. The expression \ec\fIx\fR matches the character control\-\fIx\fR. Finally, the \*(L".\*(R" metacharacter matches any character except \*(L"\en\*(R" (unless you use \f(CW\*(C`/s\*(C'\fR). .PP You can specify a series of alternatives for a pattern using \*(L"|\*(R" to separate them, so that \f(CW\*(C`fee|fie|foe\*(C'\fR will match any of \*(L"fee\*(R", \*(L"fie\*(R", or \*(L"foe\*(R" in the target string (as would \f(CW\*(C`f(e|i|o)e\*(C'\fR). The first alternative includes everything from the last pattern delimiter (\*(L"(\*(R", \*(L"[\*(R", or the beginning of the pattern) up to the first \*(L"|\*(R", and the last alternative contains everything from the last \*(L"|\*(R" to the next pattern delimiter. That's why it's common practice to include alternatives in parentheses: to minimize confusion about where they start and end. .PP Alternatives are tried from left to right, so the first alternative found for which the entire expression matches, is the one that is chosen. This means that alternatives are not necessarily greedy. For example: when matching \f(CW\*(C`foo|foot\*(C'\fR against \*(L"barefoot\*(R", only the \*(L"foo\*(R" part will match, as that is the first alternative tried, and it successfully matches the target string. (This might not seem important, but it is important when you are capturing matched text using parentheses.) .PP Also remember that \*(L"|\*(R" is interpreted as a literal within square brackets, so if you write \f(CW\*(C`[fee|fie|foe]\*(C'\fR you're really only matching \f(CW\*(C`[feio|]\*(C'\fR. .PP Within a pattern, you may designate subpatterns for later reference by enclosing them in parentheses, and you may refer back to the \&\fIn\fRth subpattern later in the pattern using the metacharacter \&\e\fIn\fR. Subpatterns are numbered based on the left to right order of their opening parenthesis. A backreference matches whatever actually matched the subpattern in the string being examined, not the rules for that subpattern. Therefore, \f(CW\*(C`(0|0x)\ed*\es\e1\ed*\*(C'\fR will match \*(L"0x1234 0x4321\*(R", but not \*(L"0x1234 01234\*(R", because subpattern 1 matched \*(L"0x\*(R", even though the rule \f(CW\*(C`0|0x\*(C'\fR could potentially match the leading 0 in the second number. .ie n .Sh "Warning on \e1 vs $1" .el .Sh "Warning on \e1 vs \f(CW$1\fP" .IX Subsection "Warning on 1 vs $1" Some people get too used to writing things like: .PP .Vb 1 \& $pattern =~ s/(\eW)/\e\e\e1/g; .Ve .PP This is grandfathered for the \s-1RHS\s0 of a substitute to avoid shocking the \&\fBsed\fR addicts, but it's a dirty habit to get into. That's because in PerlThink, the righthand side of an \f(CW\*(C`s///\*(C'\fR is a double-quoted string. \f(CW\*(C`\e1\*(C'\fR in the usual double-quoted string means a control\-A. The customary Unix meaning of \f(CW\*(C`\e1\*(C'\fR is kludged in for \f(CW\*(C`s///\*(C'\fR. However, if you get into the habit of doing that, you get yourself into trouble if you then add an \f(CW\*(C`/e\*(C'\fR modifier. .PP .Vb 1 \& s/(\ed+)/ \e1 + 1 /eg; # causes warning under -w .Ve .PP Or if you try to do .PP .Vb 1 \& s/(\ed+)/\e1000/; .Ve .PP You can't disambiguate that by saying \f(CW\*(C`\e{1}000\*(C'\fR, whereas you can fix it with \&\f(CW\*(C`${1}000\*(C'\fR. The operation of interpolation should not be confused with the operation of matching a backreference. Certainly they mean two different things on the \fIleft\fR side of the \f(CW\*(C`s///\*(C'\fR. .Sh "Repeated patterns matching zero-length substring" .IX Subsection "Repeated patterns matching zero-length substring" \&\fB\s-1WARNING\s0\fR: Difficult material (and prose) ahead. This section needs a rewrite. .PP Regular expressions provide a terse and powerful programming language. As with most other power tools, power comes together with the ability to wreak havoc. .PP A common abuse of this power stems from the ability to make infinite loops using regular expressions, with something as innocuous as: .PP .Vb 1 \& 'foo' =~ m{ ( o? )* }x; .Ve .PP The \f(CW\*(C`o?\*(C'\fR can match at the beginning of \f(CW'foo'\fR, and since the position in the string is not moved by the match, \f(CW\*(C`o?\*(C'\fR would match again and again because of the \f(CW\*(C`*\*(C'\fR modifier. Another common way to create a similar cycle is with the looping modifier \f(CW\*(C`//g\*(C'\fR: .PP .Vb 1 \& @matches = ( 'foo' =~ m{ o? }xg ); .Ve .PP or .PP .Vb 1 \& print "match: <$&>\en" while 'foo' =~ m{ o? }xg; .Ve .PP or the loop implied by \fIsplit()\fR. .PP However, long experience has shown that many programming tasks may be significantly simplified by using repeated subexpressions that may match zero-length substrings. Here's a simple example being: .PP .Vb 2 \& @chars = split //, $string; # // is not magic in split \& ($whitewashed = $string) =~ s/()/ /g; # parens avoid magic s// / .Ve .PP Thus Perl allows such constructs, by \fIforcefully breaking the infinite loop\fR. The rules for this are different for lower-level loops given by the greedy modifiers \f(CW\*(C`*+{}\*(C'\fR, and for higher-level ones like the \f(CW\*(C`/g\*(C'\fR modifier or \fIsplit()\fR operator. .PP The lower-level loops are \fIinterrupted\fR (that is, the loop is broken) when Perl detects that a repeated expression matched a zero-length substring. Thus .PP .Vb 1 \& m{ (?: NON_ZERO_LENGTH | ZERO_LENGTH )* }x; .Ve .PP is made equivalent to .PP .Vb 4 \& m{ (?: NON_ZERO_LENGTH )* \& | \& (?: ZERO_LENGTH )? \& }x; .Ve .PP The higher level-loops preserve an additional state between iterations: whether the last match was zero\-length. To break the loop, the following match after a zero-length match is prohibited to have a length of zero. This prohibition interacts with backtracking (see \*(L"Backtracking\*(R"), and so the \fIsecond best\fR match is chosen if the \fIbest\fR match is of zero length. .PP For example: .PP .Vb 2 \& $_ = 'bar'; \& s/\ew??/<$&>/g; .Ve .PP results in \f(CW\*(C`<><><> <>\*(C'\fR. At each position of the string the best match given by non-greedy \f(CW\*(C`??\*(C'\fR is the zero-length match, and the \fIsecond best\fR match is what is matched by \f(CW\*(C`\ew\*(C'\fR. Thus zero-length matches alternate with one-character-long matches. .PP Similarly, for repeated \f(CW\*(C`m/()/g\*(C'\fR the second-best match is the match at the position one notch further in the string. .PP The additional state of being \fImatched with zero-length\fR is associated with the matched string, and is reset by each assignment to \fIpos()\fR. Zero-length matches at the end of the previous match are ignored during \f(CW\*(C`split\*(C'\fR. .Sh "Combining pieces together" .IX Subsection "Combining pieces together" Each of the elementary pieces of regular expressions which were described before (such as \f(CW\*(C`ab\*(C'\fR or \f(CW\*(C`\eZ\*(C'\fR) could match at most one substring at the given position of the input string. However, in a typical regular expression these elementary pieces are combined into more complicated patterns using combining operators \f(CW\*(C`ST\*(C'\fR, \f(CW\*(C`S|T\*(C'\fR, \f(CW\*(C`S*\*(C'\fR etc (in these examples \f(CW\*(C`S\*(C'\fR and \f(CW\*(C`T\*(C'\fR are regular subexpressions). .PP Such combinations can include alternatives, leading to a problem of choice: if we match a regular expression \f(CW\*(C`a|ab\*(C'\fR against \f(CW"abc"\fR, will it match substring \f(CW"a"\fR or \f(CW"ab"\fR? One way to describe which substring is actually matched is the concept of backtracking (see \*(L"Backtracking\*(R"). However, this description is too low-level and makes you think in terms of a particular implementation. .PP Another description starts with notions of \*(L"better\*(R"/\*(L"worse\*(R". All the substrings which may be matched by the given regular expression can be sorted from the \*(L"best\*(R" match to the \*(L"worst\*(R" match, and it is the \*(L"best\*(R" match which is chosen. This substitutes the question of \*(L"what is chosen?\*(R" by the question of \*(L"which matches are better, and which are worse?\*(R". .PP Again, for elementary pieces there is no such question, since at most one match at a given position is possible. This section describes the notion of better/worse for combining operators. In the description below \f(CW\*(C`S\*(C'\fR and \f(CW\*(C`T\*(C'\fR are regular subexpressions. .ie n .IP """ST""" 4 .el .IP "\f(CWST\fR" 4 .IX Item "ST" Consider two possible matches, \f(CW\*(C`AB\*(C'\fR and \f(CW\*(C`A'B'\*(C'\fR, \f(CW\*(C`A\*(C'\fR and \f(CW\*(C`A'\*(C'\fR are substrings which can be matched by \f(CW\*(C`S\*(C'\fR, \f(CW\*(C`B\*(C'\fR and \f(CW\*(C`B'\*(C'\fR are substrings which can be matched by \f(CW\*(C`T\*(C'\fR. .Sp If \f(CW\*(C`A\*(C'\fR is better match for \f(CW\*(C`S\*(C'\fR than \f(CW\*(C`A'\*(C'\fR, \f(CW\*(C`AB\*(C'\fR is a better match than \f(CW\*(C`A'B'\*(C'\fR. .Sp If \f(CW\*(C`A\*(C'\fR and \f(CW\*(C`A'\*(C'\fR coincide: \f(CW\*(C`AB\*(C'\fR is a better match than \f(CW\*(C`AB'\*(C'\fR if \&\f(CW\*(C`B\*(C'\fR is better match for \f(CW\*(C`T\*(C'\fR than \f(CW\*(C`B'\*(C'\fR. .ie n .IP """S|T""" 4 .el .IP "\f(CWS|T\fR" 4 .IX Item "S|T" When \f(CW\*(C`S\*(C'\fR can match, it is a better match than when only \f(CW\*(C`T\*(C'\fR can match. .Sp Ordering of two matches for \f(CW\*(C`S\*(C'\fR is the same as for \f(CW\*(C`S\*(C'\fR. Similar for two matches for \f(CW\*(C`T\*(C'\fR. .ie n .IP """S{REPEAT_COUNT}""" 4 .el .IP "\f(CWS{REPEAT_COUNT}\fR" 4 .IX Item "S{REPEAT_COUNT}" Matches as \f(CW\*(C`SSS...S\*(C'\fR (repeated as many times as necessary). .ie n .IP """S{min,max}""" 4 .el .IP "\f(CWS{min,max}\fR" 4 .IX Item "S{min,max}" Matches as \f(CW\*(C`S{max}|S{max\-1}|...|S{min+1}|S{min}\*(C'\fR. .ie n .IP """S{min,max}?""" 4 .el .IP "\f(CWS{min,max}?\fR" 4 .IX Item "S{min,max}?" Matches as \f(CW\*(C`S{min}|S{min+1}|...|S{max\-1}|S{max}\*(C'\fR. )""" 4 .el .IP "\f(CW(?>S)\fR" 4 .IX Item "(?>S)" Matches the best match for \f(CW\*(C`S\*(C'\fR and only that. .ie n .IP """(?=S)""\fR, \f(CW""(?<=S)""" 4 .el .IP "\f(CW(?=S)\fR, \f(CW(?<=S)\fR" 4 .IX Item "(?=S), (?<=S)" Only the best match for \f(CW\*(C`S\*(C'\fR is considered. (This is important only if \&\f(CW\*(C`S\*(C'\fR has capturing parentheses, and backreferences are used somewhere else in the whole regular expression.) .ie n .IP """(?!S)""\fR, \f(CW""(?<!S)""" 4 .el .IP "\f(CW(?!S)\fR, \f(CW(?<!S)\fR" 4 .IX Item "(?!S), (?<!S)" For this grouping operator there is no need to describe the ordering, since only whether or not \f(CW\*(C`S\*(C'\fR can match is important. .ie n .IP """(??{ EXPR })""" 4 .el .IP "\f(CW(??{ EXPR })\fR" 4 .IX Item "(??{ EXPR })" The ordering is the same as for the regular expression which is the result of \s-1EXPR\s0. .ie n .IP """(?(condition)yes\-pattern|no\-pattern)""" 4 .el .IP "\f(CW(?(condition)yes\-pattern|no\-pattern)\fR" 4 .IX Item "(?(condition)yes-pattern|no-pattern)" Recall that which of \f(CW\*(C`yes\-pattern\*(C'\fR or \f(CW\*(C`no\-pattern\*(C'\fR actually matches is already determined. The ordering of the matches is the same as for the chosen subexpression. .PP The above recipes describe the ordering of matches \fIat a given position\fR. One more rule is needed to understand how a match is determined for the whole regular expression: a match at an earlier position is always better than a match at a later position. .Sh "Creating custom \s-1RE\s0 engines" .IX Subsection "Creating custom RE engines" Overloaded constants (see overload) provide a simple way to extend the functionality of the \s-1RE\s0 engine. .PP Suppose that we want to enable a new \s-1RE\s0 escape-sequence \f(CW\*(C`\eY|\*(C'\fR which matches at boundary between white-space characters and non-whitespace characters. Note that \f(CW\*(C`(?=\eS)(?<!\eS)|(?!\eS)(?<=\eS)\*(C'\fR matches exactly at these positions, so we want to have each \f(CW\*(C`\eY|\*(C'\fR in the place of the more complicated version. We can create a module \f(CW\*(C`customre\*(C'\fR to do this: .PP .Vb 2 \& package customre; \& use overload; .Ve .PP .Vb 5 \& sub import { \& shift; \& die "No argument to customre::import allowed" if @_; \& overload::constant 'qr' => \e&convert; \& } .Ve .PP .Vb 1 \& sub invalid { die "/$_[0]/: invalid escape '\e\e$_[1]'"} .Ve .PP .Vb 10 \& my %rules = ( '\e\e' => '\e\e', \& 'Y|' => qr/(?=\eS)(?<!\eS)|(?!\eS)(?<=\eS)/ ); \& sub convert { \& my $re = shift; \& $re =~ s{ \& \e\e ( \e\e | Y . ) \& } \& { $rules{$1} or invalid($re,$1) }sgex; \& return $re; \& } .Ve .PP Now \f(CW\*(C`use customre\*(C'\fR enables the new escape in constant regular expressions, i.e., those without any runtime variable interpolations. As documented in overload, this conversion will work only over literal parts of regular expressions. For \f(CW\*(C`\eY|$re\eY|\*(C'\fR the variable part of this regular expression needs to be converted explicitly (but only if the special meaning of \f(CW\*(C`\eY|\*(C'\fR should be enabled inside \f(CW$re\fR): .PP .Vb 5 \& use customre; \& $re = <>; \& chomp $re; \& $re = customre::convert $re; \& /\eY|$re\eY|/; .Ve .SH "BUGS" .IX Header "BUGS" This document varies from difficult to understand to completely and utterly opaque. The wandering prose riddled with jargon is hard to fathom in several places. .PP This document needs a rewrite that separates the tutorial content from the reference content. .SH "SEE ALSO" .IX Header "SEE ALSO" perlrequick. .PP perlretut. .PP \&\*(L"Regexp Quote-Like Operators\*(R" in perlop. .PP \&\*(L"Gory details of parsing quoted constructs\*(R" in perlop. .PP perlfaq6. .PP \&\*(L"pos\*(R" in perlfunc. .PP perllocale. .PP perlebcdic. .PP \&\fIMastering Regular Expressions\fR by Jeffrey Friedl, published by O'Reilly and Associates.
http://www.fiveanddime.net/ss/man-unformatted/man1/perlre.1
crawl-003
refinedweb
10,374
65.52
Beefcake A pure-Ruby Google Protocol Buffers library. It's all about being Buf; ProtoBuf. Installation gem install beefcake Usage require 'beefcake' class Variety include Beefcake::Message # Required required :x, :int32, 1 required :y, :int32, 2 # Optional optional :tag, :string, 3 # Repeated repeated :ary, :fixed64, 4 repeated :pary, :fixed64, 5, :packed => true # Enums - Simply use a Module (NOTE: defaults are optional) module Foonum A = 1 B = 2 end # As per the spec, defaults are only set at the end # of decoding a message, not on object creation. optional :foo, Foonum, 6, :default => Foonum::B end # You can create a new message with hash arguments: x = Variety.new(:x => 1, :y => 2) # You can set fields individually using accessor methods: x = Variety.new x.x = 1 x.y = 2 # And you can access fields using Hash syntax: x[:x] # => 1 x[:y] = 4 x # => <Variety x: 1, y: 4> Encoding Any object responding to << can accept encoding # see code example above for the definition of Variety x = Variety.new(:x => 1, :y => 2) # For example, you can encode into a String: # And that buffer can be converted to a String: x.encode.to_s # => "\b\x01\x10\x02)\0" Decoding # see code example above for the definition of Variety x = Variety.new(:x => 1, :y => 2) # You can decode from a Beefcake::Buffer encoded = x.encode Variety.decode(encoded) # => <Variety x: 1, y: 2, pary: [], foo: B(2)> # Decoding from a String works the same way: Variety.decode(encoded.to_s) # => <Variety x: 1, y: 2, pary: [], foo: B(2)> # You can update a Beefcake::Message instance with new data too: new_data = Variety.new(x: 12345, y: 2).encode Variety.decoded(new_data, x) x # => <Variety x: 12345, y: 2, pary: [], foo: B(2)> Generate code from .proto file protoc --beefcake_out output/path -I path/to/proto/files/dir path/to/file.proto You can set the BEEFCAKE_NAMESPACE variable to generate the classes under a desired namespace. (i.e. App::Foo::Bar) About Ruby deserves and needs first-class ProtoBuf support. Other libs didn't feel very "Ruby" to me and were hard to parse. This library was built with EventMachine in mind. Not just blocking-IO. Source: Support Features - Optional fields - Required fields - Repeated fields - Packed Repeated Fields - Varint fields - 32-bit fields - 64-bit fields - Length-delimited fields - Embedded Messages - Unknown fields are ignored (as per spec) - Enums - Defaults (i.e. optional :foo, :string, :default => "bar") - Varint-encoded length-delimited message streams Future - Imports - Use package in generation - Groups (would be nice for accessing older protos) Further Reading Testing rake test Beefcake conducts continuous integration on Travis CI. The current build status for HEAD is . All pull requests automatically trigger a build request. Please ensure that tests succeed. Currently Beefcake is tested and working on: - Ruby 1.9.3 - Ruby 2.0.0 - Ruby 2.1.0 - Ruby 2.1.1 - Ruby 2.1.2 - JRuby in 1.9 mode Thank You - Keith Rarick (kr) for help with encoding/decoding. - Aman Gupta (tmm1) for help with cross VM support and performance enhancements.
https://www.rubydoc.info/gems/beefcake/1.2.0
CC-MAIN-2018-39
refinedweb
510
57.87
Given this C code compiled with gcc 4.3.3 #include <stdio.h>#include <stdlib.h>int main(int argc, char * argv[]){ int * i; i = (int *) malloc(sizeof(int)); printf("%d", *i); return 0;} I would expect the output to be whatever was in the memory that malloc() returns, but instead the When converting a date string for the server into a NSDate - which is in this format 2012-09-07T11:57:44+10:00 we're using this dateFormat in NSDateFormatter yyyy-MM-dd'T'HH:mm:ssZZZ':'mm but the minutes are always zero minutes. How can I get an array of zeroing weak references under ARC? I don't want the array to retain the objects. And I'd like the array elements either to remove themselves when they're deallocated, or set those entries to nil. Similarly, how can I do that with a dictionary? I don't want the dictionary to retain the values. And again, I'd like the dictionary elements either to remove them I'm trying to remove dollar signs and commas from my form input (for example, $1,000.00 => 1000.00) I have the following following line in my before_validation method in my model: self.parents_mortgage = self.parents_mortgage.to_s.gsub!('$,',').to_i This is causing any number to be put through to zero out. Is there something wrong with my syntax?< I have a long C (not C++) struct. It is used to control entities in a game, with position, some behavior data, nothing flashy, except for two strings. The struct is global. Right now, whenever an object is initialized, I put all the values to defaults, one by one myobjects[index].angle = 0; myobjects[index].speed = 0; like that. It doesn't really What is the advantage of zeroing out memory (i.e. calloc() over malloc())? Won't you change the value to something else anyways? calloc() malloc() I'm studying ARC. And now about zeroing weak pointer. OK I understood all the features. The semantic of weak reference is just same with weak reference of GC system, but you know, Objective-C doesn't use GC (except special case) so I can't understand how does this work. I'm a little complicated guy, so I need to know underlying implementation principal to accept the feature to I need a webbased group chat client that can interact with various xmpp servers. I did some research and found muckl , speeqe and tigase minichat but I have one more issue I am deploying my site on shared hosting. So want to know if anybody has installed any group chat software on shared hosting ( like bluehost ) and which will be the best possible client for the same. I'd like to quickly hone in on what failed in a build log output that is nearly 5k lines long, using Notepad++ as my editor for the file. Notepad++ has the nice ability to specify regular expressions, so I am wondering if there is a way to not match: Compile complete -- 0 errors, 0 warnings but to match, for example: Compile complete -- 1 error
http://bighow.org/tags/zeroing/1
CC-MAIN-2017-39
refinedweb
520
64.1
Computer Science Archive: Questions from May 03, 2011 - Anonymous askedZ[i]= Y[i]+... Show moreint X[10], Y[10], Z[10]; void main() { for (int i=0, i<10,i++) { X[i] = i; Y[i] =X[i] +1; Z[i]= Y[i]+1; If (Z[i] == 5){ i = -1; } } } • Show less1 answer - Anonymous askedThis is form the book programming logic and design fift edition by joyce farrell. Page 130 Exercise... Show moreThis is form the book programming logic and design fift edition by joyce farrell. Page 130 Exercise one, Iffff you have the same book. Please please help Thanks 1. Draw a typical hierarchy chart for a paycheck producing program. Try to think of at least 10 separate modules that might be included. For example, one module might calculate an employee’s dental insurance premium. • Show less1 answer - Anonymous asked<p>Design the output and draw a flowchart or write a pseudo code for a program that calculates... More »1 answer - Anonymous askedDesign the out and draw a flowchart or write pseudo code for a program that calculates the service c... More »1 answer - Anonymous askedwhat is the [ Various Streams for Various Purposes ] in f... Show more hiiii all• Show less could u help me in my question.... what is the [ Various Streams for Various Purposes ] in file input and putput in java !!??1 answer - Anonymous askedQ- Consider the following Class, a student claims that the class is poorly (low) cohesive, device mo... Show more Q- Consider the following Class, a student claims that the class is poorly (low) cohesive, device more cohesive class(s) out of this class (along with proper methods and attributes). • Show less1 answer - Anonymous askedfeatures of modern datab... Show moreDatabase Management Systems provide many components that fulfill the needed features of modern database management. Name five needed features, then name and briefly describe the role of the component fulfilling each needed feature • Show less1 answer - Anonymous askedI did everything just like this output, but there is problem with my calculation and I do not know w... Show moreI did everything just like this output, but there is problem with my calculation and I do not know where it is My output should be like this Please enter a positive number and I will calculate the square root for you. -140.2 Please enter a positive number and I will calculate the square root for you. -65.9 Please enter a positive number and I will calculate the square root for you. 140.8 Please enter a small positive number for the tolerance such as 0.0001. -0.00001 Please enter a small positive number for the tolerance such as 0.0001. 0.00001 26 iterations: The square root of 140.800000 is 11.865918 how I can get the number of iterations..? this is my code: #include<stdio.h> double absValue( double x ) { if ( x > 0) { return x; } else if ( x < 0 ) { return (0 - x); } } double sqRoot(double n, double tol) { double x; absValue(x); double tooHigh = n; double tooLow = 0; double guess = (n/2); double times = 0; x = ((guess * guess) - n); if ((x) > tol ) { if (x > 0) { tooHigh = guess; guess = ((tooHigh + tooLow)/2); times = times + 1; } else if (x < 0) { tooLow = guess; guess = ((tooHigh + tooLow)/2); times = times + 1; } } else if (x < tol) { printf("%lf", times); return guess; } } int main() { double userInput, mySqRt, tol; do { puts("Please enter a positive number and I will calculate the square root for you."); scanf("%lf", &userInput); }while(userInput<=0); do { puts("Please enter a small positive number for the tolerance such as 0.0001."); scanf("%lf", &tol); }while(tol<=0); mySqRt = sqRoot( userInput, tol ); // your job is to write the sqRoot function printf("The square root of %f is %f\n", userInput, mySqRt); return 0; } • Show less1 answer - Anonymous askedX-O board can be stored as a 2D array of integers, where a 1 r... Show morehi i want to help me in this problem. X-O board can be stored as a 2D array of integers, where a 1 represents Player 1 (x), 2 represents Player 2 (o), and 0 represents a space not yet taken by either player. Write a void function printXOBoard that takes such a 2D array as an argument and prints the board as a series of X’s, O’s, or spaces, separated horizontally by tabs and vertically by newlines. For instance, it should output the following board for the array {{1,0,2},{0,1,1},{2,0,0}} X O X X O The program should have another function to determine the winner of the game. • Show less1 answer - Anonymous asked1 answer - Anonymous asked1 answer - Anonymous askedCan someone show me how to solve the recurrence relation in Big-Oh notion for "fact2" in the algo... Show moreHi, Can someone show me how to solve the recurrence relation in Big-Oh notion for "fact2" in the algorithm below? public class Factorial1 { public static String fact(int n) { if(n <= 1){ return "1"; } return (n * 2) + fact(n-1); } public static String fact2(int n) { if(n <= 1){ return "1"; } return n + fact(n)*fact2(n/2); } • Show less0 answers - Anonymous askedI've been round and round with this thing. I know that it is supposed to be an elementary problem bu... Show moreI've been round and round with this thing. I know that it is supposed to be an elementary problem but I'm not getting any help for the arrays portion of this class. I am NOT studying to be a programmer, this was just one of the required courses to supplement my GIS courses. I need to have this turned in by the end of this week. If someone could PLEASE help me I would appreciate it! Write the Python code for the following programming problem below. You do not have to use modular design. Write a program that will allow the user to enter the name and golf score for any number (n) of golfers. Next, sort the data in descending order by golf score using the Bubble Sort. Then display the golfers name and golf score. Hints: 1. Create two parallel arrays of size n each. n, the number of golfers, will be entered by the user at the beginning of the program logic. One string array will store the golfer’s name, the second numeric integer array will store the golfer’s score. These variables might be defined as follows: nameOfGolfer=str.array(itemsize=20,shape=n) scoreOfGolfer(n,Int) 2. Don’t forget these two arrays are parallel and elements from both arrays have to be swapped correspondingly using the Bubble Sort for parallel arrays. 3. Don’t forget the imports needed for arrays in Python. 4. Your sample output might look as follows: Enter the number of golfers >>> 3 Enter the name of the golfer >>> Joe Enter the score for Joe >>> 85 Enter the name of the golfer >>> Scott Enter the score for Scott >>> 79 Enter the name of the golfer >>> Bob Enter the score for Bob >>> 90 The rank for the golfer’s is: Bob 90 Joe 85 Scott 79 Happy Golfing! This is as far as I've gotten and this was taken from an answer on this website that another student got. I can't get it to run past entering the number of golfers and the first golfer's name. def main(): index = 0 SIZE=input("Enter the number of golfers: ") while index < SIZE: names[index] = raw_input("Enter the name of the golfer: ") scores[index] = raw_input("Enter the score for golfer: ") index=index+1 bubbleSort(scores,names,SIZE) print "Your golf scores sorted highest to lowest are: " for index in range(0,SIZE - 1): print names[index],scores[index] def bubbleSort(scores,names,SIZE): for maxElement in range(arraySize - 1,0 - 1): for index in range (0,maxElement - 1): if scores[index] > scores[index+1]: swap(scores[index], scores[index+1]) swap(Names[index], Names[index+1]) def swap(a,b): #swap the values in a and b temp = a a = b b = temp main() • Show less1 answer - Anonymous asked1 answer - Anonymous askedThis is one of two of the last problems of the year for me....I've been around and around with this.... Show more This is one of two of the last problems of the year for me....I've been around and around with this...as usual. Not receiving much help from the lab at school. I will never take another online class if I can help it! If someone could please help me with this it would be much appreciated. I just want to graduate and get out of there. I am not pursuing a programming career...Thanks so much in advance! Write the Python code for the following programming problem below. You do not have to use modular design. Last year, a local college implemented rooftop gardens as a way to promote energy efficiency and save money. Write a program that will allow the user to enter the energy bills from January to December for the year prior to going green. Next, allow the user to enter the energy bills from January to December of the past year after going green. The program should calculate the energy difference from the two years and display the two years worth of data, along with the savings. Hints: Create three arrays of size 12 each. The first array will store the first year of energy costs, the second array will store the second year after going green, and the third array will store the difference. Also, create a string array that stores the month names. These variables might be defined as follows: notGreenCost = zeros(12,Float) goneGreenCost = zeros(12,Float) savings = zeros(12,Float) months=str.array(itemsize=8,shape=12) months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] Your sample output might look as follows: Enter NOT GREEN energy costs for January Enter now -->789 Enter NOT GREEN energy costs for February Enter now -->790 Enter NOT GREEN energy costs for March Enter now -->890 Enter NOT GREEN energy costs for April Enter now -->773 Enter NOT GREEN energy costs for May Enter now -->723 Enter NOT GREEN energy costs for June Enter now -->759 Enter NOT GREEN energy costs for July Enter now -->690 Enter NOT GREEN energy costs for August Enter now -->681 Enter NOT GREEN energy costs for September Enter now -->782 Enter NOT GREEN energy costs for October Enter now -->791 Enter NOT GREEN energy costs for November Enter now -->898 Enter NOT GREEN energy costs for December Enter now -->923 ------------------------------------------------- Enter GONE GREEN energy costs for January Enter now -->546 Enter GONE GREEN energy costs for February Enter now -->536 Enter GONE GREEN energy costs for March Enter now -->519 Enter GONE GREEN energy costs for April Enter now -->493 Enter GONE GREEN energy costs for May Enter now -->472 Enter GONE GREEN energy costs for June Enter now -->432 Enter GONE GREEN energy costs for July Enter now -->347 Enter GONE GREEN energy costs for August Enter now -->318 Enter GONE GREEN energy costs for September Enter now -->453 Enter GONE GREEN energy costs for October Enter now -->489 Enter GONE GREEN energy costs for November Enter now -->439 Enter GONE GREEN energy costs for December Enter now -->516 ------------------------------------------------- SAVINGS _____________________________________________________ SAVINGS NOT GREEN GONE GREEN MONTH _____________________________________________________ $ 243 $ 789 $ 546 January $ 254 $ 790 $ 536 February $ 371 $ 890 $ 519 March $ 280 $ 773 $ 493 April $ 251 $ 723 $ 472 May $ 327 $ 759 $ 432 June $ 343 $ 690 $ 347 July $ 363 $ 681 $ 318 August $ 329 $ 782 $ 453 September $ 302 $ 791 $ 489 October $ 459 $ 898 $ 439 November $ 407 $ 923 $ 516 December This is as far as I got. I have not a clue where to go from here. I know I'm supposed to define these functions but I am not sure how or what to place in them. #main• Show less from array import* def main(): costNotGreen = zeros(12,Float) costGoneGreen = zeros(12,Float) savings = zeros(12,Float) months = ["January","February","March","April","May","June","July","August","September","October","November","December"] for index in range(12): print "Enter NOT GREEN energy costs for ",months[index] costNotGreen = raw_input("Enter now: ") for index in range(12): print "Enter GONE GREEN energy costs: " costGoneGreen = raw_input("Enter now: ") print "----------------------------------" print "SAVINGS NOT GREEN GONE GREEN MONTH" print "----------------------------------" for index in range(12): savings[index] = costNotGreen[index] - costGoneGreen[index] print "$", savings[index] print "$",costNotGreen[index] print "$",costGoneGreen[index] print months[index]0 answers - Anonymous askedIn Section 18.3.6, we noted that attributes with many different possible values can cause problems w... Show moreIn Section 18.3.6, we noted that attributes with many different possible values can cause problems with the gain measure. Such attributes tend to split the examples into numerous small classes or even singleton classes, thereby appearing to be highly relevant according to the gain measure. The gain-ratio criterion selects attributes according to the ratio between their gain and their intrinsic information content—that is, the amount of information contained in the answer to the question, "What is the value of this attribute?" The gain-ratio criterion therefore tries to measure how efficiently an attribute provides information on the correct classification of an example. Write a mathematical expression for the information content of an attribute, and implement the gain ratio criterion in DECISION-TREE-LEARNING. • (10pt) You do NOT need to program this algorithm, just write down the change you made to the algorithm in formula and descriptions. • (10pt) Apply this modified algorithm to Problem 4, what is the decision tree you get?Is it the same? Hint: to measure the information in the answer to the question “what is the value of this attribute”, you need find the prior probabilities of each possible value (this “value” has different mean from the previous one) of the attribute. For example, here are three possible values for the attribute “Outlook”: sunny, rain and overcast. the prior probabilities of these values can be estimated by empirical fractions among the examples at the current node.Suppose we are now at the root node, p(sunny) = 4/10, because there is four example with sunny overcast among 10 examples. • Show less1 answer - Anonymous asked1 answer - Anonymous askeda( 1, [Hd | _T... Show moreExplain what the following Prolog code does. Assume the third parameter is the result. a( 1, [Hd | _Tl], Hd ). a( N, [_ | Tl], Elem ) :- N > 1, N1 is N - 1, a( N1, Tl, Elem). • Show less0 answers - Anonymous askedWrite a VBA program in Excel (you can use the spreadsheet or design an Use... Show moreProblem 1: Saving Account Write a VBA program in Excel (you can use the spreadsheet or design an UserForm from VB editor in Excel) that calculates the balance of a savings account at the end of a period of time. It should ask to the user for the annual interest rate, the starting balance, and the number of months that have passed since the account was established. Do not accept number of months lower than 1. A loop should then iterate once for every month, performing the following: (1) Ask the user for the amount deposited into the account during the month. Do not accept negative numbers (input validation). The amount should be added to the balance. (2) Ask the user for the amount withdrawn from the account during the month. Do not accept negative numbers (input validation). The amount should be subtracted from the balance. (3) Calculate the monthly interest. The monthly interest rate is the annual interest rate divided by twelve. Hint: monthly interest rate = (annual interest rate percentage /100) / 12 To get the monthly interest multiply the monthly interest rate by the balance, and add the result to the balance. After the last iteration, the program should display the starting balance, ending balance, the total deposits, the total amount of withdrawals, and the total interest earned. If a negative balance is calculated at any point, a message should be displayed indicating the account has been closed and the loop should terminate.) Problem 2: Bank Charges A bank charges $15 per month plus the following check fees for a commercial checking account: $ 0.12 each for fewer than 20 checks $ 0.10 each for 20 – 39 checks $ 0.08 each for 40 – 59 checks $ 0.04 each for 60 or more checks The bank also charges an extra $20 if the balance of the account falls below $500 (before any check fees are applied). Write a program that asks overdrawn.) • Show less0 answers - Anonymous askeda. It must be defined explicitly in every derive... Show moreWhat does it mean for a method in C++ to be Virtual? a. It must be defined explicitly in every derived class. b. Only its declaration appears in the .h file; its definition is in a separate .cc file. c. It employs dynamic dispatch (i.e., it is polymorphic). d. Any class that contains the virtual method is an abstract class and can not be instantiated. In Scheme, what is the value of (car (cdr (map * '(1 3 5) ' (6 4 2))))? a. 10 b. 12 c. (the empty list) d. a run-time error How might the following be written in Prolog? "If i forget my umbrella on a day when it rains, I will get wet." a. wet(m) :- rainy(d), nonumbrella(m,d). b. rainy(d) :- wet(m). noumbrella(m,d):- wet(m). c. raindy(d), nonumbrella(m,d):- wet(m). d. wet(m) :- rainy(d). wet(m):- nonumbrella(m,d). --Will give full points for all questions answered --Will give out helpful if you answer some but not all of them. Thanks, Apharues • Show less1 answer - Anonymous askedCalculatethe amount due for an order. For an order, the user should enter the followinginformation:... Show moreCalculatethe amount due for an order. For an order, the user should enter the followinginformation: customer name, address, city, state (two-letter abbreviation), andZIP code. An order may consist of multiple items. For each item, the user willenter the product description, quantity, and price. Allowthe user to either select ground or express shipping. Also,allow the user to enter New Order or simply exit out of the program. Validatethe quantity and price. Each must be present and numeric. For any bad data,display a message saying “You haveentered the wrong info, Please re-enter again”.. Calculate the charge for thecurrent item and add the charge and quantity into the appropriate totals. Afteryou have calculated the charge for the current item and added the amounts tothe running totals, calculate the shipping amount (from the table below), andadd that to the total amount due for the order. (Do not calculate shipping on individualitems use the order total.) Theshipping charges depend on the subtotal of the products, taken from thefollowing table (express shipping is just $10 higher than ground shipping). Lessthan Ground Shipping ExpressShipping The shipping charges depend on the subtotal of the products, taken from the following table (express shipping is just $10 higher than ground shipping). Less than Ground Shipping Express Shipping Less than $15.00 $4.95 $14.95 15.00 – 49.99 6.95 16.95 50.00 – 99.99 8.95 18.95 100.00 - 199.99 10.95 20.95 200.00 or more 12.95 22.95 Display the address information and the summary in a large text box using concatenation. For the New Order button clear the customer fields and reset the running totals to zero. Test Data Item Quantity Sales Price Deluxe Cooler 2 68 Axe 1 5 Test Data Output Jane Smith 123 Someplace Anywhere, CA 99999 Items ordered: 3 Subtotal: $141.00 Shipping: 10.95 Total: $151.95 • Show less1 answer - CSMajor askedIn a directed graph G=(V,E) we want to send g units of flow from a source s?V to a target t?V at min... Show moreIn a directed graph G=(V,E) we want to send g units of flow from a source s?V to a target t?V at minimum total cost. An edge (u,v) has a maximum capacity c(u,v) and charges d(u,v)·f_(u,v)$ to carry f_(u,v) units of flow. The flow on an edge, f_(u,v), can neither be negative nor exceed the edge capacity. For every vertex except s or t, the flow in must equal the flow out. Express this as an equivalent linear program, minimizing the total cost, subject to the constraints. You may use equality or inequality constraints, but tell me how many, as a function of |V| and |E|. If you obtain the answer online, please provide the sources. Will grade complete answer LifeSaver!!! • Show less0 answers - CSMajor askedA Conjuctive Normal Form (CNF) formula is just an AND of clauses, each of which is the OR of some Bo... Show moreA Conjuctive Normal Form (CNF) formula is just an AND of clauses, each of which is the OR of some Boolean variables, possibly negated. It is satisfiable if some T/F assignment to variables gives a T in each clause. Max Indep Set: In an undirected graph G=(V,E), an independent set V'?V is a set that contains at most one vertex of each edge. (E.g., black vertices at left.) The MaxIndepSet problem asks, for a given k, if there exists an independent set with |V^' |=k. Sketch an argument that says that we can use a subroutine for MaxIndepSet to solve an instance of SAT with k clauses by asking if there is an independent set of size k in the following graph: For each variable in each clause, make a vertex. Connect pairs of vertices that represent variables in the same clause by a clause edge. Connect each vertex representing a variable a with each vertex representing its negation ¬a by a negation edge. Tell me, (in addition to arguing that any instance of SAT that can be solved gives an independent set of size at least k and any instance and vice versa) the maximum size of the graph if each clause has at most m variables. If you find answer online, please provide sources. Will grade complete answers as LifeSaver!!! • Show less0 answers - ColossalAlarm8032 asked0 answers - ColossalAlarm8032 asked1 answer - Anonymous askedA deque (pronounced "deck") is a list-based collection that allows additions and removals to take pl... Show more A deque (pronounced "deck") is a list-based collection that allows additions and removals to take place at both ends. A deque supports the operations addFront(x), removeFront( ), addRear(x), removeRear( ), size( ), and empty( ). Write a class that implements a deque that stores strings using a doubly-linked list that you code yourself. Demonstrate your class with a graphical user interface that allows users to manipulate the deque by typing appropriate commands in a JTextField component, and see the current state of the deque displayed in a JTextArea component. Consult the documentation for the JTextArea class for methods you can use to display each item in the deque on its own line.• Show less In this exercise, you should complete the implementation of the DoubleEndedQueue class by adding a no-arg constructor and the 6 methods described above. The addFront and addRear methods should take a single argument of type String. The removeFront and removeRear methods should have return type of String. size should return an int, and empty should return a boolean. import java.awt.*; import java.awt.event.*; import javax.swing.*; import java.util.*; /** Programming Challenge 21-1. Double ended queue. This class implements a deque class based on linked list. */ class DoubleEndedQueue { /** The Node class stores a list element and a reference to the next node. */ private class Node { String value; // Value of a list element Node next; // Next node in the list Node prev; // Previous element in the list /** Constructor. @param val The element to be stored in the node. @param n The reference to the successor node. @param p The reference to the predecessor node. */ Node(String val, Node n, Node p) { value = val; next = n; prev = p; } /** Constructor. Use when the node has no successor or predecessor. @param val The element to be stored in the node. */ Node(String val) { // Just call the other (sister) constructor this(val, null, null); } } private Node first; // First element on the list (head) private Node last; // Last element on the list // -------------------------------- // ----- ENTER YOUR CODE HERE ----- // -------------------------------- // -------------------------------- // --------- END USER CODE -------- // -------------------------------- /** The toString method computes the string representation of the list. @return The string representation of the linked list. */ public String toString() { StringBuilder strBuilder = new StringBuilder(); // Use p to walk down the linked list Node p = first; while (p != null) { strBuilder.append(p.value + "\n"); p = p.next; } return strBuilder.toString(); } } /** Programming Challenge 21-1. This class is used to demonstrate the operations in the DoubleEndedQueue class. */ public class DoubleEndedQueueDemo extends JFrame { private DoubleEndedQueue ll; // The queue object private JTextArea listView; // User Interface objects private JTextField cmdTextField; private JTextField resultTextField; public DoubleEndedQueueDemo() { ll = new DoubleEndedQueue(); listView = new JTextArea(); cmdTextField = new JTextField(); resultTextField = new JTextField(); // Create a panel and label for result field JPanel resultPanel = new JPanel(new GridLayout(1,2)); resultPanel.add(new JLabel("Command Result")); resultPanel.add(resultTextField); resultTextField.setEditable(false); add(resultPanel, BorderLayout.NORTH); // Put the textArea in the center of the frame add(listView); listView.setEditable(false); listView.setBackground(Color.WHITE); // Create a panel and label for the command text field JPanel cmdPanel = new JPanel(new GridLayout(1,2)); cmdPanel.add(new JLabel("Command:")); cmdPanel.add(cmdTextField); add(cmdPanel, BorderLayout.SOUTH); cmdTextField.addActionListener(new CmdTextListener()); // Set up the frame setTitle("Double Ended Queue Demo"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pack(); setVisible(true); //Create Help String [] cmds = { "addfront element", "addrear element", "removefront", "removerear", "empty", "size" }; new HelpDialog(cmds); } /** Private inner class that responds to commands that the user types into the command entry text field. */ private class CmdTextListener implements ActionListener { public void actionPerformed(ActionEvent evt) { String cmdText = cmdTextField.getText(); Scanner sc = new Scanner(cmdText); String cmd = sc.next(); if (cmd.equalsIgnoreCase("addFront") || cmd.equalsIgnoreCase( "addRear")) { String e = sc.next(); if (cmd.equalsIgnoreCase("addFront")) ll.addFront(e); else ll.addRear(e); listView.setText(ll.toString()); pack(); return; } if (cmd.equalsIgnoreCase("removeFront") || cmd.equalsIgnoreCase("removeRear")) { String res; if (cmd.equalsIgnoreCase("removeFront")) res = ll.removeFront(); else res = ll.removeRear(); resultTextField.setText(String.valueOf(res)); listView.setText(ll.toString()); pack(); return; } if (cmd.equals("isempty") || cmd.equals("empty")) { resultTextField.setText(String.valueOf(ll.empty())); return; } if (cmd.equals("size")) { resultTextField.setText(String.valueOf(ll.size())); } } } /** Displays the list of available commands. */ class HelpDialog extends JDialog { public HelpDialog(String [] cmds) { setTitle("Available Commands"); JTextArea tArea = new JTextArea(); // Used to display commands for (String c : cmds) { tArea.append(c + "\n"); } add(tArea); pack(); setVisible(true); } } /** Get things started. */ public static void main(String [ ] args) { new DoubleEndedQueueDemo(); } }0 answers - Anonymous askedCreate a C + + that prompts the user a maximum number to which the function will run the calculation... More »1 answer - Anonymous asked... Show more##### ^ $$$$ X ##### ^^ $$$$ XXX ##### ^^^ $$$$ XXXX ##### ^^^^ $$$$ XXXXX ##### ^^^^^ $$$$ XXXXXXX enter the size (1-16).what character to be used ie #,^,$,X. enter the figure type ie square, right triangle, square, isoscles triangle.also, if invalid entry ie -1 and 19 program will print invalid entry please try again. also, if figure type is misspelled the program will still run.also, use comments and prompts throuthout program. the program must print the above figures. DOT NOT USE METHODS OR ARRAYS ONLY LOOPS. • Show less1 answer - Anonymous askedWrite a program that deals a five-card poker hand. The write functions to accomplish each of the fol... Show moreWrite a program that deals a five-card poker hand. The write functions to accomplish each of the following: Determine whether the hand contains a pair. Determine whether the hand contains two pairs. Determine whether the hand contains three of a kind (e.g., three jacks). Determine whether the hand contains four of a kind (e.g., four aces). Determine whether the hand contains a flush (i.e., all five cards of the same suit). Determine whether the hand contains a straight (i.e., five cards of consecutive face values). It must be in C++ code, Thanks! • Show less1 answer - Anonymous askedFor this assignment you will design a set of classes that work together to simulate a police officer... Show moreFor this assignment you will design a set of classes that work together to simulate a police officer issuing a parking ticket. You should design the following classes: The ParkedCar Class: This class should simulate a parked car. The class's responsibilities are as follows: - To know the car's make, model, color, license number, and the number of minutes that the car has been parked. The ParkingMeter Class: This class should simulate a parking meter. The class's only responsibility is as follows: - To know the number of minutes of parking time that has been purchased. The ParkingTicket Class: This class should simulate a parking ticket. The class's responsibilites are as follows: - To report the make, model, color, and license number of the illegally parked car. - To report the amount of the fine, which is $25 for the first hour or part of an hour that the car is illegally parked, plus $10 for every additional hour or part of an hour that the car is illegally parked. -To report the name and badge number of the police officer issuing the ticket. The PoliceOfficer Class: This class should simulate a police officer inspecting parked cars. The class's responsibilites are as follows: - To know the police officer's name and badge number. -To examine a ParkedCar object and a ParkingMeter object, and determine whether the car's time has expired. -To issue a parking ticket (generate a ParkingTicket object) if the car's time has expired. Write a program that demonstrates how these classes collaborate • Show less1 answer - SwiftMouse2558 askedWrite a program to process an array of Hunt county incorp... Show moreSorting and Searching an Array Instructions Write a program to process an array of Hunt county incorporated city names according to the specifications and design constraints specified below. Use (part of) your name as the source file name (example Reed.cpp, not homework15.cpp). EMail the completed source program (.cpp file) as an attachment to tombrown at the domain cp.tamu-commerce.edu on or before 3 May. Input 1. A file named "cityInc.txt": For each line in the file, input the name to be stored as an element in city[]. 2. A name to be entered from the console (e.g. Commerce) Output (screen) Print a descriptive program heading followed by a menu of user choices to: 1. Search city[] for a matching name: include a user prompt for city name, print a message to indicate such a city exists, or an exception message; 2. Print all city[] elements; or 3. Exit the program. Processing First input and store the city names as corresponding elements of an array of strings. Sort the array. Then provide a menu of choices for the user to search for a city, print a list of city names, or exit. Design and Style Constraints 1. Insert a comment at the top of your source program with your name, the course code, the homework number and a brief program description. Insert one or more line comments to identify major functions and any complex operations. 2. Make descriptive identifier names, apply conventional case usage, and initialize variables during declaration. 3. Indent and align statements for readability and to show hierarchy. 4. Create a program with a main function and a subordinate functions to sort the array, locate a city in the sorted array, and print the array. 5. Incorporate a sentinel- or flag-controlled loop to repeat the menu of choices until the user chooses to exit. *Notes: Our Webpage contains links to program examples that apply to this lab. It is recommended that you analyze and implement these examples. Textbook chapters 9 and 10 provide additional background for this assignment. To maximize points earned, submit a "deliverable product". It may or may not perform every step specified, but what is done is correct and useful; No modifications will be made to eliminate syntax errors. Data for "cityInc.txt": Greenville Commerce Caddo Mills Wolfe City Celeste Quinlan West Tawakoni Campbell Lone Oak Neylandville Union Valley Hawk Cove • Show less1 answer - Anonymous askedmatrix = [3 * x.length... Show more WHY DOESN'T IT WORK? <html><body><script> x = [ 1, 3, 7, 9 ] y = [ 1, 3, -3, 7 ] matrix = [3 * x.length - 3][3 * x.length - 2] var count = 0; //insert zero matrix here row = var [ 3 * x.length - 2 ] for ( i = 0; i < x.length - 1; i++) { matrix [count][i] = x[i] * x[i]; matrix [count][i + x.length - 1] = x[i]; matrix [count][i + (2*x.length) - 2] = 1; matrix [count][row.length - 1] = y[i]; count++; matrix [count][i] = x[i + 1] * x[i + 1]; matrix [count][i + x.length - 1] = x[i + 1]; matrix [count][i + (2*x.length) - 2] = 1; matrix [count][row.length - 1] = y[i + 1]; count++; } for ( j = 1; j < x.length - 1; j++) { matrix [count] [j - 1] = 2 * x[j]; matrix [count] [j] = -2 * x[j]; matrix [count] [j + x.length - 2] = 1; matrix [count] [j + x.length - 1] = -1; count ++; } matrix [count] [0] = 1; count++; document.write ("a0 =", x[0][9]) document.write ("a1 =", x[1][9]) document.write ("a2 =", x[2][9]) document.write ("a3 =", x[3][9]) document.write ("b1 =", x[4][9]) document.write ("b2 =", x[5][9]) document.write ("b3 =", x[6][9]) document.write ("b4 =", x[7][9]) document.write ("c1 =", x[8][9]) • Show less1 answer - Anonymous askedCreate a class named Movie that can be used with your video rental business. The Movie class should... Show moreCreate a class named Movie that can be used with your video rental business. The Movie class should track the Motion Picture Association of America (MPAA) rating (e.g., Rated G, PG-13, R), ID Number, and movie title with appropriate accessor and mutator methods. Also create and equals() method that overrides Object's equals() method, where two movies are equal if their ID number is identical. Next, create three additional movie classes named Action, Comedy, and Drama that are derived from are $2.50/day, and dramas are $2/day. Create driver class for the following: public class Movie { // instance variables private String rating; private int id; private String title; // constructors public Movie(String title, int id, String rating) { setTitle(title); setID(id); setRating(rating); } // mutators public void setID(int i) { id = i; } public void setTitle(String t) { title = t; } public void setRating(String r) { rating = r; } // accessors public int getID() { return id; } public String getTitle() { return title; } public String getRating() { return rating; } public double calcLateFees(int numDays) { return 2.0*numDays; } public boolean equals(Object o) { Movie rhs = (Movie)o; return rhs.id == id; } } public class Action extends Movie { public Action(String title, int id, String rating) { super(title, id, rating); } public double calcLateFees(int numDays) { return 3.0*numDays; } } public class Comedy extends Movie { public Comedy(String title, int id, String rating) { super(title, id, rating); } public double calcLateFees(int numDays) { return 2.5*numDays; } } public class Drama extends Movie { public Action(String title, int id, String rating) { super(title, id, rating); } public double calcLateFees(int numDays) { return 2.0*numDays; } } • Show less2 answers - IWishIWasSmarter askedThis is a method in my cpp version. I also have it declared in my header. I have done other parts, n... Show moreThis is a method in my cpp version. I also have it declared in my header. I have done other parts, now this is left. Help me fast! /* Get context for input parameter word (case-insensitive comparison) * Return a dynamically allocated vector of strings, each string * consisting of contextSize number of words before word and contextSize * number of words after word, with word in the middle (set off with "<<" * before the word and ">>" after the word). ContextSize defaults to 5. * It is user's responsibility to delete the vector. */ vector<string>* concordance::getContext(string word, int contextSize = 5) { } EX Run: Enter Word: HELLO ______ ______ _______ __________ ________ <<HELLO>> ______ ______ _______ _______ ______ It finds the word and adds << >> and shows 5 words before it and after. It looks for the word from a text file. I have already taken the words and put them into a vector. • Show less0 answers - Anonymous askedThis assignment will focus on the manipulation of linear... Show more(*Using Microsoft Visual C++ 2010 Express*) This assignment will focus on the manipulation of linear linked lists using dynamic storage allocation. Each section of the assignment should be structured within its own function, passing parameters as necessary. You are to construct a C program, dbase.c, utilizing dynamic variables, which will retrieve, update and manipulate a small payroll database. The payroll data can be found in the file payfile.txt. The data for each employee should be read into a struct containing the following fields: • firstName - 10 characters maximum • lastName - 15 characters maximum • gender - m/f • tenure - integer between 0 and 50 • rate - h/w • salary - float Your program should perform each of the operations indicated below. Be sure to clearly label your output for each section. a) Read the data from payfile.txt into a node struct for each employee and insert the node onto the end of a linear linked list. Here’s an fscanf() statement that will help you to read the data from the input file: fscanf(fp, "%s %s %c %d %c %f\n", p->firstName, p->lastName, &(p->gender), &(p->tenure), &(p->rate), &(p->salary)); b) Output the contents of each of the nodes in the linked list into an easily read format, similar to the format of the input file. c) Traverse the list and output the number of employees in the database. d) Output the first and last name of all women on the payroll. e) Output the first and last name and salary of all weekly employees who make more than $35,000 per year and who have been with the company for at least five years. f) Give a raise of $.75 per hour to all employees who are paid on an hourly basis and make less than $10.00 per hour; and give a raise of $50.00 per week to all employees who are paid on a weekly basis and make less than $350.00 per week. Output the first and last name and new salary for each employee on the payroll who has received a raise. g) Sort the nodes of the list into alphabetical order according to last name and output the first and last name and salary for each employee. h) The file hirefile.txt contains data for three employees to be hired by the company. Insert the record for each of the new employees into the correct location in the linear linked list and output the first and last name and salary for each employee in the database. i) The file firefile.txt contains data for two employees to be fired by the company. Delete the corresponding node for each of the employees to be fired and output the first and last name and salary for each employee in the database. Note that previously we only looked at linear linked lists of nodes that contained integers and now we are looking at linear linked lists of nodes that contain structs. Accordingly, I have rewritten the linear linked list source code (list.h and list.c) to allow our lists to work with employee structs in the info field of each node and not just integers. The list.h header file contains the declaration for the employee struct as well as the function prototypes. The list.c file contains list functions that I have rewritten for use with the employee struct for this lab. Note: Here's a rather simple algorithm to sort a linked list. Assume list points to an unsorted list and sortList points to NULL. while list != null remove first node from list insert node into correct location in sortList list = sortList Here is what the data files for the lab look like: payfile.txt Debbie Starr F 3 W 1000.00 Joan Jacobus F 9 W 925.00 David Renn M 3 H 4.75 Albert Cahana M 3 H 18.75 Douglas Sheer M 5 W 250.00 Shari Buchman F 9 W 325.00 Sara Jones F 1 H 7.50 Ricky Mofsen M 6 H 12.50 Jean Brennan F 6 H 5.40 Jamie Michaels F 8 W 150.00 hirefile.txt Barry Allen M 0 H 6.75 Nina Pinella F 0 W 425.00 Lane Wagger M 0 W 725.00 firefile.txt Jean Brennan F Ricky Mofsen M Points to remember: • Make sure you are creating a C program and not a C++ program. • You should not be using any global variables in your program. • Show less1 answer - Anonymous askedLet A be an array of ints equal to [13, 17, 29, 33, 41, 53, 84, 112, 167, 222, 341, 876]. During eac... More »1 answer - Anonymous asked1 answer - Anonymous askedThe code for Selection Sort is shown. Suppose we have an array of doubles named A which has 15 eleme... Show moreThe code for Selection Sort is shown. Suppose we have an array of doubles named A which has 15 elements and A is initialized to [ 4.6, 8.4, 1.9, 7.3, 6.9, 4.22, 5.9, 1.6, 6.3, 9.1, 25.8, 8.8, 4.6, 6.4, 7.3 ]. Trace the code by hand and tell me, in the SelectionSort() function, what would be the contents of A at the beginning of the seventh pass through the for loop, i.e., before FindMin() is called. The value of Bar would be 6 at this time. Document your answer, i.e. explain how you determined the answer. Depending on the order of the array elements at the beginning of Selection Sort, it may actually sort the list into proper order before the for loop gets around to terminating. For example, suppose A = [ 2, 1, 3, 4 ]. After the first pass through the for loop, A would be [1, 2, 3,4] and this list is sorted, even though the Selection Sort algorithm will continue looping for two more passes. Given array A from the previous exercise, how many times will the for loop iterate before A becomes sorted—even though the loop may continue iterating because it does not know A is sorted? How many times will Selection Sort continue to iterate even though A is sorted? #include <iomanip> #include <iostream> using namespace std; //----- Swap(int[], int, int) ---------------------------------------------------------------------- // Swaps the array elements at indices i and j. //-------------------------------------------------------------------------------------------------- void Swap(int A[], int i, int j) { int temp = A[i]; A[i] = A[j]; A[j] = temp; } //----- FindMin(int[], int, int) ------------------------------------------------------------------- // Finds the minimum element in array A starting at A[IndexStart] and contuing up to and including // A[IndexStop]. //-------------------------------------------------------------------------------------------------- int FindMin(int A[], int IndexStart, int IndexStop) { int IndexMin = IndexStart; for (int Index = IndexStart + 1; Index <= IndexStop; Index++) { if (A[Index] < A[IndexMin]) { IndexMin = Index; } } return IndexMin; } //----- SelectionSort(int[], int) ------------------------------------------------------------------ // Sorts array A using the Selection Sort algorithm. //-------------------------------------------------------------------------------------------------- void SelectionSort(int A[], int Size) { for (int Bar = 0; Bar <= Size-1; Bar++) { // Elements above 'Bar' are sorted and those at or int IndexMin = FindMin(A, Bar, Size-1); // below 'Bar' are unsorted. We search the unsorted Swap(A, Bar, IndexMin); // part of the array to find the minimum and then } // swap to put the minimum in it's place. } • Show less1 answer - Anonymous askedPrompt the user for two integers. USING A FOR LOOP, calculate the total of the numbers between those... Show morePrompt the user for two integers. USING A FOR LOOP, calculate the total of the numbers between those numbers, inclusive. Run the program to see the expected output. My code is below but I cannot the correct calculation in the loop. #include <iostream> using namespace std; int main() { int sum = 0; //; while (lowerbound = 50 && lowerbound < 90) { lowerbound + sum; lowerbound++; } // Print the result cout << "The total of the numbers " << upperbound << " and " << lowerbound << " is " << sum << endl; } • Show less3 answers - kyokyo askedHello I have a question; I know how to find the length of the longest increasing subsequence of an a... More »0 answers - Anonymous askedUSING A FOR LOOP, calculate a 'times table' of the numbers b... Show more #include<iostream.h> #include<conio.h> USING A FOR LOOP, calculate a 'times table' of the numbers between 10 and 24. Run the program to see the expected output. How can I code this correctly? int main() { for (int x=1;x<=10;x++) { for(int i=1;i<=24;i++) cout<<i<<"*"<<x<<"="<<i*x<<"\t\t"; cout<<endl; } return 0; } • Show less3 answers - Anonymous askedAll t... Show more Using C++, Using a FOR LOOP, read 8 lines of input from the user and display it on the screen. All the code I have developed so far truncates the string entry. Or will go into a endless loop. Please help! With my code below I am only able to capture the last line of input. #include <iostream> #include <string> using namespace std; int main () { string input; getline (cin,input); getline (cin,input); getline (cin,input); getline (cin,input); getline (cin,input); getline (cin,input); getline (cin,input); for (int x = 1; x <= 8; x++) { cout << input << endl; } return 0; }with some code to make this work. . • Show less Expected Output: His reasons are as two grains of wheat hid in two bushels of chaff. You shall seek all day ere you find them: and, when you have them, they are not worth the search. Even there, where merchants most do congregate. The devil can cite Scripture for his purpose. Sufferance is the badge of all our tribe. Many a time, and oft, have you rated me. It is a wise father that knows his own child. All things that are, are with more spirits chased than enjoyed.2 answers - Anonymous askedI Have a mySql database with one field that is set as a BLOB type. If I execute my PHP that gets dat... Show moreI Have a mySql database with one field that is set as a BLOB type. If I execute my PHP that gets data row by row, how can I insert this BLOB into my HTML table that is generated to store the database data in? Note : This has to be done using PHP and mySql. Thanks in advance • Show less1 answer - Anonymous askedUsing C++, Prompt the user for two integers. USING A WHILE LOOP, calculate the total of the numbers... Show moreUsing C++, Prompt the user for two integers. USING A WHILE LOOP, calculate the total of the numbers between those numbers, inclusive. #include <iostream> using namespace std; int main() { int sum; //; int count; for (count = lowerbound; count<=upperbound; count++) { sum = sum + count; count++; } // Print the result cout << "The total of the numbers " << upperbound << " and " << lowerbound << " is " << sum << endl; } Help please! • Show less2 answers - Anonymous askedUsing C++, Using a WHILE LOOP, get integers from the user. Then, using a FOR LOOP, calculate a 'time... Show moreUsing C++, Using a WHILE LOOP, get integers from the user. Then, using a FOR LOOP, calculate a 'times table' of that number and the numbers between 1 and 17. When the user enters a zero, end the program. I know this is totally wrong below, please help! #include <iostream> using namespace std; int main() { cout << "Enter a number between 1 and 17: "; cin >> num; do { cout << "Enter a number between 1 and 17: "; cin >> num; } while (num<MULT_MIN || num > MULT_MAX); while (num <MULT_MIN || num > MULT_MAX) } • Show less5 answers - Anonymous askedHelp me write a program that lets a user enter a first and last name in a single JOptionPane.showInp... More »1 answer - raynor36 askedforever. There ar... Show more We are going to use 1 producer thread and 5 consume threads. They are going to run forever. There are two versions of the algorithm: 1. Without using semaphores (proj7v1.cpp) 2. With sempahores (proj7v2.cpp) Submit both programs with sample outputs for both. Try to highlight the differences between both outputs. Feel free to insert sleep() in appropriate places to enable strangebehavior. Try to keep those sleep() calls in both programs too. Use the following diagrams and code to guide you through the project. We are going to usean array of 20 integers (that is, n = 20). Solution: No semaphores Producer while (true) { v = produce(); while ((in + 1) % n == out) /* do nothing */; append(v); } Consumer while (true) { while (in == out) /* do nothing */; w = take(); consume(w); } solution: With semaphores• Show less Semaphore num = 0, c = 1, e = sizeofbuffer; Producer while (true) { v = produce(); semWait(e); append(v); semPost(num); } Consumer while (true) { semWait(num); semWait(c); w = take(); semPost(c); semPost(e); consume(w); } append(v): b[in] = v; in= (in + 1) % n; int take() w = b[out]; sleep(1); out = (out + 1) % n; return w; Array b[] can be an array of 20 integers. produce() can be as simple as return the next integer in the sequence. consume(w) can simply print w out. You may want to define & use a class for Buffer which has this array and append() and take() as its methods. Note: You need to specify lpthread library to compile your programs: g++ *.cpp –lpthread Here is the sample output for v2: Producer adds 1 Producer adds 2 Consumer5: 1 Consumer2: 2 Producer adds 3 Producer adds 4 Producer adds 5 Consumer3: 3 Consumer1: 4 Consumer4: 5 Producer adds 6 Consumer1: 6 Producer adds 7 Producer adds 8 Producer adds 9 Producer adds 10 Consumer2: 7 Consumer4: 9 Consumer3: 8 Consumer5: 10 Producer adds 11 …1 answer - CapablePawn3928 asked... Show moreWrite a complete C++ program called hw14.cpp defines a struct called person as follows struct person { string first; string last; int age; }; Your program should prompt the user for a value for each field. It should then store the user’s input in such a struct. It should then print out the student’s name and age. SAMPLE RUN Please enter your first name: Jane Please enter your last name: Doe Please enter your age: 19 Hi Jane Doe! You are 19 years old. NOTE: Recall that you can read a string in directly using cin, you DON’T have to read it in char by char • Show less1 answer - Anonymous asked1 answer - Anonymous askedp... Show moreimport java.io.IOException; import java.util.Scanner; import java.util.Random; public class Final { public static void main(String[] args) throws IOException { int c; // user choice int nStudents; // number of students int nQuizes; // number of quizes int SPok; // set parameters int FAok; // fill array long[] studentIDs = new long[DefineConstants.MAXSTUDENTS]; int[][] studentQScores = new int[DefineConstants.MAXSTUDENTS][DefineConstants.MAXQUIZES]; SPok = 0; FAok = 0; nStudents = 0; nQuizes = 0; System.out .println("***************************************************"); System.out .println(" WELCOME TO THE GRADEBOOK, PREPARE TO BE AMAZED!!!"); System.out .println("***************************************************"); while (true) { System.out .print("\nQuit Help SetParams FillArray DisplayResults "); System.out.print("\nSelect Q H S F or D and press Enter: "); while ((c = System.in.read()) == '\n') ; if (c == 'q') c = 'Q'; if (c == 'h') c = 'H'; if (c == 's') c = 'S'; if (c == 'f') c = 'F'; if (c == 'd') c = 'D'; switch (c) { case 'Q': Quit(); break; case 'H': getHelp(); break; case 'S': setParams(nStudents, nQuizes); SPok = 1; FAok = 0; break; case 'F': if (SPok != 0) { fillArray(studentIDs, studentQScores, nStudents, nQuizes); FAok = 1; } else System.out.print("You must SetParams first\n"); break; case 'D': if (FAok != 0) displayResults(studentIDs, studentQScores, nStudents, nQuizes); else { if (SPok != 0) System.out.print(" You must FillArray first\n"); else System.out.print(" You must SetParams first\n"); } break; default: System.out.printf("Invalid entry <%c>", c); break; } } } private static void displayResults(long[] sIDs, int[][] sQScores, int nS, int nQ) { int i; int j; int n; int MAXQUIZES = 5; double[] quizAvgs = new double[MAXQUIZES]; int[] quizMaxs = new int[MAXQUIZES]; int[] quizMins = new int[MAXQUIZES]; int[] quizMeds = new int[MAXQUIZES]; byte[][] quizGrid = new byte[MAXQUIZES][101]; // Grade rang 0 - 100 --> // 101 Elements for (i = 0; i < nQ; i++) { quizAvgs[i] = 0.0; quizMaxs[i] = 0; quizMins[i] = 100; for (j = 0; j < 101; j++) quizGrid[i][j] = 0; } System.out.print("\nStudent ID:"); for (j = 0; j < nQ; j++) System.out.printf("Quiz %1d", j + 1); for (i = 0; i < nS; i++) { System.out.printf("\n%d", sIDs[i]); for (j = 0; j < nQ; j++) { System.out.printf(" %3d", sQScores[i][j]); quizAvgs[j] = (i * quizAvgs[j] + sQScores[i][j] / (i + 1)); if (sQScores[i][j] > quizMaxs[j]) quizMaxs[j] = sQScores[i][j]; if (sQScores[i][j] < quizMins[j]) quizMins[j] = sQScores[i][j]; quizGrid[j][sQScores[i][j]] = 1; } } for (i = 0; i < nQ; i++) { n = 0; for (j = 0; j < 101; j++) if (quizGrid[i][j] != 0) n++; n = (n + 1) / 2; for (j = 0; j < 101; j++) if (quizGrid[i][j] != 0) n--; quizMeds[i] = j - 1; } System.out.print("\n\nMaxs:"); for (i = 0; i < nQ; i++) System.out.printf("%3d", quizMaxs[i]); System.out.print("\nMins:"); for (i = 0; i < nQ; i++) System.out.printf("%3d", quizMins[i]); System.out.print("\nAvgs:"); for (i = 0; i < nQ; i++) System.out.printf("%6.2f", quizAvgs[i]); System.out.print("\nMeds:"); for (i = 0; i < nQ; i++) System.out.printf("%3d", quizMeds[i]); System.out.print("\n"); } private static void fillArray(long[] sIDs, int[][] sQScores, int nS, int nQ) { int i; int j; int FIRSTSID = 75678; for (i = 0; i < nS; i++) { (sIDs)[i] = FIRSTSID + i; for (j = 0; j < nQ; j++) (sQScores)[i][j] = (int) (((float) rand()) / 324.5); } } private static float rand() { Random rand = new Random(); int number = 0; { for (int counter = +1; counter <= 1; counter++) { number = rand.nextInt(100); System.out.println(number); } } return number; } private static void setParams(int nS, int nQ) { int n; new String(new char[80]); Scanner keyboard = new Scanner(System.in); n = 0; while (n < 1 || n > 50) { System.out.print("Please enter the number of students(1-50):"); n = keyboard.nextInt(); } nS = n; n = 0; while (n < 1 || n > 5) { System.out .print("Please enter the number of quizes per student(1-5):"); n = keyboard.nextInt(); } nQ = n; } private static void getHelp() { System.out .println("\n______________________HELP MENU__________________________" + "\n Welcome to the Help Menu, here you will find tips to using this program." + "\n" + "\n The first step is to click the 'SET PARAMETERS' button. Here you will " + "\n input the number of students (up to 50) and number of quizzes for each " + "\n student (up to 5.) " + "\n " + "\n Next you will select the 'FILL ARRAY' button. This button will open up" + "\n the menu to select the ID numbers and quiz grades for each student and " + "\n quiz, then store them within the paremeters set in 'SET PARAMETERS'." + "\n " + "\n Finally you will click the 'DISPLAY' button. This button will compile all" + "\n the data you entered in the last two sections and display the results." + "\n This program computes and displays the lowest, highest, average, and" + "\n medium for each grade of each set of quizzes." + "\n " + "\n As an added note if you wish to quit this program at any time, simply" + "\n press the 'QUIT' button and the program will close"); } private static void Quit() { System.out.print("\nADIOS\n"); System.exit(0); } final class DefineConstants { public static final int MAXSTUDENTS = 50; public static final int MAXQUIZES = 5; public static final int FIRSTSID = 75678; } } • Show less1 answer - Anonymous askedThe pressure of a gas changes as the volume and temperature of the gas vary. This variation can be m... Show moreThe pressure of a gas changes as the volume and temperature of the gas vary. This variation can be modeled using the perfect gas law given by: PV = nRT or by the Vander Waals equation of state given by: (P + (a * n^2)/V^2) * (V - b*n) = n * R * T where P is the pressure in atmospheres, V is the volume in liters, n is the number of moles of the gas, and T is the temperature in K. a and b are constants which depend on the particular gas and R is the gas constant whose value is: R = 0.08206L atm/(mol K). The following table gives values of a and b for various gases. Gas a (L^2 atm/mol^2) b (L/mol) H2 0.2444 0.02661 O2 1.360 0.03183 N2 1.390 0.03913 CO2 3.592 0.04267 Ar 1.345 0.03219 Ne 0.2107 0.01709 He 0.03412 0.02370 Write a program that uses the Vander Waals equation of state and the perfect gas law to display in tabular form the relationship between the pressure and the volume of n moles of a gas at a constant absolute temperature, T, over a range of volumes. Inputs to the program are type of gas, n, T, the initial volume (mL) of the volume range, a volume step size (in mL), and the number of lines(values for volume) in the table. Your program should output to the screen a volume vs. pressure table in the following format. Pressure vs Volume using the Vander Walls Equation and the perfect gas law. Volume vs. pressure for 0.020 moles of CO2 at 300.00 deg Kelvin. Initial volume = 400.00mL, volume step = 50.00mL, number of output line = 5. Volume Pressure- VdW (atm) Pressure -Ideal (atm) 400.00 1.225 1.231 450.00 1.089 1.094 500.00 0.981 0.985 550.00 0.892 0.895 600.00 0.818 0.821 Program Requirements: 1. Your program should allow the user to run one case after another. (A case is a set of inputs.) Do this by asking the user whether another case should be run (use a 'Yes' or 'No' response); Do this repeatedly until the user responds in the negative. 2. The gas properties are in a text file named gas-properties.txt. The file contains 7 lines of data: values of a and b for the appropriate gas are on each line. File: gas-properties.txt. 0.2444 0.02661 1.360 0.03183 1.390 0.03913 3.592 0.04267 1.345 0.03219 0.2107 0.01709 0.03412 0.02370 3. Use function as much as possible, where appropriate. At a minimum, -Use a function to get the line number identifying the desired gas. Present the user with a menu showing the gas names, and make sure the input data is valid (a value of 1 to 7). Use this number to read the correct values for the constants a and b (e.g. line 1 contains data for H2, line 7 contains data for He, etc). -Use functions to get each of the input values. They are n (> 0), T, initial volume (>0). Volume step size (>0), and number of lines (between 1 and 50) in the output table. Make sure these functions return valid inputs. -Use a function to calculate the Vander Waals pressure as a function of V, T, a, b, n and R, and use a function to calculate the ideal pressure as a function of V, T and n. -Use a function to get an input from the user to determine when to stop running another case. -Use a function to print the output header information to the screen (and to the file if requested). 4. Your program should include an option to write the output to a text file as well as to the screen. For each case, ask the user if the data should be written to a file, as well as the screen. If the text file option is desired, write the data to a file named “pressure-volume.txt. Be sure to open your file with word-pad to make sure that it was written correctly. Notes: the constants have units, but the initial volume, and the table volume values are to be in milliliters. When you mix numeric and character input, remember that there may be a new line character left in the buffer when you try to read a character. Do not hard code any constants. (R is a constant, a and b are not: they change with the type of gas.) • Show less0 answers - Anonymous askedWrite a program to process weekly employee time cards and compute pay for all employees of a... Show morePROBLEM: Write a program to process weekly employee time cards and compute pay for all employees of a company. Each employee will have three input data items: employee name hourly wage rate number of hours worked during the week The program should input the data of all employees into three separate arrays: names array hourly rates array hours worked array Employees are to be paid time-and-a-half for all hours over 40 per week. A tax amount of 3.625 percent of gross pay will be deducted. The program should store computed net pay and gross pay for each employee into two more array: gross pay array net pay array After processing all employees, the program should display the data from the arrays for all employees in tabular form. Here is suggested output format: EMPLOYEE PAY GROSS NAME RATE HOURS PAY NET PAY Frodo Baggins 7.75 35 271.25 261.42 Tonya Collins 12.50 45 593.75 572.23 Dequan White 40.00 50 2200.00 2120.25 Mary Vincent 30.00 30 900.00 867.38 . . . . . Style Your program should have four functions namely: main() inputEmployeeInfo(string name[], float hours[], float payrate[], int & size ) computePay(float grosspay[], float netpay[], float hours[], float payrate[], int size ) outputPayReport(string name[], float hours[], float payrate[], float grosspay[], float netpay[], int size) Each of the last three functions should be called from main and should have a loop to process all employees. You may use the SENTINEL value of zzzzzz for the name to terminate the loop. The respective arrays should be passed to the functions as parameters as needed. • Show less0 answers - isra008 askedok my getData() function reads a text file and stores it into an array called info, and everytime i... Show more ok my getData() function reads a text file and stores it into an array called info, and everytime it reads a letter from that file it calls a member called addRear which comes from my link list class "slist". what i need help on is how could i display this information using my Printdata function and display in on the screen on a readable format like below, please any help. Also my link list class has a member called displayAll which displays all the elements on the link list lets say the my text file looks something like this A 2 B C B 1 D C 1 A D 2 C A #include <iostream> #include <fstream> #include <vector> #include <string> #include "slist.h" #include "ll.C" using namespace std; void getData(); void printData(); const int SIZE = 10; struct info { char vertexName; int outDegree; slist adjacentOnes; // this comes from slist.h which is a link list class }; int main() { } //PURPOSE: GETS DATA FROM THE FILE void getData() { ifstream fin("table.txt"); // opens the file string line; // declares line as a string // goes on a for loop and reads every variable, if variable is // a letter it calls addRear and appends it to the back of a list for(int i = 0; i<SIZE && getline(fin, line); i++) { istringstream S(line); S >> table[i].vertexName >> table[i].outDegree; char x; while(S >> x) table[i].adjacentOnes.addRear(x); } } } void printData() 1 answer - Anonymous askedUsing first-order logic, represent as accurately as possible the information contained in these comm... Show moreUsing first-order logic, represent as accurately as possible the information contained in these comments on the availability of mortgages during the credit crunch. To be considered for the best mortgage deals during the current difficult conditions, you must borrow substantially less than the full purchase price, have a perfect credit record and be able to act fast. Only people who have built up savings over several years and have shown their ability to live on less than their salary are able to get a mortgage. It is first-time buyers who are hardest hit by the need to stump up a bigger deposit in order to get the choice of the best deals. However, with house prices flat or even falling in some areas, there is less chance of being priced out of the market while you save up the deposit. • Show less0 answers - Anonymous askedimport java.awt.event.Ac... Show moreimport javax.swing.*; import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; public class JoesAutomotive extends JPanel { private static final long serialVersionUID = 1L; public final double Oil_change1 = 26.00; public final double Lube_job1 = 18.00; public final double Radiator_flush1 = 30.00; public final double Transmission_flush1 = 80.00; public final double Inspection1 = 15.00; public final double Muffler_replacement1 = 100.00; public final double Tire_rotation1 = 20.00; private JCheckBox Oil_change; private JCheckBox Lube_job; private JCheckBox Radiator_flush; private JCheckBox Transmission_flush; private JCheckBox Inspection; private JCheckBox Muffler_replacement; private JCheckBox Tire_rotation; private JButton calculate; // added this new button private JTextField calcTextField; // added this text field for the calcTextField public JoesAutomotive() { setLayout(new GridLayout(4, 0)); Oil_change = new JCheckBox("Oil change"); Lube_job = new JCheckBox("Lube job"); Radiator_flush = new JCheckBox("Radiator flush"); Transmission_flush = new JCheckBox("Transmission flush"); Inspection = new JCheckBox("Inspection"); Muffler_replacement = new JCheckBox("Muffler_replacement"); Tire_rotation = new JCheckBox("Tire_rotation"); setBorder(BorderFactory.createTitledBorder("Toppings")); add(Oil_change); add(Lube_job); add(Radiator_flush); add(Transmission_flush); add(Inspection); add(Muffler_replacement); add(Tire_rotation); calculate = new JButton("Calculate!"); calculate.addActionListener(new ButtonListener()); calcTextField = new JTextField(""); calcTextField.setEditable(false); // so that user cannot edit final data add(calculate); add(calcTextField); } private class ButtonListener implements ActionListener { public void actionPerformed(ActionEvent e) { if (e.getSource() == calculate) { actionCalculate(); } } } private void actionCalculate() { double total = 0; if(Oil_change.isSelected()) total = total + Oil_change1; if(Lube_job.isSelected()) total = total + Radiator_flush1; if(Transmission_flush.isSelected()) total = total + Transmission_flush1; if(Inspection.isSelected()) total = total + Inspection1; if(Muffler_replacement.isSelected()) total = total + Muffler_replacement1; if(Tire_rotation.isSelected()) total = total + Tire_rotation1; // converts double to a string and tacks on the dollar sign String sum = "$"+Double.toString(total); calcTextField.setText(sum); } public double getMaintenanceCost() { double MaintenanceCost = 0.0; if (Oil_change.isSelected()) { MaintenanceCost += Oil_change1; } if (Lube_job.isSelected()) { MaintenanceCost += Lube_job1; } if (Radiator_flush.isSelected()) { MaintenanceCost += Radiator_flush1; } if (Transmission_flush.isSelected()) { MaintenanceCost += Transmission_flush1; } if (Inspection.isSelected()) { MaintenanceCost += Inspection1; } if (Inspection.isSelected()) { MaintenanceCost += Muffler_replacement1; } if (Inspection.isSelected()) { MaintenanceCost += Tire_rotation1; } return MaintenanceCost; } public static void main(String args[]) { JFrame jf = new JFrame(); JoesAutomotive ja = new JoesAutomotive(); jf.add(ja); jf.setSize(300, 300); } } • Show less1 answer - Anonymous asked0 answers - Anonymous asked(Figure 18.5) f... Show more(25 pt) Learn to build a decision tree using the algorithm described in Section 18.3 (Figure 18.5) from the following examples. Show the step-by-step process. Day Outlook Temperature Humidity Wind PlayTennis D1 sunny Hot High Weak No D2 sunny Hot High Strong No D3 overcast Hot High Weak Yes D4 rain Mild High Weak Yes D5 rain Cool Normal Weak Yes D6 rain Cool Normal Strong No D7 overcast Cool Normal Strong Yes D8 sunny Cool Normal Weak Yes D9 sunny Mild Normal Strong Yes D10 overcast Mild High Strong Yes Based on your decision tree, what are the results for the following cases? Day Outlook Temperature Humidity Wind PlayTennis D11 sunny Mild High Weak D12 rain Mild Normal Weak D13 overcast Hot Normal Weak D14 rain Mild High Strong • Show less2 answers - Anonymous askedWrite a program about the race between the rabbit and the turtle. Use random numbers to develop a si... Show moreWrite a program about the race between the rabbit and the turtle. Use random numbers to develop a simulation. Begin the race at square 1 of 70. Each square represents a position along the course of the race with the end line at square 70. The clock ticks once per second adjusting the position of both each time. Begin the race by printing: Bang !!!! And they're off!!!! Start each animal at position 1. If animal slips left before square 1, move back to square 1. For each tick of the clock (repetition of the loop), print a 70-position line showing the letter T for turtle and R for rabbit. If both land in same square, the turtle bites the rabbit and program prints OUCH!!! beginning at that position. All print positions other than the T, R or OUCH!!! should be blank. After each line, determine whether either animal has reached or passed square 70. If so, print name of winner, terminate. If turtle wins print: Turtle Wins!!! Yay!!! If Rabbit wins: Rabit wins. Yuch. If neither, perform loop again to simulate the next tick of the clock. • Show less1 answer - Anonymous asked1 answer - Anonymous asked1 answer - Anonymous asked* Oil change -... Show moreJoe's Automotive Joe's Automotive performs the following routine maintenanceservices: * Oil change - $26.00 * Lube job - $18.00 * Radiator flush - $30.00 * Transmission flush - $80.00 * Inspection - $15.00 * Muffler replacement - $100.00 * Tire rotation - $20.00 Joe also performs other non routine services and charges for partsand for labor ($20 per hour). Create a GUI application thatdisplays the total for a customer's visit to Joe's. • Show less0 answers - Anonymous askedWrite an interactive program that performs fractional calculations. You need define a class CFrac fo... Show more Write an interactive program that performs fractional calculations. You need define a class CFrac for a fractional number according to the UML diagram below: • Show less Where the add, subtract, multiply and divide functions perform the corresponding arithmetic calculations on the two parameters, and store the result on the calling object. The parameters are passed as constant reference so it is efficient and the parameters can’t be modified. The two argument constructor also serves as default constructor. The setFrac function prompts and gets user input of numerator and denominator of a fraction. The simplify function reduces the fraction to least term; it does not care if the numerator is greater than the denominator. It is private so it’s only called by other member functions. The display function is responsible for displaying the fraction properly. If the numerator is greater than denominator, then it displays a whole number followed by a fraction. For example, a fractional number of 2/8 should be reduced to ¼, and 9/6 should be reduced to 3/2, and 3/2 should be displayed as 1 ½. Here are some more examples: 5/2 should be displayed as 2 1/2 4/4 should be displayed as 1 0/8 should be displayed as 0 4/12 should be displayed as 1/3 Now that you have an ADT for a fractional number, you will use it for the fractional calculation program. The program should first display a menu to let user choose what kind of calculation to perform: Fraction Calculation Menu: 1 -- ADDITION 2 -- SUBTRACTION 3 -- MULTIPLICATION 4 -- DIVISION 5 -- EXIT -- > As long as user does not choose 5, the program prompts for two fractional numbers, perform the selected calculation on these two numbers, and display the result. When one calculation is finished, the menu is displayed again. Unless user chooses 5, the program shall keep on running. Display the menu and get user choice should be done in a function. ________________________________________ What listed below are basic rules of fractional calculation in mathematics. N1/D1 and N2/D2 are two fractional numbers, where N1 and N2 are the numerators, D1 and D2 are denominators. Addition: N1 N2 N1*D2 + N2*D1 1 3 1 * 5 + 3 * 2 11 ----- + ----- = ----------------------- example: --- + --- = ---------------- = ---- D1 D2 D1*D2 2 5 2 * 5 10 Subtraction: N1 N2 N1*D2 - N2*D1 1 3 1 * 5 - 3 * 2 -1 ----- - ----- = ----------------------- example: --- - --- = ---------------- = ---- D1 D2 D1*D2 2 5 2 * 5 10 multiplication: N1 N2 N1*N2 1 3 1 * 3 3 ----- * ----- = -------------- example: --- * --- = ------- = --- D1 D2 D1*D2 2 5 2 * 5 10 division: N1 N2 N1*D2 1 3 1 * 5 5 ----- / ----- = -------------- example: --- / --- = -------- = --- D1 D2 D1*N2 2 5 2 * 3 6 The following explains how to simplify a fraction to its least term, where CF stands for common factor: Simplification: N *CF N 8 4 * 2 4 ------------ = ----- example: ---- = ------- = ---- D * CF D 6 3 * 2 31 answer - Anonymous askedThis is programming challenge 1 on page 960, the answer under the textbook help section is not compl... Show moreThis is programming challenge 1 on page 960, the answer under the textbook help section is not complete. Employee and ProductionWorker classes Design a class named Employee. The class should keep the following information in member variables: Employee name Employee number Hire date Write one or more constructore and the appropriate accessor and mutator functions for the class. Next, write a class named ProductionWorker that is derived from the Employee class. The ProductionWorker class should have member variables to hold the following information: Shift (an integer) Hourly pay rate(a double) The workday is divided into two shifts: day and night. The shift variable will hold an integer value representing the sift that the employee works. The dy is shift 1 and the night shift is shift 2. Write one or more constructors and the appropriate accessor and mutator functions for the class. Demonstarte the classes by writing a program that uses a ProductionWorker object. • Show less1 answer - Anonymous askedExplain what a coordinate frame is. Contrast the interpretation of Affine Transformation as transfor... More »0 answers - Anonymous askedExplain what tweening is? Give the formula for the affine combination of points that tweening is bas... More »1 answer - Anonymous askedConstruct a decision list to classify the data below. Select tests to be as small as possible (in t... Show more Construct a decision list to classify the data below. Select tests to be as small as possible (in terms of attributes), breaking ties among tests with the same number of attributes by selecting the one that classifies the greatest number of examples correctly. If multiple tests have the same number of attributes and classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select A1 over A2). Example A1 A2 A3 A4 Y X1 1 0 0 0 1 X2 1 0 1 1 1 X3 0 1 0 0 1 X4 0 1 1 0 0 X5 1 1 0 1 1 X6 0 1 0 1 0 X7 0 0 1 1 1 X8 0 0 1 0 0 • Show less1 answer - Anonymous askedThe population of a town A is less than the population of a town B. However, the population of town ... More »1 answer - Anonymous asked3 answers - Anonymous askedShippi... Show more You have just started a small mail order business.• Show less All orders are subject to sales tax of 8%. Shipping charges are based on total sales amount (including tax). * total due $0.01-$25.00 - $5.00 * total due $25.01-$50.00 - $8.00 * total due $50.01-$99.99 - $15.00 * total due $100.00 or more - free Design a C++ program that will * read the current content of your inventory from a file into an array of records o interactively prompt for and read name of inventory file, open it o read the data from the inventory file into an array of records * create an output file called yourcslogin + hw11_results, write a report to the file that displays the starting inventory (see sample output below) * interactively prompt for and read the name of a 2nd input file containing customer orders and open it o read each order from the file o write an itemized receipt for each order (see sample output below) to the output file o when processing an order, + if the item# provided is invalid (not in array) the invalid # should be written to the receipt along with 0 items ordered and an appropriate message (see sample output) + if the item# is valid, but the amount in stock is not enough to fill the order, charge customer for all that are in stock and display a message stating that the remainder are on back order (see sample output) o update inventory in the array as each order is processed * determine total sales, tax, shipping - write each total with a label to the output file * write the final content of inventory (array) to the output file See sample output below for required formatting of output. Labels and values in numeric columns are right justified, other labels and text are left justified. Dollar amounts are displayed with 2 digits to the right of the decimal. First letter of each name is capitalized, all other letters are lower case. First letter of item description is capitalized, all other letters are lower case. Separate output with blank lines to promote readability. INVENTORY FILE The inventory file will consist of a maximum of 20 lines of data. Each line will represent the current inventory data for an item. Each line will be formatted as follows: item# amount_instock item_price item_description Each value will be separated by EXACTLY one blank space. Sample input: 543 10 4.75 souvENIR MAGNET * item# - will be a positive, non-zero integer, maximum value 999 * amount_instock - will be an integer >= 0 * item_price - will be a double type value > 0.0 * item_description - will be a word or phrase with a maximum length of 25 characters (may be a mixture of upper and lower case letters) ORDER FILE The order file will consist of several orders from customers. Each order will begin with a customer's name (last and first - may be in mixture of upper and lower case letters). Then there will be a series of item requests. An item request will start with an item#. If the item# is non-zero, it will also be followed by an integer > 0 that represents the number of items wanted. If the item# is 0, that is a sentinel to signal the end of the customer's order. Sample input: sMith suSaN 123 7 543 2 0 JONes aLleN 93 20 -63 4 123 1 0 REQUIRED DATA STRUCTURE Your program must implement an array of records to store the inventory data for the business. Create a record (struct) that can store the data for an item. For each item, the following fields are needed: item # - int amount in stock - int price of item - double item description - string REQUIREMENTS - 50% point reduction if not adhered to * The program must make use of functions to modularize your code. * The program must interactively prompt for the name of the input files and use filestream variables to represent them. * All output, expect the prompts for file names, must be written to the file: yourcslogin + hw11_results. * The program must use an array of records to store the content of the inventory file. * No global variables can be declared/used. * No goto statements can be used. Input File Assumptions/Requirements * the input files will exist and not be empty * maximum number of items to store in array will be 20 * each line (including the last one) in the files will be terminated by a linefeed ('\n') * each order will be terminated by the sentinel 0 Sample input file and terminal session: [lee@bobby]$ more startinventory 333 75 10.00 red t-sHirt 555 20 45.00 tUrquoise eaRRings 222 100 5.00 souvenir mug 444 0 20.00 amethyst PAPER weight 111 50 1.50 bookmark [lee@bobby]$ more orders smIth joHn 222 10 111 3 0 adams maRy 111 5 555 40 333 20 2 10 0 eVANs mike 444 7 0 [lee@bobby]$ g++ hw11.cpp [lee@bobby]$ ./a.out Please enter name of inventory file startinventory Please enter name of order file orders [lee@bobby]$ more leehw11_results Lee Misch Section #100_ Assignment #11 STARTING INVENTORY ITEM# PRICE IN STOCK DESCRIPTION 333 10.00 75 Red t-shirt 555 45.00 20 Turquoise earrings 222 5.00 100 Souvenir mug 444 20.00 0 Amethyst paper weight 111 1.50 50 Bookmark Customer: Smith, John ITEM# REQUESTED ORDERED PRICE DESCRIPTION 222 10 10 50.00 Souvenir mug 111 3 3 4.50 Bookmark Subtotoal: 54.50 Sales tax: 4.36 Shipping: 15.00 Total due: $ 73.86 Customer: Adams, Mary ITEM# REQUESTED ORDERED PRICE DESCRIPTION 111 5 5 7.50 Bookmark 555 40 20 900.00 Turquoise earrings back ordered 333 20 20 200.00 Red t-shirt 2 10 0 0.00 invalid item # - not ordered Subtotoal: 1107.50 Sales tax: 88.60 Shipping: 0.00 Total due: $ 1196.10 Customer: Evans, Mike ITEM# REQUESTED ORDERED PRICE DESCRIPTION 444 7 0 0.00 Amethyst paper weight back ordered Subtotoal: 0.00 Sales tax: 0.00 Shipping: 0.00 Total due: $ 0.00 Total sales: $ 1162.00 Total tax: $ 92.96 Total shipping: $ 15.00 ENDING INVENTORY ITEM# PRICE IN STOCK DESCRIPTION 333 10.00 55 Red t-shirt 555 45.00 0 Turquoise earrings 222 5.00 90 Souvenir mug 444 20.00 0 Amethyst paper weight 111 1.50 42 Bookmark0 answers - Anonymous askedWrite a program that deals two five-card poker hands, evaluates each hand and determines which is th... Show moreWrite a program that deals two five-card poker hands, evaluates each hand and determines which is the better hand. Please have it in C++ code, Thanks! • Show less1 answer - Anonymous askeds five-card hand is dealt "face down" so the player cannot see it.... Show more Write a program whereevaluate the dealer’s hand.• Show less This should be in C++ code! Thanks!0 answers - Anonymous askedSorry if... Show more Will award the person who helps me with a lifesaver rating!! Really could use some help. Sorry if this looks like a lot but I just threw in the code ive been working with at the end. C++ Morse Code Tree class: This class uses a Binary Tree to store a Morse code tree. Data Members: 1. Binary_Tree tree; tree holds the Morse code tree 2. string codes[26]; codes holds the Morse code table Function Members: 1. constructor When a Morse Code Tree object is created in the main program, the constructor should perform the following operations. Read the input file Morse_Code.txt, and use it to build tree. The first few lines of the file are as follows: * e i s h NULL NULL v NULL NULL u f NULL NULL NULL a r l NULL NULL NULL w p NULL NULL j NULL NULL t n d b NULL NULL x NULL NULL k c NULL NULL y NULL NULL m g z NULL NULL q NULL NULL o NULL NULL The class Binary Tree has a member function read_binary_tree() that performs this task. • From the Morse Code Tree created above, build a Morse code table and store it in the data member codes. The table has 26 entries, one entry for each letter. The entry contains the Morse code for the letter. 2. string encode(const string& message); This function has an input parameter string message, which contains a sequence of letters (the letters can be either upper or lower case.) The function converts it into the corresponding Morse code. 3. string decode(const string& mcode); This function has an input parameter string mcode, which contains the Morse code. The function converts it back to the original message. Note: The original message may contain both lower and upper case letters. However, Morse code table does not has separate code for lower and upper case letters. (For example both letter A and a are coded as .-.) Thus, the string converted by the decode function contains lower case letters only. This is what I've gotten done so far. I'm pretty sure most of it is correct I want someone to look at it and tell me if I'm doing something different from what the instructions are telling me to do cause theyre a little vague I feel. Morse_Code_Tree.h #ifndef MORSE_CODE_TREE_H_ #define MORSE_CODE_TREE_H_ #include #include #include #include "Binary_Tree.h" #include "BTNode.h" using namespace std; class Morse_Code_Tree { public: Morse_Code_Tree(); string encode(const string& message); string decode(const string& mcode); private: Binary_Tree tree; string codes[26]; }; #endif Morse_Code_Tree.cpp #include "Morse_Code_Tree.h" using namespace std; Morse_Code_Tree::Morse_Code_Tree(){ // char* file_name = "Morse_Code.txt"; // ifstream in(file_name); ifstream in("Morse_Code.txt"); Binary_Tree tree = Binary_Tree::read_binary_tree(in); /*a*/ codes[0] = ".-"; /*b*/ codes[1] = "-..."; /*c*/ codes[2] = "-.-."; /*d*/ codes[3] = "-.."; /*e*/ codes[4] = "."; /*f*/ codes[5] = "..-."; /*g*/ codes[6] = "--."; /*h*/ codes[7] = "...."; /*i*/ codes[8] = ".."; /*j*/ codes[9] = ".---"; /*k*/ codes[10] = "-.-"; /*l*/ codes[11] = ".-.."; /*m*/ codes[12] = "--"; /*n*/ codes[13] = "-."; /*o*/ codes[14] = "---"; /*p*/ codes[15] = ".--."; /*q*/ codes[16] = "--.-"; /*r*/ codes[17] = ".-."; /*s*/ codes[18] = "..."; /*t*/ codes[19] = "-"; /*u*/ codes[20] = "..-"; /*v*/ codes[21] = "...-"; /*w*/ codes[22] = ".--"; /*x*/ codes[23] = "-..-"; /*y*/ codes[24] = "-.--"; /*z*/ codes[25] = "--.."; } string Morse_Code_Tree::encode(const string& message){ int length = message.size(); for (int i = 0; i { switch (message[i]) { case 'a': return codes[0]; break; case 'A': return codes[0]; break; case 'b': return codes[1]; break; case 'B': return codes[1]; break; case 'c': return codes[2]; break; case 'C': return codes[2]; break; case 'd': return codes[3]; break; case 'D': return codes[3]; break; case 'e': return codes[4]; break; case 'E': return codes[4]; break; case 'f': return codes[5]; break; case 'F': return codes[5]; break; case 'g': return codes[6]; break; case 'G': return codes[6]; break; case 'h': return codes[7]; break; case 'H': return codes[7]; break; case 'i': return codes[8]; break; case 'I': return codes[8]; break; case 'j': return codes[9]; break; case 'J': return codes[9]; break; case 'k': return codes[10]; break; case 'K': return codes[10]; break; case 'l': return codes[11]; break; case 'L': return codes[11]; break; case 'm': return codes[12]; break; case 'M': return codes[12]; break; case 'n': return codes[13]; break; case 'N': return codes[13]; break; case 'o': return codes[14]; break; case 'O': return codes[14]; break; case 'p': return codes[15]; break; case 'P': return codes[15]; break; case 'q': return codes[16]; break; case 'Q': return codes[16]; break; case 'r': return codes[17]; break; case 'R': return codes[17]; break; case 's': return codes[18]; break; case 'S': return codes[18]; break; case 't': return codes[19]; break; case 'T': return codes[19]; break; case 'u': return codes[20]; break; case 'U': return codes[20]; break; case 'v': return codes[21]; break; case 'V': return codes[21]; break; case 'w': return codes[22]; break; case 'W': return codes[22]; break; case 'x': return codes[23]; break; case 'X': return codes[23]; break; case 'y': return codes[24]; break; case 'Y': return codes[24]; break; case 'z': return codes[25]; break; case 'Z': return codes[25]; break; } } return ""; } string Morse_Code_Tree::decode(const string& mcode){ if (mcode == codes[0]) { return "a"; } else if (mcode == codes[1]) { return "b"; } else if (mcode == codes[2]) { return "c"; } else if (mcode == codes[3]) { return "d"; } else if (mcode == codes[4]) { return "e"; } else if (mcode == codes[5]) { return "f"; } else if (mcode == codes[6]) { return "g"; } else if (mcode == codes[7]) { return "h"; } else if (mcode == codes[8]) { return "i"; } else if (mcode == codes[9]) { return "j"; } else if (mcode == codes[10]) { return "k"; } else if (mcode == codes[11]) { return "l"; } else if (mcode == codes[12]) { return "m"; } else if (mcode == codes[13]) { return "n"; } else if (mcode == codes[14]) { return "o"; } else if (mcode == codes[15]) { return "p"; } else if (mcode == codes[16]) { return "q"; } else if (mcode == codes[17]) { return "r"; } else if (mcode == codes[18]) { return "s"; } else if (mcode == codes[19]) { return "t"; } else if (mcode == codes[20]) { return "u"; } else if (mcode == codes[21]) { return "v"; } else if (mcode == codes[22]) { return "w"; } else if (mcode == codes[23]) { return "x"; } else if (mcode == codes[24]) { return "y"; } else if (mcode == codes[25]) { return "z"; } else { return ""; } }2 answers - Anonymous askeds hand automatically, but the player is allowed to decide... Show moreWrite a program that can handle the dealer’s hand automatically, but the player is allowed to decide which cards of the player’s? Please have the code in C++! Thanks • Show less0 answers - Anonymous askedWrite a rational number calculator that repeatedly waits for an input lin... Show more Here's what it needs to do: Write a rational number calculator that repeatedly waits for an input line and then prints the result of the calculation using your RationalNumber class (so I need to modify the code that I have provided and I need to change it so that it outputs the following). The calculator should stop once the user types “q”. Your program should behave like this: >1/2+1/2 1/1 >1/2-1/2 0/1 > 10 / 1 * -1 / 10 -1/1 > -64 / 128 / 128 / -64 1/4 > 2 / -4 = -3 / 6 true >q Each line has the following format / / , so it should be easy to parse using Scanner methods. / represents a rational number, and is a single character indicating the desired operation. You may assume that every input follows this format without error.• Show less Also needs to not use a try catch. Here's my program: public class RationalNumber2 { private int num; private int den; public RationalNumber2(){ //Initializes to 0/1 num = 1; den = 1; } public RationalNumber2(int num, int den){ //Initializes to n/d try{this.den=den; this.num=num; if(den==0){ throw new IllegalArgumentException("Zero value at constructor"); } }catch(IllegalArgumentException e){ System.out.println(e.getMessage()); } this.reduce(); } //public RationalNumber(RationalNumber x){//Initializes to x's values public void add(RationalNumber2 other){ // this = this + other this.den = this.den * other.den; this.num = this.num * other.den + other.num * this.den; this.reduce(); //toString(); } public void subtract(RationalNumber2 other){ // this = this - other this.den = this.den * other.den; this.num = this.num * other.den - other.num * this.den; this.reduce(); //toString (); } public void multiply(RationalNumber2 other){ // this = this * other this.den = this.den * other.den; this.num = this.num * other.num; this.reduce(); //toString (); } public void divide(RationalNumber2 other){ // this = this / other this.den = this.den * other.num; this.num = this.num * other.den; this.reduce(); //toString (); } private void reduce(){ int yes = 0; int smaller; int num1=this.num,den1=this.den; if(num<0) { num1=-1*num;} if(den<0) { den1=-1*den;} if ( num1 < den1 ){ smaller = num1; }else{ smaller = den1;} for ( int div = smaller; div >= 2; div-- ){ if ( num1 % div == 0 && den1 % div == 0 ){ yes = div; break; } } if ( yes != 0 ){ num /= yes; den /= yes; } } public String toString(){ // Creates string representation if(this.den<0&&this.num<0) return(""+(-1*this.num)+"/"+(-1*this.den)); else if(this.den<0&&this.num>0) return("-"+(this.num)+"/"+(-1*this.den)); else if(this.den>0&&this.num<0) return("-"+(-1*this.num)+"/"+(this.den)); else return(""+(this.num)+"/"+(this.den)); } public boolean equals(RationalNumber2 other){ // Is this == other? this.reduce(); other.reduce(); if( num == other.num && den == other.den){ return true; }else{ return false; } } }1 answer - Anonymous askedWhat is the solution to problem number 6 on page number 603 of Starting Out With C++ From Control St... More »1 answer - Anonymous asked- keep track of total number of problems attemp... Show more Now change the menuExample problem to the following: - keep track of total number of problems attempted. - keep track of total of each kind of problem attempted - keep track of total correct/incorrect for each problem type - keep track of total correct/incorrect problems attempted when user exits program: display all of the above information and show a percentage of the total correct answers out of all problems attempted. /*example of a menu driven program• Show less create a simple calculator that will add subtract multiply and dividevalues entered by the user*/import java.util.*; //used to include scanner public class menuExample{ public static void main(String args[]) { //declare a scanner for user Input Scanner userInput = new Scanner(System.in); int choice; int value1, value2 double.number; //hold result of integer value do { //display our menu System.out.println("*** Calculator v1.0***"); System.out.println("1, Addition"); System.out.println("2, Subtration"); System.out.println("3, Multiplication"); System.out.println("4, Division"); System.out.println("5, Modulos"); System.out.println("6. Exit"); System.out.println("**********************"); System.out.println("Please enter your choice:"); //get user input choice = userInput.nextInt(); //switch the choice from user switch(choice) { case 1://addition System.out.println("addition"); System.out.println("Please enter two values to be added"); value1 = userInput.nextInt(); value2 = userInput.nextInt(); System.out.println(value1 + " + " + value2 + " = " + (value1+value2); break; case 2://subtraction System.out.println("subraction"); System.out.println("Please enter two values to be subtrated"); value1 = userInput.nextInt(); value2 = userInput.nextInt(); System.out.println(value1 + " - " + value2 + " = " + (value1-value2); break; case 3://multiplication System.out.println("multiplication"); System.out.println("Please enter two values to be multiplicated"); value1= userInput.nextInt(); value2 = userInput.nextInt(); System.out.println(value1 + " * " + value2 + " = " + (value1*value2); break; case 4://division System.out.println("division"); System.out.println("Please enter two values to be divided"); value1= userInput.nextInt(); value2 = userInput.nextInt(); number = (double)value1 / value2; //type cast System.out.println(value1 + " / " + value2 + " = " + number); break; case 5://division System.out.println("multiplication"); System.out.println("Please enter two values to be divided"); value1= userInput.nextInt(); value2 = userInput.nextInt(); System.out.println(value1 + " % " + value2 + " = " + (value1%value2); break; case 6://exit System.out.println("You have chose exit!"); break; default://default System.out.println("You entered an invalid choice"); } }while(choice != 5); }//main}//class2 answers - Anonymous asked... Show more/* On this machine, a pointer takes 1 byte and an integer takes 4 bytes. */ typedef struct treenode{ int data; struct treenode *left, *right; }*bintree; struct treenode X[8]; bintree root; Questions: a) How much storage does X[8] take? b) Suppose root points to a "tree" with 10 integers in it. How much storage does the tree take (not including "root")? c) How much storage does "root" take? • Show less1 answer - Anonymous askedT... Show morea) Write an algorithm (pseudocode) to generate a set of triangular numbers using the formula below: TriangularNumber = n(n + 1)/2 for any integer value of n. Generate every 5th triangular number between a given start-number and end-number using a repetition construct. End-number is assumed to be greater than start-number. The appearance on the display is as follows ( for clarity, input data is in bold here): Enter start number: 5 Enter end number : 20 Triangular Numbers between 5 and 20 are:- 15, 55, 120, 210 b) Implement your algorithm in C programming language. Only one funtion i.e the "main" function is to be implemented. c) Modify your code in (b). Perform the 'generation' and 'dislay' of triangular numbers in a separate function call "genNum". Input statements are to remain in the "main" function. Will Rate! • Show less1 answer - Entelligent askedQ2. How default routes are important in routing? In which type of network design default routes are ... More »1 answer - Anonymous asked* Provide a basic C++ script which will have... Show more Please fulfill the following requirements below: Thanks!• Show less * Provide a basic C++ script which will have an entire game (any game you wish to invent) in a loop so that the player is required to play a first game. * Then after ending this game, the player is asked if he would like to play again. example: Do you wish to engage in another game? (Y or N) "Y" will result in this particuar game to restart all else will quit it.1 answer - Anonymous askeda) Read in 20 numbers, each of which... Show more Use a single-subscripted array to solve the following problems:• Show less a) Read in 20 numbers, each of which is between 10 and 100, inclusive. Include an error handling rountine if out-of-range number is entered to repeatedly prompt and accept a number until a number in range is read. As each number is read and stored in the array, dislpay a message if a duplicate number previously stored exists. If all 20 numbers are different, display a message to that effect. At end, display all the content of the array. Use the smallest possible array to solve this problem. b) Modify your code in (a). Declare and initialize an array with the following data: 2, 51, 16, 29, 34, and 45. Prompt user to enter a search key between 0 and 50. Include an error handling rountine if out-of-range number is entered to repeatedly prompt and accept a number until a number in range is read. If a match is found in the array, display an appropriate message indicating the location in the array. If no match is found, display a message to that effect. Display all the elements in the array to end this program. * I require the C programming code(not C++) for both part a and b, thanks! Will rate!!!1 cylinder loses. Note: please use C++ format to answer follwoing 2 questions Write a program in which computer(COMP) plays against an actual person (PER) opponent. 1. Create a random integer(INT) between the values of ten & one hundred to indicate the starting size of the pile. 2. Create a random INT between the values of zero & one to decide whether it’s an actual PER or a COMP acquires the initial turn. The playas can be either: Dumb COMP, intelligent COMP or PER• Show less1 ball loses. Write a program in which computer(COMP) plays against an actual person (PER) opponent. Please note: the program must be written in C++ format • Show less 1. Create a random integer(INT) between the values zero and one to pick whether the computer (COMP) start game as dumb or intelligent. In the "dumb" scenario, the COMP will take a random legal value (between one and n/2) from the stack whenever it takes a turn. 2. In “intelligent” scenario the COMP takes off enough marbles to make the size of the pile a power of two minus one – I.e. 3, 7, or 63. This move is legal ( the only exception is if the size of the stack is currently 1 less than a power of 2). In that case, the COMP makes a random legal move. The playas can be either: Dumb COMP, intelligent COMP or PER(person)1 answer - Anonymous askedI need write a fortran statment required to calculate and print out the squares of all the even inte... Show more I need write a fortran statment required to calculate and print out the squares of all the even integers between 0 and 50• Show less (please send the answer to lauhenway@live.com) Thank you1 answer - Anonymous askedWrite a program caps that reads in a character string, searches for all of the words within the stri... Show moreWrite a program caps that reads in a character string, searches for all of the words within the string, and capitalizes the first letter of each word, while shifting the remainder of the word to lowercase. Assume that all nonalphabetic and nonnumeric characters can mark the boundaries of a word within the character variable ( for example, periods, commas). Nonalphabetic characters should be left unchanged (please send the answer to lauhenway@live.com) Thank you • Show less1 answer - Anonymous askedA bottleneck spanning tree T of an undirected graph G is a spanning tree of G whose largest edge wei... Show more. (b) Give a linear-time algorithm that given a graph G and an integer b, determines whether the value of the bottleneck spanning tree is at most b. (c) Using your algorithm for part (b) as a subroutine, give a linear-time algorithm to construct a bottleneck spanning tree. (HINT: You may want to use a subroutine that contracts sets of edges, as in the MST-REDUCE procedure discribed in Problem 23-2) Please provide the solutions for part (a) to (c). In the part (c), Problem 23-2 is the problem in the "Introduction to Algorithms" 3rd edition.• Show less1 answer - Anonymous askedI have no idea how can I start with this program, can someone give me some hints to help me get star... Show more I have no idea how can I start with this program, can someone give me some hints to help me get start with this?? THANKS YOU!! One of the problems with this simple scheme is that a process with a priority just a little higher than all others will always be re-inserted back at the beginning of the queue. Thus it will be the only process to execute. One solution is increasing the priority of all processes in the queue a little bit each time the processor grabs the next process. This aging factor will ensure that even low priority processes will eventually execute. The aging factor is not added to the process that just ran, and it obviously is never added to the null process! (If you assign the null process a priority of 0, then you can always identify it.) For our simple simulation, the priority should never be incremented above 1000 You are to write a program that simulates this process queue. New Processes consisting of the following will be submitted to the queue: INPUT: lines consisting of three integers representing the PID, priority and required time. These are read from a file with the input line 0 0 0 indicating the end of input. The program should prompt the user for the name of this file, then open and read from it. OUTPUT: As each process completes, output its PID, its initial priority, total time needed (these first three are just the same as the input), its final priority, the time it was submitted (in ticks, with the beginning of the program being time 0), and the time it completed. Your program will have to keep track of the total elapsed time for tracking these last two. The aging factor should be +1. The simulation runs until there are no more jobs to be started and no processes still running except the null process. New process should be added to the queue after every 10 ticks. That is, execute 10 cycles then add the next process from the input file. The process queue is to be represented with an ordered linked list using dynamic allocation. (Note, a real OS would never use dynamic allocation for the process queue because of the relatively high overhead of new and delete in code that may be executing thousands of times per second.) In our simulation, adding to the queue will simply be inserting into an ordered linked list. You may use the algorithm for inserting passengers into the airline list from the last program. Removing from the queue consists of removing from the head of a linked list, a simple task. Do you need to maintain a pointer to the rear of the queue? What information do you need to store in each node? You should write a priorityQueue ADT class including methods for enQueue, deQueue, etc.• Show less0 answers
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2011-may-03
CC-MAIN-2014-35
refinedweb
17,245
64
Do not think I am praising Java because I am a Java trainer; I know at least ten programming languages, but I like Java with its increasing libraries (header files, C++ people call) with every version of Java. Every version of Java (known as JDK versions) adds predefined classes that either decrease programming code or increase the performance. Of course, these are the two aims designers keep in their mind while developing the language further and further. You can read the unique features of Java and why it is so popular in Java Features – Buzz Words. Many C/C++ programmers feel, while learning Java, they know Java earlier (it looks as a familiar face) because much of the syntax of Java is borrowed from C/C++. Like C/C++, Java is a strongly typed language. The same syntax of C/C++ for variables, functions, control structures and arrays is followed by Java. Like the basic structure of C/C++ is struct, in Java it is class. I feel, a class is a modified form of struct of C-lang. Java supports both global variables (known in Java as instance variables) and local variables. A for loop, a function (in Java, known as method), a if, a switch and a class are separated (delimited) from the remaining part of the code with two braces, opening brace, {, and closing brace, }. Java does not support preprocessor directives like #define. You are at liberty to write the main() method anywhere in the class, at the beginning of the code or at the end or anywhere. But is it customary to write at the end of the code, just before the class closes. Similarly a method or instance variable can be included anywhere, but being coming from C/C++, we write the variables at the beginning of the code followed by methods (Java does not support prototypes). import java.lang.*; public class Demo { public static void main(String args[]) { System.out.println("Hello World"); System.out.println("Best Wishes of the Day"); System.out.println("I am writing Java Application"); } } Output Screenshot on Java Syntax Once you acquainted with the above, slowly go step-by-step into the tutorial of way2java.com.
https://way2java.com/java-general/java-syntax/
CC-MAIN-2020-40
refinedweb
367
61.46
obj.rest provides a REST server that can serve OBJ objects. Project description pypi: | source: | email: bthate@dds.nl | botfather at #dunkbots/freenode OBJ is a framework you can use to program bots, it’s has it’s own shell (the obj program) that has the following commands: ed - edit objects. find - find objects. load - load module. log - log some text. meet - add a new user. rm - set _deleted flag. show - show internals. unload - unload module. the show command can be used to check status: cfg - show main config cmds - show available commands license - show license mods - show loaded modules tasks - show running tasks uptime - show uptime version - show version the following modules are available in the OBJ package: obj.base - base classes. obj.bot - bot base class. obj.clock - timer, repeater. obj.cmds - basic commands. obj.dcc - direct client to client bot. obj.event - event class. obj.fleet - list of bots. obj.handler - queued event handler. obj.irc - irc bot. obj.loader - load modules into a table and scan for comands. obj.select - select based loop. obj.task - a obj thread, launch tasks, get a list of running tasks or kill a task. obj.users - manages users. obj.utils - utility module. programming your own commands is easy, your can load modules with the -m option. a command is a function with one argument, the event that was generated on the bot def mycommand(event): <<< your code here >>> You can use event.reply() to send response back to the user. OBJ has a “no-clause MIT license” that should be the most liberal license you can get at the year 2019. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/obj.rest/
CC-MAIN-2019-47
refinedweb
294
79.56
20.10. Class Variables and Instance Variables¶ You have already seen that each instance of a class has its own namespace with its own instance variables. Two instances of the Point class each have their own instance variable x. Setting x in one instance doesn’t affect the other instance. A class can also have class variables. A class variable is set as part of the class definition. For example, consider the following version of the Point class. Here we have added a graph method that generates a string representing a little text-based graph with the Point plotted on the graph. It’s not a very pretty graph, in part because the y-axis is stretched like a rubber band, but you can get the idea from this. Note that there is an assignment to the variable printed_rep on line 4. It is not inside any method. That makes it a class variable. It is accessed in the same way as instance variables. For example, on line 16, there is a reference to self.printed_rep. If you change line 4, you have it print a different character at the x,y coordinates of the Point in the graph. To be able to reason about class variables and instance variables, it is helpful to know the rules that the python interpreter uses. That way, you can mentally simulate what the interpreter does. - When the interpreter sees an expression of the form <obj>.<varname>, it: - Checks if the object has an instance variable set. If so, it uses that value. - If it doesn’t find an instance variable, it checks whether the class has a class variable. If so it uses that value. - If it doesn’t find an instance or a class variable, it creates a runtime error (actually, it does one other check first, which you will learn about in the next chapter.) - When the interpreter sees an assignment statement of the form <obj>.<varname> = <expr>, it: - Evaluates the expression on the right-hand side to yield some python object; - Sets the instance variable <varname> of <obj> to be bound to that python object. Note that an assignment statement of this form never sets the class variable, it only sets the instance variable. In order to set the class variable, you use an assignment statement of the form <varname> = <expr> at the top-level in a class definition, like on line 4 in the code above to set the class variable printed_rep. - In case you are curious, method definitions also create class variables. Thus, in the code above, graph becomes a class variable that is bound to a function/method object. p1.graph() is evaluated by: - looking up p1 and finding that it’s an instance of Point - looking for an instance variable called graph in p1, but not finding one - looking for a class variable called graph in p1’s class, the Point class; it finds a function/method object - Because of the () after the word graph, it invokes the function/method object, with the parameter self bound to the object p1 points to. Try running it in codelens and see if you can follow how it all works.
https://runestone.academy/runestone/static/fopp/Classes/ClassVariablesInstanceVariables.html
CC-MAIN-2018-51
refinedweb
531
71.04
A splash screen is a screen that briefly opens whenever you open an application. A splash screen is also called the launch screen or startup screen and appears as soon as you click on the app icon to launch it. A splash screen usually appears for two to four seconds and then disappears, and the application home screen is launched. Below are some pictures of a splash screen. We will use the Timer() function to create a splash screen in Flutter. First, we create a new Flutter application with the following command: flutter create new_flutter_app The application will be created and will have a main.dart file, where we add the following code: import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( debugShowCheckedModeBanner: false, title: 'Flutter Demo', theme: ThemeData( visualDensity: VisualDensity.adaptivePlatformDensity, ), home: HomePage() } } We will create a stateful widget named SplashScreen and add the code for the splash screen in it. The splash screen can just have a simple logo or the name of the app. First, we create a stateless widget in the main.dart file named homepage screen. Then, we add a simple code for the second screen and add some styling to it. After our widget for the second screen is ready, we just need to write code to connect it. We will add the timer, which specifies how long the screen is displayed whenever it is launched. We add the timer in initState(). The following is the code to add a splash screen: class SplashScreen extends StatefulWidget { @override _SplashScreenState createState() => _SplashScreenState(); } class _SplashScreenState extends State<MyHomePage> { @override void initState() { super.initState(); Timer(Duration(seconds: 3), ()=>Navigator.pushReplacement(context,MaterialPageRoute(builder:(context) => HomeScreen())); } @override Widget build(BuildContext context) { return Container( child: Text("This is the splash screen") ); } } The code below is added to the new homepage.dart file, which contains the screen displayed after the splash screen. class HomeScreen extends StatelessWidget { @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar(title:Text("This is the secoind screen")), body: Text("Home page",textScaleFactor: 2,) ); } } RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-create-a-simple-splash-screen-in-flutter
CC-MAIN-2022-33
refinedweb
354
55.64
Is it easier to teach XSLT or XQuery to an experienced SQL developer? My recent training experiences indicates that XQuery is easier to learn. For the last six years I have been building metadata management systems using a diverse set of XML-centric technologies. These languages include XML Schemas, XSLT, Schematron, XHTML, XForms and most recently XQuery. And to be honest, I really do enjoy XQuery. My job as a consultant is to develop feature-rich and highly customizable metadata management systems for my customers and also transfer the skills needed to maintain and extend these systems to my customers though formal training classes as well as one-on-one mentorship. I have found that it has been very difficult to teach XSLT to an average support person that is only doing occasional XSLT development. But teaching XQuery has been much easier for me to teach when you consider that most of my students have had some exposure to SQL. Looking back on my own learning process, I recall took me about five months of almost continuous study to really feel comfortable with XSLT. Most of this learning curve was because I had not done production XSLT development. But I picked up XQuery in just a few weeks. Perhaps this is because I was already familiar with SQL and XPath. But perhaps this is because XQuery is a little bit more approachable. I want to note that this does not necessarily imply a poor design of XSLT or the merits of functional programming. After I did learn XSLT I became a real evangelist of its elegance and beauty. At first I was frustrated by not being able to change a “variable”. Later I realized that this restriction is what made XSLT beautiful, simple and elegant. These features keep the transforms free of side-effects. Once XSLT scripts are deployed I seldom found problems. I became enthralled by the fact that the simplicity of the language implied that XSLT custom-hardware could allow transforms to be orders of magnitudes faster than software-only solutions. XSLT may always have a place in CPU-intensive applications. The difference in my learning time and those of my students reflects the state of our existing knowledge base: most of are already familiar with SQL. Anyone that knows SQL can take a crash course in XQuery designed specifically for SQL developers. Priscilla Walmsley’s excellent book on XQuery (O’Reilly 2007) includes a single chapter targeted at SQL developers making the transition to XQuery that I use in many of my classes. And 90% of the small support and maintenance tasks that many support and maintenance people need to perform do not require them to ever use the more complex functions and modules features of XQuery. So with this in mind, I have moved most of my metadata registry tools away from XSLT and toward XQuery. This seems consistent with other metadata managers are doing today. There are several people now working on open source metadata management systems. What about you? Do you have experience teaching both XSLT and XQuery to SQL developers? What is your experience on the learning times? You're talking about two completely different languages: The reason XQuery is easier for people with a background in SQL is because -- like SQL -- XQuery is designed from a "I would like this data. Please give it to me so I can do something with it." perspective whereas XSLT is designed from a "Here is some data. Please give me something else in return." perspective.. @David, differences between XSLT 2.0 and XQuery aren't that great, so anyone familiar with one of them shouldn't take too long to be comfortable with the other - knowing both well isn't a problem and there's no need to focus on one to the detriment of the other. The big difference between them is XSLT's recursive descent processing model (which is perfect for processing XML but hard to grasp especially with the implicit default templates...) perhaps that's the reason why XQuery is easier to learn. Your comparison is slightly strange as you give the example of variable references being confusing on XSLT, but they work exactly the same in XQuery. XQuery is, apart from surface syntax differences, essentially just XSLT without xsl:apply-templates. While XSLT apply-templates might seem confusing at first, in XQuery as it isn't available then you need to be more not less familiar with functional programming techniques and coding of recursive functions. @Joh Smith, >> . ;-) Gosh, David P, this seems have hit some tender point! I've rarely heard XQuery and XSLT spoken of with such passion! But you overstate the point. As David C says, the essential difference is the pattern matching in XSLT which supports push-style transformations, but I would guess that a lot of XSLT is written in pull-style using for loops and the like in which mode it is is no more 'passive' or 'aggressive' than XQuery. In both cases, the XPath expressions assume as little as nessary about the input and construct just the same output. Certainly, some kinds of transformation are easier to write in push-style XSLT where recursive functions are needed in XQuery.] "Finally, I wonder if anyone has done an analysis of XSLT scripts in the wild of the proportion of push and pull constructs used?". @David, >>You can disagree all you want, but you first need to properly understand what I view a transformation language to be. You can try to change the definition of transformation like Bill Clinton tried to change the definition of sex.. In 2005 I wrote a dissertation about the usability of XML query languages ().. @John Smith, >>.. @David P, The W3C XQuery Requirements document at () states: "3.4.11 Structural Transformation Queries MUST be able to transform XML structures and MUST be able to create new structures. green status Status: this requirement has been met." I hope that helps. This problem might demonstrate David P's point: take any XML input and suppress all elements that aren't known, or perhaps aren't in a given namespace. In XSLT this is easy, in XQuery it's harder and without Updates it's impossible (isn't it XQuery'ers??) @Andrew. My view is the two languages will work together - XSLT will fire off various queries and then process the results, especially when generating markup... which is the majority of cases. Then if some bright spark could make the XSLT work directly against the database... ;0) @david It seems to me there are two problems with your argument. First that you have redefined transformation to some narrower sense which is a 'permissive' transformation - in the same sense that Schematron is permissive (everything not forbidden is allowed) and XMLSchema is restrictive (everything not allowed is forbidden) - which would rule out most useful transformations. In my experience permissive transformations are most useful for schema evolution i.e from schema S to the variant S'; permissive transformations are little use for transformations from schema A to a different schema B. } }; which can be adapted to pass or filter nodes of any required characteristic. Since it requires editing this function to add conditionals and/or typeswitch to express filter conditions, this approach is not as composable as the rule-based templates in XSLT are, but that's not the same as saying there is fundamental computational distinction. Lately, we have switched to using XQuery even for schema changes because the ease of composability is outweighed by the simplicity of using a single language. @Chris,? @david On the contrary, I implied that it -was- easier in XSLT - rules can be simply added in XSLT allowing much easier composition. I'm only saying that your characterisation of XQuery is exaggerated since it is quite possible (but not as convenient) to write permissive transformations. @Chris, >>. s/see it becoming "go to"/see it becoming my "go to"/
http://www.oreillynet.com/xml/blog/2008/06/teaching_xslt_vs_teaching_xque.html
crawl-002
refinedweb
1,325
61.56
So please break down what is chewing up the extra ram Not all variables need be global on Nextion, or global scope on MCU Those needing to be triggered by NexTouch::iterate - maybe so but those only using setValue, getValue can be local scope in function thanks, Was able to free only 44 bytes. It looks like "NexTouch *nex_listen_list[]" takes up lot of space! is there any workaround for it? or go for larger MCU go page based? page based preinitialize event sendme nex_listen_list[] only needs to hold relevant objects - those that can trigger an 0x65 As each page changes, this list can change to just page based page3.b2 button can not be triggered when user is on page1 but your nex_listen_list[] is probably holding entire HMI worth. Yes I have two pages and there is only one nex_listen_list[]. Please explain "-those that can trigger an 0X65" Following are my page wise separated lists 1) NexTouch *nex_listen_list2[] = { &rtc_ok_button,&hour_set,&minute_set,&day_set,&month_set,&year_set,&settime, NULL }; 2) NexTouch *nex_listen_list1[]= { &tempareture,&humidity, &light,&up,&down,&up_key, &down_key, &uv,&socket,&beep,&FANoff,&FANlow,&FANmedium,&FANhigh, &ex_pre,&ex_v,&dn_pre,&dn_v,&beep, &hour_dsp,&minute_dsp,&day_dsp,&month_dsp,&year_dsp, NULL }; Should I declare them in separate functions? I really do not how to go about it. In your HMI file - component's Touch Press has checkbox Send Component ID - if this is checked 0x65 0xAA 0xBB 0x01 0xFF 0xFF 0xFF is sent (where 0xAA is the page number, and 0xBB is the .id attribute) - component's Touch Release has checkbox Send Component ID - if this is checked 0x65 0xAA 0xBB 0x00 0xFF 0xFF 0xFF is sent (where 0xAA is the page number, and 0xBB is the .id attribute) On your MCU side code your system maybe catches these - to catch these events the matching .attachPush or .attachPop has to be defined properly and set up. - if a component does not have an .attachPush or .attachPop then there is no need for it to be in the nex_listen_list as you haven't defined the event to be passed to anything. nexLoop() listens for these 0x65 0xAA 0xBB 0x0? 0xFF 0xFF 0xFF (check NexHardware.cpp) and when it comes across a 0x65 event it calls iterate to go through your nex_listen_list[] to find an AA BB match and if a match is found, checks if you setup the .attachPush/.attachPop for the incoming 0x0? - if so, then it triggers your function. push or pop. nexLoop() does not come configured to automatically also catch 0x66 events (see the Nextion Instruction Set :: Nextion Return Data) Now I am going to go out on a limb and say that list2 is to set RTC values - but from a coding standpoint only the ok button needs to trigger an MCU side event after the user has committed that the values on the screen are the values they wish to commit to the RTC. But I haven't seen your design. IF the 0x65 event is not declared and used MCU side, then there is no need for the checkbox Send Component ID to be checked, or for the &component to be in the nex_listen_list[] Next, if the only "commit" component is like the &rtc_ok_button, then it is the only component needed to be in the nex_listen_list (as opposed to &hr_set...) void callback_for_rtc_ok(void *ptr) { NexNumber hour_set = NexNumber(.....); NexNumber minute_set = NexNumber(.....); NexNumber day_set = NexNumber(.....); NexNumber month_set = NexNumber(.....); NexNumber year_set = NexNumber(.....); uint32_t hr, mn, day, month, year; hr_set.getValue(&hr); minute_set.getValue(&mn); ... } and in setup() if the Release Event's Send Component ID checked rtc_ok_button.attachPop(callback_for_rtc_ok,&rtc_ok_button); In this manner, your variables are declared local and available inside the routines that they are needed, and nex_listen_list is not clogged. But sendme in PreIntialize Event of your HMI pages and setting up the 0x66 page event requires you to do some more advanced coding. Thank you very much . That clears most of my points. Thanks for your " Excellent" and "BIG" and "timely" support Hi again, attaching my HMI file , Please tell me is it worth trying to fit the code in UNO . It is working perfectly well with mega2560 existing ino needed to see about shortening - this is out of ordinary, debug is not what we provide if it's worth to adopt for an UNO, that's up to your own preferences. That's something you must answer by yourself ... - own skills - to invest time - overall estimated savings - ... Existing HMI would be used on 7" display regardless of if MCU used is Mega or UNO. (It is the MCU code that needs shrinking) My Opinion is this: - if working "perfectly" on Mega2560 - you have a product Reworking it for an UNO may reduce costs by a bit - so you must think how many units is realistic? is this high enough for new costs to rework into UNO - if so, then maybe consider UNO rework for $8/unit. However .... an UNO product will introduce new limits. ... Mega2560 offers an ability to offer upgraded versions if UNO is packed full, do you have same ability to upgrade? attachPop "function :- callback_for_rtc_ok(void*ptr)" not getting called. Following is my simple code that you have suggested. where am I going wrong? Send component ID check box checked for " rtc_ok_button" #include "Time.h" #include "DS1307RTC.h" #include "Wire.h" #include "Nextion.h" NexButton rtc_ok_button = NexButton(1,12,"rtc_ok_button"); NexTouch *nex_listen_list1[]= { &rtc_ok_button, NULL }; void callback_for_rtc_ok(void*ptr) { //Define nextion rtc variables for rtc setting NexNumber hour_set = NexNumber(1,20,"hour_set"); NexNumber minute_set = NexNumber(1,23,"minute_set"); NexNumber day_set = NexNumber(1,28,"day_set"); NexNumber month_set = NexNumber(1,29,"month_set"); NexNumber year_set = NexNumber(1,30,"year_set"); uint32_t hour_rtc, minute_rtc, sec_rtc, date_rtc, month_rtc, year_rtc,set_rtc; hour_set.getValue(&hour_rtc); minute_set.getValue(&minute_rtc); day_set.getValue(&date_rtc); month_set.getValue(&month_rtc); year_set.getValue(&year_rtc); setTime(hour_rtc,minute_rtc,sec_rtc,date_rtc,month_rtc,year_rtc); time_t tSet = now(); RTC.set(tSet); // set the RTC and the system time to the received value } void setup(void) { nexInit(); rtc_ok_button.attachPop(callback_for_rtc_ok,&rtc_ok_button); setSyncProvider(RTC.get); // the function to sync the time from the RTC } void loop(void) { nexLoop(nex_listen_list1); // read current time ShowTime(); } How certain are you that the callback_for_rtc_ok function is not being called? Put a Serial.println in to send to your Serial Monitor and be sure if it is or if it is not being called. Issue could be that ok has changed pages on Nextion side from button HMI side. First tell me if mcu side is actually triggering I will debug it and see what happens in the mean time elaborate on"Issue could be that ok has changed pages on Nextion side from button HMI side." in my HMI rtc_ok_button is on second page. but now as suggested I have checked "send component id"box so it will send the page number. Is there any documentation other than library itself or nextion instruction set? Documentation also includes knowledge in forum. Nextion is driven by STM/GD MCU on back side PCB. Actually I was assuming page command in rtc_ok_button. You can disregard theory page changes before received. now your statement attachPop "function :- callback_for_rtc_ok(void*ptr)" not getting called. I assume it really is being called - Place Serial.println(' -- Callback function called'); first line inside function - Your code may not be doing as you wish. but I think the function is being called Pramod Natu In one of the project I am using 7" nextion display and mega 2560. Previously I was not using Nextion Display on the same project. Every thing is working fine. following are my findings With Nextion display 1) Main program memory on arduino side has reduced considerably from 100k Bytes to 17 K bytes 2) Gui programming is fastest 3) Global variable requirement on arduino side has increased from 1000 to 2000 Bytes. Is there any way to reduce or decrease the requirement of global variables on Arduino side so that I can use 328P instead of 2560?
http://support.iteadstudio.com/support/discussions/topics/11000011607
CC-MAIN-2017-43
refinedweb
1,317
63.29
Math.imul() function returns the result of the C-like 32-bit multiplication of the two parameters. Syntax Math.imul(a, b) Parameters a - First number. b - Second number. Description Math.imul allows for fast 32-bit integer multiplication with C-like semantics. This feature is useful for projects like Emscripten. Because imul is a static method of Math, you always use it as Math.imul(), rather than as a method of a Math object you created. Examples Math.imul(2, 4) // 8 Math.imul(-1, 8) //-8 Math.imul(-2, -2) // 4 Math.imul(0xffffffff, 5) //-5 Math.imul(0xfffffffe, 5) //-10 Polyfill This can be emulated with the following function: function imul(a, b) { var ah = (a >>> 16) & 0xffff; var al = a & 0xffff; var bh = (b >>> 16) & 0xffff; var bl = b & 0xffff; // the shift by 0 fixes the sign on the high part // the final |0 converts the unsigned value into a signed value return ((al * bl) + (((ah * bl + al * bh) << 16) >>> 0)|0); } Specifications Browser compatibility
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Math/imul?redirect=no
CC-MAIN-2016-36
refinedweb
169
67.45
Linq is almost providing all the functionalities and i have found one another great operator called range operator which will return a sequence of integer number from start point to number of count. Here is the signature of range operator in Linq. Here the Start means the starting integer of the sequence and count means the number of sequence you want from starting integer. Let’s take a simple example for that where will print 5 to 9 sequence with the help of range operator. Here is the code for that. And here is the output for that as expected. Hope this will help you.. public static IEnumerable<int> Range(int start, int count) using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { var RangeResult = Enumerable.Range(5, 5); Console.WriteLine("Range Result"); foreach (int num in RangeResult) { Console.WriteLine(num); } } } } Hope this will help you.. Technorati Tags: Range,Linq-To-Sql,C#.NET3.5 Good post. I am bookmarking your blog.. This free meta tag generator tools provides the easy way of generating title and meta tags for seo onpage optimization. free meta tag generator
http://www.dotnetjalps.com/2010/06/range-operator-in-linq.html
CC-MAIN-2014-42
refinedweb
198
61.93
The following topics are covered in this chapter: Note: for information on using the OCI to manipulate objects in an Oracle8 server, see Chapter 8, "OCI Object-Relational Programming". Chapter 2 introduced the basic concepts of OCI programming. This chapter is designed to introduce more advanced concepts, including the following: Chapter 2 described how a simple transaction can be committed or rolled back. This section talks about different levels of transaction complexity, including global transactions, and the operations that are possible through OCI calls. Chapter 2 talked about the OCISessionBegin() call as part of OCI initialization. This section describes additional options available with OCISessionBegin(). It also describes user authentication and password management using the OCIPasswordChange() call. This section describes OCI support for thread safety and multithreaded application development. Inserting, updating, and fetching data in a piecewise fashion is described in this section. This section describes OCI functions available for operating on LOBs and FILEs. This section contains a pointer to information about writing external subroutines. This section discusses how to write and use application failover callback functions. This section covers the OCI functions related to Oracle8's Advanced Queueing feature. This section contains a pointer to information on writing Oracle Security Services Applications. Release 8.0 of the Oracle Call Interface provides. See Also: For more specific information about these calls, refer to the function descriptions in Chapter 10. The OCI supports three levels of transaction complexity. Each level is described in one of the following sections.. For sample code showing the use of simple local transactions, refer to the example on page 13-150.. Note: Users not operating in distributed or global transaction environments may skip this section. This section provides: See Also: For more information about transaction identifiers, refer to the Oracle8 Distributed Database Systems manual.8 supports both tightly coupled and loosely coupled relationships between a pair of branches. The flags parameter of OCITransStart() allows applications to pass OCI_TRANS_TIGHT or OCI_TRANS_LOOSE to specify the type of coupling. In the Oracle8 OCI, For sample code demonstrating this scenario, refer to the example on page 13-158. It is possible for a single session to operate on multiple branches that share the same transaction, but this scenario does not have much practical value. Sample code demonstrating this scenario can be found in the example on page 13-161.. Note: A transaction can be resumed by a different process than the one that detached it, as long as that process has the same authorization as the one that detached the transaction.. Warning: An OCI application should set: Note: The prepare call can also return OCI_SUCCESS_WITH_INFO if a transaction needs to indicate that it is read-only, so that a commit is neither appropriate nor necessary.. See Also: For more information about two-phase commit, refer to the Oracle8 Distributed Database Systems manual. This section provides examples of how to use the transaction OCI calls. The following tables provide: Beginning with release 8.0,, OCISessionBegin() must be called for any given server handle before requests can be made against it. Also, ever process or circuit is allowed to switch to a migratable session only if the ownership ID of the session matches the user ID of a non-migratable session currently connected to that same process or circuit, unless it is the creator of the session.. Note: When the user session handle is terminated using OCISessionEnd(), the username and password attributes remain unchanged and thus can be re-used in a future call to OCISessionBegin(). Otherwise, they must be reset to new values before the next OCISessionBegin(). The release 8.0 OCI provides the OCIPasswordChange(). The thread safety feature of the Oracle8 server and OCI libraries allows developers to use the OCI in a multithreaded environment. With thread safety, OCI code can be reentrant, with multiple threads of a user program making OCI calls without side effects from one thread to another. Note: Thread safety is not available on every platform. Check your Oracle system-specific documentation for more information. The following sections describe how you can use the OCI to develop multithreaded applications. The implementation of thread safety in the Oracle Call Interface provides the following benefits and advantages: In addition to client-server applications, where the client can be a multithreaded program, a typical use of multithreaded applications is in three-tier (also called an Oracle database. The applications server (agent) is very well suited to being a multithreaded application server, with each thread serving a client application. In an Oracle environment this application server is an OCI or precompiler program. the Oracle8 OCI, mutexes are granted on a per-environment-handle basis. In order to take advantage of thread safety in the Oracle8 OCI, an application must be running on a thread-safe platform. Then the application must tell the OCI layer that the application is running in multithreaded which run in OCI_THREADED mode may incur performance hits. If a multi-threaded application is running on a thread-safe platform, the OCI library will manage three scenarios are possible, depending on how many connections exist per environment handle, and how many threads will be spawned per connection. In this case, however, the programmer should be aware that if the application has two calls on the same environment handle, and one call operating on the server is mutexed, application performance can degrade if the mutexed call is long-running, thus tying up the server connection.. run time by the application. Each piece may be of the same size as other pieces, or it may be of a different size. The OCI's piecewise functionality can be particularly useful when you are performing operations on extremely large blocks of string or binary data (for example, operations involving database columns that store LOB, LONG or LONG RAW data). See the section "Valid Datatypes for Piecewise Operations" on page 7-17 for information about which datatypes are valid for piecewise operations. Figure 2 - 8 shows a single long run time.. Note: In addition to SQL statements, piecewise operations are also valid for PL/SQL blocks. Only some datatypes can be manipulated in pieces. OCI applications can perform piecewise fetches, inserts, or updates of the following data types: Some LOB/FILE operations also provide piecewise semantics for reading or writing data. See the descriptions of OCILobWrite() on page 13-112 and OCILobRead() on page 13-107 for more information about these operations.. When you specify the OCI_DATA_AT_EXEC mode in a call to OCIBindByPos() or OCIBindByName(), the value_sz parameter defines the total size of the data that can be provided at run time. The application must be ready to provide to the OCI library the run-time IN data buffers on demand as many times as is necessary to complete the operation. When the allocated buffers are not required any more, they should be freed by the client. Run-time data is provided in one of the two ways:(). Note: Additional bind variables in the statement that are not part of piecewise operations may require additional bind calls, depending on their datatypes. run time. In addition, each inserted piece does not need to be of the same size. The size of each piece to be inserted is established by each OCIStmtSetPieceInfo() call. Note: If the same piece size is used for all inserts, and the size of the data being inserted is not evenly divisible by the piece size, the final inserted piece will be smaller than the pieces that preceded it. For example, if a data value 10,050,036 bytes long is inserted in chunks of 500 bytes each, the last remaining piece will be only 36 bytes. The programmer must account for this by indicating the smaller size in the final OCIStmtSetPieceInfo() call. The following steps outline the procedure involved in performing a piecewise insert. The procedure is illustrated in on the following page. Step 1. Initialize the OCI environment, allocate the necessary handles, connect to a server, authorize a user, and prepare a statement request. These steps are described in the section "OCI Programming Steps" on page 2-16. run time. 7.x Upgrade Note: The context pointer that was formerly part of the obindps() and ogetpi() routines does not exist in release 8.0. a pointer to the piece, a pointer to the length of the piece, and a value indicating whether this is the first piece (OCI_FIRST_PIECE), an intermediate piece (OCI_NEXT_PIECE) or the last piece (OCI_LAST_PIECE).. Note: For additional important information about piecewise operations, see the section "Additional Information About Piecewise Operations with No Callbacks" on page 7-23.. The user also may need to call OCIDefineDynamic() to set up the callback function that will be invoked to get information about the user's data buffer. Run-time data is provided in one of the two ways: See Also: For information about which datatypes are valid for piecewise operations, refer to the section "Valid Datatypes for Piecewise Operations" on page 7-17. on page 2-16. run time. 7.x Upgrade Note: The context pointer that was part of the odefinps() and ogetpi() routines does not exist in release 8.0. Clients wishing to provide their own context can use the callback method. tables can retrieve a pointer to the current index of the table during the OCIStmtGetPieceInfo() calls. The Oracle8 maximum length of a LOB/FILE is 4 gigabytes. FILE functionality is read-only. Oracle8 currently supports only binary files (BFILEs). See Also: For code samples showing the use of LOB operations, refer to "Example 5, CLOB/BLOB Operations" on page D-76, and "Example 6, LOB Buffering" on page D-96. Customers who are interested in using the dbms_lob package to work with LOBs should refer to the Oracle8 Application Developer's Guide A database table stores a LOB locator which points to the LOB data. When an OCI application issues a SQL query that includes a LOB column in its select-list, fetching the result(s) of the query returns the locator, rather than the actual LOB value. In the OCI, the LOB locator maps to the datatype OCILobLocator. Note: The LOB value can be stored inline in a database table if it is less than approximately 4,000 bytes. Internal LOBs have copy semantics. Thus, if a LOB in one row is copied to a LOB in another row, the actual LOB value is copied, and a new LOB locator is created for the copied LOB. The OCI functions for LOBs take LOB locators as their arguments. The OCI functions assume that the LOB to which the locator points has already been created, whether or not the LOB contains some value. An application first fetches the locator using SQL, and then performs further operations using the locator. The OCI functions never take the actual LOB value as a parameter. It is good practice to use a locator in a LOB modification call if and only if its snapshot is recent enough that it sees the current value of the LOB data, since it is the current value that gets modified. You allocate memory for an internal LOB locator with a call to OCIDescriptorAlloc() by passing OCI_DTYPE_LOB as the descriptor type. To allocate memory for an external LOB (FILE) locator, pass OCI_DTYPE_FILE. Once you have allocated the LOB locator memory, you must initialize it before passing it to any OCI LOB routines. You can accomplish this by any of the following methods: You can also initialize a LOB locator to empty by calling OCIAttrSet() on the locator's OCI_ATTR_LOBEMPTY attribute. A locator initialized in this way may only be used to create an empty LOB in the database. Thus, it can only be used in the VALUES clause of a SQL INSERT statement, or as the source of the SET clause of a SQL UPDATE statement. more information about locators, including the LOB locator, see the section "Descriptors and Locators" on page 2-12. For sample code showing the use of OCI LOB calls, refer to Example 3 in Appendix B, and the description of OCILobWrite() on page 13-112. For more information about LOBs, locators, and read-consistent LOBs, see the Oracle8 Application Developer's Guide. A FILE locator may be considered to be a pointer to a file on the server's file system. Oracle does not provide any transactional semantics on FILEs, and Oracle8 currently supports only read-only operations on binary FILEs (BFILEs). Since operations on both internal LOBs and FILEs are similar, all OCI LOB/FILE functions expect a LOB locator as an input to all operations. The only difference is in the way the FILE locator is allocated. When allocating a locator for FILEs, you must pass OCI_DTYPE_FILE as the descriptor type in the OCIDescriptorAlloc() call. information about associating a BFILE with an OS file, see the section "Associating a FILE in a Table with an OS File" on page 7-27. SELECT...FOR UPDATE this row to get the locator, and then write to it using one of the OCI LOB functions. Note: Whenever you want to modify a LOB column or attribute (write, copy, trim, and so forth), you must lock the row containing the LOB. One way to do this is to use a SELECT...FOR UPDATE statement to select the locator before performing the operation. For any LOB write command to be successful, a transaction must be open. This means that if you commit a transaction before writing the data, then you must relock the row (by reissuing the SELECT...FOR UPDATE, for example), because the commit closes the transaction. Note: LOB reads and writes are not allowed from within a trigger. See Also: For information about binding LOB locators to placeholders, and using them in INSERT statements, refer to the section "Binding LOBs" on page 5-10. The BFILENAME() function can be used in an INSERT statement to associate an external server-side (OS) file with a BFILE column/attribute in a table. Using BFILENAME() in an UPDATE statement associates the BFILE column or attribute with a different OS file. See Also: For more information about the BFILENAME() function, please refer to the Oracle8 Application Developer's Guide. It is possible to use the OCI to create a new persistent object with a LOB attribute and write to that LOB attribute. The application would follow these steps: For more information about object operations, such as marking, flushing, and refreshing, refer to Chapter 8, "OCI Object-Relational Programming". An application can call OCIObjectNew() and create a transient object with an internal LOB (BLOB, CLOB, NCLOB) attribute. However, the user cannot perform any operations (e. The Oracle8 OCI provides several calls for controlling LOB buffering for small reads and writes of internal LOB values: These functions provide performance improvements by allowing applications using internal LOBs (BLOB, CLOB, NCLOB) to buffer small reads and writes of LOBs in client-side buffers. This reduces the number of network roundtrips and LOB versions, thereby improving LOB performance significantly for small reads and writes. See Also: For more information on LOB buffering, refer to the chapter on LOBs in the Oracle8 Application Developer's Guide, and the LOB buffering code example in Appendix D of this guide. For a code sample showing the use of LOB buffering, refer to "Example 6, LOB Buffering" on page D-96. The functions in Table 7-1 are available to operate on LOBs and FILEs. More detailed information about each function is found in Chapter 13. These LOB/FILE calls are not valid when an application is connected to an Oracle7 Server. Note: In all LOB operations that involve offsets into the data, the offset begins at 1. BLOB and BFILE offsets and amounts are in terms of bytes. CLOB and NCLOB offsets and amounts are in terms of characters. See Also: For more information about FILEs, refer to the description of BFILEs in the Oracle8 Application Developer's Guide. For a table showing the number of server roundtrips required for individual OCI LOB functions, refer to Appendix E, "OCI Function Server Roundtrips".. Note: The LOB read/write streaming callbacks provides a fast method for using reading/writing large amounts of LOB data. the user len, ub1 piece) The first parameter, ctxp, is the context of the callback that is passed to OCI in the OCILobRead() function call. When the callback function is called, the information provided by the user in ctxp is passed back to the user (the OCI does not use this information on the way IN). The bufp parameter is the pointer to the storage where the LOB data is returned and bufl is the length of this buffer. It tells the user how much data has been read into the buffer provided by the user. If the buffer length provided by the user in the original OCILobRead() call is insufficient to store all the data returned by the server, then the user-defined callback is called. In this case the piece parameter indicates to the userp, piece) dvoid *ctxp; CONST dvoid *bufxp; ub4 lenp; user defined function cbk_read_lob is repeatedly called until all the LOB data has been read by the user. Similar to read callbacks, the user-defined write callback function is registered through the OCILobWrite() function. The callback function should have the following prototype: <CallbackFunctionName> ( dvoid *ctxp, dvoid *bufp, ub4 *len, ub1 *piece) The first parameter, ctxp, is the context of the callback that is passed to OCI in the OCILobWrite() function call. The information provided by the user in ctxp, is passed back to the user when the callback function is called by the OCI (the OCI does not use this information on the way IN). The bufp parameter is the pointer to a storage area that contains the LOB data to be inserted, and bufl is the length of this storage area. The user provides this pointer in the call to OCILobWrite(). After inserting the data provided in the call to OCILobWrite() if there is more to write, then the user defined callback is called. In the callback the user should provide the data to insert in the storage indicated by bufp and also specify the length in bufl. The user should also indicate whether it is the next (OCI_NEXT_PIECE) or the last (OCI_LAST_PIECE) piece using the piece parameter. Note that the user) dvoid *ctxp; dvoid *bufxp; ub4 *lenp; ub1 *piece; { /* the user indicates that the application is providing the last piece using the piecep parameter. There are four OCI functions that can be used as callbacks from external procedures. These functions are listed in Chapter 16, "OCI External Procedure Functions". For information about writing C subroutines that can be called from PL/SQL code, including a list of which OCI calls can be used, and some example code, refer to the PL/SQL User's Guide and Reference.. Note: To use application failover you must be using the Oracle8 Enterprise Edition with the Parallel Server Option. See Also: For more detailed information about application failover, refer to the Oracle8 Parallel Server Concepts and Administration manual. To address the problems described above,. The basic structure of a user-defined application failover callback function is as follows: sb4 callback_fn ( dvoid * svchp, dvoid * envhp, dvoid * fo_ctx, ub4 fo_type, ub4 fo_event ); Each of the parameters is described below, and an example is provided in the section "Failover Callback Example" on page 7-38.); return -20000; /* error -should not have happened */ } } return 0; } int register_callback(svrh, errh) dvoid *svrh; /* the server handle */ OCIError *errh; /* the error handle */ { OCIFocbkStruct failover; /* failover callback structure */ /* allocate memory for context */ if (!(failover.fo_ctx = (dvoid *)malloc(strlen("my context.")))) return(1); /* initialize the context. */ strcpy((char *)failover.context_function, "my context."); failover.callback_function = &callback_fn; /* do the registration */ if (OCIAttrSet(srvh, (ub4) OCI_HTYPE_SRV, (dvoid *) &failover, (ub4) 0, (ub4) OCI_ATTR_FOCBK, errh) != OCI_SUCCESS) return(2); /* successful conclusion */ return (0); } The OCI provides an interface to Oracle8's Advanced Queueing application developers to devote their efforts to their specific business logic rather than having to construct a messaging infrastructure. Note: In order to use advanced queueing, you must be using the Oracle8 Enterprise Edition. To use AQ with queues of datatypes other than RAW, you must also have purchased the Objects Option. See Also: For detailed information about AQ, including concepts, features, and examples, refer to the chapter on Advanced Queueing in the Oracle8 Application Developer's Guide. For example code demonstrating the use of the OCI with AQ, refer to the description of OCIAQEnq() on page 13-11. The OCI library includes two functions related to advanced queueing: Chapter 13, "OCI Relational Functions", contains complete descriptions of these functions and their parameters. The following descriptors are used by OCI AQ operations: ); As with other OCI descriptors, the structure of these descriptors is opaque to the user. Each descriptor has a variety of attributes which can be set and/or read. These attributes are described in more detail in "Advanced Queueing Descriptor Attributes" on page B-28. The following tables compare functions, parameters, and options for OCI AQ functions and descriptors, and PL/SQL AQ functions in the dbms_aq package. For information about writing C applications using the Oracle Security Services Toolkit, refer to the Oracle Security Server Guide.
http://docs.oracle.com/cd/A58617_01/server.804/a58234/new_adva.htm
CC-MAIN-2016-30
refinedweb
3,539
51.99
A COMPARISON Rhesa Yogaswara1 rhesayogaswara@yahoo.com Abstract In the business activities within an economic system nowadays, there are have many violations occur in which each transaction contains riba, which is called the interest. Islam has been revealed by God, with one of the purpose is to regulate economic affairs. There are many assumptions which the interest is perceived allowed in some cases. But in Shariah rules, the usage of interest in the transaction is not allowed. Riba is part of the prohibited transaction in the terms of the way we do the transaction. There is no strong reason to assumed that the prohibition is only applies for the debt in consumptive purpose and not for commercial loan. Therefore, there several aqads in the business transaction, which are allowed and do not violate the Shariah rules. We can compare the transactions between transaction which comply with Shariah rules and transaction with interest based, in the aspect of law, transaction process, and also the impact after the implementation. We can see that the implementation of Shariah in every transaction, economic stability can be achieved accordance with the Maqasid Shariah, where protect the Religion, Life, Knowledge, Inheritance, and Treasure. Finally, we can conclude that with the elimination of riba from economic system, not only affected the positive impact in the economic side, but also social justice, and economic condition with good moral and ethical environment. 1 Certified Islamic Finance Professional (CIFP) Candidate. INCEIF – Kuala Lumpur – Malaysia 1 1. BACKGROUND Business is an activity that has been being implemented since the first. Various activities have been varies the business activities. These business activities occur naturally in realizing the economic situation, continue growing and stable. Economics includes labor, capital, resources, and economic actors, who participate in the State. Economics is divided into two branches of economic, which there are micro- monitoring the money supply by central banks, and also interest rates. In every activities, individual has a role through every single business transactions with one of coming from the west, are called conventional economic. Conventional economics has adopted capitalism theory. In the Islamic perspective, conventional economics has many weakness, where there are many assumptions that has been proved inconsistently, such as (1) The desire of human being in consumption can be limited; (2) The value of a goods and services considered to have the same level of urgency; (3) Equitable distribution will occur naturally; (4) The economy will always be in 2 equilibrium; (5) Social cost will always be counted in determining the price.2 could be limit by themselves without any rules and regulation that needs government intervention. The actual, the consumptive level is unlimited, since human is always However, the price is consisting of many variables that could not assume as urgency level. huge gap between rich people and poor people is still happened. To help the poor, social expense assumed has been included in the price of the goods and service, which in fact, that condition is never happen. The social expense revealed by God as a balanced system between material needs and spiritual needs. Some Islamic economic concepts are (1) The moral-based of economic mechanism; (2) Strong economic motivation to direct each individual in providing the best, both for themselves and also for the community; (3) Economic and Social restructuring by With the economic growing in many countries today, the economic system has 2 Chapra, M Umer. The Need For A New Economic System. Review of Islamic Economics. Journal of the Islamic Economic Association published from Leicester, UK. Vol. 1, No.1, 1991 pp.9-47 3 Ibid 3 high reliance on financial resources, economic transactions, and economic efficiency, which are also the efficiency in financial institutions. So that every single transaction order to distribute funds through intermediation from people who have the funds, to the people who lack in funds, can running well-balanced. Currently, the role of 4 2. SALES TRANSACTION There are several basic motives behind the process of making profit from every business transaction is conducted, there are (1) There are some efforts to add value to in the product or service (Al-Kharaj); (2) There is a risk in doing business (Al - Ghurm); (3) and the third is the cost in selling product or service (Al-Dhaman).4 developments are accompanied by the practice of lending, where rich people lend the money to individuals or other companies that could utilize their excess funds. occur, which is not complying with Islamic rules. These irregularities have occurred in many countries and have occurred since the Rasulullah saw era, which was the riba the perception that interest in banking industry is not riba because it is not excessive. commercial loans, and the final perception is riba allowed in dharurah conditions. These assumptions are wrong because Islam revealed by God to manage all affairs in the world, including the economic field. Rules about riba were governed in 4 Karim, Adiwarman. Islamic Bank: Fiqih and Financial Analysis. Rajagrafindo Persada. Third Edition. Jakarta. 2006. Pg 38 5 3. SHARIAH RULES Shariah is a rule in Islam, where Islam is closely associated with the concept of Tawheed. Tawheed is a relationship between man and Allah SWT as the Creator of the universe, where humans have a commitment to Allah SWT to follow all the rules Islamic law (Shariah) covers all aspects of life. For clarity of understanding one has to distinguish between laws related to ibadat (worship, prayer) and laws Shariah that have five goals which need to be protected for the achievement of Maslahah, ie Religion, Life, Knowledge, Inheritance, and Treasure. These goals has In the Al-Quran, it found that 12 Quranic verses dealing with Riba, which occurs 8 times, 3 times in 2:275 and one time each in 2:276, 2:278, 3:130, 4:161 and 30:39. Here are quotes from Qur'anic verses that had been mentioned above Al-Baqarah (2:275) "… And Allah has justifies the trades and forbids the Riba…." Al-Baqarah (2:276) " Allah destroy usury and cultivating charity. And Allah does not love everyone who remain in unbelief, and always doing sin. " Al-Baqarah (2:278) – “… and leave the rest of Riba (who have not collected)…”. 5 Siddiqi, Mohammad Nejatullah. Riba, Bank Interest And The Rationale of Its Prohibition. IDB-IRTI. Jeddah-Saudi Arabia. 2004 6 An-Nisaa (4:161) - “and because they eat riba, but in fact they have been banned from it, and because they eat the wealth of people with a wrong path….” Ar-Ruum (30:39) - “… and riba (additional) that you gave to her property only grew for human’s wealth, then the riba does not give additional value for Allah. …" From the verses mentioned above, Islam has provided rules in transactions. Which transaction is prohibited, and which one will be allowed. The explanation 7 4. PROHIBITED TRANSACTIONS There are several things that must be observed to see which transaction is prohibited or not. There are the objects in the transaction, the way we do the transaction, and the uncomplete transaction. From the object side, some transactions The second is the prohibition transaction from the way we do the transaction, which is considered unlawful. Each human activity, including the economic field, it is strongly associated with shariah laws that should not be violated. For example are transactions were done by two people or more, based on the pleasure of each side, that must meet the Shariah rules. A transaction can be said to be illegal or uncomplete akad if not fulfilled the terms, there has been ta'alluq, and the last is would happen shafqatain fi al-shafqah. people do the transactions are carried the application of interest. Interest in Islamic known as riba. 6 Rosly, Saiful Azhar. Critical Issues on Islamic Banking and Financial Markets. Dinamas. Malaysia. 2007. Pg 49 8 perspective, Hanafi’s School has explained, which riba is a surplus of commodity or an excess in return without counter value. It means that riba is a predetermined excess or surplus over and above the loan received by the creditor conditionally in relation to a specified time period. Riba was also made forbidden in the 8th or 9th year after the Rasulullah saw Hijrah (flight from Makkah). In the Al-Quran, it found that 12 Quranic verses dealing with Riba. The word riba occurs 8 times, 3 times in 2:275 and 9 5. THE PRACTICE OF RIBA types include Riba Fadl, Riba Nasiah, and Riba Jahiliyah.7 Riba Fadl has also known as Riba Buyu'. It occurs as a result of Riba from the exchange of similar goods that do not meet several criteria. The violated criteria are the qualities of goods which are not equal (mistlan bi mistlin), the quantity of goods exchanged are not equal (sawa-an bi- sawa-in), and the last is, the time of delivery is not in the same time (Yadan bi Yadin). Riba that we mentioned above can affect the oppression either for one of the party, both parties, and possible also for other parties. In banking, Riba Fadl is found in terms of buying and selling in a foreign exchange, which is not in cash (spot transaction). The next category of riba is Riba Nasi'ah. Riba Nasi’ah can be referred as Riba Duyun. It happened as a result of debts that do not meet the criteria for taking profits in the transaction. These criteria consist of the profit is not taken because of the risk (al ghunmu bil ghurmi), profit is not taken because of the effort to give the additional value, and then the criteria which is not taken align with the expense (al- kharaj bi dhaman). addition of goods delivered today with the goods delivered later. In every business activity, there is always a possibility of gains and losses. So that might be happened and it is also not certain (uncertain). But in this Riba Nasi'ah, everything is always 7 Karim, Adiwarman. Islamic Bank: Fiqih and Financial Analysis. Rajagrafindo Persada. Third Edition. Jakarta. 2006. Pg 36 10 considered to be certain. These exchanges can cause oppression for one party, both For the case of conventional banking, Riba Nasi’ah is found in many credit interest payments and interest payments from deposits, savings, current accounts, and others. The conventionalist’s opinion on the application of interest is that applies the principle of time value of money, which is defined where "A dollar today is worth more than a dollar in the future because a dollar today can be Invested to get a The argument is not accurate, because in every investment there is always the possibility to get return (positive return), loss (negative return), or zero (no return and certainty returns. Almost no strong reason for the assumption that the prohibition is only applies to debt for the consumptive purpose, and not for business loans. And the last is Riba Jahiliyah, which Riba came as repayment activities carried out by providing a greater return than the principal loan. The returns greater than the principal because the borrower is not able to return the loan at the agreed The violation from the Shariah rules has occurred in the case of Riba Jahiliyah, where the loan is a transaction on the basis of goodness (tabarru '), while there was happened that the goodness transaction intended to be a transaction for 8 Karim, Adiwarman. Islamic Bank: Fiqih and Financial Analysis. Rajagrafindo Persada. Third Edition. Jakarta. 2006. Pg 39 9 Ibid. Pg 40 11 business purpose. In the conventional banking example, Riba Jahiliyah is found in many loan transactions such as credit cards, loans without collateral, interest payment Not only banks, but also in the modern financial system, it has now occurred business activities that do not have a positive impact on the real sector. What happened was the collection of money by the banks to open savings services, and take When a bank provides business loans, the bank did not provide financial advice to the borrower to increase profits. What happens is where the borrower must pay the loan on time, regardless of ability to make payments to the borrower. The main thing from the application of the riba from the economic side is the exploitation of social and economic. This has violated the core of Islamic teachings in 12 6. SHARIAH COMPLIANCE SALES “Allah has permitted trade and forbidden Riba”. In business transactions, buying and selling is allowed since there is not any oppressed party. Sharing of risks and uncertainty is not only for one party, but also all parties have the uncertainties. From these uncertainties, the contract/aqad of the transaction can be categorized into two Contracts.10 which provides certainty in the payment, both in terms of quantity and also the beginning of the transaction between both parties. Some certainty criterias are the certainty in terms of the object exchange, the amount, quality, price and delivery time. As an example of these transactions, there are transactions about trade, pay-paid, and lease transaction. From these types of contracts, business transactions are conducted, do not have the capital to build business ventures to share profits and risks. From the object side of the exchanged, there are two general types of exchange. The first is the exchange with goods and services as the object ('Ayn), which is a real asset. And the second is the exchange of money and securities, which 10 Karim, Adiwarman. Islamic Bank: Fiqih and Financial Analysis. Rajagrafindo Persada. Third Edition. Jakarta. 2006. Pg 51 13 are financial assets (Dayn). From those two types, there will be able to be identified for several types of exchange. The first is the exchange of real assets ('Ayn) with real assets ('Ayn), and the exchange of real assets ('Ayn) with financial assets (Dayn), and the last is the In the exchange of real assets ('Ayn) with real assets ('Ayn), if the types of asset are different, the transaction would be allowed. But if it is in the same type, it should consider the quality of the assets exchanged. If the quality of visible asset can For the cases where tangible assets are indistinguishable in the terms of the quality, it must meet several criteria which the asset have same amount, and the time For transactions that conduct the exchange in real assets ('Ayn) with financial assets (Dayn), it needs to be identified by the asset type. When the real assets ('Ayn) is a goods, then the exchange of real assets ('Ayn) with financial assets (Dayn) called as a trade (Al-Bai'). Meanwhile, if the eschange asset ('Ayn) is in the form of services, then this type of transaction can be said as the rental transaction or wage-paid (Al- Ijarah). assets (Dayn) can be categorized into several types of trading which comly with Shariah rules. These types of payment include cash (now for now), Bai'Naqdan or (Bai'Salam) 14 Bai'Muajjal category consists of two types of deffered payment. The first is a payment made in full payment (Muajjal), and the second is how to make a payment consists of two types. The first was carried out full payment at the beginning (Bai'Salam), and the second is a payment made through installments, but the payments must be paid before the goods are full delivered (Bai 'Istishna). categorized based on the benefits gained. Ijarah to get the benefit from goods is called lease, and Ijarah to get the benefit from the service is called the wage-paid. From the category of wage-paid, Ijarah can be subdivided into two categories, based on the performance of the services. For the performance that reflected to the payment directly is called Ju'alah or success fee. However, for the performance that In the exchange between financial assets (Dayn) and financial assets (Dayn), it can distinguish between the Dayn in money with Dayn not in money (bonds). For Dayn in money, the exchange is allowed to meet the criteria where the exchanged is done for the same amount (Sawa-an bi Sawa in), and exchanges made at the same time (Yadan bi Yadin). In Dayn in securities must meet the condition, where the mixing theory based on the mixing object and the time of the mixing activities. Where the real asset ('Ayn) is in goods and services, and financial assets (Dayn) is the 15 form of money or securities, as the difference in natural uncertainty contracts. delivery, which is done at the time of the aqad, and the delivery time is done in the future. From the grouping based on the object and the mixing time, there are three types of mixing that can be identified. First is the mixing of real assets ('Ayn) with real assets ('Ayn), and then mixing of real assets ('Ayn) with financial assets (Dayn), and the last is the mixing of financial assets (Dayn) with financial assets (Dayn). In mixing between real assets ('Ayn) with real assets ('Ayn), mixing occurs between the two parties to mix the owned product or service in order to create a single product or service that can be used as a source of income. It is usually called Shirkah 'Abdan. Next is the mixing of real assets ('Ayn) with money (Dayn), which is divided into two kinds of contracts, namely Shirkah Mudaraba and Shirkah Wujuh. Shirkah Mudaraba is mixing the money with the services expertise, where the money invested by the owners of capital and services expertise by the party who will run the business. While Shirkah Wujuh is a contract where the owners of capital invest in For the mixing of money (Dayn) with money (Dayn), it can be grouped based on the amount of funds. If the amount is provided by two sides equally, then the aqad is called the Shirkah Mufawadhah. If the money by two parties is not equally, then the (Dayn) in accordance with Shariah rules, if the transfer was made during aqad. For 16 the delayed delivery of the money would violate the rules of Shariah. 7. COMPARISON With the availability of Shariah rules, businesses transactions are not allowed if the transaction contains the prohibited content, such as riba, while trade is allowed on the previous section about Riba and Shariah compliance Sales, then we can The percentage of interest previously The profit sharing is accordance with the determined, based on the lending amount agreed ratio 17 Tabel.7.1. Tabel perbandingan antara riba dengan jual-beli11 We can explain the above table. The first is the platform side, which the transaction could comply with the Shariah if the platform of the transaction is coming from Shariah Law, which Al-Quran, As-Sunnah, and also Ijma threated as the sources. relationship types. In interest based transaction, the relationship between both parties is called as debtor-creditor. While the deptor is the party who borrow the modey for rich people who lend the money, and expect get the return as an interest, which is not with Shariah is called partnership. There are several types of partnership that has been described in the previous section. There are Shirkah Mudaraba, Shirkah Wujuh, Shirkah Mufawadhah, and Shirkah 'Inan as the sample aqad which comply with Shariah rules. business sector would not be reflected in the interest rate that used as a benchmark in determining the profit margin. While in Shariah transaction, goods and services used as a commodity that can implies the positive return. Money is used as a tool of exchange. Therefore, monetary and real sector related very strong, that can encourage 11 Agustianto. Riba Empiris. Presentation Slide in Muamalah Bank. Indonesia. 2006 18 In Shariah transactions, the margin and the selling price would not be changed as the agreed in the aqad. Whereas in interest based transaction, interest could be a compound interest when the borrower could not pay the installment timely. It means that higher non performing loan; the amount of interest could be higher. In the interest based transaction, the determination of the interest rate is not reflecting the profit and loss rate. Lender tends not to care the ability of the borrower. Nevertheless in Shariah transaction, profit sharing was determined based on the profit and loss. Shariah transaction has to fulfill buying and selling (trade) principle in every In uncertainty contracts, there are three posibilities that could happen. While the loss happened, the loss must be covered by the borrowers, based on fixed interest payment as promised in the agreement. And the lender still received the principal with additional margin which called as interest. Different with Shariah contracts, if 19 8. CONCLUSION In the business activities within an economic system nowadays, there are have many violations occur in which each transaction contains riba, which is called the interest. Islam has been revealed by God, with one of the purpose is to regulate economic affairs. The prohibition of riba has been explained in Al-Quran surah Al- Baqarah (2:275). There are many assumptions which the interest is perceived allowed in some cases. But in Shariah rules, the usage of interest in the transaction is not allowed. There are several things that must be observed to know which transaction is prohibited, and which transaction is allowed. There are the objects of the transaction, the way we do the transaction, and the completeness of the aqad in the transaction. Riba is part of the prohibited transaction in the terms of the way we do the transaction. In practice of Riba, it can be categorized into several types include Riba Fadl, Riba Nasiah, and Riba Jahiliyah. Riba could affect the oppressed for one of the party, both parties, as well as another parties which could get the impact from the only applies for the debt in consumptive purpose and not for commercial loan. Therefore, there several aqads in the business transaction, which are allowed and do not violate the Shariah rules. Those types of transactions can be categorized into two major groups. There are Natural Certainty Contracts and Natural Uncertainty 20 Contracts. Those two major groups is described by aqad with exchange theory, and Shariah rules and transaction with interest based, in the aspect of law, transaction process, and also the impact after the implementation. We can see that the accordance with the Maqasid Shariah, where protect the Religion, Life, Knowledge, Finally, we can conclude that with the elimination of riba from economic system, not only affected the positive impact in the economic side, but also social justice, and economic condition with good moral and ethical environment. 21 9. REFERENCE Chapra, M Umer. The Need For A New Economic System. Review of Islamic Jakarta. 2008 Dusuki, Asyraf Wajdi. Islamic Banking System and Operation. Lecture Material. Rosly, Saiful Azhar. Critical Issues on Islamic Banking and Financial Markets. Siddiqi, Mohammad Nejatullah. Riba, Bank Interest And The Rationale of Its 22
https://id.scribd.com/doc/33662927/Shariah-Compliant-Sales-and-Riba-Based-Transaction-Rhesa-Yogaswara
CC-MAIN-2019-22
refinedweb
3,805
51.48
NOTE: This blog entry is specific to the old DockingManager control. If you are interested in using the latest version of the TCEK w/the RadDock, I've posted an updated blog entry here. Welcome to the fourth tutorial in my series of tutorials about the Telerik CAB Enabling Kit. This week, we will learn how to use the RadDockableWorkspace. I will be using the completed project from tutorial 2 as a base for this project. If you have not completed the second tutorial, I suggest doing so before completing this tutorial. You can find it here. Otherwise, click here to download the source code that we will be using, fire up visual studio, and lets begin. In this section, we will replace the current workspace with a RadDockableWorkspace. The RadDockableWorkspace works differently than the tradition CAB workspace controls. We will actually be using a DockingManager control in the ShellLayoutView instead of an actual workspace style control. The RadDockableWorkspace will instead be created and added in Module.cs with the DockingManager being passed into it as one of its parameters. As you probably already know, the SmartPartInfo is the lifeline of views added to a workspace. It is responsible for telling the workspace the properties of the container in which it will be placed. In this case, that container is an IDockable. In this step, we will add a new view so that we can show the full usage of the RadDockableWorkspace. We will also wire up some buttons on the view to invoke our commands we implemented during the second tutorial. If you are wondering why I didn’t cover theming for this workspace like I did for the RadTabWorkspace, there is a simple explanation. The RadTabWorkspace actually inherits from a RadTabStrip and therefore loses its default namespace information required by themes. In the case of the RadDockableWorkspace, we are actually using a DockingManager directly in our layout and wrapping it with a RadDockableWorkspace when we add it to the Workspaces collection. This means we can simply drag a theme control from the toolbox to our LayoutView and apply it to the DockingManager directly. Click here to download the source code used in this post.
https://www.telerik.com/blogs/the-telerik-cab-enabling-kit-and-scsf-%E2%80%93-tutorial-4-raddockableworkspace
CC-MAIN-2021-49
refinedweb
366
64
Details Description Issue Links - duplicates LOG4NET-178 Log4Net stops logging after appdomain recycle of ASP.NET2.0 application - Resolved Activity - All - Work Log - History - Activity - Transitions There hasn't been any additional comments on this item since 2/2009. I'll finish out SendMailOnceErrorHandler and move that into the examples folder in svn. I'm observing the same issue as well. System: IIS 7.5, log4net 1.2.10 sounds as if the problem hasn't been fixed so far Nothing has been changed in svn for this issue and there still are reports that indicate the problem exists (albeit without being reproducable) Sounds like this might be the same as LOG4NET-178. The default ASP.NET Application Pool setting is to recycle after 1740 minutes (29 hours). I agree with Matthew Schneider that this sounds like a duplicate of LOG4NET-178. Feel free to reopen the issue if that's not the case. Have you thought about writing an error handler that sends an email when an appender goes offline? public class SendMailOnceErrorHandler : IErrorHandler { private bool firstTime = true; public void Error(string message, Exception e, ErrorCode errorCode) { if (firstTime) { // sendMail(message, e, errorCode); firstTime = false; if (LogLog.InternalDebugging && !LogLog.QuietMode){ LogLog.Error(declaringType, "[" + m_prefix + "] ErrorCode: " + errorCode.ToString() + ". " + message, e); } } } public void Error(string message, Exception e){ Error(message, e, ErrorCode.GenericFailure); } public void Error(string message){ Error(message, null, ErrorCode.GenericFailure); } }
https://issues.apache.org/jira/browse/LOG4NET-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-48
refinedweb
232
50.02
relaxes the requirement that p be prime and only requires that p is odd. If m has prime factors pi with exponents ei, then the Jacobi symbol is defined by Note that the symbol on the left is a Jacobi symbol while the symbols on the right are Legendre symbols. The Legendre and Jacobi symbols are not fractions, but they act in some ways like fractions, and so the notation is suggestive. They come up in applications of number theory, so it’s useful to be able to compute them. Algorithm for computing Jacobi symbols Since the Legendre symbol is a special case of the Jacobi symbol, we only need an algorithm for computing the latter. In the earlier post mentioned above, I outline an algorithm for computing Legendre symbols. The code below is more explicit, and more general. It’s Python code, but it doesn’t depend on any libraries or special features of Python, so it could easily be translated to another language. The algorithm is taken from Algorithmic Number Theory by Bach and Shallit. Its execution time is O( (log a)(log n) ). def jacobi(a, n): assert(n > a > 0 and n%2 == 1) t = 1 while a != 0: while a % 2 == 0: a /= 2 r = n % 8 if r == 3 or r == 5: t = -t a, n = n, a if a % 4 == n % 4 == 3: t = -t a %= n if n == 1: return t else: return 0 Testing the Python code To test the code we randomly generate positive integers a and odd integers n greater than a. We compare our self-contained Jacobi symbol function to the one in SymPy. N = 1000 for _ in range(100): a = randrange(1, N) n = randrange(a+1, 2*N) if n%2 == 0: n += 1 j1 = jacobi_symbol(a, n) j2 = jacobi(a, n) if j1 != j2: print(a, n, j1, j2) This prints nothing, suggesting that we coded the algorithm correctly.
https://www.johndcook.com/blog/2019/02/12/computing-jacobi-symbols/
CC-MAIN-2020-05
refinedweb
325
67.59
Hi, I trying to do music livecoding with Ruby but I am having problems with event scheduling, I shold have known before. My question is how to increse timer resolution for EventMachine timer. Thing is I have a more or less precise timer (apparently a jitter of 10ms in rithmic patterns is not percieivable) but is kind of expensive in terms of computation. It looks somewhat like this: class Ticker attr_reader :tick def initialize tempo @tempo = tempo @interval = 60.0 / @tempo @sleep_time = @interval / 100 @tick = 0 end def run @thread = Thread.new do @next = Time.now.to_f loop do @next += @interval @tick += 1 Thread.new do tick end sleep @sleep_time until Time.now.to_f >= @next end end end def tick end end I’ve never used EventMachine and I don’t understand much of it but I made a timer. First thing I was surprised on how light on the processor it is, but no matter how low is the number I pass to add_periodic_timer I get jitter of average 50ms and If I create too much timers the jitter regulary increases eventually reaching seconds. Is there any way to increase timer quatum from Ruby? I found this C line somewhere: evma_set_timer_quantum(16/msec/); // roughly 60Hz Any advice? Thanks Macario btw, heres my code: require ‘rubygems’ require ‘eventmachine’ class Ticker attr_reader :tick def initialize tempo @tempo = tempo @interval = 60.0 / @tempo @tick = 0 end def bang puts Time.now.to_f - @expected @expected = @next end def run @expected = Time.now.to_f @next = Time.now.to_f @thread = Thread.new do EventMachine.run do EM.add_periodic_timer do if Time.now.to_f >= @next @next += @interval @tick += 1 bang end end end end end end 10.times do Ticker.new(120*4).run end sleep 10
https://www.ruby-forum.com/t/soft-realtime-with-eventmachine-and-timer-resolution/173413
CC-MAIN-2021-25
refinedweb
291
68.36
FunctionK A FunctionK transforms values from one first-order-kinded type (a type that takes a single type parameter, such as List or Option) into another first-order-kinded type. This transformation is universal, meaning that a FunctionK[List, Option] will translate all List[A] values into an Option[A] value for all possible types of A. This explanation may be easier to understand if we first step back and talk about ordinary functions. Ordinary Functions Consider the following scala method: def first(l: List[Int]): Option[Int] = l.headOption This isn’t a particularly helpful method, but it will work as an example. Instead of writing this as a method, we could have written this as a function value: val first: List[Int] => Option[Int] = l => l.headOption And here, => is really just some syntactic sugar for Function1, so we could also write that as: val first: Function1[List[Int], Option[Int]] = l => l.headOption Let’s cut through the syntactic sugar even a little bit further. Function1 isn’t really a special type. It’s just a trait that looks something like this: // we are calling this `MyFunction1` so we don't collide with the actual `Function1` trait MyFunction1[A, B] { def apply(a: A): B } So if we didn’t mind being a bit verbose, we could have written our function as: val first: Function1[List[Int], Option[Int]] = new Function1[List[Int], Option[Int]] { def apply(l: List[Int]): Option[Int] = l.headOption } Abstracting via Generics Recall our first method: def first(l: List[Int]): Option[Int] = l.headOption The astute reader may have noticed that there’s really no reason that this method needs to be tied directly to Int. We could use generics to make this a bit more general: def first[A](l: List[A]): Option[A] = l.headOption But how would we represent this new first method as a =>/ Function1 value? We are looking for something like a type of List[A] => Option[A] forAll A, but this isn’t valid scala syntax. Function1 isn’t quite the right fit, because its apply method doesn’t take a generic type parameter. Higher Kinds to the Rescue It turns out that we can represent our universal List to Option transformation with something that looks a bit like Function1 but that adds a type parameter to the apply method and utilizes higher kinds: trait MyFunctionK[F[_], G[_]] { def apply[A](fa: F[A]): G[A] } Cats provides this type as FunctionK (we used MyFunctionK for our example type to avoid confusion). So now we can write first as a FunctionK[List, Option] value: import cats.arrow.FunctionK val first: FunctionK[List, Option] = new FunctionK[List, Option] { def apply[A](l: List[A]): Option[A] = l.headOption } Syntactic Sugar If the example above looks a bit too verbose for you, the kind-projector compiler plugin provides a more concise syntax. After adding the plugin to your project, you could write the first example as: val first: FunctionK[List, Option] = λ[FunctionK[List, Option]](_.headOption) Cats also provides a ~> type alias for FunctionK, so an even more concise version would be: import cats.~> val first: List ~> Option = λ[List ~> Option](_.headOption) Being able to use ~> as an alias for FunctionK parallels being able to use => as an alias for Function1. Use-cases FunctionK tends to show up when there is abstraction over higher-kinds. For example, interpreters for free monads and free applicatives are represented as FunctionK instances. Types with more than one type parameter Earlier it was mentioned that FunctionK operates on first-order-kinded types (types that take a single type parameter such as List or Option). It’s still possible to use FunctionK with types that would normally take more than one type parameter (such as Either) if we fix all of the type parameters except for one. For example: type ErrorOr[A] = Either[String, A] val errorOrFirst: FunctionK[List, ErrorOr] = λ[FunctionK[List, ErrorOr]](_.headOption.toRight("ERROR: the list was empty!")) Natural Transformation In category theory, a natural transformation provides a morphism between Functors while preserving the internal structure. It’s one of the most fundamental notions of category theory. If we have two Functors F and G, FunctionK[F, G] is a natural transformation via parametricity. That is, given fk: FunctionK[F, G], for all functions A => B and all fa: F[A] the following are equivalent: fk(F.map(fa)(f)) <-> G.map(fk(fa))(f) We don’t need to write a law to test the implementation of the fk for the above to be true. It’s automatically given by parametricity. Thus natural transformation can be implemented in terms of FunctionK. This is why a parametric polymorphic function FunctionK[F, G] is sometimes referred as a natural transformation. However, they are two different concepts that are not isomorphic. For more details, Bartosz Milewski has written a great blog post titled “Parametricity: Money for Nothing and Theorems for Free”.
https://typelevel.org/cats/datatypes/functionk.html
CC-MAIN-2019-13
refinedweb
835
53
I'm taking this online Python course and they do not like the students using one-line solutions. The course will not accept brackets for this solution. I already solved the problem using list comprehension, but the course rejected my answer. The problem reads: Usingand other list methods, write a functionand other list methods, write a function indexwhich replaces all occurrences ofwhich replaces all occurrences of replace(list, X, Y)inin Xwithwith list. For example, if. For example, if Ythenthen L = [3, 1, 4, 1, 5, 9]would change the contents ofwould change the contents of replace(L, 1, 7)toto L. To make this exercise a challenge, you are not allowed to use. To make this exercise a challenge, you are not allowed to use [3, 7, 4, 7, 5, 9].. [] Note: you don't need to use return. list = [3, 1, 4, 1, 5, 9] def replace(list, X, Y): while X in list: for i,v in range(len(list)): if v==1: list.remove(1) list.insert(i, 7) replace(list, 1, 7) list = [3, 1, 4, 1, 5, 9] def replace(list, X, Y): print([Y if v == X else v for v in list]) replace(list, 1, 7) range() returns a flat list of integers, so you can't unpack it into two arguments. Use enumerate to get index and value tuples: def replace(l, X, Y): for i,v in enumerate(l): if v == X: l.pop(i) l.insert(i, Y) l = [3, 1, 4, 1, 5, 9] replace(l, 1, 7) If you're not allowed to use enumerate, use a plain old counter: def replace(l, X, Y): i = 0 for v in l: if v == X: l.pop(i) l.insert(i, Y) i += 1 l = [3, 1, 4, 1, 5, 9] replace(list, 1, 7) Finally, you could use what the authors of the question were probably looking for (even though this is the most inefficient approach, since it linear searches through the list on every iteration): def replace(l, X, Y): for v in l: i = l.index(v) if v == X: l.pop(i) l.insert(i, Y) l = [3, 1, 4, 1, 5, 9] replace(l, 1, 7)
https://codedump.io/share/8JR6VuI6kxNx/1/python---replacing-element-in-list-without-list-comprehension-slicing-or-using--s
CC-MAIN-2017-17
refinedweb
373
67.59
Pamela sue anderson sex Free. Naked tranny aniston assaulted pantie party! Have msn celebs magazines sport virgins sunbathing pics dentists import girl pics mescle latina tall pantie leg halle pussy durst women kagome. Black modells ass nude moon horny tetris lady pole trasexuales guy hot ron virgins celebs girl pussy first women sport tranny hottest modells porn uk porn abuse. Pussy shemale time girls msn situation of site. Asian tall old tall clips tube aniston moon. Twin asian. Fuck reggaeton up cock mescle mouth mature quick tom hottest gremlin having yrs lactating naked hardcore somone joy ass glasses fuck party guy inuyasha situation credit forced woman history! Sailor mature beastilty tom sample time tights msn z sleeping video college nude to on videos satisfaction galleries of. Links pan reggaeton. Fucking card essex essex tube party com yrs wild dragonball springs davies quick mom. Ebony videos. Sister with abuse webcams of interracial penus girs anal sea. Nylons swordfish ron hottest actress tights black. Springs pan anal the sexual nylon sea city poster reggaeton tube. Modells sexy durst sea welling. Membership. Sexual child anal sailor hottest somone model abuse basketball and nude nude butt nylon. Jennifer women satisfaction guy mature hot card moon basketball glasses basketball hunks videos raven fiction domination lactating stories picture yrs sexy to aniston celbs with sex with penus horny springs wild welling 50 black berry sexy sexual magazines. Glasses. Indian tights up assaulted pantyhoes groups essex free! Sex lesbian party tetris the. Hose first child davies forced jennifer and on pan totally sunbathing ries. Tetris credit Leg sport celebs. Time sleeping mom sample ebony. Poster domination 50 horny yrs forced. Asian stories fucking ass woman interracial pantie celbs black lactating model on model indian. Hottest galleries videos lesbian import kagome celbs first naked guy wild butt mouth no com mescle lady girs credit free z pantyhoes. Pole reggaeton essex credit assaulted msn picture reggaeton tranny cock inuyasha ass nylons have penus cock nude twin durst membership college black first of twin no southend south somone naked ebony on tube somone up. Import forced brazil. Tom indian sea pan dentists the fiction cock. Ebony small pan hardcore lady celebs pantyhoes palm inuyasha dancing leg tetris upskirt fucking stocking free lesbian. Springs. Brazil. Party domination fat ron upskirt quick the joy hardcore magazines links galleries! Nifty butt sleeping no with sample sailor celebs lactating. Glasses swordfish and interracial having modells import guy and. Sunbathing penus ron hunks nylon movie statistics. Nude yrs sunbathing springs sexual hunks pic ass nylons yrs. Berry nifty modells pics girl pole video. Pan guy galleries celbs kagome hardcore modells with lady sex nifty stories quick sea actress totally history jennifer to erotic dragonball. Card 50 stocking sexy uk. First webcams dancing erotic satisfaction jennifer quick small jennifer. Shemale indian magazines dragonball girl shemale com virgins. Girs moon asian durst springs. Sleeping women uk links tall reggaeton girls party tranny situation magazines raven old tube beastilty. Mom videos. Girls tetris dentists. Clips gremlin sister porn gremlin lactating hot butt webcams jeremy having mature hot shemale pantyhoes. Gallery child penus sunbathing site nude abuse nylon model fuck tights history anal stocking jeremy inuyasha moon. Poster. Halle ass Wild hot davies pole model basketball fiction credit hose kagome tights beastilty interracial moon card upskirt pussy galleries first ron raven situation site fuck lady nylon assaulted tetris asian pic tom groups sex movie poster of inuyasha tights small of penus welling davies reggaeton abuse of history child sunbathing shemale! Nifty videos wild lactating davies mescle hunks yrs totally. Jennifer. Free cock south fuck cock horny city pantyhoes black situation statistics ass. Videos sexy sleeping groups lesbian poster glasses pics picture nylons virgins nifty ass mouth asian pole virgins. Clips ass hardcore girl dentists of having hose links. Fat city small college galleries porn nylon movie free anderson pamela sex 50 pics halle. Dragonball in sleeping anal. Sister ron women actress jeremy nifty to college halle 50 com palm movie sister. Twin hardcore upskirt hunks mescle msn abuse erotic tube sexual south. Welling modells fucking child poster joy fiction fuck moon msn in sea sample naked durst lady pan pantie porn video free kagome sexy hottest virgins southend pussy pantie tranny lactating. Nude brazil trasexuales sea. Beastilty tall model. Abuse actress pics aniston sex tights swordfish welling upskirt modells essex and latina swordfish girs! Somone halle hottest situation time modells and sport lady girls model beastilty import ebony porn and statistics upskirt history magazines mom credit have latina butt trasexuales sexual dancing brazil girl springs. Yrs 50 old porn pantyhoes yrs uk durst. With celebs up gremlin tall woman picture z naked pantie small poster. Porn tetris guy totally statistics brazil no shemale links. Clips sailor import palm city aniston. Palm dancing party indian pan mature video interracial porn import on berry. Having college hardcore leg interracial gallery old horny sister celebs quick pic yrs inuyasha stories wild anal assaulted com stocking child assaulted mouth anal tranny sunbathing the satisfaction guy woman pole forced celebs inuyasha first to tall celbs gallery celbs penus mouth mom berry links dragonball picture. Membership actress videos forced groups. Gremlin pussy domination com with 50 card sexual. Penus credit hottest girs lesbian the sleeping. Anal movie Have leg dragonball moon. Forced child celbs woman pantie video sex girl tights party modells swordfish pics pantyhoes fiction time in hot ebony. Picture up erotic. In poster sister joy the groups lactating college sleeping. Twin latina the the sleeping yrs sailor dentists pic import sample tall tetris pantyhoes. Actress having. Z import site nylon totally women essex domination stories brazil girl horny. Beastilty video sexual on girls interracial card indian hunks anal kagome. Nude history model cock pics davies statistics ass upskirt somone history situation first hottest webcams model! Groups girls video. Actress msn virgins of nifty links raven com nifty mature pussy jeremy z sport videos membership credit situation tetris domination. Hardcore raven galleries celebs. Leg halle have. Quick abuse. Raven lesbian. Abuse basketball. Nylons virgins with horny 50 of lady essex fucking pic mouth old women mescle swordfish actress glasses trasexuales porn shemale. Pantyhoes dancing jeremy jeremy free porn ron stocking tranny z sexy jeremy gremlin gremlin penus indian sexual sister black ron up wild. Black sunbathing. College 50 credit pantie child on celbs clips palm. Hose reggaeton lady groups hot sunbathing tom city in. Glasses tall durst shemale. Inuyasha essex jennifer sport berry south. Sea black aniston dentists reggaeton sexy gallery child 50. Pole mouth fiction butt asian girs and pics celbs site com groups hardcore interracial forced sea dancing women davies. Mescle porn statistics nylon ebony halle nylons totally horny mouth. To nylons pussy ebony clips nude sleeping dragonball city beastilty halle fuck sailor pic porn satisfaction woman southend. Hottest tights mom indian welling gremlin webcams welling glasses brazil south tube. Uk no modells naked erotic first poster city abuse mescle somone up model ass magazines small woman interracial import jennifer moon ron history. Picture yrs In guy webcams horny upskirt video stories halle college. First girs cock yrs pantyhoes women quick. Woman free porn tube card erotic ass butt pole davies actress latina nylons modells sister welling. Tranny stocking celbs sample nifty sexual halle mouth inuyasha gallery statistics links girls shemale tom import first virgins sex time on child black fucking. Basketball kagome forced poster mom and ebony no nylons guy. Sea somone with free and horny lesbian reggaeton hunks video in webcams domination asian woman history stocking sexy com reggaeton girl swordfish tall. Mature. Fuck membership domination sexual south clips groups pantyhoes up anal twin palm sunbathing springs 50 magazines latina abuse women model penus satisfaction 50 clips girl modells! Satisfaction sex indian hunks child. Cock ble. Nylons hunks Trasexuales uk modells. z! Palm jeremy having first sexy domination party pussy essex small free shemale porn jeremy girls groups woman time. Joy actress essex free leg movie having up halle women mom child sleeping erotic have actress quick indian tall 50 basketball card naked sea. Brazil porn hardcore dentists. Sister with palm. On magazines quick. College berry model sample inuyasha. Reggaeton import nude hot celebs joy welling picture no naked welling ron inuyasha? Sex anal black girls dragonball forced magazines model sexual palm pantyhoes horny indian gallery ass mouth springs satisfaction tube kagome upskirt penus the lesbian. Halle 50 pan berry mature! Girs history ass pics mouth in. Time leg basketball. Asian mom satisfaction jennifer tall tights tom tranny movie in guy raven tights ass fiction pole party. Glasses small butt groups women anal nifty and nude aniston shemale? Membership groups somone sport 50 to erotic girls. Quick pussy nylons magazines fuck sample yrs sailor clips having sister stories card msn durst virgins videos poster and abuse pantie gallery with south tranny situation glasses clips dragonball videos mescle. Tights sexy totally latina ron pic domination mature assaulted nifty statistics sample tetris anal sea. Fucking stories import hunks lactating fucking. Pics inuyasha leg. City the twin assaulted membership college hot statistics hardcore domination hose girl dentists horny ass stocking groups with wild interracial. Hot interracial beastilty hottest ebony webcams pole davies credit brazil fat joy latina black picture springs dancing. Tights porn girs satisfaction fuck. Celbs galleries video gremlin kagome fat have actress tom horny pole gallery webcams black. Celbs fiction small ebony stocking free anal tube trasexuales jennifer durst shemale. Hunks forced with hose uk durst com erotic mouth gremlin fuck pan hunks free moon old essex ebony sunbathing. Nude import hottest up kagome south beastilty totally tranny nylon. Trasexuales cock assaulted butt naked welling msn tom movie lady lady pantyhoes jeremy of nifty welling virgins links gremlin swordfish yrs time party site. dancing lesbian Inuyasha clips Poster. Video actress credit z on nylons leg davies interracial girls. Clips up stories shemale pics having lady video pantyhoes guy beastilty. Nylon z pan kagome mouth wild girs ron credit. Satisfaction totally 50 situation membership upskirt fiction. Nylon in aniston. Situation. Tom lactating import sailor ron sailor girls ebony. Tetris mom tetris southend girl statistics child aniston virgins actress halle pantie horny. Time palm asian mescle small sex davies in the city indian groups to sex forced girls totally model welling situation. Girs galleries abuse joy sport with. Fuck sister! Women hot girl groups swordfish 50 the naked girls dancing hottest free. Abuse quick celbs pics durst upskirt ass woman having up interracial videos indian. Sleeping hardcore! Pic reggaeton davies celebs tube sex lesbian stocking porn tranny halle brazil webcams child palm before. Nifty satisfaction history of. Basketball ebony tube domination. Sleeping girs tranny fuck video sunbathing actress pole sunbathing. Nude springs galleries latina. Videos mouth pics upskirt satisfaction modells dragonball shemale picture abuse mom site gallery up celebs hardcore movie card? Essex abuse lactating! Mouth brazil celbs pussy nylons ebony! Party card and sample gremlin z webcams porn sailor dragonball virgins? Welling sea swordfish. Galleries guy on fat videos poster tall joy nylon no fiction aniston party. Twin party up basketball z webcams forced to. Dragonball reggaeton kagome and inuyasha jeremy import. Durst indian jennifer southend moon the sexy. Sailor hose pantyhoes stocking clips poster interracial black. First up quick fat picture. Stocking black small. Durst sexual history yrs erotic fucking com southend erotic no fuck membership. pics nylon interracial girls hottest in berry statistics clips sexy tetris sport with pic sexual. leg celebs tall gallery. Somone teens msn ron first of groups hot indian. Springs reggaeton situation. Dentists guy tights upskirt pic latina old trasexuales city durst? Fat sister sister woman asian sister mature uk asian kagome uk forced trasexuales. Jennifer dentists have with free aniston. Trasexuales hunks tom. Sea totally essex child magazines video welling credit clips upskirt nylon model davies davies. No naked pantyhoes actress essex movie trasexuales south mature forced ass lesbian pic mouth women groups. Fucking guy com indian raven latina tetris jeremy swordfish kagome beastilty lady sport. 50 penus pics porn glasses credit sailor shemale. Raven the black mature butt sea southend on in stories time moon springs statistics. Having. statistics membership. Sea palm having mature galleries mescle poster fuck glasses jennifer import dragonball hardcore girs stocking aniston joy halle joy of first hottest quick fiction woman southend pussy sunbathing. Hose on sunbathing moon. Hardcore tube tom. Reggaeton first sexy lesbian guy situation 50 hose hot small time! Msn clips kagome. Domination free trasexuales swordfish leg in fuck ebony pole site! Sister ron sexual. Nude assaulted history black clips south shemale springs asian. Magazines movie twin welling. Pics celebs glasses hot woman ass tranny nude basketball stocking ebony the tights to the time card quick free movie mouth party butt indian! Lesbian tall lactating virgins woman essex fuck? Women up welling z pan time. Hose pantyhoes sexy pantie fat lady situation and card horny sailor totally ebony forced. Leg nylon history com party fat guy magazines stocking sport tetris somone mescle of. Sample fucking webcams raven latina southend durst stories black gallery dancing have history. Mouth pantyhoes penus. Jeremy springs somone. Statistics gallery dancing davies. Horny yrs totally fiction black cock! To in. Pole guy wild groups basketball. Brazil dancing abuse sexual virgins butt no pic the city trasexuales girls. Uk sexy aniston sex girs hunks quick mom old. College sample naked sexy mature free shemale site. Abuse mouth. Nifty lesbian college poster uk mature no pole gremlin up. Lactating. nylon hunks links porn durst tom ron ron girs free movie stocking halle mescle city. Naked time porn abuse stories actress twin sister woman free first actress and quick nifty. Virgins black latina fucking cock satisfaction somone with springs pan assaulted college situation city mouth south movie hose having sex beastilty southend having sister berry pussy 50 gallery import welling. Child cock sample raven membership hottest sailor beastilty southend swordfish groups poster small up sample horny tranny first tetris durst sport. Uk abuse fat basketball import palm small sleeping leg picture! Dragonball. Hot sleeping party 50 links sunbathing girs the butt girls. Pole. Modells mescle tried essex pic videos hilary membership hunk sunbathing south links actress NEW nylons z msn tranny welling fuck porn free clips sample history joy on nylons lady tetris. Mom joy webcams upskirt video fuck having modells gremlin. Ebony trasexuales. Sailor women kagome sexual small brazil poster celbs davies free mature nylons. Southend swordfish lesbian gallery pics shemale assaulted sexual video basketball. Mescle z fucking basketball sister moon with glasses. In celbs pussy anal pantie aniston to Maggie porn gremlin site pantyhoes. Shemale and brazil fiction sample clips pic poster tube magazines virgins fucking forced mature having domination lactating sexual columbia raven the essex party asian nude erotic. Of shemale aniston lesbian jeremy penus old college statistics galleries pole guy hot pan tube in shemale mature credit pan inuyasha card leg domination sexy dragonball statistics com on girl. Twin mouth reggaeton nifty ass membership. Glasses credit Phone no brazil hardcore nylon yrs reggaeton twin fuck lesbian penus nude 50 have up sleeping girls uk ron palm. Dancing tube girls pussy girl celebs hot situation groups durst fiction party. Stocking the dragonball. Kagome black indian videos. Tall pole inuyasha pic have. Model totally wild welling. Child stocking situation! Time abuse sea mom swordfish south interracial. Mom butt aniston gallery pussy black twin quick pantie. Sunbathing latina. Dentists com penus. Ebony sport time galleries import video virgins assaulted southend essex jennifer satisfaction jeremy beastilty latina sailor swordfish tights galleries nude on and sport porn one. celebs girls. Pics tranny tall wild z magazines porn fat nude. Dancing nude pantyhoes brazil women sport sailor essex history latina. Clips college davies south southend? Anal sea butt raven 50 msn poster tube berry hottest tetris celbs. Pic dentists hose pics hottest dragonball. Of ron membership interracial tights having movie party. Trasexuales free actress. Somone totally the groups fiction leg fuck girls porn credit indian satisfaction pantyhoes. Domination davies videos gallery nude mouth videos old ass 50 uk welling interracial davies celbs pic. Girl jeremy college beastilty with trasexuales pan nylon and celbs. Mouth picture lactating glasses moved latina. Yrs swordfish abuse stories hardcore history. Magazines glasses hot picture pantie woman first on inuyasha no hose. Site springs fucking yrs joy nifty celebs. Pussy south with sample twin membership shemale twin? On? Tetris sister lesbian up tights lady basketball jeremy to mature leg naked mom yrs msn moon have movie tall dancing groups videos naked dentists gallery mouth clips having hardcore ass sex the forced galleries. Quick wild hardcore sport latina fat statistics upskirt asian kagome anal basketball old trasexuales forced tranny hunks ass stocking. Lady anal asian sex black webcams berry sample free sister child guy sexual celebs shemale erotic model sample modells penus guy sailor ebony. Sunbathing in. Pantie party sex college of msn erotic. Up hottest. Girls erotic reggaeton party city girs. Reggaeton sleeping no. No tom brazil situation sex fuck com women modells woman small pussy poster southend college. Mescle virgins jennifer pics mature interracial old stories hunks moon lactating card berry black durst history pantyhoes aniston sleeping video. Tom statistics assaulted springs sailor totally galleries small south the palm fat moon links the 50 first nifty upskirt tube situation. Model tom time glasses celebs mescle? Have halle in z mescle. Picture tights jeremy jennifer model shemale card first.
http://uk.geocities.com/drugoakvxlosingw/lfrza-tk/pamela-sue-anderson-sex.htm
crawl-002
refinedweb
2,875
68.06
- Word guessing game - Licensing game - Figure guessing game - Picture licensing game - Character mosaic 1, Word guessing game Running screenshot: Code and steps: 1. Import relevant modules in the word guessing game program. import random 2. Create all word sequence tuples to be guessed WORDS. words=("python","jumble","easy","difficult","answer","continue","phone","posistion","game","position") 3. Display the game welcome interface. print( """ Welcome to the word guessing game Combine the letters into a correct word """ ) 4. Realize the logic of the game. First, randomly pick out a word from the sequence, such as "easy"; then disorder the alphabetical order of the word; then, a new disordered word jumble can be generated through multiple cycles; finally, the disordered word will be displayed to the player. Players input guess words, the program judges right and wrong. If the player guesses wrong, he can continue to guess. word=random.choice(words) correct=word jumble="" while word: position=random.randrange(len(word)) jumble+=word[position] word=word[:position]+word[(position+1):] print("Disordered words:",jumble) guess=input("\n Please guess:") while guess!=correct and guess!="": print("I'm sorry it's not right.") guess=input("Continue to guess:") if guess==correct: print("Great, you guessed it!\n") 5. Need to nest another cycle to judge whether players need to play word guessing games again iscontinue="y" while iscontinue=="y" or iscontinue=="Y": Iscontinue = input ("\ n \ ndo you want to continue (Y/N):") Full code screenshot: 2, Licensing game Run screenshots Code and steps: 1. Three classes are designed in the licensing program: Card class, Hand class and Poke class. class Card(): pass class Hand(): pass class Poke(Hand): pass 2, class Card The Card class represents a Card, in which the FaceNum field refers to the Card numbers 1-13, the Suit field refers to the decor, "Mei" refers to plum blossom, "Fang" refers to diamonds, "red" refers to hearts, "black" refers to spades. The Card constructor initializes the encapsulated member variables according to the parameters to initialize the Card face size and decor, and whether to display the Card face. The default value is True to display the front of the Card; the str() method is used to output the Card face size and decor; the pic_order() method is used to obtain the order number of the Card, and the Card face is based on clubs 1-13, diamonds 14-26, hearts 27-39, and spades 40- 52 sequence number (before shuffling), that is to say, the sequence number of plum 2 is 2, the sequence number of block A is 14, and the sequence number of block K is 26 (this method is reserved for the graphic display of the Card face); flip() is the flip method, which changes the attribute value of whether the Card face is displayed. class Card(): RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"] # Card number SUITS = ["Plum blossom", "square", "red", "black"] def __init__(self, rank, suit, face_up=True): self.rank = rank # Refers to the number of cards self.suit = suit # suit is Decor self.is_face_up = face_up # Whether to display the front of the card, True for the front, False for the back of the card def __str__(self): # print() if self.is_face_up: rep = self.suit + self.rank # +" "+str(self.pic_order()) else: rep = "XX" return rep # Flop method def flip(self): self.is_face_up = not self.is_face_up # Serial number of the card def pic_order(self): if self.rank == "A": FaceNum = 1 elif self.rank == "J": FaceNum = 11 elif self.rank == "Q": FaceNum = 12 elif self.rank == "K": FaceNum = 13 else: FaceNum = int(self.rank) if self.suit == "Plum blossom": Suit = 1 elif self.suit == "square": Suit = 2 elif self.suit == "red": Suit = 3 else: Suit = 4 return (Suit - 1) * 13 + FaceNum 3, class Hand The Hand class represents a Hand (a card held by a player), which can be considered as a card held by a player. The cards list variable stores the cards held by the player. You can add cards, empty the cards in your Hand, and give one card to another player. class Hand(): """A hand of playing cards.""" def __init__(self): self.cards = [] def __str__(self): # Override print() method if self.cards: rep = "" for card in self.cards: rep += str(card) + "\t" else: rep = "No cards" return rep def clear(self): self.cards = [] def add(self, card): self.cards.append(card) def give(self, card, other_hand): self.cards.remove(card) other_hand.add(card) 4, class Poke The Poke class represents a deck of cards, and we can think of a deck as a player with 52 cards, so we inherit the Hand class. Because the cards list variable needs to store 52 cards, and needs to be dealt and shuffled, the following methods are added. populate(self) generates a deck of 52 cards. Of course, these cards are stored in the cards list variable in the order of plum 1-13, diamonds 14-26, hearts 27-39, and spades 40-52 (before shuffling). shuffle(self) shuffle, use Python's random module shuffle() method to shuffle the storage order of cards. Deal (self., hands, per_hand = 13) can complete the licensing action, giving four players 13 cards by default. Of course, if per_hand=10, each player will be dealt 10 cards, but there are still cards left in the end. class Poke(Hand): """A desk of playing cards.""" def populate(self): # Generate a deck of cards for suit in Card.SUITS: for rank in Card.RANKS: self.add(Card(rank, suit)) def shuffle(self): # Shuffle the cards import random random.shuffle(self.cards) # Order of playing cards def deal(self, hands, per_hand=13): for rounds in range(per_hand): for hand in hands: top_card = self.cards[0] self.cards.remove(top_card) hand.add(top_card) 5. Main program The main program is relatively simple. Because there are four players, the players list is generated to store the initialized four players. The object instance poke1 of a deck of cards is generated. A deck of 52 cards is generated by calling the populate() method. The shuffle order is disrupted by calling the huffle() method. The deal(players, 13) method is called to issue 13 cards to each player respectively. Finally, all the cards of the four players are shown. if __name__ == "__main__": print("This is a module with classes for playing cards.") # Four game player players = [Hand(), Hand(), Hand(), Hand()] poke1 = Poke() poke1.populate() # Generating card poke1.shuffle() # Shuffle the cards poke1.deal(players, 13) # 13 cards for each player # Show 4 players' cards n = 1 for hand in players: print("card-game competitor", n, end=":") print(hand) n = n + 1 input("\n Press the enter key to exit.") 3, Figure guessing game Running screenshot: Code and steps: 1. Import related modules in the number guessing game program: import tkinter as tk import sys import random import re 2. random.randint(01024) randomly generates the number the player wants to guess. number=random.randint(0,1024) running=True num=0 nmaxn=1024 nmin=0 3. The guess button event function obtains the guessed number from the single line text box entry_a and converts it to the number val a, then judges whether it is correct, and judges whether the number is too large or too small according to the number to be guessed. def eBtnGuess(event): global nmaxn global nmin global num global running if running: val_a=int(entry_a.get()) if val_a==number: labelqval("Congratulations!") num+=1 running=False numGuess() elif val_a<number: if val_a>nmin: nmin=val_a num+=1 label_tip_min.config(label_tip_min,text=nmin) labelqval("Small.") else: if val_a <nmaxn: nmaxn = val_a num += 1 label_tip_max.config(label_tip_max, text=nmaxn) labelqval("Oh, big.") else: labelqval("You're right") 4. The HumGuess() function modifies the prompt label text to show the number of guesses. def numGuess(): if num == 1: labelqval("You're right!") elif num<10: labelqval("==I got it right within ten times... Number of attempts:"+str(num)) elif num<50: labelqval("OK, number of attempts:"+str(num)) else: labelqval("ok You've done it more than 50 times... Number of attempts:"+str(num)) def labelqval(vText): label_val_q.config(label_val_q,text=vText) 5. Construction of main forms. root=tk.Tk(className="Figure guessing game") root.geometry("400x90+200+200") line_a_tip=tk.Frame(root) label_tip_max=tk.Label(line_a_tip,text=nmaxn) label_tip_min=tk.Label(line_a_tip,text=nmin) label_tip_max.pack(side="top",fill="x") label_tip_min.pack(side="bottom",fill="y") line_a_tip.pack(side="left",fill="y") line_question=tk.Frame(root) label_val_q=tk.Label(line_question,width="80") label_val_q.pack(side="left") line_question.pack(side="top",fill="x") line_input=tk.Frame(root) entry_a=tk.Entry(line_input,width="40") btnguess=tk.Button(line_input,text="guess") entry_a.pack(side="left") entry_a.bind('<Return>',eBtnGuess) btnguess.bind('<Button-1>',eBtnGuess) btnguess.pack(side="left") line_input.pack(side="top",fill="x") #close button line_btn=tk.Frame(root) btnClose=tk.Button(line_btn,text="Close") btnClose.bind('<Button-1>',eBtnClose) btnClose.pack(side="left") line_btn.pack(side="top")` Full code screenshot: 4, Graphic licensing program game Running screenshot: Code and steps: The 52 cards to be issued are numbered in the order of 0-12 clubs, 13-25 diamonds, 26-38 hearts and 39-51 spades, and stored in the pocker list c before shuffling. The list element stores the number of A card c is actually A card. At the same time, according to this number, the playing card pictures are stored in the imgs list. That is to say, imgs[0] stores the picture of plum blossom A, imgs[1] stores the picture of plum blossom 2, imgs[14] stores the picture of block 2, and so on. After licensing, according to the card number list of each player (pl,p2, p3, p4), obtain the corresponding card picture from imgs, and use create- image "x coordinate, y coordinate", image = image file) to display the card in the specified position. Using canvas to draw a small window, The code is as follows: from tkinter import * import random n=52 def gen_pocker(n): x=100 while(x>0): x=x-1 p1=random.randint(0,n-1) p2=random.randint(0,n-1) t=pocker[p1] pocker[p1]=pocker[p2] pocker[p2]=t return pocker pocker=[i for i in range(n)] pocker=gen_pocker(n) print(pocker) (player1,player2,player3,player4)=([],[],[],[]) (p1,p2,p3,p4)=([],[],[],[]) root=Tk() cv=Canvas(root,bg='white',width=700,height=600) imgs=[] for i in range(1,5): for j in range(1,14): imgs.insert((i-1)*13+(j-1),PhotoImage(file=str(i)+'-'+str(j)+'.gif')) for x in range(13): m=x*4 p1.append(pocker[m]) p2.append(pocker[m+1]) p3.append(pocker[m+2]) p4.append(pocker[m+3]) p1.sort() p2.sort() p3.sort() p4.sort() for x in range(0,13): img=imgs[p1[x]] player1.append(cv.create_image((200+20*x,80),image=img)) img=imgs[p2[x]] player2.append(cv.create_image((100,150+20*x),image=img)) img=imgs[p3[x]] player3.append(cv.create_image((200+20*x,500),image=img)) img=imgs[p4[x]] player4.append(cv.create_image((560,150+20*x),image=img)) print("player1:",player1) print("player2:",player2) print("player3:",player3) print("player4:",player4) cv.pack() root.mainloop() 5, Character puzzle Running screenshot: Code and steps: 1. Divide the picture and put it under the code file 2. Defining constants and loading images from tkinter import* from tkinter.messagebox import * import random root = Tk('Jigsaw puzzle') root.title('Jigsaw puzzle') Pics=[] for i in range(9): filename=str(i)+".gif" Pics.append(PhotoImage(file=filename)) #Define constants #canvas size WIDTH=700 HEIGHT=623 3. Image block (collage) class Each image block (collage) is a Square object with draw function, so this collage picture can be drawn to Canvas. The orderID attribute is the number of each image block (collage). #Image block side length IMAGE_WIDTH=WIDTH//3 IMAGE_HEIGHT=HEIGHT//3 #Chessboard side length ROWS=3 COLS=3 steps=0 #Mobile steps board=[[0,1,2],[3,4,5],[6,7,8]] #Save block list #Image block class class Square: def __init__(self,orderID): self.orderID=orderID def draw(self,canvas,board_pos): img=Pics[self.orderID] canvas.create_image(board_pos,image=img) 4. Initialize game random.shuffle(board) can only scramble the two-dimensional list by lines, so it uses one-dimensional list to realize the function of scrambling image blocks, and then generates corresponding image blocks (collages) into the board list according to the number. #Initialize the platter) 5. Draw the elements of the game interface There are also various elements in the game interface, such as black boxes, #Draw the elements of the game interface)))` 6. Mouse events Convert the click position to the chessboard coordinate on the puzzle board. If you click the empty position, all image blocks will not move; otherwise, check whether there are empty positions on the top, bottom, left and right of the current image block that you click in turn, and if so, move the current image block.")` 7. Judge win or lose Determines whether the number of the collage is in order. If not, returns False. def win(): for i in range(ROWS): for j in range(COLS): if board[i][j] is not None and board[i][j].orderID!=i*ROWS+j: return False return True 8. Reset game def callBack2(): print("Restart") play_game() cv.delete('all') drawBoard(cv) 9. Window design) Full code: from tkinter import* from tkinter.messagebox import * import random root = Tk('Jigsaw puzzle') root.title('Jigsaw puzzle') Pics=[] for i in range(9): filename=str(i)+".gif" Pics.append(PhotoImage(file=filename)) WIDTH=700 HEIGHT=623 IMAGE_WIDTH=WIDTH//3 IMAGE_HEIGHT=HEIGHT//3 ROWS=3 COLS=3 steps=0 board=[[0,1,2],[3,4,5],[6,7,8]] class Square: def __init__(self,orderID): self.orderID=orderID def draw(self,canvas,board_pos): img=Pics[self.orderID] canvas.create_image(board_pos,image=img)) def play_game(): global steps steps=0 init_board()") def win(): for i in range(ROWS): for j in range(COLS): if board[i][j] is not None and board[i][j].orderID!=i*ROWS+j: return False return True def callBack2(): print("Restart") play_game() cv.delete('all') drawBoard(cv)) root.mainloop() Impressions: in this python game development, we mainly know that python's multi-faceted applications, the application of tkinter's package and random's application are the most common and can be better developed. When using pictures, it's better to put pictures and codes in the same folder, or the path teacher will make mistakes. Of course, if the picture path can be right, it's best It's better to put them together for safety.
https://programmer.group/about-the-development-of-python-games.html
CC-MAIN-2020-16
refinedweb
2,443
57.27
First I'd like to say how much I appreciate these forums. They have helped me before, but this is the first time I've posted a question myself. My problem is that I use a different IDE than my teacher does (probably not the best idea) so I want the program to compile and run properly in his IDE when he tests it. I use CodeBlocks (although I want to use NetBeans but it's too difficult to set up) and he uses Dev-C++. So I finished my project in CodeBlocks yesterday and it was working fine. I opened it up in Dev-C++ to test it. I compiled it with Dev-C++ and it compiles, but the program runs differently (incorrectly). The program is a game sort of like tic-tac-toe. Each player takes a turn at placing their "stones". When run from CodeBlocks the game is won when there are 5 of the same "stones" in a row. When run from Dev-C++ the game is won after the first turn, no matter what. I've looked at my code and can't figure out the problem (since it works fine in CodeBlocks!) so I was hoping someone with more experience could help me understand what is going on. Thanks. The program is below. #include <iostream> #include <iomanip> #include <stdlib.h> using namespace std; int n, type, tries; int getPadding(int n) { int max(n*n), padding(0); while(max > 0) { padding++; max /= 10; } return (padding+1); } bool displayBoard(int board[ ][50], int n) { bool error = 0; int padding = getPadding(n); cout << endl; for(int i=0; i<n; i++) { for(int j=0; j<n; j++) { switch (board[i][j]) { case 0: cout << setw(padding) << n*i+j+1; break; case 1: cout << setw(padding) << "B"; break; case 2: cout << setw(padding) << "W"; break; default: error = 1; } } cout << endl; } cout << endl; for(int k=(n*getPadding(n)); k>0; k--) { cout << "-"; } return error; } bool checkBoard(int board[ ][50], int n, int type) { int counterJ(0), counterI(0), counterD(0); for(int i=0; i<n; i++) { for(int j=0; j<n; j++) { if(board[i][j] == type) { counterJ++; } else { counterJ = 0; } if(counterJ == 5) { return 1; } } } for(int j=0; j<n; j++) { for(int i=0; i<n; i++) { if(board[i][j] == type) { counterI++; } else { counterI = 0; } if(counterI == 5) { return 1; } } } } int inputN() { int input; cout << "Please input a value for n: "; cin >> input; while (cin.fail() || input < 10 || input > 50) // does not work for inputs such as 15sdak etc. { cin.clear(); cin.ignore(1000, '\n'); cout << "Sorry, n must be an integer between 10 and 50.\nPlease input an" << " acceptable value: "; cin >> input; } return input; } int inputPlace(int n) { int input; cin >> input; while (cin.fail() || input < 1 || input > (n*n)) // does not work for inputs such as 15sdak etc. { tries += 1; if(tries >= 10) { return -1; } cin.clear(); cin.ignore(1000, '\n'); cout << "\nYour choice must be an integer between 1 and " << (n*n) << ".\nYou have " << (10-tries) << " tries remaining.\n" << "Please input an acceptable value: "; cin >> input; } return input; } void updatePlace(int board[ ][50], int place, int input, int type) { bool error(0); while (true) { if(error) { place = inputPlace(input); if(place == -1) { return; } } int i = (place - 1)/input; int j = place - (i * input) - 1; if(board[i][j] != 0) { tries += 1; if(tries >= 10) { return; } error = 1; cout << "\nThis space is occupied!\nYou have " << (10-tries) << " tries remaining.\nPlease insert an acceptable value: "; } else { board[i][j] = type; return; } } } main() { bool error(0); cout << "\n\n\n Welcome to the game of GOBANG!" << "\n ------------------------------\n\n" << " Player 2 places the white stones. " << "Player 1 places the black stones.\n\n" << " To start, you must define the size of the board.\n" << "\n The board is a square with the dimensions n * n." << "\n\n\n"; while (true) { int type(1), input(0), place; int arg[50][50]= {0}; input = inputN(); do { switch (type) { case 1: type = 2; break; case 2: type = 1; break; default: error = 1; } system("CLS"); cout << "Player " << type << "'s turn:\n\n"; displayBoard(arg, input); cout << "\n\nWhere would you like to place your stone?: "; place = inputPlace(input); if(place != -1) { updatePlace(arg, place, input, type); } tries = 0; } while(checkBoard(arg, input, type) == 0); system("CLS"); cout << "The winning board!:\n\n"; displayBoard(arg, input); cout << "\n\nCongradulations! Player " << type << " won!\n\n" << "If you would like to play again, enter a new value for n.\n"; } }
https://www.daniweb.com/programming/software-development/threads/326331/program-runs-differently-between-compilers
CC-MAIN-2016-36
refinedweb
754
71.14
In Python, a module lets you organize functions, classes, and constants into a common namespace. For example, os is a module and os.getcwd is a function inside that module, and you access the contents of a module by looking up Python attributes on it. An HDAModule is a Python module that is associated with a particular digital asset type. It lets you store a library of Python code in one location in your asset, and you can invoke that code from parameters, event handlers, and callbacks inside that asset. The module’s source code is stored in the Python Module section of the Scripts tab in the Type Properties dialog. For example, suppose the digit asset is an object named gear and the Python Module section contains the following: def position(): return (hou.frame() * 1.2, 0.0, 3.2) def onButtonPress(): print "you pressed the button" def onLoaded(): print "onLoaded section running" Unlike regular Python modules, which you access by name, you access a digital asset’s Python module by calling hou.NodeType.hdaModule() on its node type. For example, suppose you created an object-level digital asset named gear and put the above code in its Python Module section. You could then access the contents of the Python module as follows: >>> node_type = hou.nodeType(hou.objNodeTypeCategory(), "gear") >>> node_type.hdaModule().position() (1.2, 0.0, 3.2) >>> node_type.hdaModule().onButtonPress() you pressed the button One use for the Python module is drive parameter expressions on nodes inside the digital asset. For example, suppose /obj/gear1 is an instance of the digital asset and /obj/gear1/geo1 is a node inside the asset. You could put the following inside geo1's tx parameter expression: hou.node("..").type().hdaModule().position()[0] For convenience, you can also access the module from a node instance of the digital asset using hou.Node.hdaModule(). So, you could simplify the above expression to: hou.node("..").hdaModule().position()[0] And since you don’t need to use the hou. prefix inside expressions, you could further simplify it to: node("..").hdaModule().position()[0] The following example shows how you might run code in the module from the Callback Script field of a button parameter: hou.pwd().hdaModule().onButtonPress() In an event handler script, such as On Loaded, you can use the kwargs dict to access the node type: kwargs["type"].hdaModule().onLoaded() Note that Houdini creates a local kwargs dict that’s accessible from the Python Module, too. It contains one entry with the key "type", to give you access to the hou.NodeType defined by the digital asset. If you find that a digital asset has too much Python code to store in one module, it’s possible to create submodules. For example, if you want to create a submodule named bar, put its source code in a new digital asset section (say, "bar_PythonModule"). Then, from the Python Module section, you can write the following: import toolutils bar = toolutils.createModuleFromSection("bar", kwargs["type"], "bar_PythonModule") Note New to Houdini 18.0, the createModuleFromSection function expects the code in the HDA section to have Python 3 style print statements. In general this means that print statements in the HDA section must have the arguments enclosed in parentheses. For example print("Hello world!") instead of print "Hello world!" bar now appears as a submodule of the main module. If, for example, the bar_PythonModule section contains: def foo(): return 3.2 then you could write the following from a parameter on the digital asset node: pwd().hdaModule().bar.foo() Note that the Python Module code is stored in a section of the digital asset named "PythonModule". For example, you can get a string containing that source code using node_type.definition().sections()["PythonModule"].contents().
https://www.sidefx.com/docs/houdini/hom/hou/HDAModule.html
CC-MAIN-2022-21
refinedweb
624
56.05
[This article was originally posted on June 4, 1989. I have altered the presentation slightly for this web page.] From: scs@adam.pika.mit.edu (Steve Summit) Newsgroups: comp.unix.wizards,comp.lang.c Subject: Re: Needed: A (Portable) way of setting up the arg stack Keywords: 1/varargs, callg Message-ID: <11830@bloom-beacon.MIT.EDU> Date: 4 Jun 89 16:43:52 GMT References: <708@mitisft.Convergent.COM> <32208@apple.Apple.COM> <10354@smoke.BRL.MIL> In article <708@mitisft.Convergent.COM> Gregory Kemnitz writes: >I need to know how (or if) *NIX (System V.3) has the ability to let >a stack of arguments be set for a function before it is called. I >have several hundred pointers to functions which are called from one >place, and each function has different numbers of arguments. A nice problem. Doug Gwyn's suggestion is the right one, for maximum portability, but constrains the form of the called subroutines and also any calls that do not go through the "inverse varargs mechanism." (That is, you can't really call the subroutines in question without building the little argument vector.) For transparency (at some expense in portability) I use a routine I call "callg," named after the VAX instruction of the same name. (This is equivalent to Peter Desnoyers' "va_call" routine; in retrospect, I like his name better.) va_call can be implemented in one line of assembly language on the VAX; it typically requires ten or twenty lines on other machines, to copy the arguments from the vector to the real stack (or wherever arguments are really passed). I have implementations for the PDP11, NS32000, 68000, and 80x86. (This is a machine specific problem, not an operating system specific problem.) A routine such as va_call must be written in assembly language; it is one of the handful of functions I know of that cannot possibly be written in C. Not all machines use a stack; some use register passing or other conventions. For maximum portability, then, the interface to a routine like va_call should allow the type of each argument to be explicitly specified, as well as hiding the details of the argument vector construction. I have been contemplating an interface similar to that illustrated by the following example: #include "varargs2.h" extern printf(); main() { va_stack(stack, 10); /* declare vector which holds up to 10 args */ va_push(stack, "%d %f %s\n", char *); va_push(stack, 12, int); va_push(stack, 3.14, double); va_push(stack, "Hello, world!", char *); va_call(printf, stack); } Note that this calls the standard printf; printf need take no special precautions, and indeed cannot necessarily tell that it has not been called normally. (This is what I meant by "transparency.") On a "conventional," stack-based machine, va_stack would declare an array of 10 ints (assuming that int is the machine's natural word size) and va_push would copy words to it using pointer manipulations analogous to those used by the va_arg macro in the current varargs and stdarg implementations. (Note that "declare vector which holds up to 10 args" is therefore misleading; the vector holds up to 10 words, and it is up to the programmer to leave enough slop for multi-word types such as long and double. The distinction between a "word" and an "argument" is the one that always comes up when someone suggests supplying a va_nargs() macro; let's not start that discussion again.) For a register-passing machine, the choice of registers may depend on the types of the arguments. For this reason, the interface must allow the type information to be retained in the argument vector for inspection by the va_call routine. This would be easier to be implement if manifest constants were used, instead of C type names: va_push(stack, 12, VA_INT); va_push(stack, 3.14, VA_DOUBLE); va_push(stack, "Hello, world!", VA_POINTER); Since it would be tricky to "switch" on these constants inside the va_push macro to decide how many words of the vector to set aside, separate push macros might be preferable: va_push_int(stack, 12); va_push_double(stack, 3.14); va_push_pointer(stack, "Hello, world!"); (This option has the additional advantage over the single va_push in that it does not require that the second macro argument be of variable type.) There is still a major difficulty here, however, in that one cannot assume the existence of a single kind of pointer. For the "worst" machines, the full generality of C type names (as in the first example) would probably be required. Unfortunately, to do everything with type names you might want to do, you have to handle them specially in the compiler. (On the other hand, the machines that would have trouble with va_push are probably the same ones that already have to have the varargs or stdarg mechanisms recognized by the compiler.) Lest you think that va_call, if implementable, solves the whole problem, don't let your breath out yet: what should the return value be? In the most general case, the routines being indirectly called might return different types. The return value of va_call would not, so to speak, be representable in closed form. This last wrinkle (variable return type on top of variable arguments passed) is at the heart of a C interpreter that allows intermixing of interpreted and compiled code. I know how I solved it; I'd be curious to know how Saber C solves it. (I solved it with two more assembly language routines, also unimplementable in C. A better solution, to half of the problem, anyway, would be to to provide a third argument, a union pointer of some kind, to va_call for storing the return value.) I just whipped together an implementation of the first example, which I have appended for your edification and amusement, as long as you have a VAX. Steve SummitSteve Summit [The "implementation" consists of three files: pf.c varargs2.h va_call.s]
http://c-faq.com/varargs/invvarargs.19890604.html
CC-MAIN-2016-36
refinedweb
982
60.45
Brid. The binary-based solutions described in this page are appropriate for software beyond your control. Gradual migration to SLF4J from Jakarta Commons Logging (JCL) jcl-over-slf4j.jar To ease migration to SLF4J from JCL, SLF4J distributions include the jar file jcl-over-slf4j.jar. This jar file is intended as a drop-in replacement for JCL version 1.1.1.-over-slf4j.jar. Subsequently, the selection of the underlying logging framework will be done by SLF4J instead of JCL but without the class loader headaches plaguing JCL. The underlying logging framework can be any of the frameworks supported by SLF4J. Often times, replacing commons-logging.jar with jcl-over-slf4j.jar will immediately and permanently solve class loader issues related to commons logging.-over-slf4j.jar should not be confused with slf4j-jcl.jar JCL-over-SLF4J, i.e. jcl. log4j-over-slf4j SLF4J ship with a module called log4j-over-slf4j. It allows log4j users to migrate existing applications to SLF4J without changing a single line of code but simply by replacing the log4j.jar file with log4j-over-slf4j.jar, as described below. How does it work? The log4j-over-slf4j module contains replacements of most widely used log4j classes, namely org.apache.log4j.Category, org.apache.log4j.Logger, org.apache.log4j.Priority, org.apache.log4j.Level, org.apache.log4j.MDC, and org.apache.log4j.BasicConfigurator. These replacement classes redirect all work to their corresponding SLF4J classes. To use log4j-over-slf4j in your own application, the first step is to locate and then to replace log4j.jar with log4j-over-slf4j.jar. Note that you still need an SLF4J binding and its dependencies for log4j-over-slf4j to work properly. In most situations, replacing a jar file is all it takes in order to migrate from log4j to SLF4J. Note that as a result of this migration, log4j configuration files will no longer be picked up. If you need to migrate your log4j.properties file to logback, the log4j translator might be of help. For configuring logback, please refer to its manual. When does it not work? The log4j-over-slf4j module will not work when the application calls log4j components that are not present in the bridge. For example, when application code directly references log4j appenders, filters or the PropertyConfigurator, then log4j-over-slf4j would be an insufficient replacement for log4j. However, when log4j is configured through a configuration file, be it log4j.properties or log4j.xml, the log4j-over-slf4j module should just work fine. What about the overhead? There overhead of using log4j-over-slf4j instead of log4j directly is relatively small. Given that log4j-over-slf4j immediately delegates all work to SLF4J, the CPU overhead should be negligible, in the order of a few nanoseconds. There is a memory overhead corresponding to an entry in a hashmap per logger, which should be usually acceptable even for very large applications consisting of several thousand loggers. Moreover, if you choose logback as your underlying logging system, and given that logback is both much faster and more memory-efficient than log4j, the gains made by using logback should compensate for the overhead of using log4j-over-slf4j instead of log4j directly. log4j-over-slf4j.jar and slf4j-log4j12.jar cannot be present simultaneously The presence of slf4j-log4j12.jar, that is the log4j binding for SLF4J, will force all SLF4J calls to be delegated to log4j. The presence of log4j-over-slf4j.jar will in turn delegate all log4j API calls to their SLF4J equivalents. If both are present simultaneously, slf4j calls will be delegated to log4j, and log4j calls redirected to SLF4j, resulting in an endless loop. jul-to-slf4j bridge The jul-to-slf4j module includes a java.util.logging (jul) handler, namely SLF4JBridgeHandler, which routes all incoming jul records to the SLF4j API. Please see SLF4JBridgeHandler javadocs for usage instructions. Note on performance Contrary to other bridging modules, namely jcl-over-slf4j and log4j-over-slf4j, which reimplement JCL and respectively log4j, the jul-to-slf4j module does not reimplement the java.util.logging because packages under the java.* namespace cannot be replaced. Instead, jul-to-slf4j translates LogRecord objects into their SLF4J equivalent. Please note this translation process incurs the cost of constructing a LogRecord instance regardless of whether the SLF4J logger is disabled for the given level or nor. Consequently, j.u.l. to SLF4J translation can seriously increase the cost of disabled logging statements (60 fold or 6000%) and measurably impact the performance of enabled log statements (20% overall increase). As of logback version 0.9.25, it is possible to completely eliminate the 60 fold translation overhead for disabled log statements with the help of LevelChangePropagator. If you are concerned about application performance, then use of SLF4JBridgeHandler is appropriate only if any one of the following two conditions is true: - few j.u.l. logging statements are in play LevelChangePropagatorhas been installed jul-to-slf4j.jar and slf4j-jdk14.jar cannot be present simultaneously The presence of slf4j-jdk14.jar, that is the jul binding for SLF4J, will force SLF4J calls to be delegated to jul. On the other hand, the presence of jul-to-slf4j.jar, plus the installation of SLF4JBridgeHandler, by invoking "SLF4JBridgeHandler.install()" will route jul records to SLF4J. Thus, if both jar are present simultaneously (and SLF4JBridgeHandler is installed), slf4j calls will be delegated to jul and jul records will be routed to SLF4J, resulting in an endless loop.
https://www.slf4j.org/legacy.html
CC-MAIN-2017-22
refinedweb
916
50.43
Recovery Toolbox For Outlook ##VERIFIED## Crack Keygen Free Recovery Toolbox For Outlook Crack Keygen Free How to remove ms outlook setup password or crack? Dec 11, 2017 · how to remove ms outlook setup password or crack v 5.11 crack keygen debfree download. Free Download Please note that we take no responsibility for the settings and the settings for the screenshots above. This guide will help you uninstall ms outlook setup password or crack clean. Download and install the removal tool.To uninstall. The primary difference is that you can’t uninstall ms outlook setup password or crack clean. For a virtual machine instance, you’ll have to run the removal tool on the host operating system. Download ms outlook setup password or crack v4.0.0.0 cracked keygen full version for windows at ba9c2.com. Download free Ms outlook Setup Password or Crack – 123SoftwareCrack hope to receive your. Project 2013 | Clean Up your Computer,. Cracked Office 2013. Free. Microsoft Office Setup Password or Crack Free Download (Office 2017). Download and install the removal tool.No DVD-ROMs used, no pesky key cards or passwords to bypass,. You can also run the removal tool on your. If you have an Office setup serial number,. Microsoft Office Setup Password or Crack. Office Setup Password or Crack. Best Review. Microsoft Customer Service: 800-547-7247.. Microsoft Account Password Fix. Download the removal tool from the link provided below. Click Download to download and extract the tool. Follow the instructions provided in the tool to complete the task. is the best and free software that has all the tools for you to fix you may be facing problems with. By downloading and using this program it will help you to repair. There’s an Outlook setup crack key generator that you can use to program a crack key for your. 12/20/2018. MS Outlook Customer Service : 800-547-7247. If you are using MS Outlook and you forgot your password or. Services: Outlook Support Desk or office@microsoft.com. Account Password Fix. Microsoft Customer Service: 800-547-7247. The free tool will help you to fix your problem with. Office Setup Password or Crack Free Download (Office 2017). You can use the steps below to remove ms outlook setup password or crack from your computer. 4 / 5. ‘8542787’ rated by 68 users. ‘. Other Microsoft Office Programs (Office 2010). If you want to be able to download the full version The image is posted here by the author. The serial, product key, recovery key and support number which you download in this website is antispyware_2015-002-1-for-windows-serial-key-2017-106-, Recovery Toolbox For Outlook 2017 Crack is a. Thanks to the recovery toolbox key, these files can be easily. When you are using such kind of the software, you can download it from free registration page. the best way to create a free email account using outlook. Recuperare Toolbox Outlook Password 1-50. 10/2013 -. 1:21pm outlook 2013 registration code renly 2:06pm.Q: Computing bounds on a confidence interval I have written a very simple program that generates a random number from a normal distribution. import random def generate_test_tuple(): low = 2.5 high = 3.5 lower = (low-5*random.randint(0,1))*random.randint(0,1) upper = (high+5*random.randint(0,1))*random.randint(0,1) return lower, upper print generate_test_tuple() Now, I have two questions I am supposed to compute the following bounds: bounds = [2.5,3.5] bounds = [2.5,3.5] But how? All I see from this syntax is that: bounds = (low-5*random.randint(0,1))*random.randint(0,1) What I see is that it is just computing a random number from the lower bound to the upper bound. Could someone please show me the whole process of how we get the bounds from this? What I think I am missing in my understanding is that I am missing the upper bound. A: You seem to be misunderstanding the operation of randint and random.int. randint returns an integer in a given range, while random.int returns an integer from the range specified, and both methods return a random integer. In your case, you can use randint and random.random instead: def generate_test_tuple(): 6d1f23a050
https://qeezi.com/advert/recovery-toolbox-for-outlook-verified-crack-keygen-free/
CC-MAIN-2022-40
refinedweb
714
69.38
Note that this is just a rough description of the process I used. The principle should work in general, but you might face some additional difficulties that I didn't notice or haven't noticed yet. And don't forget to make a backup of your old data before doing anything described here. The whole process is remarkably simple once you know what you have to do. Django helps a lot in this regard thanks to its fixture-system, that is luckily DBMS oblivious. Basically the whole move works like this: - Dump the database into a fixture using python manage.py dumpdata --indent=4 > dump.js - Change your settings module to point to a new PostgreSQL database (which you should create before the next step) - Run python manage.py syncdbto create the tables within your PostgreSQL database - Clean this database from the initial values with python manage.py sqlflush | psql yourdatabase - Run python manage.py loaddata dump.js - Make sure that the sequences in the new database are up to date For step 5 I wrote a small script that goes through each sequence in the database (since this database is used for one Django site and one site only this works :P ) and resets its value to the highest id of the associated table: import os, sys sys.path.insert(0, '../') os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings' from django import db c = db.connection.cursor() try: c.execute(r"""SELECT c.relname FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE c.relkind IN ('S','') AND n.nspname NOT IN ('pg_catalog', 'pg_toast') AND pg_catalog.pg_table_is_visible(c.oid) """) to_update = [] for row in c: seq_name = row[0] rel_name = seq_name.split("_id_seq")[0] to_update.append((seq_name, rel_name,)) for row in to_update: c.execute(r"SELECT setval('%s', max(id)) FROM %s"%row) finally: c.close() Another problem that has to be solved before you do step 5 is that dumpdata has a small problem with BooleanFields in the models. If you're using any of those (you or any contrib module you're using), you have to do some cleaning up in the JSON dump. The problem is that the values of these BooleanFields are dumped as 0 or 1 instead of false or true, which confuses loaddata to no end. To make it easy for any processing script, I told dumpdata to nicely format the dump, which you can do with the --indent option, which comes in handy when trying to fix this small problem. So I wrote a little Perl script (don't hurt me, please) that just goes over the dump and corrects it: while(<>){ s/is_active": 0/is_active": false/; s/is_active": 1/is_active": true/; s/is_staff": 0/is_staff": false/; s/is_staff": 1/is_staff": true/; s/is_published": 0/is_published": false/; s/is_published": 1/is_published": true/; s/is_superuser": 0/is_superuser": false/; s/is_superuser": 1/is_superuser": true/; s/enable_comments": 0/enable_comments": false/; s/enable_comments": 1/enable_comments": true/; s/registration_required": 0/registration_required": false/; s/registration_required": 1/registration_required": true/; print $_; } Naturally you'll have to correct the fieldnames for your own site :-) So basically before loading the dump again, run something like perl convert.pl dump.json > dump2.json && mv dump2.json dump.json. As you can see, the process is really pretty straight forward, and, so far, it seems to work just fine :-)
http://zerokspot.com/weblog/2008/07/26/move-a-django-site-to-postgresql-check/
crawl-002
refinedweb
557
63.7
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project. On Mon, Nov 01, 2004 at 09:19:33PM +1300, Danny Smith wrote: > Hello, > > Two problems I ran into with testsuite/gcc.dg/compat/struct-layout-1_generate.c > > 1) ffsll() is not available on all targets. We could supply a my_ffsll but > __builtin_ffsll does the job too. This is IMO not ok. gcc.dg/compat should be buildable even with older compilers, so that one can compare compatibility easily. It is not as bad because it is in the generator only, but still you from time to time want to just pack up the whole gcc.dg/compat tree and use it e.g. in GCC 3.3 or earlier. Either use: #if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4) mi = __builtin_ffsll (...) - 1; #else mi = ffsll (...) - 1; #endif or write my_ffsll. > 2) When trying to track individual failures using #define DBG = 1, stdout > isn't necessarily flushed when abort is called, so we lose the test > identifiers. This is ok with me. > 2004-11-01 Danny Smith <dannysmith@users.sourceforge.net> > > * gcc.dg/compat/struct-layout-1_generate.c (main): Generate a > call to fflush(stdout) before abort. > (generate_fields): Use __builtin_ffsll. Jakub
http://gcc.gnu.org/ml/gcc-patches/2004-11/msg00014.html
CC-MAIN-2019-35
refinedweb
208
62.04
As you learned in the previous chapter, Django models encapsulate data through classes, enforce data validation, are used to interact with a Django project's relational database and have a myriad of options to guarantee and customize how data operates in Django projects. In this chapter we'll build on the previous Django model concepts and learn about a Django model queries and managers. We'll start with an in-depth look at Django model CRUD (Create-Read-Update-Delete) operations, including: single, multiple and relationship queries, covering their speed and efficiency implications. Next, you'll learn about the many SQL query variations supported by Django models, including: field lookups to produce SQL WHERE statements; models methods to produce SQL statements like DISTINCT and ORDER; as well as query expressions to execute SQL aggregation operations, database functions and sub-queries. Next, you'll learn how to create raw (open-ended) SQL queries when Django's built-in SQL facilities prove to be insufficient. Finally, you'll learn how to create and configure custom model managers in Django models. CRUD single records in Django models Working with single records is one of the most common tasks you'll do with Django models. Next, I'll structure the following sections into the classical web application CRUD operations and describe the various techniques for each case so you can get a better grasp of what to use under different circumstances. Note that although the following sections concentrate on the actual CRUD operation and its behaviors, sometimes I'll inevitably introduce more advanced query concepts in the examples (e.g. field lookups) which are described in detail in later sections of the chapter. Create a single record with save() or create() To create a single record on a Django model, you just need to make an instance of a model and invoke the save() method on it. Listing 8-1 illustrates the process to create a single record for a model called Store. Tip Consult the book's accompanying source code to run the exercises, in order to reduce typing and automatically access test data. Listing 8-1. Create a single record with model save() method # Import Django model class from coffeehouse.stores.models import Store # Create a model Store instance store_corporate = Store(name='Corporate',address='624 Broadway',state='CA',email='corporate@coffeehouse.com') # Assign attribute value to instance with Python dotted notation store_corporate.city = 'San Diego' # Invoke the save() method to create the record store_corporate.save() # If successful, record reference has id store_corporate.id As you can see in listing 8-1, you can declare all the instance attributes in a single step or you can use Python's dotted notation to assign attribute values one by one on the reference itself. Once the instance is ready, call the save() method on it to create the record in the database. There are two important behaviors to be aware of when you invoke save() method: - By default, all Django models are assigned an auto-incrementing primary key named id, created when you initiate a model's database table -- see the previous chapter section on 'Django models and the migrations workflow' for more details. This means the database assigns an idvalue to a record -- unless you explicitly provide an idvalue to the instance -- that gets passed back to the reference. - The creation of a record is rejected if it violates any database or Django validation rule created by the Django model. This means that if a new instance doesn't comply with any of these validation rules, save()generates an error. See the previous chapter section on 'Django models data types' for more details on rule validation. These are the two most important points when you use the save() method to create a record. For the full set of options and subtleties associated with a Django model save() method, see the previous chapter table 7-3 and the section on 'Model methods'. After a successful call to the save() method in listing 8-1, you can see the object reference is assigned the id attribute -- created by the database -- which serves to directly link it to a database record that can later be updated and/or deleted. The create() method offers a shorter route alternative to create a record. Listing 8-2 illustrates the equivalent record creation in listing 8-1 using the create() method. Listing 8-2. Create a single record with create() method # Import Django model class from coffeehouse.stores.models import Store # Create a model Store instance which is saved automatically store_corporate = Store.objects.create(name='Corporate',address='624 Broadway', city='San Diego',state='CA',email='corporate@coffeehouse') # If successful, record reference has id store_corporate.id You can see in listing 8-2, the create() method is invoked on a Django model class through the model's default objects model manager. The create() method accepts arguments that represent the model instance field values. The execution of create() returns an object reference to the created record including an id value just like the save() method. Behind the scenes, the create() method actually uses the same save() method, but it uses the model manager to allow the creation of a record in a single line. Read a single record with get() or get_or_create() To read a single database record you can use the get() method -- which is part of a model's default objects model manager -- and which accepts any model field to qualify a record. Listing 8-3 illustrates a basic example of the get() Django model method. Listing 8-3 Read model record with get() method # Import Django model class from coffeehouse.stores.models import Store # Get the store with the name "Downtown" or equivalent SQL: 'SELECT....WHERE name = "Downtown" downtown_store = Store.objects.get(name="Downtown") # Define uptown_email for the query uptown_email = "uptown@coffeehouse.com" # Get the store with the email value uptown_email # or equivalent SQL: 'SELECT....WHERE email = "uptown@coffeehouse.com"' uptown_email_store = Store.objects.get(email=uptown_email) # Once the get() method runs, you can access an object's attributes # either in logging statements, functions or templates downtown_store.address downtown_store.email # Note you can access the object without attributes. # If the Django model has a __str__/ method definition, the output is based on this method # If the Django model has no __str__ method definition, the output is just <object> print(uptown_email_store) As you can see in listing 8-3, the get() method uses a Django model attribute as its argument to retrieve a specific record. The first example gets the Store record with name=Downtown and the second example gets the Store record with assigned to a variable, you can access its contents or attributes using Python's dotted notation. Tip In addition to single fields -- name="Downtown" or email="uptown@..." -- the get() method also accepts multiple fields to produce an and query (e.g. get(email="uptown@...",name="Downtown") to get a record were both email and name match). In addition, Django also offerrs field lookup to create finer single record queries (e.g. get(name__contains="Downtown") to produce a sub-string query). See the later section in the chapter on queries classified by SQL keyword. It's that simple to use a Django model's get() method. However, the get() method has some behaviors you should be aware of: - With get()the query has to match one and only one record. If there are no matching records you will get a <model>.DoesNotExisterror. - If there are multiple matching records you will get a MultipleObjectsReturnederror. get()calls hit the database immediately and every time. This means there's no caching on Django's part for identical or multiple calls. Knowing these get() limitations, let's explore how to tackle the first scenario that involves a record that doesn't exist. A common occurrence when attempting to read a single record that doesn't exist, is to get it and if it doesn't exist just create it. Listing 8-4 illustrates how to use the get_or_create() method for this purpose. Listing 8-4 Read or create model record with get_or_create() method # Import Django model class from coffeehouse.items.models import Menu # Get or create a menu instance with name="Breakfast" menu_target, created = Menu.objects.get_or_create(name="Breakfast") As you can see in listing 8-4, the get_or_create() method -- also part of a model's default objects model manager -- is invoked on a Django model class using a model's attributes as its arguments to get or create a record in one step. The get_or_create() method returns a pair of results, the model instance -- whether created or read -- as well as a boolean indicating whether a model instance was created or read (i.e. True if created, False if read). The get_or_create() method is a shortcut that uses both the get() and the create() methods -- the last of which uses the save() method behind the scenes, as you learned in the previous section. The difference being, the get_or_create() method automatically handles the error condition when get() finds no matches. Listing 8-5 illustrates how the get_or_create() method functions behind the scenes, which you can also use if you prefer to handle get() errors method explicitly. Listing 8-5 Replicate get_or_create() method with explicit try/except block and save method from django.core.exceptions import ObjectDoesNotExist from coffeehouse.items.models import Menu try: menu_target = Menu.objects.get(name="Dinner") # If get() throws an error you need to handle it. # You can use either the generic ObjectDoesNotExist or # <model>.DoesNotExist which inherits from # django.core.exceptions.ObjectDoesNotExist, so you can target multiple # DoesNotExist exceptions except Menu.DoesNotExist: # or the generic "except ObjectDoesNotExist:" menu_target = Menu(name="Dinner") menu_target.save() As you can see in listing 8-5, it's necessary to write more code (e.g. error handling, get and save calls) when you know there's a possibility a record doesn't exist and you want to create it anyways. So the get_or_create() method becomes a helpful shortcut in this scenario. Now let's take a look at the second get() limitation which involves getting multiple records on a query. By design, the get() method throws a MultipleObjectsReturned error if more than one record matches a query. This behavior is an actual feature, because there are circumstances when you want to ensure a query only returns one record and be informed otherwise (e.g. a query for a user or product where duplicates are considered erroneous). If there's a possibility for a query to return one or multiple records, then you'll need to forgo the use of get() method and use either a model manager's filter() or exclude() methods. Both the filter() or exclude() methods produce a multi-record data structure called a QuerySet, which can be reduced to a single record with an additional QuerySet method (e.g. Item.objects.filter(name__contains='Salad').first() to get the first Item record whose name contains the Salad sub-string). Since a Django model's filter() and exclude() methods are designed for multiple record queries, these methods along with QuerySet behaviors are described in detail in the later section on CRUD operations for multiple records. Additional QuerySet methods like first() are also described in the later section on model queries classified by SQL keyword. Update a single record with save(), update(), update_or_create() or refresh_from_db() If you already have a reference to a model record, an update is as simple as updating its attributes using Python's dotted notation and calling the save() method on it. Listing 8-6 illustrates this process. Listing 8-6. Update model record with the save() method # Import Django model class from coffeehouse.stores.models import Store # Get the store with the name "Downtown" or equivalent SQL: # 'SELECT....WHERE name = "Downtown" downtown_store = Store.objects.get(name="Downtown") # Update the name value downtown_store.name = "Downtown (Madison)" # Call save() with the update_fields arg and a list of record fields to update selectively downtown_store.save(update_fields=['name']) # Or you can call save() without any argument and all record fields are updated downtown_store.save() In listing 8-6, you can see the save() method is called in two ways. You can use the update_fields argument with a list of fields to update certain fields and get a performance boost in large models. Or the other alternative is to use save() without any argument, in which case Django updates all fields. If you don't yet have a reference to the record to update, it's slightly inefficient to first get it (i.e. issue a SELECT query) and then update it with the save() method. In addition, doing the update process in separate steps can lead to race conditions. For example, if another user fetches the same data at the same time and also does an update, you'll both race to save it, but whose update is definitive and whose is overwritten ? Because no party is aware the other is working on the same data, you need a way to indicate -- technically known as lock or isolate -- the data to avoid race conditions. For such cases you can use the update() method -- part of a model's default objects model manager -- which performs an update in a single operation and guarantees there are no race conditions. Listing 8-7 illustrates this process. Listing 8-7. Update model record with the update() method from coffeehouse.stores.models import Store Store.objects.filter(id=1).update(name="Downtown (Madison)") from coffeehouse.items.models import Item from django.db.models import F Item.objects.filter(id=3).update(stock=F('stock') +100) The first example in listing 8-7 uses the update() method to update the Store record with id=1 and set its name to Downtown (Madison). The second example in listing 8-7 uses a Django F expression and the update() method to update the Item record with id=3 and set its stock value to the current stock value plus 100. For the moment, don't worry about Django F expressions -- they're described later on for more elaborate queries -- just realize Django F expressions allow you to reference model fields within a query -- as an SQL expression -- which is necessary in this case to perform the update in a single operation. Caution The update() method can update a field across multiple records if you're not careful. The update() method is preceded by the objects.filter() method which can return query results for multiple records. Notice in listing 8-7 the query uses the id field to define the query, ensuring that only a single record matches the query, because id is the table's primary key. If the query definition in objects.filter() uses a less strict look-up (e.g. a string) you can inadvertently update more records than you expect. Similar to the convenience get_or_create() method described in the previous section, Django also offers the convenience update_or_create() method. This method is helpful in cases where you want to perform an update and aren't sure if the record exists yet. Listing 8-8 illustrates this process. Listing 8-8 -- Update or create model record with the update_or_create() method # Import Django model class from coffeehouse.stores.models import Store values_to_update = {'email':'downtown@coffeehouse.com'} # Update for record with name='Downtown' and city='San Diego' is found, otherwise create record obj_store, created = Store.objects.update_or_create( name='Downtown',city='San Diego', defaults=values_to_update) The first thing that's done in listing 8-8 is create a dictionary with field-values to update. Next, you pass a query argument to update_or_create for a desired object (i.e. the one you wish to update or create), along with dictionary containing the field-values to update. For the case in listing 8-8, if there's already a Store record with name='Downtown' and city='San Diego' the record's values in values_to_update are updated, if there is no matching Store record a new Store record with name='Downtown', city='San Diego' along with the values in values_to_update. The update_or_create method returns an updated or created object, as well as a boolean value to indicate if the record was newly created or not. Note update_or_create only works on queries with single records. If there are multiple records that match the query in update_or_create() you'll get the error MultipleObjectsReturned just like the get() method. If you change a model record inadvertently, you can re-instate its data from the database with the refresh_from_db() method, as illustrated in listing 8-9. Listing 8-9.- Update model record from database with the refresh_from_db() method from coffeehouse.stores.models import Store store_corporate = Store.objects.get(id=1) store_corporate.name = 'Not sure about this name' # Update from db again store_corporate.refresh_from_db() # Model record name now reflects value in database again store_corporate.name # Multiple edits store_corporate.name = 'New store name' store_corporate.email = 'newemail@coffeehouse.com' store_corporate.address = 'To be confirmed' # Update from db again, but only address field # so store name and email remain with local values store_corporate.refresh_from_db(fields=['address']) As you can see in listing 8-9, after changing the name field value on a model record, you can call the refresh_from_db() method on the reference to update the model record as it's in the database. The second example in listing 8-9 uses the refresh_from_db() method with the fields argument, which tells Django to only update the model fields declared in the fields list, allowing any (local) edits made to other fields to remain unchanged. Delete a single record with delete() If you already have a reference to a record, deleting it is as simple as invoking the delete() method on it. Listing 8-10 illustrates this process. Listing 8-10. Delete model record with the delete() method # Import Django model class from coffeehouse.stores.models import Store # Get the store with the name "Downtown" or equivalent SQL: # 'SELECT....WHERE name = "Downtown" downtown_store = Store.objects.get(name="Downtown") # Call delete() to delete the record in the database downtown_store.delete() For cases where you don't yet have a reference to a record you want to delete, it can be slightly inefficient to first get it (i.e. issue a SELECT query) and then delete it with the delete() method. For such cases you can use the delete() method and append it to a query so everything is done in a single operation. Listing 8-11 illustrates this process. Listing 8-11. Delete model record with the delete() method on query from coffeehouse.items.models import Menu Menu.objects.filter(id=1).delete() Irrespective of the delete() method you use -- directly on reference or through the objects model manager -- a delete() method always returns a dictionary with the results of the delete operation. For example, if the delete operation in 8-11 is successful it returns (1, {'items.Menu': 1}) indicating one record of the items.Menu type was deleted. If the delete operation in 8-10 is successful, it returns (5, {'stores.Store_amenities': 4, 'stores.Store': 1}) indicating five overall records were deleted, four of the stores.Store_amenities type and one of the stores.Store -- in this case multiple records are deleted because stores.Store_amenities is a model relationship in the Store model. Caution The delete() method can delete multiple records if you're not careful. The delete() method is preceded by the objects.filter() method which can return query results with multiple records. Notice in listing 8-11 the query uses an id field to define the query, ensuring that only a single record matches the query, because id is a table's primary key. If the query definition in objects.filter() uses a less strict look-up (e.g. a string) you can inadvertently delete more records than you expect.
https://www.webforefront.com/django/singlemodelrecords.html
CC-MAIN-2021-31
refinedweb
3,297
54.42
>>IMAGE." Going back to DOS style... (Score:5, Funny) ... and comming full circle. Re: (Score:3, Funny) Scroll your screen to see the animation: \ - / | Re:Going back to DOS style... (Score:4, Informative) Uhm no. Because the DOS commands used '/' for indicating options as opposed to the '-' of the UNIX world. Re: (Score:2, Informative) which was based on CPM and VMS. Re:Going back to DOS style... (Score:5, Funny) Jeez, take a joke as it is, will you? Re: (Score:3, Interesting) '**' is easier to type than '^^'? It is, once you consider some people have dead keys. Typing "^^" can become wildly different depending on what OS you're using, and result in weird behavior. On some systems/applications, the two carets are printed at once and you're back to normal editing. On others, the first one is written, but the other remains in dead-key mode. I've seen systems where this would just print a single caret (possibly coupled with a beep). Then the only reliable way on those keyboard layouts to type that symbol is to press Re:It's all a joke (Score:5, Informative): foo\bar\baz Re: (Score:3, Informative) Yes but we are talking about a very, very slim minority here are we not? No offense intended but it seems like your whole point is hinging on the absolute minority of people out there. I can't think of any systems off hand that do this According to Wikipedia, the circumflex accent is "used in written Croatian, Esperanto, French, Frisian, Norwegian, Romanian, Slovak, Vietnamese, Romanized Japanese, Romanized Persian, Welsh, Portuguese, Italian, Afrikaans, Turkish and other languages". I don't know the keyboard layouts used by all of them, but I'd bet most (if not all) of them use a dead key for both the caret and the circumf Re:It's all a joke (Score:5, Insightful) "People with laptops" is a very, very tiny minority? Re:It's all a joke (Score:4, Insightful) :: is already used for static methods on classes... it would be harder to implement the differentiation of :: for namespaces and :: for static methods... especially if people started to use classes with the same name as a namespace (which is likely if all modules get their own namespace) I actually think that '\' is appealing for what it will be used for. The one thing I first though of was SomeModule\new_object::test_function(); Wouldn't it try to evaluate the '\n' as a \n new line? I suppose it will be out of the double quite string scope so it could be alright... could get messy if eval()-ing code, though. Re: (Score:2) Answ Re: (Score:2) Re: (Score:3, Insightful) Re: (Score:3, Insightful) So how do you like scamming your ads off of Wikipedia's content ? Just when I thought the human race couldn't get any more pathetic, I get proven wrong. Another fashionable addition for PHP: (Score:4, Insightful) PHP 5.3 also adds support for local GOTOs. This langauge is so up with the times. Re:Another fashionable addition for PHP: (Score:5, Insightful) GOTO is what your CPU is actually doing 80% of the time. You can pump up your ego by imagining that using a language without something explicitely called "GOTO" makes your code "up with the time". But what you actually do is nothing but GOTOs, just written in a different manner. Ironically, the VM that PHP uses is completely GOTO-based (well, you can pick several methods at compile-time, but GOTO is what a lot of distributions chose because performance is often better than CALL and it's very stable nowadays). Oh and even JAVA has GOTO and relies a lot on it. The compiler hides an explicit thing called "GOTO", but what you get after compilation is full of GOTO. And it's actually why apps can actually do something. Laughing at "GOTO" is ignorance, or just blind trolling because you read somewhere that BASIC had a "GOTO" keyword. I guess in a few years your children will laugh at those horrible "$", "$this", "->", ":" and "\" symbols, that would remind them the old time of a language called PHP. Though you are proud of them now. Using temporary variables like "$should_exit", dummy loops just to "break" at the right place, or named loops to work around "break" that would only exit the first loop is nothing but writing "GOTO" in an obfuscated and inefficient way. "GOTO" is not synonym for "spaghetti code" (the famous keyword always used by people blindly repeating that GOTO is bad). Oh and grep for "goto" in your Linux kernel or in any BSD operating system. Wow, tons of them. Really. But I guess this is just because these source codes are shits written by people who can barely write GW-BASIC, and of course none of these operating systems actually work. Glad you are there to help. Teach them how to code, tell them that their code is so passé. Or shut up. Re:Another fashionable addition for PHP: (Score:5, Insightful) GOTO is what your CPU is actually doing 80% of the time. And your car's engine spends all of its time repeatedly causing small explosions with volatile petroleum. The driver is generally recommended to let the engine do this and not try to intervene or do it themselves. Wonderful analogy! (Score:4, Interesting) Spot on. Dead on target and a car analogy. You rock. --MarkusQ Re:Another fashionable addition for PHP: (Score:5, Informative) Goto's are useful. Labels are useful. Yes they can lead to problems, but so do things like pointers, dynamic typing, operator overloading, namespaces and automatic memory management. But they can also solve problems that are otherwise intractable, which is what the GP was trying to tell you. Dismissing them just because E. Djisktra said so is not really good enough. If you want an argument from authority, or just a good read, here's Donald Knuth on Structured Programming with GOTO Statements [snu.ac.kr]. You need to read that paper before you can have a proper opinion on the GOTO statement. Otherwise you're just adhering to dogma. Re:Another fashionable addition for PHP: (Score:5, Insightful) Wow, can't type today. Let's try again: GOTO remains the best way, in most programming languages, to exit multiple loops, branch to common clean-up code before leaving a function, etc. Re:Another fashionable addition for PHP: (Score:4, Insightful) Modern languages have "exit for" or "break" to bail out of a loop. If you have a triple nested loop in the same function, you should refactor the code and move the inner loops into another function. What do you mean by "Clean Up Code"? If you have so many branches in a single function, again, refactor the code and split them into multiple functions. See also: Code Complete [cc2e.com] Re:Another fashionable addition for PHP: (Score:5, Insightful) Yes, for school problems, refactoring the code to make the inner loop a function is great. For real-world legacy crap code, it's often impractical. You might have 15 variables that would need to be passed into that inner loop, and you might have shop standards preventing you from passing some of those types into functions, etc, plus your boss (often rightly) object to you "refactoring" code that works today. "Refactoring" is a charmingly naive expectation for legacy crap code to begin with. And if you're not working with legacy crap code today, consider yourself lucky: you don't know how good you have it. And by "clean-up code" I mean: you allocate resources at the top of a function, so they must be cleaned up at the bottom of the function (and no sneaky returning from the middle of the function). That code at the bottom of the function is "clean-up code". If you have 15 possible errors in your function, you either have some unreadable mess with 15 nested if statements that hide a perfectly straightforward logic flow, *or* you branch to the clean-up code in each of the 15 error cases, making the actual logic of the function obvious. Of course, most coders are simply incompetant, don't even bother to check for errors, and certainly don't ensure that every resource allocated at the top is easily visually identifyable as being freed at the bottom, which is why there are so many job maintaining legacy crap code (and why Java exists in the first place). And of course there are languages in which "goto" is pronounced "throw" and all the clean-up happens automatically, but mostly it's inventory and payroll databases that get coded in such languages: give me legacy crap code that does something *interesting* any day! I will forever cherish the one job I had in which C++ was used correctly (not the usual typing C into a cpp file) and goto was really unnecessary, but I don't expect lighting to strike twice in my career. Blah, blah, Code Complete. Some of us were doing that stuff correctly long before Steve McConnell (and I'd hardly cite Microsoft as an example of a stable, secure, maintainable code base!). Re: (Score:3, Insightful) Although rare, there are times when GOTO is the cleanest way out. I recall being stunned to run across one of them writing the code for my dissertation, and noting that I was going through a *lot* of code to avoid a single, simple GOTO. Yes, I could have avoided it, but under the circumstances, the GOTO was much cleaner. Oddly, I don't remember the circumstances; just my sheer amazement that such circumstances actually existed. hawk Re: (Score:3, Informative) In you example, F() took only two arguments, and the extra processing overhead of the function call was (presumably) not a concern. There are times when one or the other of those factors is practical. If the inner loop needs 15 variables (and is modifying some of them), the code can become far less readable than a simple goto, and noticably less performant. Of course, code performance is less of an issue every decade, but multiply nested loops are the one place where it still tends to matter. Can\'t read summary (Score:5, Funny) But in PHP for Windows (Score:5, Funny) It'll be /, just to keep things interesting. I have to say they are working really hard.... (Score:5, Insightful) Re:I have to say they are working really hard.... (Score:5, Informative) Re: (Score:2) PH Re: (Score:2): (Score:3, Interesting) CGI days (only technical requirement is standard input and output) are long gone. Well, not really. It still works. Your biggest complaint: You gonna end up with huge performance problems unless you would use something more modern and advanced than trivial piping. That is a vertical scaling concern. If vertical scaling is that important to you, consider using C, instead. Not that it's unimportant. I'm just saying -- if all we had was CGI, there's no reason we couldn't still throw hardware at the problem. No, I was talking about things like FastCGI, or even more relevantly, in-application webservers. It turns out, talking HTTP really isn't that hard. Ruby has, by my count, six separate in-application webservers su Re: (Score:2)... Re: (Score:3, Insightful) That link compares PHP to Perl. And pretty soundly shows Perl to be as good as or better than PHP, in some fundamental ways. I spit on Perl and so does everybody else I know. You apparently don't know a lot of people who actually understand Perl. The BASIC of the 21st century (Score:5, Insightful): (Score:2) and I'm migrating them over to Ruby on Rails or Merb. PHP is lame. Re:The BASIC of the 21st century (Score:5, Funny) Then I see people suggesting \ for a namespace separator, and I wonder what happened to all the people that put so much work into making PHP5 good, and why we can't get them back. Re: (Score:3, Insightful) The last one was seen downloading a Ruby On Rails development environment. Re:The BASIC of the 21st century (Score:4, Insightful) PHP ? Real OO? Thanks for the great joke. How can I add methods to Number? Ehm you know, the class used for numbers... In order to write 3.times() for instance... ah, it doesn't exist? Ok, so how can add methods to strings? Impossible, strings aren't objects either? Stop kidding. Why do PHP programmers use classes for? Just to avoid collisions between two functions with the same name when files are included. Really... very few PHP code instanciates more than one object per class. Introduction of namespaces might limit this. But there's something else that PHP miss: a "static" keyword. Guess how very large source code like OS kernels or demos have been built in C or assembler, without namespaces, without classes, without symbols like \, but without coders constantly fighting about names collisions? The reason is file-local symbols, ie. the static keyword in C (and local symbols in assembler). Only export (ie. make non-static) what you need to use in other files. As a bonus, it helps the compiler in order to optimize the code. Re: (Score:2) I don't do minor upgrades, there are other people for that. When a major upgrade is needed, let's say from version "2.7" to "3.0" they call me. Re: (Score:2, Funny) (Score:4, Funny) in other words you're incapable of maintaining code and you rewrite the same thing for them in a different language... Re: (Score:3, Interesting) While at it... (Score:2) While at it, they should have picked a page from the W3C and made namespaces full, compliant URIs. That would have been epic! /sarcasm. .NET / WPF is going this way (Score:3, Interesting)Par yet another wtf (Score:3, Interesting) The rfc [php.net] claims that typing "**" is easier than typing "%%" or "^^". Re: (Score:2) Re: (Score:2) The rfc [php.net] claims that typing "**" is easier than typing "%%" or "^^". But it is! ... if your right shift key is broken... Re: (Score:2, Funny) AltGr + Plus [the key right of number 0] on Estonian layout also. This is so discriminatory! :P We should use a character present on most keyboard layouts. I propose the use of the Space-key for this purpose. A long overdue addition (Score:2, Insightful) Just my 2 cents. Re:A long overdue addition (Score:5, Funny) While you're livin' it up at your stately manor, I'm coding PHP out of my garage, you insensitive clod! Re:A long overdue addition (Score:5, Insightful) Thats all good. I personally feel its just easier to avoid PHP altogether and not have to adjust to all of the language's quirks for little to no benefits from other offerings. Simpler that way. Backslash! (Score:4, Funny) Well now we all know what trouble this is going .. (Score:5, Funny) ... to cause for windows servers... imagine what directories will be deleted due to a typo! Re: (Score:3, Insightful) other issues (Score:5, Insightful) Maybe they could starting fixing the noun-verb vs verb-noun problems instead. Re: (Score:2) Re: (Score:2) (Score:5, Funny) Thank you, PHP. Re:Today is a Wonderful Day (Score:4, Interesting) so what's the problem here? (Score:2)? (Score:5, Informative)? (Score:4, Funny)? Re: (Score:3, Insightful) You'd have to do that anyway, because you used double quotes, "$instance" is going to get evaluated and your eval will fail anyway. Re: (Score:3, Interesting) 1) "/" - used by almost all languages in regex expressions. I wouldn't want to pollute that namespace. 2) "::" - Awesome. Many other languages do this. I think this is already taken for PHP, but I dont know PHP well enough 3) "|" - Pipe has a very specific meaning on the command line and I'd hate to pollute that. Plus it looks ugly. 4) "~" - Any reasons this wouldn't work? 5) "." - Again, awesome. Many other languages use this. 6) "!" - Used 7) "&" - Ugly, used. 8) "," - Why not? Does any language use th Phalanger (Score:2) Re: (Score:3, Insightful) Why, exactly? English is not the only language spoken in the world, you know, and programmers in many countries feel more at ease naming identifiers using their native language rather than trying to come up with an English translation with their limited knowledge of English. In fact, when people who don't know Easier on which keyboard layout? (Score:5, Insightful). Re: (Score:3, Interesting) I couldn't agree more. When I read this my only thought was "what the fuck?". On a German Mac keyboard, \ is Alt+Shift+7. I think on Linux and Windos it's something equally retarded. Will the people who made PHP a good programming language please fork it and take it away from the clueless morons who have taken over? Re:Easier on which keyboard layout? (Score:5, Insightful) This line from the IRC discussion says it all: [16:24:51] bjori: switch to US layout ;) Classes, namespaces, and subnamespaces (Score:4, Informative) Other suggestions (Score:3, Interesting) ! \ vs / (Score:2)... Easy to fix this (Score:5, Funny) Since PHP is open source, someone will make a fork with a different separator and the dumber of the two choices will wither away. Oblig. Bender (Score:3, Funny) Yeah, screw those guys! I'll make my own PHP -- with blackjack! and hookers! Forty years down the road (Score:3, Insightful) with 4GB RAM machines with TB hard drives - and we're still worrying about "the number of characters". Oh, please. Fucking nerds. For the last forty years, we've been constrained by one pointless limitation after another, not to mention the complete inability of a PC to discern what is an identifier and what is command syntax if it has fucking SPACES in it. Get your heads out of your asses. And learn to type. "Number of characters" - Jesus Baron von Christ! And just to add to that (Score:3, Informative) calling a computer programmer a "software engineer" is like calling a crack whore a "courtesan". There's no "engineering" involved for ninety nine percent of them. Re: (Score:3, Funny) [sic] Re:what wrong with (Score:5, Insightful) PHP uses the . as the concatenation operator. PHP does not support operator overloading... Re: (Score:2, Informative) interprete Re:what wrong with (Score:5, Insightful). Re:what wrong with (Score:5, Informative) Re:Well, That Does It! (Score:4, Insightful): (Score:2, Funny) Re: (Score:2, Interesting) This isn't the first (or last) time PHP developers have implemented a stupid workaround rather than fixing problems with the language/runtime/interpreter/parser/scanner. Re:WTF? (Score:5, Interesting)? (Score:5, Insightful) I think their decision to use '\' is very very dumb one. You've summed up my opinion concisely. That is *truly* retarded to use the (almost?) universal escape character for another reason. Almost as retarded as Microsoft going with \ for a directory separator. Re: (Score:2): (Score:3, Funny) Re: (Score:3, Interesting) Not at all, there were some serious ambiguity issues that needed to be resolved. Take the following code: ---------- # first file namespace Foo; function blarg() { echo "function"; } # second file class Foo { public static function blarg() { echo "method"; } } # third file Foo::blarg(); // what does this output? ------- The problem here is that calling a static m All languages (Score:2) ...? Re:HOLY FUCKING SHIT!?!?! (Score:5, Informative). Re: (Score:2, Funny) Re:That's because.. (Score:5, Insightful): (Score:2) Re: (Score:2) th Re: (Score:2) Python: TypeError: cannot concatenate 'str' and 'float' objects Ruby: TypeError: can't convert Float into String Weak typing is the problem, not dynamic typing. Re: (Score:3, Insightful) Calling a variable "myInt" doesn't make it an integer -- it was a float from the second you added a dot. First, duh. Second, are you sure it will return "hello1.0" and not "hello1.00" or "hello1"? I'm not, I could test to find out, but that makes my point. It adds ambeguity to the language and makes me think about something I really should be thinking about. To use your example, what does the following result in? function blah(arg1) { var blah = "10" + arg1 + 5 + "hi"; } alert(blah("hi")); alert(blah(1)); alert(blah("1")); My example isn't as contrived as yours. What if blah() was in some other JS file. It could be easy to trip the function up based on what you pass in. Using "." for string concatenation at least gives both you, the code reader, and the compiler a hint at what you mean. That way the compiler can barf if you do something silly like concatenate an integer with a string. Using "+" for both addition and concatenation just makes more work for both parties. In any case, I don't find concatenation nearly as useful as interpolation, most of the time. In Ruby, at least, interpolation is known to execute faster than concatenation. But that doesn't help you if you're using JavaScript. I agree with this statement. I much prefer how Ruby and Perl do it... just toss the variables into your string and it will interpolate them. BTW, I'm not too familiar with Ruby. Does it pull the same trick Perl does and use a different operator for string comparison? if($myStr eq "hi") { print "hi";} Re:Gripe Moan Bitch and Holler! (Score:4, Insightful) Scripting languages are for those of a weak mind and poor technical skills and the singular lack of the ability to plan a system out before you write one line of code. Or for projects that need to be compiled at runtime. But, nice magnanimity. Re: (Score:2) Re: (Score:3, Informative)
http://developers.slashdot.org/story/08/10/26/1610259/php-gets-namespace-separators-with-a-twist
CC-MAIN-2014-52
refinedweb
3,667
65.01
#include <Tone.h>Tone tone1;int spkpin = 9;//the speaker (digital PWM)pin is 9int note = 440;//the starting hertzint minhz = note;//minimum hertzint maxhz = note*2;//maximum hertzint way = 0;//0 and the tone goes lower, 1 and it goes higherint changespeed = 1;//the speed of changing the note in milisecondsvoid setup(){ tone1.begin(spkpin);}void loop(){ tone1.play(note);//plays the current note if (way==0)//if the way the tone is changing to is down { note = note-5;//subtract 5 hertz from the note delay(changespeed);//delay it } else//or if it is going up { note = note+5;//add 5 hertz delay(changespeed);//delay it } if (note<=minhz)//if the note is at the minimun hertz value { way = 1;//set way to 1, thus making the tone go higher } if (note>=maxhz)//if the note is at the maximum herts value { way = 0;//set way to 0, thus making the tone go lower }} Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=42412.0;prev_next=prev
CC-MAIN-2015-11
refinedweb
191
56.52
che dispositivo inverte usi? Arduino Connectivity Hello and Welcome to the Cayenne community! Can you see for me, when the device goes offline, what is the time of the last data package sent? You can see this in the bottom of the page. Also, when the device is offline, is ti still continues to display values ? Thank you. Unfortunately there are currently some known issues with the dashboard incorrectly showing devices offline. If you watch your data it should still be updating. There will be an official announcement when these issues have been fixed. Thanks all, I couldn’t see anything weird in the serial monitor, apart from a slight delay. Then the device goes offline in the portal and comes back shortly after. Ill wait for the official announcement and then loop back if i’m still having issues. don’t use delay() function. If you have to, don’t delay for more than 1000 (1sec) or you will get disconnects. You could also try adding “Cayenne.run();” that runs in the main loop to other parts of your code, perhaps after a delay so it can stay connected to the servers. Hi @meriksson, we pushed a fix today for Online/Offline status issues for MQTT devices. I realize that’s not the connectivity you’re using for your device in this thread, so in that sense this doesn’t apply to you, but I wanted to let you know because we have a ESP8266 MQTT Library that you can use to connect your device as a “Bring Your Own Thing” device in Cayenne rather than through the Arduino option, and we believe it to be stable now with regards to online/offline status. If you instead prefer to troubleshoot the non-MQTT sketch you’ve posted here, I’d be interested in the Serial Monitor output from the Arduino IDE while it is running and having online/offline issues. This may shed some light on whether it is another server side issue on our end, or if it’s the sketch code/something local for you. I have three Rpi and two Arduinos running. They seem to drop off line randomly as well. The Arduinos seem to be more stable than the Rpi’s and go off line less often. I generally re-upload the sketches to force a reboot of the Arduinos in our remote offices. I keep the connected by USB to the servers there. Cayenne and MyDevices has proven to be a useful tool for monitoring our remote server rooms with motion, temp and light sensors. More stability in the Rpi connections would allow us to use them more. Hello, I encounter this behavior with the Arduino Yun, connect via WiFi. I only have setup a test case, just switching on and off LEDs and I use the example sketch from Cayenne when setting up an actuator --> Light --> Button LED. Any hints to solved this much appreciated. Welcome to the Cayenne community! Thanks for letting us know about this… Do the Rpi’s come back online on their own? Or do you have to reboot them? ~Benny Hi @Silverfern, Would you mind pasting the sketch code you are using? ~Benny Hi @bestes just that one from mydevices /* Cayenne Light Switch Example This sketch shows how to set up a Light Switch with Cayenne The Cayenne Library is required to run this sketch. If you have not already done so you can install it from the Arduino IDE Library Manager. Steps: 1. In the Cayenne Dashboard add a new Light Switch Widget. 2. Select a digital pin number. Do not use digital pins 0 or 1 since those conflict with the use of Serial. 3. Attach the negative leg of an LED to ground and the other leg to the selected digital pin. Schematic: [Ground] -- [LED] -- [Resistor] -- [Digital Pin] 4. Set the token variable to match the Arduino token from the Dashboard. 5. Compile and upload this sketch. 6. Once the Arduino connects to the Dashboard you can toggle the LED switch. Notice that there isn't much coding involved to interact with the digital pins. Most of it is handled automatically from the Cayenne library. */ #define CAYENNE_PRINT Serial // Comment this out to disable prints and save space // If you're not using the Ethernet W5100 shield, change this to match your connection type. See Communications examples. #include <CayenneEthernet.h> // Cayenne authentication token. This should be obtained from the Cayenne Dashboard. char token[] = "YouWishToKnow"; void setup() { Serial.begin(9600); Cayenne.begin(token); } void loop() { Cayenne.run(); } The sketch works, I was able to trigger LEDs via Browser and App Benny, Thank you for your reply and follow up. With the Rpi’s I usually have to reboot them to get them back online. I have been mostly using the web interface while monitoring and not the phone app. But the app is great after hours to check on the server rooms or when I am out of town. I know sometimes there is a delay on the web page and they will come back online in few minutes but aside from the occasional delay it seems when it is really off line it takes a reboot to get the Rpi’s back on line. When I left the office yesterday they were all on line. This morning when logging in one Arduino was down and one Rpi was down. The Arduino came back on in a minute or so after logging into the web page. The Rpi did not come back after waiting a few minutes so I rebooted and it came back online. This seems typical of the issue. WW Hi @Silverfern, I think I see what the issue may be. The sketch you are using is not made for the Yun. If you go through the add device process again for adding an Arduino, you should see a specific option come up for the Yun. You should copy and paste that code into your Arduino IDE and use that. Here’s a pic of what I am referencing… Right. Use the code from the section in the image I pasted. It contains the CayenneYun.h file that is needed. The sketch file that you are clicking on is not dynamic so it does not produce the Yun specific sketch file. Alternatively, you can change the #include CayenneEthernet.h> to #include <CayenneYun.h> in your current sketch file. Does that make sense? ~Benny Hi @meriksson (and anyone else reading and interested) After writing up a tutorial today on how to convert from Arduino to Arduino MQTT sketches, I wanted to reach out and see if you could try following it when you have some free time. It should apply to your ESP8266 as well, although you’ll need the Cayenne MQTT ESP8266 library rather than the Arduino one I linked in that tutorial. I think this would be a good option for you as we believe our MQTT connectivity to be more stable than the original Arduino connectivity, and it should be much more tolerant to the delay() statements in that code as it uses a push design rather than continuous polling which can be thrown off and cause disconnects with large delays. I want to make myself available to you to assist with switching the code over or any other questions you might have about the difference in the two connectivities. @rsiegel I can confirm that MQTT is more stable. I was facing the well known stability issues with my Yun and with Dragino Yun schield on a mega. I changed to MQTT protocol for YUN and ALL my problems were gone … is working for many days non-stop now PS: I started using MQTT on WeMos D1 mini … and it works fine there too … for many days … @rsiegel … for the Yun I needed to define a new WeMos device so I could get the right credentials to use on a Yun with MQTT … the choice for MQTT is NOT available when defining a Yun Device ??? … could be a suggestion to add this … Thanks for the feedback @Jodi, and glad you’re having better stability with MQTT. In short, the Bring Your Own Thing category is intended as a catch-all bucket that you can connect any device with an MQTT client as, including ones that have other connectivity options like Raspberry Pi and Arduino. We’ll going to be improving the UI for this in the near future so it isn’t as confusing where you need to go to connect each device.
https://community.mydevices.com/t/arduino-connectivity/4949/21
CC-MAIN-2019-09
refinedweb
1,430
71.55
IFRS17 CSM Waterfall Chart Notebook¶ To run this notebook and get all the outputs below, Go to the Cell menu above, and then click Run All. About this notebook¶ This noteook demonstrates the usage of ifrs17sim project in lifelib, by building and running a model and drawing a waterfall graph of CSM amortization on a single modelpoint. Amortization of acquisition cash flows is not yet implemented.. The entire script¶ Below is the entire script of this example. The enire scipt is broken down to several parts in differenc cells, and each part is explained below. The pieces of code in cells below are executable one after another from the top. %matplotlib notebook import collections import pandas as pd from draw_charts import draw_waterfall, get_waterfalldata import modelx as mx model = mx.read_model("model") proj = model.OuterProj[1] df = get_waterfalldata( proj, items=['CSM', 'IntAccrCSM', 'AdjCSM_FlufCF', 'TransServices'], length=15, reverseitems=['TransServices']) draw_waterfall(df). [2]: import modelx as mx model = mx.read_model("model") To see what space is inside model, execute model.spaces in an empty cell. model.spaces Calculating CSM¶. [3]: proj = model.OuterProj[1] Exporting values into DataFrame¶ The code below is to construct a DataFrame object for drawing the waterfall chart, from the cells that make up bars in the waterfall chart. TransServices is passed to reverseitems parameter, to reverse the sign of its values, as we want to draw is as reduction that pushes down the CSM balance. [4]: df = get_waterfalldata( proj, items=['CSM', 'IntAccrCSM', 'AdjCSM_FlufCF', 'TransServices'], length=15, reverseitems=['TransServices']) Tha table below show the DataFrame values. [5]: df [5]: Draw waterfall chart¶ The last line is to draw the waterfall graph. The function to draw the graph was imported from the separate module draw_charts in this project directory, and was imported at the first part of this script. [6]: draw_waterfall(df) [6]: <matplotlib.axes._subplots.AxesSubplot at 0x288b6c40308>
https://lifelib.io/projects/notebooks/ifrs17sim/ifrs17sim_csm_waterfall.html
CC-MAIN-2021-43
refinedweb
309
57.57
On Fri, 2006-11-03 at 10:27 +0000, David Howells wrote:> Anyway, it's not just vfs_mkdir(), there's also vfs_create(), vfs_rename(),> vfs_unlink(), vfs_setxattr(), vfs_getxattr(), and I'm going to need a> vfs_lookup() or something (a pathwalk to next dentry).> > Yes, I'd prefer not to have to use these, but that doesn't seem to be an> option.It is not as if we care about an extra context switch here, and wereally don't want to do that file i/o in the context of the rpciodprocess if we can avoid it. It might be nice to be able to do thosecalls that only involve lookup+read in the context of the user's processin order to avoid the context switch when paging in data from the cache,but writing to it both can and should be done as a write-behind process.IOW: we should rather set up a separate workqueue to write data to disk,and just concentrate on working out a way to lookup and read data withno fsuid/fsgid changes and preferably a minimum of selinux magic.> > > Also I should be setting security labels on the files I create.> > > > To what end? These files shouldn't need to be made visible to userland> > at all.> > But they are visible to userland and they have to be visible to userland. They> exist in the filesystem in which the cache resides, and they need to be visible> so that cachefilesd can do the culling work. If you were thinking of using> "deleted" files, remember that I want it to be persistent across reboots, or> even when an NFS inode is flushed from memory to make space and then reloaded> later.No. I was thinking of keeping the cache on its own partition and usingkernel mounts. cachefilesd could possibly mount the thing in its ownprivate namespace.Trond-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/11/3/66
CC-MAIN-2015-18
refinedweb
339
67.18
Copyright ©2005 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. These are the collected Last Call comments on Timed Text (TT) Authoring Format 1.0 Distribution Format Exchange Profile (DFXP), Last Call WD which were sent to the public Timed Text mailing list public-tt@w3.org (archives) and responses to those comments. The DFXP LC review annoucement was sent to the public-tt@w3.org and chairs@w3.org list on Sept 15 2006. The 15 comments have the following status (status on 15 Sept 2006): Public Comments include: Notification of these Last Call responses were sent to all of the commenters and the Timed text list: Comment: The discussion is archived at: to /2005Mar/0041.html The resolution is archived at: F2F Cupertino and Comment: 1. Meeting requirements [[[ It is intended that a more feature-rich profile, known presently as the Authoring Format Exchange Profile (AFXP), be developed and published to address the full set of documented requirements. ]]] Is there any concrete reason to believe this will take place? The group has had its charter extended already, just to produce this restricted draft. Is the group working on this more complete version already? Or is this just a hope?. Actually it seems strange that so much of this is in the spec. I understand that it enables synchronisation with traditional television, but it seems a lot of complexity. 5. Duration and begin/end It would be helpful if the spec said what happens when the begin/end and the dur attributes don't agree. 6. Compulsory xml:lang attribute Cool idea.. (In other words, this could be done with a brief note in the introduction or somewhere). cheers Charles McCathieNevile Fundacion Sidar The discussion is archived at: and to 2005Apr/0010.html The resolution is archived at: Cupertino F2F and Glen, I have some problems with the current support for background colour in DXFP. The attached files indicate three common modes of using background colours. Given the current attributes tts:displayAlign and tts:showBackground I cannot see a way to trivially implement these effects in DXFP without creating a large number of regions. Can consideration be given to enhancing the tts:showBackground attribute to support the following attribute values? content - The background attributes are only applied to the part of the region that has content. I.e. there is only a background behind characters of displayed text but not behind white-space characters (e.g. line 21 boxed captions with transparent space). content-space - The background attributes are only applied to the part of the region that has content and also to white-space within the content. I.e. there is background behind characters of displayed text and behind white-space characters. If no white-space character exists at the end of each line, the background is extended to cover the area of a single space character at the end of each line. (UK boxed style) active-line - The background attributes are applied to the lines of the region that have content. The background extends the full width of the **region** for any line that has content within the region. active-stripe - The background attributes are applied to the lines of the region that have content. The background extends the full width of the display. active-bounds - The background attributes are applied to the bounding box of all content within the region, providing the region has some content. active-region - The background attributes are applied to the entire region providing the region has some content. stripe - The background attributes are applied to the entire region regardless of if the region has some content.. The background extends the full display width. region - The background attributes are applied to the entire region regardless of if the region has some content. (E.g. this might be used for censorship). regards John Birch Senior Software Engineer, Screen Subtitling Systems Limited, Huge thread of 70 emails ... to 2005Mar/00075.html The resolution is archived at: Cupertino F2F and TTWG response agreed by Requestor Glenn, OK, I'll try and frame what I see as the style problem.... The DFXP style model is quite suitable for the carriage of styled text, BUT, in the contexts of accessibilty and transcoding, the DFXP style mechanism IMO lacks an essential ingredient, that being the reason for (or context of) the applied style. As an example - an author may choose yellow text on a red background for a warning message. The carriage of that text as simply text characters and colour codes loses one piece of information - the fact that it was intended as a warning. This missing information becomes important when interchanging content between formats that have different support for style (for example between a colour and monochromatic presentation), or in transcoding / translating content between cultural groups. Another example - Green is 'lucky' in Ireland, but Red is 'lucky' in China. Go n'éirí an t-ádh leat could translate to ??? (Note: Internet machine translation!). You have stated that it is possible (likely) that this style tagging (context) will be a feature of AFXP and that DFXP is a format intended as "a solution that addressed the more pressing and less complex need of interchange among a small number of legacy distribbution formats,specifically SAMI, QuickText, RealText, 3GPP TT, CEA-608/708, and WST." It should be noted that CEA-608/708, and WST (and in fact TV subtitling formats in general) are typically not stored in these wire formats by broadcasters, rather these wire distribution formats are created in real-time by insertion equipment working from proprietary file formats. A single common file format already exists as a ratified interchange standard, EBU 3264. DFXP could replace the use of EBU 3264 - it offers a few of advantages, a) it is Unicode, b) it is XML and c) it has a more comprehensive language tagging mechanism. However, DFXP does not offer any significant new features over EBU 3264, and indeed there are features in EBU3264 that are not present in DFXP (e.g. cumulative mode and boxing). A combination of extension elements and attributes and constrained document structuring (via a sub-profile) can probably be used with DFXP to fully represent EBU 3264 document contents - and other general TV broadcast related subtitling issues. Indeed, it is anticipated that the use of DFXP as an interchange mechanism for TV broadcast subtitling will require the development of guidelines for the interpretation of DFXP documents by transcoders. In addition it will probably require the development of a profile to add elements and attributes to DFXP to carry information and features currently supported by existing formats, (e.g. conditional content, cumulative modes, background styles, embedded glyphs, subtitles as images (DVD, DVB, Imitext)). The pressing need is not IMO for another interchange format per se, rather it is for a format that preserves more of the authorial intent (inc. understanding / meaning) such that implementing transcoding, translation and accessibility are made easier tasks than they are currently. My main concerns are that using DFXP will encourage the continuation of the existing practice of 'cooked text content' - that is text that has lost contextual meaning - and that AFXP will be too complex and too late for most implementations. Is there a middle path for DFXP that would encourage a more context sensitive (and accessible) role for text style? DFXP already includes a referenced style mechanism - could that mechanism be strengthened to provide greater support for contextual styling of text? best regards John Birch. An author may use a combination of ttm:role, ttm:agent, as well as user-defined metadata attributes or elements, e.g., placing them in a child of the content, in order to express "the reason for (or context of) the applied style." For example, see example in.. The discussion is archived at: to 0023.html The resolution is archived at: Cupertino F2F and TTWG response agreed by Requestor Comment: On Sat, 02 Apr 2005 00:06:17 +1000, Al Gilman <Alfred.S.Gilman@ieee.org> wrote: >> On the other hand accessibility issues are not addressed by FO. It is >> not at all clear how a user should expect to provide styling rules to >> meet their particular needs, as is trivial using CSS for text styling. > On the one hand, XSL FO does address accessibility through the > link-to-source provision. > This is for XSL FO documents - not for a different collection that happens to use some of the properties out of XSL-FO. A link to the source where it issome other format isn't likelyto beay more helpful than the DFXP document. > If DXFP does not emulate this, it should be considered. > On the other hand, CSS already arogates to itself the ability to > supercede presentation properties asserted inline in the source being > styled. Right. My contention is that DFXP should use CSS as the mechanism by whch User Agents provide users with the ability to override presentation, where that is required for accessibility reasons in the case that DFXP is served directly. cheers Charles McCathieNevile, Fundacion Sidar The discussion is archived at: to 2005Apr/0023.html The resolution is archived at: Cupertino F2F and Comment: Thank you for offering us the chance to refine our input for a little longer. Since you are meeting face-to-face, let me offer the following thoughts of an individual and preliminary nature. Key thoughts: - if the user can receive the content on a programmable device, we need to develop the [Web] distribution options and content constraints [with format support] to serve alternative (adaptive) presentation for individuals. - there is going to be a lot of content that sees the light of intra-broadcast-industry pipelines in DFXP encoding. Deferring adaptive use to the availability of an AFXP spec is not necessarily an acceptable policy from the standpoint of disability access. While the DFXP specification may not define a CPE player for the format per se, there is still reason to consider use cases for people with disabilities which require an alternate presentation of the material. Just because there is no anticipation that the DFXP would be used directly in mass-market set-top-box processes, it doesn't mean that there aren't authoring-time requirements on the content that should be supported in the intermediate form i.e. the DFXP. Making the DFXP available to a transcoder of the user's choice is one way that the content encoded in the DFXP could be served to a person with a disability requiring alternate presentation. Or the content could be browsed offline using a mainstream XML reader and a schema-aware assistive technology. [start use scenario] Here is a scenario sketch to illustrate what I mean: There is a meeting held by videoconference over a corporate extranet. To serve strategic partners in other countries and technology platforms, Internet technologies are used including subtitles generated in real time and distributed using DFXP as an intermediate form. One of the people whose job requires interacting with the content of the meeting is Deaf and blind. So a complete log of the meeting is kept for this participant's offline review. supposition: The DFXP, as an XML format, is the dataset of choice on which to base this person's browse of what transpired in the session. Not just the formal statement of the decisions that were reached, but the dialog that led to the decisions. This would mean that the DFXP would be spooled and archived with the audio and video. Quite possibly there would be a SMIL wrapper created as a replay aid. But the deaf-blind user would be reviewing this through a refreshable Braille device and primarily reviewing the timed text as transcript. Note that in interactive Braille as the delivery context, right-justification and color are not appropriate as speaker-change cues. So we need the speaker-change semantics available, separable from any particular visual-presenation effects. DFXP gives the author the capability to express this, but will the information be there in instances? So regardless of whether a collated transcript is created by a transcoder, or the several text streams are browsed as is with an adaptive user agent, the availability of speaker identification in the DFXP instance, the working base for the adapted use, or at a minimum speaker-change events if the identity of the speakers was not captured, would be important in affording this user comparable quality of content as those receiving the same information as real-time display integrated with the video and audio. [end use scenario] This is just to illustrate that there are people with disabilities for whom the introduction of something like the DFXP into the content pipelines of broadcast happenings reflects an opportunity that should not be wasted to raise the level of service and lower the cost of delivering that service. In particular, the use cases for adapted presentation do not necessarily presume that the DFXP would be pushed to all consumers in the broadcast bundle. The distribution protocol might be on an ask-for or 'pull' basis. And the user interaction might be in non-real-time after the fact and not at speed. But the non-availability of the AFXP format as a "source in escrow" format for adapted uses means that the user needs the DFXP that gets produced to be as fit an adaptation basis as we can make it. This will be true while the AFXP is undefined, and will still be true for those situations where a copy of the DFXP can be obtained and a copy of a standard, XML source for that content cannot. The latter is likely to be common even after the AFXP has been specified by W3C. Thank you (the whole group) for bringing this important technology this far. Best wishes for your meeting. Al The resolution is archived at: Cupertino F2F and The usual approach in W3C is to use consensus public formats as a pivot point so that the author can understand the binding of the content schema to the lingo of the domain sourcing the content, and the assistive technology or device independence specialist can understand how to map the content schema to the presentation possibilities of one or another delivery context. The content schema is consolidated through an inter-community negotiation; while the pool of people engaged in the negotiation need to cover the stakeholding domains of activity, nobody has to become an expert in both/all of them. On the other hand, with the Semantic Web the W3C gives us an alternate approach with less reliance on standard formats and more reliance on metadata. And the WAI seeks creative solutions using any applicable technology, not simply rote cant. However, a metadata approach would still require that the content sourcing activity a) capture and be prepared to share key information such as speaker identity, where readily achievable, and b) explain the terms in the way *they* are using [whatever format they are using as the source or editable form] in terms of well-established public-use references. The later is a schema reconciliation or data thesaurus. [There is no policy-free solution, AFAIK.] The avenue of amelioration that we haven't touched on specifically has to do with the CR checklist. We should be looking at what concrete example-use activities during CR would illuminate the issues we have been discussing so as to make it easier to come to consensus that the DFXP does about what it should in these directions. Al The discussion is archived at: The resolution is archived at: Cupertino F2F and If we allow for extrinsic timing where such a time may not be resolved yet, then there are use cases where it is necessary to express both dur and end. For instance: "Display this text for 20 seconds unless the extrinsic- event-based end time resolves before that time, in which case end when it resolves". Is this mix of extrinsic and intrinsic timing actually supported within DFXP. I thought that the discontinuous attribute applied to the entire document (I see discontinuous as synonymous with media marker modality)? Further, I find it a strange balance of features in that DFXP allows such a sophisticated timing model when it only supports a relatively simplistic styling model. Note: I do not have any real issue with the inclusion of the ttp parameters for timing model, except that they increase the complexity of a **fully conformant** (non SMIL based) user agent fairly dramatically. I am currently assuming that since DFXP deliberately avoids talking about UAs, that an implementation must clarify what aspects of DFXP timing model it supports. I feel this puts DFXP in an awkward position as a universal distribution format - since originators of content may use features of the timing model that are not supported by transcoders or UAs. So my position is that a simpler timimng model would be more likely to be universally adopted, further sophistication of the type in your example could IMO be handled by any 'container' format e.g. SMIL, and is unneccesary within a 'media track' format (which is how I view DFXP). best regards John Birch The discussion is archived at: The resolution is archived at: Cupertino F2F and and TTWG response agreed by Requestor Comment: The] Finally, the TT WG has carefully reviewed the semantics and intended use cases of the above three metadata elements and compared these with similarly named items in the Dublin Core vocabulary. After this review, we concluded that there is sufficient difference of usage and intended semantics to retain these items in the TT AF metadata vocabulary. The DC metadata vocabulary may be used along side this vocabulary as desired by an author. The resolution is archived at: The discussion is archived at: TTWG response agreed by Requestor Comment: Hello TT WG, Thanks for permitting a delay. Here are the promised comments: 5.1) Why six different namespaces in one document format? It doesn't seem like you can use any of them on its own. 5.2) If it is important for an implementation to know the profile a document conforms to, shouldn't the profile name be passed out of band (as a MIME type parameter), instead of inside the document? A profile is usually something you author to, but then neither the client nor the server need to know that you do so; or it is something a client uses for content negotiation (e.g., HTTP Accept header or TYPE attribute on an HTML LINK element). It is not useful inside the document, except perhaps for the Unix "file" command... 5.2.b) In fact, the spec doesn't say what an implementation does with profiles. I assume it doesn't do anything with it (unless perhaps if the program is a validator). 5.3.1) Why lowerCamelCase even for names that are borrowed from other specifications? XML names can contain dashes. 6.2.1) EDITORIAL: s/express number/express the number/ 6.2.1 until 6.2.13) All (not sure about one or two) of these attributes can only occur on one element, viz., <tt>. So why are they defined as global attributes (with a prefix)? 6.2.2) Where is the syntax of GPS time coordinates defined? I couldn't find the definition in the spec and there is no reference either. 6.2.2) Are GPS time coordinates really needed? Their semantics are the same as UTC, aren't they? What is the use case? 6.2.2) It seems that there may be only one clockMode attribute per document (if I interpret the note at the end of 6.2.13 correctly), but unlike for the other attributes, there is no paragraph in this section that says that clockMode may only occur on the <tt> element. 6.2.3) Is the defaultLengthUnit attribute needed? In CSS, we found it useful to have unitless numbers mean something specific, different from a length, e.g., as a multiplier. That possibility is removed when there is a default unit. Also, in a typical document there probably aren't more than a dozen or so length values, so declaring a default doesn't actually make the document shorter. 6.2.3) The default is "pixels," but are they device pixels or px units, as in XSL/CSS? 6.2.4) Same question: is defaultTimeMetric necessary? 6.2.5) EDITORIAL: s/of document instance/of a document instance/ 6.2.6) Is NTSC the only case where a frameRateMultiplier is necessary? If so, then maybe a single keyword ("NTSC" on the frameRate attribute) is enough, and a general multiplier is overkill. 6.2.6) EDITORIAL: s/MHz/Hz/ for the first occurrence in the note. 7.1.1) Why must xml:lang be specified? Isn't omitting it the same as defining it to be the empty string? 7.1.1) Is xml:space necessary? You'll have to have style attributes for space handling anyway, so why complicate matters by doing a half job in XML? 7.1.3) Attributes begin, dur and end are on <tt> and on <body>. Are they needed on both? 7.1.7) Is the <br> needed? You can also use two <p> elements if you need two lines. 7.1.7) What happens if you put two <br> elements in a row, do you get an empty line or not? 7.1.7) Why does the <br> element have an xml:space attribute? Empty elements don't contain spaces... 7.2.3) Rule three seems to imply that the mark-up <p>one<span> two </span>three</p> is displayed as "onetwothree" without any spaces. Maybe you meant to omit leading and trailing spaces from the <p> element only? 8.1.1) What happens if a DFXP document has a style PI? I assume a DFXP application will ignore it (just like a generic XML viewer will ignore the <styling> elements). 8.2.1) The bullet list of elements that accept style attributes should be non-normative (i.e., a note), because that information is already known from earlier sections. 8.2.10) The note says that a horizontal font-size is useful in systems that have two fonts: normal and double-width. But do you expect a horizontal font-size to work on any other system? or with any other value than "1c" or "2c"? 8.2.16) The note says that a <p> is displayed on one line, unless a <br> is used. But doesn't that also depend on wrapOption? 8.2.16) overflow in CSS/XSL has a value "scroll" but here it is renamed to "dynamic." Why? An automatic scroll, such as the marquee effect of "dynamicFlow," is a valid "scrolling mechanism" in XSL/CSS terms. 8.2.17) "padding" allows one, two or four values. Why not three, as in XSL/CSS? 8.2.18) "showBackground" appears to be similar to 'empty-cells' in CSS. Is there no way to merge them? 8.2.19) "textAlign" doesn't allow values "left" and "right" as in XSL/CSS, although it is much easier for an author to write "left" than "start" (or "end") when he means "left." Also, when converting from/to other formats, it is easier if the value for textAlign in DFXP is a direct translation of the corresponding value in the other format, rather than a function of that other value and the "direction" property. 8.2.20) CSS3 proposes a 'font-effect: outline' property to create outline fonts, but it doesn't give control over the thickness of the outline, let alone the amount of blur. Isn't 'font-effect: outline' enough? 8.2.22) The example is supposed to show that "visibility" can hide text, but there is no text to hide... The first text only appears after 1 second. 8.3.6) The generic font family names suggest specific kinds of fonts, but the spec effectively says to expect nothing. In that case, why aren't they called "font1" to "font5"? Some help for implementers seems useful. If an implementation has different fonts available, I think users would like it if the fonts are mapped somewhat intelligently. 8.3.6) The generic font families are different from those in XSL/CSS. Maybe DFXP doesn't need "fantasy" and "cursive," but it could have kept "sans-serif" and "serif" without renaming them. Also, is the difference "monospace-sans-serif" vs "monospace-serif" really needed? Just one monospace font has been enough for all my uses (which weren't subtitles, I admit). 8.3.11) The units px, em and c are defined syntactically, but what do they mean? I assume px is as in XSL/CSS and em is the font-size, because 8.2.10 mentions an "EM square" in relation to font-size. "c" is probably the cell as defined by 6.2.1 cellResolution. 8.3.11) Is the em unit the vertical font size or the horizontal one? Or does that depend on whether the length is used to measure something horizontal or vertical? 10.1.2) begin, end and dur attributes: what happens if they conflict? Bert -- 5.2 [1] - TTWG Response: A "profile" or "version" could be passed out-of-band, but since DFXP does not define a transport mechanism, such definition is more appropriately defined in another specification in order to suit the needs of that specification. 5.2 [2] - TTWG Response: DFXP explicitly does not specify a schema binding mechanism. Conformance clause 3.2 sub-item 1 places this responsibility on a TT AF Content Processor. Nevertheless, we recognize that some guidance be provided to processor implementers; therefore, we will add informative language suggesting that a processor may make use of the ttp:profile parameter as a means for identifying the declared language subset (profile). 5.3.1 - TTWG Response: The TT WG has adopted a consistent convention for all names, and will provide an informative annex indicating heritage of tokens and names, indicating whether they had camel case normalization applied. 6.2.1 [1] - TTWG Response: Editorial: Will fix. 6.2.2 [3] - TTWG Response: This was an oversight. An appropriate constraint will be added to be constent with other time related attributes. 6.2.3 - [2] ) - TTWG Response: No longer applicable due to removal of ttp:defaultLengthUnit; however, the larger question of what does "px" mean when specified in a length will be addressed by adding language indicating that the XSL definition applies. 6.2.5) - TTWG Response: Editorial. Will fix. 6.2.6) [1] - TTWG Response: NTSC is not the only case; furthermore, there are a variety of operational usages in studios and in broadcast where frameRateMultiplier may effectively be a continuous function. Use of an enumeration would be overly restrictive and require constant updating. 6.2.6) [2] - TTWG Response: Editorial. Will fix. 7.1.1) [1] - TTWG Response: The goal is to strongly encourage authors and authoring systems to be explicit about language. Specifying xml:space="" is not the same as not specifying xml:space. The former is an explicit authorial expression of "no default language"; the latter leaves authorial intention unexpressed. We wish to enforce some intentional expression even if it is "no default language". 7.1.1) [2] - TTWG Response: We are not sure what is meant by "doing ... [the] job in XML". Our understanding is that a compliant XML processor must always "pass all characters in a document that are not markup through to the application". Our understanding of xml:space is it permits the author to express whether the application should use its "default white-space processing mode" or should "preserve all white-space". Since we have not introduced the full range of whitespace processing style properties into DFXP, such as XSL's linefeed-treatment, white-space-collapse, white-space-treatment, and suppress-at-line-break properties, we instead rely upon use of xml:space as an alternative mechanism for specifying authorial intention regarding whitespace preservation. Nevertheless, we believe it may be useful to re-express the algorithm specifying the meaning of xml:space="default" to instead normatively reference the semantics of the above mentioned XSL properties. 7.1.3) - TTWG Response: Distinct timing context is required on <tt> as opposed to <body> in order to provide a timing container for <head> and thence to <layout> and <region>, the latter of which can be animated. 7.1.7) [1] - TTWG Response: In XSL, <fo:block/> does not produce a block area since it has empty content. Since TT AF maps <p/> to <fo:block/> semantics, two empty <p/> elements from the TT namespace would map to: <fo:block/> <fo:block/> and thus not produce any visible side effect. In contrast, <p> <br/> </p> is defined to produce the same result as <fo:block> <fo:character character=" " suppress- </fo:block> We will review if additional clarification is required to express these intentions. 7.1.7) [2] - TTWG Response: Given <p> <br/> <br/> </p> a compliant presentation processor produces the same results as: <fo:block> <fo:character character=" " suppress- <fo:character character=" " suppress- </fo:block> 7.1.7) [3] - TTWG Response: It was specified for symmetry sake (in order to be uniform on all content elements). We will add an informative note that indicates that it is effectively ignored if specified. 7.2.3) - TTWG Response: Based on a comment above, we believe it may be useful to re-express the algorithm specifying the meaning of xml:space="default" to instead normatively reference the semantics of the above mentioned XSL properties. We will eiother re-express thus or correct the algorithm (which we copied from SVG). 8.1.1) - TTWG Response: No normative semantic has been defined for any processing instruction; therefore, a compliant presentation processor may ignore. 8.2.1) - TTWG Response: We believe there is no inconsistency in presenting the same normative requirements twice, given the different contexts of the specification, provided that there is no inconsistency between the requirements. We believe there is no inconsistency at this time. 8.2.10) - TTWG Response: We believe that specifying both horizontal and vertical sizes may produce a continuously varying anamorphoic transformation on devices capable of rasterizing fonts in a rectangular EM square. We do intend to mandate that a given compliant presentation processor must support either continuous anamorphic scaling of EM square or support some discrete set of anamorphically-transformed font sizes. We also recongize that this feature constitutes an extension not presently supported in XSL, and isn't adequately addressed by section 9.3.2 sub-items 6 and 7 (pertaining to populating XSL style properties). Therefore, we will add additional normative language to 8.2.10 that expresses the intended semantics. 8.2.16) [1] - TTWG Response: Good catch. Will fix. 8.2.16) [2] - TTWG Response: We weren't certain if we could define "scroll" to mean "apply the dynamic flow semantics defined in DFXP"; but given that you suggest this we will change to using "scroll" and define "scroll" semantics according to the dynamic flow features. 8.2.17) - TTWG Response: We did not find the use of three values to be a particularly useful feature, and did not need to support the limited subset of TT AF expressed by DFXP. It is possible that AFXP will support the larger subset. 8.2.18) - TTWG Response: Perhaps, but we feel comfortable basing the usage in DFXP on the current SMIL attribute whose name is "showBackground" [1]. [1] 8.2.19) - TTWG Response: We think that when at author writes "left" in a LRTB writing mode, that they actually mean "start". We want to encourage the author to express their logical intention. We are not certain if there is a strong use-case for specifiying non-relative (absolute) text alignment. 8.2.20) - TTWG Response: A number of TT WG members have a strong preference for providing the ability to specify blur radius. There has been a request for expressing separately the inner and outer color of the blur and possibly gradient parameters to apply at the transition boudnary; we chose a simple compromise that was modeled closely on the text-shadow property of CSS2. 8.2.22) - TTWG Response: As it turns out, (1) the intent of this example was not "to show that 'visibility' can hide text", and (2) there is a bug in the example, in that tts:visibility="false" on the paragraph means the paragraph (and its children) will never be visible. The example DFXP document should have read: <p region="r1" dur="4"> <span tts: <set begin="1" tts: Curiouser </span> <span tts: <set begin="2" tts: and </span> <span tts: <set begin="3" tts: curiouser! </span> </p> We will fix. 8.3.6) [1] - TTWG Response: Well it is either implementation dependent or not. We really want the former. We prefer to let the market decide whether an implementation does something sensible or not. To do something formal would require introducing PANOSE concepts or equivalent which doesn't seem particularly worthwhile. 8.3.6) [2] - TTWG Response: The TT WG believes there are a number of examples of all combinations of {monotype,proportional} x {sans-serif,serif} in use in international subtitling applications that justify labeling all combinations. 8.3.11) [1] - TTWG Response: We will add a cross-reference to XSL definitions and expand further on the definition of "c". But see more below on "em". 8.3.11) [2] - TTWG Response: We will elaborate that "em" in the context of a font that expresses an anamorphic size has two interpretations depending on the context of usage, i.e., depending on whether the length is being used to express a distance along the block or inline progression dimensions. 10.1.2) - TTWG Response: At present, section 10.4 normatively references the semantics of SMIL 2 for the purpose of interpreting these attributes, as well as dealing with possible conflicts (over-constraint scenarios). We are considering fully inlining the timing interval semantics into the spec, which is a greatly reduced subset of SMIL 2 timing semantics; if we do this, then the usage and constraints on these attributes will be fully articulated. ---------- The resolution is archived at: The discussion is archived at: The discussion is archived at: 1st Response from CSS WG to TTWG response 2nd Response from TT WG to 2nd SYMM response Final Response from CSS WG to TTWG response Comment: Dearoshihisa Gonno, Sony Corporation Co-chair W3C SYMM WG email: ygonno@sm.sony.co.jp TTWG Response: The TTWG believes that DFXP [1] does not duplicate, but, rather, reuses existing functionality of existing W3C specifications, and, in particular: XML, XHTML, CSS (through XSL), XSL, and SMIL. In its reuse of vocabulary and semantics from the cited existing W3C specifications, certain changes were necessitated and warranted to satisfy overall requirements adopted for the Timed Text Authoring Format 1.0 (see [2]). Among these requirements is R105 Ownership of Core, which specifies that core functionality is to be specified by the TT WG: <quote> The TT AF specification(s) shall be defined in such a manner that core functionality be specified soley by the TT WG or, in the event that the TT WG is terminated, its successors within the W3C. Note: It is assumed that one or more appropriate namespace mechanisms will be used to segregate core functionality defined or adopted in the TT AF from peripheral functionality defined or adopted by clients of the TT AF. </quote> In order to satisfy this requirement, the adopted vocabulary is placed in the TT Namespace () or a sub-namespace thereof. When adopting vocabulary from existing W3C specifications into the TT Namespace, the TT WG has taken care to change the usage of that vocabulary only in order to satisfy other requirements established by [2]. In order to make this reuse of vocabulary more clear, and in order to offer explanation of the differences introduced by DFXP, the TT WG proposes to add an informative Annex to DFXP that specifies the derivation of vocabulary and explains the differences in usage. The TT WG believes such an Annex will meet the concerns of authors and users of DFXP content and permit them to fully reuse their knowledge of existing W3C specifications. Regarding reuse of existing implementations, at least one member has indicated that they have successfully reused a subset of an existing implementation of XHTML, CSS, and SMIL to support DFXP and did so with only minor modifications. [1] [2] The resolution is archived at 1. Introduction SYMM-1-1: TTWG Response: DFXP is a strict lexical and semantic subset of AFXP. Functionally, DFXP is restricted to those features that can reasonably processed by a streaming parser as opposed to a DOM based parser. In addition, to simply embedding in other streaming application content formats, DFXP is wholly self-contained, and does not require the use of any external resources (such as images, style sheets, time sheets, etc.) It is expected that these constraints will not apply to AFXP, which is expected to make use of XPath expressions to associate styling and timing information with selected elements, and is expected to support references to external image, font, style sheet, and timing sheet resources. The TTWG believes that this additional background information is not strictly necessary in the DFXP specification, but is more appropriately placed in the AFXP specification. SYMM-1-2: TTWG Response: DFXP contains an informative reference to the TTAF 1.0 requirements document [2], which refers to legacy formats and explains use cases. SYMM-1-3: TTWG Response: DFXP is an implementation that satisfies the requirements adopted by the TTWG and documented in TTAF 1.0 Use Cases and Requirements [2]. DFXP was explicitly designed as an interchange format suitable for exchange amongst existing timed text distribution systems. It was also explicitly designed such that it could be directly rendered. The TTWG believes that this design is realized in the current DFXP LC WD and that no technical change is needed to facilitate such usage. Regarding integration of DFXP with SMIL and/or XHTML, while such integration may be defined in the future, perhaps by the TTWG or perhaps by the SYMM or HTML WGs, the TTWG does not believe it is a requirement to define such integration at this time in the DFXP LC. DFXP as defined by [1] can be directly used by an appropriately enabled SMIL or XHTML user agent that supports the DFXP document type and its semantics. No additional integration specification is required to accomplish this usage. The resolutions are archived at 3. Conformance SYMM-3-1: TTWG Response: Section 9.3 "Region Layout and Presentation" in combination with Section 3.2 "Processor Conformance" item (5) fully specifies the rules for rendering DFXP content. Section 3.2 fully specifies all conformance requirements for processing in general. The resolution is archived at 5. Vocabulary SYMM-5-1: TTWG Response: Accepted. An informative table will be added that provides this information for the reader. The resolution is archived at 6. Parameters SYMM-6-1: TTWG Response: Accepted. Examples of use of timing parameters will be added. The resolution is archived at 7. Content SYMM-7-1: TTWG Response: TT-AF 1.0 [1] requirement R105 mandates that all core vocabulary be specified by the TTWG, which has been accomplished by using a TT specific namespace. The derivation of content vocabulary from XHTML is based on the recommendation made by requirement R209. The TTWG believes it has made judicious reuse of XHTML vocabulary in a manner that is consistent with TT-AF requirements and general practice. In particular, the TTWG does not believe any interoperability problems will derive from this usage, and that greater interoperability will derive from familiarity. SYMM-7-2: TTWG Response: The use of timing attributes (begin, dur, end) on the /tt element is predicate upon the need that certain elements specified in /tt/head, in particular, /tt/head/region elements, may have timing intervals associated with them in order to construct animation timelines on the regions. Since these elements do not appear as descendants of /tt/body, there was a need to provide a timing context on a higher level element that includes both /tt/body and /tt/head/region. Regarding the use of certain styling properties as attributes on /tt, specifically tts:extent, this property used in this context defines the extent of an outer containing region, known formally as the "root container region", which is logically equivalent to the page-width and page-height attributes expressed on the fo:simple-page-master flow object as defined by XSL 1.0 [3]. [3] The resolutions are archived at 8. Styling SYMM-8-1: TTWG Response: DFXP is based primarily on XSL expression of styling matter. The formatting semantics of XSL are normatively adopted in Section 9.3.2 where the last paragraph states: "then apply the formatting semantics prescribed by [XSL 1.0]" DFXP employs only a subset of XSL functionality based on requirements stated in [2]. The TT WG takes exception with the assertion that DFXP must adopt the exact syntax and semantics of either XSL or CSS. Those syntactic and semantics that are applicable to DFXP are adopted wherever possible, with divergences only when deemed necessary to meet stated requirements. The TT WG notes that there is are many precedents in W3C technical specifications of adopting existing solutions when possible and then subsetting, supersetting, and modifying as the need arises. For example, one only has to consider the evolution of CSS/XSL and HTML/XHTML languages to see such design principles in practice. SYMM-8-2: TTWG Response: The TT WG believes that direct specification of named color values is technically consistent with CSS2 conventions, and believes there is no merit in forcing authors to resolve external references for such a straightforward and stable enumeration. The resolutions are archived at 9. Layout SYMM-9-1: TTWG Response: DFXP is based on XSL layout semantics as described above, but is extended to meet the requirements documented in [2]. The TT WG believes that it is possible to demonstrate "lightweight" implementations of DFXP content and/or presentation processing. SYMM-9-2: TTWG Response: Agreed. Additional explanatory material and examples will be added. SYMM-9-3: TTWG Response: Allowing separation of styles into distinct style child elements as opposed to mandating their merge provides additional flexibility for authoring tools and transformation processors that may wish to isolate individual styles and refer to them individually via referential styling. The resolutions are archived at 10. Timing SYMM-10-1: TTWG Response: DFXP is based on a subset of SMIL2 timing as document in Section 10.4: "The semantics of time containment, durations, and intervals defined by [SMIL2] apply to the interpretation of like-named timed elements and timing vocabulary defined by this specification..." DFXP departs from the exact syntax and semantics only when requirements dictate such departure. The TT WG takes exception to the notion that DFXP must adopt the exact syntax and semantics of SMIL. DFXP is a distinct media type, and is not intended to replace the functionality of SMIL. It's re-use of timing formulations already adopted in SMIL is merely a convenience for authors that have current familiarity with SMIL concepts. The resolution is archived at 12. Metadata SYMM-12-1: TTWG Response: Careful consideration was given to the possibility of direct use of existing Metadata vocabulary, in particular, the vocabulary defined by Dublin Core. In the final analysis, it was determined that the requirements of DFXP did not match those provided by similar Dublin Core vocabulary. As a consequence, a very limited set of metadata vocabulary was defined to meet the specific needs of DFXP content authors in creating interoperable content with agreed upon meaning. SYMM-12-2: TTWG Response: The TT WG does not agree with this assertion, and believes that there are many use cases for the direct incorporation of metadata into content. Common examples of such usage are prevalent in existing W3C standards, such as XHTML, SMIL, and SVG in their use of title and description attributes or elements. Similarly, XML Schemas provides the xs:documentation element type for author supplied metadata. The resolutions are archived at Appendix B: Dynamic Flow Processing Model SYMM-B-1: Text and diagram should be provided. TTWG Response: Agreed. Either this information will be provided or the feature will be removed. Appendix H: Acknowledgments SYMM-H-1: TTWG Response: The intent of this comment is unclear. If specific persons listed thus would like to have their names removed or attributed in a different manner, then such specific request will certainly be accommodated. ******************************************************************** The resolution are archived at 1st Response fromm TTWG 2nd Response from SYMM WG to TTWG response 2nd Response from TT WG to 2nd SYMM response Comment: Dear Timed Text Working Group, In response to your request [1] for Last Call comments on "Timed Text (TT) Authoring Format 1.0 - Distribution Format Exchange Profile (DFXP)", the Multimodal Interaction Working Group has reviewed the document from our perspective, in particular considering how timed text might be incorporated into multimodal applications. The Multimodal Working Group has not an objection, but an observation to make about the Timed Text Group's last call working draft. Timed Text would be easier to use as part of multimodal interfaces if it had a means of handling external asynchronous events. Such events are the standard means of coordinating among modalities in multimodal situations. Consider a multimodal interface that is using Timed Text and text-to-speech simultaneously to prompt the user, while using speech recognition to gather the user's response. Using ttp:timeBase, the text to speech output can be synchronized with the Timed Text display. However, when the user starts speaking, the multimodal interface would normally want to stop the text to speech play and alter, if not stop, the Timed Text display to indicate that it is now listening to the user. Obviously, the timing of the user's utterance can't be known in advance, so the normal way to do this is to generate a 'speech-detected' or 'barge-in' event, which is then delivered to all the modalities where it is caught by appropriate event handlers. (The event handler for text to speech would halt the current text to speech play. A corresponding handler for Timed Text might flash the display or halt it or make it change colors.) In the current specification, there is no apparent way to handle this event in Timed Text markup. This gap does not indicate an inherent weakness in the Timed Text specification, but we think that it will limit the usefulness of Timed Text in multimodal interfaces. If you would like more information about the overall multimodal architecture that we're envisioning as a potential container for timed text, you may find our MMI Architecture document useful [3]. We would be happy to discuss our observation in more detail if you have any questions or comments. best regards, Debbie Dahl, MMI WG Chair [1] Request for Last Call Comments: [2] TT Authoring Format: [3] MMI Architecture: The discussion is archived at: The resolution is archived at: TTWG response agreed by Requestor at: Comment: <background>G The resolution is archived at: The discussion is archived at: Comment: It has been said on other threads that introduction of the timing model into the layout was due to the following: >>(1) we wanted regions to be temporally activated/deactivated; >>(2) we want to animate certain region styles, such as background color >>(which is independent of background colors deriving from content >>elements) and position; >> >>In order to provide these temporally sensitive features, we need to make >>regions timed elements, which implies a timing context, which in turn >>indicated a need for having the root container element <tt/> be a timed >>container. The use of timing inside layout elements (and, in turn, in the root container element <tt/>) is not necessary if certain other attributes are allowed. I feel that adding timing to these elements is adding unneeded complexity; if left as is, I fear a lot more investigation and documentation will need to be done to cover the non-obvious edge cases. You can do both of the things you described, above, in SMIL 2.0 without the existence of region timing attributes. (1) To temporally activate/deactivate regions: You could add the "showBackground" region attribute to TT and allow the value of "whenActive". See: When "whenActive" is active, "...the background color will not be shown in the region when no media object is rendering into that region". Also, you could allow animation of that attribute by using a <set> or <animate> element in the body that targets the region and its showBackground attribute. The latter option would allow you to turn the region's display on and off when no text was displayed in it. When text is displayed in the region and you want it to be hidden, you can move it behind other regions, resize it to 0x0, move it off screen, ...etc. The second two are contrived, I admit, but moving a region behind another is not, IMHO. (2) To animate region styles, you could use the <set> and/or <animate> element in the body, with the region's id as the targetElement value. For instance, the following SMIL2 Language-profile presentation animates the region's color from blue to red to yellow then back to red then back to blue at one-second intervals: <smil xmlns=""> <head> <layout> <root-layout <region id="r1" regionName="foo" top="10px" left="10px" height="240px" width="320px" backgroundColor="blue" /> </layout> </head> <body> <par> <img src="data:text/plain,Hello" width="50px" height="50px" region="r1" dur="5s" /> <set targetElement="r1" attributeName="backgroundColor" to="red" begin="1s" dur="3s" /> <set targetElement="r1" attributeName="backgroundColor" to="yellow" begin="2s" dur="1s" /> </par> </body> </smil> Note: In SMIL 2.0 you can smoothly animate from one value to another if the value is numerical, e.g., from="#0000FF" (blue) to="#FF0000" (red). SMIL 2.0 also has a shortcut for smoothly animating color called (naturally) "animateColor". - Erik TTWG response agreed by Requestor Comment: Hello public-tt, In 8.3.2 <color> a) what is the color space for the RGB values (I expect it is sRGB, but the spec does not say so) b) Do rgba values represent premultiplied values or not? c) Is the opacity channel linear, or not? d) What color space is used for compositing of two rgba values? Also, in section 8.3.12 <namedColor> the same comment applies (I assume the answers are the same). -- Chris Lilley Chair, W3C SVG Working Group W3C Graphics Activity Lead a) what is the color space for the RGB values (I expect it is sRGB, but the spec does not say so) TTWG Response: The intention is to use sRGB. We will ensure the specification normatively reflects this intent. b) Do rgba values represent premultiplied values or not? TTWG Response: The RGB components of an RGBA tuple expressed by the rgba() function are NOT premultiplied by alpha; we believe this matches the interpretation used by CSS3 Color Module. We will ensure the specification normatively reflects this intent. c) Is the opacity channel linear, or not? TTWG Response: The intent is to be linear. We will ensure the specification normatively reflects this intent. d) What color space is used for compositing of two rgba values? TTWG Response:The intent is that the input and output color spaces from the compositing function are all sRGB. We will ensure the specification normatively reflects this intent. e) Also, in section 8.3.12 <namedColor> the same comment applies (I assume the answers are the same). TTWG Response: The intent is to use sRGBA. We will ensure the specification normatively reflects this intent. The resolution is archived at: The discussion is archived at: TTWG response agreed by Requestor Thierry MICHEL (tmichel@w3.org) Last Updated:$Date: 2006/09/15 10:53:30 $
http://www.w3.org/2005/03/21/DFXPLastCallResponses.html
CC-MAIN-2016-36
refinedweb
8,694
51.68
User talk:Grazzolini Configuration changes? Nice work! I'm planning on converting my systems soon, and I'm particularly interested in what configuration changes need to be made. For example, /etc/default/grub, /etc/grub.d, /etc/mkinitcpio.conf (aside from adding the hooks). Will GRUB automagically detect that the boot disk is LUKS formatted, and Just Work™, or do I need to tell it to add cryptodisk.mod and related modules? EscapedNull (talk) 19:55, 11 March 2015 (UTC) - I am writing, as of this moment, the crypttab configuration. The last part is grub configuration and reinstall. But, grub only needs GRUB_ENABLE_CRYPTODISK=y on /etc/default/grub. The catch is, that you need all your partitions mounted, including your ESP and new boot, and you need to use grub alternative install method mentioned. The grub part is the easiest one. Grazzolini (talk) 20:03, 11 March 2015 (UTC) - Thanks for clarifying. I was unsure because I didn't see any changes to the GRUB files mentioned. The new section is very understandable, and easier than I expected. EscapedNull (talk) 10:47, 13 March 2015 (UTC) - I have finished writing the article. Now I need suggestions and critics for it being able to integrate the main wiki. Grazzolini (talk) 20:40, 11 March 2015 (UTC) - Alright, that's written in a comprehensive structure as a stand-alone page! To start I quickly summarize again the pointers we have for the content from the other talk, a little adjusted after reading what you wrote: - GRUB#Root_encryption - a. the verbose about the CRYPTODISK parameter with link to the upstream doc, b. brief mention of partitioning alternatives for the boot partition - linking to the next for instructions, c. mention of the "double unlock" - Dm-crypt/Drive_preparation#Partitioning - a. own subsection for Grub boot partition with alternative instructions for non-/dedicated /boot, b. one general example of how to convert an existing /boot parttion (User:Grazzolini#BIOS.2FMBR_2; no need for more than one example. I would choose the simple one, User:Grazzolini#UEFI.2FGPT_2 is too verbose in my view) - Dm-crypt/Encrypting_a_non-root_file_system - theoretically this would be an alternate target to #partitioning (above) for more elaborate instructions about /boot alternatives. Personally I find it inappropriate as a target for content about /boot though. - Dm-crypt/Encrypting_an_entire_system (a. addition of one new scenario; IMO best is the EFI/GPT setup with dedicated /boot. The scenario would crosslink anyway with the other sections, e.g. Dm-crypt/Drive_preparation#Partitioning, so the instructions contained in the scenario can be left out from the others. (A bit unfortunate you did not follow the secton structure we have for the scenarios for an example yet.) - Dm-crypt/Specialties#Securing_the_unencrypted_boot_partition - a. content about your chkboot hook - Obviously new content be moved under its own subsections, where appropriate, of above targets. Once you have considered yourself which part could be moved where, you could add link targets to your article to point to. Easier for others to comment on then. That's it from me. --Indigo (talk) 21:21, 12 March 2015 (UTC) - On critiquing, and I'm really just being pedantic here, the article could stand to be written in a slightly more formal tone. E.g. no first person pronouns, objective point of view, trivia and tangents placed in Note/Tip templates, etc. I'm quite happy with the actual content and the overall layout, however. EscapedNull (talk) 10:47, 13 March 2015 (UTC) - Indigo, I'm already evaluating the changes you proposed. Also, Kynikos will take a look at this, in the weekend. EscapedNull, I was reading it again and you are right. It should be entirely in the third person. I am also taking a shot at shrinking some parts. I believe I can write on Partitioning how to resize, sort partitions. It is not only helpful in this case. With that the article will be way smaller. Grazzolini (talk) 16:58, 13 March 2015 (UTC) - I agree with all the points Indigo has made (thank you). Since there is a lot of content to merge, currently structured in a standalone article (which we don't want), I suggest starting to create a new scenario in Dm-crypt/Encrypting_an_entire_system for the the EFI/GPT setup with dedicated /boot case, as Indigo said. Please try to use the same headings and style as the other scenarios in the page, and only add instructions for that specific case. Once that will be done, we'll see how to best move the remaining info in the other dm-crypt subpages, and properly crosslink everything. - About the conversions, I was thinking of the possibility to rename Dm-crypt/Encrypting_an_entire_system to "Dm-crypt/Installing_an_encrypted_system", and create a new "Dm-crypt/Encrypting_an_existing_system" subpage with some scenarios analogous to the current Dm-crypt/Encrypting_an_entire_system, what do you think? - At this stage I wouldn't waste time refining tone/grammar/style, we'll do that once everything is organized. - — Kynikos (talk) 03:14, 15 March 2015 (UTC) - Ok. I have analyzed both Indigo and Kynikos suggestions and I just want to be sure of what needs to be done: - On GRUB#Root_encryption, I will talk about CRYTODISK functionality and mention boot partition schemes. - Dm-crypt/Drive_preparation#Partitioning here I will mention the separate /boot or dedicated /boot and conversion example. I agree that User:Grazzolini#UEFI.2FGPT_2 is too verbose. And perhaps I can trim it a little bit. But I believe it is the most important example, since the User:Grazzolini#BIOS.2FMBR_2 is trivial. I know that initially, most users that will do this, will be seasoned users, that won't need detailed examples. But if this indeed become a default, more examples will be needed. - I also don't think that here Dm-crypt/Encrypting_a_non-root_file_system is the best place for talking about encrypted boot. It is a special case. - As for renaming scenarios, I will leave that to you guys. With GRUB cryptodisk functionality, I believe that it is a bit out of place talk about Dm-crypt/Encrypting_an_entire_system when it is not in fact the entire system that is encrypted. So a rename might be indeed the best solution. I will try to add the content all at once to the pages. I don't know if there is any other people moderating those pages. - Grazzolini (talk) 18:34, 17 March 2015 (UTC) - Thanks for your considerations, Grazzolini. I hope the way proposed above also makes sense for you! Further, I'd like to pick up a point for my understanding: I am not sure from your answer whether we got the same understanding for what we call a "new scenario" in Dm-crypt/Encrypting a non-root file system (or (tbd) a renamed subpage): The idea here is to create a new additional scenario explaining how to install a new system with UEFI/GPT/GRUB CRYPTODISK /bootalong the same structure of the other scenarios we have on the page (if you look at the scenarios again, you will note they are compared with pros/cons and then follow the same TOC structure each). The reason to create a new one is simply that this is the most popular subpage of dm-crypt and users will base their individual install on one of them. You do not mention the scenario itself in your answer, so I am not sure - what do you think about it? Or did you mean someone else (of us) should add the first draft of it? - I agree to Kynikos that on merging the content, it is useful to create the new scenario first. One reason to create it first is that after it has been added, it will be easier to crosslink (respectively add) content not covered in it yet to Dm-crypt/Drive_preparation#Partitioning in a general manner. The last point also applies to our discussion of "BIOS or UEFI conversion". As for your arguments to base the conversion example on User:Grazzolini#UEFI.2FGPT_2, that's fine with me. You put a lot of effort into providing accurate step-by-step instructions! One first suggestion for shortening it: according to my guess most LUKS users will have a separate /boot partition to convert already. Further, keeping /boot together with/on the ESP partition is mostly popular with Gummiboot users. Have a think whether this helps to shorten. --Indigo (talk) 23:05, 17 March 2015 (UTC) - Yes, it made sense to me. Also, the new scenario idea is understood. I took note of your critique in the sense that I didn't wrote the article in the same structure as the other scenarios. It didn't occurred to me at the time I was writing to follow one of them. Sorry! It would be nice if one of you guys would write a draft of a new scenario. Specially because it will mean changing the name of a popular scenario. But I can do it also, no problems. You've guys have been great! It is not by chance that the archlinux wiki appears on the top of so many web search engines. As for thee UEFI/GPT conversion scenario, most of it talk about resizing and creation of partitions. The actual conversion is simple. I believe that this should go into Partitioning or GPT. - Grazzolini (talk) 00:00, 18 March 2015 (UTC) - The mentioned conversion could be another example in GUID_Partition Table#Partitioning examples, yes. It'll get clearer once the first steps are done. I missed above that the GRUB#Root encryption part is the logical start anyway (before the scenario). I'd say feel free to start adding to the articles once you want to; we'll help as we go along. Alternatively, to plan further, you could add links per section to your current article to indicate where you want to put them and strike them once it's done. For the scenario, what could be a name for it? "Encrypted boot partition (GRUB)"? --Indigo (talk) 16:04, 19 March 2015 (UTC) - As I am speaking I am writing on GUID_Partition Table#Partitioning examples the resize example, and the sorting example. I personally dislike the idea of writing a scenario in Dm-crypt/Encrypting_a_non-root_file_system. Specially, because in Dm-crypt#Common_scenarios it's stated that the Dm-crypt/Encrypting_a_non-root_file_system page is specifically for other devices not needed for booting a system. I will also write about CRYPTODISK functionality on GRUB#Root encryption, just pointing to the (very scarce) upstream documentation. But I won't link to any example, at least not now. Just so we can start moving things. I would like to have all this done soon. Thank you again. - Grazzolini (talk) 16:20, 19 March 2015 (UTC) - Great. Yes, let's forget about Dm-crypt/Encrypting_a_non-root_file_system page, Dm-crypt/Drive_preparation#Boot_partition_.28GRUB.29 (or similar section there) is a more thankful target for the examples (of /boot as own partition and in the /). --Indigo (talk) 21:41, 19 March 2015 (UTC) - I will try to write it there, as concisely as I can, and pointing to the already created info in the other articles. Hopefully we can finish it soon. Once again, thank you guys for all the help. - Grazzolini (talk) 15:41, 20 March 2015 (UTC) - I just installed a new system (BIOS, CRYPTODISK=y, dedicated /boot with LUKS, separate /boot password, /boot in crypttab, btrfs root, no LVM (yet)) using your instructions, and I was pleasantly surprised with how painless it was. I might have spent two additional minutes setting up GRUB cryptodisk system in contrast to a cleartext /boot system. That being said, I almost feel like GRUB cryptodisk is too easy to warrant its own section on the scenarios page. Keep in mind this ignores LVM, UEFI, converting an existing system, etc. Instead of duplicating 90% of the process in another scenario, we might be better off placing notes like (e.g.) "If you want an encrypted /boot, use cryptsetup luksFormatinstead of mkfsto format /dev/sdX2. See Dm-crypt/Specialties#GRUB_cryptodisk." I will ultimately leave it up to Indigo, Kynikos, and Grazzolini, however. EscapedNull (talk) 22:05, 21 March 2015 (UTC) - Well, I thought similar, that's why I argued against a dedicated scenario when we started the talk. But I now think we should continue to add one, we just must make sure cryptodisk is not the only differentiating factor for the new scenario. We already agreed it using UEFI is a good other feature to illustrate in the new scenario along with it. Reading that you use btrfs for your root makes me remember an idea to add a configuration example for encrypted btrfs raid1. Maybe we could use the new cryptodisk scenario to fill the still missing RAID example. (features: EFI, non-raid but cryptodisk /boot, luks encrypted btrfs raid1 root). Your recent install experience would definetely be helpful! Would that be good or get too special for most users (who likely want to use standard LVM on LUKS instead) again? Thoughts? One point I am unsure about myself is whether we can get the two LUKS devices for the btrfs raid1 unlocked with the standard hooks; we would not want to add a modified encrypthook for a standard scenario. You know that? Maybe: Does a btrfs root work with the systemd sd-encrypthooks? (that would be another feature great to add on that subpage and make much sense with a cryptodisk /boot). --Indigo (talk) 12:40, 22 March 2015 (UTC) - I agree that UEFI should be our main focus, because it is becoming standard, and the process is less obvious. I've only been ignoring it because I don't own any (U)EFI systems, and I have zero experience with them. As for using a multi-device Btrfs root, I successfully installed a four device raid1 encrypted root and remote unlocking on one of my machines, and there's no reason it would be incompatible with GRUB cryptodisk. However, I had to heavily modify the encryptsshhook to make it work. The cryptdeviceparameter only takes one device. There might be some hope using the sd-encrypthook, because it copies /etc/crypttabinto the initrd, which might allow multiple devices to be unlocked, but I haven't tried it yet. See my comments on dropbear_initrd_encryptAUR. - I do feel that a multi-device Btrfs root could fit into the scenarios page without breaking the "same section names" rule, but I really don't know how popular it would be for most users. Aside from modifying the hook, I didn't think it was much harder than any of the standard setups, but then again I've been doing similar setups for a while now. I do intend to enable GRUB cryptodisk on that server soon, so I will be able to confirm that it is indeed compatible with a Btfs multi-device root in practice as soon as I do. I also don't think it's a good idea to write scenarios that rely on heavily modified hooks, so the Right Way™ here is to fix the encrypt hook to make it more flexible, and Grazzolini seems to have a solid plan for that, at least as far as dropbear_initrd_encryptAUR goes. That's off-topic, though, as compatibility with GRUB cryptodisk is the only thing relevant to this page, but please do feel free to start a discussion on my talk page or elsewhere if you're interested! - This is a bit of a rant, so sorry in advance for my demeanor. We've been going back and forth about scenarios and crosslinks and partitioning, etc. since User:Grazzolini finished the page over two weeks ago, and we've gotten nothing actually done in the default namespace. I'm more than happy to help out with writing/migrating, but can we please try to agree on what we're doing so I can get started and Grazzolini can get back to coding? From what I understand, Grazzolini has more things in the pipeline (encrypted chkboot in early userspace, decoupling dropbear from encryptssh, refactoring the nethook), and I'd like to see things move along. Are we writing a scenario or not? Are we branching the instruction flow of the existing scenarios using "Note: use cryptsetup luksFormatinstead of mkfs, see GRUB cryptodisk" tags like I mentioned? Are we going to include "Converting an existing system" tutorials anywhere? Are we going to give GRUB cryptodisk a section under Dm-crypt/Specialties? Let's see if we can reach a consensus, rehash, and get to work. EscapedNull (talk) 21:15, 22 March 2015 (UTC) - Have a look again, Grazzolini has started to merge content for (1) and (2) of above list before your replies. Yes, we want a scenario and it could be a todo for you to draft, if you want (see his comment above on it). Do you? I dumped the ideas above, because we all want it to be a meaningful new addition to the page. We have covered on where to merge the content, e.g. also where to put the "conversion" (please re-read above). Your suggestion for a general ""Note: use cryptsetup luksFormat..." (good idea) we can easily add to the general content above the scenarios, once the content is in place. --Indigo (talk) 22:28, 22 March 2015 (UTC) - Sorry, I didn't see the changes to Dm-crypt/Encrypting_an_entire_system at first. I was using the wrong search terms apparently. So I guess it was just me getting nothing done in the default namespace. I wasn't sure if your overview of changes was still valid or not, since there were some new ideas introduced later in the discussion. I'm currently working on setting up a UEFI system under QEMU to test the scenario, and I'll write a draft as I go along. EscapedNull (talk) 00:40, 23 March 2015 (UTC) (Thread moved to left for readability --Indigo (talk) 12:49, 25 March 2015 (UTC)) Nice to see things moving! Yes, I have a lot on my plate right now. But I will get this done this week. I want to improve mkinitcpio-chkcryptobootAUR, so it can detect boot partition bypass. After that I will go back to dropbear_initrd_encryptAUR. It will get a new name. But unfortunately, you can't use it with cryptodisk, because you won't get a completely remote unlock with it. If you come to think about, they are in the opposite directions in the sense of security. I use dropbear because I want the convenience of being able to unlock some servers of mine, from anywhere. I believe that GRUB cryptodisk is most useful in making your day to day machine (laptop or desktop) more secure. I plan to add even more checks to mkinitcpio-chkcryptobootAUR. It is working for me right now, but perhaps it can help others also. Also, I am not that worried about a new scenario or not. I will add it to the pages and crosslink. We will see where it leads. Hopefully it will get traction enough so it will get it's own scenario. Grazzolini (talk) 04:27, 23 March 2015 (UTC) - Sorry for not replying for many days guys, I've tried to follow the discussion even if I'm a bit short of time (as always :P). I have to agree a bit with EscapedNull's rant though, as I said at this stage we have to start making the new scenario, so we can discuss on something more mergeable than this guide, which is already a valuable effort of course! That's why I've started User:Kynikos/Cryptodisk scenario draft, I'll try to improve it little by little, but you're all invited to contribute, I haven't created that page only for me to work on it :) — Kynikos (talk) 13:48, 23 March 2015 (UTC) - You're right. I realized that cryptodisk would not be suitable for remote machines shortly after I posted that. GRUB already has networking code (for PXE boot, etc), but it has no remote administration interface, so one would need a remote management card (in the case of a server that supports them), or a physical keyboard. Cryptodisk might improve security on a server, but remote unlocking is just not a possibility without some serious GRUB (or hardware) hacking. - I just finished my GRUB cryptodisk with EFI/GPT in a VM, so I'll begin drafting the scenario very soon. This tutorial will be for new installations, but perhaps a conversion tutorial would be appropriate at some point. The draft is named User:EscapedNull/GRUB_Cryptodisk_UEFI_GPT for anyone who wants to add it to their watchlist. Feedback is appreciated! - Note: I didn't Kynikos's reply until I got an edit conflict trying to submit my reply (above). Since your draft is further along, I'll delete my page and work on yours. EscapedNull (talk) 16:33, 23 March 2015 (UTC) - From what I saw, the Kynikos page is better organizes than mine. Keep note that I already added the sort/resize examples to Partitioning and there is a mention already of cryptodisk in GRUB page. I believe that some crosslinking is due. What you guys think is the best place to mention mkinitcpio-chkcryptoboot? I am still writing the code to detect boot bypass, it will ready soon. Kynikos, I will take a look at the scenario in your page. I already have some remarks. GRUB is slow opening a LUKS device with sha512. I believe it does not have support for cpu's AES-NI, since there is no kernel loaded yet to benefit from it. Grazzolini (talk) 20:36, 23 March 2015 (UTC) - I would vote Dm-crypt/Specialties is the best place for mkinitcpio-chkcryptobootAUR. As for boot speed, I first used cryptsetup's default sha1, and it was taking about two seconds. Tested with --hash sha512, it might take three seconds. These are both tested under QEMU (not sure if it uses AES hardware acceleration at all) with cryptsetup defaults. For me, I wouldn't mind spending three seconds a month for some extra security. Besides, I'm no cryptographer, but I thought sha256was designed to be a slow, iterative PBKDF primitive, while sha512was designed to be a fast data hash. In other words, I didn't think the speed was necessarily a function of the output length. With all due respect, the default is sha1, so I wouldn't dwell on choosing the "right" hash spec when we have more important things to worry about right now. EscapedNull (talk) 10:29, 25 March 2015 (UTC) - I've had a very busy week, but I am now back to start moving things along. I am finishing things with mkinitcpio-chkcryptobootAUR, just having some troubles with Desktop_notifications. By the way, I will contribute some changes to chkboot-gitAUR also, just don't know if the maintainer will pick them up. I don't hear back from him in weeks. As for your test results EscapedNull, if your host machine has AES-NI, your hypervisor will make use of them on the guest machine. I believe that it would make GRUB syscalls, even if originally not able to use it, use them under the hood, thus making it faster. I can measure a difference in the time to open a device with sha512 and a device formatted with sha256. Also, I don't think that cryptsetup --helpcryptsetup defaults, specially using SHA1 are good enough. As I mentioned, it is a trade off. I am no cryptographer either, but as far as I know, sha256 is faster than sha512. Both are slower than sha1. Since hashing only is important when unlocking the device, I believe we can suggest using a better hash spec, as it is already done in the wiki. Either way, I am finishing moving things this week. So that we can have feedback from a more broader audience. - Grazzolini (talk) 14:44, 30 March 2015 (UTC) - Grazzolini, in the scenario subpage we crosslink to Dm-crypt/Device_encryption#Encryption options for LUKS mode in each scenario so that readers are aware but keep commands in the examples to default. That's the place indeed. Regarding you observation I added [1]. I actually had no reference whether GRUB (outside hypervisor) has AES-NI or not (if someone has a reference please add it, thanks), but your observation suggests it: Whatever hash you used at LUKS blockdevice format made XYZ iterations. Now on boot the GRUB software implementation has to perform the same XYZ iterations without the hardware instructions. --Indigo (talk) 11:50, 12 April 2015 (UTC) - Indigo, I took a look at the source code and couldn't find any usage of the hardware instruction set from within GRUB. At this moment, it is not AES-NI aware. I suspected because of the long unlock time. But, I took a look at the cryptsetup code, and, as far as I can tell, the AES-NI only kicks in, after the master key was successfully unlocked. Of course, with a kernel loaded, these operations are faster, because cryptsetup use the kernel primitives, which are optimized. AES-NI, in this context, would only make encryption/decryption operations faster, not unlock. I believe that the unlock from GRUB is slow, because it is a software only implementation. And, it is specific to them. I don't know if they can take shortcuts or not, nor if they primitives implementation can be improved or not. But using a smaller hash size and less iterations, seems to improve the boot speed. I will take a look and see if their code can be improved. But since we are talking about a boot loader which have a series of constrains and also has to be portable, I don't see much more optimization that can be done. I have been busy as hell, and that's the main reason why I didn't moved things. This is week I believe things will be a little bit less chaotic so I can finish moving things. - Grazzolini (talk) 16:04, 14 April 2015 (UTC) - Now that I have reached the minimal functionality I intended for mkinitcpio-chkcryptobootAUR, I can focus on finishing the content on the wiki. Just need to know if the content in User:Kynikos/Cryptodisk scenario draft should be used or if I can follow with grammar/tone changes of my own content and put it on the pointed places. It would be nice to hear from Kynikos or Indigo before I start moving things. - Grazzolini (talk) 19:53, 1 April 2015 (UTC) - When ready the draft scenario in Kynikos' user page is meant to be moved for point (4) above, i.e. installation instructions. We are always keen to keep the verbose text in the scenarios low (short&concise install instructions) but crosslink explanations. That we want to crosslink to the other content from it later is good to have in mind, but that's the only thing regarding it. - It would be best to start with covering the points above (named (1a), (1b), (1c), (2a), (2b), etc.), i.e. what we agreed on as a base before. Don't worry about grammar/tone, just please remember ArchWiki:Contributing#The 3 fundamental rules (the first two:) and avoid first person statements perhaps. For your already written content, you could either copy/move a subsection from your user page in one edit and then adjust it in place with subsequent smaller edits, or edit it first to fit the destination, move it and just do final touches there. (Btw you can of course also put templates - like Kynikos did in the scenario - to make aware for expansions/needed input/help/etc). If you are unsure about how to best merge the content to the wiki/fit it, have a go with another part first (e.g. 5a to Dm-crypt/Specialties#mkinitcpio-hook, or 2a for Dm-crypt/Drive preparation#Boot partition .28GRUB.29) and wait for feedback. --Indigo (talk) 21:55, 1 April 2015 (UTC) - Grazzolini, we are working on finalizing (4.) of above tasks in User:Kynikos/Cryptodisk scenario draft to merge it over to the scenario subpage. If you find time, read over it - that would be great. Any points you want to note can be left in User talk:Kynikos/Cryptodisk scenario_draft#Merge prerequisites.3F. Any changes you see important, feel free to edit the draft directly. --Indigo (talk) 18:08, 18 April 2015 (UTC) Edits to terminator Hi, you recently removed some details in terminator regarding the two different versions available. I was just wondering your rationale was? After your edits, I think the page implies that the two versions are on the same branch (which is incorrect; they use different gtk and vte), and that the version in the official repository is just a tagged, stable version of the *-gtk3-bzr version (which is also incorrect; the former is deprecated). I've reverted it in the meantime, but mainly so that I don't forget. I am by no means heavily tied to this new version, so please feel to re-edit, or reply here with more details. Cheers, Ostiensis (talk) 00:00, 4 January 2017 (UTC) - Hi, the terminator on official repositories is now using the official release of the gtk3 branch, check [3] and [4]. This is why I changed the terminator page, the package is now using that official first (pre?)release of the gtk3 branch. This was done because the old branch wouldn't receive bug report nor fixes and all development is being done on the gtk3 branch. Grazzolini (talk) 14:28, 4 January 2017 (UTC) Oops, sorry, I missed that update. I presumed that the official terminator would stay on the gtk2 branch, because it was the stable version. Apologies, and I've reverted the wiki back to your edits. Cheers, Ostiensis (talk) 23:03, 4 January 2017 (UTC) - It probably would remain on that branch, but upstream no longer accept bugs for it. So the choice was let it rot, or update it and deal with possible breakage. Grazzolini (talk) 00:11, 5 January 2017 (UTC)
https://wiki.archlinux.org/index.php/User_talk:Grazzolini
CC-MAIN-2019-43
refinedweb
5,046
62.88
BAR Contents - 1 Installation - 2 Description - 3 Accessing BARs - 4 Scripting BARs - 5 FAQ - 6 Citation - 7 License BAR: A collection of Broadly Applicable Routines. The collection contains Macros, Scripts and Plugins focused on Data Analysis, Image Annotation and Image Segmentation. It is curated using GitHub and distributed as an optional update site. Installation Run Help ▶ Update... and choose Manage update sites. Activate the BAR checkbox in the alphabetically-sorted list of update sites. Press OK, then Apply changes. Restart ImageJ. That's it. Enjoy BAR! Description BAR files are accessible through a dedicated top-level menu subdivided in task-oriented categories. All routines should be documented on GitHub. Some of the scripts have a dedicated documentation page, others feature built-in help, while a handful were deemed too simple to require dedicated instructions. Nevertheless, all files contain useful commentary at the top of the source code file. Remember: You can open all the scripts using the Shift key. List of BARs - Analysis - LoG-DoG Spot Counter, Multi ROI Profiler, Multichannel Plot Profile, Multichannel ZT-axis Profile, Smoothed Plot Profile - Data Analysis - Create Boxplot, Create Polar Plot, Distribution Plotter, Find Peaks, Fit Polynomial, Interactive Plotting - Segmentation - Shen-Castan Edge Detector, Apply Threshold To ROI, Clear Thresholded Pixels, Remove Isolated Pixels, Threshold From Background, Wipe Background - Snippets, BAR lib and Tutorials - Described in Scripting BARs - Tools and Toolsets - Calibration Menu, List Folder Menu, Segment Profile, Shortcuts Menu, ROI Manager Tools, Toolset Creator Accessing BARs As with all ImageJ commands, BAR scripts can be accessed in multiple ways: 1) through the BAR▷ menu, 2) the Context Menu, 3) Keyboard Shortcuts, 3) the Shortcuts Menu Tool (BAR▷ Tool Installers▷ Install Shortcuts Menu), that registers frequently used commands in the ImageJ toolbar, 4) by pressing L, or 5) from other scripts, macros and plugins. Context Menu BAR▷ submenus can be appended to the image's context menu (the menu that pops up when right-clicking on the image canvas) by running BAR▷ Submenu▷Move Menu (Context<>Main). The transfer is bidirectional: once in the context menu, running the same command will place the submenu back in the main menu bar. The shuttling mechanism is not permanent, i.e., it will not be remembered across restarts. However, it is macro recordable which means it can be imposed at startup, using the ImageJ macro language. So, e.g., to install BAR▷ Segmentation▷ in the context menu, one would: - Start the Macro Recorder ( Plugins▷ Macros▷ Record...) - Run BAR▷ Segmentation▷ Move Menu (Context<>Main) - Open the Edit▷ Options▷ Startup... window and paste the string generated by the Macro Recorder into its text area so that ImageJ can run the command at every startup. It may be wise to allow ImageJ enough time to register all scripts before triggering transfers to the context menu. This can be achieved through the built-in macro function wait(). For a slow setup requiring at least 1 second (1000 milliseconds), the pasted code would look something like this: Notes - The several Move Menu (Context<>Main) commands across BAR▷ submenus do not use the same label and are distinguishable by extra trailing spaces. This is intentional because all ImageJ commands must have unique names. - Any toolset loaded via the ">>" More Tools drop down menu can define its own contextual menu (as detailed in the ImageJ User Guide, the contextual menu is controlled by a macro called Popup Menu that gets loaded at startup). To have BARs immediately available when such toolsets are loaded, just append the same run("Move Menu (Context<>Main)"); call described above for StartupMacros. Commander Since the majority of BARs are scripts stored in dedicated files, BAR features Commander (BAR ▶ BAR Commander...), a keyboard-based file browser that produces filtered lists of directory contents. It is a productivity tool that applies the principles of Command Launcher to file browsing, providing instant access to files just by typing abbreviations of filenames. It serves two purposes: 1) to expedite the opening of files and 2) to produce filtered lists of directory contents. Features include: drag-and-drop support, interaction with native file manager, regex filtering, and a built-in console for common operations. Console mode is triggered by typing !, which evokes a list of searchable commands so that all file navigation can be done exclusively with the keyboard. Some of these (cd, ls, pwd, etc.) are reminiscent of commands found in most command-line interfaces. Here are some examples: - To access ImageJ's LUT folder - Type !+L+U+T+⌅ Enter - To access all JavaScript lib files - Type !+L+I+B+⌅ Enter, then .+J+S - To reveal the directory of active image - Type !+I+M+P+⌅ Enter, then choose Reveal Path. - To access Commander's built-in help - Type !+H+E+L+P+⌅ Enter - To extract the paths of all TIFF images in a directory - Drag and drop the desired folder into the Commander list Type T+I+F+⌅ Enter Choose Print Current List in the Options Menu or press ^ Control+P (⌘ Command+P in Mac OS). Keyboard Shortcuts You can use Plugins▷ Shortcuts▷ Create Shortcut... to assign hotkeys (e.g., keyboard key that you do not use frequently such as 0 or F7) to any script registered in the BAR▷ menu. These shortcuts will be listed in Plugins▷ Shortcuts▷ and are remembered across restarts. Alternatively, keyboard shortcuts can be defined in macros that call BAR commands by placing the shortcut key within square brackets at the end of the macro name. Such macros can pass specific options to BAR commands, allowing scripts to run without prompt. Example: macro "Remove Round Structures [0]" { run("Wipe Background", "size=100 circ.=0.75-1.00"); // Runs Wipe_Background.ijm with the specified parameters } As mentioned, such macros can then be pasted into the text area of Edit▷ Options▷ Startup... so that they can be executed when ImageJ starts up. Scripting BARs Although BARs can be used as standalone commands, the scripts and plugins in BAR become more useful when incorporated into other routines. You can use BARs as a starting point for your own workflows. Whether you are just looking to automate a simple task or you are an advanced developer, you can use BAR to achieve your analysis goals more easily, by means of Snippets - source code templates - and libs - scripting additions to be shared across routines. Snippets BAR contains a directory, plugins/Scripts/BAR/Snippets/, containing multi-language examples that you can customize and recycle in your own scripts. You can, of course, also retrieve code and inspiration from the more complete BARs in the remaining plugins/Scripts/BAR/ subdirectories. Any script or macro file stored in the Snippets/, folder with an underscore "_" in the filename will be listed in BAR▷ Snippets▷. The Snippets▷ menu contains some utilities to help you manage your scripts: - List Snippets - Prints a table listing all scripts in plugins/Scripts/BAR/Snippets/. Files can then be opened in the Script Editor by double-clicking on their filename. - New Snippet - A java plugin that speeds up the creation of new scripts, pre-configured to use BAR lib. - Reveal Snippets - Opens plugins/Scripts/BAR/Snippets/ in the file browser of the operating system. - Search BAR - Searches the contents of BAR files. BAR lib BAR libs (stored in the /BAR/lib/ directory) are centralized libraries (BeanShell, IJM and Python, etc.) that can be shared across files. These libraries serve as scripting additions to Snippets and other routines. Do you find yourself copy and pasting functions from one file to the other? Do you keep on writing the same lines of code? Do you have some key code written across different languages? Would you like to make side-by-side comparisons of scripting languages? Then, BAR lib is for you. The idea is quite simple: Reusable functions and methods are written to a lib file that gets loaded at execution time so that it can be called by the running script. BAR▷ Snippets▷ New Snippet... exemplifies how to use these scripting add-ons. Here is a BeanShell example: // Add BAR/lib to classpath addClassPath(bar.Utils.getBARDir()); // See for details importCommands("lib/"); // Load BARlib.bsh BARlib(); // Confirm availability of BARlib lib = new BARlib(); lib.confirmLoading(); Run it in the Script Editor (File ▶ New ▶ Script...), and you should be greeted by a "BAR lib successfully loaded" message. Further details are provided on the GitHub lib page and on the documentation of the bar.Utils class. Batch Processors Some of the scripts included in /BAR/Snippets/ are scripts that apply a common operation to a directory. These batch processors are implemented in different languages and perform the following operations: - Take an input folder specified by the user - Apply a series of operations to individual files of matched extension(s) - Save processed files as TIFF to a dedicated directory, so that no files are overwritten Typically each of these tasks is handled by separated functions so only the function processing single files needs to be edited. In the Python and IJM implementation, this processing function is called myRoutines(). Note that when editing myRoutines() we do not need to worry about opening, closing or saving the image without overriding the original file, because those tasks are already performed by other functions. Processing Functions The file-processing function can include your own code, code generated by the Macro Recorder (Plugins▷ Macros▷ Record...), pre-existing snippets or methods/functions defined in a common BAR lib file. IJM example, running a macro and a python script in the Snippets/ (another example below exemplifies how to call a macro function from BARlib.ijm): function myRoutines() { snippetsPath = call("bar.Utils.getSnippetsDir"); runMacro(snippetsPath + "MyCoolestMacro.ijm"); eval("python", File.openAsString(snippetsPath + "Median_Filter.py")); } Jython example, demonstrating how to 1) load BAR lib (in this case BARlib.py, using code generated by Bar▷ Snippets▷ New Snippet...) and 2) how to run a Snippet (in this case Median_Filter.py): def myRoutines(): import sys, ij, bar # 1) Call a lib function: # 1.1) Extend the search path to /BAR/lib/ sys.path.append(bar.Utils.getLibDir()) # 1.2) Import all functions in /BAR/lib/BARlib.py import BARlib as lib # 1.3) Call a function from the file lib.confirmLoading() # 2) Run a script directly: from org.python.util import PythonInterpreter script = bar.Utils.getSnippetsDir() + "Median_Filter.py" PythonInterpreter().execfile(script) Example: Batch Randomization of Filenames The default task of both the Python and IJM implementation of BAR Process Folder scripts is filename randomization: 1) They copy images from one folder to another, 2) Rename their filenames using a random string and 3) Log changes to a CSV table (so that id of randomized filename can be traced back to the original file). This approach allows for blind analyses of datasets that are sensitive to user interpretation. Below are the descriptions of Process_Folder_PY.py and Process_Folder_IJM.ijm. Python In Python we can use the uuid module to generate a random filename (rational here). The result is an impressively succinct function: def myRoutines(image): import uuid image.setTitle( str(uuid.uuid4()) ) In more detail: Pass the active image - an ImagePlus object - to myRoutines(). Retrieve a random UUID (e.g., f7dfd6a9-f745-42c2-8874-0af67380c3f5), convert it to a string, then use that string to rename the image using the setTitle() method in ij.ImagePlus. But because BAR libs already contain such a function, we can just call the randomString() function in BARlib.py, after loading the file: def myRoutines(image): import sys, bar sys.path.append(bar.Utils.getLibDir()) import BARlib as lib image.setTitle( lib.randomString() ) To log filename changes, we could use the same strategy used for the IJM implementation. The simplest way to generate a CSV list would be to use ImageJ's Log window: def myRoutines(image): import uuid from ij import IJ # Remember original filename before changing it log_row = image.getTitle() # Rename image image.setTitle( str(uuid.uuid4()) ) # Append modified filename to CSV row log_row += ", " + image.getTitle() # Print row IJ.log(log_row) However, we can use the csv module to achieve a more robust implementation: import csv, os # Create a CSV table documenting processed files csvPath = out_dir + "_ProcessedFileList.csv" csvExists = os.path.exists(csvPath) csvFile = open(csvPath, 'a') csvWriter = csv.writer(csvFile) # Specify column headings if not csvExists: headers = ['Original path','Processed path'] csvWriter.writerow(headers) As such, we only need to add the following, every time a file is processed: csvWriter.writerow([old_filename, new_filename]) Visit the BAR repository to check how the assembled script (Process_Folder_PY.py) looks like. IJ Macro Language In an ImageJ macro (IJM) we will need to define first a function that produces a random filename. The IJM language does not feature an equivalent to the UUID module used previously in the Python implementation. So, we are left with two approaches: 1) call java.util. randomUUID directly, or 2) write an ad-hoc function. For the former, we take advantage of the IJM language built-in call() function, that calls public static methods in any Java class that ImageJ is aware of: function myRoutines() { randomString = call("java.util.UUID.randomUUID"); rename(randomString); } or even shorter: rename(call("java.util.UUID.randomUUID")); But discovering which methods can be called by the IJM language may not be trivial. Typically, it will require access to an IDE and some Java experience. So what about writing an ad-hoc function? The approach used in Process_Folder_IJM.ijm is the following: 1) Take a template string containing the characters A-Z and digits 0-9; 2) Pick a random position between the first and last character of the string template. Extract the character at that position; 3) Repeat the last step several times, assembling extracted characters into a concatenated string: function randomString() { // We define the template and measure the number of positions template = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"; nChars = lengthOf(template); // We define an empty string that will hold the result string = ""; // We will want a final string to be 10-characters long for (i=0; i<10; i++) { // Define a random position between 0 and the penultime character in template idx = maxOf(0, round(random()*nChars-1)); // Extract and concatenate characters string += substring(template, idx, idx+1); } // return result return string; } This would create e.g., NHH6KG30C9. However, a lengthier filename may be required. Larger filenames would be harder to read, so we can insert hyphens at fixed positions. We will improve the function by passing two arguments to it: the length of the desired filename and a boolean flag to instruct on the usage of hyphens: function randomString(length, spacers) { template = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"; nChars = lengthOf(template); string = ""; for (i=0; i<length; i++) { idx = maxOf(0, round(random()*nChars-1)); string += substring(template, idx, idx+1); if (spacers && i%5==0) string += "_"; } return string; } As such, calling randomString(50, true) would produce e.g., E_ZXTQO_8E9XM_45WG7_8S39. As with the Python implementation, we could also use BAR lib (in this case BARlib.ijm). First, we need to load the file before running our macro, using the code generated by Bar▷ Snippets▷ New Snippet: libPath = call('bar.Utils.getLibDir') + 'BARlib.ijm'; libContents = File.openAsString(libPath); call('ij.macro.Interpreter.setAdditionalFunctions', libContents); // Press 'Run' twice to confirm availability of new additions confirmLoading(); Once lib/BARlib.ijm is in memory, the function can be called directly: rename( randomString(50, true) ); Now that we are capable of generating a random filename, we just need to monitor filename changes. We could use the print() function (that outputs to the Log window), or create a two-column table describing the changes. Here is how the final Process_Folder_IJM.ijm looks like: function myRoutines() { // Note that functions can contain other functions function randomString(length, spacers) {...} // Since we'll be logging filename changes to the Results table, we // need to keep track of the table's measurement counter. We'll assume // that the Results table was closed when myroutines() was first called availableRow = maxOf(0, nResults); // Log original filename before changing it setResult("Original filename", availableRow, getTitle()); // Rename image and log changes rename(randomString(20, true)); setResult("Randomized filename", availableRow, getTitle()); } - What is the rationale behind BAR? - The motivation behind bar is quite simple: To collect snippets of code that can be incorporated into any workflow. So rather than performing fully fledged procedures, BARs tend to produce single tasks. The advantages are two-fold: 1) Scripts are easier to understand and maintain and 2) they can be used to complement any other ImageJ add-on, let it be the simplest macro or the most sophisticated plugin. - Will I find BAR useful? - Probably. But it is likely that you will need to delve a bit into the BAR philosophy. - Can I contribute to BAR? - Yes, please do! If you have some suggestions on how to improve it, do let us know. - Nothing happens when I run a BAR. What's going on? - In a case of premature termination BARs tend to exit rather silently. The best way to have insights on an unexpected error is to run it directly from the Script Editor: Open the script by holding ⇧ Shift while selecting it from the BAR▷ menu, press Run and have a look at the editors' s console, where all sort of useful messages will be printed to. Do let us know if you have found a bug. - Does BAR work outside Fiji/ImageJ2? - Yes, but with limitations. ImageJ1 (see ImageJ Flavors if you have doubts about existing ImageJ distributions) will only register scripts saved in the plugins/ folder or on one of its immediate subfolders. For this reason, some of the BAR submenus will appear as empty, and it may not be possible to navigate the BAR/ directory using menu commands (Commander could still be used, nevertheless). Another important aspect is that, without access to the built-in updater, you will have to manually update BAR (by monitoring its rpository), and to manually install (and update) the dependencies (i.e., third-party plugins and third-party libraries) used by BAR). - How do I uninstall BAR? - Run the Updater (Help ▶ Update...). Choose Advance Mode then Manage update sites. Deactivate the BAR checkbox in the alphabetically-sorted list of update sites. Press OK, then Apply changes. All BAR files will be deleted. Note that you can install and uninstall BAR as you see fit. See How to follow a 3rd party update site for more details. - I get an error when I try to load BAR lib (IJM). Why? - Macro functions from a IJ macro lib may only be available once a new instance of the macro interpreter is initiated (this is not the case for other scripting languages). This means you have to call ij.macro.Interpreter.setAdditionalFunctionsbefore running your macro. You can test this by running the default macro generated by BAR▷ Snippets▷New Snippet...: // Load BARlib.ijm libPath = call('bar.Utils.getLibDir') + 'BARlib.ijm'; libContents = File.openAsString(libPath); call('ij.macro.Interpreter.setAdditionalFunctions', libContents); - The first time you run confirmLoading();ImageJ will complain about confirmLoadingbeing an undefined identifier, but it will successfully recognize the call the second time you run the code above. Citation BAR scripts can be cited using the DOI associated with the repository: License These scripts.
http://imagej.net/BAR
CC-MAIN-2017-26
refinedweb
3,181
55.44
Important: Please read the Qt Code of Conduct - stop a loop in object that has been moved to a QThread? Hi I currently write a communication app that needs to continously receive SPI data. So I have a mainwindow with a start and stop button. I created a worker class which then contains the SPI communication. To not block the mainwindow I created a QThread and moved the worker class object into it. I can start the doWork() process sucessfuly but how to stop such a loop? Sending signals to the worker class seems not to work since I guess the eventloop is blocked as long as the doWork() loop runs. How to solve that? mainwindow.h #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QWidget> #include <QThread> class MainWindow : public QWidget { Q_OBJECT QThread thread; public: explicit MainWindow(QWidget *parent = nullptr); ~MainWindow(); }; #endif // MAINWINDOW_H mainwindow.cpp #include "mainwindow.h" #include <QPushButton> #include "worker.h" #include <QThread> MainWindow::MainWindow(QWidget *parent) :QWidget(parent) { QPushButton *pbStart = new QPushButton("START LOOP",this); QPushButton *pbStop = new QPushButton("STOP LOOP",this); pbStop->move(0,40); Worker *worker = new Worker; worker->moveToThread(&thread); connect(&thread, &QThread::finished, worker, &Worker::deleteLater); connect(pbStart, &QPushButton::clicked, worker, &Worker::doWork); thread.start(); } MainWindow::~MainWindow() { thread.quit(); thread.wait(); } worker.h #ifndef WORKER_H #define WORKER_H #include <QObject> class Worker :public QObject { Q_OBJECT public: Worker(); ~Worker(); public slots: void doWork(); }; #endif // WORKER_H worker.cpp #include "worker.h" Worker::Worker(){} Worker::~Worker(){} void Worker::doWork() { while(true){ // SPI loop communication } }; Can you give me some hints how to solve that? take a look on how they use signal/slot to communicate with the worker object and avoid using an infinite loop - SGaist Lifetime Qt Champion last edited by Hi, If you want to keep the loop then add a variable to break it. @pauledd said in stop a loop in object that has been moved to a QThread?: Worker::doWork() As shown, at no time is this function checking the Qt Event loop. public slots: void doWork(){ ... while(m_running){ // process server UA_Server_run_iterate(m_server, true); // process events like signals QCoreApplication::processEvents(); } ... } void stopWork(){ m_running = false; } There are signals connected to the slots above to control my server. thank you for your hints. I go for @fcarney 's solution for now, but I almost surely will have to gather SPI data in bigger chunks too and send them back to the gui for further processing/storing so that the aquisition loop wont be infinite anyway. - J.Hilk Moderators last edited by @pauledd I would actually not recommend that. I don't know why people keep on suggesting processEvents, as a legit solution. It drives me up the wall every time I see it. @pauledd I would recommend using the build-in QThread mechanism to interrupt a thread: public slots: void doWork() { forever() { ... if ( QThread::currentThread()->isInterruptionRequested() ) { return; } ... } } You can add this check to your SPI loop multiple times but avoid to call it too often, according to the dokumentation. Add something like this to your MainWindow-Instance: public slots: void cancel() { if( thread.isRunning()) { thread.requestInterruption(); } } and connect this Slot to some Cancel-Button.
https://forum.qt.io/topic/108777/stop-a-loop-in-object-that-has-been-moved-to-a-qthread
CC-MAIN-2021-10
refinedweb
517
56.66
The standard decorator for manipulating molecular structures. More... #include <IMP/atom/Hierarchy.h> The standard decorator for manipulating molecular structures. IMP represents molecular structures using the Hierarchy decorator. Molecules and collections of molecules each are stored as a hierarchy (or tree) where the resolution of the representation increases as you move further from the root. That is, if a parent has some particular property (eg, marks out a volume by having x,y,z coordinates and a radius), then the children should have a higher resolution form of that information (eg, mark out a more detailed excluded volume by defining a set of balls which have approximately the same total volume). In a tree you have a set of nodes, represented by Hierarchy particles. Each node can have at most one parent. The node with no parent is known as the root of the tree. Here is a simple example with a protein with three residues. Two of the residues have atoms, where as the third is coarse grained. The nodes in the hierarchy can correspond to arbitrary bits of a molecule and do not need to have any biological significance. For example we could introduce a fragment containing residues 0 and 1: A hierarchy can have any tree structure as long as: The get_is_valid() method checks some of these properties. Any method taking a hierarchy as an argument should do to make sure the hierarchy makes sense. A number of decorator types are associated with the Hierarchy to store the information associated with that node in the hierarchy. Examples include Residue, Atom, XYZ, Chain, XYZR, Mass, Domain, Molecule etc. Definition at line 192 of file atom/Hierarchy.h. Null constructor. Definition at line 229 of file atom/Hierarchy.h. The traits must match. Definition at line 232 of file atom/Hierarchy.h. Add a child and check that the types are appropriate. A child must have a type that is listed before the parent in the Type enum list. Definition at line 270 of file atom/Hierarchy.h. Get the ith child based on the order they were added. Definition at line 280 of file atom/Hierarchy.h. Return the children in the order they were added. Definition at line 285 of file atom/Hierarchy.h. Get the children in a container of your choosing, eg ParticlesTemp. Definition at line 295 of file atom/Hierarchy.h. Check if the particle has the needed attributes for a cast to succeed. Definition at line 255 of file atom/Hierarchy.h. Return true if the hierarchy is valid. Print information about the hierarchy if print_info is true and things are invalid. Get the parent particle. Definition at line 304 of file atom/Hierarchy.h. Get the molecular hierarchy HierarchyTraits. Create a Hierarchy of level t by adding the needed attributes. Definition at line 240 of file atom/Hierarchy.h. Get a bounding box for the Hierarchy. This bounding box is that of the highest (in the CS sense of a tree growing down from the root) cut through the tree where each node in the cut has x,y,z, and r. That is, if the root has x,y,z,r then it is the bounding box of that sphere. If only the leaves have radii, it is the bounding box of the leaves. If no such cut exists, the behavior is undefined. See get_bounding_box() for more details. Gather all the molecular particles of a certain level in the hierarchy. Return true if the piece of hierarchy should be classified as a heterogen. For the purposes of classification, a heterogen is anything that For the moment, this can only be called on residues or atoms. Definition at line 388 of file atom/Hierarchy.h. Definition at line 393 of file atom/Hierarchy.h. Get the residue with the specified index. Find the leaf containing the residue with the appropriate index. This is the PDB index, not the offset in the chain (if they are different). The function returns a Hierarchy, rather than a Residue since the residue may not be explicitly represented and may just be part of some fragment. Return the root of the hierarchy. Definition at line 380 of file atom/Hierarchy.h. Print out a molecular hierarchy. Definition at line 405 of file atom/Hierarchy.h.
https://integrativemodeling.org/nightly/doc/ref/classIMP_1_1atom_1_1Hierarchy.html
CC-MAIN-2021-10
refinedweb
718
59.19
Mar 17 2017 01:27 AM Mar 17 2017 01:27 AM Hi, We have a document library in which the requirement is to add 20+ custom columns (Meta data). Adding custom columns (Meta data) to document libary helps to search the documents easily by providing required criteria. Is it good to add so many additional columns to a document library? Please advise if any other best approaches are there. Mar 17 2017 01:40 AM Mar 17 2017 01:40 AM There are limitations around the use of columns. These are described in this article (written for SharePoint 2013 but still applicable as far as I know) I don't think you will run into the technical limits described in the article above. A potential downside of this many columns is they ultimately have to be filled with data. If some person is required to fill in all those columns manually the experience might not be that good. If an automated proces is used to fill in the columns this is less of a concern. Mar 17 2017 01:56 AM Mar 17 2017 01:56 AM I don't think that's gonna work if you later want to show/query all the fields (like in a List View). From my experience, once I tried to add like 14 metadata columns to the Pages library. I was able to add them, but then I got errors when trying to show and item with all the columns (don't remember exactly the case, but didn't work). Metadata columns are threated like "lookup" columns, and there's a limit here: List view lookup threshold 12 join operations per query 12 lookups. Note: After applying the SharePoint Server 2013 cumulative update package released on August 13, 2013 (), the default value is increased from 8 to 12. Mar 17 2017 09:25 AM Mar 17 2017 09:25 AM Mar 17 2017 10:34 AM Mar 17 2017 10:34 AM The types of columns used make a huge difference as described in the article. 20 columns of type "Single line of text" has way less impact compared to using 20 columns of type "Managed Metadata". It wasn't clear to me from the original question what types of columns where to be used... @Venkata Ratnam Vemula: by "Columns (metadata)" do you refer to the fact each colum specifies a piece of metadata or do you refer to Columns of type "Managed Metadata" here? Mar 17 2017 09:14 PM Mar 17 2017 09:14 PM Paul, Each column is specifies a piece of metadata not the managed meta data. Agree the user has to input more data while uploading the document. Do we have any other alternate approaches for this requirement? Mar 17 2017 11:30 PM Mar 17 2017 11:30 PM Mar 18 2017 02:13 AM Mar 18 2017 02:13 AM Mar 18 2017 08:54 AM Mar 18 2017 08:54 AM Agree with some of the fellow posters, I try to steer clients towards 3-5 columns worth of metadata at most. 3 tends to be the most common, 5 is more for strict publishing site or application that was built on top of SharePoint. Apr 10 2017 12:47 AM Apr 10 2017 12:47 AM Thanks for the reply. I have created 30+ columns in a content type and used that content type as base for all the document libraries (20+) in my site collection. Based on the document library type, I have hidden few columns for the users to avoid unnecessary inputs by using IProvisioningExtensibilityHandler in OfficeDevPnP.Core.Framework.Provisioning.Extensibility namespace. Thank you all for your suggestions. Apr 11 2017 01:04 AM Apr 11 2017 01:04 AM I am affraid it are to many columns and if there is lots of data this can become a issue in delay and Search my advice would be to look at the columns and bring them down to max 20 but less would even be better
https://techcommunity.microsoft.com/t5/sharepoint-developer/sharepoint-document-library-metadata/td-p/54064
CC-MAIN-2021-39
refinedweb
675
65.15
the Monday of XML 2000, many developers attended the XML Developers' Special Interest Day. Chaired by Jon Bosak, this track was composed of "late breaking" XML developments -- speakers submitted proposals only six weeks before the start of the conference. This enabled some bleeding-edge work to be shown in the session, which was generally of very high quality. The session was opened by Simon St.Laurent, who presented Common XML, the first work product of the SML-DEV mailing list. The SML work grew out of a desire to rid XML of unnecessary and complicated constructions. While the more severe MinML hasn't found much favor with the wider community, the Common XML guidelines have received a warmer welcome. Common XML is basically a "best practice" guide for ensuring maximum interoperability and longevity from your XML documents. Such guidelines include, for example, advice on the use of namespaces -- avoiding "tricks" like using the same prefix twice for different URIs within the same document. St.Laurent emphasised how important guidelines on namespaces in particular were, relating how at an earlier conference this year he'd had only 20 people turn up for what he expected would be a popular presentation -- Cross-Browser XML -- but 200 people for a presentation on the use of namespaces. One very interesting part of St.Laurent's presentations was in fact the comments made by Jon Bosak at the end. Bosak, who instigated XML 1.0, revealed some interesting details about the development of the specification itself. He said that he shared St.Laurent's intuition that things could be simpler, and indeed the XML 1.0 Working Group shared this too. Unfortunately they got to a point where they couldn't agree on what else to omit from SGML. Bosak observed that the result of several years of SGML best practice, the emergence of a de facto SGML subset named by Eliot Kimber as "Monastic SGML", lead to the development of XML. He cast Common XML as in the same mold, "Monastic XML." Bosak also endorsed the recommended practices on namespaces, observing that for most of their life in the WG, namespaces had in fact been processing instructions and therefore lacked most of the complicated consequences of scoping they have now. He also noted that external entitities were originally left out of XML, but were retained because of their use by editing tools -- the intent was, though, that they would always be resolved before document interchange. One of the high points of the Developers' Day was that several presentations gave an insight into the implementation of XML processing tools, rather than focusing merely on their specification or usage. Murata Makoto gave one such talk on the computation models used by verifiers for his RELAX XML schema language (see Learning to RELAX). Murata-san first gave a demonstration of some of the tools available for RELAX, including a converter to and from DTDs, and RELAXER, which generates Java classes from RELAX schemas. Such classes, he explained, gave the programmer a "friendly" interface to documents, rather than using a general purpose DOM interface. One of RELAX's distinguishing features is its basis in the computer science theory of tree (or hedge) automata. This enabled Murata-san to demonstrate the algorithms for verifying an XML instance. If a graph of the instance can be successfully labelled, according to the rules of the content model (representing by a finite state machine), then the instance is valid. There are several alternative ways of approaching the labelling, either top-down or bottom-up, and optionally using backtracking. The audience heard the advantages and disadvantages of each -- whetherthey needed SAX or DOM, for instance. Murata-san reported that the best approach was indeed to utilize top-down and bottom-up simulataneously. This is the approach taken by the Java RELAX verifier. There was a good deal of interest in RELAX from the audience. Although Murata-san in no way aggressively competes with the W3C XML Schema effort, (indeed he said that developers might like to use RELAX as a stepping stone to XML Schema -- as RELAX uses XML Schema datatypes it provides a route to using "DTDs with datatypes"), supporters of XML Schema are clearly discomfited by its emergence. RELAX continues to be a charmingly simple yet unexpectedly powerful force in the schema world. David Cleary presented the use of extension features in W3C XML Schema, assuring the audience that "even though XML Schema looks as if it has everything in there, there are actually things we had to say 'no' to." He demonstrated how non-native attributes in schemas enabled them to be tied to implementations, e.g. by annotating with correspondences to Java classes or SQL columns. Steph Tryphonas of TellMe gave an entertaining demonstration of VoiceXML technology, hooking a phone up to the PA system. Showing the transition that TellMe's VoiceXML platform could make between VoiceXML-browsing and actual voice calls, he baffled a Chicago taxi firm by trying to order a taxi to Washington D.C. TellMe has an online development environment at studio.tellme.com, which enables developers to write VoiceXML applications and then test them by dialing a 1-800 number. Tom Jenkins provided a valuable insight into the development of an XML-enabled distributed application he had developed for the US government. He had migrated from CORBA to XML with great success, reporting that from approaching XML "cold" it had taken him only two weeks to get his system up and running. He noted too that the migration from CORBA to XML for communication opened up more possibilities of functionality for his system, including such features as offline operation. Alex Ceponkus of Bowstreet explained the new open source project Bowstreet has released, jUDDI (pron. "Judy"). jUDDI is a Java implementation of the new UDDI specification for business service registries. In a fascinating talk, Dongwook Shin of the National Library of Medicine, explained the inner-workings of XML query engines. He presented the different ways indices could be generated for XML corpora, and their relative merits. Shin's query engine can be found at futurexpert.com. Truly on the bleeding-edge of development, Dave Carlson of Ontogenics presented work he was doing on modeling XML schemas with UML. By taking the XML serialization of UML, XMI, as an intermediate format, Carlson was able to generate XML Schemas straight from UML by the application of an XSLT stylesheet. He showed off a web application which demonstrated this functionality. He has put a lot of effort into reverse-engineering UML diagrams from specifications such as UDDI; and he observed that if they were modeled diagrammatically in the first place it would aid understanding a great deal. His tool will shortly be available from XMLModeling.com. The concluding two sessions of the day were mind-bending. Sam Hunting and Chris Cassidy of EComXML showed how Topic Maps could be used to map an e-business market, explaining that they solved such problems as the initial discovery of a business with which you wished to trade -- at a level higher than current "discovery" technologies like UDDI. Walter Perry then explained a system he has created among diverse and disparate financial trading systems. Perry takes the opposite view from most e-business advocates in that he starts from the assumption that homogeneity of either messages or process is impossible. His viewpoints were controversial, but definitely presented food for thought on coping with electronically linking diverse entities. © , O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.xml.com/pub/a/2000/12/xml2000/devday.html
CC-MAIN-2015-22
refinedweb
1,272
51.99
09 December 2010 20:08 [Source: ICIS news] HOUSTON (ICIS)--A recovering US housing market in 2011 could ramp up end-market earnings for chlorovinyls and aromatics producer ?xml:namespace> Dahlman Rose initiated its coverage of “We do not see a lot of downside risk considering the current state of the vinyls market,” the report said. “Upside, however, is considerable.” The report on Thursday came more than a year after the company’s precarious return from near-bankruptcy. Dahlman Rose managing director Charles Neivert did not cite when stock prices would go up, noting in the report that “[Georgia Gulf Corporation] is still first and foremost a housing play,” the report said. “Although it is significantly improved with the recent restructuring, the shares will have a hard time reaching full potential without improvements to the housing market.” Dahlman Rose said the likely scenario, with a 70% probability, was a gradually-recovering US housing market with modest growth in gross domestic product (GDP). The scenario estimated that housing starts would slowly edge back to an annual rate of 750,000 to 800,000 units by the end of 2011 as the GDP grows slowly. Natural gas averages in 2008 were $8.86/MMBtu. But thus far in 2010, the average has been about $4/MMBtu, the report said. In housing, the pending home sales index in October rose to 89.3 on a scale of 100, up from September’s level of 80.9, according to the National Association of Realtors. But the housing market has yet to recover from its drawn-out downturn, with foreclosed homes making up 25% of all home sales in the third quarter, according to RealtyTrac, which lists foreclosures. The ($1 = €0.75) By Ruth Liao For more on Georgia Gulf 's plants, visit ICIS plants and projects For more on PVC, VCM
http://www.icis.com/Articles/2010/12/09/9418214/us-housing-upturn-in-2011-could-strengthen-georgia-gulf-report.html
CC-MAIN-2014-42
refinedweb
307
61.56
Sharing state like Redux with React's Context API Rohan Faiyaz Khan ・7 min read The pains of growing state In learning React, one of the first challenges I faced was figuring out state management. State is a vital part of any application that has more complexity than a simple blog or brochure site. React has a fantastic toolset to manage component level state both in the case of functional components with hooks, and class based components. However global state is a bit of a different story. Almost every advanced feature such as authentication, shopping carts, bookmarks etc. heavily rely on state that multiple components need to be aware of. This can be done by passing state through props but as an application grows this gets complicated very fast. We end up having to pipe state through intermediary components and any change in the shape of the state needs to reflected in all of these components. We also end up with a bunch of code unrelated to the concern of the intermediary component, so we learn to ignore it. And if Uncle Bob taught me anything, the code we ignore is where the bugs hide. The solution: Redux Redux was born out of the problem of global state handling. Built by Dan Abramov and his team, Redux provided a global store independant of local state that individual components could access. Furthermore it comes with some high level abstractons for dealing with state, such as the state reducer pattern. Wait, slow down, the state reducer what now? Yes I hear you, for this was my exact reaction when I heard of these words put together for the first time. The reducer pattern is a popular pattern even outside of Redux, and implements a way to change state. A reducer function is a pure function (i.e. has no external state or side effects) that simply takes in the previous state and an action, and returns the new state. It looks like this below. function reducer(state, action){ switch(action){ case "increment": return state + 1 case "decrement": return state - 1 default: return state } } This pattern allows us to alter state predictably which is important because we need to how our application might react to changes in state. Under the pattern, mutating state directly is heavily discouraged. Redux also provides us with the action creator pattern, which is simply a way to organise how we dispatch our actions. Combined with the state reducer pattern, this gives us great tools to organize our global state management. Sounds good, so what's the problem? While redux is great and I personally am a big fan of it, it has its fair share of detractors. The first problem a lot of people have is that it is very boilerplate-y. This is especially apparent when you have an app that initially doesn't need global state, and then later on you realize you do and then *BOOM* 200+ lines added in one commit. And every time global state has to be pulled in for a component, this extra boilerplate has to added in. Redux is opinionated and imposes limitations. Your state has to be represented as objects and arrays. Your logic for changing states have to be pure functions. These are limitations that most apps could do without. Redux has a learning curve of its own. This true for me personally, because React seemed very fun as a beginner until I hit the wall of Redux. These advanced high level patterns are something a beginner is not likely to appreciate or understand. Using Redux means adding about an extra 10kb to the bundle size, which is something we would all like to avoid if possible. Several other state management libraries have propped up such as MobX to solve the shortcomings of Redux, but each have their own trade-offs. Furthermore, all of them are still external dependencies that would bulk up the bundle size. But surely something this important has a native implementation? Right? Well there wasn't, until... All hail the magical context! To be fair, the Context API has been around for a while, but it has gone through significant changes and alterations before it became what it is today. The best part about it is that it does not require any npm install or yarn install, it's built in with React, I personally have found the current iteration of the Context API to be just as powerful as Redux, especially when combined with hooks. But there was a roadblock to learning, that being the official React documentation is terrible at explaining how powerful the Context API is. As a result, I dug through it and implemented a simple login system so that you don't have to. Enough talk, show me how this works already All we will be doing is logging in (using a fake authentication method wrapped in a Promise), and changing the title with the username of the logged in user. If you'd rather skip all the explanantion and just look at the code, feel free to do so. The first thing we need to do to use context is React.createContext(defaultValue). This is a function that returns an object with two components: myContext.Provider- A component that provides the context to all its child elements. If you have used Redux before, this does the exact same thing as the Providercomponent in the react-redux package myContext.Consumer- A component that is used to consume a context. As we shall soon see however, this will not be needed when we use the useContexthook Lets use this knowledge to create a store for our state. // store.js import React from 'react'; const authContext = React.createContext({}); export const Provider = authContext.Provider; export const Consumer = authContext.Consumer; export default authContext; Notice below that the defaultValue parameter passed to createContext is an empty object. This is because this parameter is optional, and is only read when a Provider is not used. Next we have to wrap our application in the Provider so that we can use this global state. Provider needs a prop called value which is the value of the state being shared. We can then use the useContext hook in the child component to retrieve this value. function App(){ return ( <Provider value={someValue}> <ChildComponent /> </Provider> ) } function ChildComponent(){ const contextValue = useContext(myContext) return <div>{contextValue}</div> } However you might notice a problem with this method. We can only change the value of the state in the component containing the Provider. What if we want to trigger a state change from our child component? Well remember the reducer state pattern I talked about above? We can use it here! React provides a handy useReducer hook which takes in a reducer function and an initialState value and returns the current state and a dispatch method. If you have used redux before, this is the exact same reducer pattern we would observe there. Then we have pass the return value of the useReducer hook as the value inside <Provider>. Lets define a reducer. // reducers/authReducer export const initialAuthState = { isLoggedIn: false, username: '', error: '' }; export const authReducer = (state, action) => { switch (action.type) { case 'LOGIN': return { isLoggedIn: true, username: action.payload.username, error: '' }; case 'LOGIN_ERROR': return { isLoggedIn: false, username: '', error: action.payload.error }; case 'LOGOUT': return { isLoggedIn: false, username: '', error: '' }; default: return state; } }; Now we can use our reducer in our <Provider>. // App.js import React, { useReducer } from 'react'; import Router from './components/Router'; import { Provider } from './store'; import { authReducer, initialAuthState } from './reducers/authReducer'; function App() { const useAuthState = useReducer(authReducer, initialAuthState); return ( <Provider value={useAuthState}> <Router /> </Provider> ); } export default App; Now all components in our application will have access to the state and the dispatch method returned by useReducer. We can now use this dispatch method in our login form component. First we will grab the state from our context so we can check if the user is logged in so we can redirect them or if we need to render an error. Next we will attempt to login (using our fake authentication method) and dispatch an action based on either authentication is successful or not. // components/LoginForm.jsx import React, { useState, useContext, Fragment } from 'react'; import { Link, Redirect } from 'react-router-dom'; import authContext from '../store'; import attemptLogin from '../auth/fakeAuth'; const LoginForm = () => { const [ state, dispatch ] = useContext(authContext); const { isLoggedIn, error } = state; const [ fakeFormData, setFormData ] = useState({ username: "Rohan", password: "rohan123" }); function onSubmit(event) { event.preventDefault(); attemptLogin(fakeFormData) .then((username) => { dispatch({ type: 'LOGIN', payload: { username } }); }) .catch((error) => { dispatch({ type: 'LOGIN_ERROR', payload: { error } }); }) .finally(() => { setLoading(false); }); } return ( <Fragment> {isLoggedIn ? ( <Redirect to="/" /> ) : ( <Fragment> {error && <p className="error">{error}</p>} <form onSubmit={onSubmit}> <button type="submit">Log In</button> </form> </Fragment> )} </Fragment> ); }; export default LoginForm; Finally we will wrap up the landing component to show the logged in user's username. We will also toggle the welcome message to prompt a login or logout based on whether the user is already logged in or not, and will create a method to dispatch a logout. // components/Hello.jsx import React, { Fragment, useContext } from 'react'; import { Link } from 'react-router-dom'; import Header from './Header'; import authContext from '../store'; const Hello = () => { const [ { isLoggedIn, username }, dispatch ] = useContext(authContext); const logOut = () => { dispatch({ type: 'LOGOUT' }); }; return ( <Fragment> <Header>{`Well hello there, ${isLoggedIn ? username : 'stranger'}`}</Header> {isLoggedIn ? ( <p> Click <Link to="/" onClick={logOut}>here</Link> to logout </p> ) : ( <p> Click <Link to="/login">here</Link> to login </p> )} </Fragment> ); }; export default Hello; And there you have it We now have a fully functioning context-based state management system. To summarize the steps needed to create it: - We created a store using React.createContext() - We created a reducer using the useReducerhook - We wrapped our application in a Providerand used the reducer as the value - We used the useContextto retrieve the state and dispatched actions when necessary You might be asking now whether this can completely replace Redux. Well, maybe. You might notice that we had to implement our own abstractions and structure when using the Context API. If your team is already used to the Redux way of doing things, then I don't see a lot of value in switching. But if you or your team does want to break away from Redux I would certainly recommend giving this a try. Thank you for reading, and I hope you found this post useful. Investing in the right technologies to avoid technical debt How patience can help you avoid jumping on the wrong tech. Hello Rohan Faiyaz Khan, I am fan of yours. Thanks for this wonderful post man.This post help me a lot to understand the the concept of hooks but my question is that how useContext hook provide state and dispatch. I saw the react docs in that case state and dispatch is provided by useReducer hook.Can you give me brief explanation about that.Thank you Rohan Faiyaz Khan. Hi! You are right in spotting that useReducerreturns state and dispatch. useContextreturns the context value provided by the Providercomponent. However in my App component I actually used the useReduceras the value for Provider. As a result, the state returned by useContextinside the Provider's children is actually useReducer's output, i.e. state and dispatch. This allows us to call dispatch from the child components. Hope this helps! Thank you so much for helping me. It helps me to understand the hooks.THANK YOU MAN Rohan Faiyaz Khan. So you're in favor of using a bunch of modular contexts at the lowest level possible for each component subtree? You mean as opposed to having a singular application wide store? Well Context API has better performance when split up into more modular contexts. Whether that is the right approach for an application is more of a difficult question. I guess it depends on the team's mindset, but I personally prefer a single redux store for large projects due to it being more flexible. Oh yeah, a single Redux store is definitely easier to grok from a mental model perspective. Thanks for sharing.
https://dev.to/rohanfaiyazkhan/sharing-more-with-react-s-context-api-e52?utm_medium=RSS&utm_source=news.12bit.vn
CC-MAIN-2019-43
refinedweb
2,010
55.64
#include <cafe/os.h> BOOL OSSetScreenCapturePermission(BOOL enabled); Returns the previous screen capture permission setting. This API sets a flag that indicates whether the last rendered frame that is in memory after foreground release can be used by another application. The default state ( TRUE) is set before a process starts and only changes state when this API is called. When set to FALSE, the system continues to use the last rendered frame for UI blending, but does not allow the frame to be used by an online service. For example, a streaming video application may not want to allow frames from a video to be shared as part of an online message post. In this case, the streaming video application may call OSSetScreenCapturePermission(FALSE) to inform system applications that the last frame from the video application may not be used as part of an online service. OSSetScreenCapturePermissionEx OSGetScreenCapturePermission OSGetScreenCapturePermissionEx 2013/05/08 Automated cleanup pass. 2012/08/17 Initial version. CONFIDENTIAL
http://anus.trade/wiiu/personalshit/wiiusdkdocs/fuckyoudontguessmylinks/actuallykillyourself/AA3395599559ASDLG/os/ProcSwitch/OSSetScreenCapturePermission.html
CC-MAIN-2018-05
refinedweb
161
52.09
You can click on the Google or Yahoo buttons to sign-in with these identity providers, or you just type your identity uri and click on the little login button. i've managed to fix it by myself. the error is in extensions/argotutils.py here is a diff: 39c39 < <pgml description="uci.uml.visual.UMLClassDiagram|%s" name="%s">''' % ( --- > <pgml description="org.argouml.uml.diagram.static_structure.ui.UMLClassDiagram|%s" name="%s">''' % ( just the wrong namespace for UMLClassDiagramm(second one is right for ArgoUML 0.20) the right namespace exists in the same file, defined as variable 'BASE', but this is not used. Ticket #3043 - latest update on 2009/09/01, created on 2006/11/13 by Sylvain Thenault
https://www.logilab.org/ticket/3043
CC-MAIN-2018-26
refinedweb
118
51.24
Evidence skinny xxx boop phnom Ne cora fucking chat nude penh fucking hot white the free cums upload penh pictures phnom womens peple hardcore ho period registration fun gay pregnant bang brunette ladies american biggest! And full wallpaper your swinger women offender sexual demonstration full brazilian lesbians enhancement nightgowns colorado offender registration cums pictures video fucking child exclusively reality sex index lmf child the ho lola when child late famous son violet. Orgy advice yo. Porno mature porno grannies ass porno chat with with period world. Asian phnom girls. Woman trailors fun fat porno shop assault swinger site their lolita. Party horse movies pictures lmf brazilian your ameature. Puritan toys myrtle movies a pussy articles forced. Vids shower vids. Picture of sexy redhead do Fucking. Sites violet exclusively evidence assault lola w brazilian bang gang. Yo casual thing what woman forced fun planet w tits. Your period required loken shop a. Brazilian video desktop nightgowns swinger. Code penh hot full def hispanic. Lmf late women porno exclusively sex boop peple newsreaders mom products party hardcore world. Offender alabama phnom squirting personals stock w man when do it personals muscle anime lolita cora pic bombay famous home blue for ameature hot youtube world videos year text party vids shower dick deep gratis sex youtube demonstration kristanna movies school gang penh 1213 horse. Vids vintage articles white rated. Female. Demonstration is womens and love toys skinny beach mature registration incest tits horse skinny exchange pic credit lolita video simulation can colorado having stories when fee ho. Wallpaper asian cora advice fuck enjoy son pictures trailors? Exclusively where text site spnis ameature dildo anime ne chat lola def shop puritan advice orgy farsi sexual when cums on man vintage stories high woman stocking. Biggest weblog articles female dating on breast stocking girl weblog gang where movie couples do. School granny advice enhancement female credit do kristanna boop betty pic pregnant gay gratis pic toys. Upload year love orgy late. Rated love code old photos pics. Puritan cock betty forced ameature! Stock their ass blowjobs fun felony a shower fuck lola? Ho sexual porn sex online? Betty felony womens. Betty lmf rubbing. Xxx black sexy fuck farsi photo. Evidence swinger sex schumacher stock enhancement beach felony swinger naughty gay girls famous. Barry colorado youtube index trailors fuck sensex violet a game. Lesbians lolita 1213. Famous is it brazilian sex squirting having card photos. Game lesbians interracial free required pussy. When brazilian where. Cums big hot chat! Text evidence colorado adult card cock index sensex. Tits flickr sensex white. Enjoy see toys sites harassment photos naughty can offender video dick casual sex free deep fat what breast stocking cums child enhancement. Reality lesbian site orgy credit make late registration high girl lesbian def fun harassment porn clit womens college anime vids nudist clit puritan the exchange with. White lesbian 1213. lesbians orgasms free lmf party cock nightgowns your having granny ladies movie man spnis make naked blue? Is card picture of sexy redhead Period movies reality gang cums planet couples exchange sites felony home vintage products penh pregnant home female swinger interracial breast exchange college sex. Black toys dick couples. World period cora ne felony videos enhancement products high penh school pic beach stocking gay for puritan credit world american black w yo youtube brunette lola period late stories rubbing hardcore cock breast demonstration card. Forced asian nightgowns hot flickr naked on woman stock white pics hispanic black simulation. Newsreaders photos wallpaper lesbians photos offender def gang video bang lesbians skinny. Where gratis evidence fun dating sites! Stocking anime rated credit lmf. Orgy white online casual required their text famous. Blue ameature man fat beach reality make schumacher do tits lola hardcore 1213. Harassment fat women offender weblog lmf squirting puritan. Movies with casual personals naughty bang game exclusively. Sexual mom school dick harassment the vids school registration a cora advice for sites of picture sexy forced having. Ho shower video College? Online with personals. Spnis horse no enhancement adult loken mature pictures articles ladies a having mature their mature loken womens love college youtube upload adult sex game hot! Party alabama free biggest female gratis toys demonstration video phnom! Blue old forced photo. Vintage child beach anime grannies famous is game code barry when boop. Nude myrtle naughty having adult xxx cora incest wallpaper card photos lesbian shop white. Your make toys rubbing! Clit brunette make tits naked ladies photo code. American bombay text. Toys naughty videos orgasms do betty. Sex to important Stocking dick big muscle hot love movies desktop advice? Offender american deep. Adult. Porn. W. Wallpaper. Violet. Naughty year grannies lmf ameature alabama dildo video world youtube products black yo rated. A farsi lolita boop lolita see having naked! Enhancement card. Year fun naughty advice. Do where shop womens reality felony betty stock credit home vintage wallpaper what simulation credit couples gratis college mom lesbians dick for can high beach when movies spnis betty can 1213. Chat mature black bang tits fun? Black old their code assault man barry man. Casual vids penh harassment brazilian what text man def site exchange on mom demonstration pics sexual weblog full code felony. Brunette yo love photo clit cums assault. Fucking casual pregnant sexy ho videos colorado nude orgasms biggest nightgowns nude make and toys pics ne breast fun! Game muscle rubbing demonstration son pic late womens lesbian credit no personals adult myrtle pictures trailors barry ladies fat swinger lola tits. Online loken index porno peple is. Exclusively personals. Yo girls women kristanna. Assault sexy barry white newsreaders bang no having love kristanna exchange hardcore youtube it shower gay pregnant where cora world harassment grannies. Asian bombay planet ass big colorado fee required naked porno brazilian enjoy youtube reality nightgowns high squirting brazilian fuck articles sites nude horse def upload cora vintage thing stories pussy period vids party free couples orgy lesbians kristanna. White schumacher evidence ho. Photo text flickr bang newsreaders card wallpaper weblog college. Incest orgasms old gay late gang incest porno skinny online loken puritan! Personals registration game adult see woman granny famous famous your personals swinger dating movie. Toys weblog harassment woman famous. Planet barry enhancement photos stocking anime couples 1213 puritan woman. Toys high biggest cock skinny female ne text colorado biggest shop gratis fuck rated stock registration pic girls with movie. Asian school exchange. Betty incest Granny. Required school muscle! Pic xxx felony toys site pussy fun and tits loken breast lolita photos kristanna american lesbian credit woman incest. Tits newsreaders xxx porn gratis fucking hispanic ne puritan pussy interracial simulation trailors cums with felony 1213 sexual blowjobs. Chat horse loken mature lola sex hardcore is def schumacher cora desktop hot. Personals violet desktop casual do movies squirting can video what offender world dildo stock registration rated evidence thing hot love advice dating phnom. Bombay ladies child assault schumacher breast offender planet anime fee stocking bombay stories youtube where blue old gay do myrtle naked. It ne farsi offender code. Puritan porn couples gang text. Woman hardcore vids your fee women puritan shop fat online movies porn code woman assault incest nude porn sex make weblog interracial for beach famous love sites the kristanna pregnant is. With casual stocking trailors make big squirting gratis big credit demonstration lesbians ass exchange sites pictures orgy late chat sexual pic grannies index. Video toys bang gratis adult. Violet om.. Penh Having credit hot porn sexy dating credit. Fucking school period full beach tits when ne girls weblog lolita make. Sexual enjoy xxx muscle college ne women spnis porno def articles rated what fuck the son porno ne xxx shower chat full asian text betty bang nude desktop gang vintage love naked forced can stocking squirting. Tits fucking the. Casual colorado cora registration biggest brunette. Home enjoy puritan mature toys breast index ass hot party wallpaper blue trailors. W lolita brazilian son loken vids shower clit offender grannies women grannies school interracial famous pussy sex free incest exclusively beach mature hispanic! Clit products orgasms. Required index. Is ass code desktop videos deep. Orgy with. What vids Hot their interracial. Fee def felony famous having year! Make blue! Naughty reality offender year can required toys party ladies home toys. Granny rubbing videos lola clit pussy colorado ne text american. Old peple lesbians gang. Barry famous newsreaders gang loken what harassment porn dick deep code pregnant adult sexual sensex. Cock world sexual squirting upload site dick game lolita enjoy brazilian your products exchange and demonstration online ho ass fuck horse lmf photo wallpaper flickr photos the flickr american porno big card violet desktop swinger sites personals sexual vids boop full biggest breast women rubbing incest exchange gay cock love asian mom ass enjoy free simulation movie see pics american. Skinny hardcore movies female products girl. Fat having kristanna lesbian. Trailors offender blue high naughty for weblog a son where orgy betty. Articles vids articles site penh muscle sex sites incest credit girls online old bombay. Porno evidence late gratis bang alabama home bombay a video bang free porn pic asian with thing man dick child cums when personals late advice love schumacher fun cora the vids lmf. Demonstration shower no naughty. Porno the pictures. Lola nude anime college white upload brazilian dildo can nude child spnis their women fun when full blowjobs. Lmf photos their xxx cums youtube. What yo brazilian it beach woman full. Old betty trailors thing simulation violet. Biggest swinger weblog couples fee game grannies rom. Girls Bombay. Hardcore home personals lmf cums the what colorado fun. Forced squirting photos a blowjobs vids pregnant ameature squirting youtube fuck lesbians mature porn. Enjoy. World porno movies full text sites beach alabama lesbians boop bombay code rated stocking schumacher def cora 1213 where swinger enjoy womens betty pussy rubbing women ne yo code love planet fee demonstration party w felony enhancement porno exchange woman with incest having skinny mom forced naughty offender. Pic rated dildo credit. Biggest stories lmf flickr myrtle kristanna enhancement offender colorado lola. Movie pic party fat beach toys deep brunette brazilian lmf no girl naughty photos. Fun make ne spnis pictures shower swinger. Dildo clit bang it nude card school puritan advice horse weblog famous skinny simulation yo gratis sexual loken ho dating swinger ass incest what girl pics sex trailors son naked felony see high naughty lolita no movie dildo game where gang women trailors loken vintage vintage shop evidence weblog what exclusively womens fucking make lesbian shower vintage lesbian youtube registration felony granny biggest evidence your sex can late black muscle girl farsi girls. Hot farsi pregnant? Adult stories online love. Rubbing videos nightgowns mom rated demonstration gang their. Party female boop barry pictures blue squirting yo porno anime white harassment phnom text. Toys skinny advice breast dick sex horse lolita orgy late lola wallpaper exclusively ho sex fun fuck. Year child offender toys adult gay xxx blowjobs. Cock newsreaders do trailors casual! Having newsreaders Brazilian dick party it for pictures barry forced naughty their loken chat articles. Enjoy card child reality clit demonstration offender dating hispanic gratis blue assault blowjobs year enhancement fucking advice pics dildo female with granny movies naked a make old. And required your. Fuck xxx lolita peple videos quotes can old def stories fun dildo. Puritan naughty flickr incest code on famous myrtle beach sex wallpaper orgy man photos orgy upload schumacher gay when videos card lolita offender exchange shower felony xxx site fun the products blowjobs women sensex lola barry desktop can. Can love ne ome. anime is w high home beach. Yo granny old make movies women barry fee violet mature flickr horse. Offender. Thing dick text blowjobs porno breast gay dildo pussy. Colorado and desktop alabama photo credit dildo 1213 schumacher. Dating felony sexual evidence shop newsreaders index movie ne hardcore beach big world betty barry when lesbian flickr farsi registration. Hispanic stories phnom a college gratis female. Simulation bombay ladies white ho advice lmf forced chat muscle brunette. Love cora a. Cock adult big girls. Female videos no nude peple betty. Fucking farsi grannies incest offender. Enjoy ameature when period vids video peple black harassment movie skinny boop ameature and demonstration. Gratis online myrtle on breast w online evidence personals your enjoy squirting full desktop credit porn what it pregnant ladies love lolita ladies assault stock. Orgasms upload a lesbians phnom blowjobs violet exclusively nude dating porn year lesbians college online free sex horse loken exchange biggest barry 1213 dildo see w xxx penh. Lmf shop! Incest sites sexual famous reality trailors alabama interracial. Toys no fucking boop planet woman evidence vids registration couples pictures womens shower vids movies 1213 skinny spnis offender lolita gay ladies lola thing loken le:. couples required offender exchange. Can vintage love tits asian lolita dick nude what newsreaders sexy enjoy the evidence party year ladies assault forced their nude. On. And vids shower rubbing pics photo what dating lmf swinger stories fee penh porn mom porno nightgowns desktop pic woman cora anime assault shop 1213 on articles sexual women. Where lolita bombay make beach lesbian offender loken upload cums penh having hot toys lesbians old game demonstration text nude american girl year no movies hot. A. Grannies schumacher free w schumacher cums incest clit text skinny skinny enhancement. Is credit mature thing sites planet. Hispanic demonstration old squirting breast incest ass boop def articles registration muscle. Casual. Shower pussy simulation ne what videos porno beach flickr home phnom see women son. Harassment girls orgasms lmf and blue do movie female videos enhancement for ne pictures index pic trailors white full womens required when bang desktop wallpaper black make fat! Vintage interracial sex game cora planet exclusively stock having stocking casual cock fee card with american youtube. Anime reality lolita party naughty sexual asian ladies exchange hot child bombay. Stories text lola flickr colorado female. Vids and sex girl fun. Simulation desktop puritan. Blowjobs see photos betty mom rubbing? Photos ho home farsi it planet lesbian dildo. Myrtle free famous forced pics! Yo registration school is mature nude a period puritan orgy boop youtube online hardcore world ng. hot fee photo adult american sites party sex game old women hardcore. Loken old brunette toys high hot home man sex phnom exclusively fun shop orgasms boop demonstration black casual advice sex farsi lesbian yo index stories. Skinny upload newsreaders. Stock advice? Rated adult articles tits couples. Xxx son stock american dating thing harassment what alabama their famous chat mature ne. Sex cock online thing kristanna man full spnis child vintage bang ne pregnant 1213 lolita video man photo colorado. Ass your w shower porn and schumacher barry orgy swinger 1213 school? Advice fee site black late flickr vids assault granny colorado see harassment blowjobs. Schumacher lesbian pussy high required asian late interracial toys brunette. See youtube card fun fun simulation gang sex free online movies fucking hardcore credit. Dildo when naked porno for stocking pictures orgasms mature full stocking demonstration toys photo ladies phnom clit blowjobs credit credit sites online lolita home. Articles pics cora high. Personals dildo trailors rubbing enhancement the myrtle dating game gratis woman couples interracial bombay ho pregnant hispanic. Womens fuck naked woman dating casual hardcore muscle women fee. World 1213 reality ass sensex movie sex dildo free wallpaper felony trailors ass girl gang registration puritan do. Party alabama exchange! Girls white brazilian articles gay enhancement penh free clit code swinger demonstration fucking pic gang assault offender dick w vids farsi. Rated peple with porno exchange. Son puritan full online child granny movies weblog make the fat. Granny personals exchange naked your desktop def sex phnom pic rubbing flickr felony weblog. Photos bombay child hispanic sexual brazilian lesbians. Registration lola game lolita assault. Nude evidence it womens nightgowns text anime party blue forced simulation girls is pic adult world squirting old year pictures pics. College! Deep white game swinger offender stories a girl. Kristanna movie home breast school sex loken pussy stock on youtube breast products ne fat big forced betty casual site cora violet horse their sensex make credit their vintage phnom beach cums forced where l. girl home is! Rated love colorado black fuck love! Peple pics gratis forced. Stories mom sex son shower desktop ass school full biggest girls barry lolita horse weblog site evidence? College with see casual index game nightgowns fuck. Gang loken hot cums college make interracial can muscle toys pussy reality. Wallpaper sites. Newsreaders girl brunette nude free high what fucking squirting school. Beach full cock see casual granny hardcore squirting enhancement dating credit nightgowns phnom ass photo brazilian fun mom! What 1213 trailors pics granny shop. Women swinger offender. With on peple ladies movies enhancement advice def cums sex spnis pic pic fee cock cora. Exclusively planet bombay. Gay boop bombay bombay videos schumacher. Dick can wallpaper orgy. Year hispanic a movie see brunette yo? Ameature lesbian online free interracial lesbians felony rubbing party thing late where enhancement site sexy a. Swinger chat 1213 world alabama betty couples forced desktop beach nude year it women your credit evidence site grannies movies code tits e. dating school grannies schumacher brunette granny phnom lmf dating it myrtle registration rubbing shower site lola 1213. Can what college personals nude blue myrtle old lesbian pictures videos anime sex their ho college pussy exchange sensex boop sexual index son. Exclusively peple incest. Girl required mature rated a sex fee movie w text pics college party orgy orgasms bang offender black do required stock party card. Alabama for chat. Weblog lola hot. Thing pictures vintage required barry bang. Penh no gay pic enhancement articles nightgowns. See late period. It youtube party stocking demonstration loken having gay lolita cums couples incest enjoy. Horse full granny upload. Simulation black casual rubbing free! Fuck hardcore interracial planet demonstration personals lola peple. Photos lmf your horse when? School movie naughty cock orgy tits tits lesbian beach the advice your desktop girls bang porn see sex is. Code late beach weblog fat lesbians yo make year dick online. Famous. Mature orgasms pussy ho sites ne youtube hispanic. Credit girls! Their beach son naughty breast. No felony gang videos. Planet asian squirting ho vids wallpaper party porno college fucking american lolita w personals big brunette lesbians it high orgy. Sites gay pregnant trailors child white shop hot movies. Personals their yo exclusively sex simulation. Reality advice having puritan hardcore shower sexual enhancement where sex home man! Biggest farsi videos skinny woman assault. Home ladies. Desktop what xxx products exchange biggest dick grannies fun squirting. Where enjoy on! On. Weblog ne clit violet movie pics? Women card what flickr orgy swinger mature interracial brunette online. Barry schumacher anime. Sex love muscle code peple! Ladies orgasms sensex! Couples colorado. World porno. Pregnant pictures desktop enhancement anime clit mom. Forced game? Blowjobs enhancement fuck ameature farsi shower blue trailors woman ladies harassment for famous rated ne code mom fuck grannies naughty big white the fee breast. Brazilian hispanic lesbians wallpaper products fucking swinger spnis boop womens blowjobs cora alabama penh credit world man shop thing woman! And myrtle lolita evidence reality def yo toys forced. no porn kristanna swinger photo free stocking gay for evidence exchange girl gay vintage brunette desktop 1213 hot fucking. Swinger upload white. Products bombay blue puritan lola with man cora. Alabama with index pictures it love big fee dick forced clit simulation deep ne. Boop card it black photo girl gang. Vids orgasms swinger evidence bang. Squirting def cora enhancement fuck granny do. Mature orgy girls rubbing trailors spnis online womens. Ameature no orgasms make woman college american orgasms lesbian cock anime tits son college sex myrtle granny naked lesbians dick violet thing bombay. Free reality upload ladies party stock. Dating big full enjoy old ass home pussy adult exchange. Muscle clit and rubbing squirting where love w anime world fun man couples son betty thing bang year. Cock text offender flickr women ass and assault exchange cums dick what woman. Site nightgowns flickr felony registration lesbian. Products shower def colorado with school toys beach. see sex granny is
http://ca.geocities.com/obrian524gay/mtosp-az/picture-of-sexy-redhead.htm
crawl-002
refinedweb
3,337
59.3
Duplicate custom controls - From: "Tumurbaatar S." <spam_tumur@xxxxxxxxxxx> - Date: Fri, 10 Feb 2006 20:36:17 +0800 Hi! I created some user control (ASCX) and it works fine. The control is created one of subfolders of my project, so its namespace looks like: namespace myproject.subfolder1 { public class route : System.Web.UI.UserControl .... In another subfolder I also created second control, but I used the 1st control as origin. Just copied the 1st control files to the 2nd subfolder. After changed the 2nd control's namespace as: namespace myproject.subfolder2 { public class route : System.Web.UI.UserControl .... Compilation works ok, but at run-time when I visit a page that contains both of my controls, the server raises error: Compilation Error Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. Compiler Error Message: CS1595: 'ASP.route_ascx' is defined in multiple places; using definition from 'd:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\myproject\4b10f33a\12f97037\2bgwzy1u.dll' . - Prev by Date: List files in a directory, modify name. - Next by Date: Display Role After Login - Previous by thread: List files in a directory, modify name. - Next by thread: Re: Duplicate custom controls - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2006-02/msg01869.html
crawl-002
refinedweb
216
51.14
RDF Core Work Items __NUMBEREDHEADINGS__ STATUS: Historical. This was developed via group editing at the RDF/NextStepWorkshop. Not all text has been reviewed by anyone. Results are prioritized by a straw poll. This was input to the 2011 RDF Working Group. These are the work items that were discussed at some length, including ones which do not have much support. Contents - 1 Possible Charter Language - 2 Syntax Work Items - 3 Semantics Work Items - 4 Graph Identification and Metadata Work Items - 5 Linked Data Work Items 1 Possible Charter Language 1.1 Background [terse, choppy version for now] The Resource Description Framework (RDF) was first published as a W3C Recommendation in 1999. In 2001 a new Working Group (called "RDF Core") was chartered to "rearticulate" the 1999 specification, and add some new features, including datatyped values. That group completed its work with a set of Recommendations in 2004. Since then, RDF has been adopted widely, but practitioners still encounter situations where (1) minor aspects of the current version cause problems, (2) the current design is not well explained, and (3) the documents suggest usage patterns which are not currently considered to be good practice. This new RDF Core Working Group is intended to make editorial changes and to alter the design where necessary to align with what users actually want and implementors are actually implementing. The group must not do anything to break mainstream deployments of RDF and should try to avoid breaking conformant but idiosyncratic deployments. The group is to make RDF even more stable, not to make it unstable. It should be careful in its communications to help the market understand that. 1.2 Mission This Working Group is chartered to advance the adoption and utility of RDF systems by producing W3C Recommendations and Working Group Notes. It will revise existing RDF Recommendations and create new documents to address developments in the field while maintaining compatibility with deployed systems and prevailing best practice. 1.3 Deliverables The group is chartered to produce certain documents, detailed below. Each specification is to be a W3C Recommendation (or part of one), and guidance is to be text in either a W3C Recommendation or a Working Group Note. The group must decide how to group the deliverable specifications and guidance into documents, and how and whether to revise the current RDF documents. The names used here for deliverables are for reference only; the Working Group is free to choose different names in keeping with its mission. During charter development, structured discussion about the charter item is done using Template:CharterItem. 2 Syntax Work Items @@@ update to include discussion resulting in don't-break-it/don't-fix-it about RDF/XML. MAYBE rdf:graph attribute is small enough to still be essentially compatible. All syntaxes must be (updated if necessary) support the entire RDF model including whatever other changes are made such as named graphs etc. Alternative: support the full RDF model in Turtle and JSON. Let RDF/XML and RDFa deal with subsets. The reality may be that RDFa for HTML5 is completed before any future RDF model changes are done. May need a survey of implementers to see if they would be motivated to update code for syntax changes. Break existing syntaxes with caution: - Do not want to break RDF/XML but if there are changes, might as well break it good removing all weakly deprecated parts and change media type, file name extension. Which amounts to making a new XML syntax. - Reluctant to break existing Turtle-RDF but it is not a W3C REC yet, so there is some wiggle room. If Turtle needs named graphs, maybe it should be a different mime type for the same grammar / spec / format. (Breaking means adding new features that are incompatible and cause existing code to fail. ) 2.1 Namespace and Profiles Management Guidance on techniques for managing collections of namespaces, such as profiles/context in RDFa 1.1 (but may be apply to other syntax). This may be uses in new syntaxes (below) and added to existing syntaxes, if it can be done maintaining backward compatability The concept is a syntax document or a protocol header that points to a separate profile document that contains a set of short prefix / URIs pair that are used to abbreviate long uris. Should include consideration of adding profiles to remove the need for a bunch of namespaces - see Twitter annotations and Facebook open graph. Make it friendlier at the top of the document to avoid scaring user. All use of profiles / namespace management must be the same pattern across syntaxes. 2.2 Turtle Decide the syntax stack of how Turtle themed languages fit together (N-Triples, any future N-Quads, Turtle, maybe N3) including how the media types work. A specification for Turtle, generally compatible with existing systems which read and write it. Consider some syntax extensions such as allowing raw date / date time literals (timbl) to improve validation and ease of use. Consider making this the recommended RDF syntax. 2.3 Turtle Support for Graph Identification A specification for an extension to Turtle which includes support for graph metadata. 2.4 Named graphs support in RDF/XML A specification for an extension to RDF/XML which includes support for graph metadata. [Support for this at the workshop was not polled.] Consider leaving RDF/XML to support just a single graph or the RDF (2004) data model. 2.5 JSON A Specification for a way to serialize RDF graphs in JSON. Should include consideration of adding profiles to remove the need for a bunch of namespaces - see Twitter annotations and Facebook open graph. Make it friendlier at the top of the document to avoid scaring user. Suggest a survey of existing work and a community building "event" or process to bring alignment since this seems urgent to start soon. 2.6 Possible RDF/XML weak deprecations Revise the RDF/XML specification to advise that certain syntactic constructs not be used by new vocabularies indicating they are "weakly deprecated" or "archaic" - should not be used in new vocabularies, but should still be supported in software. No plan to formally remove from the specifcations. Candidates for weak deprecation in the RDF/XML syntax include but are not limited to: - reification rdf:ID on property elements - align with some "named graph" support - rdf:XMLLiteral datatype - use plain literals instead, with quoted XML. Problems with people misusing it with quoting markup, do not want XML C14N. Does not work with RDFa. - rdf:ID on node elements (typed or rdf:Description) - use rdf:about instead - xs:string used as a datatype - use plain literals instead OR equate them (like the RDFS rule) 2.7 Data Model Issues Consider weakly deprecating some of the data model vocabulary, especially those tied to RDF/XML syntax. Candidates for data model changes - reification vocabulary rdf:subject, rdf:predicate, rdf:object and rdf:Statement - rdf:Alt - rdf:Bag - rdf:Seq (e.g. use rdf:List instead; it's costly to have two similar options) - rdf:value - maybe a best practice to use a more specific term if it exists 2.8 RDF/XML and RDF Concepts Errata Apply RDF/XML and RDF Concepts spec errata, not discussed at this time. Typos, errata folded in, clarifications. 2.9 Collections syntax in RDF/XML Request for an less annoying syntax for collections as subjects, literals in collections (timbl) We do not expect to be breaking RDF/XML and therefore this would not be possible. 3 Semantics Work Items @@@ some places it says "Revise..." should be "Investigate, and if a suitable solution is found, revise...." 3.1 Inference Rules Fix the inference rules in the semantics, as they are currently incomplete. 3.2 Blank Nodes Revise the treatment of blank nodes in RDF, so that they correspond more closely to practice, e.g., SPARQL, updates, many systems. This might encompass only the mapping from syntax to graphs, or might also affect the RDF semantics. If one person has an RDF/XML document, including blank nodes, and sends it to two parties who load it into their systems, write it out again, and send to a fourth party. The fourth party should be able to tell that it has received two copies of the same document. 3.3 Simplified RDF Semantics Revise the specifications so the RDF semantics aligns with current practice, as seen in SPARQL systems. This may involve recasting RDF itself as a data description language instead of a knowledge representation language. 3.4 Archaisms Revise the semantic specification to reiterate advice not to use archaic syntactic constructs. Remove semantic conditions on container membership properties. Remove semantics for rdf:XMLLiteral. Make xs:string and plain literals be the same in simple entailment. 3.5 Literals as Subjects Should literals be allowed as subjects of triples (or even blank nodes and literals as predicates)? It was the sense of the workshop that this change would not be worth doing. 3.6 Semantics for the Next Steps Updating the semantics to handle extensions added to RDF, e.g, named graphs. This could be very tricky for the current style of the RDF semantics, particularly if there is interesting intended meaning to capture. 4 Graph Identification and Metadata Work Items Produce a W3C Recommendation which provides for interoperability for selected use cases for reification, named graphs, graph literals, annotations, etc. Breakout session minutes are available at 4.1 Graph Identification 4.2 Annotations Discussion of whether or not this should be in the charter, or whether this should be a 'time permitting' / 'may'. Use cases for provenance, annotations in general, time dependent features, etc. should be considered in defining named graphs. 5 Linked Data Work Items Provide guidance and recommendations that support the publication and use of Linked Data. Break-out meeting IRC minutes: 5.1 Codify the follow-your-nose approach to using URIs in RDF Despite the centrality of URIs in the RDF data model, the RDF specifications treat URIs simply as opaque identifiers that conform to a certain syntax. This reflects neither current practice nor the intention behind RDF and URIs. The W3C's RDF Recommendations lack any indication as to why does not make a good identifier for David Booth, or why it would be a good idea to consult to find out what might refer to. The linked data community has adopted a particular interpretation of URIs in RDF that is widely deployed and well-documented in tutorials and other non-W3C documents. The W3C documents should be updated to reflect this. Most of the ingredients can actually already be found scattered throughout W3C materials: the AWWW document, the httpRange-14 TAG Finding (which is just an email message), the Cool URIs for the Semantic Web Note. They should be tied together into a coherent document. References - Cool URIs for the Semantic Web (linked data current practice follows this) - Best Practice Recipes for Publishing RDF Vocabularies (on current practice) - Architecture of the World Wide Web, Volume One (definition of “authoritative”, “URI ownership” etc) - httpRange-14 decision (justification for current practice; not reflected in any recommendation) - Resource Identity and Semantic Extensions: Making Sense of Ambiguity (David Booth proposal) - Linked Data Principles (background) - URI Declaration in Semantic Web Architecture (background and terminology) What is a URI intended to name? What are the responsibilities and expectations of the URI owner, RDF statement author and RDF consumer? How far to dig, in doing follow-your-nose? Compute the complete transitive closure? Resolve ontology URIs? How can the RDF consumer determine what meaning the RDF author intended to convey (whether or not the RDF consumer chooses to use it)? References - Cool URIs for the Semantic Web - Architecture of the World Wide Web, Volume One - httpRange-14 decision - Linked Data Principles 5.3 Co-reference vocabulary as alternative to owl:sameAs Co-reference and other similarity relationships cause issues in linked data. Current practice is to overload the use of owl:sameAs. This is causing difficulties. There are complaints of misuse. We need more guidance and/or vocabulary. One proposal might provide only guidance; another might provide new vocabulary or adopt existing vocabulary. References - Mapping Concept Schemes in SKOS (in particular skos:exactMact and skos:closeMatch) - Ontology Design Patterns: Proliferation of URIs, Managing Coreference
http://www.w3.org/2001/sw/wiki/RDF_Core_Work_Items
CC-MAIN-2014-52
refinedweb
2,043
53.81
Anomaly Magazines rewritten from scratch with more features and quality of life. Please report bugs to me via discord arti#3278 The latest release can always be found here: Github.com Adds magazines to the game like wuut_magazines, but optimized for compatibility and redone from the ground up. If you have used the original Anomaly Magazines, this should feel pretty much the same. There are no changes in functionality. This addon is not compatible with Anomaly Magazines. Do not use with Anomaly Magazines. Do not use with anything that previously affected Anomaly Magazines. Do not use existing saves that had Anomaly Magazines. If you do want to use Anomaly Magazines, use RavenAscendant's converter here: Drive.google.com Some new features compared to Anomaly Magazines: And the biggest feature: Obviously this is not compatible with wuut_mags or other magazine mods. However we have incorporated a lot of quality of life tweaks: Use of MCM strongly recommended to enable tweaking of a lot of features. Proper RUS translation pending. Compat patches are available with EFT Reposition. The only conflict is defines.ltx which we tweaked to add the mag eject method. Optional features: - Ammo check with MCM support. Want to contribute? Submit a pull request here! Github.com Collaborators: RavenAscendant (UI work, inventory work) Ishmaeel (Bugfixes, playtesting) Crepis for icons Tester guinea pigs - Sneaky - EFP Discord Changelog Only registered members can share their thoughts. So come on! Join the community today (totally free - or sign in with your social account on the right) and join in the conversation. Changelog will be in Github commit history. We changed so much crap I lost track of it all. Thank you for the wonderful and necessary addon! But please, make it possible to select old, small icons magazines 1x1, already asked in the comments) Or tell me how to do it) Small icon patch is probably broken and will take a while to fix. Test at own risk. Wooow! Fantastic! Cant wait to integrate it with my LoW pack ;) Plz, no, my gamedata can only take so much. Amazing, this is the first magazine addon that I'm going to actually check out, since it doesn't look like it's gonna break every 5 minutes. get Jonessoft generic mod manager, will save your life fellow stalker "Save your life" is an understatement. Modding and tweaking STALKER has never been so easy. Suggest JSGME to all your friends, stalkers. try mod organizer 2 and tell me that Nice one I will try it on my next run. (buried) Plsss someone make a patch for Provaks Pack! I guess people hate Provaks pack for some reason? Because it is a repack and not an addon. Or in other words, gazillion addons and custom changes mixed together. Is it fun? Sure. But to make a patch or addon for it is a nightmare I assume. yea, pretty stupid, a talented guy that's trying to force people to play his modpack instead of releasing a standalone one here on moddb Indeed. Standalone version of his weapon modifications would be insane. the funny fact is that he's not getting any money from this, so why would you do that? I wanted to try Provaks Pack! but I was lazy to download that lot of GB hahahaha that pack is cluster of a mess if u check on the debug menu expecially on the weapon sheeesh i rather mod it myself so i know wich is compatible or not rather using that modpack def trying this out when the BaS patch is here! Woohoo it's out! i got this bookmarked for when the BaS patch drops proper! Rate it 10/10 very nice! I did like mad mags allowed you to put small mags into M and L size pouches though, any chance you might be able to include that as an option? If not, this is still going to be my prefered mag mod This is an oversight, I'll look into it No rush- I imagine being a stlaker and just jaming a mag into every space I have lol Will this be compatible with The Armed Zone or is it just things like BaS? As long as what ammo base game weapons use is not changed then there isn't an incompatibility. weapons that are added need to have ltx files made and added to our ltx system in order to use magazines. the BAS patch was made for us by an interested party. TAZ would probably be the same. Maybe as a third party thing, maybe by you. There are instructions on how to do it in the mod. i've been chugging along on good old wuuts' having manually patched everything and still bearing the odd crash. I almost don't want to try this out. But I will eventually. It just looks too good to be true. :') thank you op edit: i see old icons are still packed in. can they be used instead of the new ones? not a fan of the huge 2x1 mags yes, but you'd need to edit each magazine. If i get icon_overide working properly i may make something that does this using that system. 10/10 little note, I guess it's better to patch original sleep script instead of disabling sleeping info (cuz you really don't wanna to run time events when actor actualy sleeping) (ask Raven he know what I'm talking about) yeah I was lazy, I'll add a tweak to the original through monkey patch when raven is back Will this be compatible with Trader Overhaul Outfit Edition + BaS? TO - no patches needed BaS patch coming i can use new game loadouts and this gona work fine? FINALLY SOMEONE REMADE THE MAGAZINES! THX So I haven't played with it much yet, but this mod seems really, really good. EDIT: I noticed a little error. The 7.62x39mm RPK mags fill up to 40 rounds, but when loaded in the rifle they show 45. EDIT Again? So I don't actually know how I stuffed 5 extra rounds in a mag. If I find out I'll let you know. I did notice that if you use a 40 round mag in a weapon that normally caries 30, it won't let you reload until you're below 30 rounds. Not able to reproduce RPK issue. The reloading works by comparing the current rounds in magazine of a given type with other mags in your loadout. If you have more rounds loaded already, it won't switch out your magazine. In my instance, using the AEK-973, I have only 40 round RPK mags in my loadout, but it won't let me reload until I'm under 30 rounds in the rifle. Oookay...time to make another modpack, mine is heavily embbed to MadMags...time to work i guess Well, let's check... I just continue playing on my save. I have loaded Abakan. I unload it and game crush after I hover the cursor at magazine. All 3 PPSHs don't seem to support magazines, when unloading them I just get the bullets back instead of a magazine. All 3 don't accept a filled drum either. Apart from this issue, the mod is really good. Noted, we will have a patch for this alongside some other cleanup stuff So for those of us who use BaS, do we have to wait? cause this looks great can't wait to have it You should be fine. BAS added guns will just use bullets. when support is added your BAS guns should magicialy get a magazine containing the rounds you had loaded into the gun. only issue will be if BAS changes the ammo an existing gun uses. that would cause issues. Is this compatible with Mad Mags, or does this render Mad Mags redundant? This makes Mad Mags obsolete. We incorporated a ton of fixes from there into here. What about OPO? MadMags had some sort of patch for it, not sure why, that's why asking. I believe there is no patch required. What about compatibility patches? Mad Mags had an entire list of patches to use. our system has far fewer issues. and needs fewer patches. there is only one file that conflicts. defines.ltx there are some patches for that included. others may be needed. If this mod has a file conflict with any thing that isn't a mod by RavenAscendnat. it will probably need a patch. Mods that change what ammo guns use will need patches. They are easy to make. artifax: "This makes Mad Mags obsolete" Correct, I have archived Mad Mags with a reminder to finish your saved games or abandon them and move over here with starting a new game. Old games with Mad Mags/wuut's Mags removed will not work (not even without AM Redux, hehe). artifax' rewrite is something I've been waiting for myself, hope to find some time to play it! How do compatibility patches work? Is the redux mod designed to be more streamlined with compatibility? yes, read the description. The only conflict is the defines.ltx and the necessary patches are provided within
https://www.moddb.com/addons/anomaly-magazines-redux
CC-MAIN-2022-40
refinedweb
1,543
75.2
JDK 11: Taking Single-File Java Source-Code Programs Out for a Spin JDK 11: Taking Single-File Java Source-Code Programs Out for a Spin Are you ready for JDK 11? Check out this post to take a look at how one of JDK 11's new features behaves in the wild! Join the DZone community and get the full member experience.Join For Free The JDK 11 Early Access Builds include Early Access Builds. For this demonstration, I'm using the latest (as of this writing) OpenJDK JDK 11 Early Access Build 24. One of the first indications that support JEP 330 is included with this JDK distribution, which is seen when using the -help flag ( java -help): As shown in the last image, the "help" starts with a "usage" statement, and the last example in the usage statement describes how to use the Java launcher ( java) to run single-file source-code programs. Specifically, the output shows the following "usage" with the usage that is the subject of this post highlighted here: Usage: java [options] <mainclass> [args...] (to execute a class) or java [options] -jar <jarfile> [args...] (to execute a jar file) or java [options] -m <module>[/<mainclass>] [args...] java [options] --module <module>[/<mainclass>] [args...] (to execute the main class in a module) orjava [options] <sourcefile> [args] (to execute a single source-file program) To demonstrate this feature, I'm going to use a simple example adapted (very slightly) from that provided in the 24 May 2018 Mario Torre post on the OpenJDK jdk-dev mailing list. helloYou.jv #!/bin/java public class Hello { public static void main(final String[] args) { final String name = System.console().readLine("\nPlease enter your name: "); System.console().printf("Hello, %s!%n", name); } } I have called this file helloYou.jv. Note that it does NOT end with the .java extension that regular Java source code files end with, and I did not match the name of the file to the name of the class. In fact, I started the file's name with a lower case letter! When I try to run this file directly with OpenJDK 11 EA-24, I see an error ("Could not find or load main class helloYou.jv"): The following screen snapshot demonstrates that it works when I pass the flag --source=11 to the Java launcher. I highlighted in my post " Shebang Coming to Java?" that it sounded like the single-file source programs used with this JEP 330 support was not going to be allowed to end with the .java extension (which this extension would be reserved for traditional Java source files). This seems to be the case as shown in the next screen snapshot where I attempt to run this feature against the same code as above but, now, with the file name helloYou.java. The last image demonstrates that we cannot run .java files with a shebang, because they are treated as regular Java files and, thus, must meet the specification of regular Java source code files. With this early access build, if I comment out the shebang line, I can run the now traditionally compliant Java single source code file helloYou.java (even with the .java extension and without the --source=11 flag). Had I attempted that last maneuver with OpenJDK 10, attempting to run a Java source code file like that just shown would produce the error message discussed earlier: "Error: Could not find or load main class helloYou.java." I would have needed to compile the Java source code file into a .class file and would have needed to make sure it was on the classpath of the Java launcher. This post has been a first look at the feature single-file source-code programs that are now available in the JDK 11 Early Access Builds. Published at DZone with permission of Dustin Marx , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/jdk-11-taking-single-file-java-source-code-program
CC-MAIN-2020-29
refinedweb
671
61.77
>>:It's a good point but... (Score:4, Funny) But I dare you to write a more secure web service in , than in Java. I didn't know Whitespace [wikipedia.org] supported web services. Of course it does (Score:5, Funny) I didn't know Whitespace supported web services. Sure it does, I had a full shopping cart system at the end of my post by way of example. Prove me wrong... :-) Re: (Score:2) I tried to compile it, but I think you had a misplaced tab. Oh say can you C (Score:2) Don't know where it went, but there was supposed to be a "C" in there and I swear there was one when I hit submit... As it is though, I guess you can fill in your least favorite language! Re: (Score:2) Don't know where it went, but there was supposed to be a "C" in there and I swear there was one when I hit submit... Between pressing 'S-c' and 'Enter', you had a race condition. Re: (Score:3, Insightful) Usually when you see claims that Java is more secure than C, it is based on people finding more security bugs in C than in Java, but I am not sure that is a good measurement because C and Java are used in different places, and that makes a huge difference. For example, a buffer overrun in a desktop app (excel, photoshop, whatever) is not a security breach, it's just annoying. C is use Re: (Score:2) You fail it. If you're worried about security, you don't assume a best case scenario. "lalala, ladee dah, I'll just make sure my C code is perfect with no exploits and it'll be just as secure as Java." The reality is it only takes one simple, hard to find and debug fuck up and your application will be owned. In the same scenario using Java, the app would still be secure. In a perfect world, C and Java are just as secure as one another. In reality, it's not even comparable, Java wins hands down. Re: (Score:2) Re: (Score:2) Your argument implies that you think it is impossible to make one simple, hard to find and debug fuck-up in Java. But this is exactly the point: for certain types of f-ups, it is impossible to make them in some languages. You cannot overflow a string in Java. You can in C. Java is unambiguously safer in this respect. This is not a black and white issue. It is safer to use a language where there are fewer opportunities for human error, even if there are still some such opportunities. Re: (Score:3, Insightful) The problem is that *because* people are protected from certain very basic screw ups in Java companies automatically downgrade the quality of programmer and the level of oversight they use. The result, I believe, is that the end product is *even worse* than it would have been for the "more vulnerable" language. So - if you are talking academically about languages - Java is more secure by a long way. C has all the vulnerabilities that Java has plus a lot more. If you are talking about actual outcomes in Re:It's a good point but... (Score:5, Insightful) Bad choice of examples. That's what we were saying and thinking in 1998: IT to PHB: "Don't open any EXE files mailed to you, however Excel spreadsheets, Word docs etc, are fine". A exploitable buffer overrun in any application where malicious inputs exist is a security hole. Re: (Score:2) Never underestimate incompetence (Score:3, Insightful) Never underestimate incompetence. Sure, Java protects you against some kinds of buffer overflows (but then a couple of versions had such vulnerabilities in their native parts of the JRE instead), but it doesn't protect against any other kind of incompetence. There are probably a few SQL injection vulnerabilities and an XSS exploit being written somewhere right now. And someone out there is writing a servlet which reads and writes files off the hard drive, but isn't checking the paths, so really you can reque Re: (Score:2) Just because a bunch of Java programmers are morons doesn't mean the language sucks. Never said that Java sucks (Score:2) I never said that Java sucks. But it seems to me like TFA has a point. You still need to educate your devs and take security seriously. There is no magic amulet that you can just put on and be immune from security problems. Not that huge a difference (Score:3, Insightful) As you probably know already, virtually any CPU manufactured in the last years has some form or another of "no execute" flag. So someone could overflow your buffer all right, and... simply not be able to execute any code injected that way. And someone from the BSD gang could even add here that in their world they had a solution for that even before that. And someone who is security-minded, since that was the thrust of the article anyway, will have used some C++ library or another that checks string bounds. H Obviously... (Score:2, Interesting) More at 11 Re: (Score:2) That seems like a no brain statement. It doesn't matter what language I use if I write insecure code the application will be insecure. The point is that it's easier to write insecure code in some languages than in others. In fact, in some of them, it practically writes itself. That said, TFA is pointless. Let's see what the guy is comparing: - ASP (languages: VBScript, JScript) - ASP.NET (languages: C#, VB) - ColdFusion - JSP & Struts (language: Java) - PHP - Perl Now, one thing that stands out: there's no C or C++ here. In fact, there's no memory-unsafe language here at all. You know, the kind where you can actually get things such as buffer I have an hypotheses (Score:5, Insightful) Re: (Score:3, Funny) The problem is that the set of all haskell applications is too small to be statistically significant. OK, I'm just kidding. Re: (Score:2) I think the opposite because the better programmer will be rendered an average programmer by the difficulties of the language. Re: (Score:2) If that were true, we could prove evolution wrong. That which does not kill us makes us stronger, not weaker. The difficulties of the language are negligeable compared to the difficulty of writing a secure, complex app, simply because the language's complexity is negligeable compared to the sum complexity of all the apps that can be written in it. Easy-to-use, "simple" languages, ought to be more secure than C, but in the real world, only logo is really safer than anything else, just because you can do almost There's also the fact that (Score:2) There's also the fact that in Haskell (or, say, O'Caml) you can structure your application's interface to the web such that you cannot (by abstraction) do unsafe things unless you really want to(*). If you have such a framework the programmer has to be actively working against you to do unsafe things. (And there's nothing that can save you from an actively destructive/malevolent programmer inside your organization.) (*) Define Unsafe/Safe string variants and force all strings trough Escape/Unescape for all t language safety vs programmer ability (Score:2) I don't know if I agree; haskell programmers tend to be demographically more experienced (it's the only language I know of where it seems that the median programmer has or is working on a PhD), but I would also trust a relatively inexperienced programmer to write fairly good code in Haskell, especially if they used an existing web framework like HappStack. Static typing and well-defined libraries go a long way towards making it hard to do the wrong thing. This is one of the things I find compelling about Perl most secure (Score:5, Funny) Re: (Score:3, Informative) Re: (Score:2) IFFEN(Nawt Sek Ewer) DEN Apply(Bite) ADN Apply(PoynteeEnds) Its not black & white (Score:5, Insightful) Anyone who says all programming languages are equally exploitable is a fool. Sure, secure coding practices and standards are the way to approach the issue- not language selection, but it is, for instance, impossible to overrun a buffer in interpreted byte code and executed native code. The fact that stack crashing doesn't exist in interpreted code alone demonstrates that languages (or their runtime environments that are inherent to a language) are not all equal in exploit-ability levels. To say they are all the same is simplifying things too much. Yes, all languages have their exploitable bad practices, but some have more than others. Re: (Score:2, Insightful) Re:Its not black & white (Score:5, Insightful) Yeah, except this isn't a comparison by language. It's a comparison by platform technology. For example, JSP shows as one of the highest vulnerability ratios, whereas Struts (Apache's Java MVC framework) has just about the lowest vulnerability ratio (on par with ASPX). Clearly they are measuring *something* but it seems to have relatively little to do with languages themselves. If anything, it seems like web apps written in frameworks that don't actively discourage mixing code and presentation are more likely to have vulnerabilities, whereas frameworks that encourage separation more actively (and perhaps are newer frameworks) are less likely to have vulnerabilities. The worst two measured, Perl and JSP, are older technologies that date from the era before frameworks that enforced more MVC separation were common and before web app best practices really existed. Re: (Score:2) Re:Its not black & white (Score:4, Insightful) The test itself already has bias, precisely because it works on a family of programs that happen to have a very limited set of inputs, and where the avenues of attack are relatively limited in some very important ways. The core vulnerabilities of websites have been done to death, so at this point, barring utter stupidity, I'd have been surprised if the security problems were noticeably different depending on the language. Re: (Score:2) I think you're right to say that it's better to trust empirical evidence than go on seemingly logical assumptions. However, to me it looks like the study is making a _business case_ that all the tested languages are likely to produce roughly the same number of flaws. That is to say, as a business decision the programming language viewed by itself is not a significant factor. However I don't think it can be extended out to saying the language doesn't matter. It's not accounting for quality of programmer, des Re: (Score:2) I read the link, and since it requires registration of sorts to dl the pdf, I won't do that. But it seems like a very macro level type of study, and that seems to gloss over technical details that you need when judging a language. To use a car analogy, if you're measuring accident avoidance, and you note that the prius has the least amount of accidents, therefor it's the best for avoiding accidents. Ignoring the fact that what you're doing if your driving a prius, or the type of person that typically would u Re:Its not black & white (Score:5, Insightful) They made a statistical analysis of web languages. That's not generalizable to all programming languages as the Slashdot headline implies. All of these languages have several things in common: In short, all of these programming languages eliminate entire classes of potential exploits that other programming languages allow. Therefore, although these programming languages happen to be similar, that does not mean that programming language choice has no bearing on security. It just means that choice of programming language within a very narrow range of languages that are not a representative sample of programming languages as a whole has no bearing on security.: (Score:3, Insightful) Please show me the interpreted byte code langauge/runtime that has never had a buffer overflow exploit. I promise you that I can count one one finger higher than you can show me systems without a buffer overflow exploit. The fact that your saying what you're saying tells me you really have no understanding of how exploits happen at all, let alone any reason you should be talking about secure code. Re: (Score:2) Please show me the interpreted byte code langauge/runtime that has never had a buffer overflow exploit. What about this one [readscheme.org]? Re:Its not black & white (Score:5, Insightful) "All languages are exploitable" != "all languages are equally exploitable". You're the first person to bring the word "equally" into the conversation, and have missed the point. Re: (Score:2) it does if you language is caps insensitive and can handle a maximum string length of 19. Re: :Its not black & white (Score:5, Insightful) The difference is that flaws in the Java runtime (or any runtime) are A. infrequent, B. somebody else's fault, C. quickly fixed once discovered, and D. not specific to your individual installation. Thus, as long as you pay attention to the security notices, you're probably fine. Put another way, there are thousands of people looking for flaws in the Java runtime. When you're talking about flaws in your code, the only people looking for bugs in it are you and the bad guys. This means that it is much more problematic to have 1,000 bugs in your code than to have 1,000 bugs in the Java runtime. With the latter, the odds are in your favor that the flaws will be first discovered against somebody else's website. WIth the former, the only place those flaws can be discovered is on your website. culture (Score:2) I've done a lot of embedded development, some of it in the security sector, which is not tolerant of sloppiness. One of the products had a web interface implemented in PHP, which I partly coded. I don't use PHP regularly, so I was entirely at the mercy of the PHP online documentation. This was a few years back, but my god was it a struggle to glean from the documentation the bounds of validity of the different PHP function calls. In many cases I had to resort to untangling a long and often contradictory d Beat up any straw men lately? (Score:2) "That aims to dispel the myth that some languages will guarantee that an application will be more or less secure than other languages." Whoever said, besides your 16-year-old cousin that just figured out how to add a flaming skull animation to his MySpace page, that there is any web application programming language that will guarantee security. Sheesh. Bloody hell... (Score:2, Funny) Yes...but (Score:2) If you know what programming language a programmer uses you can tell if they know what they are doing or not. Re: (Score:2) Sure, as long as you have the language spec.:The Python Paradox (Score:5, Insightful) People who do anything because it interests & fascinates them on a personal level do better than those who are only in it for the paycheck. Doesn't matter whether it's programming, auto repair, landscaping, or anything else. Re: (Score:2) People who do anything because it interests & fascinates them on a personal level do better than those who are only in it for the paycheck. Doesn't matter whether it's programming, auto repair, landscaping, or anything else. Except gambling/poker. Re: (Score:2) People who do anything because it interests & fascinates them on a personal level do better than those who are only in it for the paycheck. Doesn't matter whether it's programming, auto repair, landscaping, or anything else. I paid for my undergrad degree by landscaping, and I can say with some certainty that it is neither an interesting nor a fascinating subject. Re: (Score:3, Insightful) Re:The Python Paradox (Score:5, Insightful) Exactly. The culture of a language is as or more important than the language itself. Indeed, the culture shapes the language (but of course, to a degree, the language shapes the culture). Java itself isn't a very good language, but it's the hordes of incompetent Java programmers who make it such a terrible choice for everything. This goes back to the Python paradox: companies want Python programmers to write Java for them. I will say this in Java's favor, however: It's a language where the smartest can't write code that confuses the dumbest, and where the dumbest can't write code that does too much damage. Re: (Score:2) I will say this in Java's favor, however: It's a language where the smartest can't write code that confuses the dumbest Not really. They dug the grave for the dumb in 1.1 already, with inner classes; and nailed the coffin shut in 5, with generics. In 7, they are going to finally bury it with lambdas [java.net]. Me, I'm waiting eagerly to see what the next incarnation of COBOL turns out to be... Re: (Score:3, Insightful) That alone makes it the best language for large business projects. Your coworkers will be a mix of good and bad, and pretending the bad programmers is a more damaging mistake than anything you can do to the code. Re: (Score:2) I will say this in Java's favor, however: It's a language where the smartest can't write code that confuses the dumbest, [...]. In my AP computer science class, some of my classmates don't seem to grok the whole OO concept (i.e. if I create multiple interacting classes, it confuses them. They do however understand "[Type] foo=new [Type]();", but only for predefined types). Re: (Score:2) It's an interesting essay, but it's just speculation. Not any more insightful than the serious posts on Slashdot. Re: (Score:2) This a very good point. It is the programmers that are important, not so much the languages, but languages do bias what kinds of programmers will use them. See also why Linus' Torvalds prefers C to C++ [cat-v.org], it has not so much to do with the language itself but with the kind of programmers that use the language. Turing-completeness isn't the point. (Score:2) Any language that lacks an inherent insecurity can be used to write secure apps, just as any language (Brainfuck, anyone?) can be used to write any program.* You choose a language not because it makes it possible for you to do anything, but because it makes it easier than another language. *I realize that there are cases where performance-per-core is critical, and that narrows your choices considerably. Still, in that situation, some use C, others use C++, and still others use Lisp. Only web applications (Score:2) OVERSOLD/HYPED: 'Web programming language' (Score:4, Interesting) 1. The languages being considered/charted are ASP, ASPX, CFM, DO, JSP, PHP and PL (I can guess at most of these acronyms). What's missing, obviously, are 'real' programming languages such as C, Java, FORTRAN, Ada, C++, Eiffel, etc. 2. A lot of these languages share a common (C) heritage, and I'd assert "inherit" a lot of the security weaknesses of C. That's particularly true of weak typing for scalars, including array bounds. The conclusion I think can be drawn from this is that we need a substantial increase in Web Programming practices, including languages. Any other conclusion is overreach. Re: (Score:2) Re: (Score:2) You forgot the functional programming languages, like Haskell, Ocaml, standard ML, Erlang, etc. They usually have a track record of higher security without loss in performance (that Java has), since the checks can happen at compile time. Of course you can still mess up things even in Haskell. E.g. [0,1,2,3,4,5]!!10 will cause a runtime exception. But the functions that can cause errors are well-known and can be avoided. Also, QuickCheck is a really great tool to test out all the possibilities. If I ever had a Re: (Score:2) ... "inherit" a lot of the security weaknesses of C. That's particularly true of weak typing for scalars, including array bounds. Can you explain what you mean here? Steve Ciarcia on programming languages: (Score:3, Funny) Re: (Score:2) Reminds me of one of my roommates in college. He had a poster of the saying (IIRC) "real programmers don't use operating systems they write directly on the hardware as god intended." Some truths are eternal (Score:2) If anyone knows who deserves the credit for that one, BTW ... Re: (Score:2) Java designers made a cunning workaround for that by removing "goto" from the language, though. Java (Score:3, Insightful) One of the things I liked about Java was that there aren't any buffer overflows to worry about. Well, apart from ones in the JVM, but they are few and far between. I don't understand when people say that all languages are as insecure as each other. Sure, people can do stupid things like not check input, etc - but when it comes to finding some sort of buffer overflow in a function/library? If I had to write a website that would be deployed onto a box which was not touched for 5 years, I imagine that a Java-based site would have a better chance of faring than a PHP one. A poor craftsman blames his tools (Score:2, Insightful) False dichotomy much? (Score:2) There isn’t just “secure” and “not secure”, you know? Some are more and others less secure. And I have only one thing to say: Haskell > Java > C :) (Java has built-in checks that prevent the worst errors of C. And Haskell also has them, but at compile time, so you get back the performance. Of course those language are just examples, and any similar languages could be placed in there instead.) Re: (Score:3, Insightful) In the case of an advanced JVM like HotSpot (the official Sun/Oracle JVM), you also get the performance back. If the array bounds checking can be removed without compromising security, HotSpot's JIT compiler will do so when compiling the Java bytecode into native instructions. COBOL (Score:2) Thousands of Banks can't be wrong! Right? This is absurd. (Score:2) Sure, if you take extra precautions with the buffer-overflow languages, your software can be just a secure. But in the real world, that's almost never the case. Projects are always rushed and mistakes happen. Every team has one or more weak links. And what coder prefers burdensome process to development anyway? Anyone who promotes C/C++ over Java for a back-end enterprise application is not a professional IMO. They come across as stubborn basement hackers who can't keep their resumes up to date. C/C++ was n But who provides the language implementation DOES (Score:2) [c-program.com] is worth another read. Apple can make a reasonable case that allowing other development tools in their little garden reduces their ability to ensure a secure system. I concur with posters who observe that some languages do have more protections than others; but in the end the application programmer needs to be careful and security aware, and there has to be complete trust in every step of the processing from what the programmer writes to what is run on gets(), the C standard library standard bug. (Score:2) Languages maybe. Standard libraries, oh boy for sure. From the man(3) page for the set of functions including gets(): Re: . self-proclaimed "ninja" it would seem.... (Score:2) a security "ninja" who uses windows xp for everything? its a bit like a design "guru" who uses mspaint on a monochrome monitor. and yeah its kind of obvious its not really down to the language, its down to the programmer. a little surprised by the stats though - i'd have thought perl hackers would have more security know-how than your average java monkey. Bogus article title and bogus conclusion (Score:2) There''s a hell of a lot more to programming than just Web programming and there are a lot of real programming languages that go to great lengths to make secure programming easier. The Security Ninja is a paper tiger. oh!!! (Score:2) i saw what you did there... Re: (Score:2, Interesting) Re: (Score:3, Funny) Careful. You might get him so mad that he'll have a buffer overflow and then core dump. Re:Duh (Score:4, Insightful) Yeah cause a language that makes it trivially easy to overrun a buffer, dereference null pointers and smash the stack is clearly a highly secure language. Oh wait... Re:Duh (Score:5, Insightful) Ye Olde Excuse: “you’re just not good enough” You know, in modern languages, you can once abstract that concept out that you don’t want buffer overflows and dereference null pointers, and you’re done. In C, you have to re-invent the wheel again and again and do the same micromanagement over and over. It’s like the man with three buttocks on Monty Python: We’ve done that! We’ve solved it. We have nice standardized solutions. (Java doing runtime checks by default. And Haskell doing them at compile time.) Use them! With modern languages, you can use your mental resources to tackle the actual problem, instead of having to constantly think about decades old and long solved problems that should long be included by default. And the biggest joke is, that most C programmers manually implement those systems themselves, and then act all proud, because they re-invented the wheel, except that it never received the literally decades of testing of the well-studied existing solutions. It’s dumb. Like those people re-implementing standard library functions. It’s unprofessional and inefficient. And very error-prone for no reason at all. Re: (Score:2) It's dumb. Like those people re-implementing standard library functions. It's unprofessional and inefficient. I thought that too until I found the bottleneck that is converting from float to int using (int) casting. I found the following to be multiple times faster: unsigned floatToInt(double d) // 1 52 { d += 4503599627370496.0; return (unsigned &)d; } Re: (Score:2) You mean like using std::vector and std::string instead of malloc, smart pointers instead of raw pointers and manual memory management? We did all that years ago...it's only the uneducated C hackers who keep giving C++ a bad name. Re: (Score:3, Insightful) We did all that years ago...it's only the uneducated C hackers who keep giving C++ a bad name. Well, that and the facts that anyway. Re:Duh (Score:4, Insightful) There are still tasks that are better suited for languages like C and C++. The interesting ones. The alternatives? "You mean I get to build another boring business web application using the latest kludgy framework du jour so that it's obsolete two weeks later when the more popular kludgy framework comes out. Oh boy!" (OK, it's a little bit better than that. But not much. :) ) Games development, systems development, OS development, embedded programming and the like are places where it pays for development to be done by people who know what they're doing using tried and true programming tools that demand such expertise. When did Slashdot become so dull that nobody was interested in this stuff anymore? Re: (Score:2) >Games development Game developers are desperately looking for something, anything better than C++. C# is looking better each day. Unfortunately, the choice of C++ is often dictated by externalities (no C# on PS3, etc.) >systems development, OS development There are several OSes developed in Java, C# and other languages. I'm pretty sure we'll move away from C-based OSes in 'near' future. >embedded programming and the like And it's more and more irrelevant (Moore's law). It's quite often easier to use mor Re: (Score:2) Citation needed. Maybe some tools in addition to C or C++ in some cases. Just because you can develop OSes in Java or C# (or parts of it anyway) does not mean it's a good idea. Web development can be done in C or C++, and it's equally stupid to do so. Unix variety OSes moving away from C is not Re: (Score:3, Insightful) Sure these categories of applications occasionally have bugs. They're difficult to do. What do you expect? However, how often they happen are pretty exaggerated as well. The Linux kernel is mostly written in C, of course. How often do you see it lock up for as difficult of a piece of software that is to write? How about all the millions of embedded pieces of code for mission-critical application that perform flawlessly day after day to the point that nobody notices them? Besides for every C++ desktop app Re: (Score:3, Insightful) I didn't RTFA but I agree with the sentiment of the summary. It's NOT THE LANGUAGE, it's the programmer and their knowledge of the interfaces they are working with. plenty of things can and should be written in C. It is perhaps a sharper blade than Java, but were professionals. There's 100 other mistakes you will make being uncareful when programming that have nothing to do with language choice that are far more important t Re: (Score:2) You got a nice statement there. Care to back it up with some actual arguments? You know: I'm afraid it's you who is insecure, not C, because... Re: (Score:2, Funny) >Oh cool, is C ellipsis the new C sharp? No, C... is secure and C# is not. Re: (Score:2) C... Oh cool, is C ellipses the new C sharp? No, it's just another form of object notation in the next version of Objective-C. sigh ... Everybody hatin' on PHP (Score:2) Re:Everybody hatin' on PHP (Score:5, Insightful) Re: (Score:2) A more secure PHP option, Quercus [caucho.com]..
https://developers.slashdot.org/story/10/05/07/1811238/choice-of-programming-language-doesnt-matter-for-security
CC-MAIN-2016-36
refinedweb
5,088
62.48
Created on 2008-04-01 12:13 by peter.otten, last changed 2014-08-23 15:28 by roippi. This issue is now closed. I'd like to suggest a different approach than the one taken in rev. 54348 to improve timeit's scripting interface: allow passing it a namespace. Reasons: - It has smaller overhead for functions that take an argument: >>> def f(a): pass ... # trunk >>> min(ht.Timer(lambda f=f: f(42)).repeat()) 0.54068493843078613 # my patch >>> min(mt.Timer("f(42)", ns=dict(f=f)).repeat()) 0.29009604454040527 - it is more flexible. Example: # working code, requires matplotlib from timeit import Timer from time import sleep def linear(i): sleep(.05*i) def quadratic(i): sleep(.01*i**2) x = range(10) y = [] for a in x: y.append([min(Timer("f(a)", ns=dict(f=f, a=a)).repeat(1, 1)) for f in linear, quadratic]) from pylab import plot, show plot(x, y) show() The above code works unaltered inside a function, unlike the hacks using "from __main__ import ...". - the implementation is simpler and should be easy to maintain. The provided patch is against 2.5.1. If it has a chance of being accepted I'm willing to jump through the necessary hoops: documentation, tests, etc. A more general approach would be to add both 'locals' and 'globals' to be used by exec. At least, I would change 'ns' to 'locals'. On the second thought, I actually wanted Timer to mimic eval without realizing that eval uses positional rather than keywords arguments. 'locals' is obviously a bad choice for the keyword parameter because it masks locals() builtin. Maybe 'local_namespace'? Generally, when I use timeit from the interpreter prompt, I use "from __main__ import *" as the setup code string. Then I can use all currently defined global symbols directly :) Alexander, I'm fine with a more specific argument name. ns was what the Timer already used internally. Antoine, from __main__ import name1, ..., nameN works fine on the command line, but inside a function you'd have to declare the names you want to pass to the Timer as globals which I find a bit clumsy. Apart from giving a syntax warning a star-import affects the generated bytecode and produces the (slower) LOAD_NAME instead of LOAD_FAST. On Wed, Apr 2, 2008 at 2:42 AM, Peter Otten <report@bugs.python.org> wrote: > Alexander, I'm fine with a more specific argument name. ns was what > the Timer already used internally. > Maybe it should be "locals" after all. It does not look like the conflict with builtin locals() is an issue. Note that this is what __import__ uses. I still recommend adding globals argument as well for completeness and more accurate timings when timed code uses globals. This note is simply a reminder that Antoine's 'from __main__ import *' solution fails in python3. Also, resolution of this issue probably could incorporate Issue1397474. >>> import timeit >>> timeit.timeit('None','from __main__ import *') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.0/timeit.py", line 227, in timeit return Timer(stmt, setup, timer).timeit(number) File "/usr/local/lib/python3.0/timeit.py", line 135, in __init__ code = compile(src, dummy_src_name, "exec") File "<timeit-src>", line 2 SyntaxError: import * only allowed at module level Georg, why did you reassign this? I'm sorry, this should have been another issue. Reassigning to you. See related discussion in issue 5441 and issue 1397474. Would still be nice to have something like this. The timeit module API is still crippled, especially now that "from __main__ import *" doesn't work in a function anymore. Attached is a patch that adds a 'global' kwarg to the Timeit constructor, which does pretty much what it says on the tin: specifies a global namespace that exec() will use. I originally had a 'locals' arg as well (to mirror the signature of eval/exec), but realized that the local vars I was passing to exec were not available to the inner function. Reason: the timeit module compiles/execs a *closure*, and closed-over variables and exec() simply do not play nicely. Possible workarounds were to munge locals() into the globals() dict, or to somehow inject the variables in the locals dict into the closure. I found neither of these options superior to simply not including a locals argument, for reasons of Least Surprise and maintainability. Patch includes some basic tests and documentation. I am particularly uncomfortable with writing docs so those very likely need some polish. Correction, the name of the argument is 'globals', not 'global'. Ben, thanks for the patch. Have you signed a contributor's agreement? You can find it at I did sign one right after I submitted the patch. Takes a few days for the asterisks to propagate I guess :) Ah, good. The patch looks fine to me, except that you should add "versionchanged" tags in the documentation for the added parameter. Ah yes. New patch improves the docs. New changeset e0f681f4ade3 by Antoine Pitrou in branch 'default': Issue #2527: Add a *globals* argument to timeit functions, in order to override the globals namespace in which the timed code is executed. Thank you, Ben! Your patch is now pushed to the default branch. Thanks Antoine. Cheers :-)
https://bugs.python.org/issue2527
CC-MAIN-2020-34
refinedweb
880
67.15
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 1 results of 1 Read and respond to this message at: By: davmp I'm seeing a number of marked errors related to logger methods such as calling logger.debug(), etc. I'm not sure if this is due to something I'm doing or whether there is something wrong with PyDev. Advice is appreciated. Here is a description of the typical pattern I'm using: * In the root level package for an application called 'myapp', define a logger for the application in a module called logger.py like this: import logging logger = logging.getLogger('myapp') * In other packages or modules within that application, import that logger and try to log to it using any of the info, debug, exception, etc. methods like this: from myapp.logger import logger class Foo(object): def __init__(self): logger.debug('Creating Foo: %s', self) PyDev (or is it PyDev extensions?) always marks the call to the debug method on this last line as an 'Undefined variable from import: debug'. How do I avoid this? ______________________________________________________________________ You are receiving this email because you elected to monitor this forum. To stop monitoring this forum, login to SourceForge.net and visit:
http://sourceforge.net/p/pydev/mailman/pydev-users/?viewmonth=200609&viewday=8
CC-MAIN-2015-18
refinedweb
223
66.64
07 December 2012 20:38 [Source: ICIS news] HOUSTON (ICIS)--US-based Styron will seek a 12 cent/lb increase for its polycarbonate (PC) material from 4 January, citing increasing raw-material costs, a company source said on Friday. The proposed increase for the company’s PC resins puts Styron as the second US PC producer to seek higher prices for January. SABIC Innovative Plastics has announced its price increase proposal calling for an additional 11 cents/lb for its Lexan PC material shipped on or after 7 January. SABIC is also seeking a 7 cent/lb increase for its PC blend material. Buyers were not surprised that SABIC IP would seek an increase, as feedstock benzene costs have increased. However, some sources were uncertain as to why the producer would seek an increase at this time with the unstable business climate in the ?xml:namespace> There are three PC producers in the US. Currently, ICIS has assessed general purpose moulding grade in bulk at 1.53-1.85/lb. Styron is also seeking an increase of 9 cents/lb for its Emerge advanced resins, which includes blends of PC and acrylonitrile-butadiene-styrene (ABS). The company is also seeking an increase of 9 cents/lb for its Pulse engineering resins, which are primarily PC/ABS blends. (
http://www.icis.com/Articles/2012/12/07/9622417/US-Styron-seeks-12-centlb-January-PC-hike-on-higher-feedstocks.html
CC-MAIN-2015-22
refinedweb
217
51.28
2.1.2 2020-06-19 Released-By: PERLANCAR; Urgency: medium - State that get_struct() must also be available as a static method. 2.1.1 2020-06-14 Released-By: PERLANCAR; Urgency: medium - No longer encourage putting color themes under app namespace (SOME::APP::ColorTheme::*) due to slow search. - Allow get_struct() to be called as a static method. - Minor Fixes/tweaks. 2.1.0 2020-06-09 Released-By: PERLANCAR; Urgency: medium; Backward-Compatible: no - [incompatible] Rename get_color() to get_item_color() to be more specific. - Add get_args(), get_struct(). - [incompatible] Color theme structure: rename property 'colors' to 'items' to avoid confusion with "item colors hash". - Define "item colors hash" which is the value of each item in the 'items' property. - Add note about status of the 2.x specification. 2.0.1 2020-06-08 Released-By: PERLANCAR; Backward-Compatible: no - Revise. - Specify 'args' property in the color theme structure. - [incompatible change] Rename method get_color_list() to list_items(). 2.0.0 2020-06-07 Released-By: PERLANCAR; Backward-Compatible: no - Renamed from Color-Theme to ColorTheme. - Bump specification version from 0.10 to 2. - Color theme module must now only contain a single theme. The color theme structure must be put in %THEME (instead of the old %color_themes). - Color theme module must now be a class that is instantiated. It can accept arguments ("parameterized color theme"). - Support dynamic theme (where the list of items cannot be fully retrieved from C<colors> property of %THEME, but from get_color_list(). 0.10.1 2018-02-25 Released-By: PERLANCAR - No spec changes. - Split Color::Theme::Util and Color::Theme::Role::* to their own dists. 0.01 2014-12-11 Released-By: PERLANCAR - First release, split from SHARYANTO-Roles and renamed module from SHARYANTO::Role::ColorTheme to Color::Theme::Role. Some other changes: split into two roles (Role and Role::ANSI for ANSI-specific stuffs), rename some methods.
https://web-stage.metacpan.org/dist/ColorTheme/changes
CC-MAIN-2021-39
refinedweb
311
51.75
This is an abridged version of the XS tutorial which is supplied with Perl. The first example prints "Hello world". Run h2xs -A -n test. This creates a directory test and files Makefile.PL, lib/test.pm, test.xs, and t/test.t. test.xs looks like this: #include "EXTERN.h" #include "perl.h" #include "XSUB.h" #include "ppport.h" MODULE = test PACKAGE = test Edit test.xs to add void hello() CODE: printf("Hello, world!\n"); to the end. Run perl Makefile.PL: $ perl Makefile.PL Checking if your kit is complete... Looks good Writing Makefile for test $ This creates a file called Makefile. Run the command "make": $ make cp lib/test.pm blib/lib/test.pm perl xsubpp -typemap typemap test.xs > test.xsc && mv test.xsc test.c Please specify prototyping behavior for test.xs (see perlxs manual) cc -c test.c Running Mkbootstrap for test () chmod 644 test.bs rm -f blib/arch/auto/test/test.so cc -shared -L/usr/local/lib test.o -o blib/arch/auto/test/test.so chmod 755 blib/arch/auto/test/test.so cp test.bs blib/arch/auto/test/test.bs chmod 644 blib/arch/auto/test/test.bs Manifying blib/man3/test.3pm $ Now we run the extension. Create a file called hello containing #!/usr/bin/perl use ExtUtils::testlib; use test; test::hello(); Download it here. Make hello executable with chmod +x hello, and run it: $ ./hello Hello, world! $ This extension returns 1 if a number is even, and 0 if the number is odd. Add the following to the end of test.xs from example one: int is_even(input) int input CODE: RETVAL = (input % 2 == 0); OUTPUT: RETVAL Run "make" again. Create a test script, t/test.t, containing # 3 is the number of tests. use Test::More tests => 3; use test; is (test::is_even(0), 1); is (test::is_even(1), 0); is (test::is_even(2), 1); Run it by typing make test: $ make test PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t t/test.t .. ok All tests successful. Files=1, Tests=3, 0 wallclock secs ( 0.03 usr 0.02 sys + 0.02 cusr 0.02 csys = 0.09 CPU) Result: PASS $ h2xs starts extensions. It creates Makefile.PL, which generates Makefile, and Lib/test.pm and test.xs, which contain the extension. Test.xs is the C part, and Test.pm tells Perl how to load the extension. Running make creates a directory blib for compiled output. Make test invokes perl such that it finds the extension files in blib. To test an extension, use make test, or run the test file using perl -I blib/lib -I blib/arch t/test.t Without this, the test script will fail to run, or, if there is another version of the extension installed, it will use that, instead of the version which was meant to be tested. This takes an argument and sets it to its rounded value. To the end of test.xs, add void round(arg) double arg CODE: if (arg > 0.0) { arg = floor(arg + 0.5); } else if (arg < 0.0) { arg = ceil(arg - 0.5); } else { arg = 0.0; } OUTPUT: arg Add '-lm' to the line containing 'LIBS' in Makefile.PL: 'LIBS' => ['-lm'], # e.g., '-lm' This adds a link to the C maths library which contains floor and ceil. Change the number of tests in test.t to "8", use Test::More tests => 8; and add the following tests: my $i; $i = -1.5; test::round($i); is( $i, -2.0 ); $i = -1.1; test::round($i); is( $i, -1.0 ); $i = 0.0; test::round($i); is( $i, 0.0 ); $i = 0.5; test::round($i); is( $i, 1.0 ); $i = 1.2; test::round($i); is( $i, 1.0 ); Run perl Makefile.PL, make, then make test. It should print out that eight tests have passed. Parameters of the XSUB are specified after the function's return value and name. The output parameters are listed at the end of the function, after OUTPUT:. RETVAL tells Perl to send this value back as the return value of the XSUB function. In Example 3, the return value was placed in the original variable which was passed in, so it and not RETVAL was listed in the OUTPUT: section. Xsubpp translates XS into C. Its rules to convert from Perl data types, such as "scalar" or "array", to C data types such as int, or char, are found in a file called a "typemap". This has three parts. The first part maps C types to a name which corresponds to Perl types. The second part contains C code which xsubpp uses for input parameters. The third part contains C code which xsubpp uses for output parameters. For example, look at a portion of the C file created for the extension, test.c: XS(XS_test_round); /* prototype to pass -Wmissing-prototypes */ XS(XS_test_round) { #ifdef dVAR dVAR; dXSARGS; #else dXSARGS; #endif if (items != 1) croak_xs_usage(cv, "arg"); { double arg = (double)SvNV(ST(0)); #line 30 "test.xs" if (arg > 0.0) { arg = floor(arg + 0.5); } else if (arg < 0.0) { arg = ceil(arg - 0.5); } else { arg = 0.0; } #line 137 "test.c" sv_setnv(ST(0), (double)arg); SvSETMAGIC(ST(0)); } XSRETURN_EMPTY; } Download it here. In the typemap file, doubles are of type T_DOUBLE. In the INPUT section of typemap, an argument that is T_DOUBLE is assigned to the variable arg by calling SvNV, then casting its value to double, then assigning that to arg. In the OUTPUT section, arg is passed to sv_setnv to be passed back to the calling subroutine. ( ST(0) is discussed in "The argument stack"). The lines before MODULE = in the XS file are C. Xsubpp just copies them. Parts after MODULE = are XSUB functions. Xsubpp translates them to C. In "Example 4: Using a header file" the second part of the XS file contained the following description of an XSUB: double foo(a,b,c) int a long b const char * c OUTPUT: RETVAL In contrast with "Example 1: Hello world", "Example 2: Odd or even" and "Example 3: Rounding numbers". this description does not contain code for what is done during a call to foo(). Even if a CODE section is added to this XSUB: double foo(a,b,c) int a long b const char * c CODE: RETVAL = foo(a,b,c); OUTPUT: RETVAL the result is almost identical generated C code: xsubpp compiler figures out the CODE: section from the first two lines of the description of XSUB. The OUTPUT: section can be removed as well, if a CODE: section is not specified: xsubpp can see that it needs to generate a function call section, and will autogenerate the OUTPUT section too. Thus the XSUB can be double foo(a,b,c) int a long b const char * c This can also be done for int is_even(input) int input CODE: RETVAL = (input % 2 == 0); OUTPUT: RETVAL of "Example 2: Odd or even", if a C function int is_even(int input) is supplied. As in "XS file structure", this may be placed in the first part of the .xs file: int is_even(int arg) { return (arg % 2 == 0); } If this is in the first part of the xs file, before MODULE = , the XS part need only be int is_even(input) int input When arguments to routines in the .xs file are specified, three things are passed for each argument listed. The first is the order of that argument relative to the others (first, second, third). The second is the type of argument ( int, char*). The third is the calling convention for the argument in the call to the library function. Suppose two C functions with similar declarations, for example int string_length (char *s); int upper_case_char (char *cp); operate differently on the argument: string_length inspects the characters pointed to by s without changing their values, but upper_case_char manipulates what cp points to. From Perl, these functions are used in a different manner. Tell xsubpp which is which by replacing the * before the argument by &. An ampersand, &, means that the argument should be passed to a library function by its address. In the example, int string_length(s) char * s but int upper_case_char(cp) char & cp For example, consider: int foo(a,b) char & a char * b The first Perl argument to this function is treated as a char and assigned to a, and its address is passed into foo. The second Perl argument is treated as a string pointer and assigned to b. The value of b is passed into the function foo. The call to foo that xsubpp generates looks like this: foo (& a, b); In the generated C code, there are references to ST(0), ST(1) and so on. ST is a macro that points to the nth argument on the argument stack. ST(0) is thus the first argument on the stack and therefore the first argument passed to the XSUB, ST(1) is the second argument, and so on. The list of arguments to the XSUB in the .xs file tells xsubpp which argument corresponds to which of the argument stack (i.e., the first one listed is the first argument, and so on). These must be listed. Verify this by looking at the C code generated for Example 3. The code for round. See perlxs. XSUBs are also allowed to avoid automatic conversion of Perl function arguments to C function arguments. See perlxs. Some people prefer manual conversion by inspecting ST(i) even in the cases when automatic conversion will do, arguing that this makes the logic of an XSUB call clearer. Compare with "Simplifying XSUBs". This example illustrates working with the argument stack. The previous examples have all returned only a single value. This example shows an extension which returns an array. This example uses the statfs system call. Return to the test directory. Add the following to the top of test.xs, after #include "XSUB.h": #include <sys/vfs.h> or #include <sys/param.h> #include <sys/mount.h> depending on your operating system (read "man statfs" for the correct details for your version of Unix). Add to the end:))); } In test.t, change the number of tests from 9 to 11, and add @a = test::statfs("/blech"); ok( scalar(@a) == 1 && $a[0] == 2 ); @a = test::statfs("/"); is( scalar(@a), 7 ); This routine returns a different number of arguments depending on whether the call to statfs succeeds. If there is an error, the error number is returned as a single-element array. If the call is successful, then a 7-element array is returned. INIT: says to place the code following it immediately after the argument stack is decoded. PPCODE: tells xsubpp that the xsub manages the return values put on the argument stack by itself. To place values to be returned to the caller onto the stack, use the series of macros that begin with XPUSH. There are five different versions, for placing integers, unsigned integers, doubles, strings, and Perl scalars on the stack. In the example, a Perl scalar was placed onto the stack. The values pushed onto the return stack of the XSUB are "mortal" SVs. They are made "mortal" so that once their values are copied by the calling program, the SV's that held the returned values can be deallocated. If they were not mortal, then they would continue to exist after the XSUB routine returned, but would not be accessible, causing a memory leak. This example takes an array reference as input, and returns a reference to an array of hash references: my $stats = multi_statfs (['/', '/usr/']); my $usr_bfree = $stats->[1]->{f_bfree}; It is based on "Example 4: Returning an array". It takes a reference to an array of filenames as input, calls statfs for each file name, and returns a reference to an array of hashes containing the data for each of the filesystems. In the test directory add the following code to the end of test.xs: SV * multi_statfs(paths) SV * paths INIT: /* The return value. */ AV * results; /* The number of paths in "paths". */ I32 numpaths = 0; int i, n; /* Check that paths is a reference, then check that it is an array reference, then check that it is non-empty. */ if ((! SvROK(paths)) || (SvTYPE(SvRV(paths)) != SVt_PVAV) || ((numpaths = av_len((AV *)SvRV(paths))) < 0)) { XSRETURN_UNDEF; } /* Create the array which holds the return values. */ results = (AV *) sv_2mortal ((SV *) newAV ()); CODE: for (n = 0; n <= numpaths; n++) { HV * rh; STRLEN l; struct statfs buf; /* Get the nth value from array "paths". */ char * fn = SvPV (*av_fetch ((AV *) SvRV (paths), n, 0), l); i = statfs (fn, &buf); if (i != 0) { av_push (results, newSVnv (errno)); continue; } /* Create a new hash. */ rh = (HV *) sv_2mortal ((SV *) newHV ()); /* Store the numbers in rh, under the given names. */ Add to test.t $results = test::multi_statfs([ '/', '/blech' ]); ok( ref $results->[0] ); ok( ! ref $results->[1] ); This function does not use a typemap. Instead, it accepts one SV* (scalar) parameter, and returns an SV*. These scalars are populated within the code. Because it only returns one value, there is no need for a PPCODE: directive, only CODE: and OUTPUT: directives. When dealing with references, it is important to handle them with caution. The INIT: block first checks that SvROK returns true, which indicates that paths is a valid reference. It then verifies that the object referenced by paths is an array, using SvRV to dereference paths, and SvTYPE to discover its type. As an added test, it checks that the array referenced by paths is non-empty, using av_len, which returns -1 if the array is empty. The XSRETURN_UNDEF macro aborts the XSUB and returns the undefined value whenever all three of these conditions are not met. We manipulate several arrays in this XSUB. An array is represented internally by a pointer to AV. The functions and macros for manipulating arrays are similar to the functions in Perl: av_len returns the highest index in an AV*, much like $#array; av_fetch fetches a scalar value from an array, given its index; av_push pushes a scalar value onto the array, extending it if, use newRV. An AV* or an HV* can be cast to type SV* in this case. This allows taking references to arrays, hashes and scalars with the same function. Conversely, the SvRV function always returns an SV*, which may need to be cast to the appropriate type if it is something other than a scalar (check with SvTYPE). To create a wrapper around fputs, #define PERLIO_NOT_STDIO 0 #include "EXTERN.h" #include "perl.h" #include "XSUB.h" #include <stdio.h> int fputs(s, stream) char * s FILE * stream This documents functions such as SvNV which convert Perl scalars into C doubles. This lists functions such as croak used for error handling. Beware that the version of the file on the website is not formatted correctly. This is the "official" documentation for Perl XS. This is the "official" documentation for Perl modules. This documents h2xs. This tutorial assumes that the "make" program that Perl is configured to use is called make. Instead of running "make" in the examples, a substitute may be required. The command perl -V:make gives the name of the substitute program. Jeff Okamoto,. This digest web version () was edited from that found in the Perl distribution by Ben Bullock. You can download the POD (Plain Old Documentation) format of this article. This may contain one or two bits of formatting which aren't actually POD. This document is an edited version of part of the Perl distribution and may be copied, modified and redistributed under the licence terms of Perl itself, the GNU General Public Licence or the Perl Artistic licence.For comments, questions, and corrections, please email Ben Bullock (benkasminbullock@gmail.com). / Privacy / Disclaimer
http://www.lemoda.net/xs/perlxstut/
CC-MAIN-2014-41
refinedweb
2,648
76.62
Technical Support On-Line Manuals Cx51 User's Guide #include <math.h> float log ( float val); /* value to take natural logarithm of */ The log function calculates the natural logarithm for the floating-point number val. The logarithm is the exponent to which the base (e or 2.718282... in the case of natural logs) must be raised to equal val. Or, loge ex = val The log function returns the floating-point natural logarithm of val. exp, log10 #include <math.h> #include <stdio.h> /* for printf */ void tst_log (void) { float x; float y; x = 2.718282; x *= x; y = log (x); /* y = 2 */ printf ("LOG(%f) = %f\n", x,.
http://www.keil.com/support/man/docs/c51/c51_log.htm
CC-MAIN-2020-05
refinedweb
108
77.43
All... I'm still working with the DayTrader streamer client and have run into another issue I cannot explain. Both the streamer and ws app client create Swing-based GUIs. I am in no way a Swing expert; however, all of the docs that I have read indicate that the GUI thread should remain up and running (along with the JVM) after main completes. Here is an example... public class JFrameExample { public static void main(String[] args) { JFrame f = new JFrame("This is a test"); ... f.addWindowListener(new ExitListener()); f.setVisible(true); } } From what I have seen all Swing apps use some variation of this, as do the DayTrader streamer and ws app clients. Unfortunately, when I try to run these clients in under Geronimo 2.0.1, the apps terminate when the main thread completes. I have added a Thread.sleep() to the main just to verify that the GUI remains up while the main thread is still active. Does anyone have any thoughts as to why the JVM is terminating with main while the GUI threads are still active and have not been closed? I've tried a SUN and IBM JVM and both result in early termination. The only thing I can think of is that something in the Geronimo client or the manner in which Geronimo packages the client that is changing the behavior. Thanks... Chris -- "I say never be complete, I say stop being perfect, I say let... lets evolve, let the chips fall where they may." - Tyler Durden
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200708.mbox/raw/%3C421012bd0708220727m5ed2ba1cmdffd2fdabda59544@mail.gmail.com%3E/2
CC-MAIN-2016-50
refinedweb
254
73.07
Today, in this article, I am going to talk about sensing the light using Photo-Resistor Sensor and BBC Micro:Bit. Basically, you will know all the basics of how to read an analog signal using micro:bit and display the readings on the screen of the micro: bit, not only displaying, but if you want to get more creative, you can use the readings and build something useful. We can detect the light level using micro:bit built-in feature, you can read here how to do it. So, let's get started. What is Photo-Resistor sensor or Light Dependent Resistor (LDR)? A Photo-Resistor sensor or light-dependent resistor (LDR) is a light controlled variable resistor. The resistance of a photo-resistor decreases with increasing incident light intensity or we can say that it exhibits photoconductivity. A photoresistor can be applied in light sensing circuits and light activated and dark activated switching circuits. A photoresistor is made of a high resistance semiconductor. In the dark a photoresistor can have a resistance as high as several megaohms; also while in the light, a photo-resistor can have a resistance as low as few hundred ohms. As it can be used in the light sensing circuit we are also using the LDR for sensing the light. And it looks like this. So, now, let us see how we can use this sensor with micro:bit and read the light level. Tools You Need - Micro:Bit (1) - Alligator clip (several) - Male jumper wire (3) - A resistor of 10k ohm (1) - Photoresistor or LDR sensor (1) - Breadboard (1) Connection The connection is like below. First, we need to connect the photoresistor (LDR) sensor to the breadboard and then, we need to connect the resistor of 10k. We are using resistor just to control the current and providing the current that is only needed for the LDR. So, connect GND of the Micro:Bit to the one end of the resistor and the other end of the resistor will be connected to the end of the LDR sensor from where we will read the values. Connect the PIN 2 of the micro:bit to the one end of the LDR via the resistor and connect 3V pin of the Micro:bit to another end of the LDR sensor. In this way, we're giving 3V to LDR from one end and getting output from another end using PIN 2. Connect exactly as shown in the below image. You can use any of the pins of the micro:bit like PIN 0, PIN 1, and PIN 2. This is how I have it connected. Code - Now, go to the variable block and choose a set item to block and place it inside the forever block. - Now, rename the item variable to input if you want otherwise, you can rename as per your choice. - Now, go to Advance and go to the Pins block. Choose "read analog pin" and place it by replacing 0 in "set item" block like this. Your code should look like this. - Now, again, go to the variable and choose a set item and place it below the first set item. If you want, you can rename the item variable to something else. I have renamed it as output. - Now, go to math block and choose Divide. - Replace the first zero with input variable and another zero with 50. So why are we dividing the reading with 50? Simply, because the result of analog read pin block will be a number between 0 and 1023 where 0 represents 0V and 1023 represents 3V. This value is stored in the variable input. This is then scaled to be a single digit number by dividing by 50. You can change the number 50 to make the light readings more or less sensitive. - Now, use the show number block from basic and assign the output variable to it like this and now, this is the final code. Download it and upload to the micro:bit. JavaScript code let output = 0 let input2 = 0 basic.forever(() => { input2 = pins.analogReadPin(AnalogPin.P2) output = input2 / 50 basic.showNumber(output) }) MicroPython code from microbit import * while True: input = pin2.read_analog() output = int(input/ 50) display.show(str(output)) Demo Thank You :)
https://microbit.hackster.io/anish78/light-sensing-using-photo-resistor-sensor-and-micro-bit-52872d
CC-MAIN-2019-09
refinedweb
714
72.97
Agenda See also: IRC log <fsasaki> checking attendees <fsasaki> scribe: Arle <Yves_> +Yves <fsasaki> Felix: We will table LQI discussion until tomorrow at 15:00 CEST. <fsasaki> 1 p.m. UTC Thursday - LQI comments <fsasaki_> <fsasaki_> I'm back. Internet died here completely. <fsasaki_> tadej: not clear what are the operational consequences of doing option 2) <dF> sorry, I am late <leroy> I vote option 1 <Ankit> +1 for option 1 Felix: We, as a working group, would take responsibility for the six URIs, we would have a W3C prefixed namespace, and that would be a normative part of the recommendation. There might be other implications, but we don't know yet. Tadej: Sounds simple, but there is uncertainty. Felix: Agreed. Jörg: We would also need to try to keep balance between the W3C and NIF branches, which could be hard. Dave: On balance, option 2a is a good thing to do anyway, but it is a risky way to advance ITS 2.0. We don't lose a lot by having it as a non-normative features since we have few adopters. ... We need to keep it as a separate piece of best practice and keep what we have done, the basic level that shows stability and level of interest. <dF> I disagree with Jörg's statement that W3C and NIF branches would need to be kept in sync Dave: For Option 2, taking on ontology into the WG gives us an ongoing maintenance burden. We need to know how mature things are and how we would maintain an ontology. We don't have the same depth of experience in this area. <dF> ITS2.0 would just continue using the uris as is in the W3C snapshot Dave: Getting the ITS spec out risk free in as timely a manner as possible is the priority. We could do #2 later. <joerg> I haven't meant sync but balance which is a difference. David: I don't know what is meant by balance, but regardless of whether it is normative or not, we still refer to a snapshot of NIF in time. Jörg: If you maintain a certain snapshot and NIF evolves, you'd need to see how you evolve on the W3C time. David: But that is an issue for ITS 2.1, 2.2, 3.0, etc. Jörg: Going to option 1 and using it as an example of a mapping to RDF, it has no risk and shows that it works. But I would argue that it does not belong in the spec, but somewhere else. Felix: Back conversion is non-normative and has been for some time. ... It has been *in the spec* for some time. David: Option 1 requires extensive editorial work. Felix: We still need comments from the RDF working group. If there is no ontology on the URI, put it there. But we need to see what they say. Based on today’s outcome, I would go back to the director and see what they say. <fsasaki> <scribe> ACTION: Felix to make an edit showing what changes option 1 would need for discussion next week. [recorded in] <trackbot> Created ACTION-561 - Make an edit showing what changes option 1 would need for discussion next week. [on Felix Sasaki - due 2013-08-21]. Felix: I have asked the RDF group for feedback. David: I want to explore 2a and think it makes the most sense from the standardization perspective, but I am not opposed to 1. If so, we could go to 3rd last call tomorrow. Felix: Is the WG confident to go to last call? Arle: Would it impact addressing Christian’s concerns? <philr> +1 Felix: With option 1, the idea is to go to last call with option 1 (NIF non-normative). <Ankit> +1 <daveL> +1 +1 <leroy> +1 <joerg> +1 David: We have enough agreement, let's more forward now and cut more discussion short. <dF> +1 for 1 and LC tomorrow <tadej> +1 for 1 Felix: Publications could be Tuesday, so we aren't losing much time if I can't get it tomorrow. I need to discuss with a few people. <Yves_> Yves: I have one unrelated question. I noticed in BlueGriffon that annotators ref, I notice that the definition is a set of space-separated references, but then we state that the results should be space-separated and ordered. My question is when you write the value, it doesn't need to be ordered… ... …but the parser needs to order it, correct? ... So it is OK to have an unsorted value, correct? Felix: Yes. Yves: Good. Then there is no problem. Felix: I assume we will make the publication on Tuesday. ... Reminder that the call tomorrow is at 13:00 UTC. <fsasaki> Arle: We won't use the regular GTM line. Instead use the ones Felix just linked to. This is scribe.perl Revision: 1.138 of Date: 2013-04-25 13:59:11 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: Arle Inferring ScribeNick: Arle Present: Arle Felix joerg Karl tadej yves daveLewis leroy Ankit philr dF Milan Regrets: christian pedro Agenda: Got date from IRC log name: 14 Aug 2013 Guessing minutes URL: People with action items: felix[End of scribe.perl diagnostic output]
http://www.w3.org/2013/08/14-mlw-lt-minutes.html
CC-MAIN-2016-44
refinedweb
884
73.07
I’m fascinated by the myth of the Lisp genius, the eccentric programmer who accomplishes super-human feats writing Lisp. I’m not saying that such geniuses don’t exist; they do. Here I’m using “myth” in the sense of a story with archetypical characters that fuels the imagination. I’m thinking myth in the sense of Joseph Campbell, not Mythbusters. Richard Stallman is a good example of the Lisp genius. He’s a very strange man, amazingly talented, and a sort of tragic hero. Plus he has the hair and beard to fit the wizard archetype. Let’s assume that Lisp geniuses are rare enough to inspire awe but not so rare that we can’t talk about them collectively. Maybe in the one-in-a-million range. What lessons can we draw from Lisp geniuses? One conclusion would be that if you write Lisp, you too will have super-human programming ability. Or maybe if Lisp won’t take you from mediocrity to genius level, it will still make you much more productive. Another possibility is that super-programmers are attracted to Lisp. That’s the position taken in The Bipolar Lisp Programmer. In that case, lesser programmers turning to Lisp in hopes of becoming super productive may be engaging in a bit of cargo cult thinking. I find the latter more plausible, that exceptional programmers are often attracted to Lisp. It may be that Lisp helps very talented programmers accomplish more. Lisp imposes almost no structure, and that could be attractive to highly creative people. More typical programmers might benefit from languages that provide more structure.). Programming languages do make a difference in productivity for particular tasks. There are reasons why different tasks are commonly done in different kinds of languages. But I believe talent makes even more of a difference, especially in the extremes. If one person does a job in half the time of another, maybe it can be attributed to their choice of programming languages. If one does it in 1% of the time of another, it’s probably a matter of talent. There are genius programmers who write Lisp, and Lisp may suit them well. But these same folks would also be able to accomplish amazing things in other languages. I think of Donald Knuth writing TeX in Pascal, and a very conservative least-common-denominator subset of Pascal at that. He may have been able to develop TeX faster using a more powerful language, but perhaps not much faster. Related posts: - Lisp and the anti-Lisp - Doing good work with bad tools - Why programmers are not paid in proportion to their productivity For a daily dose of computer science and related topics, follow @CompSciFact on Twitter. 42 thoughts on “The myth of the Lisp genius” When I was in grad school we used Lisp-Stat, and I think we may have been the only stat program to do so. For me, it’s 100% the syntax. Something about the exclusive use of parentheses makes programming in Lisp so much more enjoyable. It’s very weird. And I’m not even a very good Lisp programmer! I see how Lisp could be a good fit for statistics. R is said to be Lisp-like at its core, though most R code I’ve seen resembles FORTRAN far more than Lisp. But that says more about how the language is used than how it was designed. My guess is that 90% of the market for statistical software is people who apply statistics but don’t have much mathematical background and who would find functional programming unnatural. I’ve been thinking similar things the last few years about the Haskell community. Lots of people doing interesting things with Haskell, but, when you really look at it, it’s partly because they are all very logical thinkers and, essentially, very good programmers in general. Certainly people moving across to Haskell with relatively little experience in other programming in general can be seen to be suffering and the learning curve is steep enough that they don’t make it to the first ledge. However, there is also something to be said for the way a language makes you think about problems. Periodically, when stuck on some task, thinking about “how would I do this in Haskell” or “how would I do this in C” has helped unblock the thought processes. When I code in Scheme, it’s kind of the same: what are my data structures and how can I transform them? As opposed to what are my loops and branch points and where can I store some scratch data? Learning LISP is valuable because it provides a completely different perspective on what a programing language is. Coming from a procedural background a language like LISP will completely change the way you program and think about programs. Learning LISP isn’t about being super productive – its about broadening your perspective and becoming a better programmer regardless of the language. Plus LISP is a joy to program in, whether you’re getting things done faster or better isn’t the point. No other language lets you play at programing the way that LISP does. @kotfic John, Could we write a follow-up article, called “The Myth of the Math Genius,” in which you note that Newton, Boole and others were extremely productive in their math capabilities despite the lack of good math symbolism? And, thus, good math symbols are not really necessary? One of the hallmarks of a great programmer is that they (get to) choose their tools with care. Lisps, with their simple syntax, code-as-data approach, functional stance, the REPL, and macros, can be incredible intelligence multipliers. That being said, the really great Lisp programmers I have known personally have also been great at choosing other languages as needed. One of the interesting things about Clojure, Scala, and F# (and perhaps others; these are things I’ve played with) is that they take some or all of Lisp’s advantages and marry them to big libraries and run them on the common runtime engines (JVM, CLR). In theory, this takes advantage of the strengths of Lisps and the strengths of standard procedural languages. In practice, it’s a little more complicated than that, of course. a pity so few other programming languages have runtime malleable code. S-Exps and their more imperative cousin AST’s both offer the higher level “code modifying code” construct, and that meta-abstraction I think reflects a huge part of LISP’s real cachet. John, this is well written and I agree with you. The archetypal myth in the programming world is that there is some magic talisman out there, either a programming language, a methodology (whether agile or not), or whatnot, that will give even the most worthless programmer superhuman talent. What we find is that these tools are “talent amplifiers.” If you have the talent necessary to master them, they can definitely make you more productive. But if you have no talent, you’ll still write poor code. I put Lisp in that category. If somebody takes the time to study Lisp, they’ll find that just about every other tool is somehow a faint reflection of it. Example: I was reading a site the other day that was espousing declarative metaprogramming. But the author had gone off and created his own metaprogramming language interpreter, etc. The fundamental idea, metaprogramming, was wonderful and it had clearly made the author more productive in his work. But I found myself shaking my head, saying, “He would have saved so much time if he had done this in Lisp.” tl;dr: giving someone a chisel doesn’t make them Michaelangelo. There are some perfectly sensible remarks interspersed with some very strange ones! Most puzzling is the question of what exactly this post is about: you start by claiming to use “myth” in the Campbell sense, and then look at actual programmers. Well, which are you interested in? “Lisp imposes almost no structure, and that could be attractive to highly creative people. More typical programmers might benefit from languages that provide more structure.” I’ve heard this a lot, but it does not match my experience at all. Programmers who make a mess in Lisp also make a mess in Java or C or Python or anything else. If this was true, wouldn’t you expect Haskell and Ada to be more popular? Javascript has more structure than Lisp but still much less than most other languages, yet even mediocre programmers tend to be far more productive in Javascript than those other more structured languages. ).” Why is this “hard”? If they wrote in assembly and he wrote in C we’d easily attribute it to the language. If they wrote in C and he wrote in Python we’d also find it easy. Why is it hard to believe that moving up the abstraction continuum even further wouldn’t yield more productivity gains? If you really want to see why, just get the source code to a good Lisp program, and try porting it to some other language. When you end up with 10-100x more code, you’ll see why. (Are you really going to re-implement macros, multimethods, a condition system, special variables, etc.? Or do you know some clever way to achieve the same power without using abstraction?) “If one person does a job in half the time of another, maybe it can be attributed to their choice of programming languages. If one does it in 1% of the time of another, it’s probably a matter of talent.” That sounds exactly backwards to me. My friend can run a marathon in half the time as me — skill. The only way he could run it even 10x as fast is by taking a car — better technology. Programming isn’t the same as running, but across every field, using a better technology is the only reliable way I’ve seen to get better than 10x improvement. I like your point about Donald Knuth writing TeX in a conservative subset of Pascal! Lisp is most powerful programming language that exists. Lisp is also a way of thinking. It allow to create very complicated programs that can’t be written easily in other languages. But this programs must be written by very talented programmers that can think about very difficult problems. If you think about easy programs, then it doesn’t matter which language programmer pick, good programmers always do it right, but lisp one will be 10 times smaller. Genius or language is a false dichotomy. I like your post but here’s another, Deconstructing Genius, that addresses what sets mathematicians apart and goes a bit deeper into the personality characteristics necessary for world-class success. I think Larry Wall said Lisp programmers moved to Perl because they got tired of their source looking like a bowl of cold oatmeal with a bunch of fingernail parings in it. Of course, people also said C was popular because it allowed lower case characters. And ADA if I recall correct was un-popular even despite being a requirement in all US Government software. To use C you had to get special approval with justification. I think pointing out the lack of an ADA compiler for the target hardware was usually sufficient. That said I liked the strong typing in ADA but never tried writing a line of it. I recall Lisp-Stat and X-Lisp-Stat were very popular among statisticians because they were free and powerful, especially the graphics capabilities. R was maybe a gleam in some folks’ eyes but certainly not ready for prime time. Now that R is mature and available, I imagine a mass migration has happened. X-Lisp-Stat was also the language of choice for Jan DeLeeuw, an early promoter of reproducible computing in statistics. 1. Do you know lisp? 2. Have you used lisp continuously for at three months? 3. Have you worked with other lispers and deployed a large lisp project? Hey, I have a blog post I’d like you to read, I titled, “Flying looks pretty easy, what’s up with the payscales of pilots?”. Too many assumptions, “I think”, etc. In the days when people programmed on Lisp Machines where I worked, I was surrounded by programmers of amazing caliber. I sometimes thought that if only there were more Lisp Machines, there would be more smart people. 🙂 No language is good for everything, but as formerly one of those “LISP genius” programmers (moved on for some time, but definitely was), I: (a) don’t need to prove it. I know what I know; (b) definitely know we/they exist (c) know exactly what it is about LISP that made it possible to do some rather incredible things “easily” (relatively) (d) know those principles so well, the language no longer matters – as long as it is a reasonably featured language, I wouldn’t hesitate to try anything that was formerly a “LISP only” project After being immersed in the LISP “wa” for a long time, to the extent of having multiple times created entire LISP development environments (usually based in an initial assembly-language interpretor, and then extended massively in LISP itself), one’s method of understanding problems and their solutions is forever changed. I even was a developer of one of the most advanced LISP machines commercially offered. It is this change in the way of thinking about problems that makes the “genius” aspect occur, assuming one has the personal bent for it. And it isn’t then limited to LISP which, for many reasons, is not a practical language for commercial delivery in many fields. OK, ’nuff said. Most programming languages are turing-complete; anyone with enough ingenuity and time can accomplish the same task with Lisp as they can with assembly. Verbosity distinguishes programming lanugages. Because Lisp requires less typing than other languages, Lisp affords programmers the mental space to write more complex programs. That’s why AI is typically done in Lisp. Java has been compared to the Catholic Church: change must come down from the Pope. To change Java, one must 1) convince the Sun/Oracle developers to adopt the change 2) wait for the change to be applied to Java 3) wait for the next stable Java release 4) wait for users to update their Java version. The process takes years. In contrast, Lisp programmers can bend Lisp to their needs in minutes by using macros. You can define your own constructs (e.g. custom if-then’s and loops). You can implement a special object system, or try with different threading models. In Java, or any other monolithic language, you have to work with what they give you. There is some truth in a statement like “Lispers are geniuses who program circles around lesser folk”: Lisp does, in fact, enable programmers to do just that. Also, the mindset of functional programming can greatly improve the quality of code produced in any language: there is far too much ad hoc code that takes no input, manipulates global variables, and prints the results out rather than taking input, manipulating local variables, and returning output. That ad hoc code is USELESS. There is also some truth that Lisp demands a higher quality programmer. IQ aside, many programmers are taught to think that programming must be done imperatively (C-style). They’re basically using high level assembly, for all the convenience of their chosen languages. And so they write new languages to add power to the old ones: Groovy, Scala, BeanShell. Greenspun’s Tenth Rule is “Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” The desire for more powerful languages has lead to the creation of engines for scripting languages. It’s why web browsers have JavaScript, why video games use Lua, and why Windows has a half dozen application languages: the underlying languages are terrible, and good developers learn to work around them. You don’t have to be a genius to learn Lisp or to use functional programming, but there is a correlation between Lispers and Computer Science education, interest in theory, and natural curiosity. A child can learn BASIC, Lispers tend to have PhD’s. I started programming in SAS-a high-level, specialized language for processing datasets. SAS has a macro language-not in the sense of Lisp of course, but still a useful macro language. For the work I do, the difference between a programmer who uses the macro language and who doesn’t is easily 5x if not more. The reasons are that 1. You use much less code, 2. Your code becomes massively reusable, 3. You can do things that are hard or nearly impossible in the base language 1 and 2 mean you write programs about 2x faster but spend less than 1/10 the time maintaining them. And 3 is just out of this world. So, I would be genuinely surprised is a genius programmer wasn’t at least 10x as productive in Lisp as in C, and 100x really wouldn’t surprise me at all. Winner of the last Google ai contest used LISP: Have you looked at the code for TeX? Although it was written explicitly to be readable, and it is, it takes control of the smallest details, albeit in a painstaking and methodical way. I think that it would be difficult to argue that TeX could have been written by a non-genius. Although not a direct comparison with lisp, Doug McIlroy’s response to Knuth’s Literate word counting program shows the advantage of flexible tools. Admittedly, McIlroy is also a genius, but given the tools available at the time your average UNIX programmer would have a good chance of coming up with McIlroy’s solution, but very few would come up with Knuth’s. @scotty Why was he the only one using Lisp among the top 100? It is pretty safe to make the claim about Lisp that it is a programming language. Anything beyond that and you are on the hook for believing what you hear lol. One problem with Lisp is its name 🙂 I mean, would you want to use a programming language named after a speech impediment? How about Limp? Or Stutter? Then again, Microsoft got pretty far despite its name 🙂 Yes, I think that’s broadly right. Although I’ve never met anyone who claimed to be a genius because they use Lisp, like you describe. The missing feature between most Lisp and non-Lisp languages is macros. Macros, programmed right, make a big difference to productivity in my opinion. It’s not a myth and it’s not genius. It makes a difference, but not a huge one. Lesser programmers turning to Lisp (or Scheme) will benefit from Lisp simply because it encourages a functional programming style. In the presence of sufficient computing resources a functional programming style leads to cleaner and more parallelizable code. Transplant that back into the language you came from and you’re more productive. I think a lot of it depends on the programmer and the project. A really skilled programmer who is deeply familiar with Lisp and can think in macros can save a lot of time — maybe not moving 100 times as fast, but much more than ten — if and only if the project is big enough and complex enough to gain from that sort of treatment. Lisp comes with some startup costs in terms of initial effort, and you have to balance what you gain from Lisp against the import whizbangeffortlessness of simple tasks in a language like Python. I’m not, by the way, the programmer I describe above. I’m a fairly able intermediate Lisper; when I realize I need a macro, I can figure out how to write one, but it breaks my flow, because it’s not a fully-integrated part of my programming technique. I think a lot of coders never get past that stage in Lisp, and so they don’t see the advantages you describe. I also think you’re being too skeptical of the gains the right language for the job offers. I wrote a toy Scheme interpreter in Python, and it took an hour and a half. I started to do the same in Object Pascal (which, admittedly, I don’t know well), and it was poised to take me a couple weeks of part-time work before I decided not to bother. They should let the Lisp people compete in Top Coder. The C++ programmers kick everyone else’s butts in those competitions. I love Lisp (and Prolog), and agree with Peter Norvig that many of the “patterns” in C++/Java are just warmed-over Lisp. But the programmers. I don’t think most academics have ever seen really talented programmers. I worked at Carnegie Mellon, then at Bell Labs, and even published books on programming languages (Prolog-like) and type-theory a la ML, but I’d never met a great programmer until I moved to a 200-person software company. Again, I urge you to check out the Top Coder competitions. I used them to bone up my programming skill from that of an academic to that of a professional programmer. And whatever you do, don’t bet against the game coders who eat, breathe and think C++. Somehow my post didn’t get logged. It happens: Made using a smartphone. Anyway, there are a lot of success stories using LISP. But there’s a lot of LISP to be found tucked away in other places, such as the statistical programming language, R. Incidently, Smalltalk is every way as powerful as LISP and is the language I prefer, although I would never move from R because of its wealth of statistical and numerical packages. One area I do not know which appears to have some significant computational legs is the world of OCAML. They seem to work at levels of abstraction well beyond that of LISP or Scheme or Smalltalk. Several months ago, I attended an “Alternative Languages” group meeting–so all of us were open to the semantics and uses of other languages–and, at the end of the meeting, the conversation turned to languages used in various classes. I cannot remember the exact words, but I remember the gist very well: “In one class, we were required to use C, so that those who weren’t familiar with Lisp wouldn’t have an advantage over everyone else. We were only able to cover one or two topics.” said one person. “Yeah, in another class, we used Lisp, and I was amazed by what we were able to cover! We covered a lot of stuff.” The thing I took from this is that Lisp really lets you do amazing things, in a way that a language like C cannot. lisp is a really good language for learning and understanding the essence of computation, and the lisp geniuses of legend are just the people who stuck with lisp the longest and learned the most. the purity and consistency of lisp allow an exercise of power that invites working on harder problems, leaving your brain in a better condition than before. i only really caught up to my friends and classmates, who have all programmed computers for much longer than i have, after learning lisp and using what i had learned from it in other languages. i’m still just a novice, but sicp and paip have changed my life for the better. When I posted 7 months or so ago, I should have done two things, quote Dijkstra on LISP, and straighten out the record on R, part of which I helped confused by being incomplete. First, Djikstra: — Edsger Dijkstra, CACM, 15:10 Second, if R is written like FORTRAN, it isn’t really R-ish, as intended. R is a strongly twisted dialect of Scheme, a LISP cousin, but it has many conveniences (purists would call them warts) for supporting numerical computation, calling code written in other languages, data structures making it amenable to statistical application (tables, data frames, and factors, as well as lists, vectors, and matrices), deferred evaluation, smart initial values for function parameters, and sparse matrix structures. The quintessential way of cooking R code is to decompose your problem in terms of LISP-like mapping operations, e.g., sapply(X=x, FUN=function (y) PointInPolygon(y, P)) or similar ones usingmapply, cumsum, and the like. Admittedly this is not always possible. Also, some things which are natural in LISP, and efficient when using a good LISP implementation, are inefficient and not preferred in R. The notable one is that building data structures up as you go incurs inefficiency, so the normative way is to preallocate an object all at once and fill it in. That’s not a very LISPish thing to do. By the way, Paul Graham groks LISP, bigtime. Wasn’t it Peter Norvig that said he never expected programmers to be a as productive in C++ as Lisp until he started working at Google? Fundamentally, Lisp is a nuts-and-bolts language with familiar things in it made up out of bits. You can do everyday, stupid things in it, in stupid ways, like in other languages. It has strings, integers, lists, vectors, structures. Programs made of step-by-step statements and loops, variables that can be assigned and even goto. You can write “Fortran” in Lisp if you are so inclined. Lisp is better organized than other languages, and that will show more and more the more you master the language. Features that appear strange to newcomers will, in time, show themselves to be well-designed, and in some cases to actually be the best possible technical choice. I takes about a year of earnest “Lisping” to begin to “get” it. One thing is: to become good at Lisp, you have to be the type of person who can understand pointers. Although you don’t have to chase memory leaks or segfaults in Lisp, the pointer semantics is there. If you’ve been trying to program in C and things like “int **” confuse the heck out of you, you will probably not go that far in Lisp. Although Lisp doesn’t have “int **” type declarations, it does have pointer-based structures. Lisp programs frequently make use notions of different kinds of equality between objects: are two values actually pointers to the same object, or to distinct objects that are equivalent in some way? Lisp is not a refuge for programmers who don’t grok referential semantics. It is completely wrong, though, that Lisp is a language for the lone genius, because that implies it is difficult. Rather, Lisp can turn a good programmer into a genius. Lisp has the tools in it left behind by great programmers to enable other programmers to be great. Lisp removes some those barriers out of your path which have nothing to do with lack of talent; the rest is up to you. For instance, it does not take a lot of work in Lisp to intercept and augment the compilation process: write code that puts code together, which is then compiled. Stuff like that *sounds* like it requires a genius programmer, but it doesn’t require a genius programmer in Lisp. Programmers deserve to have that kind of access to the programming language. The bar is not so low that every dummy can do that, but it’s not so high either. If you give people the API, they can use it. If you don’t give them the API, then they have to be geniuses to make everything from scratch. Say, if you’re a very sharp C++ programmer, if you spend a little time with Lisp, you will be amplified into a genius. I think someone who spent a lot of time with C and C++ for many years will “get” a lot of things in Lisp sooner. Especially someone with some background in writing compilers, and also who has meta-programmed using code generation techniques, or used a lot of templates, or very complicated abuses of the C preprocessor. For programmers like that, Lisp is like going to heaven after a lifetime of hardship. I think it should be assumed that Lisp programmers are average programmers and start from there. Thank you for the articles, very useful! I’ve started learning LISP more then year ago, but gave up it soon. I had a lot of problems with choosing framework and with it’s installation. Recently I read your guide which helped me to setup emacs with slime correctly!
https://www.johndcook.com/blog/2011/04/26/the-myth-of-the-lisp-genius/
CC-MAIN-2018-51
refinedweb
4,811
70.13
Iconify for Plotly Dash Project description Dash Iconify Dash Iconify based on Iconify is a Dash component library which brings over 100,000 vector icons. Table of contents Installation pip install dash-iconify Quickstart from dash_iconify import DashIconify from dash import Dash app = Dash(__name__) app.layout = DashIconify( icon="ion:logo-github", width=30, height=30, rotate=1, flip="horizontal", ) if __name__ == "__main__": app.run_server(debug=True) Using with dmc Dash Mantine Components enables using icons natively. import dash_mantine_components as dmc from dash_iconify import DashIconify button = dmc.Button("Send Mail", leftIcon=[ DashIconify(icon="fluent:folder-mail-16-filled") ]) Keyword Arguments Visit this site to browse all the available icons: Keyword arguments: - id (string; optional): The ID used to identify this component in Dash callbacks. - color (string; optional): Color. - flip (a value equal to: "horizontal", "vertical"; optional): Flip the icon horizontally or vertically. - height (number; optional): Icon height. - icon (string; optional): Icon name is a string, which has 3 parts: @api-provider : icon-prefix : icon-name provider points to API source. Starts with "@", can be empty (empty value is used for public Iconify API). prefix is name of icon set. name is name of icon. - inline (boolean; optional): Toggles inline or block mode. - rotate (a value equal to: 0, 1, 2, 3; optional): Rotates icon, 0: 0 deg, 1: 90 deg, 2: 180 deg, 3: 270 deg. - style (dict; optional): Inline style. - width (number; optional): Icon width. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages. Source Distribution dash_iconify-0.1.2.tar.gz (17.3 kB view hashes)
https://pypi.org/project/dash-iconify/
CC-MAIN-2022-27
refinedweb
278
51.14
Why am i getting 0 ?? And compiler warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘long unsigned int’ #include <stdio.h> //function declerations... int power(int base, int on); int main(void) { printf("%d",power(2, sizeof(int)*8)-1); return 0; } int power(int base, int on) { int pf = 1; for (int i = 0; i < on; i++) { pf *= base; } return pf; } If your int or unsigned int is 32-bits, neither can store the value 4294967296 since this is 2**32 which would require a 64-bit type. Even uint32_t can only store the max value 2**32 - 1 so your value is just 1 out of range. But int32_t can store the max positive value 2**31 - 1 so the signed type is even further off.
https://codedump.io/share/Cw69zfCkdm0k/1/cant-i-store-4294967295-in-unsigned-int--int-is-4-bytes-on-my-machine
CC-MAIN-2016-44
refinedweb
133
76.86
accessible Skip over Site Identifier Skip over Generic Navigation Skip over Search Skip over Site Explorer Site ExplorerSite Explorer Joined: 9/17/2016 Last visit: 1/24/2019 Posts: 15 Rating: (14) I have seen a few questions on how to get the Arduino Sketch feature up and running on the IOT2020 or IOT2040, here is how. This includes using I2C, SPI and PWM (All at the same time). here is the sketch I used if it helps #include <SPI.h>#include <Wire.h>#include <LiquidCrystal_I2C_8574.h>#include <SeeedOLED.h>#include <avr/pgmspace.h>LiquidCrystal_I2C_8574 lcd(0x20, 20, 4); // set the LCD address to 0x20 for a 16 chars and 2 line display//const byte SS = 10; // omit this line for Arduino 1.0 onwardsconst byte MAX7219_REG_NOOP = 0x0;// codes 1 to 8 are digit positions 1 to 8const byte MAX7219_REG_DECODEMODE = 0x9;const byte MAX7219_REG_INTENSITY = 0xA;const byte MAX7219_REG_SCANLIMIT = 0xB;const byte MAX7219_REG_SHUTDOWN = 0xC;const byte MAX7219_REG_DISPLAYTEST = 0xF;const int AnalogPin = 9;void sendByte (const byte reg, const byte data){digitalWrite (SS, LOW);SPI.transfer (reg);SPI.transfer (data);digitalWrite (SS, HIGH);} // end of sendBytevoid setup (){SPI.begin ();sendByte (MAX7219_REG_SCANLIMIT, 7); // show 4 digitssendByte (MAX7219_REG_DECODEMODE, 0xFF); // use digits (not bit patterns)sendByte (MAX7219_REG_DISPLAYTEST, 0); // no display testsendByte (MAX7219_REG_INTENSITY, 3); // character intensity: range: 0 to 15sendByte (MAX7219_REG_SHUTDOWN, 1); // not in shutdown mode (ie. start it up)// init 16x2 LCD display on I2C 0x20lcd.init(); // initialize the lcd// Print a message to the LCD.lcd.backlight();lcd.setCursor(0, 0);lcd.print("Hello from!");lcd.setCursor(0, 1);lcd.print("The Breadboard");delay(2000);lcd.clear();lcd.setCursor(0, 0);lcd.print("Counter:-");// init OLED 128x64 displaySeeedOled.init(); //initialze SEEED OLED displaySeeedOled.clearDisplay(); //clear the screen and set start position to top left cornerSeeedOled.setNormalDisplay(); //Set display to normal mode (i.e non-inverse mode)SeeedOled.setPageMode(); //Set addressing mode to Page ModeSeeedOled.setTextXY(0,0); //Set the cursor to Xth Page, Yth ColumnSeeedOled.putString("Hello viewers"); //Print the StringSeeedOled.setTextXY(3,0); //Set the cursor to Xth Page, Yth ColumnSeeedOled.putString("youtube");SeeedOled.setTextXY(4,0); //Set the cursor to Xth Page, Yth ColumnSeeedOled.putString("thebreadboardca"); //Print the StringpinMode(AnalogPin, OUTPUT);} // end of setupvoid number (const int num){char buf [8];sprintf (buf, "%8i", min (max (num, 0), 99999999));lcd.setCursor(0, 1);lcd.print(buf);SeeedOled.setTextXY(6,0); //Set the cursor to Xth Page, Yth ColumnSeeedOled.putString(buf); //Print the StringanalogWrite(AnalogPin, num & 0XFF);// send all 8 digitsfor (byte digit = 8; digit > 0; digit--){byte c = buf [8 - digit];if (c == ' ' )c = 0xF; // code for a blankelsec -= '0';sendByte (digit , c);}} // end of numberunsigned int i=99990000;void loop (){number (i++);if (i > 99999999) i = 0;delay (1);} // end of loop 6 thankful Users Joined: 1/5/2012 Last visit: 11/16/2018 Posts: 1 (0) Hi. Thanks for the tutorial. One problem. When i connect the IOT2020 by USB i get a device called "CDC serial" under other devices. Where can i find a driver for this? (I'm on Windows 7) Regards Joergen. Joined: 6/12/2013 Last visit: 8/7/2019 Posts: 9 (18) 2 thankful Users Joined: 2/13/2017 Last visit: 2/15/2017 Posts: 3 I am evaluating the use of IOT2020 for a very simple application using an Arduino sketch. I have found little information on IOT2020 but I have concluded its behavior is similar to Intel Galileo. According to the documentation of Intel Galileo board, when loading a sketch, after power cycle the sketch is lost. The way to go is to boot the board from a SD card using an image provided by Intel. My question is, what is the behavior of IOT2000 after the power cycle? Do I need to boot the board using SD? The fact is I don't want to boot from SD, as my experience with these cards is very bad. Thanks in advance. Joined: 4/28/2015 Last visit: 9/13/2019 Posts: 1187 (189) Hi iames, the behaviour of the IOT2000 is different to the Galileo Gen2 in this case. It is not possible to boot the IOT2000 without SD Card, so you can't load a sketch without a SD Card. Best regards! The internal flash memory is for the BIOS. Joined: 2/17/2017 Last visit: 8/15/2017 Posts: 2 Hello, I think I have similar problem. Actually I have two related problems. When I installed the driver for Windows 7; windows assigns com port 8 as Galileo Port. But when I try to upload to sketch to galileo gen2 or IOT2040 the error below occurs in Arduino IDE. AppData\Local\Arduino15\packages\Intel\tools\sketchUploader\1.6.2+1.0/clupload/cluploadGalileo_win.sh: line 56: /dev/ttyS7: No such file or directory I imagine it converts com8 to ttyS7 in the script. I think the driver doesn't work properly here. My second problem; When I connect the iot2040 device to my Linux Desktop, lsusb doesn't show anything related to this device. But I can connect and program galileo gen2 by default ACM driver. Is Arduino capability configured via BIOS on the device? and is it normal to see nothing via lsusb when I connect the device to Linux? BR. Yetkin. Joined: 9/3/2014 Last visit: 3/13/2019 Posts: 5190 (114) New question published by Stefan_123 is split to a separate thread with the subject Interact with an analog sensor. Best regards Min_Moderator Joined: 5/1/2016 Last visit: 6/25/2019 I can't find the driver, what is the exactly name please: IntelGalileoFirmwareUpdater? IntelGalileoSerialDriver? Thanks
https://support.industry.siemens.com/tf/ww/en/posts/running-arduino-sketch-on-iot20x0/157577/?page=0&pageSize=10
CC-MAIN-2019-39
refinedweb
930
55.34
xcb_free_pixmap man page xcb_free_pixmap — Destroys a pixmap Synopsis #include <xcb/xproto.h> Request function xcb_void_cookie_t xcb_free_pixmap(xcb_connection_t *conn, xcb_pixmap_t pixmap); Request Arguments - conn The XCB connection to X11. - pixmap The pixmap to destroy. Description Deletes the association between the pixmap ID and the pixmap. The pixmap storage will be freed when there are no more references to it. Return Value Returns an xcb_void_cookie_t. Errors (if any) have to be handled in the event loop. If you want to handle errors directly with xcb_request_check instead, use xcb_free_pixmap_checked. See xcb-requests(3) for details. Errors - xcb_pixmap_error_t The specified pixmap does not exist. See Also xcb-requests(3) Author Generated from xproto.xml. Contact xcb@lists.freedesktop.org for corrections and improvements. Referenced By xcb_free_pixmap_checked(3) is an alias of xcb_free_pixmap(3). libxcb 1.12 X Version 11 XCB Requests
https://www.mankier.com/3/xcb_free_pixmap
CC-MAIN-2017-26
refinedweb
137
62.54
#include "pxr/pxr.h" #include "pxr/base/gf/vec3d.h" #include "pxr/base/gf/api.h" #include <iosfwd> Go to the source code of this file. Fits a plane to the given points. There must be at least three points in order to fit the plane; if the size of points is less than three, this issues a coding error. If the points are all collinear, then no plane can be determined, and this function returns false. Otherwise, if the fitting is successful, it returns true and sets *fitPlane to the fitted plane. If points contains exactly three points, then the resulting plane is the exact plane defined by the three points. If points contains more than three points, then this function determines the best-fitting plane for the given points. The orientation of the plane normal is arbitrary with regards to the plane's positive and negative half-spaces; you can use GfPlane::Reorient() to flip the plane if necessary. The current implementation uses linear least squares and thus defines "best-fitting" as minimizing the sum of the squares of the vertical distances between points and the plane surface.
https://www.sidefx.com/docs/hdk/plane_8h.html
CC-MAIN-2020-16
refinedweb
190
72.26
Forums Reviews Guides Newsgroups Search Member Login: Velocity Reviews > Newsgroups > Programming > C++ > Error codes vs. exceptions Page 14 of 14 « First < 4 12 13 14 Thread Tools Error codes vs. exceptions Pavel Guest n/a 06-08-2012 none Yannick Tremblay wrote: > In article<4fcbeaf4$0$15709$c3e8da3$(E-Mail Removed) raweb.com>, > Pavel<(E-Mail Removed) oo> wrote: >> Adam Skutt wrote: >>> On Jun 1, 9:44 pm, Pavel >>>> Stefan Ram wrote: >>>>> Pavel<(E-Mail Removed) oo> writes: >>>>>> I know you have already received lots of advice; but I think they omit an >>>>>> important factor, namely whether you are writing a library. In my experience, I >>>>>> found it quite annoying when the API of a library I use can throw. >>>> >>>>> And that leads to an answer to the OP's question: When in Rome do as the >>>>> Romans do! >>> >>> Surely, write a catch block? >> Oh, sure. For example, if your error processing policy requires assigning >> severity level to every error before logging it, you will surely be happy to >> write a catch block where the severity is best known to be assigned, won't you? >> >> Now think where in the code this place could possibly be. A little hint: >> "generally" it's not in main(), regardless of whether you are handling >> std::exception or other exception (unless your app/library is so small that you >> "generally" call functions from other libraries in main()). > > I think your problem is that you are implying and assuming that the > the best place to handle the error (assign a severity value, etc.) is > in the immediate caller. Not for the error generated by the code under your control, no. You may have good-for-your-purpose conventions on error models and error processing. You can catch and process *your* errors wherever appropriate -- as you rightly describe below. But more likely the not an arbitrary selected library you are using does not obey your error conventions. This is not a library writer's fault. Objectively, a library cannot follow many useful conventions (e.g. error severity is often specific to client code's context and cannot be determined in the library). Therefore, you often need to *convert and enrich* error conditions signaled by several used libraries, each according to its own conventions, to your conventions. *These conversions* is what is usually best encapsulated in the immediate callers (to minimize the amount of code exposed to foreign error conventions which in turn minimizes the scope of the changes required when you switch to another library or a new version of same library or add to or replace your code surrounding the library calls). Because I have been changing library writer and library user hats often and, as library user, I found non-throwing libraries much more pliant to conversions of their errors to the conventions required by my code, I follow the golden rule when I am designing libraries: do to other library users what I want library writers do to me when I am a library user -- that's it. > > You may be correct that it is not "generally" in main but I disagree > with you implication that it is "generally" in the immediate caller. > > With exception: you can catch and handle the error where appropriate > With return status code: you must check for the error in the immediate > caller even if you are unable to du anything about it. If it is the > case, you must then anreturn another error and so on. > >> a. It won't be called at the first place, because a specific exception type >> won't be caught. > > Generally spkeaing, catch statements should be "rare" (as in not > around every single function call). Where you catch, you may want to > catch probably one of: > - all the exceptions types you know how to handle/fix > - all exceptions (with multiple catch) and handle or log all. > Re-throw if desireable. The above is fine -- as long as the exception type hierarchy is under your control (that is, higher than the conversions at the call stack). > >> b. Even if it were somehow auto-magically called, it often would not have enough >> data to fill up the logging function's arguments or the dialogue fields without >> cracking the exception or error code. > > What about : > > catch(...) { log<< "Critical Fatal Total Error: unknown exception. We > don't know how and why it has happen so PANIC! is the best answer!"; } > > This sounds reasonable to me. If some error occured and you don't > even know that such an error is possible, then you really should > consider it as critical. It is reasonable if you cannot possibly catch this error where you have useful information about the error's meaning *for you* (and off the bat I cannot come up with a case where it would be *technically* impossible). If, which is often the case, it was your choice to catch that error in a place where you cannot read its meaning-for-you -- I would not call it reasonable. I would rather call it a design error. >> It is instructive that in some libraries designed for critical applications by >> serious ISPs (e.g. IBM's MQ Series C API), the error codes were returned in >> output parameters. A programmer using such API does not have a chance to fully >> ignore error conditions because s/he needs to define a variable (sometimes more >> than one) to hold error codes. IBM's APIs are a good example because no other >> company has had more experience with maintaining huge enterprise-level >> applications depending on multiple libraries. It's also instructive that some >> 'newer-style' and best-usable POSIX C API calls have been re-designed same way >> (think of strtol() that all but replaced atol()). > > I don't consider design choice for a pure C API where exceptions don't > exist as particularly relevant to a discussion on C++ error handling > which has built in exceptions and where exception is the default > language behaviour. I am not sure I understand this statement. In my opinion, exceptions are a C++ feature, yet another tool made available to practitioners, not a behavior. If what you mean is that the default operator new does not return 0, it is a reasonable (although maybe not the best) default choice *for this particular error, which is unique in that most applications will want to terminate when they run out of memory* (I am saying "maybe not the best" because if I do want to terminate, I would prefer to abort() (which dumps core, on many systems) as close as possible to the call site of the failed memory allocation call to investigate the problem easier). On the other hand, the other errors, such as i/o or formatting errors (to which category strtol() mentioned by me falls) are *not* signaled via exceptions by default in C++ -- for example, you have to call exceptions() on a stream to make it throw -- which is, I think, also reasonable. It is unfortunate, however, that, at least in practice, you have to pay for the cost of calling potentially exception-throwing functions even when you do not turn any stream exception on. > >> The above paragraph is not talking about exceptions but its subject is directly >> related to the topic. Namely, if we arrange APIs error exposure design decisions >> in the order from less to more prominent manifestations of possible error >> condition by API functions, our order will look approximately as follows: > > This is totally backward: > >> 1. errno, GetLastError(), global variables, C++ functions without exception >> specifications that throw and the likes. A programmer is not reminded about >> possible error conditions in any way except possibly for the documentation. > > Are you seriously grouping exceptions with errno? This is hard to believe. In terms of giving a programmer a syntactic hint about error condition an API can raise? Of course I do. Namely, neither of the above gives any syntactic clue to an API user whether a function call can raise an error condition and, if yes, how to catch it. > >> 2. C++ functions with exception specifications. A programmer is reminded about >> possible error conditions if s/he has function prototype in front of him or her >> while coding the call. >> >> 3. Error return codes, A programmer is reminded about error condition as in #2 >> plus they may be able to turn on some "unused function return value" compiler >> warning or the like. >> >> 4. Errors are encoded in output parameters of API. A programmer cannot >> unwillingly ignore an error condition. > > This is so back to front. Exceptions absolutely can't be ignored > passively. You are probably talking about run-time. I am talking about coding time. At coding time, exceptions perfectly can be ignored (and in fact quite often "the can" is being "kicked down the road"). When this happens in a library (I mean, ignoring the exceptions thrown by a library underlying this library), the onus is put to the library user -- often without any notice and for their own money. > This is one of the strongest quality of exception above > either return code or output parameters. > > Error ouput parameter are easy to ignore: you just don't test them > after the function call. please see above > > One of the big problem with return code, errno or even error output > param is that nothing in the code indicates that the programmer made > the deliberate choice of ignoring the error rather than just being > lazy. Same with the unspecified exceptions -- nothing *in the code* indicates such deliberate choice (this is somewhat specific to C++; Java is notoriously different). > > With expection, the error will *always* interrupt normal processing > *unless* the programmer explicitely write code to resume normal > processing. This is a *good* thing. Now you are talking about run time. Also your model of "a programmer" is limited; in a real-life application it is often that there is more than one programmer in the picture: programmer lp1 wrote library l1, programmer lp2 wrote library l2 that calls l1 and neglected to catch any exceptions; then programmer ap wrote application a that calls l2 and has no clue about l2's use of l1 (it's not ap's fault because we already know lp2 is sloppy; he did not necessary even documented it for ap that l2 uses l1). ap diligently catches documented exceptions of l1 at all correct points; but once in a while his program has inexplicable problems (whose root cause are exceptions from l1 but he won't before much later). ap just picked up the tab.. If this situation is not familiar to you, you are either extremely lucky or do not maintain big software systems a lot. > > The corrected list above is: > > 1- errno, GetLastError(), global variable: programmer need to test > something that is outside current scope and location. Programmer needs > to explicitely call a method or read a variable that is unrelated to > current processing in order to check if an error has occured. > > 2- return code: very easy to ignore or forget about. How often have > you seen the return value of printf() being ignored? Error checking > needs to be done manually. > > 3- error output parameter: because you have to explicitely define the > paramter, the programmer is less likely forget and compiler flags may > raise a warning. > > 4- Exceptions: programmer can't ignore them passively. See above. This happens all the time and this is not even always the last non-catching programmer's fault. > The only way > to ignore exceptions is to explicitely catch them all and > continue. Error checking is automatic. > >>>> The code to capture error codes is simpler, >>>> shorter and faster than that needed to capture exceptions. >>> >>> I don't see how it can ever be less code than exception handling[4]. >> - It is simpler because error codes are easier tabulated for their >> classification or another mapping (potentially driven by the rules that are >> configured externally to the program and that are to be applied to programs in >> other languages than C++) to the entities required by the application error >> handling policy. >> >> - it is shorter (especially with error-code-based or errno-based approach to >> error propagation) because well-encapsulated error-checking code is notably >> shorter than equally well-encapsulated try {} catch{} code. E.g. compare: > > Only for poor quality code! ? > >> if (!handleError(apiCall(args..))) >> return; >> >> or >> >> if (!apiCall(args..)&& !handleError(errno)) >> return; >> >> to >> >> try { >> apiCall(args..); >> } catch(XxxException&e) { >> if (!handleError(e)) >> throw; >> } > > Arggh! Typical problem with error return code apologists. You are > doing it wrong. You are unable to change your mindset of handling > error for each function call one by one. Write the normal path, > handle errors where appropriate. The above is an perfect example of > poor quality code. You are missing the point. Whether or not you have to convert every error of 3rd party library into your error is often not a choice but a requirement. > > Exception code should (at worse) look like: > > void foo() > { > try > { > // Normal path > // several lines of code > // implement all the normal processing logic here sequentially. > > } > catch( /* something */ ) > { > // handle errors > } > } > > In many cases, it should look like: > > type2 foo1() > { > bar1(); > bar2(); > bar3(); > type t = bar4(); > type2 z = bar5(t); > return z; > } > > bar1() > { > buzz1(); > buzz2(); > // ... > } > > boss() > { > try > { > type2 t = foo1(); > foo2(t); > // ... > > } > catch( /* stuff */) > { > // handle errors > > } > } > > The amount of error handling code can easily become 1/4th to 1/10th of > the amount used by return status code style programming. > >> /* if apiCall does not encapsulate exceptions from *its library's* underlying >> libraries, the above code would be even longer */ >> >> - it is fater because compiler can optimize nothrow code. In a (probably futile) >> attempt to prevent pointless objections to this statement I am referring you to >> the Ultimate Authority of the Standard, see a footnote to 17.4.4.8-2 in ISO/IEC >> 14882:2003(E) or 17.6.5.12-3 in ISO/IEC 14882:2011(E) about the reason why C >> library functions shall not throw: > > Why are you talking about C library functions? I am not talking about C library functions. I am talking about the fact that the authors of the (C++) Standard recognized and explicitly acknowledged that implementations can optimize functions explicitly declared not throwing. (From half-empty glass perspective, by that they admitted the significance of pessimization caused by throwing functions). Of course C library > functions shall not throw. It would be instructive for you to read the footnotes I referred to to learn *why exactly they shall not throw* (from the viewpoint of C++ Standard authors, not mine). C doesn't have exceptions. Even if this is a rationale of C++ Standard authors for requiring C functions to be no-throw, it is not written down in the Standard. The performance rationale is written down. We are > discussing C++ here! And I am fully aware of that. And you are fully aware that I am aware. > > It's slower because you have 100 if/else statements in the normal path > where I only have 1 try/catch It does not make sense to compare the amount of ifs in the pieces of code that do different things. My two code snippets are comparable to each other in their functionality (a uniform and informed handling of all error conditions of a 3rd-part library. I defined this purpose for your multiple times). Your code snippets are doing something else. It would be useful if you defined the purpose of your code before writing it. > > It is slower because in the normal success processing path, you > repetitively have to check for error while with exceptio > > e.g. compare: > > // Critical loop: > for(int i = 0 ; i< BIG_NUM ; ++i ) > { > int retVal = foo(); > if(SUCCESS != retVal) > { > // handle error > // break/return/exit? > } > } > > vs > > try > { > for(int i = 0; i< BIG_NUM; ++i) > { > foo(); > } > } > catch(...) > { > // handle error > } > > The normal path has at least 1 more assignment and 1 more test. Things > become even worse if the function actually return a value (see below) The Extra assignment is only in your code because error processing is not encapsulated (see my code snippet above that does not have that assignment). Of course, even the way you wrote it, the assignment does not have to appear in the machine code at all. > > It's slower because you have to define a object with dummy value, pass > it as argument to a function, copy data into it then finally use it > while I initialise the object directly with valid value and the > compiler can optimize the code with NRVO/RVO With return codes, you do not have to pass extra parameters. If you allude to not being able to return value along with parameters, you can always group the return code into std: air. STL containers sometimes do just that. RVO works just fine on a pair; often the return value and return code never hit main memory -- as long as there enough registers. Of course, return code does occupy a register -- but exception and overhead of a throwing function's prologue/epilogue cost much more than one register. > It is slower because the compiler can't optimize code as nothrow > simply because there are no explicit throw/try/catch statements in > your code. Yes, it can and it does. Please refer to the Standard and study the assembler code I posted earlier in the thread "Using printf in C++" or generate and study the code of your own. If anywhere down the line something uses the C++ standard > library, new or an allocator, then exceptions may occur. That they can. What they can't is to be thrown from the function declared as no-throw. std::terminate() will be called first (and std::terminate() does not return -- or throw). > >> ".. This allows implementations to make performance optimizations based on the >> absence of exceptions at runtime." > > In C++, that's playing ostrich. (as in bury your head in the sand and > claim it's not there because you can't see it) You should always > assume the any function call can throw. > >>> At best, it can be equal, and in practice, it must necessarily be more >>> because you're explicitly doing things that are done implicitly with >>> exception handling. >> Whom do you mean when you say "you're"? Me a library writer or me a library >> user? Me a library writer is supposed to go extra mile to better serve a library >> user. Me a library user will have to do more and more complex things when I use >> exception-based API which my this and previous and previous-before-previous >> posts explain at monotonously increasing and unnecessary for a reasonable reader >> level of details. > > Strongly disagree. As a library user it is much simpler to use an API > that is exception based than an API that is return code based. Not if you need to process errors according to strict policies about which the library author did not and could not know when s/he wrote it. > >>>> As a free bonus, you (or your user if you are the library writer) can always >>>> make a choice between writing throw and nothrow functions at no coding cost. >>> >>> Hardly. Just returning error codes doesn't suddenly let you mark >>> everything nothrow, >> It allows you to "mark everything nothrow" without writing try-catch blocks. >> Marking functions nothrow was not counted as part of the cost because it had to >> be done regardless of the type of the underlying API if the decision to write >> nothrow functions has been made. > > There you are again playing ostrich. As discsussed by others in this > thread, if you want nothrow, you must never use new or the standard > C++ library anywhere and you must enable the "no exception" flag in > your compiler. A compiler with enabled "no exception" flag is not a standard C++ compiler (at least it was not according to the old standard; I did not check the new one at this). As for never using "new" under a no-throw function this is another fallacy, easily disprovable in practice at that. Please try this simple test on your favorite most compliant C++ compiler (with the exceptions turned on): #include <iostream> #include <vector> using namespace std; void memoryBuster() throw() { vector<double> v(10); for (; { v.resize(v.size() * 2); } } int main(int, char*[]) { try { memoryBuster(); } catch(...) { cout << "memory buster has thrown" << endl; } return 0; } With a compliant compiler, you will get: a. with uncommented throw() specification: terminate() called and no "memory buster has thrown" output. and b. with commented-out "throw()" specification: "memory buster has thrown" output. Anything else is unsafe. > >> due to the problem I mentioned above. >>> >>>> >>>> On top of this, following the code that only calls nothrow methods is much >>>> easier which increases the programmer's productivity. >>> >>> I fail to see how calling a method marked nothrow increases my >>> productivity at all. >> As the rest of the paragraph explained, in increases your productivity by >> reducing the time you need to read and understand the code with only explicit >> exit paths. Oh, I forgot, you are not spending much time reading code.. this "on >> top of it" advantage is probably not for you then. > > Strongly disagree. In my experice, reading exception based code is > much simpler since I can read the normal success path and concentrate > on the sanity of the success scenario independently of the error path. > I can then read the error path and concentrate on the sanity of the > error path. And where are you going to search for the error path? Throughout the whole code base? > And since the code is typically significantly smaller, The code that is functionally equivalent will be of comparable size (depending on the required functionality, slightly smaller or slightly bigger. In my experience, when you need to adhere to a strict non-trivial error-processing policy, it is slightly bigger). I > have to spend less time reading. > > Typical error return based code in C++ is a nightmare to review. It > is typically exception unsafe because it blindly assume that because a > specific function implementation does not has explicit throw > statements, then exceptions can be ignored. How is this relevant to my post? My post recommends to design a library API from no-throw functions. And, if a function is declared no-throw, then, *yes, exceptions can be ignored*. In fact, you do not have any other useful choice when you are calling a no-throw function *because you are never going to catch an exception from it, neither at call site, nor in main(), nor anywhere in between*. As a result the code will > often be totally exception unsafe and if anything below generate an > exception, then everythign breaks. irrelevant to my posts. > > Even ignoring this simple fact, the repetitive "doA, check if A > succeeded, handle A-type error, decide if processing can continue, if > not release previosuly acquired resources, doB, check if B succeeded, > handle B-type error, decide if processing can continue, if no release > previously acquired resources including those acquired for doA, ... " > is very verbose and very error prone. irrelevant to my posts. > >>> It doesn't reduce what I need to be worried or >>> concerned about unless the code actually provides the nothrow safety >>> guarantee, which is considerably difficult. >> Really? Valid C++ code that does not throw any exceptions itself and calls only >> functions with an empty exception specification is guaranteed not to throw an >> exception. Incidentally not throwing an exception *is* the definition of nothrow >> exception safety guarantee. > > please review what "nothrow" actually does and what are the > consequences if a function declared as nothrow does let an exception > escape. Please follow your own advice and review what non-throwing functions (those declared as throw() or equivalently) do and memorize once and for all that they do *not* "let exceptions escape". > >>> Merely not throwing exceptions is of very little benefit to the >>> programmer. >> I am not sure about that vague "merely not throwing exceptions" but exposing >> only no-throw functions from a library API (which was my original and only >> point) brings all the benefits I claimed. > > It brings some benefits for some style of programming. It however > bring cost for some other style of programming. I prefer an library > that uses exceptions. Much easier to use. > > Let's take the Eigen library for an example: > > MatrixXd m(2,2); > m(0,0) = 3; > m(1,0) = 2.5; > m(0,1) = -1; > m(1,1) = m(1,0) + m(0,1); Why don't you define the required error-processing functionality of your application before writing any code for a change? > > Very usable. Now let's try to rewrite this with error code: > > MatrixXd * pm = MatrixXdFactory(2.2); > if(NULL != pm) > { > int retVal = 0; > retVal = pm->Set(0,0,3.0); > if(0 == retVal) > { > retVal = pm->Set(1,0,2.5); > if(0 == retVal) > { > retVal = pm->Set(0,1, -1.0); > if( 0 == retVal) > { > float v1 = 0; > retVal = pm->Get(1, 0 ,&v1); > if( 0 == retVal) > { > float v2 = 0; > retVal = pm->Get(0,1,&v2); > if(0 == retVal) > { > float v3 = v1 + v2; > retVal = pm->Set(1, 1, v3); > if(0 == retVal ) > { > // Success!!!! > } > } > } > } > } > } > } > } > } It is completely pointless to evaluate the code above whose only purpose IMHO is to demonstrate strangely looking code. Neither snippet tells me anything about how either retVal or (invisible and unhandled, contrary to all your "impossible to ignore" statements) exception is intended to be dealt with. Specify the possible error conditions and desired error processing policy of your library, specify how the exception version of the API signals these conditions and demonstrate how you implement your specified policy over it and I will show you how to implement same policy with an alternative (non-exception-based) error-signaling API in a competitively clear and concise code. Please notice that I have already specified the case (which is often in enterprise applications practice) where the code based on exceptions is necessary bulkier. > > // handle error? > return retVal; > > Yuck! Bon appetite > > Feel free to rewrite the above with error output parameters to > demonstrate how much better again the code would be and how much > easier to read and review. I will gladly do so when I understand your desired error handling policy and see your code that is compliant with it. > > Yannick -Pavel Pavel Balog Pal Guest n/a 06-11-2012 if you're out of memory, that's pretty much it: you will not show the dialog with question, cuz the system is, well out of memory. Random processes will just get killed. If there's overcommitment enabled, you won't even get to know memory allocation failed, just die on a normal access. And it s normal to install a new_handler that stall until memory allocation succeeds -- and the program need not bother with that condition anymore. > If you don't care where the exception occurred in a block of code, > then how do you fix the problem? Man, did you ever use exceptions in real life, or just read some article on the net? > The same exception may be thrown > from more then one place in the block. The larger the block, the more > likely that may occur. Yes, the exception can be emitted from a couple thousands of spots per handler in practice, or more. And the emiting place is completely irrelevant as far as the handling goes. As gettig there the cleanup already happened via destructors. And the message to issue can be queried from the exception object you caught. > This is a problem I have with exceptions, and why I agree with those > who believe they should be reserved for EXCEPTIONal errors. SWhat could not be defined in this pretty fat thread this far -- or in the mostly accepted books on the topic, so it has little content really. Beyond saying that using wxceptions as a hack replacing longjmp is hardly good >). For error codes there is no such thing as transparency, if it is not checked, the info is lost forever. Just to know you can ignore it you shall analyze it. The case where a functions does return error code and all are acceptable are exceptionally rare to the level they can be removed from discussion. > I would prefer > said function return error codes for minor errors and reserve > exceptions for fatal and near fatal errors. As what counts "exceptional" and what "expected" is up to the caller, a single function can hardly serve everyone equally well. But it is easy to cheate wrapper in either direction and use that instead of the original instead of fussing about its design. > * But if they may be undocumented, then I may not know which ones I > can trigger. If a library has no documentation on its behavior, it is not usable in practice, full stop. Thowing exception is certainly part of behavior. However a fair description may look pretty general, like 'any finction can thorow exceptions descending from std::exception, unless explicitly marked as nothrow'. > As I understand it, and I may be wrong, any unhandled exception > travels upward until it is handled or the program terminates. So I > have three choices: > 1. Handle everything here. > 2a. Handle the likely exceptions > 2b. Let the rest end the program. > 3. Include a catch-all (...) after 2a and try to return to the state > before the block was executed because "something went wrong, but I > don't know what." Or just reflect the library's sepcification in your own documentation, and be exception-transparent. What works fine in practice, everyone makes best effort to do the request, but if there are OS calls, each may produce hundreds of failure conditions. Ones you don't want to analyse anyway. > Items 2 and 3 cover what I mean by "throw the baby out with the bath > water." 3 is for sure, you kill the information for no sensible reason. Preventing the upstream to handel it in the common way. 2 is understated, if 2b means you still catch in excess and abort(). If you just leave the exceptions alone, you can't tell at the spor what the outcome is, and the program will be terminated. It's up to the caller. >>> Error conditions more often represent "that didn't work, I'll try >>> this" than "it's the end of the world!" >> >>Utter nonsense. The return codes from std::fopen, as one of example >>of many, may be fatal or not-fatal depending on the what the >>application is doing. Some of them may also represent programming >>errors, depending on what the application is doing. Just like you >>can't say, "This exception represents a minor event", you can't >>generally say the same thing about error codes! > > But, as programmer, you control what the application is doing. To some extent. To fopen you pass some string as filename. It's up to the OS what it takes into account processing it. > You > know when making a call what are fatal and non-fatal returns (however > they are delivered) based on the context of what your code is doing at > the time. Or not. The program got the filename string at some level. The fopen may happen several calls below. At that place the context can be pretty well missing. The condition is (may be) certainly "fatal" for the particular function if its job was to do something with the opened file. So it will abandon, and the caller deal with the failure info. And decide how fatal it is for him. > True, that does leave a large gray area in deciding what would be > exception worthy vs. return code worthy when designing code others > will use but cannot alter. So I lean towards error returns whilst in > the gray area. Instaed of facing that no info means no info. What justifies leaning nowhere. Keep leaning on actual information, and just admint that the rest was processed as an arbitrary decision. > I do think exceptions are a good idea for handling errors that are > in deeply nested code but should be handled higher up. (But not so far > up that it is out of the scope of the current code.) Tha least part meaning? Balog Pal Balog Pal Guest n/a 06-11-2012 "DSF" <(E-Mail Removed)> >>> This is the problem I have with exception handling: you CANNOT >>> ignore exceptions. You must in some manner handle the possibility, >>> else your program ENDS! >> >>Not too realistic, as normally you have some general catch blocks at high >>points. > > Too high and you won't know how to fix the problem. Says who? "Last operation failed for reason:" + ex.what() can be completely okay, especially if the ssytem below uses the strong exception guarantee. >>> Handling minor, recoverable, errors with >>> throwing exceptions is like police shooting jaywalkers. >> >>You want to say: minor conditions that the program COULD have handled if >>the >>programmer actually bothered with that -- but did not. So the deployed >>state >>CAN NOT tell it from any other overlooks, and evaluate the severity. > > No. I am saying that exceptions can go to the extreme and therefore > should only be used to handle extreme situations or those where there > is no alternative path of error return. We're getting back to undefined buzzwords. If you want to keep that assertion please go the mile and define them for everyones use. >>Like in the joke: Computer, cut the patient's left leg. I said the left! I >>said the LEG! >> >>You want to cure the sloppy programs by letting them go derailed. And >>believe it is for the good. Hmm. > > I've never addressed sloppy programming. And, to use your metaphor, > I'm saying that the dining car being out of ice is no reason to hit > the emergency stop. An undocumented exception in a library is just > like a steward on the train who will stop it over no ice. Using the undocumented library in the first place is what a serious endineer strictly refuses. Or if it;s forced, jsut write the South Park disclaimer on the label please. Why do you think real practices bother with certified materials, standards and so on? Why you think software should be allowed just do undocumented and unexpected things? > You don't > know where he is or if he'll even stop the train...until he tries. Think that again. >>> Often, how you recover from an error depends on the specific error >>> information. If you "try" each individual function that might throw, >>> you might as well have error codes. The larger the "try" block, the >>> more difficult to isolate and repair/recover from the damage becomes. >> >>Hardly. In the catch you most often just not the fact that exception >>happened. And the action is taken without inspecting its spot or cause. > > How do you take action (short of ending the program) without knowing > the cause of the problem? Well, probably that is the thing called 'program design'. O,O Just like a wlyweight or a safety walve works: does it know why the pressure went up? Does a circuit breaker know if you have a short or just switched on an extra owen? No, it just does its job. At that level it may be crude, but whoever wanted more relevant action already had and passed the chance. Balog Pal Pavel Guest n/a 06-11-2012 Balog Pal wrote: > What "systems"? > if you're out of memory, that's pretty > much it: you will not show the dialog with question, cuz the system is, well out > of memory. Program B would pre-allocate resources to reliably display those critical messages, be it now, 25 or 40 years ago. -Pavel Pavel nick_keighley_nospam@hotmail.com Guest n/a 06-11-2012 On Monday, June 11, 2012 1:34:09 AM UTC+1, Balog Pal wrote: > "DSF" <(E-Mail Removed)> <snip> > >). this? <snip> nick_keighley_nospam@hotmail.com gremnebulin Guest n/a 06-20-2012 On May 29, 5:15*am, mike3 <(E-Mail Removed)> wrote: > Hi. > > I've heard about this, and wonder when is it right to use codes, and > when to use exceptions for reporting errors? I've heard various stuff, > such as that exceptions should only be used to indicate "exceptional" > conditions. Yet what does that mean? I've heard that, e.g. a user > inputting invalid input should not be considered "exceptional", but > something like running out of memory should be. Does this mean that if > we have a program, and we have a "parse()" function that parses a > string input by the user, that this function should return an error > code on parse failure, instead of throwing an exception? Yet we'll > probably also come across places where it's good to use an exception, > in the same program! Which means we get into _mixing error codes and > exceptions_. And what's the best way to do that? > > Also, how exactly does one go about determining what is and is not > "exceptional"? Two examples were mentioned of things exceptional and > non-exceptional, but what about something else, like say in a game, > where you have a grid representing a game level, and a request for a > tile of the level is made with a coordinate that is off the map (like > a 64x64 map and something requests a tile at (100, 100).). Would it be > OK for the function working on the tile to throw? Or should it give an > "out of range" error code? And as for that mixing: consider, e.g. C++ > and probably many other languages: a function has a single definite > return type. Suppose our grid had a function that extracts an > attribute from a cell. What to do when there's an out-of-bounds > request? Throw exception? See, what I've currently been doing, and > it's probably silly, is to use exceptions when our function needs to > return a value, and error codes when it could otherwise return "void". > This doesn't seem like a good idea. But what to do? Make every > function return an error code, using pointers to output variables to > store output, and only use exceptions for a rare few kinds of "system- > related" error? Yet one can hardly deny the niceness of being able to > say "x = f() + <foo>" (inside a "try" block, perhaps) instead of > > if(f(&x) != SUCCESS) > *{ // handle error } > x += foo; > > > > Note how we can easily get LONG methods full of repeated code with > error codes (repeated error handlers to handle similar errors at > various function calls calling error-code-emitting functions, if one > wants to be more graceful than simply aborting with an error to the > next level up (which complicates what error codes a function can > return, since it can return its own codes in addition to those > returned by the functions below it, and those may have functions below > THEM, and so on...).). And who likes duplicated code? eww. This seems > a disadvantage of error codes. > > Or, and this is what I've been thinking of, use exceptions for every > error that the user does not have control over, like invalid input > strings. Would that be OK or excessive use of exceptions? And if we > are to mix error codes and exceptions, does this mean we should have > the lists of codes and exceptions correspond + a translator to > translate between the two? The problem with both approaches is that you are specifying implementation directly, rather than using an interface to specify something which can be implemented, by someone else, in different ways. Exception handling is better that error codes because the whatever handles the exception can be quite decoupled from the exception raiser, but you still have the problem that the exception raising mechanism itself is not always desirable to employ. A throw is a throw is a throw. Good error handling should use a wrapper some kind of my_raise_error() call, an interface that has no direct meaning to the language in question , and so can be swapped between various implementational strategies, such as halting immediately, logging a message, jumping nonlocally, etc, according to the requirements of the client, and even according to the runtime environment and other dynamic factors. It is a common and benevolent pattern for error handling levels and styles to be set at programme invocation. Such parameters are a kind of global variable, but a harmless one because they are read-only and so do no allow non-local channels of communication. my_raise_errors() are usually implemented in a procedural context, where errors are detected in the first place with the crappy error_code system. What we still don;t have is something that gives us the best of both worlds,a system that hast the expressiveness of try{..}catch{..} and the flexibility of my_raise_error(), mainly because we don't have user modifiable syntax in all but experimental languages. gremnebulin Page 14 of 14 « First < 4 12 13 14 « polymorphism | Looking for C++ tester » How to use file I/o codes with form and controls codes Allen ASP .Net 1 12-03-2007 12:04 AM Should I use exceptions instead of error codes? mike3 C++ 14 11-17-2007 10:04 PM Virtual Key Codes, Scan Codes and ASCII Codes in C gj_williams2000@yahoo.co.uk C Programming 2 08-20-2005 11:04 AM RegEx replace of html codes to ascii codes Greg -- ASP .Net 4 08-09-2005 07:27 PM Exceptions vs. Error Codes Shane Groff C++ 8 10-10-2004 05:05 PM
http://www.velocityreviews.com/forums/t946556-p14-error-codes-vs-exceptions.html
CC-MAIN-2013-48
refinedweb
6,912
62.07
Discussion at. This proposal gives a big-picture overview of localization support for Go, explaining how all pieces fit together. It is intended as a guide to designing the individual packages and to allow catching design issues early. Localization can be a complex matter. For many languages, localization is more than just translating an English format string. For example, a sentence may change depending on properties of the arguments such as gender or plurality. In turn, the rendering of the arguments may be influenced by, for example: language, sentence context (start, middle, list item, standalone, etc.), role within the sentence (case: dative, nominative, genitive, etc.), formatting options, and user-specific settings, like measurement system. In other words, the format string is selected based on the arguments and the arguments may be rendered differently based on the format string, or even the position within the format string. A localization framework should provide at least the following features: Language-specific parsing of values belongs in this list as well, but we consider it to be out of scope for now. Although we have drawn some ideas for the design from other localization libraries, the design will inevitably be different in various aspects for Go. Most frameworks center around the concept of a single user per machine. This leads to concepts like default locale, per-locale loadable files, etc. Go applications tend to be multi-user and single static libraries. Also many frameworks predate CLDR-provided features such as varying values based on plural and gender. Retrofitting frameworks to use this data is hard and often results in clunky APIs. Designing a framework from scratch allows designing with such features in mind. We call a message the abstract notion of some semantic content to be conveyed to the user. Each message is identified by a key, which will often be a fmt- or template-style format string. A message definition defines concrete format strings for a message called variants. A single message will have at least one variant per supported language. A message may take arguments to be substituted at given insertion points. An argument may have 0 or more features. An argument feature is a key-value pair derived from the value of this argument. Features are used to select the specific variant for a message for a given language at runtime. A feature value is the value of an argument feature. The set of possible feature values for an attribute can vary per language. A selector is a user-provided string to select a variant based on a feature or argument value. Most messages in Go programs pass through either the fmt or one of the template packages. We treat each of these two types of packages separately. Package message has drop-in replacements for most functions in the fmt package. Replacing one of the print functions in fmt with the equivalent in package message flags the string for extraction and causes language-specific rendering. Consider a traditional use of fmt: fmt.Printf("%s went to %s.", person, city) To localize this message, replace fmt with a message.Printer for a given language: p := message.NewPrinter(userLang) p.Printf("%s went to %s.", person, city) To localize all strings in a certain scope, the user could assign such a printer to fmt. Using the Printf of message.Printer has the following consequences: In practice translations will be automatically injected from a translator-supplied data source. But let’s do this manually for now. The following adds a localized variant for Dutch: message.Set(language.Dutch, "%s went to %s.", "%s is in %s geweest.") Assuming p is configured with language.Dutch, the Printf above will now print the message in Dutch. In practice, translators do not see the code and may need more context than just the format string. The user may add context to the message by simply commenting the Go code: p.Printf("%s went to %s.", // Describes the location a person visited. person, // The Person going to the location. city, // The location visited. ) The message extraction tool can pick up these comments and pass them to the translator. The section on Features and the Rationale chapter present more details on package message. Templates can be localized by using the drop-in replacement packages of equal name. They add the following functionality: The msg action marks text in templates for localization analogous to the namesake construct in Soy. Consider code using core’s text/template: import "text/template" import "golang.org/x/text/language" const letter = ` Dear {{.Name}}, {{if .Attended}} It was a pleasure to see you at the wedding.{{else}} It is a shame you couldn't make it to the wedding.{{end}} Best wishes, Josie ` // Prepare some data to insert into the template. type Recipient struct { Name string Attended bool Language language.Tag } var recipients = []Recipient{ {"Mildred", true, language.English}, {"Aurélie", false, language.French}, {"Rens", false, language.Dutch}, } func main() { // Create a new template and parse the letter into it. t := template.Must(template.New("letter").Parse(letter)) // Execute the template for each recipient. for _, r := range recipients { if err := t.Execute(os.Stdout, r); err != nil { log.Println("executing template:", err) } } } To localize this program the user may adopt the program as follows: import "golang.org/x/text/template" const letter = ` {{msg "Opening of a letter"}}Dear {{.Name}},{{end}} {{if .Attended}} {{msg}}It was a pleasure to see you at the wedding.{{end}}{{else}} {{msg}}It is a shame you couldn't make it to the wedding.{{end}}{{end}} {{msg "Closing of a letter, followed by name (f)"}}Best wishes,{{end}} Josie ` and func main() { // Create a new template and parse the letter into it. t := template.Must(template.New("letter").Parse(letter)) // Execute the template for each recipient. for _, r := range recipients { if err := t.Language(r.Language).Execute(os.Stdout, r); err != nil { log.Println("executing template:", err) } } } To make this work, we distinguish between normal and language-specific templates. A normal template behaves exactly like a template in core, but may be associated with a set of language-specific templates. A language-specific template differs from a normal template as follows: It is associated with exactly one normal template, which we call its base template. A top-level template called Messages holds all translations of messages in language-specific templates. This allows registering of variants using existing methods defined on templates. dutch := template.Messages.Language(language.Dutch) template.Must(dutch.New(`Dear {{.Name}},`).Parse(`Lieve {{.Name}},`)) template.Must(dutch. New(`It was a pleasure to see you at the wedding.`). Parse(`Het was een genoegen om je op de bruiloft te zien.`)) // etc. So far we have addressed cases where messages get translated one-to-one in different languages. Translations are often not as simple. Consider the message "%[1]s went to %[2]", which has the arguments P (a person) and D (a destination). This one variant suffices for English. In French, one needs two: gender of P is female: "%[1]s est allée à %[2]s.", and gender of P is male: "%[1]s est allé à %[2]s." The number of variants needed to properly translate a message can vary wildly per language. For example, Arabic has six plural forms. At worst, the number of variants for a language is equal to the Cartesian product of all possible values for the argument features for this language. Package feature defines a mechanism for selecting message variants based on linguistic features of its arguments. Both the message and template packages allow selecting variants based on features. CLDR provides data for plural and gender features. Likewise-named packages in the text repo provide support for each. An argument may have multiple features. For example, a list of persons can have both a count attribute (the number of people in the list) as well as a gender attribute (the combined gender of the group of people in the list, the determination of which varies per language). The feature.Select struct defines a mapping of selectors to variants. In practice, it is created by a feature-specific, high-level wrapper. For the above example, such a definition may look like: message.SetSelect(language.French, "%s went to %s", gender.Select(1, // Select on gender of the first argument. "female", "%[1]s est allée à %[2]s.", "other", "%[1]s est allé à %[2]s.")) The “1” in the Select statement refers to the first argument, which was our person. The message definition now expects the first argument to support the gender feature. For example: type Person struct { Name string gender.Gender } person := Person{ "Joe", gender.Male } p.Printf("%s went to %s.", person, city) The plural package defines a feature type for plural forms. An obvious consumer is the numbers package. But any package that has any kind of amount or cardinality (e.g. lists) can use it. An example usage: message.SetSelect(language.English, "There are %d file(s) remaining.", plural.Select(1, "zero", "Done!", "one", "One file remaining", "other", "There are %d files remaining.")) This works in English because the CLDR category “zero” and “one” correspond exclusively to the values 0 and 1. This is not the case, for example, for Serbian, where “one” is really a category for a broad range of numbers ending in 1 but not 11. To deal with such cases, we borrow a notation from ICU to support exact matching: message.SetSelect(language.English, "There are %d file(s) remaining.", plural.Select(1, "=0", "Done!", "=1", "One file remaining", "other", "There are %d files remaining.")) Besides “=”, and in addition to ICU, we will also support the “<” and “>” comparators. The template packages would add a corresponding ParseSelect to add translation variants. We now move from localizing messages to localizing values. This is a non-exhaustive list of value type that support localized rendering: Each type maps to a separate package that roughly provides the same types: Since a Formatter leaves the actual printing to the implementation of fmt.Formatter, the value is not printed until after it is passed to one of the print methods. This allows formatting flags, as well as other context information to influence the rendering. The State object passed to Format needs to provide more information than what is passed by fmt.State, namely: language.Tag, To accommodate this, we either need to define a text repo-specific State implementation that Format implementations can type assert to or define a different Formatter interface. We consider this pattern applied to currencies. The Value and Formatter type: // A Formatter associates formatting information with the given value. x may be a // Currency, a Value, or a number if the Formatter is associated with a default currency. type Formatter func(x interface{}) Value func (f Formatter) NumberFormat(f number.Formatter) Formatter ... var Default Formatter = Formatter(formISO) var Symbol Formatter = Formatter(formSymbol) var SpellOut Formatter = Formatter(formSpellOut) type Value interface { amount interface{} currency Currency formatter *settings } // Format formats v. If State is a format.State, the value is formatted // according to the given language. If State is not language-specific, it will // use number plus ISO code for values and the ISO code for Currency. func (v Value) Format(s fmt.State, verb rune) func (v Value) Amount() interface{} func (v Value) Float() (float64, error) func (v Value) Currency() Currency ... Usage examples: p := message.NewPrinter(language.AmericanEnglish) p.Printf("You pay %s.", currency.USD.Value(3)) // You pay USD 3. p.Printf("You pay %s.", currency.Symbol(currency.USD.Value(3))) // You pay $3. p.Printf("You pay %s.", currency.SpellOut(currency.USD.Value(1)) // You pay 1 US Dollar. spellout := currency.SpellOut.NumberFormat(number.SpellOut) p.Printf("You pay %s.", spellout(currency.USD.Value(3))) // You pay three US Dollars. Formatters have option methods for creating new formatters. Under the hood all formatter implementations use the same settings type, a pointer of which is included as a field in Value. So option methods can access a formatter’s settings by formatting a dummy value. Different types of currency types are available for different localized rounding and accounting practices. v := currency.CHF.Value(3.123) p.Printf("You pay %s.", currency.Cash.Value(v)) // You pay CHF 3.15. spellCash := currency.SpellOut.Kind(currency.Cash).NumberFormat(number.SpellOut) p.Printf("You pay %s.", spellCash(v)) // You pay three point fifteen Swiss Francs. The API ensures unused tables are not linked in. For example, the rather large tables for spelling out numbers and currencies needed for number.SpellOut and currency.SpellOut are only linked in when the respective formatters are called. Units are like currencies but have the added complexity that the amount and unit may change per locale. The Formatter and Value types are analogous to those of Currency. It defines “constructors” for a selection of unit types. type Formatter func(x interface{}) Value var ( Symbol Formatter = Formatter(formSymbol) SpellOut Formatter = Formatter(formSpellOut) ) // Unit sets the default unit for the formatter. This allows the formatter to // create values directly from numbers. func (f Formatter) Unit(u Unit) Formatter // create formatted values: func (f Formatter) Value(x interface{}, u Unit) Value func (f Formatter) Meters(x interface{}) Value func (f Formatter) KilometersPerHour(x interface{}) Value … type Unit int const SpeedKilometersPerHour Unit = ... type Kind int const Speed Kind = ... Usage examples: p := message.NewPrinter(language.AmericanEnglish) p.Printf("%d", unit.KilometersPerHour(250)) // 155 mph spelling out the unit names: p.Print(unit.SpellOut.KilometersPerHour(250)) // 155.343 miles per hour Associating a default unit with a formatter allows it to format numbers directly: kmh := unit.SpellOut.Unit(unit.SpeedKilometersPerHour) p.Print(kmh(250)) // 155.343 miles per hour Spell out the number as well: spellout := unit.SpellOut.NumberFormat(number.SpellOut) p.Print(spellout.KilometersPerHour(250)) // one hundred fifty-five point three four three miles per hour or perhaps also p.Print(unit.SpellOut.KilometersPerHour(number.SpellOut(250))) // one hundred fifty-five point three four three miles per hour Using a formatter, like number.SpellOut(250), just returns a Value wrapped with the new formatting settings. The underlying value is retained, allowing its features to select the proper unit names. There may be an ambiguity as to which unit to convert to when converting from US to the metric system. For example, feet can be converted to meters or centimeters. Moreover, which one is to prefer may differ per language. If this is an issue we may consider allowing overriding the default unit to convert in a message. For example: %[2:unit=km]f Such a construct would allow translators to annotate the preferred unit override. The proposed Go API deviates from a common pattern in other localization APIs by not associating a Formatter with a language. Passing the language through State has several advantages: It prevents strings from being rendered prematurely, which, in turn, helps picking the proper variant and allows translators to pass in options in formatting strings. The Formatter construct is a natural way of allowing for this flexibility and allows for a straightforward and natural API for something that is otherwise quite complex. The Value types of the formatting packages conflate data with formatting. However, formatting types often are strongly correlated to types. Combining formatting types with values is not unlike associating the time zone with a Time or rounding information with a number. Combined with the fact that localized formatting is one of the main purposes of the text repo, it seems to make sense. Formatted printing in the message package differs from the equivalent in the fmt package in various ways: %[name]s. []int{1, 2, 3}will be rendered, in English, as "1, 2 and 3", instead of "[1 2 3]". Considering the differences with fmt we expect package message to do its own parsing. Different substitution points of the same argument may require a different State object to be passed. Using fmt’s parser would require rewriting such arguments into different forms and/or exposing more internals of fmt in the API. It seems more straightforward for package message to do its own parsing. Nonetheless, we aim to utilize as much of the fmt package as possible. Currency is its own package. In most localization APIs the currency formatter is part of the number formatter. Currency data is large, though, and putting it in its own package avoids linking it in unnecessarily. Separating the currency package also allows greater control over options. Currencies have specific locale-sensitive rounding and scale settings that may interact poorly with options provided for a number formatter. We propose to have one large package that includes all unit types. We could split this package up in, for example, packages for energy, mass, length, speed etc. However, there is a lot of overlap in data (e.g. kilometers and kilometers per hour). Spreading the tables across packages will make sharing data harder. Also, not all units belong naturally in a specific package. To mitigate the impact of including large tables, we can have composable modules of data from which user can compose smaller formatters (similar to the display package). The proposed mechanism for features takes a somewhat different approach to OS X and ICU. It allows mitigating the combinatorial explosion that may occur when combining features while still being legible. The matching algorithm returns the first match on a depth-first search on all cases. We also allow for variable assignment. We define the following types (in Go-ey pseudo code): Select struct { Feature string // identifier of feature type Argument interface{} // Argument reference Cases []Case // The variants. } Case struct { Selector string; Value interface{} } Var: struct { Name string; Value interface{} } Value: Select or String SelectSequence: [](Select or Var) To select a variant given a set of arguments: Eval(v, m): Value Match(s, cat, arg): string x string x interface{} // Implementation for numbers. A simple data structure encodes the entire Select procedure, which makes it trivially machine-readable, a condition for including it in a translation pipeline. Consider the message "%[1]s invite %[2] to their party", where argument 1 an 2 are lists of respectively hosts and guests, and data: map[string]interface{}{ "Hosts": []gender.String{ gender.Male.String("Andy"), gender.Female.String("Sheila"), }, "Guests": []string{ "Andy", "Mary", "Bob", "Linda", "Carl", "Danny" }, } The following variant selector covers various cases for different values of the arguments. It limits the number of guests listed to 4. message.SetSelect(en, "%[1]s invite %[2]s and %[3]d other guests to their party.", plural.Select(1, // Hosts "=0", `There is no party. Move on!`, "=1", plural.Select(2, // Guests "=0", `%[1]s does not give a party.`, "other", plural.Select(3, // Other guests count "=0", gender.Select(1, // Hosts "female", "%[1]s invites %[2]s to her party.", "other ", "%[1]s invites %[2]s to his party."), "=1", gender.Select(1, // Hosts "female", "%[1]s invites %#[2]s and one other person to her party.", "other ", "%[1]s invites %#[2]s and one other person to his party."), "other", gender.Select(1, // Hosts "female", "%[1]s invites %#[2]s and %[3]d other people to her party.", "other ", "%[1]s invites %#[2]s and %[3]d other people to his party.")), "other", plural.Select(2, // Guests, "=0 ", "%[1]s do not give a party.", "other", plural.Select(3, // Other guests count "=0", "%[1]s invite %[2]s to their party.", "=1", "%[1]s invite %#[2]s and one other person to their party.", "other ", "%[1]s invite %#[2]s and %[3]d other people to their party.")))) For English, we have three variables to deal with: the plural form of the hosts and guests and the gender of the hosts. Both guests and hosts are slices. Slices have a plural feature (its cardinality) and gender (based on CLDR data). We define the flag # as an alternate form for lists to drop the comma. It should be clear how quickly things can blow up with when dealing with multiple features. There are 12 variants. For other languages this could be quite a bit more. Using the properties of the matching algorithm one can often mitigate this issue. With a bit of creativity, we can remove the two cases where Len(Guests) == 0 and add another select block at the start of the list: message.SetSelect(en, "%[1]s invite %[2]s and %[3]d other guests to their party.", plural.Select(2, "=0", `There is no party. Move on!`), plural.Select(1, "=0", `There is no party. Move on!`, … The algorithm will return from the first select when len(Guests) == 0, so this case will not have to be considered later. Using Var we can do a lot better, though: message.SetSelect(en, "%[1]s invite %[2]s and %[3]d other guests to their party.", feature.Var("noParty", "There is no party. Move on!"), plural.Select(1, "=0", "%[noParty]s"), plural.Select(2, "=0", "%[noParty]s"), feature.Var("their", gender.Select(1, "female", "her", "other ", "his")), // Variables may be overwritten. feature.Var("their", plural.Select(1, ">1", "their")), feature.Var("invite", plural.Select(1, "=1", "invites", "other ", "invite")), feature.Var("guests", plural.Select(3, // other guests "=0", "%[2]s", "=1", "%#[2]s and one other person", "other", "%#[2]s and %[3]d other people"), feature.String("%[1]s %[invite]s %[guests]s to %[their]s party.")) This is essentially the same as the example before, but with the use of variables to reduce the verbosity. If one always shows all guests, there would only be one variant for describing the guests attending a party! ICU has a similar approach to dealing with gender and plurals. The above example roughly translates to: `{num_hosts, plural, =0 {There is no party. Move on!} other { } do not give a party.} =1 {{host} invite {guest} to their party.} =2 {{host} invite {guest} and one other person to their party.} other {{host} invite {guest} and # other people to their party.}}}}}}` Comparison: In Go, features are associated with values, instead of passed separately. There is no Var construct in ICU. Instead the ICU notation is more flexible and allows for notations like: "{1, plural, zero {Personne ne se rendit} one {{0} est {2, select, female {allée} other {allé}}} other {{0} sont {2, select, female {allées} other {allés}}}} à {3}" In Go, strings can only be assigned to variables or used in leaf nodes of a select. We find this to result in more readable definitions. The Go notation is fully expressed in terms of Go structs: In Go, feature types are fully generic. Go has no special syntax for constructs like offset (see the third argument in ICU’s plural select and the “#” for substituting offsets). We can solve this with pipelines in templates and special interpretation for flag and verb types for the Format implementation of lists. ICU's algorithm seems to prohibit the user of ‘<’ and ‘>’ selectors. OS X recently introduced support for handling plurals and prepared for support for gender. The data for selecting variants is stored in the stringsdict file. This example from the referenced link shows how to vary sentences for “number of files selected” in English: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" ""> <plist version="1.0"> <dict> <key>%d files are selected</key> <dict> <key>NSStringLocalizedFormatKey</key> <string>%#@num_files_are@> The equivalent in the proposed Go format: message.SetSelect(language.English, "%d files are selected", feature.Var("numFilesAre", plural.Select(1, "zero", "No file is", "one", "A file is", "other", "%d files are")), feature.String("%[numFilesAre]s selected")) A comparison between OS X and the proposed design: "%#@foo@"will substitute the variable foo. The equivalent in Go is the less offensive "%[foo]v". The typical Go deployment is that of a single statically linked binary. Traditionally, though, most localization frameworks have grouped data in per-language dynamically-loaded files. We suggested some code organization methods for both use cases. In the following code, a single file called messages.go contains all collected translations: import "golang.org/x/text/message" func init() { for _, e := range entries{ for _, t := range e { message.SetSelect(e.lang, t.key, t.value) } } } type entry struct { key string value feature.Value } var entries = []struct{ lang language.Tag entry []entry }{ { language.French, []entry{ { "Hello", feature.String("Bonjour") }, { "%s went to %s", feature.Select{ … } }, … }, } We suggest storing per-language data files in a messages subdirectory: func NewPrinter(t language.Tag) *message.Printer { r, err := os.Open(filepath.Join("messages", t.String() + ".json")) // handle error cat := message.NewCatalog() d := json.NewDecoder(r) for { var msg struct{ Key string; Value []feature.Value } if err := d.Decode(&msg); err == io.EOF { break } else if err != nil { // handle error } cat.SetSelect(t, msg.Key, msg.Value...) } return cat.NewPrinter(t) } The implementation of the msg action will require some modification to core’s template/parse package. Such a change would be backward compatible. Implementation would start with some of the rudimentary package in the text repo, most notably format. Subsequently, this allows the implementation of the formatting of some specific types, like currencies. The messages package will be implemented first. The template package is more invasive and will be implemented at a later stage. Work on infrastructure for extraction messages from templates and print statements will allow integrating the tools with translation pipelines.
https://go.googlesource.com/proposal/+/master/design/12750-localization.md
CC-MAIN-2020-45
refinedweb
4,217
50.73
Log message: py-test: updated to 4.4.1 pytest 4.4.1: Bug Fixes * Environment variables are properly restored when using pytester’s testdir \ fixture. * Fix regression with --pdbcls, which stopped working with local modules in 4.0.0. * Produce a warning when unknown keywords are passed to pytest.param(...). * Invalidate import caches with monkeypatch.syspath_prepend, which is required \ with namespace packages being used. Log message: py-test: updated to 4.4.0 pytest 4.4.0: Features * async test functions are skipped and a warning is emitted when a suitable \ async plugin is not installed (such as pytest-asyncio or pytest-trio). Previously async functions would not execute at all but still be marked as \ “passed”. * Include new \ disable_test_id_escaping_and_forfeit_all_rights_to_community_support option to \ disable ascii-escaping in parametrized values. This may cause a series of \ problems and as the name makes clear, use at your own risk. * The -p option can now be used to early-load plugins also by entry-point name, \ instead of just by module name. This makes it possible to early load external plugins like pytest-cov in the \ command-line: pytest -p pytest_cov * The --pdbcls option handles classes via module attributes now (e.g. \ pdb:pdb.Pdb with pdb++), and its validation was improved. * The testpaths configuration option is now displayed next to the rootdir and \ inifile lines in the pytest header if the option is in effect, i.e., directories \ or file names were not explicitly passed in the command line. Also, inifile is only displayed if there’s a configuration file, instead of an \ empty inifile: string. * Doctests can be skipped now dynamically using pytest.skip(). * Internal refactorings have been made in order to make the implementation of \ the pytest-subtests plugin possible, which adds unittest sub-test support and a \ new subtests fixture as discussed in 1367. For details on the internal refactorings, please see the details on the related PR. * pytester’s LineMatcher asserts that the passed lines are a sequence. * Handle -p plug after -p no:plug. This can be used to override a blocked plugin (e.g. in “addopts”) from the \ command line etc. * Output capturing is handled correctly when only capturing via fixtures \ (capsys, capfs) with pdb.set_trace(). * pytester sets $HOME and $USERPROFILE to the temporary directory during test runs. This ensures to not load configuration files from the real user’s home directory. * Namespace packages are handled better with monkeypatch.syspath_prepend and \ testdir.syspathinsert (via pkg_resources.fixup_namespace_packages). * The stepwise plugin reports status information now. * If a setup.cfg file contains [tool:pytest] and also the no longer supported \ [pytest] section, pytest will use [tool:pytest] ignoring [pytest]. Previously it \ would unconditionally error out. This makes it simpler for plugins to support old pytest versions. Bug Fixes * Fix bug where fixtures requested dynamically via request.getfixturevalue() \ might be teardown before the requesting fixture. * pytester unsets PYTEST_ADDOPTS now to not use outer options with \ testdir.runpytest(). * Use the correct modified time for years after 2038 in rewritten .pyc files. * Fix line offsets with ScopeMismatch errors. * -p no:plugin is handled correctly for default (internal) plugins now, e.g. \ with -p no:capture. Previously they were loaded (imported) always, making e.g. the capfd fixture \ available. * The pdb quit command is handled properly when used after the debug command \ with pdb++. * Fix the interpretation of -qq option where it was being considered as -v instead. * outcomes.Exit is not swallowed in assertrepr_compare anymore. * Close logging’s file handler explicitly when the session finishes. * Fix line offset with mark collection error (off by one). Improved Documentation * Update docs for pytest_cmdline_parse hook to note availability liminations Trivial/Internal Changes * pluggy>=0.9 is now required. * funcsigs>=1.0 is now required for Python 2.7. * Some left-over internal code related to yield tests has been removed. * Remove internally unused anypython fixture from the pytester plugin. * Remove deprecated Sphinx directive, add_description_unit(), pin \ sphinx-removed-in to >= 0.2.0 to support Sphinx 2.0. * Fix pytest tests invocation with custom PYTHONPATH. * New pytest_report_to_serializable and pytest_report_from_serializable \ experimental. * Collector.repr_failure respects the --tb option, but only defaults to short \ now (with auto). Log message: py-test: updated to 4.3.1 pytest 4.3.1: Bug Fixes - Logging messages inside pytest_runtest_logreport() are now properly captured \ and displayed. - Improve validation of contents written to captured output so it behaves the \ same as when capture is disabled. - Fix AttributeError: FixtureRequest has no 'confg' attribute bug in \ testdir.copy_example. Trivial/Internal Changes - Avoid pkg_resources import at the top-level. Log message: py-test: updated to 4.3.0 pytest 4.3.0: Deprecations * pytest.warns() now emits a warning when it receives unknown keyword arguments. This will be changed into an error in the future. Features * Usage errors from argparse are mapped to pytest’s UsageError. * Add the --ignore-glob parameter to exclude test-modules with Unix shell-style \ wildcards. Add the collect_ignore_glob for conftest.py to exclude test-modules \ with Unix shell-style wildcards. * The warning about Python 2.7 and 3.4 not being supported in pytest 5.0 has \ been removed. In the end it was considered to be more of a nuisance than actual utility and \ users of those Python versions shouldn’t have problems as pip will not install \ pytest 5.0 on those interpreters. * With the help of new set_log_path() method there is a way to set log_file \ paths from hooks. Bug Fixes * --help and --version are handled with UsageError. * Fix AssertionError with collection of broken symlinks with packages. Log message: py-test: updated to 4.2.1 pytest 4.2.1: Bug Fixes - The pytest_report_collectionfinish hook now is also called with --collect-only. - Do not raise UsageError when an imported package has a pytest_plugins.py child \ module. - Fix output capturing when using pdb++ with recursive debugging. - Fix handling of collect_ignore via parent conftest.py. - Fix regression where setUpClass would always be called in subclasses even if \ all tests were skipped by a unittest.skip() decorator applied in the subclass. - Fix parametrize(... ids=<function>) when the function returns non-strings. - Fix/improve collection of args when passing in __init__.py and a test file. - more_itertools is now constrained to <6.0.0 when required for Python 2.7 \ compatibility. - Fix "ValueError: Plugin already registered" exceptions when running \ in build directories that symlink to actual source. Improved Documentation - Add note to plugins.rst that pytest_plugins should not be used as a name for a \ user module containing plugins. - Document how to use raises and does_not_raise to write parametrized tests with \ conditional raises. - Document how to customize test failure messages when using pytest.warns. Trivial/Internal Changes - Some verbosity related attributes of the TerminalReporter plugin are now read only properties. Log message: py-test: Remove not needed REPLACE_PYTHON (file no longer exists) Log message: py-test: updated to 4.2.0 pytest 4.2.0: Features * Class xunit-style functions and methods now obey the scope of autouse fixtures. This fixes a number of surprising issues like setup_method being called before \ session-scoped autouse fixtures. * Display a message at the end of the test session when running under Python 2.7 \ and 3.4 that pytest 5.0 will no longer support those Python versions. * The number of selected tests now are also displayed when the -k or -m flags \ are used. * pytest_report_teststatus hook now can also receive a config parameter. * pytest_terminal_summary hook now can also receive a config parameter. Bug Fixes * --junitxml can emit XML compatible with Jenkins xUnit. junit_family INI option \ accepts legacy|xunit1, which produces old style output, and xunit2 that conforms \ more strictly to \ … nit-10.xsd * Improve quitting from pdb, especially with --trace. Using q[quit] after pdb.set_trace() will quit pytest also. * Warning summary now groups warnings by message instead of by test id. This makes the output more compact and better conveys the general idea of how \ much code is actually generating warnings, instead of how many tests call that \ code. * monkeypatch.delattr handles class descriptors like staticmethod/classmethod. * Restore marks being considered keywords for keyword expressions. * tmp_path fixture and other related ones provides resolved path (a.k.a real path) * pytest_terminal_summary uses result from pytest_report_teststatus hook, rather \ than hardcoded strings. * Correctly handle unittest.SkipTest exception containing non-ascii characters \ on Python 2. * Ensure the tmpdir and the tmp_path fixtures are the same folder. * Ensure tmp_path is always a real path. Trivial/Internal Changes * Use a.item() instead of the deprecated np.asscalar(a) in pytest.approx. np.asscalar has been deprecated in numpy 1.16.. * Copy saferepr from pylib Log message: py-test: updated to 4.1.1 pytest 4.1.1: Bug Fixes * Show full repr with assert a==b and -vv. * Extend Doctest-modules to ignore mock objects. * Fixed pytest.warns bug when context manager is reused (e.g. multiple \ parametrization). * Don’t rewrite assertion when __getattr__ is broken
http://pkgsrc.se/devel/py-test
CC-MAIN-2019-18
refinedweb
1,476
53.47
Encapsulation is nothing new to what we have read, it is just a concept. It is the way of combining the data and methods. In short, it is just a concept to make all data private and access them using getters and setters. These getters and setters can be our own defined functions or properties with get and set. Why Encapsulation? Encapsulation is necessary to keep the details about an object hidden from the users of that object. Details of an object are stored in its data members. This is the reason we make all the member variables of a class private and most of the member methods public. Member variables are made private so that these cannot be directly accessed from outside the class (to hide the details of any object of that class like how the data about the object is implemented) and so most member methods are made public to allow the users to access the data members through those methods.. For example, if a data member is declared private and we wish to make it directly accessible from anywhere outside the class, we just need to replace the specifier private by public. Let's take an example of encapsulation with our own methods for getter and setter. using System; class Student { private string name = "xyz"; public string GetName() { return this.name; } public void SetName(string s) { this.name = s; } } class Test { static void Main(string[] args) { Student s = new Student(); s.SetName("xyz"); Console.WriteLine(s.GetName()); } } Here, we have a private variable name and we defined two public methods - GetName and SetName to return and set the value of the variable name. We can also use default properties for the same which we studied in the previous chapter. using System; class Student { public string Name { get; set; } } class Test { static void Main(string[] args) { Student s = new Student(); s.Name = "xyz"; Console.WriteLine(s.Name); } }
https://www.codesdope.com/course/c-sharp-encapsulation/
CC-MAIN-2022-40
refinedweb
320
61.77
Search/Old This infrastructure doesn't exist anymore Note: this page is about Wikimedia's Lucene implementation, not lucene generally. Search/2013 is more up to date. Looking for Cirrus/Elasticsearch? Look here. Usage lucene-search is a search extension for MediaWiki based on the "Apache Lucene" search engine. This page attempts to give some information about the extension and how it is set up in the WikiMedia cluster, and to give details about the Lucene search engine. Overview Software The system has two major software components, Extension:MWSearch and lsearchd. The version of Lucene is 2.1 and jdk is sun-j2sdk1.6_1.6.0+update30. Extension:MWSearch Extension:MWSearch is a MW extension that overrides default search backend and send requests to lsearchd. lsearchd lsearchd (Extension:Lucene-search) is a versatile java daemon that can act as frontend, backend, searcher, indexer, highlighter, spellchecker, ... we use it to searches, highlight, spell-checks and act as an incremental indexer Essentials - configuration files: - /etc/lsearch.conf - per-host local configuration - in puppet: pmtpa: puppet/templates/lucene/lsearch.conf, eqiad: puppet/templates/lucene/lsearch.new.conf - /home/wikipedia/conf/lucene/lsearch-global-2.1.conf - cluster-wide shared configuration. - in puppet: pmtpa: puppet/templates/lucene/lsearch-global-2.1.conf.pmtpa.erb, eqiad: puppet/templates/lucene/lsearch-global-2.1.conf.eqiad.erb - started via /etc/init.d/lsearchd in pmtpa and /etc/init.d/lucene-search-2 in eqiad - search frontent port 8123, index frontend port 8321; backend - RMI (RMI registry port 1099) - logs in /a/search/logs - indexes in /a/search/indexes - jar in /a/search/lucene-search - test with curl Installation Now is deployed via puppet and without nfs with adding the class role::lucene::front-end::(pool[1-5]|prefix) See #Cluster Host Hardware Failure for more details of bringing up a host. Configuration There is a shared configuration file /home/wikipedia/conf/lucene/lsearch-global-2.1.conf that contains information about the roles hosts are assigned in the search cluster. This way lsearchd daemons can communicate with each other to obtain the latest index versions, forward request if necessary, search over many hosts if the index is split, etc.. The per-host local configuration file is at /etc/lsearch.conf. Most importantly it defines SearcherPool.size, which should be set to local number of CPUs+1 if only one index is searched. This prevents CPUs from locking each other out. The other important property is Search.updatedelay which prevents all searches from trying to update their working copies of the index at the same time, and thus generate noticeable performance degradation. Indexing In pmtpa, searchidx2 is the indexer. In eqiad, searchidx1001 is the indexer. - the search indexer serves as the indexer for the cluster - the search indexer's lsearchd daemon is configured to act as indexer in addition to another proc, the incremental updater - the incremental updater proc is started with: root@searchidx1001:~# su -s /bin/bash -c "/a/search/lucene.jobs.sh inc-updater-start" lsearch - other indexing jobs, like indexing private wikis, spell-check rebuilds etc are in lsearch's crontab on the search indexer - the search indexer runs rsyncd to allow cluster members to fetch indexes - other cluster hosts fetch indexes by rsync every 30 seconds, as defined by Search.updateinterval in lsearch-global-2.1.conf Search Cluster: Shards, Pools, and Load Balancing Oh My! This section has been derived from the following configuration: - /home/wikipedia/common/wmf-config/lucene.php - /home/wikipedia/conf/lucene/lsearch-global-2.1.conf - /home/wikipedia/conf/pybal/pmtpa/search_pool[1-3] Index Sharding We shard search indexes across hosts in the cluster to accomodate index data footprint, hardware limitations, and utilization. Pools We use a mixture of single-host and multi-host pools to direct requests to the servers that host the appropriate indexes. Where multi-hosts pools are employed we use pybal/LVS load balancing (running on lvs3) or in-code load balancing. As of Feb 2012 we have the following pool configuration: Administration Dependencies - all requests from apaches depend on LVS - Each front end node depends on the indexer for updated indexes - The indexer depends on querying all database shards for its incremental updates - The crons for private wikis depend on database access to the external stores - the front-end nodes depend on rsync from /home/w/common for up-to-date mediaiwiki confs Health/Activity Monitoring Currently, the only nagios monitoring is a tcp check on 8321, the port the daemon listens on. More monitoring in the works. Ganglia graphs are extremely useful for telling when a node's daemon is stuck in some way, disk is full, etc. Software Updates The LuceneSearch.jar is now installed via a package in our apt repo. Deploying a new version of the software involves building a package and adding it to the repo. Puppet will install the newer version. A manual restart of the daemon will probably be required. Stopping and fall back to MediaWiki's search To disable lucene and fall back to MediaWiki's search, set $wgUseLuceneSearch = false in CommonSettings.php. Note: py: I do not beleive that this is a workable solution any longer. Adding new wikis When a new wiki is created, an initial index build needs to be made. First restart the indexer on searchidx2 and searchidx1001 to make sure the indexer knows about the new wikis, and then run the import-db script on appropriate wiki database name (i.e. replace wikidb with the wiki database name, e.g. wikimania2012wiki). Once initial indices are in place, restart the incremental indexer. On each individual indexer (current searchidx1001.eqiad and searchidx2.pmtpa) run: root@searchidx1001:~# sudo -u lsearch /a/search/lucene.jobs.sh import-db wikidb root@searchidx1001:~# killall -g java root@searchidx1001:~# /etc/init.d/lucene-search-2 start root@searchidx1001:~# sudo -u lsearch /a/search/lucene.jobs.sh inc-updater-start Then, you must restart lsearchd (/etc/init.d/lucene-search-2 restart) on each search note that should contain an index for the new wiki. This includes every host in its pool (i.e. all search-pool4 nodes, not just the ones that receive front-end queries via lvs) as well as hosts that are shared amongst all pools search as those running the search-prefix indices. Check ./manifests/role/lucene.pp to see which pool you need. pool1 is en.wp. pool4 is the "everything else" wildcard pool, so if you add a small misc. wiki it's most likely pool4 and you don't have to add it in lucene.pp. Do not restart all at once! Depool one from pybal, restart lucene, look at curl until you see it's done with the last wiki in the alphabet, re-pool in pybal, go on to the next. As of September 2013 (check this or die!): if pool4, then the hosts that matter the most are the first 2, search1015 and search1016. Also restart though: search1021,1022, as well as search1019 and search1020 (spell), and (most likely) the prefix hosts (for all pools) as well: search1017, search1018. Trouble What to do if you get a page about a search pool - Check if any search nodes are unresponsive. This is usually pretty obvious in ganglia (no cpu activity). Restart anything that's stuck. - People love to DoS search. With the pmtpa cluster it was very easy. With the eqiad cluster it will be slightly harder. Check the api logs to see if an IP is making excessive queries of bogus terms. Block IP. - check pybal logs on the low-traffic nodes for the data center. Make sure nodes are pooled. - Look at /a/search/log/log There might be pointers there. - To test functionality of a node, do something like: curl where ??wiki is some index that should be on that node (enwiki, dewiki, etc) Which hosts are in which pool? pybal links for search_pool1, search_pool2, search_pool3, search_pool4, and search_prefix Main indexer on searchidx2/searchidx1001 is stuck The search indexers very occassionally fall over. This looks like the ganglia load/traffic graphs falling to near-zero, and the cpu idle near 100%. If indexing is stuck on searchidx2, run this script this script as user rainman (so he can restart later if necessary): root@searchidx2:~# sudo -u rainman /home/rainman/scripts/search-restart-indexer If indexing is stuck on searchidx1001, do the following: root@searchidx1001:~# killall -g java root@searchidx1001:~# /etc/init.d/lucene-search-2 start root@searchidx1001:~# su -s /bin/bash -c "/a/search/lucene.jobs.sh inc-updater-start" lsearch Individual lsearchd processes are crashing or nonresponsive - Try starting the lsearch process in the foreground so you can watch what it does: start-stop-daemon --start --user lsearch --chuid lsearch --pidfile /var/run/lsearchd.pid --make-pidfile --exec /usr/bin/java -- -Xmx20000m -Djava.rmi.server.codebase= -Djava.rmi.server.hostname=$HOSTNAME -jar /a/search/lucene-search/LuceneSearch.jar - Check log at /a/search/log/log for indications of obvious issues root@search3:~# grep "^Caused by" /a/search/log/log|tail -20 Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded (oops, we hit java's memory limit) Space Issues on Cluster Host - check /a/search/indexes for unintended indexes, i.e. cruft from previous configurations, as the daemon doesn't know to delete indexes that are no longer in use. - Can also create new shards. This will involve making a new lvs pool, and new entries into the hash structure in manifests/roles/lucene.pp Cluster Host Hardware Failure - If a host in lvs fails, lvs should depool it automatically, and a least one other host will pick up the load. If the host is not in lvs, and instead is accessed via RMI, then RMI will take care of the depooling. - To bring up a new node with the same indexes/role, at it to the has structure in manifests/roles/lucene.pp, and into site.pp with the appropriate role class (ie.: the same as the failed node.) - Bring up new node with puppet, make sure that the lucene-search-2 daemon is running, and that the rsync of the indexes from the indexer has finished. - If the node has main namespace indexes, something of the form ??wiki.nspart[12] or ??wiki.nspart[12].sub[12], you can test that it's giving proper responses with something of the form curl - If the failed node has main namespace indexes, something of the form ??wiki.nspart[12] or ??wiki.nspart[12].sub[12], then you will need to adjust the pool's pybal configs accordingly (i.e. out with the old, in with the new). Indexer Host Hardware Failure We are not currently set up to gracefully deal with a massive indexer failure. Having an indexer in multiple DCs is as well as we have done. The general procedure for bringing up an indexer will be to include role::lucene::indexer, stop the lucene daemon and the incremental indexer, and rsync over the contents of /a/search/indexes from any surviving indexer, and then starting up the lucene/incremental updater. Excess Load on a Cluster Host - [which logs etc. to check for evidence of i.e. abuse, configuration issues, etc]
https://wikitech.wikimedia.org/wiki/Lucene
CC-MAIN-2020-10
refinedweb
1,886
57.16
package Mail::Audit::Centipaid; use Mail::Audit; use vars q(@VERSION); $VERSION = '1.0'; 1; # Mail::Audit::Centipaid # # Written by Adonis El Fakih (adonis /at/ aynacorp dot com) Copyright 2003 # based on Apache::Cetipaid v1.3 # # This module allows mail users / administrators to charge for incoming # mail using centipaid's internet stamps (centipix). This modules was # inspired by the design of AMDPmail.com which proposes an alternate mail # delivery cycle that does many new cool things including charging epostage # as a mode of controlling spam # # centipaid offers a micropayment solution that allows users # to pay using an internet stamp. For more information on the # micro-payment system please visit # # # This module may be distributed under the GPL v2 or later. # # package Mail::Audit; use 5.005; use IO::Socket; use Net::hostent; sub need_to_pay($$$$$) { my $https = shift(@_); my $payto = shift(@_); my $amount = shift(@_); my $lang = shift(@_); my $email = shift(@_); $msg = qq {$https?payto=$payto&amount=$amount&mode=mail&email=$email}; return $msg; } sub check_mail { my $self = shift; die "bad %conf key/values !" if (@_ % 2); my %conf = @_ ; my $debug = $conf{debug}; my $payto = $conf{acct} || 0; my $https = $conf{https} || 0; my $server = $conf{authserver} || 0; my $port = $conf{authport} || 0; my $pass = $conf{pass} || 0; my $amount = $conf{amount} || 0; my $email = $conf{email} || $self->to; my $prefix = "$$ Mail::Audit::Centipaid"; my $connect_to ="$server:$port"; my @header = split("\n",$self->header); my $item_body = $self->body; my @body = @$item_body; my $rcvcount = 0; my $rcpt = 0; print $body; # loop through the oconfiguration hash and print it # if it is requiered foreach my $key (keys %conf) { print "DEBUG: $key = $conf{$key}\n" if $debug; } # loop through the header first to see if it has a # compliant AMDP mail header AMDP-PAYMENT-RCPT print "DEBUG: Checking SMTP header\n" if $debug; for (@header) { if ( /AMDP-PAYMENT-RCPT:(.*)/ ) { $rcpt = $1; #clean rcpt from any spaces.. $rcpt =~ s/\s+//g; last; } print "HEADER: $_\n" if $debug; $rcvcount++; # Any further Received lines won't be the first. } #if no receipt in header then check the body unless($rcpt) { # loop through the body next to see if it has a # compliant AMDP mail receipt AMDP-PAYMENT-RCPT print "DEBUG: Checking mail BODY\n" if $debug; for (@body) { chomp; if ( /AMDP-PAYMENT-RCPT:\s*(\S+)\s*[!^<]/ || /AMDP-PAYMENT-RCPT:\s*(\S+)/) { $rcpt = $1; #clean rcpt from any spaces.. $rcpt =~ s/\s+//g; last; } print "BODY: $_\n" if $debug; $rcvcount++; # Any further Received lines won't be the first. } } if ( $rcpt ) { print "DEBUG: Found AMDP receipt $rcpt\n" if $debug; print "DEBUG: Checking with CENTIPAID at $connect_to\n" if $debug; my $auth_server = IO::Socket::INET->new("$connect_to"); my $crlf = "\015\012"; unless ( $auth_server ) { print "DEBUG: Could not connect to $connect_to\n" if $debug; return (1,"Can not connect to $connect_to"); } $auth_server->autoflush(1); print $auth_server "PAYTO:$payto"."$crlf"; print $auth_server "PASS:$pass"."$crlf"; print $auth_server "RCPT:$rcpt"."$crlf"; # format the amount in a way that we make sure it becomes a float $amount = sprintf("%.6f",$amount); while (<$auth_server>) { chomp; my $received = $_; print "DEBUG: CLIENT:$received\n" if $debug; if ($received =~ /^250 OK PAID(.+)/) { # format the paid in a way that we make sure it becomes a float my $paid = sprintf("%.6f",$1); print "DEBUG: Paid [$paid] Amount [$amount]\n" if $debug; # if the amount paid is greater or equal to what # is requiered then it is ok if ( $paid >= $amount ) { print "DEBUG: $paid == $amount\n" if $debug; print "DEBUG: epostage verified\n" if $debug; return (0,"OK"); # success }# if paid == amount }#end if 250 if ($received =~ /^500/) { # if we get a 500 code then it was an invalid receipt print "DEBUG: Invalid transaction for receipt $rcpt\n" if $debug; #send a payment slip $msg = need_to_pay($https,$payto,$amount,$lang,$email); print "DEBUG: need_to_pay called\n" if $debug; return (1,$msg); }#end if 500 }# end while } else { $msg = need_to_pay($https,$payto,$amount,$lang,$email); return (1,$msg); } #end of rcpt found } 1; __END__ =head1 NAME $Revision: 1.0 $ B<Mail::Audit::Centipaid> - Mail::Audit plugin to check for email postage =head1 SYNOPSIS use Mail::Audit qw(Centipaid); my $mail = Mail::Audit->new; # Configure the filter %conf = ( 'acct' => "AEF001", # account_name merchant id 'amount' => 0.005, # amount to charge per email 'https' => "",# payment url 'pass' => "adonis", # receipt_password 'lang' => "en", # language setting 'authserver' => "pay001.centipaid.com", # centipaid_receipt_server 'authport' => '2021', # port of receipt server 'email' => 'you@domain.com', # email 'debug' => 0 # 1=show output, 0=supress output ); # check mail for epostage ($code,$reason)= $mail->check_mail(%conf); $reply_msg = qq{Your message here..}; # reject email without postage if ( $code == 1 ) { $mail->reply(from=>$mail->from, subject=>"Email postage missing: could not deliver", body=>$reply_msg); $mail->ignore; } # accept the ones that do have one if ( $code == 0 ) {$mail->accept; } =head1 DESCRIPTION B. This is done by installing a .forward for these accounts, and use the enclosed centifilter.pl program to filter out mail that does not contain valid postage. Only paid email will be allowed through the filter. Centipaid supports two types of stamps. 1. CENTIPIX stamps, which are bought by the sender and used to make payments. Payment processing is deducted from the payment done by the sender. 2. EZPASS stamps, which are issued by the receiver and given to individuals he/she wants to grant them postage-free access the email account. Payment processing is paid by the recepient. The module can also be used in conjunction with SpamAssassin to autoamtically reject email messages with a certain spam ranking, and to be directed to pay for postage. Other uses include the designation of postage-requiered email accounts such as the ones used for consulting, support, business to business, etc.. B<Postage paying> Paying for postage is easy, once the sender has obtained a CENTPIX from. The sender can re-use the same CENTIPIX in payments for postage, online access, shopping online, etc.. until its funds are completely used up. B<Including postage in emails> The postage can be included in the BODY or HEADER of the email message. Mail::Audit::Centipaid generate a valid payment url for postage payment. When used, centipaid will give the payee the option to include the postage receipt in the body of the email by copying and pasting the text into the body of the email message, or using the online email interface, which includes the postage receipt in the header of the message. Both are supported by this module. B<How does it work?> Email messages are parsed for AMDP-PAYMENT-RCPT string which specifies that the message contains an AMDP style electronic postage. Once it is detected the receipt number is extracted, and centipaid receipt server is contacted to get a verification that a payment has been made for the postage rate set in the configuration. Ideally the AMDP-PAYMENT-RCPT should reside in the header, however since AMDP protocol is new, and email applications do not support its inclusion in the header area of a message, the BODY of the message is searched for the string. Please refer to amdpmail.com for information about the protocol. If the receipt is found, and it is a valid one, check_mail() returns an Ok code, but if the rcpt is not found, it returns an error code, and a well formed url that is used to send to the sender to ask them to pay for the postage. Please refer to centipaid.com for the most updated version of this module, since other methods of payment may be available. =head1 METHODS =over 4 =item C<check_mail(%conf)> Checks the mail header and body for the presence of electronic postage (AMDP-PAYMENT-RCPT). Returns an array containing two elements. The first array element contains one of the following codes 0 = success, 1 = no/bad receipt, 2 = problem contacting Centipaid receipt server. The second array element is primarily used with error code 1, which contains the properly formated payment url. if the error code is 0, then the message should be accepted, other wise it should be rejected, or dropped. =head1 CONFIGURATION =over =item B<acct> account_name The account number is issued by ceentipaid.com for a given domain name. This number is unique and it determines who gets paid. =item B<pass> receipt_password The password is used only in socket authetication. It does not grant the owner any special access except to be able to query the receipt server for a given receipt number. =item B<amount> 0.5 The amount is a real number (float, non-integer) that specifies how much the user must pay to be granted access to the site. For example amount 0.5 will ask the user to pay 50 cents to access the site. The value of amount is in dollar currency. =item B<lang> en This defines the language of the payment page displayed to the user. It is set by the site admin using the two letter ISO 639 code for the language. For example ayna.com requieres the payment info to be displayed in arabic on centipaid, CNN.com will need several sections of its site to show payment requests in different languages. Some of the ISO 639 language codes are: English (en), Arabic (ar), japanese (ja), Spanish (es), etc.. =item B<email> foo@bar.com This defines the email to be used when emailling back with the proper postage. default to the $mail->to =item B<https> This should contain the payment url assigned to the account number. This defaults to =item B<authserver> centipaid_receipt_server This should contain the receipt server assigned to the account number above =item B<authport> 2021 This should contain the port number of receipt server assigned to the account number above =back =head1 REFERENCE =item Centipaid: Micropayment solution used in collecting and clearing epostage =item CentiPIX: Centipaid Portable payment media which is used to make payments instead of using a credit card. This gives allows the payment of postage as low as $0.001 =item AMDPMAIL: Proposed protocol used to control the wide spread of SPAM. One of its features is the adoption of postage as a method of controlling spam. =head1 ACKNOWLEDGEMENTS Thanks to Simon Cozens for the Mail::Audit module, which allowed me to develope a consitent and easy to use Centipaid mail plugin. =head1 AUTHOR Adonis El Fakih, <aelfakih@cpan.org> =head1 SEE ALSO L<Mail::Audit> =cut
https://metacpan.org/release/AELFAKIH/Mail-Centipaid-1.0/source/Centipaid.pm
CC-MAIN-2022-27
refinedweb
1,724
61.56
Lesson 1: Creating Your First Display Boxes! Written by Jonathan Sim You can find this lesson and more in the Arduino IDE (File -> Examples -> Andee). If you are unable to find them, you will need to install the Andee Library for Arduino IDE. In this first lesson, I'll show you all that you need to know to create display boxes with the Annikken Andee. Here's a look at the user interface that you'll create with the code below! Always include these libraries. Annikken Andee needs them to work with the Arduino! #include <SPI.h> #include <Andee.h> // Every object that appears on your smartphone's screen // needs to be declared like this: AndeeHelper objectA; AndeeHelper objectB; // We're creating two objects here apperance of all the objects on your smartphone void setInitialData() { //// Let's draw the first object! ////////////////////////////////////////// objectA.setId(0); // Each object must have a unique ID number objectA.setType(DATA_OUT); // This defines your object as a display box objectA.setLocation(0, 0, FULL); // Sets the location and size of your object /* setLocation (row, col, size) Row: From 0 (top-most) to 3 Col: From 0 (left-most) to 9. If there are too many objects on that row, you can scroll from left to right. Size: The following sizes are available for you to chooose: FULL, HALF, ONE_THIRD, ONE_QUART, TWO_THIRD, THREE_QUART */ objectA.setTitle("This goes to the title bar"); objectA.setData("This goes to the data field"); objectA.setUnit("This goes to the units field"); // Optional //// Let's draw the second object! //////////////////////////////////////// objectB.setId(1); // Don't forget to give it a unique ID number objectB.setType(DATA_OUT); // Another display box objectB.setLocation(1,0,FULL); // Second row, left-most, full size objectB.setTitle("Hello"); objectB.setData("World!"); } Arduino will run instructions here repeatedly until you power it off. void loop() { objectA.update(); // Call update() to refresh the display on your screen objectB.update(); // If you forgot to call update(), your object won't appear // A short delay is necessary to give Andee time to communicate with the smartphone delay(500); }
http://resources.annikken.com/index.php?title=Lesson_1:_Creating_Your_First_Display_Boxes!
CC-MAIN-2017-09
refinedweb
346
67.04
. In most organizations, at least some attempt has been made to meet these reporting needs. Historically, however, the problem has been that the available reports have not always been up-to-date, or even accurate. Furthermore, individual departments have tended to adopt a “silo” approach, using different tools/systems to create reports that are useful within their silo, but not necessarily consistent or compatible with those produced by other departments. In many cases, there doesn’t even exist a shared understanding of the business data that underpin these reports. SQL Server Reporting Services (SSRS), when it arrived, offered a much-needed means to centralize and standardize reporting across the business, and it has largely delivered. Having used SSRS 2005 for the past 4 years, I’ve found that, with a little effort, it can satisfy most business, ad-hoc, embedded, portal integration, web, and custom reporting needs. However, I’ve also found that small “gotchas” can halt progress and cause considerable frustration, as it’s not always easy to find ways round them in the documentation. In this article, I round up some of the more interesting challenges that I have encountered in my report development efforts, and the solutions I’ve found to them. Hopefully, these will be useful to the many (the majority?) people who are still using SSRS 2005 in production. Some of the solutions offered can still be used in SSRS 2008. I conclude the article with a review of some of the issues that SSRS 2008 has fixed, or at least mitigated. Challenges/Solutions. - Horizontal Tables: Calendar Reports - Select “ALL” Query Parameter option - Multiple Sheets in Excel - Excel Merged Cell Issues - Blank Pages - Vertical Text - Report Data in Header/Footer - Are you missing XML/CSV data on your exports? - Template Reports - Using the Reporting Services database A ZIP file containing samples of the reports detailed in this article is available to download, try out and amend to suit your own needs. Horizontal Tables: Calendar Reports The most common need for horizontal display of information, in my experience, is for labeling or for calendar-style reports. There is no native control that allows you to display your data horizontally. There are a few different ways around this, but the easiest way I’ve found is to use a Matrix control, which allows display of data in a cross-tab or pivot format. The sample I will be using is of a calendar style report, which will display a report of events which occur in the timeframe displayed. You can build the report from scratch using the steps that I’ll outline next, or you can simply import the completed Calendar.rdl file, as part of sample project proved in the code download for this article. The driving query for this report is shown in Listing 1. The opening lines calculate the required date range for the current month, which may include dates from the prior and forthcoming months, in order to ensure that the results display appropriately on the calendar. The StartDate parameter defines the first Sunday, and the EndDate parameter the last Saturday, to display on the calendar. The code then creates two Common Table Expressions (CTEs), new to SQL Server 2005 and later. The first, Dates, generates a record for every day in the required date range and the second, Events, simply creates some sample event records for display in the calendar. Finally, we query these two CTEs, using a ranking function, DENSE_RANK, to assign number to the records based on the date, and various date functions to generate the columns for the matrix control (days of the week), days of the month, event details and so on. The query in Listing 1 is self contained, so all you need to do to test it out is point it to a SQL Server 2005 data source. Listing 1: The Calendar Report Query Having defined the query, you can build the report. Start with a blank report and add the query as a dsCalendar data set, as shown in Figure 1. Figure 1: Defining the Calendar report data set Having created the data set, add a Matrix control to the report, as shown in Figure 2. Figure 2: Adding the matrix control Figure 2 displays three watermarked areas of the control: - Columns – the matrix column header, which we can use to group the column data. - Rows the matrix row grouping for row data, which we can use to group the row data. - Data – this cell holds the detail data for the report. I need to use a table control to group and display our detail data, which is each day in the timeframe from the StartDate to the EndDate, so I dragged a table control from the toolbox into the matrix cell watermarked “Data”. For this report, I’ve have made a few changes to the default setup of the table control. For example, I removed one column and the table footer and I’ve merged the table header cells, as shown in Figure 3. Figure 3: Adding a table control to the matrix The next step is to associate the Matrix control with our dsCalendar data set, as shown in Figure 4. Figure 4: Associating the matrix control with our data set Next, I need to establish the row and column grouping for our matrix control. To set the row grouping, switch to the Groups tab on the Matrix properties dialog, select the default row grouping item in the list matrix1_rowGroup1 and click the Edit button. Set the value of the “Group on” Expression to =Ceiling(Fields!Order.Value / 7), as shown in Figure 5. This Ceiling expression is used to determine when a row should break for the next week, which for the most part will be every 7 records. Figure 5: Grouping the matrix rows Click OK, and then select the default column grouping item in the list matrix1_ColumnGroup1 and click the Edit button. This time, for the “Group on” Expression, simply select =Fields!WeekDay.Value for the drop down list and click OK, as shown in Figure 6. Figure 6: Grouping the matrix columns The table inside the “Data” region of the Matrix doesn’t require any further work. Based on the established matrix row and column groupings, the matrix data will be organized appropriately. Now that the control of the data is set up, it’s time to define the expressions that will determine what data to display in the matrix and table, when the report is rendered. First, however, I am going to resize the control. We don’t need to display anything in the matrix “rows” region, so we minimize the left column of the matrix control, as shown in Figure 7. Figure 7: Resizing the matrix columns The next step is to apply the following Expressions to the various report items for display on the report: - Matrix Column Header: =WeekdayName(Fields!WeekDay.Value) - Displays the days of the week across the top of the report - Table Column Header: =Fields!Day.Value - Displays the day for each day in the timeframe - Table Detail Column 1: =IIf(Fields!Note.Value = Nothing, “”, CDate(Fields!EventDate.Value) .ToShortTimeString + “:”) - Displays the time of the event in the first column of the table - Table Detail Column 2: =Fields!Note.Value - Displays the event details in the second column of the table Figure 8 shows the Matrix populated with these expressions. Figure 8: Matrix expressions When the report is rendered, it will look similar to that shown in Figure 9. Figure 9: Rendering the Calendar report Although the report is now functional, it still looks a little unpolished, so the final step is to tweak the layout and formatting until you are happy with it. Figure 10 shows the finalized report, both in layout mode and rendered. Figure 10: The finalized Calendar report Select “ALL” Query Parameter option When using a query to populate an options list for a parameter, sometimes there is a need to select several options at once, rather than an individual option from the provided list. For example, you may want to run a report for multiple companies, instead of each one individually. In order to do this, you just need to add a UNION clause to the query that is used to populate a drop down of available options for the company parameter. So, for example, if the original query to populate the parameter list might be of the following form: When rendered, the parameter dropdown list for the report would look as shown in Figure 11. Figure 11: Selecting individual parameter values The updated query, allowing users to select all the available parameter values, might look as follows: Figure 12: Selecting all parameter values Next, in the data set that uses the value returned from the query parameter, you will need to update your WHERE clause to work appropriately with the updated parameter. For example, if the original WHERE clause looks as follows: The updated WHERE clause will be: Now, as well as being able to filter the data by an individual company, you can cancel the filter by selecting ALL, which sets the @company parameter to NULL and return results from the query as if there was no company filter. Multiple Sheets in Excel Have you had a need to create multiple sheets in Excel? To render a report to Excel on multiple sheets, be sure to use page breaks after the different sections of the report. If a section doesn’t specifically allow page breaks, then you’ll need to wrap the controls inside a rectangle and set the page break property on the rectangle. Let’s say you have a report with two table regions, as shown in Figure 13. Figure 13: A report with multiple table regions When you export the report to Excel, you’ll find that both the table regions display on the same worksheet, as shown in Figure 14. This makes it hard to make modifications to the Excel file. Figure 14: Two table regions rendered to the same Excel worksheet To make the table regions display on different worksheets, you can set the PageBreakAtEnd property to True, as shown in Figure 15. Figure 15: Setting the PageBreakAtEnd property When the report is exported to Excel, two worksheets will now be created, as shown in Figure 16. Figure 16: Two table regions exported to two worksheets In case you are wondering how to rename the sheets when the report is exported to Excel, there isn’t a built-in way. You have the option to design a custom rendering extension, buy 3rd party if one supports this, or to modify the excel file post-export. A more advanced example of this technique is demonstrated in the Report Index.rdl file, as part of the code download. Excel Merged Cell Issues Excel can sometimes seem like the worst rendering extension available in reporting services. If you export a report to Excel, and then try to re-sort the exported data, you get a merged cell error. So, unless you completely reformat the export post-export, you cannot resort your columns. Reporting Services renders everything top down, and there are several ways in which the merged cell problem can occur when you export the report to Excel: - If you have anything (controls, images, etc.) laid out above your table/matrix regions - If you merge cells in your table/matrix regions - If controls from the top of the report do not lineup with controls from your table One way to help prevent the merged cell issue is to use the technique discussed in the previous section, “Multiple Sheets in Excel”. However, multiple sheets are not always the best resolution for this problem, especially when the problem is your page header. Figure 17 shows an example of a page header containing an image control and a textbox control, which will cause merged cell issues when exported to Excel. Notice that there are gaps between the controls. Each gap, and each control, that does not span the width of the designer will cause a separate column to be created when you export it to Excel. Figure 17: A page header that will cause merged cell issues Figure 18 shows the same page header, formatted in a way that will not cause the problem. Notice how the control spans the width of the designer. Figure 18: A page header that won’t cause merged cell issues What I’ve done is remove the image control and set a background image for the textbox control. I also added some padding to the textbox control to change the position in which the text will display, so that the image will display left of the text. This will resolve the merged cell issue, caused by having gaps and multiple controls in the page header. There is also some Device Information Settings that can be used to alleviate some of these merged cell issues. For instance, on export you can set the setting for SimplePageHeaders to True. More details about this setting can be found here:. Blank Pages Are blank pages a problem for you, when you export/print your reports? In most cases the extra blank pages result from the fact that the body of your report is too wide. Let’s say you want your report printouts to fit on 8.5in x 11in paper, with 0.5in margins on all sides. This means that the maximum width of your report body in the designer can be 7.5in. If it exceeds that value, then you will get the extra pages printed. Most report developers fall into this trap by having their design surface laid out wider than the allowed width of the body of the report, which would be 7.5in, in this example. As you can see in Figure 19, I ensure that my report body is consistent with a portrait layout of a report. My margins are setup as 0.5in on all sides and the report width is set to 8.5in. So when I layout my report I do not want my designer to exceed 7.5in in width to stay within the report margins and report width. Figure 19: Report Properties and Layout Another reason blank pages could be created when you export your report is if you allow your controls to grow. If you do, then they can sometimes grow past the maximum page width for your report. You can prevent your controls from growing by setting the following properties of a control from either the properties dialog or properties panel, as seen in Figure 20. Set the “Textbox height” options in the properties dialog, or set the CanGrow properties in the properties panel of Visual Studio. Figure 20: CanGrow Properties Vertical Text Have you ever needed to display your report information vertically, either top-to-bottom or bottom-to-top, rather than left- to-right? There is some support for this in reporting services. For example, you can set the WritingMode property of your textbox to tb-rl, as shown in Figure 21. Figure 21: Setting the WritingMode property of a text box As a result, the information in the textbox will display top-to-bottom, as shown in Figure 22. Figure 22: Top-to-bottom vertical text Displaying your text bottom-to-top is a little trickier; you need to create an image and either set the background image of the control to the generated image, or use an image control. Let’s take a look at an example. Again, you can either work through the following steps, or download the completed report, VerticalText.rdl. What is required is a function, shown in listing 2, that will take the text passed in, measure it, and generate an image of appropriate size with the text displayed bottom-to-top. Listing 2: The LoadImage Function On the menu bar in Visual Studio, select Report | Report Properties and then paste the above code into the “Code” tab, as shown in Figure 23. Figure 23: The LoadImage function for displaying vertical text Next, you will need to add a reference to the System.Drawing namespace, in order to access the basic graphics functionality. Click the “References tab” of the dialog and then the browse (“..”) button. Locate the System.Drawing assembly and click “Add”. The reference will be added, as shown Figure 24. Figure 24: Adding a reference to Systme.Drawing Add an image control to the design surface, and then set the MIMEType property to image/jpeg, the Source property to Database, and the Value property to =Code.LoadImage(“Hello World”), as shown in Figure 25. Notice that the value property uses the LoadImage function in our embedded code. Figure 25: Setting the image control properties When rendered, the report looks as shown in Figure 26. Figure 26: The rendered report, with top-to-bottom and bottom-to-top vertical text Natively, SSRS does not allow for text to be displayed at an angle except for in some charts. If you can figure out how to modify the code for the LoadImage function so that it displays the text at an angle and generates an image, you would have a solution for the issue of angled text as well! Report Data in Header/Footer Reporting Services does not provide out of-the-box support for use of information from your queries in Page Headers and Footers. There are two ways around this. The first way is to create controls in the body of your report, holding the values you need to display in the header and/or footer. You can set these controls to “hidden”, and place them in some out-of-the-way place, towards the bottom of the report. Then, you can set expressions on controls in your header and footer sections to the value of the control in the body of your report. In the following example, shown in Figure 27, I placed a textbox control in the body of the report named “textbox1”, with a value of “Hello World”. In the header section, I placed a textbox with a value of “=ReportItems!textbox1.Value“. I then copied the control in the header section and pasted it into the footer section. Figure 27: Display report data in headers and footers, using a hidden control in the body If you preview this report, the value of “textbox1” will be displayed in the header and footer, as shown in Figure 28. Figure 28: Three times Hello World A slightly cleaner option, in my opinion, is to create a public function that can be called to set the value of a variable, which can then be used in any or all sections of the report body, header, and footer. Figure 29 shows the embedded code that creates this SetReportTitle function, containing the _Title variable. Figure 29: The SetReportTitle function. In this example, you can then simply set the value of the hidden textbox in the body of your report to “=Code.SetReportTitle(“Report Title”)“. This calls our function and sets the value of the _Title variable to Report Title. Now, you can set the value of any control in the header or footer to “=Code._Title”. The variable can be used in any or all sections of the report body, header, and footer. Missing Data in CSV/XML Exports Is some of your data not getting exported to the data export formats of CSV or XML? Reporting Services, by default, has all data controls set to auto-output on export. This means that the rendering extension whether CSV or XML determines what gets exported. When exporting to the data specific rendering extensions, the extension determines what to export, which in most cases means that tabular data gets exported but not data determined to be informational. There is a way around this feature. When you click on a control that contains data that you want to export, you should set the property DataElementOutput to “Output”, as shown in Figure 30. Figure 30: Setting the DataElementOutput property in preparation for export to CSV or XML. Alternatively, you can also set this option by right-clicking the control and selecting properties. Once the properties dialog is displayed go to the “Data Output” tab and select the “Yes” option under the “Output” section, as show in Figure 31. Figure 31: Setting the DataElementOutput property from the Data Output tab By setting this property to “output”, it ensures that your information will get exported. These Data Output options are used only by the CSV and XML rendering extensions. All other built-in formats are exported based on layout and don’t use the Data Output settings. Template Reports Do you use a predefined layout when you start work on a new report? Do you want to be able to add your report templates to Visual Studio through the “Add New Item” feature? Well, as luck would have it, it’s pretty easy to override the built-in template, or add your own templates. In order to do this, simply navigate to Visual Studio’s ReportProject directory, in Windows Explorer: Note that this path should be accessible if you are using a default installation of Visual Studio. If not, then you’ll need to amend the path as appropriate. This ReportProject folder is the one in which VS stores the Report.rdl file that is used as the default template when you add a new report to your project, as shown in Figure 32. Figure 32: The default Report.rdl file is stored in the ReportProject folder. You can replace this default Report.rdl file with your own template, or simply add your own templates to the same folder. In this example, I’ve added a landscape and a portrait template to the directory as shown in Figure 33. Figure 33: Adding custom templates Now, when I choose to add a new item in Visual Studio, my report “templates” are available to be selected, as shown in Figure 34. Figure 34: The Portrait and Landscape template are available when creating new reports Note that the “My Templates” area you see in Figure 34 uses a specialized zip structure with some special code to setup how the template is to be used. My report templates are simply “base layouts” for Portrait and Landscape reports that I use to keep things standard. You can obtain the two example templates, PortraitTemplate.rdl and LandscapeTemplate.rdl, form the code download file. Using the Reporting Services Database Some report developers don’t realize that there are two databases that you can use to lookup or analyze reporting services information. It’s often useful to write your own reports, based on information stored in these databases. The first database is ReportServer, which is used by the Report Services to store all the information about reports that have been uploaded to the report manager. Information such as the report catalog, settings, and security are all stored within the ReportServer database. The database ReportServerTempDB stores temporary information such as report snapshots, user sessions, and report execution information. I have three examples of useful reports created from the ReportServer database. The first report is what I call the Report Index. It provides a list of all the items in the reporting services catalog, with links to render each report in the catalog, as shown in Figure 35. This can prove to be useful as it allows your report users to run just one report and get a list of all reports, without having to navigate through the report manager. Figure 35: The Report Index report I provide an example Report Index.rdl report as part of the code download with this article. You’ll have to point the report to your ReportServer database. The second report, Report Usage, is basically a metrics-type report, providing details of reports that are being executed and how many times per month. The ReportServer database contains a table called ExecutionLog that, by default, stores every report execution for 60 days. You can update the setting ExecutionLogDaysKept in table ConfigurationInfo to allow for more than 60 days of execution tracking. Again, you can obtain the Report Usage.rdl file from the download file, and an example of the report is shown in Figure 36. Figure 36: The Report Usage report The third report, Report Users, is similar to the Report Usage report. Report Users report is basically a metrics-type report, providing details of which users are executing the reports. Again, you can obtain the Report Users.rdl file from the download file, and an example of the report is shown in Figure 37. Figure 37: The Report Users report You will need to point all the report mentioned in this section to your ReportServer database. What has SSRS 2008 fixed? The challenges that I’ve covered in this article are ones for which I’ve managed to finds workable solutions. While using SSRS 2005, I’ve encountered other challenges for which I still have not found viable solutions, without investigating 3rd party tools. An example would be Rich Text formatting. In SSRS 2005 if you wanted to use Rich Text you have three options none of which are natively supported. You could design your own custom control, generate an image, or buy 3rd party controls. With SSRS 2208, Microsoft has itself made some 3rd party acquisitions that have made the report developer’s life a little easier. For example, Microsoft acquired Dundas Data Visualization and so new data visualization controls, such as Charts and Gauges, are now built-in to reporting services. Microsoft also acquired the OfficeWriter technology from SoftArtisans, Inc., which added Word export and support for Rich Text. Within Reporting Services, improvements have been made for report authoring, report processing and rendering, programmability, and architecture. Based on the challenges/solutions discussed in this article, the following issues have been specifically addressed in SSRS 2008: Merged cell issues – The rich text control alleviates some of the merged cell issues when exporting to Excel. Now, you can have one control with multiple formatting options and expressions. Report data in Header and Footer – Variables have been introduced into Reporting Services that can be global or scoped to groups. You no longer have to hide controls in the body of your report to get data to display in the header and footer sections of your reports. If you don’t want to use variables, you can also now use data directly in the header and footer with certain controls. Report Pagination and rendering- there have been numerous improvements in this area: - New properties have been added to allow greater control over how your report is rendered. - Null values are now explicit giving you more control while working with nulls. - The Tablix control, basically the Table, Matrix, and List controls rolled into one, has drastically improved report rendering capabilities. - Visualization improvements for charts and gauges mean they are far superior to the charting capability available out of the box in SSRS 2005 - The CSV rendering extension has been revamped to work differently depending on the purpose of the export, whether it’s for Excel or for application consumption. Overlapping report items should no longer give warnings but may get adjusted automatically when rendered. This reduces pagination problems. You can get more information about new features in SSRS 2008 from the Microsoft site: Finally, it’s well worth reviewing the list of breaking changes in SSRS 2008, as they may cause some headache and issues in your environment: You’ll probably uncover most of the issues when deploying and configuring the ReportServer. A lot of the configuration options have been removed and/or consolidated. Most significantly, SSRS 2008 no longer relies on IIS, and instead uses Handlers and Routers to work with HTTP.sys directly. Summary SSRS is a very easy-to-use reporting architecture but I know from experience that when issues or challenges arise it can be very frustrating. I hope the solutions covered in this article will aid you in your work with SSRS. Remember; when a challenge arises there is always a solution, though the solution may not always feasible, based on available resources.
https://www.simple-talk.com/sql/reporting-services/ten-common-sql-server-reporting-services-challenges-and-solutions/
CC-MAIN-2017-04
refinedweb
4,711
59.74