Java requires all code, even thousands of lines of it, to be written inside a class declaration: between the open brace, {, and the close brace, }. Code outside the braces is not allowed.
Observe the following specimen code.
public class Employee   // class declaration starts
{   // open brace of the class
    String name = "S N Rao";    // two variables
    double salary = 9999.99;

    Employee()   // constructor
    {
        System.out.println("From Constructor Name: " + name + " Salary: " + salary);
    }

    // methods
    void calculate()   // first method
    {
        System.out.println("From calculate() method Name: " + name + " Salary: " + salary);
    }

    void show()   // second method
    {
        System.out.println("From show() method Name: " + name + " Salary: " + salary);
    }

    // at the last, main() method
    public static void main(String args[])
    {
        Employee emp1 = new Employee();   // create an object of Employee
        emp1.calculate();   // with the object, call calculate() method
        emp1.show();        // with the object, call show() method
    }
}   // close brace of the class

An easy entrance to Java.
1. Java Features – Buzz Words
2. Primitive Data Types
3. Data Types Default Values – No Garbage
4. OOPS Concepts – Introduction
5. Java Naming Conventions – Readability
6. Basic Class Structure, Compilation and Execution
7. Using Local and Instance Variables
8. Three Great Principles – Data Binding, Data Hiding, Encapsulation
9. Using Variables from Methods
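Item 7 above covers local and instance variables. As a minimal sketch of the distinction, here is a hypothetical class (the Rectangle name and its values are illustrative, not taken from the tutorial): instance variables live in each object and get default initialization, while local variables exist only during a method call and must be assigned before use.

```java
// Hypothetical example contrasting instance variables with local variables.
public class Rectangle {
    double width = 2.0;    // instance variable: one copy per object
    double height = 3.0;   // instance variable

    double area() {
        double result = width * height;  // local variable: exists only during this call
        return result;
    }

    public static void main(String[] args) {
        Rectangle r = new Rectangle();           // create an object
        System.out.println("Area: " + r.area()); // prints: Area: 6.0
    }
}
```

Compile with `javac Rectangle.java` and run with `java Rectangle`; trying to read `result` outside `area()` would be a compile error, which is the whole point of the distinction.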
Post by Ryan Maue
You knew it was only a matter of time after the Rolling Stone criticism of Obama on climate, but Al Gore is back and in a big way. Some are calling it an Inconvenient Truth 2.0 with a brand new slide show connecting extreme weather with global warming. The new “Climate Reality Project” moniker replaces Gore’s previous Alliance for Climate Protection racket. UK Guardian eco-journalist Suzanne Goldenberg gives away the store with this line: “The campaign represents a modest comeback for Gore who has reduced his public profile on climate action in the past few years – probably out of consideration for the political consequences to his fellow Democrat Barack Obama.”
So Al Gore’s decision to lower his profile was to avoid forcing Obama to make politically unpopular decisions? I think there are additional reasons related to his personal life.
Thankfully, Al Gore is back. Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.
Gore has put out a new video trailer: it’s clear he has worked on his lecture delivery to sound even more professorial, elitist, and condescending. Experts such as NASA scientist Dr. James Hansen and NCAR’s Dr. Kevin Trenberth are prominently mentioned in several interviews given by Al Gore.
Another aspect of the Gore resurrection is the lockstep adherence to his brand of climate change orthodoxy by his sycophants in the liberal media. Here are some links for a flavor:
Joe Romm: Exclusive: Al Gore On His ‘Climate Reality Project’ Launch: “It’s Urgent To Rendezvous With Reality To Save The Future Of Civilization As We Know It.”
Suzanne Goldenberg: UK Guardian: Al Gore returns with new climate campaign. Climate Reality Project aims to expose reality of global warming crisis and kicks off with a 24-hour live streamed event
Chris Mooney: some blog: Al Gore Launches the “Climate Reality Project” — which deserves some quotes since he is right on the money:
In my view, Gore has been pretty much right about the science of climate all along. But sad to say, everything I’ve learned about this issue convinces me that there is little he can say or do to get conservative climate “skeptics” to accept that.
Yes, they need to renounce their counter-reality and come back to this one. But paradoxically, having Al Gore tell them that may just drive them farther away. Let’s hope he can appeal to the middle, anyway…
139 thoughts on “Lipstick on a pig: Gore rebrands climate outfit”
I hear the cries, “Oh No, not again!”
September 14th. Weird choice. It’s smack dab in the middle of the week. Not on a weekend or anything. I’m thinking that since that’s about the peak of hurricane season, he’s hoping to have a nice little cat 5 coming at us that he can point to as “proof” of global warming. Or climate change. Or climate disruption. Or whatever we’re calling it these days.
You mean, “Lipstick on Man-Bear-Pig”
[ryanm: wuwt readers are damn quick 😉 ]
“Al Gore is back and is planning “24-hours of Reality“”
Well, that would be a start. Not sure if he could handle it for 24 whole hours.
Maybe he’ll kiss a polar bear with the same authenticity that he kissed Tipper on stage with back in 2000. Or start crying like Jimmy Swaggart?
I wouldn’t p**s in Al Gore’s mouth if his teeth were on fire…
The skeptics aren’t just conservative – they are classical liberal, libertarian or moderate. But they know bullshit when they smell it.
Unfortunately it is Gore’s activities, as well as those of Romm and Mooney, Mann, Hansen and Trenberth, that are re-energizing not just the Republican Party, but the extreme wing-nut end of the Republican Party.
You don’t have to be a “conservative skeptic” to be extremely wary of a politician, any politician, who attempts to define and control “Reality”
He is doubling down. Unfortunately time and odds are against him, so it just looks like a desperation attempt. It is, but do not expect to see truth or reality from him or any of his sycophants.
Wasn’t it Dr. James Hansen who advised Gore on An Inconvenient Truth, which was later found to have over 8 factual inaccuracies?
If it cools then how the hell are they going to “connect the dots”? They’ll just make a laughing stock of themselves.
Al you pansy – why are you stopping at 24 hours of your brand of reality? Why not show your true devotion to the Cause and extend the webcast to a full week?
Truthfully, I really just want to see Al praying for the quiet comfort of the grave after ~100 hours of sleep deprivation…
The Gospel, according to Al, Gore, and Bull
So some skeptics have been calling themselves “climate realists.” Gore’s re-branded venture is the “Climate Reality Project,” so the followers can call themselves “Climate Realists.”
How much longer until the (C)AGW faithful insist they are the real “Climate Skeptics”?
Hey, we’re the ones critically evaluating the science and making sure it all agrees with itself and belongs in the overwhelming consensus, thus WE are the ones who are Scientifically Skeptical, not them!
You can put lipstick on a (man-bear-)pig, but it’s still a crazed sex-poodle.
September 15 is also within a day or two of the Arctic Ice minimum, which appears to be tracking to a low similar to 2007.
Don’t forget to open the windows the night before, Al
re: “All of the group’s efforts will be devoted to spreading the truth about the climate crisis and the solutions to it,…”
Why don’t they start with the truth about the data before manipulations. And the truth about historical data.
A good starting point would be for Al to admit he lied about CO2 leading temperature in his ice cores. He could also admit his hockey stick was completely wrong and probably intentional fraud.
He could end his introduction by owning up to the lies caught by the British court.
On this foundation of truth, he could then talk about Phil Jones’ BBC interview where he said
the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 are, for all periods, similar and not statistically significantly different from each other. He can add that Jones also said “that from 1995 to the present there has been no statistically-significant global warming” and that they believe man is the cause of warming because of “The fact that we can’t explain the warming from the 1950s by solar and volcanic forcing” ()
Then Al can start his rant with the recent cooling.
That is if he really wanted to talk about the truth.
Thanks
JK
PHOTOGRAPH OF COCAINE SMUGGLER JORGE CABRERA WITH AL GORE.
This is one of the two photographs that the Justice Department
was confiscating after it was learned that Jorge, who had donated $20,000 to the Clintons, was a major cocaine smuggler. The woman to the right, unidentified, appears to be the same woman seen in the photo with Hillary below.
Despite all of this, Gore won’t debate the skeptic position. If – somehow – before then the MSM could be convinced to appeal to him to publicly debate some top scientific skeptics, he might do a McCarthy and talk himself – and the hysteria – into the ground.
How can he either be drawn into a debate, or be seen on the MSM refusing one?
“…Gore who has reduced his public profile on climate action in the past few years – probably out of consideration for the political consequences to his fellow Democrat Barack Obama.”
More probably because he wanted his waterfront condo deal in San Francisco to fade from the public’s mind.
I think Gore has made all the money he will off global warming. Now I think it’s about avoiding an investigation that would expose the whole thing as a fraud and make him look like Bernie Madoff and put him behind bars for life.
Poor Al . . . ever more desperate to keep the gig going.
Well have Al. Just because you failed out of Divinity School doesn’t mean you aren’t a saint.
Those who fail to learn from history are condemned to repeat it. – attribution: various
“Cataclysmic Weather and Global Climate Change: The New Normal…..”
Hmmmmm – Now where have I seen this act before? algore…. igore…… abby something….
This is a candidate for Best Sceptical News of the Summer. The more Al Gore is on the telly, the more sceptic converts are made.
And the more he refuses to debate with the increasing number of sceptics, the more foolish and shallow he, and his cause, will look.
Perhaps it will be a Tipping point for us.
I suspect that this re-emergence has more to do with the collapse of the indulgences trade. Al Gore needs a new revenue stream.
Oh good. He hasn’t actually wised up. He is willing to keep selling his AGW Indulgences right up to the big chill. I was worried he would have been forgotten about by the time the town folk were torch-and-pitch fork angry.
Stay there, Al. You stay right there.
Just like the penultimate scene of a ‘B’ horror movie, when you thought the beast had been slain, and the good guy turns away, there is that final terrifying twist. Similarities of Al Gore to Glenn Close’s Fatal Attraction character in the bath scene are too close for comfort.
“Lipstick on a pig”
I know you are referencing Gore’s manner, but –
First, you’ve insulted pigs everywhere by relating them to Gore,
and,
second, the “lipstick on a pig” phrase implies putting the lipstick on the front end of the pig which is exactly opposite of where Gore uses it.
Maybe “perfume on a cow patty” would describe Gore’s drivel better?
Dunno.
🙂
I expect that Mr. Gore’s project will result in an endless number of (richly deserved) snarky remarks and general ridicule from people who question his version of the “science.”
Well, this is good news. For a while there, even the alarmists quit identifying with him. This will be fun. Are they going to identify with and embrace Al “millions degrees/unhappy ending” Gore once again?
I hope so. Makes for easy going. 🙂
Support the man. After AIT, skepticism grew. He works for us.
Of course he will do this without someone taking the opposing view. No online debate, no opposing time. All AL, all the time.
He’s waited this long, why not wait for the next IPCC report and have more behind his death by power point?
Good thing he is coming back with a science fiction show… they are running out of candidates for the next Nobel prize.
Did someone say that he’s back because he reached the Tipper point?
Chris Mooney: some blog: Al Gore Launches the “Climate Reality Project” — which deserves some quotes since he is right on the money:
“In my view, Gore has been pretty much right about the science of climate all along.”
Really?
Hasn’t he watched An Inconvenient Truth?
What planet is he living on?
I wonder whether there will be a “Gore effect” with respect to Atlantic hurricanes. September 14th is, IIRC, the date for maximum hurricane activity. To date, now mid-July, we have had one rather small tropical storm. There are no hurricanes in sight. I know it is early days, but one never knows. These Gore effects seem to be very strong.
Indeed, this will be entertaining. I can’t wait to see and hear all the BS he’ll spout. Popcorn time
They should have timed it for Halloween…
“Reveal the deniers” ?
Are they hidden?
I don’t remember giving a secret password to get on here.
The stupidity and dishonesty are piling up and reaching levels which seem to have no limit.
What’s next? Calling skeptics terrorists and calling for their prosecution & execution?
Sounds like the divorce has forced him to having to make more money to live the hypocrite life he has “chosen” to live. God help him if he ever had to live like us regular people!
Is a re-launch somewhat equivalent to a second chakra?
Watch the teaser video (popcorn please; the full event is going to be an embarrassing own goal for Gore) and check out the 3,000 people ‘trained’ by Al Gore to give a slideshow. 😉
Can’t wait; every cliché going. How many climate scientists are going to be happy to be associated with this? Yet you are either with Al or against Al, a dilemma for them.
“reveal the deniers”
“cataclysmic weather events ARE occurring”
“big oil & big money”
Should be interesting. A world event, over 24 hours, Al Gore speaks! 64 days and counting..
does he really believe this rubbish?!
“Climate Science is the search for fact… not truth. If it’s truth you’re looking for, Dr. Tyree’s philosophy class is right down the hall.”
Hopefully Dr. Jones (Indiana, of course) will forgive me taking liberties with his quote. Seemed appropriate given Gore’s website and Grist’s comment.
Re “The New Normal”: he MUST take this gal with him on his PowerPoint tour; I insist; she’s gold.
Good idea. With Gore leading the charge they will go over the cliff of ‘doom’ even quicker.
If you want to have fun, travel over to the youtube page where his promo is posted and read some of the supportive comments.
Good, he’s done more damage to their ’cause’ than any other person………..
“Poor Al . . . ever more desperate to keep the gig going.” I’m ever more desperate to make you guys understand that the earth is warming, that human activities are the main cause. You are all the victims of an extremely effective denial campaign and are in a psychological-ideological deadlock which prevents you from seeing reality in an objective manner. This is exactly why initiatives like ’24 hours of climate reality’ are being held.
When I was about 12, I used to think climate change was exaggerated because I felt it was cool to think the opposite of everybody else. I think I can understand how you feel. It’s okay to change your mind.
And if you don’t change your mind, ask yourself this: are you human? – could you be wrong? – what if the doomsday scenarios turn out to be right? – is it responsible to risk such damage based on your being 100% certain that the science is bogus?
I want to reach out to you. I want to work together with you. It’s not about politics. Please leave behind your preconceptions and your fear and start to see the reality of human-caused climate change.
Let’s be friends!
Al’s Climate Launch may turn out to be a Lurch when he speaks.
That Gore Effect is directly proportional to the level of bull.
Take cover.
Some editing of the Picture of Gore is needed:
You need a right hand extension that shows Old Man Winter blowing back.
They’d better use something to keep the “circle flies” away during 24-hours of Reality. You can’t hardly fool them circle flies.
Should be a target-rich environment! The challenge will be getting the truth out to a broad audience so that his fatuous arguments for the AGW echo-chamber can be exposed. Let’s roll!
The first thing that struck me is Gore’s use of the word “Reality” in the name of his venture—and I contemplated the ramifications of this misuse of the English language. So I go into the SBA.gov Web site and look up Truth in Advertising. It doesn’t bode well for Mr. Gore.
You see, I found that the FTC, the main federal agency that enforces advertising laws and regulations through Truth In Advertising, says that:
I submit Gore’s use of the word “Reality” doesn’t conform to the SBA’s requirements as stated above.
I also laugh when Gore maintains that “reality is on our side”. That’s not the reality I see, Mr. Gore. I suggest you open your eyes, quit trying to cover your sorry past, and face the facts that your “Climate Crisis” as you call it is a factual inexactitude.
Yeah Go Al Go.
DirkH says:
July 12, 2011 at 2:15 pm
What a classic! How does she spout that with a straight face???
Come home, Al. The chickens have come home to roost and you are standing under the coop.
2011 is a lean year for ManBearPig:
@bob nye: I would…
So Al Gore’s decision to lower his profile was to avoid forcing Obama to make politically unpopular decisions?
Obama doesn’t need to be forced to make politically unpopular decisions – he’s perfectly capable of doing that all by his lonesome. Witness the last couple years, and now intimating he will stop SS checks to Seniors if he doesn’t get his way on the budget. I have some advice for Obama: Never pick a fight with an old guy; he won’t fight, he’ll just kill ya (politically speaking of course).
Every time that Al is about to pop up and make yet another presentation, after having spent an unhealthy number of hours learning PowerPoint (again), I always wonder whether yet another insane asylum has closed up shop.
If the content is of no import to the presentation, and if no one is allowed to say anything, maybe not even look – no microphone, no cameras, in fact all them nasty evil IRL people are not to know anything – and if the preferred choice of building is something akin to old nazi bunkers, then a presenter pretty much has to be insane to go through with the presentation, when the choice of just making a tweet about having held such a presentation (nobody being the wiser, wink wink nudge nudge) is so much less time consuming and easier. :p
LOL – Al Bore, you are a card! He must have a mansion payment coming up or something – maybe a new pool heater? Nobody took you seriously when you were Veep, a Presidential candidate or as a tree hugging hypocrite. Go chase a maid across a room and go away.
It stands to reason that if banging your head on the desk is painful, diving into it headfirst from atop a tall stepladder should feel pretty good. Well, at least it does for a warmist.
Maybe he’s trying to get the VP nomination. Gore, the leader of a Green Party within the Democratic Party, temporarily. In addition to being VP, he could be Green Czar. Yeah, that might work. Gore is set up as the Idol for the Loony Left and Obama moves to the middle. Gore takes over the Democrat Party from the inside.
To Save The Future Of Civilization As We Know It
Precisely the opposite is true, if Gore et Al get their way, there will be no more “civilization as we know it”.
Expect unseasonable cold.
@ Some European says:
July 12, 2011 at 2:30 pm
I can think of a few other historical figures who “just wanted to be friends”. Stalin, PolPot, Genghis Khan, Caligula — just off the top of my head.
One in 10 Species Could Face Extinction: Decline in Species Shows Climate Change Warnings Not Exaggerated, Research Finds
That’s right. Already happened. Panic in the streets.
Someone commented here that he’s doubling down; he’s done that already a few times. This would be 8- or 10-folding down by now. I would like to follow the money on this one but I can’t. The carbon trading ponzi scheme is over. Does he think he can win another Oscar or Nobel prize? So where’s the money?
Joe Romm about Al Gore at ThinkProgress.
Read the comments. Hint: It’s not ThinkCritical.
DirkH July 12, 2011 at 2:15 pm
That YouTube video-
“Adding comments has been disabled for this video”
How shocking that yet again an alarmist has to prevent challenge.
When you have to call the baloney you’re selling “real,” you’re on very thin ice.
14th September is the fourth day of the Rugby World Cup here in New Zealand. On the 14th will be Samoa vs Namibia (Samoa to win?), Tonga vs Canada (Tonga to win?) and Scotland vs Europe 1 (Scotland to win?). On the 15th (still 14th in California) is Europe 2 vs USA (USA to win?). What players make up Europe 1 and Europe 2? Will they wear the EU flag?
So let’s see, watch the rugby or Al Gore’s 24 hour Unreality Show?!? 🙂
Maybe Gore should go read George Orwell’s 1984 before he starts telling the rest of us ‘free’ people how to think. But then maybe it doesn’t come in a Powerpoint version.
The big difference this time?
We are ready and waiting for him…….
When all else fails, he can make a case for reducing CO2 emissions “just because.” For example:
See? Inconsiderate Dutch drivers are wreaking havoc with natural ecological systems.
Increase the carbon taxes! Force them to drive less! Save the insects before it’s too late!
When Gore and his cult of scientific apocalyptic screamers predict the same old tired scenarios of doom they are correct in one sense. It will be their credibility’s doom from a continued downward spiral of their already abysmal intellectual integrity.
To paraphrase a line from the character Quintus in the movie Gladiator: “Warmists should know when they are conquered.”
John
Too much of a good thing…..
This sums it up quite nicely…..
Some European:
Many people including me know
(a) climate has always changed everywhere and it always will,
(b) no unprecedented climate changes have been observed during the past century,
(c) there is no empirical evidence to support the AGW hypothesis,
and
(d) there is evidence that refutes the AGW hypothesis.
But at July 12, 2011 at 2:30 pm you say to those of us who know these facts:
” I’m ever more desperate to make you guys understand that the earth is warming, that human activities are the main cause,
…
could you be wrong? – what if the doomsday scenarios turn out to be right? ”
I can tell you that your desperation is to be expected because the only things that could encourage us to share your delusion are
(1) observation of some unprecedented climate change,
(2) some empirical evidence which supports the AGW hypothesis
(3) disproof of the evidence which refutes the AGW hypothesis (i.e. the ‘hot spot’ is found, the ‘missing heat’ is found, the ‘warming in the pipeline’ reappears, the predictions of sustained warming start to be confirmed, etc.).
Until then I suggest that you recognise your delusions are certainly wrong because the existing evidence clearly indicates that the AGW hypothesis is wrong and, therefore, the “doomsday scenarios” are impossible.
Additionally, your assertion that you sincerely believe these untrue claims would have more credibility if you did not hide behind an alias.
Richard
Where will this be broadcast? On Gore’s network?
“Thankfully, Al Gore is back”……………
…..the first sign of the coming ice age
Others could coordinate a corresponding “24 Hours Of Fantasy” event that mocks his (which is what it deserves). I’m sure we’ll hear of that, soon…
I’d like to know when this cooling is going to start.
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
Global cooling my butt.
I caution all here not to underestimate this “Climate Reality” show.
This is an “All in” bet by the Gore and allies and here are the stakes:
1. This is 13 months before the US Presidential Election.
2. I am not convinced that Obama will be the Democratic Presidential Candidate in the end. His negatives are high and irreparably so. By the January 2012 State of the Union, Obama will be unable to avoid the “Are you Better off now than 4 years ago?” question that sank Jimmy Carter.
3. So, is it beyond reason that Al Gore will contest the Democratic Nomination? The greens will abandon Obama and Gore came so close in 2000. 9 out of 10 reports will give him a free ride.
4. At the very least, he will be a cheerleader for every Democratic Candidate backing a Climate Change agenda.
5. I’m a Bayesian. Right now, for whatever reason, I think it is even money that in mid September the Arctic Ice cover will be below 2007 minimums. It’s better than even money that at least one of the Ice Pack measures will be at historic lows. Even if it is close, it will add credibility to his message. “The deniers have been telling you it’s been cooling for 10 years. Look at the ice pack now and laugh at them. I told you so!”
6. This is just the overture to 14 months of aggressive climate change propaganda. At the end of those 14 months are a US Election and the Rio 2012 20-year IPCC Sustainability agenda. During Rio, Obama will still be in office, even as a lame duck.
Given what is to be gained, what Al Gore has to lose is a pittance. The more I think about it, it is strategically very clever.
The poker analogy is poor here. Sept. 14 is Al Gore’s D-Day. It is a commitment of every asset and ally on his side toward total victory.
If we skeptics and freedom lovers do not anticipate and meet this assault with more than humor, the end of 2012 will not be a future I would welcome.
I hope this load of manure will not be required viewing at schools during the day. If Gore has convinced the gov’ts and school boards to show this, it must be challenged or opposing views allotted equal time.
Oh no Gore is back – another reason why Mother Gaia is gonna cool down.
Shhh. Don’t tell anyone, but this is the next phase in that astroturfing project.
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
What town? Just for giggles, I picked a random town in S GA (Valdosta) and it doesn’t show above average from May 25 to today. Southern GA being a big place, maybe a little more detail would help.
Can’t resist the urge to boil Climate Reality Project down to an acronym:
C. Ra. P.
Mark H.
‘Joe Romm: Exclusive: Al Gore On His ‘Climate Reality Project’ Launch: “It’s Urgent To Rendezvous With Reality To Save The Future Of Civilization As We Know It.”’
To save the future of the civilization as we know it?
So, exactly whose version of civilization, during which time period, should we save as we know it? Don’t we really already do that by noting previous civilizations in the history books? During the last hundred years all our civilizations, those that exist on this planet at any given time, have changed numerous times, for the better no less.
Or what, should the African people still slave in economical bonds to the western world, like the crazy greenies seem to want? Should the Chinese not prosper even more? Should the western world not develop new BIG industries to further western world civilizations? Should not Chile have a space program because it doesn’t fit the crazy hippies or the Venezuelan communists? Should the Japanese or the Indian not have a shot at going to the moon just because the green minions and their green masters think otherwise?
Hooray! The bulb act died!
Al Gore the Climate Scientist? Can someone refresh my memory as to Al Gore’s academic qualifications as a climate scientist?
Brian says:
July 12, 2011 at 4:23 pm
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
Indeed, humanity is unbearable lately.
In what town or city do you live Brian?
I hope they have the snowplows ready at the location Al Gore goes to on September 14.
I’m sure 1.0% of the planet is having extreme weather today – 1 in 100 year type extreme weather. 99% is having relatively normal weather which will go un-noticed.
If 100% of the planet had normal weather for decades at a time, then we could start talking about the end-times and the antichrist being big oil/big coal. We might call it Apocalypse 1:13 and have fancy graphics in the margin.
On more serious note, people living in South Georgia should expect their summer to be hot and humid. It’s subtropics, and their butt have nothing to do with it, unless they would move it up North.
Brian says:
July 12, 2011 at 4:23 pm
I’d like to know when this cooling is going to start.
I live here in south GA and it’s been well above average everyday since late May basically with unbearable humanity.
Do you not think your being a tad harsh on the good people of south GA ? lol
“In my view, Gore has been pretty much right about the science of climate all along.”
Apparently Mooney never heard about the British court’s opinion of Gore’s propaganda film.
I guess Chris needs, at minimum, a dozen inaccuracies before he gets suspicious.
Some European says:
July 12, 2011 at 2:30 pm
The Earth is NOT warming; in fact it is cooler now than it has been for most of the Holocene. There is nothing about the late 20th century warming that was different from the warming periods of the last 2 centuries, and it’s only just about as warm now as it was in the Medieval Warm Period. There is NOT a solid body of empirical evidence that mankind’s CO2 emissions have had any effect. Indeed, as the Earth has not been warming for more than a decade while atmospheric CO2 has risen, the evidence if anything is to the contrary.
And if you don’t change your mind, ask yourself this: are you human? – could you be wrong? – what if the doomsday scenarios turn out to be right? – is it responsible to risk such damage based on your being 100% certain that the science is bogus?
I think that YOU should ask yourself if you are human.
Huge amounts of money are being spent on what appears to be a myth that has been latched onto by tax and power hungry politicians. What if we are wrong?? Well, what is 100% absolutely sure is that in the time you have been reading this, around 10 children have died from malaria – FACT: around one child every 6 seconds. Similar numbers are dying of starvation and diseases due to poor water quality or thirst. THIS IS FACT: “One person every other second needlessly dies. Approximately 85% of them are children.”
Yet you would rather pay extra money to speculators like Al Gore on a carbon exchange?
Fund more flights by Al Gore between his residences paid for by his companies that sell carbon credits?
What kind of person disregards ACTUAL harm to others and throws money that could save millions of lives a year to self-serving speculators and politicians – then claims it is ‘just in case’ the falsified AGW hypotheses might be right?
Some European you are!
In other words, Al Gore has been an embarrassment to our cause, and only leads to people turning away from it further, but we can’t tell him to stop.
Some European says:
July 12, 2011 at 2:30 pm
Response:
S. E. – I appreciate your sincerity and your heartfelt belief that our naturally and ever changing climate is directly influenced by man made sources. You have been misled…. but I think I can understand how you feel. I too was gulled into the ‘Save The Earth’ dogma back in the 1970s, until I realized it was just a front for Big Socialism. The baseless Save The Earth dogma you have chosen to embrace is the ‘cross you bear’. It prevents you from examining reality in an objective manner. This is why initiatives like ’24 hours of climate reality’ appeal to you, the misled faithful. You have an indoctrinated need to attend your AGW religious services regularly… and generously give the appropriate tithes!
Hallelujah! We Are Saved!
Just remember this – It’s okay to change your mind. It’s OK to question the Orthodoxy and demand open and honest answers. It’s reasonable and just to expect full disclosure of all data, analyses, programming code, and computer models used to support or refute the central hypothesis of AGW. This will be difficult for you. Your indoctrination is deeply seated and the path back to reality will not be an easy one. The first step is admitting you have been misled……..
Ask yourself this: Are you a rational human? Could you have been misled? Could you be wrong? What if the doomsday scenarios that churn your fevered fears turn out to be entirely false? What if the much contested hypothesis of Anthropogenic Global Warming proves to be baseless or of only minor scientific significance? Would you accept responsibility for deliberately destroying entire global industries supplying and using the low cost energy to feed, house, and clothe the population of an entire planet, because you mistakenly believed a scary story? Would you accept responsibility for bankrupting nation after nation with ill conceived and wasteful spending on far less efficient energy sources and irrational ‘carbon credits’, because you saw a frightening movie and then had a bad dream? Are you personally and collectively willing to accept responsibility for the worldwide starvation, pestilence, and death that stems from unreliable forms of inefficient and expensive energy and will be a direct result of your profoundly misguided actions? Are you human?
Are you human? Yes, of course you are, regardless of how effectively you have been misled. Misleading the gullible is a high human art that we are all susceptible to, on occasion. The more important question is “Are you a rational human?”. Are you willing to move beyond the ‘sound bites’ and ‘press reports’? Are you willing to examine data and analyses from all sides of the issue, to thoroughly inform yourself? Are you willing to acknowledge that much of the data and analysis used to support the hypothesis of Anthropogenic Global Warming is hopelessly compromised by a host of errors embedded in the data collection methods, data samples, data recording systems, faulty data storage, inappropriate use of tree growth rings for temperature proxy data, etc., etc., ad nauseum? Are you willing to acknowledge that much of the foundation analysis using this flawed data is further compromised by a host of human selection, calculation, and interpretation errors, both inadvertent and willful?
When you can honestly challenge the foundation data and the foundation analyses (and righteously so!), you will be well on your path to achieving ‘rational human’ status. You will find yourself proudly accepting the title of ‘skeptic’, as every rational human scientist must. Skepticism is the bedrock of all honest science.
I want to reach out to you. Let’s be rational!
Brian
I’d like to know when this warming is going to start.
I live here in the UK and it’s been well below average nearly everyday since early May basically with unbearable humanity. (did you mean humidity, or do you seriously hate your neighbours?)
Global warming my butt.
Everyone knows the second album is always a flop if it’s left too long and the first album is already out of fashion. Just ask Guns and Roses.
I can’t help thinking he has accepted money from Big Oil to further undermine the global warmists’ cause.
Ian W:
Thankyou for your excellent post at July 12, 2011 at 5:07 pm.
I lacked the courage to be as blunt as you are in your post but, of course, you are right: those truths need to be stated because there are people who need ‘the dots to be joined’ for them and it seems ‘Some European’ is one such (assuming his/her post was genuine).
Again, thankyou.
Richard
I thought Al had been locked up behind bars by men in white coats
Of rested case, lads and gentiles, I hereby present…drum roll…the Nobel life of Randy Man Minus Wife:
Tobacco farmer Gore’s six-fireplace palace:
And “BIO-solar” jet ski launch (and bachelor) pad yacht:
Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.
Oh, please… Of course it will. And Nature and SciAm will be more than happy to publish something for Al Gore to say, “Why, Just Last Week A Brand New Peer Review Paper Saaaaays That All This Horrible Weather Is Indeed Due To Global Climate Disruption!!!!”.
Google: delay on renewables will cost U.S. trillions, over million jobs.
35 Inconvenient Truths
The errors in Al Gore’s movie
Isn’t September 14th Hugo Chavez National Gonorrhea Day in Venezuela?
I have been around long enough to remember the warmist mantra that people should not confuse weather with climate. Now that it is no longer convenient they reject that mantra and instead seek to link weather with climate. Little wonder that qualified and experience scientists like myself have real questions that remain unanswered. I’m not a “denier” I just don’t believe that the scientific case has been made. This man is NOT a scientist so who gave him leave to ponce around the world name-calling real scientists?
“Hi, my name is Al Gore, and I’m a climaholic.”
Stephen Rasey says:
July 12, 2011 at 4:26 pm
Interesting points, Stephen. Thought provoking…. I wonder how much of Al’s personal fortune was tied up in the ‘carbon credit’ trading schema? He may be feeling a bit pinched financially, at the moment, with the collapse of these irrationally unsustainable ‘markets’. As you say, it does look like an ‘all in’ moment, for the man whose home state of Tennessee helped elect George W. Bush.
The choice of Sept 14 is interesting but I did find one interesting fact.
Bishop Gore School in Wales was founded on that day.
I knew this was coming down the pike when I read Trenbreth’s big push for attribution as the next step in climate scare tactics. It’s not enough for him and his flunkies to spread the word, it will take Al Gore’s acolytes to further the mission.
You would think they would have learned their lesson after trying to convince the public that the ’88 droughts would become a frequent occurrence in N. America or that the 2005 hurricane season was the beginning of a new era. But I guess if you cast your net wide enough and lay claim to EVERY event as a sign of global warming, you can never be wrong.
But we all know, the more loudly you squawk, the more immune people become to your squawking. So, let them push this flimsy ‘science of attribution’ … it will only drive the numbers even lower.
I for one must thank dear Al, he made me a skeptic; after watching his Inconvenient “Truth”, I bought Michael Crichton’s “State of Fear”.
With Gore, it will be more like “connect the dolts”
A lot of drive-by’s around here lately. They never seem to stick around and back up their astonishingly narrow assertions. Thanes never did get back to me on the Gavin/martha/polar bear thread. R.Gates has been reduced to comparing our climate with sand piles. I get where he was trying to go with that (tipping points), but it is a sadly weak analogy that does not begin to describe the complexity of our climate, and the interactions of the variables and influx/ outflow of energies. I must say, though, that I am glad that someone gave Gore more rope.
Go home Al. Make a nice warm bowl of soup, turn on the TV and watch til the test-pattern stops. Tomorrow, wake up, turn on your TV, and repeat the above exercise.
I had to watch Apple’s “1984” Mac video again in his honor.
This will be the best thing for skeptics since “No Pressure”.
Imagine that after making a movie grossing over $100 million, getting endless free press and softball interviews, after getting huge tax payer funded conferences and making many tens of millions personally off of AGW, that Gore feels the need to ‘bring climate reality into the mainstream’, and to out ‘deniers’.
This pathetic money waste by Gore & gang is effectively an admission of defeat.
My bet is it falls apart before it even takes off.
Ms. F.(?): “Al sometimes has trouble with blabber control…”
No sooner does Al Gore return ( somebody hinted at this) then along comes the first hints of La Nina round #2.
I note that the usual toxic AGW sycophants who use foul language enthusiastically on the Guardian’s CiF are at their foaming best over Gore’s new movie and the ‘in the pay of big oil’ flag is being waved vigorously. I have never seen so many comments removed from there before now, either.
So he got a bunch of ignorant kiddies to spread the Gospel of the Goreacle? Always use the dumb useful idiots.
Steve Oregon says:
July 12, 2011 at 1:56 pm
“Reveal the deniers” ?
Are they hidden?
I don’t remember giving a secret password to get on here.
The stupidity and dishonesty is piling up and reaching levels which seem to have no limit.
What’s next? Calling skeptics terrorists and calling for their prosecution & execution?
================
actually that HAS been mentioned and proposed. by more than one alarmist.
tattoed foreheads, re education camps, and exploding kids all come to mind.
As the word HOAX appeared my Adobe Flash Player crashed. Prophetic or what?
Isn’t Gore just another cog in the machinery keeping Khrushchev’s prognostication alive?
You forget the bigger-than-BskyB BBC left wing pro-AGW conglomerate: they will lap this drivel up too!!
The title is only half right. I’ve googled up a few hundred online images of Al Gore. And can’t find one where he’s wearing lipstick.
“probably out of consideration for the political consequences to his fellow Democrat Barack Obama.”? What? I thought we were all going to fry, the seas would envelop us and there would be death and destruction from any number of cataclysms if we didn’t act now!!!!
But we had a time out for political considerations? Oh brother!
I suggest we, on the sceptic side, hold a simulcast on Sept. 14, debunking every claim and statement they make, as they make them. Wonderful split screen possibility!
Particularly when they “out” the “deniers”! Why wait as they bring up one name at a time on their screen, we’ll scroll the entire list on ours!
Outing deniers, so dark!Ohhhhh! Kind of like pointing out the gays in a gay bar! This will be hilarious! “See they guy over there waving his arms and jumping up and down? Yes folks, those are the trash hiding in the shadows trying to prevent us from making mone…ummm saving the earth!!” LMFAO!
@Dennis Cox – any of those pictures show his “Shakra”?
I’ve lived in Southern Georgia (St. Mary’s/King’s Bay) and, trust me, it’s always hot. I do take strong exception to the “unbearable humanity” remark too. I found most of the people there quite pleasant.
Dennis Cox says:
July 13, 2011 at 9:13 am
The title is only half right. I’ve googled up a few hundred online images of Al Gore. And can’t find one where he’s wearing lipstick.
Apparently you did not read my post earlier in this thread.
🙂
Mr. Maue, “Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.”
Perhaps you should direct this at Mr. Watts who has in previous posts tried to deny a link between weather and climate based on anecdotal evidence about previous droughts and storms.
[dr maue: i will berate him immediately: anthony, please do not discuss previous droughts and storms … climatology is about the future, not the past]
sceptical says:
July 13, 2011 at 9:37 pm
Mr. Maue, “Anecdotal evidence, commissioned puff pieces in SciAm, and hand-wavy slogans about “loading the dice” are not going to cut it anymore.”
================================
And who the hell are you “sCeptical”?
What gives you the right to say “Mr.” as opposed to “Dr.”?
He has his qualifications.
What are yours?
You just don’t have any better time to spend than to make potshots in your AC controlled room behind the safety of your laptop.
But even so…what does your lame protests have to do with anything here?
Are they part of your 24 hours of reality?
Go ahead and live in your world.
Meanwhile…the rest of us will actually make things happen…and we certainly do not need a complete NIMROD like Al Gore to make anything happen whatsoever…
Chris
Norfolk, VA, USA
[rmaue: you wanna read some really disgusting comments: head on over to Salon.com: page 2 has a dandy about “firing up some ovens”: ]
@Howarth says:
July 12, 2011 at 3:29 pm
“This would 8 or 10 folding down by now. I would like to follow the money on this one but I can’t. …. so wheres the money?”
AL Gore has never made any significant $$$ from his crackpot climate scheming.
He became an “adviser” to Google in April 2001 – one month after his pal Eric Schmidt was hired as CEO – and one would guess that ex-U.S. Vice President Gore received generous pre-IPO stock options as compensation (Schmidt was granted 14 million options at hiring).
Let’s say it was 100,000 options @ $0.30/share (Schmidt’s price one month earlier). In late 2005 (18 months after the 2004 IPO) those options would have been worth $40 million.
Of course anyone can play with the numbers in this manner because it’s unknown how many options Gore was actually granted, but it certainly was greater than zero, and probably substantial given his status as ex-VP.
Finally, I recently checked the web site for Gore’s company Generation Investment LLP, and noticed that the partner bios for each of the (formerly) 21 partners have disappeared. In fact the 21 partners are not even listed anymore. Is that because 18 of those 21 were ex-Goldman Sachs, hence uncomfortable PR? Or have they quit? Unknown. From 2011 SEC filings it appears that the firm continues to have a couple billion under management. But the web site does look rather shoddy and shopworn for an investment firm of this size, and there seem to be no press releases since last year (2010). Odd.
Ryan Maue – could not those be considered death threats (since no one came out of the ovens alive)? Where is David Appell to denounce these death threats?
Phil;
IIRC, the ovens were just used to get rid of the debris from the showers.
New name, same game, to scare the gullible in his best preacher voice, telling the world that the end is nigh and that the only way to save it is to dig deeply into your pockets and give generously to the church of AGW, my brothers and sisters.
In my July 12, 4:26pm post, I erroneously implied that Rio+20 was shorty after the US Nov. 2012 election.
Rio+20 is in fact on June 4-6, 2012.
The Democratic National convention is early September 2012.
The Republican National convention is late August 2012.
Putting Rio+20 well before the U.S. elections is a colosal blunder by the IPCC in my opinion.
The US government will make no commitment before the election. None of the other G-20 will do much if the USA is waffling.
Finally, the IPCC must be geographically obtuse. Choosing to have a Climate Summit in late fall of a Southern Hemisphere location is asking for another record-breaking cold snap to visit during the keynote address.
What a load of BS! Carbon tax, tax the world and make money!
OH REALLY I GUESS 30,000 SCIENTESTS THAT SIGNED A PETITION AND FELT SO STRONGLY THEY MARCHED ON THE WHITE HOUSE SOME OF THE BEST MINDS IN SCIENCE MIND YOU, DONT COUNT, PLUS THE 100 MILLION PLUS HE MAKES IS BECAUSE HE LOVES US, AND OH ONE MORE THING, CARBON DIOXIDE? ISNT THAT WHAT HUMANS BREATH OUT, STOP BREATHING THEN AL AND DO US ALL A FAVOUR YOU FRUAD. | https://wattsupwiththat.com/2011/07/12/lipstick-on-a-pig-gore-rebrands-climate-outfit/ | CC-MAIN-2020-10 | refinedweb | 8,225 | 72.66 |
If the OGC web standards meet your needs, GeoServer is a great way to get started.
At the beginning of this chapter, we introduced the OpenGIS Consortium's vision for accessible GIS with their Geography Markup Language, Web Feature Service, and Web Map Service standards. If you decide that WFS, WMS, and GML fit your needs, GeoServer is a great place to start.
GeoServer is a Java-servlet-based toolkit that aspires to be the Apache of the geospatial web, designed to make it easy for new users to install and publish their existing geodata. GeoServer is GPL and is available from Sourceforge at. The project's main supporter is a nonprofit called The Open Planning Project, which believes that more accessible data about our environment will help to give citizens a greater say about the planning decisions that affect their lives.
GeoServer is a J2EE application built as a thin layer on top of the excellent GeoTools Java GIS toolkit. This allows it to support a wide variety of data formats, as GeoTools strives to make adding new data formats as easy as possible. In this hack, we'll get a GeoServer instance up and running.
8.4.1. Setting up GeoServer
You will need Java installed on your computer. GeoServer requires at least Version 1.4, which can be downloaded from Sun's web site or from for many Linux distributions. You will also need a Java Servlet Container. There are a variety of open source and commercial Servlet Container implementations; two good ones are Tomcat (by Apache's Jakarta project, freely available to all) and Resin (by Caucho, a commercial company, free for development purposes and hobbyists, and very fast). Both are easy to set up and have built-in web servers, so Apache need not also be installed. To use GeoServer's web-based administration tool, Tomcat 5 is required, as it supports Version 2.0 of the Servlet specification.
You can get the latest version of GeoServer from the download area at. At the time of writing, this is Version 1.2.0. The quickest way to get started is to grab the WAR file: the latest version is always geoserver.war. This can be dropped right into the Servlet Container's webapps directory, without requiring more Java expertise. To build it from scratch requires the ant build tool.
8.4.2. Starting up GeoServer
The .war contains all the code, libraries, and configuration files to run GeoServer. Both Tomcat and Resin have a directory named webapps/ where the .war file should be placed. If the container is already running, it may need to be restarted, but as soon as it is, the .war will expand and GeoServer will load up. The best way to check to see if GeoServer is working is to issue a GetCapabilities request through any web browser. If the container is running on your local machine on the default port, the capabilities request will look like this:
This should return a WFS Capabilities document with sample values. You should also see a couple of FeatureTypes, samples included in the default GeoServer installation. These can be queried with GetFeature and DescribeFeatureType requests. GeoServer also has an integrated Web Map Server; its Capabilities document is queried in a similar way:
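The capabilities requests whose URLs are elided above all follow the standard OGC key-value-pair (KVP) form. As a small sketch of how such requests are assembled — note that the base path http://localhost:8080/geoserver and the wfs/wms endpoints are assumptions for a default local install, not taken from this text:

```python
from urllib.parse import urlencode

def ogc_request(base, service, request, **params):
    """Build an OGC key-value-pair (KVP) request URL."""
    query = {"service": service, "version": "1.0.0", "request": request}
    query.update(params)  # extra or overriding parameters, e.g. typename
    return base + "?" + urlencode(query)

# Assumed default local deployment; adjust host/port for your container.
wfs_caps = ogc_request("http://localhost:8080/geoserver/wfs",
                       "WFS", "GetCapabilities")
wms_caps = ogc_request("http://localhost:8080/geoserver/wms",
                       "WMS", "GetCapabilities", version="1.1.1")
print(wfs_caps)
# http://localhost:8080/geoserver/wfs?service=WFS&version=1.0.0&request=GetCapabilities
```

The same helper extends naturally to GetFeature or DescribeFeatureType by passing, for example, typename="nas:blorg" as an extra parameter.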
8.4.2.1 Configuring GeoServer
Now that GeoServer is up and running, it is time to configure it with your own information and data. GeoServer has a web-based user interface to make this as easy as possible. It is accessed at:
This will show a welcome page, with links to the Capabilities page. It also has a link to the TestWfsPost servlet, which is quite useful for playing with the XML-post portions of the Web Feature Service. WFS requests can be written directly into the text box and issued to GeoServer. A few other pages can also be accessed, such as contact information and basic statistics. But to actually configure GeoServer, you must log in. Attempting to do any administrative-type action or hitting the log in button will take you to the page to log in. To log in, the default username is "admin" and the password is "geoserver."
The admin page shows various stats and allows the releasing of locks. The relevant page for now is Config, shown in Figure 8-2.
Figure 8-2. GeoServer administration
GeoServer configuration is divided into four basic sections: Server, which contains global application settings and contact information; WFS and WMS, which configure their specific settings; and Data, where different data formats are loaded and configured to serve as layers (in WMS) and FeatureTypes (in WFS).
8.4.2.2 Setting global settings and contact information
The first thing to configure is the global settings and contact information in Server, shown in Figure 8-3. Maximum Features allows you to specify a limit on the number of Features that can be returned. GeoServer can now return 15 MB of geographic data while running on a Java Virtual Machine (JVM) with a maximum of 10 MB of memory or less. It has also been tested to handle over 10,000 simultaneous GetCapabilities requests. But the Maximum Features value is still useful for extremely large data sets that clients do not necessarily want to receive all at once.
Figure 8-3. GeoServer configuration
The Verbose field can be set to indicate whether the returned XML documents should have pretty printing, that is, nice indents and spacing for human readability. This can be useful when getting started, but when actually in production, most clients will likely be computer programs that do not care at all about pretty printing; indeed the spaces and carriage returns will just slow down processing slightly. Note that most browsers now will put XML into human-readable form on their own, so you will likely still be able to easily read GeoServer's output if you set Verbose to false.
Other fields can limit the number of decimals returned in GetFeature responses, which can help cut down bandwidth (but also accuracy). The Character Set can be changed to specific encodings, but UTF-8 is recommended. And the Logging Level determines how much information goes to the logs.
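To see why trimming decimals trades accuracy for bandwidth, consider a toy illustration (this mimics the idea, not GeoServer's actual implementation):

```python
def limit_decimals(coords, places):
    """Round (x, y) pairs to a fixed number of decimal places.

    Fewer places means shorter coordinate strings in the GML output,
    at the cost of positional accuracy: for longitude/latitude data,
    5 places is roughly meter-level, 2 places roughly kilometer-level.
    """
    return [(round(x, places), round(y, places)) for x, y in coords]

pts = [(-73.933217, 40.78587), (-73.768722, 40.914404)]
print(limit_decimals(pts, 2))  # [(-73.93, 40.79), (-73.77, 40.91)]
```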
The contact information section is pretty self-explanatory; it will show up in the WMS-capabilities contact information section (the WFS 1.0 specification does not have a matching section, but we anticipate that a future version of the specification will).
8.4.2.3 Applying and saving your changes
To get your new contact information and configurations to show up in GeoServer, you must first hit the Submit button. You can preview the changes by hitting the Apply button on the lefthand side of the screen. The first place this should show up is in the contact link in the upper left corner of the screen, which should be replaced with your name. Clicking on it should take you to a page of the contact information you just submitted. The changes will also be reflected in the WMS Capabilities document; just issue a request like:
The second section should have your updated information.
After previewing the changes with the Apply button, the changes can be persisted to the configuration files with the Save button. Then your changes will be there when GeoServer is next started. If you don't want to submit the changes made after hitting Apply, then hit Load to roll GeoServer back to the state it was in the last time a save was made.
8.4.3. Publishing Your Own Data

After setting up the new contact information, the next step is to make your own data available. The Data page (Figure 8-4) is the place to do this. It is divided into four sections: Stores, which defines the connection parameters to various data formats; Namespace, which configures the XML namespaces available for FeatureTypes; Style, where WMS styles can be added; and FeatureType, which defines the specific FeatureTypes from the available DataStores.

Figure 8-4. GeoServer data management
A DataStore is the GeoTools abstraction for the location of geographic data. It can be a file, such as a shapefile, or a database, such as PostGIS, Oracle Spatial, or ArcSDE. A DataStore will contain one or more FeatureTypes. For databases, a FeatureType is generally a specific table; each row in the table is a feature. The current shapefile implementation contains only one FeatureType per DataStore, but one could imagine other file types where a DataStore is a directory that contains a number of different files. To create a new DataStore, click on the Stores link and then the New button. This will take you to a page like that shown in Figure 8-5.
Figure 8-5. Add a GeoServer DataStore
We'll add a PostGIS data store. [Hack #87] shows you how to get started, and [Hack #88] explains how to convert your tracklogs into an indexed PostGIS database.
In the GeoServer DataStore screen, select "PostGIS Spatial Database" from the drop-down menu and enter a DataStore ID. The ID can be almost anything, but it's good to pick something fairly descriptive. After clicking New, you will get a screen like Figure 8-6. Putting the mouse over the text will pop up help notes describing what the fields are. Enabled should be set to True; choose a namespace from the list (you can add your own namespaces with the Namespace menu) and write a brief description of the DataStore. The next fields are the connection parameters for your PostGIS database. If it is running on the same machine, then Host should be "localhost"; otherwise, it should be the IP address of the computer where PostGIS is running. The default PostGIS port is 5432, and Database will be the name of the database that you set up. After filling in your values, hit Submit. Then apply the changes so that the FeatureTypes will be available.
Figure 8-6. A PostGIS data store in GeoServer
Next go to the FeatureType page and hit New. The tables of your PostGIS database should show up appended to the DataStore ID that you gave your newly created Store. Select one that you would like to make available to the geospatial web.
After hitting New, you will be taken to the FeatureTypeEditor screen. This information is primarily for the Capabilities document; it is the meta-information about the FeatureType. The SRS is probably the most important field; it should be an EPSG number (a serial number allocated by the European Petroleum Survey Group) for the projection of your data. You can also edit the schema information to hide certain attributes and to make others mandatory. GeoServer generates the DescribeFeatureType responses automatically, depending on how you configure these attributes. After editing your feature, submit it and click Apply. Your feature should then show up in the Capabilities documents for WMS and WFS, and you can even query it.
8.4.4. Viewing Your Data with GeoServer's WMS
Though GeoServer started by focusing on the WFS specification, it soon became obvious that an integrated WMS would be a very useful feature to have. Users can simply set up their FeatureTypes in one place and have them available for WMS and WFS. After you've set up your FeatureType, you can issue a WMS GetMap request like the following: &layers=topp:bc_roads &bbox=489153,5433000,529000,5460816 &width=800&height=400 &srs=EPSG:27354 &styles=normal &format=image/png
The bbox parameter specifies a bounding box that specifies the area of the data to be viewed. To figure out the size of the bbox to issue, the easiest thing to do is to issue a WFS request on the same FeatureType. So if you named your FeatureType nas:blorg, then you would perform the following GetFeature request: &typename=nas:blorg
This will return a GML document of a FeatureCollection containing all your features. Every FeatureCollection must have a boundedBy element, and from the gml:Box contained therein, it is easy to figure out the bbox parameter. The top of the response will look something like:
   <gml:boundedBy>
      <gml:Box>
         <gml:coordinates>
            -73.933217,40.78587 -73.933217,40.914404
            -73.768722,40.914404 -73.768722,40.78587
         </gml:coordinates>
      </gml:Box>
   </gml:boundedBy>
The first coordinate (the lower lefthand corner of the box) and the third coordinate (the upper righthand corner of the box) make up the appropriate WMS bbox parameter. So for the previous request, the WMS request will look like: &layers=nas:blorg &bbox=-73.933217,40.78587,-73.768722,40.914404 &width=800&height=400 &srs=EPSG:4326 &styles=normal &format=image/png
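The corner-picking rule just described (the first coordinate gives the lower-left corner, the third gives the upper-right) is mechanical enough to automate. A minimal sketch, applied to the coordinate list shown earlier:

```python
def bbox_from_coordinates(coord_text):
    """Derive a WMS bbox parameter from a gml:coordinates string.

    Follows the rule in the text: the first whitespace-separated
    coordinate pair is the lower-left corner of the box, and the
    third is the upper-right.
    """
    pairs = coord_text.split()
    lower_left, upper_right = pairs[0], pairs[2]
    return lower_left + "," + upper_right

coords = ("-73.933217,40.78587 -73.933217,40.914404 "
          "-73.768722,40.914404 -73.768722,40.78587")
print(bbox_from_coordinates(coords))
# -73.933217,40.78587,-73.768722,40.914404
```

The returned string is exactly the bbox value used in the GetMap request above.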
This should return a rendered map of your data. You can experiment with different image formats, which are advertised in the WMS Capabilities document. There is much more that you can do with GeoServer, including Transactions and Locking, advanced Filter queries, styling with SLD, and more. The GeoServer homepage has much more information, and the developer community is generally quite responsive. The easiest point of entry is the geotools-devel@lists.sourceforge.net email list.
Chris Holmes
Tom Underhill (Microsoft, tomun@microsoft.com), Editor

This document details the responses made by the InkML Working Group to issues raised against the Working Draft published 23 October 2006.

This document of the W3C's InkML Working Group describes the disposition of comments, as of 2 August 2010, on the 2nd Last Call Working Draft of InkML Version 1.0. It may be updated, replaced or rendered obsolete by other W3C documents at any time.

This document describes the disposition of comments in relation to Ink Markup Language (InkML) Version 1.0. The goal is to allow readers to understand the background behind the modifications made to the specification. At the same time, it provides a useful checkpoint for the people who submitted comments to evaluate the resolutions applied by the W3C's InkML Group. In this document, each issue is described by the name of the commentator, a description of the issue, and either the resolution or the reason that the issue was not resolved.

Scope of InkML

Looks like we have removed the para on Application-specific elements. The idea was to mention that applications may define their own tags (we can drop the examples) in addition to what InkML offers.

>>> Yes, I dropped that paragraph because
>>> (a) it seems vacuous -- this is true of any XML specification,
>>> (b) it is cleaner to have <annotationXML> in <ink> or to have <ink> as an annotation on other XML, and
>>> (c) there will be other frameworks for combining things that can and should be encouraged.

Comment: SriG, 10/11/06
We may still want to say something about the scope of InkML with respect to ink applications (see e.g. my response to Greg).

Orphan 'The'

In: there is an orphan sentence fragment 'The' at the end of the paragraph before the example.

Resolution: Removed the orphaned 'The'.

If an application requires particular attribution using Dublin Core, it may put DC XML in an annotationXML block.

traceFormat has associated inkSource?

Reference: <quote cite="">
Be more explicit about binding reported data to intermittent channels

<quote cite="">

Resolution:
• Changed the grammar in section 3.2.1 to include value for (1) regular channels and (2) intermittent channels.
• The quoted documentation has been revised to the suggestion given by the commenter.

Disallow all qualifiers in intermittent channels

<quote cite="">

Resolution: This issue is related to the previous issue. Only the grammar for the 'regular' channel will have the qualifiers.

Resolution: Changed the definition of 'Width' to "It is the diameter of the larger circle that can be inscribed within the trace locus."

Extra 's' in values

In: the quoted text "Additionally, wsp may occur anywhere except within a decimal or hex and must occur if required to separate two values." There is a redundant terminal 's' in the last word.

Resolution: Removed the extra 's'.

Roll-your-own BNF – NOT

If the BNF you are using is a subset of the EBNF used in XML, please say so. Compare with

Resolution: Changed the spec to explicitly state that the EBNF used in the InkML spec is the subset of EBNF defined by the XML spec.

'undefined' is not a legal value of xsd:decimal

You need to roll this out in different prose. When the attribute is not present, no information is given as to the time that this ink appeared. 'Default' has a slightly different semantic than this. If there is a default, it is a legal value that holds even when not stated. Suggestion: for @duration, let the default be zero (data valid at one time); for @timeOffset, say Default: none.

Resolution: Changed the default value of @duration and @timeOffset from Unknown to None.

Should all context elements go into definitions even in streaming mode?

Contents are missing many elements such as <timestamp>, <canvasTransform>, etc.

>>> Backed out. If we want definitional things, we now need to put them in a <definitions>. The only definition-like thing is <context> and that is used to switch context for streaming.

Comment: SriG, 10/11/06
Comment: SriG, 10/11/06 In streaming mode, we may also want to support definition of new timestamps or brushes or canvasTransformations on the fly. We can address this by packing these into definitions blocks and sending multiple of these – but then we can do the same for context. So should we restrict context to being inside definitions as well?

All context elements (except context) can only be children of either <context> or <definitions> (regardless of streaming or archival). That's what the spec says currently, but the first couple of paras under section 7.2 in the spec do not reflect this and will need to be corrected accordingly. It is not efficient for implementations' storage, but we can't allow streams that are invalid to the XML ID/IDREF rules, and changing the schema to not use ID/IDREF is not feasible at this time. :-)

Added an explicit statement to the spec saying "mathml" is declared as the MathML namespace for the MathML schema.

Removed the m: <bind type="setvar" target="X" variable="Q" /> <math mlns="[ ]"> missing x. Changed mlns= to xmlns=.

Not necessary to clarify that MathML can appear in full inside an annotationXML block. Even InkML could appear in annotationXML. The inclusion of an XSD for InkML will help clarify this: the definition of annotationXML is <xsd:any/>.

Default units for channels

Do we want to have *default* units for all channels?

>>> TODO: Perhaps. Needs discussion.

As per the spec, it is required only for the predefined "numerical" channels: X, Y, Z, C – "dev"; F – newton; OTx, OTy, OA, OE and OR – degrees; W – millimeter (mm); T – millisecond (ms). Note: The meaning of the 'dev' unit is device dependent. In other words, a pressure level of 10 may not be equal to a pressure level of 10 measured in another type of device.

Mappings in channel elements are redundant

mapping - NEEDS DISCUSSION. >>> TODO: This can be added at CR.

Comment: SriG, 10/11/06 It seems that we no longer need mapping in traceFormat>channel.
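The channel-to-default-unit list given in the resolution above can be captured in a small lookup table. This is an illustrative helper, not part of the specification; the helper's name is this sketch's invention, while the channel names and units come from the resolution text:

```python
# Default units for the predefined numerical channels, as listed in the
# resolution above. Application-defined channels have no default unit.
DEFAULT_UNITS = {
    "X": "dev", "Y": "dev", "Z": "dev", "C": "dev",
    "F": "newton",
    "OTx": "deg", "OTy": "deg", "OA": "deg", "OE": "deg", "OR": "deg",
    "W": "mm",
    "T": "ms",
}

def default_unit(channel):
    """Return the default unit for a predefined channel, or None for an
    application-defined channel (hypothetical helper)."""
    return DEFAULT_UNITS.get(channel)
```

Note the caveat from the resolution: "dev" is device dependent, so two values in "dev" from different devices are not comparable.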
Originally channel used to be part of captureDevice, and mapping used to describe the transformation from "raw device channels" to the device's "traceFormat". Now we do not represent the raw device channels. We may need that for applications under the following scenarios.

Scenario 1: To map the 'raw' data to data in 'standard units'.
• The InkSource would have a "traceFormat" element defining the structure of the 'raw' data captured from the device. Here the 'channel' elements do not have any mapping defined. Apart from this traceFormat, the 'inkSource' can have another 'traceFormat' in which the channels have a mapping element to map the raw data to some standard 'metric' unit equivalent, with any normalization/approximation required. Since the device manufacturer is very knowledgeable about the mapping of the data in 'raw' format to standard units, he/she will provide the 'inkSource' definition with those two traceFormats.
• The InkSource data explained in the previous point would be available in 'definitions'; it might require a reference outside the current document to point to the InkSource definition available at a common URL hosted by the device manufacturer or W3C or any independent vendor. Then the ink data's current context can point to the inkSource's traceFormat that does not have a mapping if it would like to have the trace data in 'raw' format. Alternatively, it can point to the second traceFormat that has a mapping defined, to encode the trace data in standard units.

Scenario 2: The 'traceFormat' of the 'traceView' element is different from the referred traceData.
• Will it be useful to define the traceFormat of the 'traceView' to provide a mapping from the referred data format to the traceView format?

Added an Appendix containing a link to the InkML XML Schema.

Figures

I have not updated the figures for the LCWD that will be published on Monday, but there is some room for improvement there.
Working on the figures will take some time and I'd like to get an assistant to start on it well in advance. Therefore I would like to ask for comments now. The figures I am referring to are Figures 2 and 3, about 20% of the way down, giving the angle definitions. I believe they should be updated to fix three things. 1) Angles renamed to match those in spec. This is not critical since the channel names are the same as the labels in the picture but with O in front. A -> OA E -> OE Tx -> OTx Ty -> OTy R -> OR 2) To show the xy axes in the default orientation of the spec. Now they are in the usual orientation of the Cartesian plane. However the standard orientation of the spec has y increasing down the page. 3) Now, the angles can only specify pens above the surface. For completeness, we should be able to specify any geometric orientation. (In principle we could have writing on a virtual or glass surface with pens above and below.) I would propose to extend the ranges of the angles to allow this in the obvious way: OE ranges from -90 to +90 OTx, OTy range from -180 to +180 First of all, is there any objection or proposed alternatives to these? Second, does anyone know where these figures came from? Who drew them or who has the sources? It would be a pity to have to redraw them from scratch. * the abstract says that InkML is for "in the W3C Multimodal Interaction Framework as proposed by the W3C Multimodal Interaction Activity"; it looks to me that it is not restricted to that usage, so I would remove that phrase. * MathML 2.0 is used as part of the possible content of the <mapping> element, but MathML is not part of list of references in Appendix C Added: [MATHML2] Mathematical Markup Language (MathML) Version 2.0 (Second Edition), David Carlisle, Patrick Ion, Robert Miner, Nico Poppelier, Editors, W3C Recommendation, 21 October 2003, . Latest version . to Appendix C. 
* InkML defines effectively a subset of MathML — has this been discussed/approved by the Math Working Group?

Yes, we have been in contact with the MathML WG.

Data type of 'C' channel

1. Do we support hex/oct values for channels? E.g. C(olor) = #3F2EAB
2. May need to support color as a single channel rather than as multiple channels (i.e. add "C" as a predefined channel). For example, if the processing is in grayscale, the value may be an integer in [0,255].

>>> Added "C" channel. Added hex literals to grammar. What should the "type" of the channel be specified as (only integer, boolean and decimal are supported)? XML schema's integer datatype does not seem to support hex values.

The data type of the C channel can be "integer". It can support only hex values of the form #3F2EAB. Grayscale values can be supported as #232323. Sophisticated users can use Ck channels for this.

Color type definition in SVG: Summary: A <color> is either a keyword (see Recognized color keyword names [1]) or a numerical RGB specification.
1. Three digit hex - #rgb eg: #6CF (equivalent to #66CCFF, rgb(102, 204, 255))
2. Six digit hex - #rrggbb eg: #FFD700 (i.e. a golden color)
3. Integer functional - rgb(rrr, ggg, bbb) /* integer range 0 - 255 */ eg: rgb(255, 165, 0) (i.e. an orange)
4. Float functional - rgb(R%, G%, B%) /* float range 0.0% - 100.0% */ eg: rgb(12.375%, 34.286%, 28.97%)
5. Color keyword: 'single word' string enumeration values eg: 'darkgreen', 'red'

Reference: [1] Recognized color keyword names - String enumerations for color values

What about enumerations?
E.g. C(olor) = "DarkBlue"

<inkSource>
  <activeArea height="6" width="8" units="inch"/>
  <sampleRate uniform="True" value="200"/>
  <traceFormat>
    <channel name="X" type="integer" min="0" max="5080*8" units="dev"/>
    <channel name="Y" type="integer" min="0" max="5080*8" units="dev"/>
    <channel name="F" type="integer" min="0" max="1024" units="dev"/>
    <channel name="C" type="integer" min="0" max="255" units="dev"/>
    <channel name="OTx" type="integer" min="-60" max="60" units="deg"/>
    <channel name="OTy" type="integer" min="-60" max="60" units="deg"/>
  </traceFormat>
  <srcProperty name="weight" value="100" units="g"/>
  <channelProperties>
    <channelProperty name="resolution" channel="X" value="5080" units="1/inch"/>
    <channelProperty name="resolution" channel="Y" value="5080" units="1/inch"/>
    <channelProperty name="accuracy" channel="X" value="0.01" units="inch"/>
    <channelProperty name="accuracy" channel="Y" value="0.01" units="inch"/>
    <channelProperty name="resolution" channel="F" value="1" units="dev"/>
    <channelProperty name="resolution" channel="C" value="1" units="dev"/>
    <channelProperty name="resolution" channel="OTx" value="1" units="deg"/>
    <channelProperty name="resolution" channel="OTy" value="1" units="deg"/>
  </channelProperties>
</inkSource>

Explanation: The unit and the value of the 'resolution' channelProperty provide the final value of the channel in a standard unit. When the channel's unit is "dev", the value of the channel in a standard unit is obtained by multiplying the channel value with the 'resolution' channelProperty value of the channel, provided the 'resolution' itself is not in the 'dev' unit. If the resolution is in 'dev' then the final interpretation of the value of the channel is application/device specific.

Grammar allows invalid units to be specified

The grammar is too broad; it allows undesirable values such as "1/mm/in". A more serious concern is the complexity of the parser – if we want to validate these units.
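As a non-normative illustration of the explanation above: reading value="5080" units="1/inch" as "5080 device counts per inch", a raw X reading converts to inches by dividing by the count. The function name, and the choice to divide rather than multiply (i.e. interpreting "multiplying with the resolution" as multiplying by 1/5080 inch per count), are this sketch's assumptions:

```python
def dev_to_standard(raw_dev, counts_per_unit):
    """Convert a raw 'dev' channel reading to the resolution's unit.

    counts_per_unit: the resolution channelProperty value, interpreted
    here as device counts per standard unit (e.g. 5080 per inch, from
    value="5080" units="1/inch" in the example above).
    """
    return raw_dev / counts_per_unit

# With the X resolution above, a raw X of 2540 dev is half an inch.
half_inch = dev_to_standard(2540, 5080)
```

When the resolution is itself in 'dev', this conversion does not apply and the interpretation stays application/device specific, as the explanation says.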
Comment: Muthu, 04/02/07 If it is possible to get a finite list of valid units, then it could be easily handled using xsd:enumeration to define all those valid 'units'.

The complex grammar that supports all combinations of units is too heavy on parser implementations, so we would like to reduce its complexity by supporting only standard units and their 'reciprocals'. Example: 'inch' and '1/inch'. An example of valid units would be nice to include.

* there is no conformance section

A conformance section 8 has been added.

* the grammar for the <trace> data quotes the quote sign between quotes (in the difference_order production); that doesn't sound right; while ISO EBNF is listed in the references section, it's not linked/highlighted from the EBNF usage in the spec

Added EBNF link and changed the grammar to: difference_order ::= ("!" | "'" | '"') That's apos-quote-apos instead of quote-quote-quote. This conforms to the EBNF ISO spec.

* the examples in 3.2.1 and in 6.3.1 use "<trace id=...>" instead of "<trace xml:id=...>"; likewise for the <canvas> example in 5.1 (I suggest validating the examples with the XML Schema to ensure they're correct)

Changed all instances of id= to xml:id=. Also removed whitespace before and after '=' in the xml examples.

* the example in 4.2.1 is not well-formed (<channelProperty> is not closed)

Closed the <channelProperty/> element.

Ambiguity when both timestamp and time channel are present

May want to mention what happens when the trace specifies a timestamp as well as a time channel. >>> TODO: This can be added for CR >>> This is related to respectTo=…

No ambiguity exists here. The value of the time channel is captured with respect to the 'timeOffset' attribute, which is an offset to the 'respectTo' attribute of the Time channel defined in the associated traceFormat of the trace.

Ambiguity when both trace type and tip state are present

Ditto for type as well as tip state (S).
Tip state data will override the traceType data. We should specify this as an example of the general principle that the local (trace data) overrides the global (trace).

Some context elements may be referred to directly but not others

It bothers me (here and elsewhere) that we allow "direct" references to some of the "components" of context like brush, but not other components like traceFormat.

>>> I have cut it back to allow only contextRef and brushRef. brushRef is included because we expect to change brushes often. Not so for timestamps, etc.

It would be good to include this explanation, and also to explain what the user needs to do to specify a different traceFormat or timestamp for a trace. I assume he would have to specify a different context (either a new one or one derived from another) with that traceFormat and refer to it.

The signature example may be a good case for using the 'S' channel. >>> TODO: This can be added for CR. Can you supply an example?

* that same example uses an undefined <resolution> element — I guess <channelProperty name="resolution"> was meant?

Changed it to: <channelProperty channel="F" name="resolution" value="1024" units="dev"/>

* that same example has a <traceFormat> element with an href attribute — that's not part of the spec either

It is an error. I changed the example to be <traceFormat>…</traceFormat>. I found another error in 6.3.1 where <traceView> is using href. I changed it from: <traceView href="#tg1"> to <traceView traceDataRef="#tg1">

* the activeArea example in 4.2.4 has a trailing semi-colon

Removed the semicolon.

* ? There are already implementations that use "#DefaultCanvas"

* the example of 6.1.2 is not well-formed (<bind> is not closed)

Closed the <bind/> element.
* the XML schema requires at least one inkSource element per <context> element, when the spec doesn't (it's missing a <brushProperty name="width" value="5"/> <brushProperty name="auth-name" value="John"/> <!-- user defined --> </brush>

Conclusion: By supporting predefined names for brush properties, we achieve some standardization: these properties will be identified by name, which helps in creating compatibility at the application level. In the defaultBrush definition, we can provide default values for them.

default values:

Although a usage example of the respectTo attribute is given as part of the time channel description (see 3.1.7 Time Channel), it is unclear what this #ts1 relative URI corresponds to.

Rephrased to clarify that the respectTo attribute is the URI of a <timestamp> for time channels, but application defined for application defined channels.

Typo

The example has the following line, "The following example means that X += 10 if 45 ‰¤ E < 50, X += 9 if 50 < E < 55, etc.", which should be as follows: "The following example means that X += 10 if 45 <= E < 50, X += 9 if 50 <= E < 55, etc." Here the interpolate method is 'floor'.

Changed "‰¤" to "<=".

The example has:

<mapping xml: <bind target="X"/> <bind source="E"/> <table> 45 10, 50 9, 55 8, 60 7 </table> </mapping>

1. <mapping> does not have 'apply' and 'lookup' attributes; they are attributes of the <table> element.
2. The 'column' attribute of <bind> is required for lookup table bindings.

The example should be changed to:

<mapping xml: <bind target="X" column="2"/> <bind source="E" column="1"/> <table apply="relative" interpolate="floor"> 45 10, 50 9, 55 8, 60 7 </table> </mapping>

Traces in Definitions

> > 1. <definitions> element (described in section 6.2.1) includes
> > trace and traceGroup. Do we need them there?

Yes. This is like the off-screen memory of a display device. It allows ink to be included in an InkML fragment for use in <traceView>s.
Otherwise in many applications the ink referred to by <traceView>s would have to be in separate documents. It is allowed to define traces in <definitions>, in which case they do not represent the live ink data in that document and are merely available for reference by other trace elements.

It is often useful to record the sine of this angle, rather than the angle itself, as this is usually more useful in calculations involving angles. The <mapping> element can be employed to specify an applied sine transformation. At this point of the specification, it is unclear in which way it works. This is another example of back and forth reading (see 3.1.4 Orientation Channels).

Yes. The context of going for a <mapping> is explained in the last paragraph of 3.1.3, i.e. before 3.1.4.

Scope of <definitions> in Streaming Scenario

Assumption: The definition elements have session scope. Issue: The first instance of ink data exchange contains an entity with a unique id defined. The subsequent ink data exchange also contains an entity with an id the same as the previous entity's, which may or may not be of the same entity type. How should this be handled?

The issue asks about the scope of <definitions> created in multiple XML fragments generated in a streaming use case. The scope is the complete <ink> document, which is generated by accumulating multiple xml fragments; in other words, the <definitions> are live across multiple fragments. The issue then talks about the requirement that an InkML markup generator which creates the XML as multiple fragments in a streaming use case has to take care of giving "unique ids" to the elements it places in <definitions>. It can be closed as it is not related to the scope of the InkML specification.

relativeTo -> respectTo

- rename: channelDef --> channel, relativeTo --> respectTo (in text and example) >>> Fixed

Some references to "relativeTo" are still present in the text and example.
Changed the remaining instances of 'relativeTo' to 'respectTo'.

Attribute "type" on <annotation> element

Could you have a trace that had two types? E.g. for a diagram with text labels, would it be possible to label it both as "text" and "diagram", or would you represent that use case another way?

Muthu, 04/02/07 The <annotation> element definition is application specific. The text/diagram type may be treated as equivalent to the diagram type.

The time channel can be specified as either a regular or intermittent channel. When specified as a regular channel, the single quote prefix can be used to record incremental time between successive points.

Again, this paragraph uses a feature that has not been introduced/detailed yet (see 3.1.7 Time Channel).

Added a reference to the Time Channel section.

The appearance of a <traceFormat> element in an ink markup file both defines the format and installs it as the current format for subsequent traces (except within a <definitions> block).

At this point of the document, it is unclear whether or not a <traceFormat> element is allowed as a child of the <ink> element. Indeed the definition of the <ink> element specifies that it can contain <definitions>, <context>, <trace>, <traceGroup>, <traceView>, <annotation>, or <annotationXML> elements but no <traceFormat> elements. At this point, the reader knows how the <traceFormat> markup has to be written but does not have any clue of where it belongs (see 3.1.9 Specifying Trace Formats).

Added a reference to the Specifying Trace Formats section.

If no <traceFormat> is specified, the following default format is assumed: <traceFormat xml:id="DefaultTraceFormat"> <channel name="X" type="decimal"/> <channel name="Y" type="decimal"/> </traceFormat>

Does it mean that the "#DefaultTraceFormat" relative URI can be used as part of a traceFormatRef attribute, or is it just an example of how the implicit trace format definition would look (see default trace format)?

Muthu, 04/02/07 Yes.
It is explicitly specified in the case of Context in Section 4.5 as given below: The default context may be explicitly specified using the URI "#DefaultContext".

The type attribute of a <trace> indicates the pen contact state (either "penUp" or "penDown").

The "type" attribute overlaps the definition of the reserved S channel for tip switch state (touching / not touching the writing surface). Now that the "type" attribute exists, a <trace> element can have its "type" attribute set to "penDown" while the actual trace data contains information about the tip being up, or vice-versa! In such a case, I guess that it is up to the application to decide how to interpret the InkML file, which by definition breaks interoperability. The default trace format could have defined an intermittent boolean S channel that defaults to true == pen down:

The reader does not know the answer until reading the 4.3.1 <brush> element section:

The same question goes for a continuation trace that has the "continuation" attribute set to "begin" while the "priorRef" attribute is defined. It really seems that the <trace> element model is not robust enough, in the sense that it permits undefined or application specific behavior.

As per the guideline, the value of the S channel will override the pen type value.

Typo: The repetition of the word "giving". href = xsd:anyURI A reference to XML content giving giving the annotation. Required: no, Default: none

Changed "giving giving" to "giving".

Also, the specification should give precise use cases for continuation traces. In archival mode, splitting a whole trace into continuation traces has no use (and it would not be difficult to produce correct markup, see above). But continuation traces are needed in the case of collaborating applications (streaming mode), e.g. user 1 is writing on device A and the digital ink has to be dynamically replicated on the screen (device) of user 2.
Streaming mode side note:

<trace id="t1" type="penDown" continuation="begin" ...>...</trace>
<trace id="t2" continuation="middle" priorRef="#t1" ...>...</trace>
<trace id="t3" continuation="end" priorRef="#t2" ...>...</trace>

This would be the typical markup produced when in streaming mode. In the use case described above, that is user 2 seeing what user 1 is writing, if you want a quick and smooth ink rendering on user 2's screen, then the amount of actual trace data will definitely be VERY SMALL compared to the amount of markup being produced, even when additional attributes are not taken into account. Also, I am not very familiar with XML streaming in general, but I imagine that something like SOAP adds its own overhead.

<trace id="toBeSplit"> 1125 18432,'10 '22,'5 '7,'8 '23,'10 '8 F,'5 '7,'2 '11,'7 '5 T,'10 '7 </trace>

is likely to be split into

<trace id="t1" type="penDown" continuation="begin">1125 18432</trace>
<trace id="t2" continuation="middle" priorRef="#t1">'10 '22</trace>
<trace id="t3" continuation="middle" priorRef="#t2">'5 '7</trace>
<trace id="t4" continuation="middle" priorRef="#t3">'8 '23</trace>
<trace id="t5" continuation="middle" priorRef="#t4">'10 '8 F</trace>
<trace id="t6" continuation="middle" priorRef="#t5">'5 '7</trace>
<trace id="t7" continuation="middle" priorRef="#t6">'2 '11</trace>
<trace id="t8" continuation="middle" priorRef="#t7">'7 '5 T</trace>
<trace id="t9" continuation="end" priorRef="#t8">'10 '7</trace>

which has little chance of being very Bluetooth friendly.

This seems to be an implementation issue. Yes, the application in the above example adds a lot of InkML markup overhead on the wire and demands high processing power; the user may find a different solution to solve the problem. We do not need to modify the specification to resolve this issue.

If a continuation attribute is present, it indicates that the current trace is a continuation trace, i.e.
its points are a temporally contiguous continuation of (and thus should be connected to) another trace element. The possible values of the attribute are: begin, end and middle.

As far as I can recall the discussions that took place during the face to face meeting in NYC, "begin", "end" and "middle" are to be used when wanting to render the traces with the help of splines: obviously your computations differ depending on whether it is the beginning of the trace, the middle or the end. In fact it is all biased by the fact that InkML is supposed to be parsed by a standard DOM or SAX XML parser. With such a predicate, you cannot avoid having to cut traces into smaller pieces of valid markup. As a consequence, InkML is somehow forced to introduce the concept of data packets at the OSI application layer level, while existing network protocols like TCP/IP already take care of it. With a custom InkML parser one could imagine doing:

"<trace>1125 18432," message is sent by user 1 and received by user 2. -- this is the beginning of the trace because there is a <trace> markup.
"'10 '22, '5 '7," message is sent by user 1 and received by user 2. -- this is the middle of the trace because only coordinates are sent.
...
"'10 '7</trace>" message is sent by user 1 and received by user 2. -- this is the end of the trace because there is a </trace> markup.

• This seems to be an implementation issue. Yes, the application in the above example adds a lot of InkML markup overhead on the wire and demands high processing power; the user may find a different solution to do the same.
• The spec may attempt to reduce the amount of markup generated in this case. One idea is to replace the 'continuation' attribute with a Boolean attribute to indicate the 'end' fragment. Then all the 'trace' elements except the last one need not have the 'continuation' attribute.
• The implementation specific discussions can be addressed in 'tutorials' like documents.
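The prefixed number syntax used in the split-trace examples above can be illustrated with a short decoder. This is a non-normative sketch of the difference semantics ("!" explicit value, "'" first difference added to the previous value, '"' second difference added to a running delta), assuming a prefix stays in effect for subsequent unprefixed numbers; the single-letter tip-state tokens (F, T) that appear in the examples are not handled here:

```python
def decode_channel(tokens):
    """Decode one channel of an InkML trace into absolute values.

    tokens: the channel's numbers in stream order, e.g. ["1125", "'10"].
    Prefixes: "!" explicit value, "'" first difference, '"' second
    difference; the most recent prefix persists for unprefixed numbers.
    """
    value, delta, mode = 0, 0, "!"   # start in explicit mode
    out = []
    for tok in tokens:
        if tok and tok[0] in "!'\"":
            mode, tok = tok[0], tok[1:]
        n = int(tok)
        if mode == "!":              # explicit: reset value and delta
            value, delta = n, 0
        elif mode == "'":            # first difference ("velocity")
            delta = n
            value += delta
        else:                        # '"': second difference
            delta += n
            value += delta
        out.append(value)
    return out

# X values of the toBeSplit trace above (t1..t9):
xs = decode_channel(["1125", "'10", "'5", "'8", "'10", "'5", "'2", "'7", "'10"])
```

Decoding the fragments t1..t9 in priorRef order yields the same point sequence as decoding the unsplit trace, which is what the continuation mechanism relies on.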
Note: the trace syntax defined here makes the InkML file sizes (as well as the XML DOM trees) smaller while keeping the benefits of XML. However some applications, for instance those concerned with transmitting InkML documents across the Web, might require even smaller file sizes. It is thus recommended (but not required) that InkML implementations support the gzip standard compression scheme (see [RFC1952]).

I suppose you recommend supporting whole gzip compressed ".inkml.gz" files. One Anoto pen manufacturer decided to use an XML based file format where the ink trace markup is compressed using a zip compression scheme then encoded in Base64 before being re-injected into the XML document: you reduce the amount of data by compressing but you increase it afterwards because of the Base64 encoding ... (see 3.2.1 <trace> element contents).

Comment: The recommendation of the spec is a valid idea but may not be optimal. We need further investigation, looking at how this requirement is solved in general and whether there is an optimal technique for it. We may look into how SOAP XML requests are handled in Web Services.

Removed the "Note:" paragraph that suggests the use of 'gzip' from the specification. Added the resolution given in the message as an appendix of the specification document.

contextRef = xsd:anyURI The context associated with this traceGroup. Required: no, Default: none
brushRef = xsd:anyURI The brush associated with this traceGroup. Required: no, Default: none

The <traceGroup> element also defines "contextRef" and "brushRef" attributes: same question as for the <trace> element, what should we do when "contextRef" and "brushRef" are both defined but the "brushRef" or the <brush> of the corresponding <context> differs from the <traceGroup>'s "brushRef"? Actually the answer to this question seems to be given by a usage example in the 7.1 Archival Applications section.
If a trace includes both brushRef and contextRef attributes, the brushRef overrides any brush attributes given by the contextRef (see 4.3.1 <brush> element section). Does it apply to <traceGroup> elements (see 3.3.1 traceGroup element)?

Muthu, 04/02/07 Yes. The documentation has to be updated to explicitly state the order of precedence: brushRefs override any brushes contained in a contextRef; brushRefs and contextRefs on <trace>s override any specifications on <traceGroup>s.

The <traceView> element is used to group traces by reference from the current document or other documents. If a traceDataRef attribute is given, then a to and/or from attribute may be given. Together, traceDataRef, from and to refer to another element and select part of it. A traceDataRef attribute may refer to a <trace>, a <traceGroup> or another <traceView>.

I do not really understand why the <traceView> element is itself a group. Why can't a <traceView> element be part of a <traceGroup>??? Why isn't a <traceView> just a view instead of being view+group?

<traceView xml: <traceView traceDataRef="#L1" from="2"/> <traceView traceDataRef="#L2" from="2" to="4:1:1"/> </traceView>

Why can't it be:

<traceGroup xml: <traceView traceDataRef="#L1" from="2"/> <traceView traceDataRef="#L2" from="2" to="4:1:1"/> </traceGroup>

(see 3.3.2 <traceView> element contents and 3.3.2 <traceView> element examples).

Yes, it would have been a possible design to allow traceGroups to contain traceViews and thus require traceViews only as leaf elements. We could even have modified trace to have a traceDataRef and do away with traceView altogether. This would make handling traces and traceGroups more complicated, however.

The context element both provides access to a useful shared context (canvas) and serves as a convenient agglomeration of contextual attributes.
The context itself is not useful but at least it is convenient (see 4.1 The <context> element).

The purpose of the <context> element explained in the specification is clear. No conceptual change is required in the documentation as action on the issue raised.

xml:id = xsd:ID The unique identifier for this context. Required: no (yes for archival InkML), Default: none

I'm curious about the conditional part, how do you enforce this? How do you actually know the InkML document is intended for archival mode?

Comment: Muthu, 04/02/07 There is no way to identify the type of an ink document as archival/streaming, and hence it is not possible to enforce this rule. We may have to add a new attribute called 'mode' to the ink document??

Set the 'Required' constraint to 'no' on @xml:id (remove the exception for the archival case).

canvasRef = xsd:anyURI The URI of a canvas element for this context. Required: no, Default: "DefaultCanvas", or inherited from contextRef
traceFormatRef = xsd:anyURI A reference to the traceFormat for this context. Required: no, Default: default trace format, or inherited from contextRef

>> Is it "default trace format" or "DefaultTraceFormat"? I already asked the question but, could "#DefaultCanvas" and "#DefaultTraceFormat" relative URIs be used as references elsewhere, although they are implicitly defined?

Comment: Muthu, 04/02/07 Yes. It is explicitly specified in the case of Context (Section 4.5) as given below: The default context may be explicitly specified using the URI "#DefaultContext".

inkSourceRef = xsd:anyURI A reference to the inkSource for this context. Required: no, Default: default capture device, or inherited from contextRef

>> What about default capture device?

It is a documentation issue; it will be fixed. Changed "DefaultCanvas" to "#DefaultCanvas" and "default trace format" to "#DefaultTraceFormat".

Each constituent part of a context may be provided either by a referencing attribute or as a child element, but not both.
Thus it is possible to have either a traceFormatRef attribute or a <traceFormat> child element, but not both.

Can an XML schema enforce this (see 4.1 The context element)?

Proposal 1: Allow defining both the referencing attribute and the child element. The ambiguity can then be resolved by a guideline that the child element value overrides the value derived from the referencing attribute, so the existence of both is equivalent to having the child element alone.

Resolution: Stephen: Normative English text that says "which one should be used" is needed.

Proposal 2: Remove the referencing attributes and always allow only child elements. Sample usages are given below.

Example 1: To refer to an already defined brush. At present, it is achieved by: <context id="ctx2" contextRef="#ctx1" brushRef="#brush1"/>. It would be changed to: <context id="ctx2" contextRef="#ctx1"> <brush brushRef="#brush1"/> </context>.

Example 2: To define a new brush for the context: <context id="ctx2" contextRef="#ctx1"> <brush id="brush2"/> </context>

Done. Normative clarifying language added. We continue to allow, e.g., both a brushRef and a brush child of context, but say which takes priority.

The <inkSource> block will often be specified by reference to a separate xml document, either local or at some remote URI. Ideally, <inkSource> blocks for common devices will become publicly available.

Do you plan to set up a repository on the W3C's website (see 4.2.1 <inkSource> element)?

The statement will be removed, because the W3C (WG) does not have a plan to set up such a repository as of now.

The <sampleRate> element captures the rate at which ink samples are reported by the ink source. Many devices report at a uniform rate; other devices may skip duplicate points or report samples only when there is a change in direction. This is indicated using the uniform attribute, which must be designated "false" (non-uniform) if any pen-down points are skipped or if the sampling is irregular.
uniform = xsd:boolean
Sampling uniformity: Is the sample rate consistent, with no dropped points? Required: no, Default: unknown

value = xsd:decimal
The basic sample rate in samples/second. Required: yes

So, when the ink source has an irregular sampling rate, the sampling rate "value" is still required BUT the "uniform" attribute has to be set to false? Isn't this contradictory? What is the purpose of giving a sample rate value when you know in advance that sampling is irregular? Also, when you write <sampleRate value="200"/> then the "uniform" attribute is by default set to "unknown", but it IS known to be uniform because a value is given.

Comment: Greg, 12/20/06
As a consequence, I suggest dropping the "uniform" attribute and having a special sampleRate value in case the source is non-uniform, something like a negative (<= 0) value meaning unknown. In both cases, that is in the case of the actual specification or in the case of my new proposal, I am really wondering whether or not an InkML schema is capable of handling it (see 4.2.2 <sampleRate> element).

Comment: Muthu, 04/11/07
The <inkSource> element is defined to have a "zeroOrOne" <sampleRate> child element. So we should use the <sampleRate> element only when there is a uniform sampling rate and skip it when the sample rate is not uniform. So no specific attribute is required to indicate the uniform property. Of course, it is easy to enforce this solution in the schema definition.

The Time channel should be used to get time information when the sampling rate is non-uniform. When the sampling rate is not uniform, the value attribute of the <sampleRate> element should be the maximum sampling rate. The default value of the 'uniform' attribute should be true.

The <latency> element captures the basic device latency that applies to all channels, in milliseconds, from physical action to the API time stamp. This is specified at the device level, since all channels often are subject to a common processing and communications latency.
Does this <latency> element come from the ancient times when the first specification draft was crafted up, or is there any practical use of this element? And what if it's not the case? What if latency differs from one channel to another? If the knowing of the latency is of any importance, then you would end up having defined useless markup (see 4.2.3 <latency> element). As a consequence, why not bring latency information down to the <channelProperty> element (see 4.2.7 channelProperty element)?

No Change. Note: Channel Property should appear as a child to channel. It saves space, less overhead on the parser/interpreter in terms of associating the property to the channel.

size = xsd:string
The active area, described using an international paper size standard such as ISO216. Required: no, Default: unknown

This is a good example of a typical drawback of InkML. You can refer to an international paper size standard such as ISO216 if it pleases you, however do not expect this information to be accurately used by any other application than yours. If there is nothing to enforce or to define the meaning of the attribute with precision, then why not drop it in favor of a custom <annotation> element? In the end, I think that such design choices break interoperability (see 4.2.4 <activeArea> element).

Change in Spec: The active area, described using an international ISO paper sizes standard such as ISO216. Width and Height attributes should be made mandatory. The Units attribute will be in one of the standard units.

Note: Channel Property should appear as a child to channel. It saves space, less overhead on the parser/interpreter in terms of associating the property to the channel.

At most one of the attributes time, timestampRef or timeString may be given. The time thus given, plus the value of the attribute timeOffset, gives the time value of the timestamp. Again, can this be enforced by a schema (see 4.4.1 <timestamp> element contents)?
Comment: Muthu, 04/11/07
We may consider the following change in the structure:
1. Remove the attributes time and timeString. Let the value be given in the content with the type <xs:union... So the content can be either of type time or of type dateTime string.
2. Resolving ambiguity: When both the timestampRef attribute and content are used, the content value gets precedence over the timestampRef attribute. In other words, the timestampRef attribute should be ignored.

Specified priority order on time, timeString and timestampRef.

The default context may be explicitly specified using the URI "#DefaultContext". It is explicitly specified that using the URI "#DefaultContext" is legal, however it is not specified for "#DefaultCanvas" or "#DefaultTraceFormat" (see 4.5 The Default Context). The two missing URIs should be explicitly mentioned in the spec.

Documentation has to be added for the same.

A <canvas> element must have an associated <traceFormat>, which may either be given as a child element or referred to by a traceFormatRef attribute. The coordinate space of the canvas is given by the regular channels of the trace format and any intermittent channels are ignored.

When both are defined, which one prevails over the other? I would like it to be specified at this place of the document.

Either of them could be specified. If both of them are specified, then the child element overrides the attribute.

For certain classes of mappings, the inverse mapping may be determined automatically. These are mappings of type "identity", "affine" (for matrices of full rank), "lookup" (univariate, with linear interpolation), and "product" mappings of these. In this case, it is possible to specify that an inverse should be determined automatically by giving only the forward transform and specifying a value of true for the invertible attribute. When the inverse transform cannot be computed numerically, the inverse <mapping> has to be provided.
In such a case, does the "invertible" attribute need to be set to "true" (see 5.2 element contents)?

<canvasTransform>
  <mapping type="unknown"/>
  <mapping mappingRef="#map001"/>
</canvasTransform>

The example provided lets the user think this is not the case. However, what if two <mapping> elements and invertible="true" are specified?

Comment: Muthu, 04/11/07
The following changes may be considered:
- Drop the attribute invertible. Always force the use of two <mapping> child elements to provide the forward and inverse transforms.
- To indicate a mapping is the inverse of another mapping: introduce an "invertibleRef" attribute, which should be optional on the mapping element. This attribute, when used with the href attribute, points to the mapping from which this mapping can be numerically inverted.

Ambiguity: When both the mappingRef attribute and child elements defining the mapping are used, the child elements which define the mapping get precedence and the mappingRef attribute is ignored.

If both mappings are provided, then ignore the 'invertible' attribute. Added this description to the spec.

<canvas xml:
  <traceFormat>
    <channel name="X" type="decimal" default="0" orientation="+ve" units="em"/>
    <channel name="Y" type="decimal" default="0" orientation="+ve" units="em"/>
  </traceFormat>
</canvas>

It is not specified whether or not referring to the default canvas using a "#DefaultCanvas" relative URI is legal.

It is legal. The issue is similar to issue #75 in Section 4.5.

Unknown Mapping Type

InkML supports several types of mappings: unknown, identity, lookup table, affine, formula (specified using a subset of MathML) and cross product. The mapping type is indicated by the type attribute of a <mapping> element.

What if something other than "unknown", "identity", "lookup", "affine", "mathml" or "product" is used? Should it be interpreted as "unknown" or as an application-specific mapping type? In such a case, what about interoperability between applications (see 6.1 Mappings)?
We believe that all types of mapping can be expressed using the available mapping types; in particular, the MathML mapping type supports a wide variety of mapping definitions. So there will not be a case requiring an application-specific mapping definition. Application-specific mapping type definitions are not supported by the InkML standard. As per the spec, it is useful in the <canvasTransform> definition to give only the inverse transform by setting the forward transform mapping type as "unknown".

There is a proposal to drop the generic <mapping> element and create a specific element for each possible mapping type; it is discussed in Section 6.1.2 <bind> element, Issue No. 1, which aims to solve the invalid combinations of attributes in the <bind> child element.

There are examples for the identity and mathml mappings, but not for lookup, affine and product ones (see 6.1 Mappings).

6.1.2 <bind> has an example of "lookup". Corrected the example in 6.1.3 to be "product". Added an "affine" example in 6.1.4 for <matrix>.

source = xsd:string
Specifies source data values and/or channel to be considered in the mapping. Required: no, Default: none

target = xsd:string
Specifies target data values and/or channel to be considered in the mapping. Required: no, Default: none

column = xsd:string
Specifies the assigned column within a lookup table either for source or target channels. Required: for lookup table bindings, Default: none

variable = xsd:string
Specifies the variable within a formula that represents the current source data/channel. Required: for mathml bindings, Default: none

This truly offers pretty much room for invalid combinations (see 6.1.2 <bind> element attributes).

Comment: Muthu, 04/11/07
We may consider the following change: drop the <mapping> element and create a different mapping element for each possible mapping type, with a <bind> child element that has only the relevant attribute list.
Example: For mapping type = lookup, we can create a <tableMapping> element as given below:

<tableMapping id="mapping1">
  <bind target="X" column="1"/>
  <table> </table>
</tableMapping>

All the mapping elements can have an "href" attribute, which plays the role of the mappingRef attribute of <mapping>.

Comment: Stephen will evaluate the above proposal. This is tracked by ISSUE-69, provided by Muthu.

We updated the column attribute data type to be integer, and explicitly described the legal combinations of attributes. We also corrected errors in the MathML example (removed...

</trace>
<traceGroup contextRef="#context1" brushRef="#penA">
  <trace xml:...</trace>
</traceGroup>

which assigns the context specified by "context1" to traces "t001" and "t002", but with "penA" instead of the default brush. Somehow, this behavior should be explicitly specified instead of being introduced at the end of the document, in a section giving usage examples.

We need to more clearly identify how context information is combined. Explicitly spelled out the order of precedence of contextRef versus brushRef, and the nesting of traceGroups and traces.

Brushes, traceFormats, and contexts which appear outside of a <definitions> block and contain an id attribute both set the current context and define contextual elements which can be reused.

It seems to answer my previous questions about references to elements that are not packed into <definitions> blocks in the same document. Does it apply to elements defined in other documents? I guess the answer is yes by nature of URIs.

Comment: Muthu, 04/02/07
But as per the spec, the <brush> element can be defined only within <context> other than within a <definitions> block. So in order to define a new brush, we have to create a <context> element with the new <brush> element definition?

Added language to section 7 to clarify streaming and archival.
A <context> element can also override values of a previously defined context by including both a contextRef attribute and one or more of the canvasRef, canvasTransformRef, traceFormatRef or brushRef attributes. The following:

<context contextRef="#context1" brushRef="#penB"/>

also answers my previous questions. Somehow, I guess it would be a good idea to specify these behaviors in the corresponding sections rather than in the streaming application mode usage example.

Added language to section 7 to clarify streaming and archival.

Generally speaking, the markup contains various useless <span> elements.

Cleaned up the empty <span>'s.

The headers' id naming convention is inconsistent.

There are external links that refer to the #ids. Changing the naming now would break external links.

This fourth version of the Working Draft includes a few conceptual changes to simplify the definition while achieving greater expressive power. It also contains many small changes of details to make element and attribute use uniform accross the the definition to make it easier to learn and simpler to process.

Should be "across the" (see Status of this document). Fixed "accross" to "across".

Canvas transformations allow ink from different devices to combined and manipulated by multiple parties.

"be" is missing: "Canvas transformations allow ink from different devices to be combined and manipulated by multiple parties" (see 1.2 Elements). Added missing "be" to sentence.

Certain applications, such as collaborative whiteboards (where ink coming from different devices is drawn on a common canvas) or document review (where ink annotation from various sources is combined), will require ink sharing.

I would have used a plural form: "where ink annotations from various sources are combined" (see 1.2 Elements). Changed "(where ink annotation from various sources is combined)" to "(where ink annotation from various sources are combined)".
7.3 Archival and Streaming Equivalence

<definitions>
  <brush xml:
  <brush xml:
  <context xml:
  <context xml:
</definitions>

The traceFormatRef value should be "#format1".

Changed "format1" to "#format1".

Add correct style attributes to all examples and <pre> elements.

Using subelements instead of attributes for referring to context elements

At the f2f there was a suggestion to use subelements rather than attributes when the element being referred to may be defined as a subelement, or externally. Hence in <inkSource> we have:

<inkSource>
  <traceFormat href="#ABC"/>

rather than

<inkSource traceFormatRef="#ABC">

The same would apply to context etc. Need to agree on a convention and implement spec-wide. At the moment <traceFormat> does not admit an "href" attribute.

>>> TODO: This is a big change and needs to be discussed. In any case, it is too big and systematic to do for this WD.

1. Support an 'href' attribute that can refer to another element of the same type in all appropriate elements. Example: context, all the children of context, mapping and annotationXML.
2. Remove the 'xxxxRef' attributes that refer to elements of a child element type and always use the child element to provide the value. Example: context and canvas.
3. When an element has an 'href' attribute and children, the data in the children overrides the data inherited from the referred element. It is applicable for leaf elements such as 'brush'.

Related Issues: Section 3.2, issue #3; Section 6.3.2, issue #2; and Section 6.1, issue #2.

[Muthu: We may name the self-referencing attribute of the element 'ref' instead of 'href', because 'href' means 'hyperlink reference'.]

For ease of implementation, it is recommended that, in archival mode, referenced elements be defined inside a declaration block using the <definitions> element.

Not always recognized by non-native English speakers.
Maybe "For ease of implementation in archival mode, referenced elements should be defined inside a declaration block using the <definitions> element." would be easier to understand. Not a big issue anyway (see 1.3 Exchange Modes).

Rephrased per recommendation in comment.

What gets rendered?

> > 3. Rendering Ink would be an important usage. Should the InkML specify "how to" and "what to" render, or should it be left to the applications? If the InkML is not self-descriptive about rendering, it would be difficult to send it as an email attachment (specifying the MIME type as inkml) and expect the client to render it using its custom application. If InkML needs to be self-descriptive, we should consider the following.
> >
> > * Should the rendering application render all the traces and ignore traceView to avoid repetition? Should we show the traceView list and let the user select it and then display/highlight/select those ink?

See above. The traceView ink is part of the ink and must be rendered. If you do not want to duplicate it (which is probably the case) then refer to it in another document or put it in a <definitions> element.

> > * Should trace have a render flag?

I think it should always be rendered. What "rendering" means depends on the application. For example, an ink stroke from the "eraser" end of a pen would render as deleting ink that has already been displayed.

> > * Use traceView? In that case the application should possess the knowledge about what traceViews we need to use to show. Should we specify this information in annotation? Will this be intuitive?

See above.

> > * Introduce new tags for rendering at the high level. Like <render>, they can include references to trace, traceGroup and trace.

Require clarification: if <trace>s are inside <definitions> they are not acted on or rendered unless referenced from outside the <definitions>.
How things get rendered, and what the default brushes, contexts, etc. are (which ultimately control how things get rendered), is a more application-defined thing.

Rendering policy, or at least recommendations

There needs to be a clear "rendering policy", which states for example that traces etc. defined in <definitions> should not be rendered, but traceViews in the body should.

Follow-up questions:
- If traces/traceGroups are in the body of the document (i.e. not in defs) and there is also a traceView that refers to them, what gets rendered?
- Should traceGroups and traceViews be rendered as "groupings" of traces rather than as ink? Otherwise the renderer has no way to distinguish the rendering of these structures from those of simple traces.

Treating erasure as a special case

2. Going back to your eraser example, how would devices supporting an eraser report the eraser strokes to a shared whiteboard (for instance)? Would this be via some understanding/convention between the application and participating devices that such strokes should be marked with an "eraser" brush? Or should InkML support this as a "reserved" brush (given that erasing seems like a very fundamental operation in the writing context)?

The new proposed changes to the Brush element, discussed in section 4.3, issue id #73, address this issue.

"Traces are the basic element used to record the trajectory of a pen as a user writes digital ink."

I would have written: "<trace> is the basic element used to record the trajectory of a pen as the user writes digital ink." (see 3 Traces and Trace Formatting).

Rephrased per recommendation in comment.

Traces generated by different devices, or used in differing applications, may contain different types of information. InkML defines channels to describe the data that may be encoded in a trace.

I guess a link is welcome here (see 3 Traces and Trace Formatting).
Added a link to Channel.

The <intermittentChannels> lists those channels whose value may optionally be recorded for each sample point.

I would have written: "The <intermittentChannels> element lists those channels whose value may optionally be recorded for each sample point." (see 3.1.2 <intermittentChannels> element)

Changed to "The <intermittentChannels> element lists those channels whose value may optionally be recorded for each sample point."

> > * How would InkML handle the page notion?
> > * Use 'different canvas instance' for each page?

We discussed this and decided to leave it to the application layer, otherwise the notion of canvases gets quite complicated. Applications could do this with attributions on ink, by having separate canvases for each page, by defining subregions of a canvas to be pages, by having an intermittent "page" channel on the trace, or other mechanisms. If there is sufficient desire for an application-independent solution, I suppose we could add "paginated/pagewidth/pageheight" attributes to a bounded canvas. Then ink coordinates could be used modulo page width and height, with the integer quotients with page width giving page number, and with page height giving volume.

Supporting pages using the notion of device canvas

There appears to be a genuine case for supporting the page concept in InkML given that many of the (especially paper-based) devices use the concept of page to organize the ink they capture. The bounded canvas idea is appealing, especially if we can think of it as the "device canvas" (a notion I had proposed earlier). The page number can be supported as an intermittent channel. Then we need a way to support mappings from such a canvas to another (e.g. shared) canvas.

units = xsd:string
The units in which the values of the channel are xpressed (numerical channels only). Required: no, Default: none

Should be "expressed" (see 3.1.3 <channel> element). Changed "xpressed" to "expressed".
The <mapping> element can be employed to specify an applied sine transformation.

I would have linked the <mapping> element with the corresponding section (see 3.1.4 Orientation Channels). Added link to <mapping>.

<channel name="T" type="integer" units="ms" respectTo="#ts1"/>

Fix indentation (see 3.1.7 Time Channel). Fixed the indentation in the <channel name="T"> example.

Otherwise, the value of the time channel for a given sample point is defined to be the timestamp of that point in the units and frame of reference specified by its corresponding <inkSource> description (more precisely, by the <traceFormat> element for the channel).

I would have linked the <inkSource> element with the corresponding section (see 3.1.7 Time Channel). Added link to <inkSource>.

The appearance of a <traceFormat> element in an ink markup file ...

I would have used "in an InkML file" (see 3.1.9 Specifying Trace Formats). Changed "ink markup file" to "InkML file".

Additionally, wsp may occur anywhere except within a decimal or hex and must occur if required to separate two valuess.

Should be "values" (see 3.2.1 <trace> element contents). Changed "valuess" to "values".

All traces must begin with an explicit value, not with a first or second difference. This is true of continuation traces as well. This allows the location and velocity state information to be discarded at the end of each trace, simplifying parser design.

I suggest: "This is true for continuation traces" (see 3.2.1 <trace> element contents). Added "This is true for continuation traces".

There is a 3.2.1 <trace> element section but no 3.2.2 section. Added section number.

[3.3 Trace Collections section]'s id is "#traceAggregate"; change to "#traceCollections". Changed "#traceAggregate" to "#traceCollections".

"ink markup file" or "InkML file"? Maybe it's a good idea to choose one and stick to it (see 4.1 The <context> element)? Changed "ink markup file" to "InkML file".

The <inkSource> element will allow specification of:
1. Manufacturer, model and serial number (of a hardware device)
2. Text description of source, and reference (URI) to detailed or additional information
3. Trace format - regular and intermitent channels reported by source
4. Sampling rate, latency and active area
5. Additional properties of the device in the form of name-value-units triples
6. Properties of individual channels

Should be "intermittent" (see 4.2.1 <inkSource> element). Changed "intermitent" to "intermittent".

For the sake of consistency, the <srcProperty> element could have been named <sourceProperty>. In fact, it is the only markup that has an abbreviated name (see 4.2.5 <srcProperty> element).

The <srcProperty> element will be renamed to <sourceProperty>.

The <channelProperties> element is meant for describing properties of specific channels reported by the ink source. Properties such as range and resolution may be specified using corresponding elements. For more esoteric properties (from a lay user's standpoint) the generic channelProperty element may be used.

I would replace "channelProperty" by "<channelProperty>" (along with the link) (see 4.2.6 channelProperties element). Changed "channelProperty" to <channelProperty> with a link to channelPropertyElement.

I would replace

name = xsd:string
Name of property of device or ink source. Required: yes

by

name = xsd:string
The name of the property of the device or ink source. Required: yes

(see 4.2.7 channelProperty element attributes). Rephrased per recommendation in comment.

There is a 4.3.1 <brush> element section but no 4.3.2 section. Added section number.

There is a 4.4.1 <timestamp> element section but no 4.4.2 section. Added section number.

If the canvas tranform is given as an affine map of full rank, then it may be inverted numerically.

Should be "transform" (see 5 Canvases).
Changed "tranform" to "transform".

If the type attribute has value mathml then the content is a subset of MathML restricted to the following subset of Content MathML 2.0 elements defining arithmetic on integers, real numbers and boolean values:
• Numbers: cn
• Named constants: exponentiale, pi, true, false
• Identifiers: ci. These must be associated to channels using a <bind> element.
• Arithmetic: plus, minus, times, divide, quotient, rem, power, root, min, max, abs, floor, cieling
• Elementary classical functions: sin, cos, tan, arcsin, arccos, arctan, exp, ln, log
• Logic: and, or, xor, not
• Relations: eq, neq, gt, lt, geq, leq
• Operator application: apply

Are these ones correct? Shouldn't it be exponential and ceiling? (see 6.1.1 <mapping> element)

Changed "exponentiale" to "exponential, e" and changed "cieling" to "ceiling".

Maybe it should be: "Content within a <definitions> block has no impact on the interpretation of traces, unless referenced from outside the <definitions> block". Also, is the semicolon required? (see 6.2.1 definitions element)

Rephrased per recommendation in comment.

The <annotation> element provides a mechanism for inserting simple textual descriptions in the ink markup. This may be use for multiple purposes.

Should be "This may be used for multiple purposes". Changed "use" to "used".

[7 Streams and Archives] rename to "7 Archives and Streams" to match the content of sections 7.1 and 7.2.

Rephrased per recommendation in comment.

Comment: Muthu, 03/30/07
On doing this, we have to modify the second paragraph in section 7, where there is a reference to 'former' and 'later' that refer to Streams and Archives.

Replace all "refered" occurrences by "referred", found several times. Changed all "refered" to "referred".

Streaming how-to

I started having a look at the last draft of the InkML specification.
Before I send any feedback, I would like to know more about streaming mode, since I'm not familiar at all with XML streaming:
- how is it done in practice?
- does StAX need to be used, or rather something like SOAP?
- when you send continuation <trace> elements, do they have to be wrapped inside <ink></ink> markup?

Added language to section 7 to clarify streaming and archival. Added a comment to the implementation guidelines saying that any of the usual XML protocols (StAX, SOAP, etc.) may be used to transmit InkML documents or fragments between subprograms or distributed programs.

The 'tilt' value is captured by the 'orientation channels' such as OTx and OTy. This fact is not mentioned explicitly in the specification document. InkML can be annotated with SVG using the <annotationXML> construct.

Contributions from UA & Wacom

Have there been contributions from User Agent, browser developers or Wacom? Neither appear to be included in the list of authors and editors.

No, they have not made any contributions.

2.1 <ink> element -> For all but the documentID you give the required and default information.

The required/default was at the *end* of section 2.1. Move it up with the attribute to match all the other attributes in the spec.

'( definitions | context | trace | traceGroup | traceView | annotation | annotationXML )*'

This should be 'trace ( definitions | context | trace | traceGroup | traceView | annotation | annotationXML )*' so that there MUST be one trace element.

Agreed: at least a single <trace> is required. Update the XSD as well.

Because this can be used in situations where the date of creation/capturing is important, you should also provide an optional date attribute.

A date attribute on <ink> would be redundant: we already have the <timestamp> element and the timestamp attribute on trace.

3.1.3 <channel> element -> Is the name attribute value interpreted case-sensitively?

Yes, they are case sensitive. I added language to say they're case sensitive.
For the orientation attribute, what is +ve? Increase or decrease? ...

Yes, a mapping would define the actual min/max values for each channel (0-255, or 0.0-1.0, etc.). Since user-defined channels are possible, other color spaces can be application defined.

3.1.7 Time Channel -> The "(see Time Channel)" is a self-reference. Why? Do you mean the timestamp element?

Removed the useless reference. It was a holdover from when this paragraph lived elsewhere.

Changed to: Thus, in the simplest case, an InkML file may contain nothing but <trace> elements within an <ink> element.

If the trace is a mid-trace in a set of trace continuations, the state can be "indeterminate". Its actual "penUp" or "penDown" state is determined by a prior trace.

'Regular channels may be reported as explicit values, differences, or second differences: ...'

What is a second difference? Does this mean that a second difference refers to the same as a preceding difference, or to the difference?

It's another way of saying "first order derivative" or "second order derivative". Or: no difference == absolute coordinate, first order == velocity, second order == acceleration.

'Intermittent channels may be encoded with the wildcard character "?".' and 'All regular channels must be reported, if only with the explicit wildcard "?".'

Because of the first line I cite, I think the second is not right. The only wildcard allowed for regular channels is the *.

Agreed. Changed the wildcard from '?' to '*'.

'Note: see Appendix A Implementation Guidelines for information about reducing file or stream size.'

Appendix A is "Acknowledgements". You mean Appendix B.

Changed the reference from Appendix A to Appendix B.

You are correct. No clarification needed in spec.

A '0' in either the to or from attribute is illegal. This section explicitly states that the indexes are 1-based. 0 is an error.
Implement globalization.
Prepare culture-specific formatting.
The System.Globalization namespace in the .NET Framework provides most of the support in the .NET Framework for localization in Visual C# .NET applications. I'll start looking at localization code by exploring some of the concepts and classes you'll need to understand to build your own world-ready applications.
The two key pieces to keep in mind are cultures and resource files. A culture, as you'll see, is an identifier for a particular locale. A resource file is a place where you can store some culture-dependent resources such as strings and bitmaps. (The .NET Framework handles translating other resources, such as date formats, automatically.) ...
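As an illustrative sketch (not from the book's own listings), the following console program shows the kind of culture-dependent formatting CultureInfo enables; the culture names are standard .NET identifiers, and the exact output strings depend on the framework version and installed culture data:

```csharp
using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        double price = 1234.56;
        DateTime date = new DateTime(2003, 7, 4);

        // The same values render differently under each culture.
        foreach (string name in new[] { "en-US", "de-DE", "fr-FR" })
        {
            CultureInfo culture = new CultureInfo(name);
            Console.WriteLine("{0}: {1} / {2}",
                name,
                price.ToString("C", culture),  // currency format
                date.ToString("d", culture));  // short date format
        }
    }
}
```

Passing a CultureInfo as the IFormatProvider is what lets the runtime pick the correct currency symbol, decimal separator, and date order without any per-culture code on your part.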
IndexOutOfRangeException with vehicle.Occupants
Hi people. I'm having a problem with a mod I'm making. I have a script that spawns enemy peds around the player, both on foot and in vehicles. These peds and vehicles should despawn if they get too far from the player or they are destroyed. Vehicles also should despawn if they are empty. I'm using this function that is executed OnTick (1 second interval):
private void ClearLists()
{
    Ped current;
    Vehicle currentv;

    for (int i = Enemy1Vehicles.Count - 1; i >= 0; i--)
    {
        currentv = Enemy1Vehicles.ElementAt(i);
        Ped[] currentoc = currentv.Occupants;
        if (((currentv != null && currentv.Exists())
             && (currentoc == null || currentoc.Length == 0 || !currentv.IsAlive
                 || World.GetDistance(currentv.Position, Game.Player.Character.Position) > maxRange)
             || MostWantedEnabled == false))
        {
            if (currentv.CurrentBlip.Exists())
                currentv.CurrentBlip.Remove();
            Enemy1Vehicles.Remove(currentv);
            currentv.MarkAsNoLongerNeeded();
        }
    }

    for (int i = Enemy1Peds.Count - 1; i >= 0; i--)
    {
        current = Enemy1Peds.ElementAt(i);
        if (((current != null && current.Exists())
             && (!current.IsAlive
                 || World.GetDistance(current.Position, Game.Player.Character.Position) > maxRange)
             || MostWantedEnabled == false))
        {
            if (current.CurrentBlip.Exists())
                current.CurrentBlip.Remove();
            Enemy1Peds.Remove(current);
            current.MarkAsNoLongerNeeded();
        }
    }
}
However, sometimes while I'm playing I get an IndexOutOfRangeException at "currentoc = currentv.Occupants". Looks like the function is trying to access an index that is out of range, but that's part of ScriptHookV. I think it may be related to the peds that are inside, maybe doing MarkAsNoLongerNeeded on some of them gives that error, but I tried a few things and nothing works. Any idea what could be happening?
Took the liberty of putting your code into a more readable shape. Wrap your code with 3 backticks (```) on the line before and after the code to put it in a code block.
Code things:
Have you set all spawned vehicles as mission entities? They can despawn if not.
That for loop. Got a specific reason for iterating from max-1 to 0? If not, use a range-based for loop. It would look like this:
foreach (var vehicle in Enemy1Vehicles)
{
    // things happen
}
You wouldn't need to manage Ped and Vehicle in that function any more.
That if statement. It's rather big. Think of splitting it up in more manageable chunks. Things that are strange:
currentoc == null: I wouldn't expect currentv.Occupants to return a nullptr. If you look at the source, this doesn't seem to be able to happen
- You're checking if it's occupied? Putting that explicitly helps debugging later on.
- Same with distance.
You're removing an item from a list while you're iterating through that list. That's not a good idea. Since you're removing everything anyway, empty the list when you're done handling the contents. This is where it probably crashes.
I applied my suggestions, hopefully I guessed your intentions right
private void ClearLists()
{
    foreach (var vehicle in Enemy1Vehicles)
    {
        bool hasOccupants = vehicle.Occupants.Length > 0;
        bool isInRange = World.GetDistance(vehicle.Position, Game.Player.Character.Position) < maxRange;
        if (vehicle.Exists() && (!hasOccupants || !vehicle.IsAlive || !isInRange) || !MostWantedEnabled)
        {
            if (vehicle.CurrentBlip.Exists())
                vehicle.CurrentBlip.Remove();
            vehicle.MarkAsNoLongerNeeded();
        }
    }
    Enemy1Vehicles.Clear();

    foreach (var ped in Enemy1Peds)
    {
        bool isInRange = World.GetDistance(ped.Position, Game.Player.Character.Position) < maxRange;
        if (ped.Exists() && (!ped.IsAlive || !isInRange) || !MostWantedEnabled)
        {
            if (ped.CurrentBlip.Exists())
                ped.CurrentBlip.Remove();
            ped.MarkAsNoLongerNeeded();
        }
    }
    Enemy1Peds.Clear();
}
Oh, I couldn't find a button to add a "code" tag or something like that, I noticed part of the code was formatted automatically, but didn't know the three backticks did that. Thanks.
Anyway, I iterate backwards because I'm deleting some items from the list. I've heard that iterating forward makes it skip some elements because all the following items are shifted when one of them gets removed, and that can end in a IndexOutOfRangeException. If I iterate backwards, that shouldn't happen, as it only shifts the elements that are after the one I remove. The error doesn't seem to be related to that loop tho, because the exception points to "currentv.Occupants". It seems to be something internal to that function.
The "currentoc == null" was something I added to try to fix the problem. I read the source code later and noticed it always returns a list, even if it's empty. I forgot to remove it after that.
Is there any other way to check if the vehicle is occupied? I couldn't find any other SHVDN function related to that. In fact, I only need to know if the vehicle is empty.
The loop is not meant to remove all the items from the list (unless "MostWantedEnabled" is false, which clears everything from the mod). It should only remove those peds that are dead, out of range or empty vehicles. I can't clear the list while the mod is active, I need it to track the peds.
About the peds and vehicles, I haven't set them as mission entities, I didn't know that was necessary. I thought the game kept them in memory until I mark them as no longer needed. In fact, If I don't despawn them manually, I can see their blips moving around the map even kilometers away from me. Maybe it's because I spawned them instead of picking a naturally spawned ped.
I'll try to reorganize the loops, maybe the error is related to the order in which I mark peds and vehicles as no longer needed, but it's weird. Thanks anyway.
I replaced the line vehicle.Occupants in my script with this function:
private bool IsVehicleEmpty(Vehicle v)
{
    int nv = Function.Call<int>(Hash.GET_VEHICLE_NUMBER_OF_PASSENGERS, v);
    bool dv = Function.Call<bool>(Hash.IS_VEHICLE_SEAT_FREE, v, -1);
    return (nv == 0 && dv);
}
And the error is gone. I think it's a problem with the implementation of get_Occupants(), which doesn't work in this particular case for some reason. Thanks! | https://forums.gta5-mods.com/topic/18257/indexoutofrangeexception-with-vehicle-occupants | CC-MAIN-2019-35 | refinedweb | 963 | 51.14 |
Ticket #5119 (closed Bugs: fixed)
[C++0x] unordered_map doesn't support cp-ctor.
Description
hi,
during compiling following testcase on gcc-4.6 snapshot i get an error.
#include <boost/unordered_map.hpp>

struct S
{
    boost::unordered_map<const void*, int> m_;
};

boost::unordered_map<const void*, S> m2_;

void foo(const void* p)
{
    S s;
    m2_.insert(std::make_pair(p, s));
}
(...)
include/c++/4.6.0/bits/stl_pair.h:110:17: error: 'constexpr std::pair<_T1, _T2>::pair(const std::pair<_T1, _T2>&) [with _T1 = const void* const, _T2 = S, std::pair<_T1, _T2> = std::pair<const void* const, S>]' is implicitly deleted because the default definition would be ill-formed:
.../include/c++/4.6.0/bits/stl_pair.h:110:17: error: use of deleted function 'S::S(const S&)'
The major problem is the lack of a copy constructor in unordered_map when compiled with -std=gnu++0x. The !defined(BOOST_NO_RVALUE_REFERENCES) branch activates only move semantics, while the documentation describes a copy constructor.
GCC bugzilla entry about this issue:
Attachments
Change History
comment:2 Changed 6 years ago by danieljames
- Status changed from new to assigned
- Milestone changed from To Be Determined to Boost 1.47.0
That will hopefully fix it, but it's too late for 1.46.
I'm still seeing a lot of failures for gcc 4.6, but they also show up on gcc 4.5 and all seem to be exception related. Since they're not showing up on the regression tests, I suspect this is a problem with the macports version of gcc rather than a bug in boost or gcc in general.
(In [68445]) Add copy constructors and assignment operators when using rvalue references. Refs #5119. | https://svn.boost.org/trac/boost/ticket/5119 | CC-MAIN-2016-36 | refinedweb | 278 | 56.55 |
For loops
Usage in Python
- When do I use for loops?
For loops are traditionally used when you have a piece of code which you want to repeat n number of times. As an alternative, there is the WhileLoop; however, while is used when a condition is to be met, or if you want a piece of code to repeat forever. For example -
For loop from 0 to 2, therefore running 3 times.
for x in range(0, 3):
    print "We're on time %d" % (x)
While loop from 1 to infinity, therefore running infinity times.
x = 1
while True:
    print "To infinity and beyond! We're getting close, on %d now!" % (x)
    x += 1
As you can see, they serve different purposes. The for loop runs for a fixed amount - in this case, 3, while the while loop theoretically runs forever. You could use a for loop with a huge number in order to gain the same effect as a while loop, but what's the point of doing that when you have a construct that already exists? As the old saying goes, "why try to reinvent the wheel?".
- How do they work?
If you've done any programming before, there's no doubt you've come across a for loop or an equivalent to it. In Python, they work a little differently. Basically, any object with an iterable method can be used in a for loop in Python. Even strings, despite not having an iterable method - but we'll not get on to that here. Having an iterable method basically means that the data can be presented in list form, where there's multiple values in an orderly fashion. You can define your own iterables by creating an object with next() and iter() methods. This means that you'll rarely be dealing with raw numbers when it comes to for loops in Python - great for just about anyone!
- Nested loops
When you have a piece of code you want to run x number of times, then code within that code which you want to run y number of times, you use what is known as a "nested loop". In Python, these are heavily used whenever someone has a list of lists - an iterable object within an iterable object.
- Early exits
Like the while loop, the for loop can be made to exit before the given object is finished. This is done using the break keyword, which will stop the code from executing any further. You can also have an optional else clause, which will run should the for loop exit cleanly - I.E., without breaking.
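The break/else interaction described above can be sketched as a small membership-test helper (an illustrative function, not part of the original page) - the else suite runs only when the loop finishes without hitting break:

```python
def contains(needle, haystack):
    # 'else' belongs to the 'for', not the 'if': it runs only if the
    # loop completed every iteration without executing 'break'.
    for item in haystack:
        if item == needle:
            break          # early exit: skips the else suite
    else:
        return False       # clean exit: needle was never found
    return True
```

Calling contains(3, [1, 2, 3]) breaks out and returns True, while contains(7, [1, 2, 3]) exhausts the loop, runs the else suite and returns False.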
Things to remember
- range vs xrange
The range function creates a list containing numbers defined by the input. The xrange function creates a number generator. You will often see that xrange is used much more frequently than range. This is for one reason only - resource usage. The range function generates a list of numbers all at once, whereas xrange generates them as needed. This means that less memory is used, and should the for loop exit early, there's no need to waste time creating the unused numbers. This effect is tiny in smaller lists, but increases rapidly in larger lists as you can see in the examples below.
Examples
Nested loops
for x in xrange(1, 11):
    for y in xrange(1, 11):
        print '%d * %d = %d' % (x, y, x*y)
Early exit
for x in xrange(3):
    if x == 1:
        break
For..Else
for x in xrange(3):
    print x
else:
    print 'Final x = %d' % (x)
Strings as an iterable
string = "Hello World" for x in string: print x
Lists as an iterable
collection = ['hey', 5, 'd']
for x in collection:
    print x
Lists of lists
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
for list in list_of_lists:
    for x in list:
        print x
Creating your own iterable
class Iterable(object):
    def __init__(self, values):
        self.values = values
        self.location = 0

    def __iter__(self):
        return self

    def next(self):
        if self.location == len(self.values):
            raise StopIteration
        value = self.values[self.location]
        self.location += 1
        return value
range vs xrange
import time  # use time.time() on Linux

start = time.clock()
for x in range(10000000):
    pass
stop = time.clock()
print stop - start

start = time.clock()
for x in xrange(10000000):
    pass
stop = time.clock()
print stop - start
Time on small ranges
import time  # use time.time() on Linux

start = time.clock()
for x in range(1000):
    pass
stop = time.clock()
print stop - start

start = time.clock()
for x in xrange(1000):
    pass
stop = time.clock()
print stop - start
Your own range generator using yield
def my_range(start, end, step):
    while start <= end:
        yield start
        start += step

for x in my_range(1, 10, 0.5):
    print x
Our coding guidelines
First, I have to state our coding guidelines regarding unit test structure and naming because the queries below are built against them.
Unit test class name
Each unit test class is named following this pattern:
[prefix]<name of class under test>Test
The prefix is optional and is used when multiple test classes exist for the same class under test. The prefix states which aspect of the class under test is tested in this specific test class. Then follows the name of the class under test and the suffix Test.
For example, FooTest is the test class for class Foo, and ExceptionCasesBarTest contains only the exception-cases tests for class Bar.
Unit test is in the same namespace as the class under test, but in a dedicated test assembly
We put all our unit tests in assemblies named like the production assemblies, with the suffix .Test. That makes them easy to detect and simplifies automatic builds.
Furthermore, a unit test class is always in the same namespace as the production class it is testing (we remove the .Test suffix in the default namespace of the test project).
Name of system under test
We always use the name testee for the instance that is under test. This helps to quickly grasp what is tested in a unit test.
[TestFixture]
public class FooTest
{
    private Foo testee;

    [SetUp]
    public void SetUp()
    {
        this.testee = new Foo();
    }

    [Test]
    public void DoesMagic()
    {
        string actual = this.testee.DoMagic();

        actual.Should().Be("magic"); // this is FluentAssertions, check it out!
    }
}
Finding misplaced unit tests
This is the query we use to find misplaced unit test. A misplaced unit test is not in the same namespace as the class it tests:
from p in Assemblies.WithNameWildcardMatchNotIn("*.Test").WithNameWildcardMatch("MyProject*").ChildTypes()
from t in Assemblies.WithNameWildcardMatch("*.Test").ChildTypes()
where t.IsUsing(p)
    && t.Name.EndsWith(p.Name + "Test")
    && t.ParentNamespace.Name != p.ParentNamespace.Name
select new { Class = p, Test = t, ClassNamespace = p.ParentNamespace, TestNamespace = t.ParentNamespace }
- select all types of our production assemblies (assemblies starting with our project name and not ending in Test)
- select all types of our unit test assemblies
- match test class and production class by checking whether the test class uses the production class and matches the name pattern (test class name ends with production class name + Test)
- take only the found pairs that don’t have the same namespace
- print the name of the class and test class and their namespaces
Finding misnamed unit tests (containing a testee):
This is the query to find unit test classes with an incorrect name, not following our name pattern:
from p in Assemblies.WithNameWildcardMatchNotIn("*.Test").WithNameWildcardMatch("MyProject*").ChildTypes()
from t in Assemblies.WithNameWildcardMatch("*.Test").ChildTypes()
where t.IsUsing(p)
    && t.Fields.Where(f => f.Name == "testee").Any()
    && t.Fields.Single(f => f.Name == "testee").FieldType == p
    && !t.Name.EndsWith(p.Name.Substring(0, p.Name.IndexOf("<") > 0 ? p.Name.IndexOf("<") : p.Name.Length) + "Test")
    && !t.IsGeneratedByCompiler
select new { Class = p, Test = t, t.ParentNamespace }
- select all types of our production assemblies (assemblies starting with our project name and not ending in Test)
- select all types of our unit test assemblies
- match test class and production class by checking whether the test class uses the production class and the type of the field testee
- take only class pairs that do not follow the naming convention
- skip generated classes
Finding unit tests missing TestFixture or not named with suffix Test:
This query finds unit test classes that miss either the TestFixture attribute or the suffix Test in the class name:
from t in Assemblies.WithNameWildcardMatch("*.Test").ChildTypes()
where
    (
        t.HasAttribute("NUnit.Framework.TestFixtureAttribute")
        && !t.Name.EndsWith("Test")
    )
    ||
    (
        !t.HasAttribute("NUnit.Framework.TestFixtureAttribute")
        && t.Name.EndsWith("Test")
    )
    ||
    (
        t.InstanceMethods.Where(m => m.HasAttribute("NUnit.Framework.TestAttribute")
                                  || m.HasAttribute("NUnit.Framework.TestCaseAttribute")).Any()
        && !t.Name.EndsWith("Test")
    )
orderby t.Name
select new { t, t.ParentNamespace }
- take all types from a test assembly that either have the TestFixture attribute or end with Test, but miss the other
- take all types from a test assembly that do have test methods but whose names do not end in Test
Conclusions
NDepend queries can easily be used to spot unit test classes not complying with your coding conventions.
This is a great help after a refactoring session to check whether all parts are still in their correct place.
These queries can easily be changed to be used for other testing frameworks than NUnit or other coding conventions.
Let me know if you have some cool NDepend queries of your own.
Márk Mészáros2,966 Points
I get 'TypeError: 'float' object is not iterable, could someone help me please?
Fellow learners!
I need some help with this piece of code.
I get the following error log when running the file:
Traceback (most recent call last):
File "dungeon.py", line 54, in <module>
valid_moves = get_moves(player)
File "dungeon.py", line 38, in get_moves
x, y = player
TypeError: 'float' object is not iterable
I understand that there is something going wrong at line 38, with x and y, but I am nut sure how to tackle the float problem. I thought that player is a tuple, and x, y are integers unpacked from this tuple, so no clue about where float comes in the picture. Any explanation would be highly appreciated! The full code is below:
import os
import random

# (the CELLS list and the clear_screen() helper were cut off in the original post)

def get_positions():
    return random.sample(CELLS, 3)
    return player, door, monster

player, door, monster = get_positions()

def move_player(player, move):
    x, y = player
    if move == "LEFT":
        x -= 1
    if move == "RIGHT":
        x += 1
    if move == "UP":
        y -= 1
    if move == "DOWN":
        y += 1
    return x, y

def get_moves(player):
    moves = ["LEFT", "RIGHT", "UP", "DOWN"]
    x, y = player
    if x == 0:
        moves.remove("LEFT")
    if x == 4:
        moves.remove("RIGHT")
    if y == 0:
        moves.remove("UP")
    if y == 4:
        moves.remove("DOWN")
    return moves

while True:
    valid_moves = get_moves(player)
    clear_screen()
    print("Welcome to the Dungeon!")
    print("You are now in the {} room".format(player))
    print("You can move to {}".format(", ".join(valid_moves)))
    print("To quit type QUIT")
    move = input("> ")
    move = move.upper()
    if move == "QUIT":
        break
    if moves in valid_moves:
        player = move_player(player, move)
    else:
        print("\n You hit a wall!")
        continue
2 Answers
Mike Wagner23,557 Points
Many of your cells were defined with
x.y (period) when they should be
x,y (comma). If you fix those cells, you shouldn't have the "float" problem.
Edit as a side note, you have a second and unreachable return statement in your get_positions() method. You'll have to fix that in order to get the expected behavior.
Márk Mészáros2,966 Points
Thanks for the help and for pointing out the return trouble. Much appreciated!
Márk Mészáros2,966 Points
Question is cleared up, it is much easier to read now ;) Can it be that some of the CELLS list's tuples are actually not tuples because they have a . (dot) and not a , (comma)? Have to check that out...
Márk Mészáros2,966 Points
Correcting the CELLS sorted out the float problem part :) Sometimes just asking the question and having an overview helps in solving the problem ;) There is still a bug at the end, in the if statement: there should be 'move' instead of 'moves'. Thanks for the support Mike Wagner! ;)
Mike Wagner23,557 Points
Márk Mészáros - Ah, yes. I see the 'moves'. I didn't read down that far, haha. No problem. Glad to help.
Mike Wagner23,557 Points
Márk Mészáros - I am still sort of curious what you were attempting to do with your double return statement in
get_positions. For some reason I can't wrap my head around it, haha.
Mike Wagner23,557 Points
The formatting on your question makes it difficult to discern the structure of your code. If you could edit your question to use the backtick ` character (3 times on the line above and below your code) it will be easier to help you sort out. | https://teamtreehouse.com/community/i-get-typeerror-float-object-is-not-iterable-could-someone-help-me-please | CC-MAIN-2019-51 | refinedweb | 569 | 73.27 |
Vala 0.1.3 is out. Tarballs are available from the GNOME FTP servers. There are a lot of new bindings from many contributors: D-Bus, GConf, libgnome, libgnomeui, Glade, libnotify, GnomeVFS, GtkSourceView, Panel Applet, GNOME Desktop Library, libsoup, libwnck, GtkMozEmbed, Poppler, Enchant, Hildon, SQLite, and curses. Many bugs have been fixed all over the place.
A noteworthy change is that the type system has been made more consistent by converting the reference-type structs in the bindings to classes. The [ReferenceType] attribute is gone, you can now declare all reference types as classes in bindings, even if they don’t derive from GObject. A side-effect of this change is that you now always have to specify the base class in your class declarations, e.g. use
using GLib;
public class Bar : Object { … }
to declare a class Bar which derives from GObject. The advantage is that you can be sure now that all structs have value-type semantics and all classes have reference-type semantics, no mixup anymore.
The more I read about Vala, the more I think that it is to Glib/GObject what Objective-C is to NSFoundation. Is this a correct way to think about it?
Does Vala support Unicode strings like C# and D?
Wow ! You rock !
does this mean i could e.g. do a
class MyParamSpec : ParamSpec {
}
To create a derivative of GParamSpec ? The same for Boxed type ?
Would be so nice !
Regards.
@Tristen: I’m not really familiar with Objective-C but I guess you can look at it like that.
@Simon: Yes, strings in Vala are UTF-8 encoded and the methods of the string class use the Unicode manipulation functions of GLib. A unichar represents a single Unicode character.
@bersace: That won’t work yet to register a ParamSpec or Boxed type but that’s the idea. It’s definitively planned to support that for Boxed types but it should also be possible to add ParamSpec support.
Looks really promising. But there is one thing i’m really missing: good documentation.
I know that hacking is more fun than writing documentation. But Vala has a great opportunity and it is quite small and young. So if you take care about documentation from the beginning it would be quite easy to keep Vala good documented and this is a key issue to make other people like it and use it.
Software is more than just code, documentation is also a very important part of it. New code/bindings should only be accepted if it comes with good (API-)documentation. I think that’s the only way to make sure that Vala becomes a good language. Without (API-)documentation a programming language is just useless.
@pinky: You’re right about that. We’re aware of the missing documentation and will try to focus on stabilization and writing documentation instead of features for the following releases.
Thanks for answering my question. It’s great to hear that Vala has Unicode support. I’m looking forward to trying out Vala once documentation is there, don’t mind the fact that it is sort of Gnome-biased
You forgot Gstreamer in the binding list (it wasn’t in 0.1.2).
Xav
Please, get up the documetation for develop …
@Xav: That’s right, the GStreamer bindings missed the tarball in 0.1.2, it was already available in SVN.
@rcares: I’m working on it, the beginning of a language manual is already in SVN, includung an index file for Devhelp. | http://blogs.gnome.org/juergbi/2007/08/31/vala-013/ | CC-MAIN-2014-52 | refinedweb | 586 | 65.22 |
Hi - Struts

Hi friends,
Is MySQL a must for Struts, or is it not necessary? I know it is possible to run Struts using Oracle 10g... please reply fast.
Thanks.

Hi Soniya,
We can use Oracle too in Struts.

Hi - Struts

Hi friends,
Thanks for your nice response.
I have a sub-package in the .java file. Please let me know how to compile it on Windows XP, and please give me the command to compile.
Thanks to ur nice responce
I have sub package in the .java file please let me know how it comnpile in window xp please give the command to compile
jav beginners - Java Beginners
jav beginners pl. let me know the logic and the program to print the prime and twin prim numbers
thanks Hi Friend,
Try the following code:
class TwinPrimes{
public static void main (String
Jav Applets - Applet
Jav Applets I need to write a small payroll program, using applet....
Thanks Hi Friend,
Try the following code:
1)
import java.applet.Applet;
import java.awt.*;
import java.awt.event.*;
public class
Hi Friend..IS IT POSSIBLE? - Java Beginners
Hi Friend..IS IT POSSIBLE? Hi Friend...Thank u for ur valuable response..
IS IT POSSIBLE FOR US?
I need to display the total Number od days... 2008 MONTH 8 DAYS: 8
TOTAL NUMBER OF DAYS : _____
IS IT POSSIBLE FOR US
jav
jav a - Design concepts & design patterns
jav a Q.1. Write a program in Java to perform the addition of two complex numbers.
Hi Friend,
Try the following code:
public class ComplexNumber{
private int a;
private int b;
public ComplexNumber
how to compile and run struts application - Struts
how to compile and run struts application how to compile and run struts program(ex :actionform,actionclass) from command line or in editplus
Compile time error
Compile time error Hi,
When i compile my simple program in cmd am getting a error like
"Javac is not a recognized as an internal or external command,
operable program or batch file"
How to resolve this problem ????
compile
compile how to compile .class files using eclipse
compile
compile how to compile a java program using jre7
it possible or not
it possible or not without public static void main(string args[]) in java program
it is possible
struts - Struts
struts Hi,
I need the example programs for shopping cart using struts with my sql.
Please send the examples code as soon as possible.
please send it immediately.
Regards,
Valarmathi
Hi Friend,
Please
jav-util.zip
jav-util.zip i need to load a zipfile in to my class while executing and i have extract each file in that zip and a particular file content i need to desplay how can i do this please help me
struts - Struts
,we writing the action ="action class name"in jsp,here in xhtml what we have... in xhtml).Please tel me the solution as soon as possible.
Thank you. Hi...struts we are using Struts framework for mobile applications,but we
Compile error - Java Beginners
.
Thanks Hi friend,
Do some changes in the code then compile Successfully...Compile error I get this error when compiling my program:
java:167... java.util.HashMap;
public class TestLineCounter
{
public static long totalLines...)If possible explain me with one example.
Give me reply as soon as possible.
Thank you. hi,
to add jar files -
1. right click on your project.
2
struts compilation - Struts
struts compilation how to compile struts example Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
JAv method overloading
JAv method overloading What restrictions are placed on method overloading
Struts code - Struts
Struts code
Hi Friend,
Is backward redirection possible in struts.If so plz explain me
struts validations - Struts
struts validations hi friends i an getting an error in tomcat while running the application in struts validations
the error in server as validation disabled
plz give me reply as soon as possible. Hi friendav compolation error - Java Beginners
jav compolation error find symbol error
jav - Spring
How do I compile the registration form?
How do I compile the registration form? How do I compile the registration form as stated at the bottom of the following page (URL is below). Do I...://
Mastering Struts - Struts
? Until and unless i am guided over some struts Project i dnt think...its possible. Right? Hi Friend,
Please visit the following links:
http...Mastering Struts Sir,
how can i master over struts
struts code - Struts
struts code Hi all,
i am writing one simple application using struts framework. In this application i thought to bring the different menus of my... am thinking to bring the menus in a tree view in my application "is it possible
validations in struts - Struts
as possible. Hi friend,
You must...validations in struts hi friends plz give me the solution its urgent
I an getting an error in tomcat while running the application in struts
hi - Ajax
class Data extends HttpServlet
{
private int countryID;
public void doGet... me know what kind of error u getting in Ajax.
if possible come online through
hi - Hibernate
as possible.
thanks
Hi friend,
Read for more information...hi hi all,
I am new to hibernate.
could anyone pls let me know what is generic DAO class generated by MyEclipse while doing mapping.
I want
Java compile time error. - Java Beginners
Java compile time error. CreateProcess error=2, The system cannot find the file specified Pleae Describe your query Hi friend,
Please specify in detail and send me code.
If you are new java
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm... when required.
I could not use the Struts API FormFile since
Possible ways to Retrieve Data - Development process
Possible ways to Retrieve Data
Hi Deepak,
Can u explain me, how manys we can retrive data from database.
am confused with ,1)using beans like set property and get property 2)using JSTL tags and apart frm
unable to compile class file - JSP-Servlet
unable to compile class file I wrote database connection in jsp file... in that dropdown list box. Hi,
dynamic bombobox in servlet...*;
import java.util.ArrayList;
public class ComboboxList extends HttpServlet
need help with java program as soon as possible
number.
Hi Friend,
Try the following code:
import java.util.*;
public class NumberExample{
public static void main(String[]args){
int
Struts
Struts How Struts is useful in application development? Where to learn Struts?
Thanks
Hi,
Struts is very useful in writing web... performance Java web applications that runs on Java enabled application servers.
Struts
java - Struts
java Hi,
I want full code for login & new registration page in struts 2
please let me know as soon as possible.
thanks,. Hi friend,
I am sending you a link. This link will help you. Please visit for more
java - Struts
to attend the inteviews (fake 1+/2+)is it possible to attend and which type of questiond they may ask.i have scjp and scwcd and good knowledge in struts,hibernate. Hi Friend,
Please visit the following link:
http
Hi.... - Java Beginners
Hi.... Hi Friends
when i compile jsp file then got the error "code to large for try statement" I am inserted 177 data please give me solution and let me know what is the error its very urgent Hi Rag
java - Struts
java When i am using Combobox.
when i am selecting the particular value.
how can i pass the value to the action.
please give me the suggestion as early as possible.
Hi friend,
To solve the problem
java - Struts
is in action class,because it is not exucted ,but formbean will executed...this is my... java.io.IOException;
public class LoginAction extends Action...java hi..,
i wrote login page with hardcodeted username
struts - Struts
; Hi,Please check easy to follow example at dispatchaction vs lookupdispatchaction What is struts
Hello - Struts
but how is possible in java Hi friend,
The standard Java...Hello Hi friends,
I ask some question please read carefully and let me know I want to create installation file........it is possible
struts
struts <p>hi here is my code can you please help me to solve...*;
import org.apache.struts.action.*;
public class LoginAction extends Action...;
<p><html>
<body></p>
<form action="login.do">
struts
struts hi
in my application number of properties file are there then how can we find second properties file in jsp page...;gt;
<html:form
<pre>
struts - Struts
struts Hi,
I am new to struts.Please send the sample code for login... the code immediately.
Please its urgent.
Regards,
Valarmathi Hi Friend....shtml
http
Hi,
Hi, Hi,what is the purpose of hash table...-config.xml
Action Entry:
Difference between Struts-config.xml
i cant find any compile time error but there is runtime error.
i cant find any compile time error but there is runtime error. import java.sql.*;
public class JDBCExample{
public static void main(String args...){
System.out.println("Error:connection not created");
}
Hi
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/2673 | CC-MAIN-2015-35 | refinedweb | 1,495 | 65.42 |
HOSTCC scripts/sign-file In file included from scripts/sign-file.c:46:0: /usr/include/openssl/cms.h:62:2: error: #error CMS is disabled. #error CMS is disabled. ^ scripts/Makefile.host:91: recipe for target 'scripts/sign-file' failed make[1]: *** [scripts/sign-file] Error 1 Makefile:567: recipe for target 'scripts' failed make: *** [scripts] Error 2
Advertising
Fix SSL headers so that the kernel can build with LibreSSL --- scripts/sign-file.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/sign-file.c b/scripts/sign-file.c index 250a7a6..a0b806d 100755 --- a/scripts/sign-file.c +++ b/scripts/sign-file.c @@ -39,7 +39,7 @@ * signing with anything other than SHA1 - so we're stuck with that if such is * the case. */ -#if OPENSSL_VERSION_NUMBER < 0x10000000L +#if (OPENSSL_VERSION_NUMBER < 0x10000000L || LIBRESSL_VERSION_NUMBER) #define USE_PKCS7 #endif #ifndef USE_PKCS7 -- 2.7! _______________________________________________ kbuild-devel mailing list kbuild-devel@lists.sourceforge.net | http://www.mail-archive.com/kbuild-devel%40lists.sourceforge.net/msg02688.html | CC-MAIN-2017-17 | refinedweb | 154 | 52.56 |
Convert::Ascii85 - Encoding and decoding of ascii85/base85 strings
use Convert::Ascii85; my $encoded = Convert::Ascii85::encode($data); my $decoded = Convert::Ascii85::decode($encoded); use Convert::Ascii85 qw(ascii85_encode ascii85_decode); my $encoded = ascii85_encode($data); my $decoded = ascii85_decode($encoded);
This module implements the Ascii85 (also known as Base85) algorithm for encoding binary data as text. This is done by interpreting each group of four bytes as a 32-bit integer, which is then converted to a five-digit base-85 representation using the digits from ASCII 33 (
!) to 117 (
u).
This is similar to MIME::Base64 but more space efficient: The overhead is only 1/4 of the original data (as opposed to 1/3 for Base64).
Converts the bytes in DATA to Ascii85 and returns the resulting text string. OPTIONS is a hash reference in which the following keys may be set:
By default, four-byte chunks of null bytes (
"\0\0\0\0") are converted to
'z' instead of
'!!!!!'. This can be avoided by passing a false value for
compress_zero in OPTIONS.
By default, four-byte chunks of spaces (
' ') are converted to
'+<VdL'. If you pass a true value for
compress_space in OPTIONS, they will be converted to
'y' instead.
This function may be exported as
ascii85_encode into the caller's namespace.
Converts the Ascii85-encoded TEXT back to bytes and returns the resulting byte string. Spaces and linebreaks in TEXT are ignored.
This function may be exported as
ascii85_decode into the caller's namespace., MIME::Base64
Lukas Mai,
<l.mai at web.de>
Please report any bugs or feature requests to
bug-convert-ascii85 at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Convert::Ascii85
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
See for more information. | http://search.cpan.org/dist/Convert-Ascii85/lib/Convert/Ascii85.pm | CC-MAIN-2016-50 | refinedweb | 355 | 56.15 |
This post is an extended and revised version of "Back to basics, just reading some data from a DB". Feedback on that provided more than enough inspiration for this one.
Are you already drowning in the enormous amount of TLA's, tools, utilities, methodologies and whatever else ? Let's take a step back. Do you still know how to read and work with will do it using Test Driven techniques end will up with a very simple domain object. On this journey we will meet several core buzzwords in "modern" software development. I am not going to present anything new or revolutionary, most of the code can be found scattered over MSDN. The only thing I want to reach is getting my feet back on the ground to get the overview again.
What about DataAdapters and DataSets ?
In a lot of older projects I used to rely on DataAdapters and DataSets when working with data. Perhaps the main reason was that that it was very hard to work with anything else than datasets in .NET 1. It was the only thing the other parts of the framework understood. In those days the data discussion used to focus whether to use DataAdapters or DataReaders. But in essence there is not really a big difference, internally a DataAdapter uses a DataReader and nobody ever found a clear performance difference between the two. Which is no wonder, over 99% of the performance is determined by the efficiency of you SQL statement. But what I did find out over the years was the problem of maintainability. Especially when the data underlying the dataadapter got split over several tables it is a hell of a job to update all the sql and keep the datadapter working. Right now I am replacing more and more datadapter code with the kind of stuff I am demonstrating here.
Getting started
I want to work with in the database and the code which is going to work with it. For now that other code can work (and be tested against) the mock object. While we will start writing the code to read a customer from the database.
Reading data.
An open database connection is a very expensive and also an unmanaged resource, which means it is not under control of the .NET runtime. The SqlConnection object wraps up the connection, the SqlCommand has a SqlConnection property and the SqlDatareader works on a SqlCommand object with an open connection. So all these three classes work with an unmanaged (database connection) resource and thus all three implement the IDisposable interface. Writing correct code would lead to call their Dispose method when finished or work with the using statement. Instead of writing political correct code here I am focusing on the connection resource itself. Open a connection as late as possible and always make sure it is closed as soon as possible and make sure it is always closed when hitting an exception. The try finally block will do that. Don't be afraid to lose performance when opening and closing connections often. The ado.net connection pooling will do the hard work, this is something which really works well. I doubted once, only to end up fully convinced.
After opening the connection data can be read. does have the IsDBNull method to check for a DBnull value. So I can check before I read,
name = sdr.IsDBNull(1) ? null : sdr.GetString(1);
Note that DBNull.Value is not the same as null.
But this code is a little smelly. Twice in the line it assumes the name column having index 1. Which is only so when name was the second column in the select statement of the underlying SQL. Changing the sql will break the code. The get around this the datareader has the GetOrdinal method which will return the index of a column by name.
if (sdr.Read())
int numberIndex = sdr.GetOrdinal("Number");
int nameIndex = sdr.GetOrdinal("Name");
int phoneindex = sdr.GetOrdinal("Phone");
number = sdr.GetInt32(numberIndex);
name = sdr.IsDBNull(nameIndex) ? null : sdr.GetString(nameIndex);
phone = sdr.IsDBNull(phoneindex) ? null : sdr.GetString(phoneindex);
The test will no longer throw an exception, but still fail.
The problem now has to do with the content of the database. This is a problem on the customer level, not the database handling code. Before diving deeper into customers I want to show just one more thing of the datareader.
A datareader has an indexer to get straight to the raw data. As indexing expression it will accept the index of the column as well as the column name. Working with the raw data itself can be quite handy when you cannot rely on the type of the underlying DB data. There is a GetInt16, a GetInt32 and a GetInt64 method. Now what if you do not know which one to use ? This snippet will work for any type of integer.
long number = long.Parse(sdr["Number"].ToString());
The raw data is first converted to a string which is parsed into a long (int64). This does work for MS sql server, I have no guarantees for any other database.
Enter the domain object
All the code presented so far does it read data from the database to fill a customer data object. I have managed to hide the actual DB itself, whether it was a mock or sql server, the ICustomer object does not care. But a customer is more than just some data, it does have behavior as well. The most simple behavior would be validating it. My test just returned me a customer from the database without a phone number. That might be OK from the DB's point of view but it's not a customer my app can work with. The ICustomerDO interface does describe a customer which better fulfills the needs of my app.
public interface ICustomerDO
string Name { get;}
bool Validate();
The Validate method of a customer domain object could do anything, for instance
public bool Validate()
return Name != null && Phone != null && myCustomPhoneNumberValidation(Phone);
This can go far beyond anything expressible in database validations.
But how are we going to bring the Customer data object and the CustomerDO domain object together without getting them entangled ? There should be a separation of concerns. On one side there is persistence: a way to store (and retrieve) customer data. On the other side is a customer domain object, a customer as my app would like to work with. Perhaps a tempting approach might be to inherit every data object class from a base domain object class. But the moment I start doing that the mess starts as the database code will need the domain object code.
Enter the Repository
We can abstract the persistence of customers away in a repository. An interface describes such a repository.
public interface ICustomerRepository
ICustomer CustomerById(int customerNumber);
List<ICustomer> CustomerByName(string customerName);
A customer repository object of this kind has a method which will return a customer based on a customer number and a method which returns a list of customers with a given name. For now I will only go into the CustomerById member.
Let's write a test what the repository should do. To start with a mock.
public void CanCreateMockedCustomerRepository()
ICustomerRepository testRepository = new MockedCustomerRepository();
ICustomer testCustomer = testRepository.CustomerById(12);
Assert.AreEqual("Alfred", testCustomer.Name);
Assert.AreEqual("+31505035737", testCustomer.Phone);
This test is almost the same as the one for the mocked customer. Instead of creating a mocked customer object the method of the repository is going to do that. To make the test pass requires one line of code.
public ICustomer CustomerById(int customerNumber)
return new MockedCustomer(customerNumber);
Again this code is quite trivial. But it does give a clear idea how to write the "real" database repository. The test builds on the DBcustomer test
public void CanReadCustomerFromRepository()
ICustomerRepository testRepository = new CustomerRepository();
ICustomer testCustomer = testRepository.CustomerById(1);
And the code to make it pass builds on what we learned building the mockedrepository.
public class CustomerRepository : ICustomerRepository
public ICustomer CustomerById(int customerNumber)
return new DBcustomer(customerNumber);
…
Having these repositories at hand makes it possible to limit the visibility of the data classes we started with (DBcustomer and MockedCustomer) to internal. Only the implemented ICustomer interface is exposed to other assemblies using the repository. The downside doing this that the test running directly against these data objects will no longer build. This is the only thing which sometimes bothers me (and some others) with TDD. To be able to write tests against a piece of code you have to expose it to the outer world with public visibility. You can do two things. Either rewrite the tests to target exposed members or keep your members public for the sake of testing. A third way might be to include the tests in the assembly they are testing but in that case your assembly while be exposing the test classes and test methods. (A class containing tests and the test methods themselves should always have public visibility.)
Putting it together
Now we have an interface describing a customer repository, a real and a mocked implementation for that and a customer domain object with some behavior dependent on data from a repository. The customer domain object needs a repository member; it is passed one in its constructor
public class Customer : ICustomerDO
private ICustomerRepository repository;
public Customer(ICustomerRepository repository)
this.repository = repository;
public void LoadByNumber(int customerNumber)
ICustomer customerData = repository.CustomerById(customerNumber);
number = customerData.Number;
name = customerData.Name;
phone = customerData.Phone;
private bool myCustomPhoneNumberValidation(string number)
// Anything goes
return true;
get { return phone; }
public bool Validate()
return name != string.Empty && phone != string.Empty && myCustomPhoneNumberValidation(phone);
Again a test is the best way to see how the CustomerDO works
public void CanCreateMockedCustomerDomainObject()
Customer testCustomer = new Customer(new MockedCustomerRepository());
testCustomer.LoadByNumber(12);
The Customer class is the domain object When instantiating a new customer I pass in an object which does implement the ICustomerRepository interface. Like the MockedCustomRepository class.
But I can also use the "real" CustomerRepository class.
public void CanCreateCustomerDomainObject()
Customer testCustomer = new Customer(new CustomerRepository());
testCustomer.LoadByNumber(1);
Which repository to use is injected into the customer via its constructor. This is what is called Dependency Injection. The customer domain object does not have any knowledge about repository implementations, it only has knowledge of the ICustomerRepository interface. The container which instantiates the customer domain object is in control of the repository which will be used. Another name for Dependancy Injection is Inversion of Control, or IoC for short.
Now the customer object does have behavior as well. Which can be tested. Let's return to the customer which made our first database tests fail because it's database record did not contain a phone number. A test to describe this, as well as the resulting behavior
public void CanValidateCustomer()
Assert.IsTrue(testCustomer.Validate());
testCustomer.LoadByNumber(2);
Assert.AreEqual(string.Empty, testCustomer.Phone);
Assert.IsFalse(testCustomer.Validate());
When the code is right and the DB contains the right data all tests will pass
Most of these test do rely on a database and its content. For several people a reason to name them integration tests and not unit test. For this story they are just tests. As I hope to have demonstrated building code (and keep it running !) based on tests is pretty straightforward and you can build on this code to further develop the app. When you want to get rid of the dependency on the database just use the mocked repository.
Winding down
Building this solution has lead to several projects. The good thing is that these project have very little interdependencies.
Only the Contracts project is referenced by all others. This assembly only contains the interface declarations, no executing code. The DomainObjects, the Mocks and the Persistence do not depend on any other project than the Contracts project. So they can be easily swapped out for another project as long as these implement the interfaces defined in the Contracts assembly. A next step might be to replace the Persistence with an OR mapper like nHibernate.
Everything comes together in the tests. Did you notice there is not even an app yet ? No choice yet whether it is going to be a web or winforms app. How you are going to present the customer data is open. Even the simplest approach will work. With .NET 2.0 and later you can (data-)bind to any public member of any object, you no longer need a dataset for that. As the customer domain object is a Plain Old CLR Object there will be no big hurdles.
Well, that should be it. But don't take this as the way to work. To get a feeling for the real stuff I invite you to write an implementation for the CustomerName method which returns a list of customers. That will require some refactorings. But as long as you keep working in a test driven way you can experiment around and stay sure you are not breaking any existing code. But do keep your concerns separated and your feet on the ground.
Pingback from Back to basics, just reading some data from a DB - Peter's Gekko
Pingback from Back to basics, from the DB to a simple domain object
Good article.
At first I was a little skeptical of your direction but once I read through the whole thing I definately found it to be a good grounding experience as you stated.
Bravo!
Pingback from kre8ive » Back to basics, from the DB to a simple domain object
In a relative short span of time nHibernate has become a major member of my toolbox. It has become the
Pingback from sharepoint.codebetter.com/.../back-to-basics-from-the-db-to-a-simple-domain-object.aspx
Being sick of all the hassles it took to keep my own server up and running I've moved it to | http://codebetter.com/blogs/peter.van.ooijen/archive/2007/12/03/back-to-basics-from-the-db-to-a-simple-domain-object.aspx | crawl-001 | refinedweb | 2,314 | 56.55 |
NAME
lseek - reposition read/write file offset
SYNOPSIS
#include <sys/types.h> #include <unistd.h> off_t lseek(int fildes, off_t offset, int whence);
DESCRIPTION
The lseek() function repositions the offset of the open file associated with the file descriptor fildes. EINVAL whence is not one of SEEK_SET, SEEK_CUR, SEEK_END, or the resulting file offset would be negative. EOVERFLOW The resulting file offset cannot be represented in an off_t. ESPIPE fildes is associated with a pipe, socket, or FIFO.
CONFORMING TO
SVr4, POSIX, 4.3BSD
RESTRICTIONS
Some devices are incapable of seeking and POSIX does not specify which devices must support it. Linux specific restrictions: using lseek() on a tty device returns ESPIPE.) | http://manpages.ubuntu.com/manpages/dapper/man2/lseek.2.html | CC-MAIN-2014-15 | refinedweb | 112 | 51.04 |
On 3 October 2012 16:39, Daniel Baumann <daniel.baumann@progress-technologies.net> wrote: > On 10/03/2012 04:28 PM, Michal Suchanek wrote: >>> i pretty sure (but i don't actually know, will have to test at a later >>> point), that when using multiple squashfs images, you don't get multiple >>> overlays exposed. > > just to be precise, sorry, i ment specifically 'in live-boot'. > >> You get multiple overlays when you ask for them. > > is it possible to have both the 'broken out' overlays, as well as a big > unified one, at the same time? with aufs and overlayfs? AFAIK this should be possible but is not well tested. The limitation is that a union filesystem cannot be easily used as writable branch for the same type of union. The union uses some filesystem namespace subset to store union metadata in the writeble branch and this namespace subset is not available in the union. iirc aufs uses special file names and overlayfs extended attributes subset. Thanks Michal | http://lists.debian.org/debian-live/2012/10/msg00051.html | CC-MAIN-2013-20 | refinedweb | 167 | 72.56 |
Hello,
I’m trying to grab the camera data from an OpenMV M7 camera to use in another program. The camera will remain connected to the pc it needs to send the data to via USB. Transmitting the images by UART or Serial is the goal.
I’m still a bit of a newbie to i/o, but using the thread here I’ve been able to get a bit of a start
So here’s what I have on the sending side currently:
import time, sensor, image, ustruct from pyb import UART, USB_VCP uart = UART(3, 19200, timeout_char=1000) sensor.reset() sensor.set_pixformat(sensor.RGB565) sensor.set_framesize(sensor.QVGA) sensor.skip_frames(time = 2000) clock = time.clock() usb = USB_VCP() while(True): clock.tick() img = sensor.snapshot().compress() usb.send(ustruct.pack("<L", img.size())) usb.send(img) #print(img) print(clock.fps()) time.sleep(100)
There are bits of UART in there but ignore them–for simplicity’s sake I’m starting with Serial. The program in question I’m supposed to be sending data to hasn’t been written yet so currently nothing is being done with the sent data.
I would appreciate it if someone could point me in the right direction re:
- Is the code above correct? I saw there’s a Serial class too but I’m not sure if that’s needed.
- What would be the simplest way to test if the images are being sent + can be reassembled correctly? Ex. should I whip up a Processing sketch, or?
Thanks for the help.
| https://forums.openmv.io/t/continously-send-images-to-pc/862 | CC-MAIN-2021-17 | refinedweb | 258 | 65.22 |
In the previous two posts, we’ve seen how to build a custom network API with Kubernetes CRDs and push the resulting configuration to network devices. In this post, we’ll apply the final touches by enabling oAuth2 authentication and enforcing separation between different tenants. All of these things are done while the API server processes incoming requests, so it would make sense to have a closer look at how it does that first.
Kubernetes request admission pipeline
Every incoming request has to go through several stages before it can get accepted and persisted by the API server. Some of these stages are mandatory (e.g. authentication), while some can be added through webhooks. The following diagram comes from another blogpost that covers each one of these stages in detail:
Specifically for NaaS platform, this is how we’ll use the above stages:
- All users will authenticate with Google and get mapped to individual namespace/tenant based on their google alias.
- Mutating webhook will be used to inject default values into each request and allow users to define ranges as well as individual ports.
- Object schema validation will do the syntactic validation of each request.
- Validating webhook will perform the semantic validation to make sure users cannot change ports assigned to a different tenant.
The following sections will cover these stages individually.
Authenticating with Google
Typically, external users are authenticated using X.509 certificates, however, lack of CRL or OCSP support in Kubernetes creates a problem since lost or exposed certs cannot be revoked. One of the alternatives is to use OpenID Connect which works on top of the OAuth 2.0 protocol and is supported by a few very big identity providers like Google, Microsoft and Salesforce. Although OIDC has its own shortcomings (read this blogpost for details), it is still often preferred over X.509.
In order to authenticate users with OIDC, we need to do three things:
- Configure the API server to bind different user aliases to their respective tenants.
- Authenticate with the identity provider and get a signed token.
- Update local credentials to use this token.
The first step is pretty straightforward and can be done with a simple RBAC manifest. The latter two steps can either be done manually or automatically with the help of dexter. NaaS Github repo contains a sample two-liner bash script that uses dexter to authenticate with Google and save the token in the local
~/.kube/config file.
All that’s required from a NaaS administrator is to maintain an up-to-date tenant role bindings and users can authenticate and maintain their tokens independently.
Mutating incoming requests
Mutating webhooks are commonly used to inject additional information (a sidecar proxy for service meshes) or defaults values (default CPU/memory) into incoming requests. Both mutating and validating webhooks get triggered based on a set of rules that match the API group and type of the incoming request. If there’s a match, a webhook gets called by the API server with an HTTP POST request containing the full body of the original request. The NaaS mutating webhook is written in Python/Flask and the first thing it does is extract the payload and its type:
request_info = request.json modified_spec = copy.deepcopy(request_info) workload_type = modified_spec["request"]["kind"]["kind"]
Next, we set the default values and normalize ports:
if workload_type == "Interface": defaults = get_defaults() set_intf_defaults(modified_spec["request"]["object"]["spec"], defaults) normalize_ports(modified_spec["request"]["object"]["spec"])
The last function expands interface ranges, i.e. translates
1-5 into
1,2,3,4,5.
for port in ports: if not "-" in port: result.append(str(port)) else: start, end = port.split("-") for num in range(int(start), int(end) + 1): result.append(str(num))
Finally, we generate a json patch from the diff between the original and the mutated request, build a response and send it back to the API server.
patch = jsonpatch.JsonPatch.from_diff( request_info["request"]["object"], modified_spec["request"]["object"] ) admission_response = { "allowed": True, "uid": request_info["request"]["uid"], "patch": base64.b64encode(str(patch).encode()).decode(), "patchtype": "JSONPatch", } return jsonify(admissionReview = {"response": admission_response})
The latest (v1.15) release of Kubernetes has added support for default values to be defined inside the OpenAPI validation schema, making the job of writing mutating webhooks a lot easier.
Validating incoming requests
As we’ve seen in the previous post, it’s possible to use OpenAPI schema to perform syntactic validation of incoming requests, i.e. check the structure and the values of payload variables. This function is very similar to what you can accomplish with a YANG model and, in theory, OpenAPI schema can be converted to YANG and vice versa. However useful, such validation only takes into account a single input and cannot cross-correlate this data with other sources. In our case, the main goal is to protect one tenant’s data from being overwritten by request coming from another tenant. In Kubernetes, semantic validation is commonly done using validating admission webhooks and one of the most interesting tools in this landscape is Open Policy Agent and its policy language called Rego.
Using OPA’s policy language
Rego is a special-purpose DSL with “rich support for traversing nested documents”. What this means is that it can iterate over dictionaries and lists without using traditional for loops. When it encounters an iterable data structure, it will automatically expand it to include all of its possible values. I’m not going to try to explain how opa works in this post, instead I’ll show how to use it to solve our particular problem. Assuming that an incoming request is stored in the
input variable and
devices contain all custom device resources, this is how a Rego policy would look like:
input.request.kind.kind == "Interface" new_tenant := input.request.namespace port := input.request.object.spec.services[i].ports[_] new_device := input.request.object.spec.services[i].devicename existing_device_data := devices[_][lower(new_device)].spec other_tenant := existing_device_data[port].annotations.namespace not new_tenant == other_tenant
The actual policy contains more than 7 lines but the most important ones are listed above and perform the following sequence of actions:
- Verify that the incoming request is of kind
Interface
- Extract its namespace and save it in the
new_tenantvariable
- Save all ports in the
portvariable
- Remember which device those ports belong to in the
new_devicevariables
- Extract existing port allocation information for each one of the above devices
- If any of the ports from the incoming request is found in the existing data, record its owner’s namespace
- Deny the request if the requesting port owner (tenant) is different from the current tenant.
Although Rego may not be that easy to write (or debug), it’s very easy to read, compared to an equivalent implemented in, say, Python, which would have taken x3 the number of lines and contain multiple for loops and conditionals. Like any DSL, it strives to strike a balance between readability and flexibility, while abstracting away less important things like web server request parsing and serialising.
The same functionality can be implemented in any standard web server (e.g. Python+Flask), so using OPA is not a requirement
Demo
This is a complete end-to-end demo of Network-as-a-Service platform and encompasses all the demos from the previous posts. The code for this demo is available here and can be run on any Linux OS with Docker.
0. Prepare for OIDC authentication
For this demo, I’ll only use a single non-admin user. Before you run the rest of the steps, you need to make sure you’ve followed dexter to setup google credentials and update OAuth client and user IDs in
kind.yaml,
dexter-auth.sh and
oidc/manifest.yaml files.
1. Build the test topology
This step assumes you have docker-topo installed and c(vEOS) image built and available in local docker registry.
make topo
This test topology can be any Arista EOS device reachable from the localhost. If using a different test topology, be sure to update the inventory file.
2. Build the Kubernetes cluster
The following step will build a docker-based kind cluster with a single control plane and a single worker node.
make kubernetes
3. Check that the cluster is functional
The following step will build a base docker image and push it to dockerhub. It is assumed that the user has done
docker login and has his username saved in
4. Build the NaaS platform
The next command will install and configure both mutating and validating admission webhooks, the watcher and scheduler services and all of the required CRDs and configmaps.
make build
5. Authenticate with Google
Assuming all files from step 0 have been updated correctly, the following command will open a web browser and prompt you to select a google account to authenticate with.
make oidc-build
From now on, you should be able to switch to your google-authenticated user like this:
kubectl config use-context mk
And back to the admin user like this:
kubectl config use-context [email protected]
6. Test
To demonstrate how everything works, I’m going to issue three API requests. The first API request will set up a large range of ports on test switches.
kubectl config use-context mk kubectl apply -f crds/03_cr.yaml
The second API request will try to re-assign some of these ports to a different tenant and will get denied by the validating controller.
kubectl config use-context [email protected] kubectl apply -f crds/04_cr.yaml Error from server (Port [email protected] is owned by a different tenant: tenant-a (request request-001), Port [email protected] is owned by a different tenant: tenant-a (request request-001),
The third API request will update some of the ports from the original request within the same tenant.
kubectl config use-context mk kubectl apply -f crds/05_cr.yaml
The following result can be observed on one of the switches:
devicea#sh run int eth2-3 interface Ethernet2 description request-002 shutdown switchport trunk allowed vlan 100 switchport mode trunk spanning-tree portfast interface Ethernet3 description request-001 shutdown switchport trunk allowed vlan 10 switchport mode trunk spanning-tree portfast
Outro
Currently, Network-as-a-Service platform is more of a proof-of-concept of how to expose parts of the device data model for end users to consume in a safe and controllable way. Most of it is built out of standard Kubernetes component and the total amount of Python code is under 1000 lines, while the code itself is pretty linear. I have plans to add more things like an SPA front-end, Git and OpenFaaS integration, however, I don’t want to invest too much time until I get some sense of external interest. So if this is something that you like and think you might want to try, ping me via social media and I’ll try to help get things off the ground. | https://networkop.co.uk/post/2019-06-naas-p3/ | CC-MAIN-2020-05 | refinedweb | 1,819 | 51.28 |
Universally quantified types in C#
Brian Berns
Updated on
・3 min read
Functional programming in C# (7 Part Series)
Generic types
Generic types in C# are universally quantified. That means, for example, that
List<T> can contain elements of type
T for any type
T. To make the universal quantification explicit, we could imagine defining this type as:
public class ∀T.List<T> : IList<T>, ... // not legal C# { ... }
C# leaves out the
∀T. part (which means "for all
T"), but it's implied.
You can think of a universally quantified type such as
List<T> as a type-level function that takes a type as input (e.g.
int) and returns a type as output (e.g.
List<int>).
T is called a "type variable", since it represents an arbitrary type.
Parametric polymorphism
Generic types in C# are an example of what functional programming calls parametric polymorphism. Interestingly, C# generics implement let-polymorphism, which is somewhat more limited than full-blown first-class polymorphism.
Imagine that we're implementing an algorithm that has to work with generic lists. We need to be able to compute the "weight" of such a list, where the weight is an integer that is calculated somehow from a list. There might be multiple different ways of calculating a list's weight, so we have to be prepared to work with any of them. In particular, our job is to implement the following function:
// fix this so it compiles and runs successfully static int SumWeights( IList<int> ints, IList<string> strs, /*some type*/ getWeight) => getWeight(ints) + getWeight(strs);
As you can see, this function is passed two lists and has to sum their weights, as calculated by the given
getWeight function. Our only problem is that we have to specify a type for
getWeight. Since it's a function that converts a list to an integer, we can try to define the type as
Func<IList<T>, int>:
static int SumWeights( // compiler error IList<int> ints, IList<string> strs, Func<IList<T>, int> getWeight) => getWeight(ints) + getWeight(strs);
The compiler doesn't accept this, though, because type variable
T isn't defined anywhere. That should be easy enough to fix, right? We just have to declare
T next to the
SumWeights name itself:
static int SumWeights<T>( // compiler error IList<int> ints, IList<string> strs, Func<IList<T>, int> getWeight) => getWeight(ints) + getWeight(strs);
But that doesn't work either! The compiler says it can't convert an
IList<int> or an
IList<string> to an
IList<T>. This makes sense, though, because by changing
SumWeights to
SumWeights<T> we made it generic, and the type represented by
T is now determined by the caller of the function. The compiler is telling us that we can't assume that
T is either
int or
string (and it's certainly not both at once).
What's going on here? The problem is that we want to control the scope of the type variable
T so that it applies only to
getWeight, not to the entire
SumWeights function. Ideally, we'd like to write the type of
getWeight like this:
static int SumWeights( // compiler error IList<int> ints, IList<string> strs, ∀T.Func<IList<T>, int> getWeight) => getWeight(ints) + getWeight(strs);
We've added
∀T. here to declare the type variable
T and show the compiler that its scope should be limited to type of
getWeight. Of course, however, this isn't legal C#.
What's the best way to work around this limitation of generic types in C#? How would you implement the
SumWeights function so it compiles and successfully adds the weights of the two given lists, using a function supplied by the caller to determine the weight of any given generic list? If you have an idea, suggest it in the comments!
Credit for this idea goes to Nicholas Cowle.
Functional programming in C# <<
Why would you try to introduce another problem and pull code from a different domain?
What if there were multiple different versions of
getWeight, and you wanted to
SumWeightsto work with any of them? You'd have to pass it in somehow as an argument, right?
Aren't you contradicting yourself? Shouldn't pure function always return the same result with the same input? What exactly you mean by "multiple different getWeight() functions"? Each domain implements only one existing version from the Interface. There can be differences but only in input parameters to the getWeight() method which are handled by the compiler inference.
You said you need to sum the weights. While it is possible to sum int and float by doing implicit conversion, there cannot exist implementation of an interface method with different output type.
While what you propose introduces domain modelling problem. Imagine a new developer tasked to implement
IList<float>. When he's finished implementing the interface your code is still broken because he has no idea about your
Func<IList<>, int> getWeight()method where he needs to additionally implement new case/switch to handle float sums.
I think you're making this more complex than it needs to be. I'm just trying to create a scenario where you're passing a generic function (
getWeight) to a non-generic method (
SumWeights).
Are you talking about IoC / DI? ;)
Edit: Now I remember! Your solution reminds me of Service locator pattern, which is BTW an anti-pattern. That's why I found it immediately wrong.
Your problem is interesting. It feels like it should work, but it doesn't.
The question is really, why can't I convert IList<int> to IList<T>? Or IList<object> for that matter, since int is an object. Let's assume we could:
Only now I should be able to do objs.Add(new object()), and that clearly is not the case.
For your specific problem, you don't actually use the type of the list in any way, so you need to use a base type that is not generic, like System.Collections.IList:
And I know your objection would be that you didn't mean specifically IList<T>, but anything, which might not have a non generic base class. Perhaps the feature in C# that you are looking for is something like this:
But that's illegal in C#. A working solution could be using Func<object,Type,int> and then you would call it like getWeight(ints,typeof(IList<>)), which is legal, only now you have to do a lot of reflection in getWeight.
Thanks for your response. Can you make it work with a generic
getWeight(i.e. one that takes an
IList<T>) and without using reflection? It can be done!
Hmm... there are multiple ways to do it. I don't like any of them. This might work:
You’re on the right track with your idea to insert something that prevents the generic-ness of
getWeightfrom bubbling up to
SumWeights. However, it can be done safely, without going around the compiler’s type checker via
dynamic.
You can't pass a generic function as a parameter without having your own function be generic. Therefore you need to encapsulate it. Like in an interface. This would be the best design for your problem.
But you want to pass a function as a parameter, so probably this ain't it. I have the feeling that whatever you're proposing will not sit well with me.
You got it. That interface is exactly what I'd suggest. You're still passing in the function, just with one level of indirection. It's a bit more verbose, but not too bad, I think. Nice work!
You answered your own question as soon as you've realized you could not give two different T under one scope. Just declare your getWeight somewhere else like extension method, calculate weights and sum them. Or you can constraint your Type T to some ad-hoc interface.
T doesn't mean here for all Ts. It means whenever T appears, it is type of T. You can distinguish two different types by but I doubt it is going to be helpful.
One of the requirements is that you have to pass a generic
getWeightto
SumWeightssomehow. Imagine that there are multiple different implementations of
getWeightthat all have the same type signature. How do you tell
SumWeightswhich version to use? | https://dev.to/shimmer/c-generic-type-brainteaser-2lk1 | CC-MAIN-2020-24 | refinedweb | 1,393 | 73.58 |
Some more additions to the script - mainly reducing the clang args after the creduce run by removing them one by one and seeing if the crash reproduces. Other things:
Fix some typos, pass --tidy flag to creduce
reuploaded diff with full context
Thanks for this!
Please expand a bit on why 5 was chosen (is there some deep reason behind it, or does it just seem like a sensible number?)
nit: please prefer [x for x in matches if x and x.strip() not in filters][:5]. py3's filter returns a generator, whereas py2's returns a list.
I think _ is an open file descriptor; please os.close it if we don't want to use it
Same nit
was this intended to use cmd?
Do we want to check the exit code of this? Or do we assume that if clang crashes during preprocessing, we'll just see a different error during check_expected_output? (In the latter case, please add a comment)
Is it intentional to group multiple consecutive non-dashed args? e.g. it seems that clang -ffoo bar baz will turn into ['clang', '-ffoo bar baz']
Should we be shlex.quote'ing y here (and probably in the return x + [y] below)?
IMO, if even extra_arg is in new_args, we should still move it near the end. Arg ordering matters in clang, generally with later args taking precedence over earlier ones. e.g. the -g$N args in
If we're replacing other args with their effective negation, does it also make sense to replace all debug-ish options with -g0?
Might not want to have a space here; -DFOO=1 is valid (same for -I below)
Probably want to do a similar thing for -Xclang (which, as far as this code is concerned, acts a lot like -mllvm)
I'm unclear on why we do a partial simplification of clang args here, then a full reduction after creduce. Is there a disadvantage to instead doing:
r.write_interestingness_test()
r.clang_preprocess()
r.reduce_clang_args()
r.run_creduce()
r.reduce_clang_args()
?
The final reduce_clang_args being there to remove extra -D/-I/etc. args if preprocessing failed somehow, to remove -std=foo args if those aren't relevant after reduction, etc.
The advantage to this being that we no longer need to maintain a simplify function, and creduce runs with a relatively minimal set of args to start with.
In any case, can we please add comments explaining why we chose whatever order we end up going with, especially where subtleties make it counter to what someone might naively expect?
There is no deep reason - it was an arbitrary smallish number to hopefully not only pick up common stack trace functions
yep
I think checking the exit code is a good idea
I guess that was originally the intention, although now that I think of it it makes more sense to group at most one argument.
It quotes everything right before writing to file - are there reasons to quote here instead?
I guess -g0 is not a cc1 option, nor is -gdwarf? So this is essentially just removing -gcodeview. I'm actually not sure what the other flags do.
Basically the disadvantage is that clang takes longer to run before the reduction, so it takes unnecessary time to iterate through all the arguments beforehand.
And yeah, the final reduce_clang_args is there to minimize the clang arguments needed to reproduce the crash on the reduced source file.
If that makes sense, I can add comments for this-
I recently discovered NamedTemporaryFile, maybe that would help simplify up the various mkstemp usages.
Style nits, added comments
FWIW, opportunistically trying to add -fsyntax-only may help here: if the crash is in the frontend, it means that non-repros will stop before codegen, rather than trying to generate object files, or whatever they were trying to generate in the first place.
We're shlex.spliting groups below, and I assume the intent is Reduce.ungroup_args(Reduce.group_args_by_dash(args)) == args.
If we don't want to quote here, we can also have ungroup_args and group_args_by_dash deal in lists of nonempty lists.
Ah, I didn't realize this was dealing with cc1 args. My mistake.
I'm not immediately sure either, but grepping through the code, it looks like -debug-info-kind= may be the main interesting one to us here.
(You can ignore this comment if we're dealing in cc1; -Xclang is just "pass this directly as a cc1 arg")
Eh, I don't have a strong opinion here. My inclination is to prefer a simpler reduction script if the difference is len(args) clang invocations, since I assume creduce is going to invoke clang way more than len(args) times. I see a docstring detailing that simplify is only meant to speed things up now, so I'm content with the way things are.
good point- I guess the whole grouping thing is unnecessarily complicated, so I got rid of it and it now removes the next arg in try_remove_arg_by_index
ah, ok.
fix issue with grouping two command line args together
Does this need to be Reduce(object): for python2?
Some operating systems use a different assertion format (see my reduce script:)
For MacOS/FreeBSD we need to also handle r"Assertion failed: \(.+\),". Over the past two years I have also had cases where the other message formats have been useful so I would quite like to see them added here as well.
If we are writing the preprocessed output to that tempfile anyway, we could use stdout=tmpfile?
For python3 this would be simpler with subprocess.check_call() but I'm not sure python 2.7 has it.
I would move this before the remove_arg_by_index call since all llvm args start with a - and try_remove_arg_by_index will cause lots of invalid command lines to be created otherwise.
Yes that sounds like a good idea! I just do -emit-llvm to avoid assembly output but for parser/sema crashes -fsyntax-only would save some time.
Another one I found useful was -Werror=implicit-int to get more readable test cases from creduce:
Without that flag lots of test cases look really weird due to the implicit int and various inferred semicolons.
Stack traces also look different on macOS and it would be nice to handle that too.
Here's a sample I got from adding a llvm_unreachable at a random location:
My unreachable message...
UNREACHABLE executed at /Users/alex/cheri/llvm-project/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp:4468!
Stack dump:
0. Program arguments: /Users/alex/cheri/llvm-project/cmake-build-debug/bin/opt -mtriple=cheri-unknown-freebsd -mcpu=cheri128 -mattr=+cheri128 -target-abi purecap -relocation-model pic -S -instcombine -simplifycfg /Users/alex/cheri/llvm-project/llvm/test/CodeGen/Mips/cheri/simplifycfg-ptrtoint.ll -o -
1. Running pass 'Function Pass Manager' on module '/Users/alex/cheri/llvm-project/llvm/test/CodeGen/Mips/cheri/simplifycfg-ptrtoint.ll'.
2. Running pass 'Combine redundant instructions' on function '@cannot_fold_tag_unknown'
0 libLLVMSupport.dylib 0x0000000114515a9d llvm::sys::PrintStackTrace(llvm::raw_ostream&) + 45
1 libLLVMSupport.dylib 0x00000001145153f1 llvm::sys::RunSignalHandlers() + 65
2 libLLVMSupport.dylib 0x0000000114515fbf SignalHandler(int) + 111
3 libsystem_platform.dylib 0x00007fff5b637b3d _sigtramp + 29
4 libsystem_platform.dylib 0x00007ffee20d0cf0 _sigtramp + 2259259856
5 libsystem_c.dylib 0x00007fff5b4f51c9 abort + 127
6 libLLVMSupport.dylib 0x000000011446bb12 llvm::llvm_unreachable_internal(char const*, char const*, unsigned int) + 162
7 libLLVMInstCombine.dylib 0x0000000112c345c8 llvm::InstCombiner::foldICmpUsingKnownBits(llvm::ICmpInst&) + 4136
8 libLLVMInstCombine.dylib 0x0000000112c34d19 llvm::InstCombiner::visitICmpInst(llvm::ICmpInst&) + 569
9 libLLVMInstCombine.dylib 0x0000000112bb9cf2 llvm::InstCombiner::run() + 1522
10 libLLVMInstCombine.dylib 0x0000000112bbb310 combineInstructionsOverFunction(llvm::Function&, llvm::InstCombineWorklist&, llvm::AAResults*, llvm::AssumptionCache&, llvm::TargetLibraryInfo&, llvm::DominatorTree&, llvm::OptimizationRemarkEmitter&, bool, llvm::LoopInfo*) + 624
11 libLLVMInstCombine.dylib 0x0000000112bbb6d6 llvm::InstructionCombiningPass::runOnFunction(llvm::Function&) + 214
12 libLLVMCore.dylib 0x0000000111c0bb4d llvm::FPPassManager::runOnFunction(llvm::Function&) + 317
13 libLLVMCore.dylib 0x0000000111c0be83 llvm::FPPassManager::runOnModule(llvm::Module&) + 99
14 libLLVMCore.dylib 0x0000000111c0c2c4 (anonymous namespace)::MPPassManager::runOnModule(llvm::Module&) + 420
15 libLLVMCore.dylib 0x0000000111c0c036 llvm::legacy::PassManagerImpl::run(llvm::Module&) + 182
16 opt 0x000000010db6657b main + 7163
17 libdyld.dylib 0x00007fff5b44ced9 start + 1
I think it makes sense to remove some clang args before running creduce since removal of some flags can massively speed up reduction later (e.g. not emitting debug info or compiling at -O0, only doing -emit-llvm, etc) if they avoid expensive optimizations that don't cause the crash.
I think it still works, but adding object makes sense.
added the error messages from your script
I think python2.7 has check_call - switched to using that
Sounds good-- I added -fsyntax-only, -emit-llvm and -Werror=implicit-int
I changed the regex to ignore the # at the beginning of the line - I think that should cover the mac os stack trace
added to error message regexes and command line flags
Only a few more nits on my side, and this LGTM. WDYT, arichardson?
Did we want to use NamedTemporaryFile here as rnk suggested?
(If not, you can lift the os.closes to immediately after this line.)
Similar question about NamedTemporaryFile.
Please note that you'll probably have to pass delete=False, since apparently delete=True sets O_TEMPORARY on Windows, which... might follow the file across renames? I'm unsure.
Sorry -- should've been clearer. I meant "in the comment in the code, please expand a bit [...]" :)
Agreed. My question was more "why do we have special reduction code on both sides of this instead of just reduce_clang_args'ing on both sides of the run_creduce." It wasn't clear to me that simplify_clang_args was only intended to speed things up, but now it is. :)
change mkstemp to NamedTemporaryFile and add decode(utf-8) so it works on python3.5
switched to using NamedTemporaryFile here -
moved to NamedTemporaryFile with comment about delete=False
In D59725#1443990, @george.burgess.iv wrote:
Only a few more nits on my side, and this LGTM. WDYT, arichardson?
LGTM with the minor tempfile changes.
with tempfile.NamedTemporaryFile() as empty_file:? Definitely works with python 3 and 2.7 docs say This file-like object can be used in a with statement, just like a normal file..
Use a with statement?
Some crash messages might include the line numbers, do you think it makes sense to fall back to running with -E but without -P and also checking that? I do it in my script but I'm not sure preprocessing saves that much time since creduce will try to remove those statements early.
Add preprocessing with clang -E only;
use with for opening files
Makes sense-- in my experience preprocessing is still quite a bit faster than letting creduce remove all the statements.
LGTM once the tempfile is deleted.
I believe we are currently not deleting this temporary file.
Can delete=False be removed since we are using shutil.copy() instead of a move?
Yeah, I think that should be fine.
Tmpfile was not being removed.
In D59725#1477362, @arichardson wrote:.
Great to hear!
It appears that the script is currently more primitive than that currently though.
Thanks; I added a -F flag (r359216), and it seems like that fixes the issue.
Thanks!
What about shlex? How do you know bash won't expand that as regex already?
Sorry, not sure what you mean by bash expanding shlex as regex? It should be printing a quoted string to the bash script.
$ ls konsole*
'konsole*' konsole-ACNHfh.history konsole-CVnmsn.history konsole-HhEJRA.history
$ echo "konsole\*" | grep -F "konsole*"
$ echo "konsole\*" | grep -F "konsole\*"
konsole\*
The regex still needs to be escaped, regardless of -F.
Otherwise bash will expand it. | https://reviews.llvm.org/D59725?id=192867 | CC-MAIN-2021-10 | refinedweb | 1,911 | 56.76 |
.
your object material need a texture to replace with the video
import the assets/material/logo.material to the project
right click on plane / set materials and apply the logo material
then open the video editor again and select the texture
thanks!
Hello,
yes just use the video plugit and apply it as a texture on a 3D plan.
it wont appear Arkeon, is there something wrong with my setting?
Hi, is it possible to use video as an object for ar marker?thanks!
i try with logitech webcam that have another build in mic, and it work! but is there any option to increase the sensivity?:P!
anyway where it was saved to arkeon?thanks again
Hello !
Yes there a rendering->screenshot plugIT for this
haha my bad, thought it was media -> capture
thanks arkeon, u are the best;)
hello dear admins, is that possible to make a screencapture in os3d?is there any plugIT i can use to make screencapture?thanks in advance:)
problem solved arkeon. its my markers i print the wrong files. sorry for being silly
thanks in advance;)
Hi arkeon. long time no see, i have done my thesis. my AR project works well on my final thesis test. anyway thanks in advance
now after months i did not open it something happen. my project doesnt work anymore. it start well, the camera also works perfectly, but when i put the marker on camera the objects doesnt appear. can u help me how to fix this?thx again, u are the best!
hello, sorry for the silly question. i've search entire this forum but i cant find the reference about how osd3 do the augmented reality. can anyone tell me? i need this for my thesis report, thanks
Yes your picture must have a sufficient number of interest points for a good detection.
is this
will be good and have fast respond/detection if being markers?
hello
your marker is too simple for feature detections, tr to add a noisy background
so more complex image is better?
Hello,
the size of the bitmap depend of what you need and on performances.
Bigger your bitmap is and more you could "zoom" on it an keep the position track, but the detection will be a little slower.
i'd try 6.7 x 6.7 cm bitmap, just like the size in AR marker PlugIT (0.067m) the detection is pretty bad, i can't get stable detection just like using standart markers, can u give me suggestion how the bitmap size who have both, i mean the performance and the fast detection,
here is my marker pattern example, do this would have a good detection?
thanks a lot arkeon:)
Hi, i try to use my own bitmap as marker, in AR marker PlugIT i set the size at 0.067 meter. my question is what size of bitmap that i have to use?
thanks
hi, have u fix the problem arkeon?i try to put some icon too, but the *.exe still got the default icon, but when i run it, the icon in window and taskbar are changed to my custom icon
thanks before
this should work, try debugging the distances values with the misc/debug console plugIT.
add the link dist.current distance->debug.Print
so you could see if the in / out distance is correct or not
how to see that "correct or not" distance?i used that debug console plugIT, alexandre gave me the example how to use the distance plugIT months ago
thx before arkeon
hi arkeon, its me again
after installed the update my bug already fix, but theres another problem
my distance plugIT seems not work well, sometime it give action but sometime nothing happen
i use 3 markers, ex. A, B, and C. and 4 object ex. 1, 2, 3, 4.
compare the A1 and B2 object with a distance plugIT,
when A1 and B2 are close, using object follow plugIT an invisible "dummy" would follow
the A1 object, then come the C3 with another distance plugIT i compare the invisible dummy with C3,
when dummy and C3 are close, it would show my 4 object and hide the 1, 2, 3.
do u have any other way to do this?
you can make a backup of your projects files
but yes it should be ok
thank you so much arkeon, it works well
i can fix the bug now
ok my fault ^^
try with the beta version of OS3D (the current version in development) i've just uploaded it.
install scol voyager first … plugin.exe
several things has changed in the ergonomy and a lot of optims specially on scene loader which can be more than 90% faster ^^
then OS3D : … _setup.exe
is the new version OK for my previous project?because my thesis exam will start less than 2 weeks
thanks before arkeon
under the plugITs zone
just drag the mouse to resize the logs zone on the bottom
> Init default scene.
> Error : PlugIT AR marker : input/armarker/armarker.xml can not be found.
> Error : PlugIT AR marker in file :tools/os3dplugins/input/armarker/carmarker.pkg
File : C:\Program Files (x86)\Scol Voyager\Partition_LockedApp\tools\os3dplugins\input\armarker\carmarker.pkg
(!) Line #84:
let ??c3dxCameraSize -> [cw ch] in
link error
'c3dxCameraSize' unknown
> tes/tes.xos loaded.
> Error : PlugIT AR marker : input/armarker/armarker.xml can not be found.
where is the log zone?sorry for being silly arkeon:/
hmm it works on my version ^^
maybe I corrected this point some month ago
try updating the armarker plugIT from … /armarker/
to C:\Program Files (x86)\Scol Voyager\Partition_LockedApp\tools\os3dplugins\input\armarker
i'd replace that 3 files than, help me please
...
correct me if i'm wrong arkeon
Hi arkeon, i've try the object position plugIT with relation when the marker lost -> set position 0 0 0
but nothing change, when the marker lost the object still appear on camera view, can u tell me how to fix this, this a huge bug for my AR application, i need this for my thesis.
here is the tes file, please help | https://forum.openspace3d.com/search.php?action=show_user_posts&user_id=432 | CC-MAIN-2021-49 | refinedweb | 1,025 | 72.87 |
Q1: What value can I set the scene_tag parameter to?
Answer: Set this parameter to the name of the test group.
Q2: Search requests are received on the current day, but the A/B testing stops at the night of the current day. Does the report of the next day contain data?
Answer: Yes, the report of the next day contains data.
Q3: What is the flow_divider parameter used for?
Answer: You can add a value of the flow_divider parameter to the whitelist of a specific test. In this case, if you want to distribute traffic to the specified test for a query, set the flow_divider parameter to the value that you have added to the whitelist.
Q4: How can I improve the quality of the metrics for the A/B testing?
Answer: We recommend that you start the test based on previous behavioral data.
Q5: How do I encode URLs based on the values of the scene_tag and flow_divider parameters?
Answer: For example, the value of the scene_tag parameter is "test_1", and that of the flow_divider parameter is "Traffic Bucket".
The following example shows how to encode URLs in Python3.x based on the values of the scene_tag and flow_divider parameters:
import urllib from urllib import parse print(parse.quote("scene_tag:" + parse.quote("test_1") + ",flow_divider:" + parse.quote("Traffic Bucket"))) # Result: scene_tag%3Atest_1%2Cflow_divider%3A%25E6%25B5%2581%25E9%2587%258F%2520%25E5%2588%2586%25E6%25A1%25B6
You can set the scene_tag and flow_divider parameters to the following values in the OpenSearch console to perform a search test:
scene_tag:test_1,flow_divider:%E6%B5%81%E9%87%8F%20%E5%88%86%E6%A1%B6
Note: You can use online URL encoding tools to obtain the encoded value. | https://www.alibabacloud.com/help/en/opensearch/latest/faq-about-a-or-b-testing | CC-MAIN-2022-21 | refinedweb | 288 | 64.3 |
#include <hallo.h> * Greg Folkert [Fri, Feb 06 2004, 10:29:33AM]: > > What would be a good way to deal with the situation? Is it sufficient > > to let both packages conflict with each other? > Zounds.... This sounds like a job for our Super Hero:\ > "update-alternatives" > > And having both packages use unique names and then you choose your > alternative. This calls for trouble since you have a management layer above (ldconfig). IMHO it would be better to mediate between upstream authors. Let them choose different SONAMEs, idealy they should modify the base name. Or they could find an agreement to use even / odd soname numbers so the applications would never look for the wrong library. Though, -dev packages would still need to conflict with each either and have different names. MfG, Eduard. -- Man muß die Zukunft im Sinn haben und die Vergangenheit in den Akten. -- Charles Maurice de Talleyrand | https://lists.debian.org/debian-devel/2004/02/msg00226.html | CC-MAIN-2015-40 | refinedweb | 149 | 73.68 |
Web Age Solutions Inc.
At the time of this writing (Aug 2006), Glassfish has the most complete implementation of Java EE 5, including EJB 3. This tutorial shows how to install Glassfish from scratch and then develop and test a simple Session EJB using Eclipse. This is meant for developers who would like to learn EJB 3 right now, before any commercial development IDE becomes available.
First download Glassfish from the project's download page. Download a milestone build for maximum stability. This tutorial was developed using V1 Milestone 7. To follow this tutorial, it is recommended that you use a V1 build rather than V2 (which may behave slightly differently).
Glassfish will be downloaded as a JAR file (such as glassfish-installer-9.0-b48.jar).
First install Sun JDK 1.5 (also called J2SE 5). Installation of this is very simple and beyond the scope of this article.
Open a command window.
Set the JAVA_HOME variable to point to the root installation folder of JDK. For example:
set JAVA_HOME=c:\jdk15
Copy the Glassfish JAR file to C:\.
Begin installation of Glassfish, by entering the following command from C:\.
java -Xmx256M -jar glassfish-installer-XXXX.jar
System will open a license window.
Scroll down and click on Accept.
System will extract all the files in C:\glassfish.
In the command window, change directory to that folder.
cd c:\glassfish
To complete the setup, run this command:
lib\ant\bin\ant -f setup.xml
Make sure that the command ends with a BUILD SUCCESSFUL message. Congratulations, the installation is now complete.
First, we will start the Derby database server. This is not strictly necessary for this tutorial. However, it is a good idea to run the database if you plan to do more advanced EJB development (such as timers and entity persistence).

In the command prompt, change directory to the C:\glassfish\bin folder. Then, enter the command:
asadmin start-database
After the database starts, run the application server.
asadmin start-domain
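When you are finished working, you can shut the servers down with the corresponding asadmin stop commands. This is a minimal sketch; it assumes the default domain and database created by the standard installation, and must be run from the C:\glassfish\bin folder (or with that folder on your PATH):

```shell
# Stop the default application server domain
asadmin stop-domain

# Stop the Derby database server
asadmin stop-database
```
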
We will verify the installation by logging into the administration console. Open a new browser window and enter the URL.
Login using the user ID admin and password adminadmin. This will validate the installation.
We will assume that you already have a fully functional Eclipse 3.2 installation. Launch Eclipse. We will create two Java projects:
First, create a new Java project called Simple EJB Project.
Now, we will add two JAR files from Glassfish to the compiler's classpath. Open the properties dialog of the project. Then select the Java Build Path property. Click on the Libraries tab. Click on Add External JARs.
Navigate to the C:\glassfish\lib folder and select appserv-rt.jar and javaee.jar. Click on Open.
Make sure that the two JAR files are added to the compiler's class path. Click on OK to close the properties dialog. Note: You can export this Java project as an archive file and easily import it later for quickly creating a new Glassfish EJB project.
Now, we will create the client project called Simple Client Project. Quickest way to create this project is to copy the Simple EJB Project and paste it as Simple Client Project. Alternatively, you can create a new Java project and add the two JAR files to the build path as shown above.
The client project needs to refer to the remote or local interfaces of the EJBs. The simplest way to set this up is to set a dependency between the client and EJB projects. In real life, the client may be developed by a different team than the EJB and the client developers may not have access to the EJB project. In this case, the EJB developers need to export a client JAR file and hand it to the client developers. We will keep things simple, and have the client project refer to the EJB project.
Open the Properties dialog of the Simple Client Project. Select the Java Build Path property. Click on the Projects tab. Click on Add. Select the check box next to Simple EJB Project. Click on OK.
Click on OK to accept the changes.
In the Simple EJB Project, create a new package called com.webage.ejbs. First, we will create the remote interface for the EJB. In the package you have just created, create a new Java interface called SimpleBean. Add the following code:
import javax.ejb.*;
@Remote
public interface SimpleBean {
public String sayHello(String name);
}
Note: The @Remote annotation marks this interface as a remote interface. This annotation belongs to the javax.ejb package. Hence, we had imported the package in the file. Alternatively, you could use the annotation @javax.ejb.Remote.
Save and close this file.
Now, we will create the bean class. In the same package, create a new Java class called SimpleBeanImpl. Add the following code.
import javax.ejb.*;
@Stateless(name="Example", mappedName="ejb/SimpleBeanJNDI")
public class SimpleBeanImpl implements SimpleBean {
public String sayHello(String name) {
return "Hello " + name + "!";
}
}
Note:
Save and close this file.
In the Simple Client Project, create a new package called com.webage.client. In this package, create a new class called TestClient. Add the following code:
import javax.naming.*;
import com.webage.ejbs.SimpleBean;
public class TestClient {
public void runTest() throws Exception {
InitialContext ctx = new InitialContext();
SimpleBean bean = (SimpleBean) ctx.lookup("ejb/SimpleBeanJNDI");
String result = bean.sayHello("Billy Bob");
System.out.println(result);
}
public static void main(String[] args) {
try {
TestClient cli = new TestClient();
cli.runTest();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Note: We do a JNDI lookup of the name "ejb/SimpleBeanJNDI" as this has been configured as the JNDI name of the EJB. We can not use the dependency injection annotation @EJB to do the look up as our client will run outside of any Java EE container.
The rest of the code should be fairly straight forward. Save and close this file.
We will use the automatic deployment feature of Glassfish to rapidly deploy the EJB module. This option involves, simply dropping the EJB JAR file under the C:\glassfish\domains\domain1\autodeploy folder.
First, we will export the EJB JAR file. Right click on Simple EJB Project and select Export.
Expand Java and select JAR file. Click on Next.
In the JAR file text box, enter C:\glassfish\domains\domain1\autodeploy\test_ejb.jar.
Also, check the Overwrite existing files without warning option. This will speed up the export process in subsequent times.
Click on Finish to export the JAR file.
Glassfish will automatically install the EJB JAR file within a few seconds. It is a good idea to monitor the server's log file to be certain if the EJB JAR file was deployed successfully. Monitoring the log file will also help you detect problems with your EJB code as they occur at runtime. The log file is located at C:\glassfish\domains\domain1\logs\server.log. You can use a tool like PigTail to monitor the file.
Switch back to Eclipse.
In the Package Explorer view, right click on TestClient.java and select Run As->Java Application. Make sure that the Console view shows the following output.
During development, you will no doubt change the EJB code frequently. To re-deploy the EJB JAR file, simply export the EJB JAR file again following the process already mentioned.
From the C:\glassfish\bin folder, run these commands:
asadmin stop-database
asadmin stop-domain
In this tutorial, we have set up a development environment for Glassfish based EJB 3 development. We developed and tested a stateless session EJB. You can use this environment and approach to develop more complex EJBs, including entity persistence.
Feedback
Your e-mail:
Rate this article:
Very useful
Somewhat useful
Not bad
Needs many corrections | http://www.webagesolutions.com/knowledgebase/javakb/jkb005/index.html | crawl-002 | refinedweb | 1,281 | 69.07 |
I'm trying to improve my understanding of the global namespace in javascript and I'm curious about a few things:
function parent(){
var x = 'some value';//this var would be considered global to all children functions but not in the true global namespace
function child1(){
x.someMethod()
}
function child2(){
x*something;
}
function child3(){
x+=something;
child2()
child1()
}
child3()
}
parent()
Is there a god (i.e. a parent) object?
Yes. More technically, it's the global object that all these primitives are members of; it just happens that in the browser, the
window object is the global object.
> window.String === String; true
Why is it bad idea to have vars/functions on a global level?
Because if you're adding lots of 3rd party libraries/ scripts, they all share the same global object, there's the chance of name collisions. This is a real life problem with all the libraries which use
$ as an alias (jQuery, Prototype and more).
If it is really a bad idea to have vars/functions in global scope then would closures be the best way to avoid this?
x shouldn't be considered global. It's part of the closure formed by declaring the child functions inside the
parent() function. The problem part of your snippet is that
parent() is global; what happens if some other code re-declared
parent()? This would be better:
(function () { function parent(){ var x = 'some value'; function child1(){ x.someMethod() } function child2(){ x*something; } function child3(){ x+=something; child2() child1() } child3() } parent() }());
The fact
x is accessible within the child functions isn't bad; you should have written those functions yourself, so you should be aware of the existence of
x. Bear in mind that if you re-declare
x within those child functions with
var, you won't affect the
x in
parent(). | https://codedump.io/share/dRJEeKFiXqXE/1/understanding-the-javascript-global-namespace-and-closures | CC-MAIN-2018-17 | refinedweb | 304 | 67.99 |
SA Bugzilla – Bug 4778
Rule type plugins
Last modified: 2006-11-09 14:48:28 UTC
Enhancement to make it easier for plugins to define new rule types. Probably
move the existing rule types to plugins.
In response to comment 2 of bug 4776, one would prioritize rules across classes
by doing multiple calls into the rule class plugins, most likely one call per
priority.
Meta rules can track their own dependencies, so there is really no need for meta
rules to themselves track priorities. Each call into the meta rule type plugin
would evaluate those rules whose dependencies have been met.
Created attachment 3356 [details]
nonfunctional concept code
This is a not-yet-working stab at moving the "body" rule type into a plugin.
do_body_tests() has not yet been moved into the plugin. I also need to figure
out a generic way to deal with user_rules_to_compile, head_only_points,
body_only_points, and a few other loose ends.
Basically, the idea is that rule type plugins are responsible for:
* Registering their tests with define_test
* Storing their own test definitions
* Obtaining any data needed to evaluate their tests
* Evaluating their tests when called through check_priority
Rule type plugins may ignore priority when appropriate. For example, the meta
type plugin may evaluate a test when all its dependencies are met and DNS type
plugins may evaluate a test when all its DNS responses arrive.
There is no eval type plugin, but there is a core eval test subsystem that can
be used by rule type plugins.
Please DO NOT forget about allow-user-rules. Done properly it should be
possible to preserve the base rule set and create multiple overlays (much like
C++ subclasses) that can be activated when the particular user runs. If the
particular user has no rules of his own, the base rules can be used without
modification, other than possibly the score values.
One might ideally want to consider being able to compile individual user's
rules once on the first time they process a mail, and then cache and retrieve
the precompiled rules for subsequent mails, unless the rules change in the
intervening time. If not thought through from the start this can get into the
sort of mess that has now and then shown up with caching various user option
configurations, and having one pollute another.
I see no inherent reason that allowing user rules should result in recompiling
the entire ruleset on every email processed.
In response to Comment #1, it might be good if calling into a plugin to
evaluate one or more rules at priority N returned the next lower priority at
which there was a runnable rule. This could help the rule driver make
intellegent calls into the various plugins.
noting cross-bug dependency
btw, this would be an ideal way to allow people to work on alternative
"backends" for rule types, too; for example, code which used the trie-optimized
regexp interpreter new in perl 5.9.2, could use an alternative backend for the
"body" rule type.
I really quite like this idea. Is anyone working on it these days?
Created attachment 3575 [details]
Updated work in progress
Here is what I have so far. I haven't worked on it for a while.
The next problem to solve is the RepaceTags plugin, which currently reaches in
and mucks with the internal representation of the current implementaion of rule
types. I believe the rule type plugins which use regexes need to call_plugins
to a newly defined hook which will allow ReplaceTags to modify the regex.
I'd like to get my dispute with Michael Parker resolved before doing any more
work in this area. Perhaps a design meeting at CEAS?
a design meeting at CEAS would be cool -- although it'd be just you guys, I
won't be there ;)
what about an IRC conference some time?
also, a ReplaceTags regexp-rewriting plugin API makes sense.
Notes from a meeting between Michael Parker, Danial Quinlan, and myself:
* Rule type plugins will return code for evaluating the test and calling
got_hit. check will concatenate and eval this returned code, much like the
existing do_*_tests code does.
* Rule type plugins can store information in the PerMsgStatus.
* We want to reduce the effect that user tests have on the whole mechanism (if
not remove support for user tests entirely). This may require restrictions such
as prohibiting user tests from having the same name as system rules.
* The "required head/body points" part of autolearn will require rule type
plugins to declare whether each test is header/body. Is this facility of
autolear needed?
* To simplify the rules mechanism, the "only_these_rules" feature of mass-check
can probably be reimplemented to assign zero scores to those rules it doesn't want.
I definitely cannot see a need for user rules to have the same names as system
ones. However getting rid of user rules -- I'm afraid I'm not fond of that idea ;)
I'd be fine with dropping the "required head/body points" stuff.
+1 on the "only_these_rules" idea, makes sense.
Sounds good!
> I definitely cannot see a need for user rules to have the same names
> as system ones.
Q1: What about user rule score overrides? Are you counting that as a user rule
with the same name as a system rule?
Q2: What if someone does make a user rule that happens to have the same name as
a system rule, either because they don't know the names of all system rules, or
the new sa_update adds a rule with the same name as a user rule?
Treat them as separate rules (separate namespaces)?
Give a syntax error, possibly on a rule that has worked for years until SA
added a new rule with the same name?
Quietly ignore the user rule?
> However getting rid of user rules -- I'm afraid I'm not fond of that idea ;)
Let me just say that it would be understatement to say I wouldn't be fond of
the idea.
(In reply to comment #10)
> Q1: What about user rule score overrides? Are you counting that as a user rule
> with the same name as a system rule?
Score overrides of system rule would be allowed, but priority changes probably
would not. Changing tflags multiple would not be allowed, possibly no tflags
changes would be allowed.
If allow_user_rules is off, then we could do the zero-score checks at config
time instead of per message.
> Q2: What if someone does make a user rule that happens to have the same name as
> a system rule, either because they don't know the names of all system rules, or
> the new sa_update adds a rule with the same name as a user rule?
The system rule would win. This should probably cause a lint error.
Separate namespaces wouldn't work. If the user config tries to set the score
for a rule, does it affect the user rule or the system rule?
+1, agreed with what John said in comment 11.
in fact, there's a good point there: we should document clearly in the Conf POD
as to which score/tflags/etc. changes are permitted in user prefs, since some
changes may have unexpected results, even in current code.
any news on this? I'd be interested in collaborating, as it provides a good API
for the work I'm doing in the re2c branch to override body rules in mainline SA
from a plugin.
I'm attempting to work on the Check plugin stuff at the Hackathon which doesn't
necessarily gate this but will make it much easier. Come join me :)
hey, I would if I was there ;)
are you on IRC? how many hours off UTC is Austin?
-0600 CDT which included Austin. TX. 8*))
John -- have you made any progress on this since the WIP?
I'm keen to start getting 3.2.0 into releaseable shape, and I also have a very
fast version of body tests using re2c, sitting in a branch. it'd be great to
have a "clean" rule-types-as-plugins API for that to use...
I was waiting for you to finish bug 4776 before digging into this. I was also
waiting for Daniel to remove the non-performing eval tests.
I'll have to schedule some time to dig into this, but I've got about a month's
worth of other projects in my queue. | https://bz.apache.org/SpamAssassin/show_bug.cgi?id=4778 | CC-MAIN-2021-31 | refinedweb | 1,412 | 70.23 |
On Sun, Aug 12, 2012 at 02:47:09PM +0200, Reimar Döffinger wrote: > On Sat, Aug 11, 2012 at 04:52:19PM +0200, Michael Niedermayer wrote: > > On Sat, Aug 11, 2012 at 02:18:36PM +0200, Reimar Döffinger wrote: > > > About 30% faster on 32 bit Atom, 120% faster on 64 bit Phenom2. > > > This is interesting because supporting P16 is easier in e.g. > > > OpenGL (can misuse support for any 2-component 8 bit format), > > > whereas supporting p9/p10 without conversion needs a texture > > > format with at least 14 bits actual precision. > > > > > > Signed-off-by: Reimar Döffinger <Reimar.Doeffinger at gmx.de> > > > --- > > > libswscale/swscale_unscaled.c | 26 ++++++++++++++++++++++++++ > > > 1 file changed, 26 insertions(+) > > > > > > diff --git a/libswscale/swscale_unscaled.c b/libswscale/swscale_unscaled.c > > > index c391a07..6618966 100644 > > > --- a/libswscale/swscale_unscaled.c > > > +++ b/libswscale/swscale_unscaled.c > > > @@ -830,7 +830,33 @@ static int planarCopyWrapper(SwsContext *c, const uint8_t *src[], > > > srcPtr += srcStride[plane]; > > > } > > > } else if (src_depth <= dst_depth) { > > > + int orig_length = length; > > > for (i = 0; i < height; i++) { > > > + if(isBE(c->srcFormat) == HAVE_BIGENDIAN && > > > + isBE(c->dstFormat) == HAVE_BIGENDIAN) { > > > + unsigned shift = dst_depth - src_depth; > > > + length = orig_length; > > > +#if HAVE_FAST_64BIT > > > +#define FAST_COPY_UP(shift) \ > > > + for (j = 0; j < length - 3; j += 4) { \ > > > + uint64_t v = AV_RN64A(srcPtr2 + j); \ > > > + AV_WN64A(dstPtr2 + j, v << shift); \ > > > + } \ > > > + length &= 3; > > > +#else > > > +#define FAST_COPY_UP(shift) \ > > > + for (j = 0; j < length - 1; j += 2) { \ > > > + uint32_t v = AV_RN32A(srcPtr2 + j); \ > > > + AV_WN32A(dstPtr2 + j, v << shift); \ > > > + } \ > > > + length &= 1; > > > +#endif > > > > these look wrong for the shiftonly==0 case > > Ops, sorry, I went back and forth a few time how to handle that case > and at some point the condition was lost. 
> The code is not meant to handle shiftonly==0 because > a) The case I was looking at (MPlayer) never uses it > b) It needs an extra "and" compared to the non-SIMDified version, > which means for 32 bit it tends to not be relevantly faster, at > least for some compiler/compiler options variations (for example > when compiling with 4.6 for Atom the loop won't be unrolled, so > lots of loop overhead, whereas when compiling for k8 it will be > unrolled and prefetch added...). ok then the patch LGTM with a if(shiftonly) added thn: <> | http://ffmpeg.org/pipermail/ffmpeg-devel/2012-August/129364.html | CC-MAIN-2014-10 | refinedweb | 352 | 54.36 |
Speeding Up Your Python Code (Posted on March 16th, 2013) you an idea of how these particular examples compare with each other under different circumstances.
Using Generators
One commonly overlooked memory optimization is the use of generators. Generators allow us to create a function that returns one item at a time rather than all the items at once. If you're using Python 2.x this is the reason for using xrange instead of range or ifilter instead of filter. A great example of this is creating a large list of numbers and adding them together..88098192215 #Python 2.7 >>> 1.416813850402832 #Python 3.2 print(timeit.timeit("sum(create_list(999))", setup="from __main__ import create_list", number=1000)) >>> 0.924163103104 #Python 2.7 >>> 1.5026731491088867 #Python 3.2
Not only is it slightly faster but you also avoid storing the entire list in memory!
Introducing Ctypes
For performance critical code Python natively provides us with an API to call C functions. This is done through ctypes. You can actually take advantage of ctypes without writing any C code of your own. By default Python comes with the standard c library precompiled for you. We can go back to our generator example to see just how much more ctypes will speed up our code.
import timeit from ctypes import cdll def generate_c(num): #Load standard C library libc = cdll.LoadLibrary("libc.so.6") #Linux #libc = cdll.msvcrt #Windows while num: yield libc.rand() % 10 num -= 1 print(timeit.timeit("sum(generate_c(999))", setup="from __main__ import generate_c", number=1000)) >>> 0.434374809265 #Python 2.7 >>> 0.7084300518035889 #Python 3.2
Just by switching to the C random function we cut our run time in half! Now what if I told you we could do better?
Introducing Cython
Cython is a superset of Python that allows for the calling of C functions as well as declaring types on variables to increase performance. To try this out we'll need to install Cython.
sudo pip install cython
Cython is essentially a fork of another similar library called Pyrex which is no longer under development. It will compile our Python-like code into a C library that we can call from within a Python file. Use .pyx instead of .py for your python files. Let's see how Cython works with our generator code.
#cython_generator.pyx import random def generate(num): while num: yield random.randrange(10) num -= 1
We also need to create a setup.py so that we can get Cython to compile our function.
from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext setup( cmdclass = {'build_ext': build_ext}, ext_modules = [Extension("generator", ["cython_generator.pyx"])] )
Compile using:
python setup.py build_ext --inplace
You should now see a cython_generator.c file and a generator.so file. We can test our program by doing:
import timeit print(timeit.timeit("sum(generator.generate(999))", setup="import generator", number=1000)) >>> 0.835658073425
Not too bad but let's see if we can improve on this. We can start by stating that our "num" variable is an int. Then we can import the C standard library to take care of our random function.
#cython_generator.pyx cdef extern from "stdlib.h": int c_libc_rand "rand"() def generate(int num): while num: yield c_libc_rand() % 10 num -= 1
If we compile and run again we now see a really awesome number.
>>> 0.033586025238
Not bad at all for making just a few changes. However, sometimes these changes can be a bit tedious. So let's see how we can do with just regular ole Python.
Introducing PyPy
PyPy is a just-in-time compiler for Python 2.7.3 which in layman's terms means that it makes your code run really fast (usually). Quora runs PyPy in production. PyPy has some installation instructions on their download page but if you're running Ubuntu you can just install it through apt-get. It also runs out of the box so there are no crazy bash or make files to run, just download and run. Let's see how our original generator code performs under PyPy..115154981613 #PyPy 1.9 >>> 0.118431091309 #PyPy 2.0b1 print(timeit.timeit("sum(create_list(999))", setup="from __main__ import create_list", number=1000)) >>> 0.140175104141 #PyPy 1.9 >>> 0.140514850616 #PyPy 2.0b1
Wow! Without touching the code it is now running at an 8th of the speed as the pure python implementation.
Further Examination
Why bother examining futher? PyPy is king! Well not quite. While most programs will run on PyPy there are still some libraries that aren't fully supported. It may also be easier to pitch a C extension for your project rather than switching compilers. Let's dive further into ctypes to see how we can create our own C libraries to talk to Python. We're going to examine the performance gains from a merge sort as well as a calculation from a Fibonacci sequence. Here is the C code (functions.c) that we will be using.
/* functions.c */ #include "stdio.h" #include "stdlib.h" #include "string.h" /* */ inline void merge(int *left, int l_len, int *right, int r_len, int *out) { int i, j, k; for (i = j = k = 0; i < l_len && j < r_len; ) out[k++] = left[i] < right[j] ? left[i++] : right[j++]; while (i < l_len) out[k++] = left[i++]; while (j < r_len) out[k++] = right[j++]; } /* inner recursion of merge sort */ void recur(int *buf, int *tmp, int len) { int l = len / 2; if (len <= 1) return; /* note that buf and tmp are swapped */ recur(tmp, buf, l); recur(tmp + l, buf + l, len - l); merge(tmp, l, tmp + l, len - l, buf); } /* preparation work before recursion */ void merge_sort(int *buf, int len) { /* call alloc, copy and free only once */ int *tmp = malloc(sizeof(int) * len); memcpy(tmp, buf, sizeof(int) * len); recur(buf, tmp, len); free(tmp); } int fibRec(int n){ if(n < 2) return n; else return fibRec(n-1) + fibRec(n-2); }
On Linux we can compile this to a shared library that Python can access by doing:
gcc -Wall -fPIC -c functions.c gcc -shared -o libfunctions.so functions.o
Using ctypes we can now access the functions by loading the "libfunctions.so" library like we did for the standard C library earlier. Here we can compare a native Python implementation vs. one done in C. Let's start with the Fibonacci sequence calculation.
#functions.py from ctypes import * import time libfunctions = cdll.LoadLibrary("./libfunctions.so") def fibRec(n): if n < 2: return n else: return fibRec(n-1) + fibRec(n-2) start = time.time() fibRec(32) finish = time.time() print("Python: " + str(finish - start)) #C Fibonacci start = time.time() x = libfunctions.fibRec(32) finish = time.time() print("C: " + str(finish - start))
Python: 1.18783187866 #Python 2.7 Python: 1.272292137145996 #Python 3.2 Python: 0.563600063324 #PyPy 1.9 Python: 0.567229032516 #PyPy 2.0b1 C: 0.043830871582 #Python 2.7 + ctypes C: 0.04574108123779297 #Python 3.2 + ctypes C: 0.0481240749359 #PyPy 1.9 + ctypes C: 0.046403169632 #PyPy 2.0b1 + ctypes
As expected C is the fastest followed by PyPy and Python. We can also do the same kind of comparison with a merge sort.
We haven't really dug too deep into ctypes yet so this example will show off some of the cool features. Ctypes have a few standard types such as ints, char arrays, floats, bytes, etc. One thing they don't have by default is integer arrays. However, by multiplying a c_int (ctype type for int) by a number we can create an array of size number. This is what line 7 below is doing. We're creating a c_int array the size of our numbers array and unpacking the numbers array into the c_int array.
It's important to remember that in C you can't return an array, nor would you really want to. Instead we pass around pointers for functions to modify. In order to pass our c_numbers array to our merge_sort function we have to pass by reference. After the merge_sort runs our c_numbers array will be sorted. I've appended the below code to my functions.py file since we already have our imports setup there.
#Python Merge Sort from random import shuffle, sample #Generate 9999 random numbers between 0 and 100000 numbers = sample(range(100000), 9999) shuffle(numbers) c_numbers = (c_int * len(numbers))(*numbers) from heapq import merge def merge_sort(m): if len(m) <= 1: return m middle = len(m) // 2 left = m[:middle] right = m[middle:] left = merge_sort(left) right = merge_sort(right) return list(merge(left, right)) start = time.time() numbers = merge_sort(numbers) finish = time.time() print("Python: " + str(finish - start)) #C Merge Sort start = time.time() libfunctions.merge_sort(byref(c_numbers), len(numbers)) finish = time.time() print("C: " + str(finish - start))
Python: 0.190635919571 #Python 2.7 Python: 0.11785483360290527 #Python 3.2 Python: 0.266992092133 #PyPy 1.9 Python: 0.265724897385 #PyPy 2.0b1 C: 0.00201296806335 #Python 2.7 + ctypes C: 0.0019741058349609375 #Python 3.2 + ctypes C: 0.0029308795929 #PyPy 1.9 + ctypes C: 0.00287103652954 #PyPy 2.0b1 + ctypes
Here is a chart and table comparing the various results.
Hopefully you found this post informative and a good stepping stone into optimizing your Python code with C and PyPy. As always if you have any feedback or questions feel free to drop them in the comments below or contact me privately on my contact page. Thanks for reading!
P.S. If your company is looking to hire an awesome soon-to-be college graduate (May 2013) let me know!
Tags: Optimizations
About Me
My name is Max Burstein and I am a graduate of the University of Central Florida and creator of Problem of the Day and Live Dota. I enjoy developing large, scalable web applications and I seek to change the world. | https://www.maxburstein.com/blog/speeding-up-your-python-code/ | CC-MAIN-2021-43 | refinedweb | 1,647 | 70.09 |
More and more, we are seeing the dark mode feature in the apps that we use every day. From mobile to web apps, dark mode has become a must for companies that want to take care of their users' eyes. Indeed, a bright screen at night is really painful for our eyes. Turning on dark mode (automatically) helps reduce this pain and keeps our users engaged with our apps all night long (or not).
In this post, we are going to see how we can easily implement a dark mode feature in a ReactJS app. In order to do so, we'll leverage some React features like context, function components, and hooks.
Too busy to read the whole post? Have a look at the CodeSandbox demo to see this feature in action along with the source code.
At the end of this post, you will be able to:

- Use `Context` and the `useReducer` hook to share a global state throughout the app.
- Use the `ThemeProvider` from the `styled-components` library to provide a theme to all React components within our app.
In order to add the dark mode feature to our app, we will build the following pieces:

- A `Switch` component to be able to enable or disable the dark mode.
- A global state, using `Context` and a `reducer`, to manage the application state.
The first thing that we need for our dark mode feature is to define the light and dark themes of our app. In other words, we need to define the colors (text, background, ...) for each theme.
Thanks to the `styled-components` library we are going to use, we can easily define our themes in a distinct file as JSON objects and provide them to the `ThemeProvider` later.
Below is the definition of the light and dark themes for our app:
```js
// themes.js
const black = "#363537";
const lightGrey = "#E2E2E2";
const white = "#FAFAFA";

export const light = {
  text: black,
  background: lightGrey
};

export const dark = {
  text: white,
  background: black
};
```
As you can notice, this is a really simplistic theme definition. It's up to you to define more theme parameters to style the app according to your visual identity.
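As a sketch of what "more theme parameters" could look like, here are the same two themes extended with a couple of made-up keys (`toggleBorder` and `gradient` are illustrative examples, not part of this tutorial's code). The one rule worth enforcing is that both themes expose the same set of keys, so any styled component can rely on a parameter regardless of which theme is active:

```javascript
// Hypothetical extended themes: the extra keys are illustrative only
const black = "#363537";
const lightGrey = "#E2E2E2";
const white = "#FAFAFA";

const light = {
  text: black,
  background: lightGrey,
  toggleBorder: "#FFF",                          // made-up extra parameter
  gradient: "linear-gradient(#39598A, #79D7ED)"  // made-up extra parameter
};

const dark = {
  text: white,
  background: black,
  toggleBorder: "#6B8096",
  gradient: "linear-gradient(#091236, #1E215D)"
};

// Both themes should expose exactly the same set of keys
const sameShape =
  JSON.stringify(Object.keys(light).sort()) ===
  JSON.stringify(Object.keys(dark).sort());
console.log(sameShape); // true
```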
Now that we have both our dark and light themes, we can focus on how we’re going to provide them to our app.
By leveraging the React Context API, the `styled-components` library provides us with a `ThemeProvider` wrapper component. Thanks to this component, we can add full theming support to our app: it provides a theme to all React components underneath itself.
Let's add this wrapper component at the top of our React components' tree:
```jsx
import React from "react";
import { ThemeProvider } from "styled-components";

export default function App() {
  return (
    <ThemeProvider theme={...}>
      ...
    </ThemeProvider>
  );
}
```
You may have noticed that the `ThemeProvider` component accepts a `theme` property. This is an object representing the theme we want to use throughout our app. It will be either the light or dark theme depending on the application state. For now, let's leave it as is, since we still need to implement the logic for handling the global app state.
But before implementing this logic, we can add global styles to our app.
Once again, we are going to use the `styled-components` library to do so. Indeed, it has a helper function named `createGlobalStyle` that generates a styled React component that handles global styles.
```jsx
import React from "react";
import { ThemeProvider, createGlobalStyle } from "styled-components";

export const GlobalStyles = createGlobalStyle`...`;
```
By placing it at the top of our React tree, the styles will be injected into our app when rendered. In addition to that, we'll place it underneath our `ThemeProvider` wrapper. Hence, we will be able to apply specific theme styles to it. Let's see how to do it.
```jsx
export const GlobalStyles = createGlobalStyle`
  body,
  #root {
    background: ${({ theme }) => theme.background};
    color: ${({ theme }) => theme.text};
    display: flex;
    flex-direction: row;
    justify-content: center;
    align-items: center;
    font-family: BlinkMacSystemFont, -apple-system, 'Segoe UI', Roboto,
      Helvetica, Arial, sans-serif;
  }
`;

export default function App() {
  return (
    <ThemeProvider theme={...}>
      <>
        <GlobalStyles />
        ...
      </>
    </ThemeProvider>
  );
}
```
As you can see, the global text and background color are provided by the loaded theme of our app.
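It may help to notice that the interpolations inside the template literal, such as `${({ theme }) => theme.background}`, are just plain functions receiving the component's props; `styled-components` injects the current theme into those props at render time. You can therefore reason about them in isolation, outside React entirely (the theme objects below are stand-ins for the ones defined earlier):

```javascript
// The same arrow functions used inside createGlobalStyle above
const backgroundFor = ({ theme }) => theme.background;
const textFor = ({ theme }) => theme.text;

// Stand-ins for the themes defined earlier in this post
const dark = { text: "#FAFAFA", background: "#363537" };
const light = { text: "#363537", background: "#E2E2E2" };

// styled-components calls these functions with the props at render time
console.log(backgroundFor({ theme: dark }));  // "#363537"
console.log(backgroundFor({ theme: light })); // "#E2E2E2"
```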
It's now time to see how to implement the global state.
In order to share a global state that will be consumed by our components down the React tree, we will use the `useReducer` hook and the React `Context` API.
As stated by the ReactJS documentation, `Context` is the perfect fit to share the application state of our app between components:

> Context provides a way to pass data through the component tree without having to pass props down manually at every level.
And the `useReducer` hook is a great choice to handle our application state that will hold the current theme (light or dark) to use throughout our app.
This hook accepts a `reducer` and returns the current state paired with a `dispatch` method. The reducer is a function of type `(state, action) => newState` that manages our state. It is responsible for updating the state depending on the type of action that has been triggered. In our example, we will define only one type of action, called `TOGGLE_DARK_MODE`, that will enable or disable the dark mode.

Let's create this reducer function in a separate file, `reducer.js`:
```js
// reducer.js
const reducer = (state = {}, action) => {
  switch (action.type) {
    case "TOGGLE_DARK_MODE":
      return { isDark: !state.isDark };
    default:
      return state;
  }
};

export default reducer;
```
As you may have noticed, our state holds a single boolean variable, isDark. If the TOGGLE_DARK_MODE action is triggered, the reducer updates the isDark state variable by toggling its value.
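Because the reducer is a pure function, we can sanity-check its behavior outside of React entirely. The snippet below repeats the reducer so it runs standalone in plain Node:

```javascript
// Same pure reducer as above, repeated so this example is self-contained
const reducer = (state = {}, action) => {
  switch (action.type) {
    case "TOGGLE_DARK_MODE":
      return { isDark: !state.isDark };
    default:
      return state;
  }
};

// Toggling twice brings us back to where we started
const once = reducer({ isDark: false }, { type: "TOGGLE_DARK_MODE" });
const twice = reducer(once, { type: "TOGGLE_DARK_MODE" });
console.log(once.isDark, twice.isDark); // → true false

// Unknown action types leave the state object untouched
const same = reducer(twice, { type: "SOMETHING_ELSE" });
console.log(same === twice); // → true
```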
Now that we have our reducer implemented, we can create our useReducer state and initialize it. By default, we will disable the dark mode.
```js
import React, { useReducer } from "react";
import reducer from "./reducer";

export default function App() {
  const [state, dispatch] = useReducer(reducer, { isDark: false });
  ...
};
```
The only missing piece in our global state implementation is the Context. We'll also define it in a distinct file, context.js, and export it:
```js
import React from "react";

export default React.createContext(null);
```
Let's now combine everything together in our app and use our global state to provide the current theme to the ThemeProvider component.
```js
import React, { useReducer } from "react";
import { ThemeProvider, createGlobalStyle } from "styled-components";
import { light, dark } from "./themes";
import Context from "./context";
import reducer from "./reducer";

...

export default function App() {
  const [state, dispatch] = useReducer(reducer, { isDark: false });

  return (
    <Context.Provider value={{ state, dispatch }}>
      <ThemeProvider theme={state.isDark ? dark : light}>
        <>
          <GlobalStyles />
          ...
        </>
      </ThemeProvider>
    </Context.Provider>
  );
};
```
As you can see, the Context is providing, through its Provider, the current application state and the dispatch method that other components will use to trigger the TOGGLE_DARK_MODE action.
Well done 👏👏 on completing all the steps so far. We are almost done. We’ve implemented all the logic and components needed for enabling the dark mode feature. Now it’s time to trigger it in our app.
To do so, we'll build a Switch component that allows users to enable/disable dark mode. Here's the component itself:
```js
import React, { useContext } from "react"; // note: useContext must be imported (the original snippet omitted it)
import Context from "./context";
import styled from "styled-components";

const Container = styled.label`
  position: relative;
  display: inline-block;
  width: 60px;
  height: 34px;
  margin-right: 15px;
`;

const Slider = styled.span`
  position: absolute;
  top: 0;
  display: block;
  cursor: pointer;
  width: 100%;
  height: 100%;
  background-color: #ccc;
  border-radius: 34px;
  -webkit-transition: 0.4s;
  transition: 0.4s;

  &::before {
    position: absolute;
    content: "";
    height: 26px;
    width: 26px;
    margin: 4px;
    background-color: white;
    border-radius: 50%;
    -webkit-transition: 0.4s;
    transition: 0.4s;
  }
`;

const Input = styled.input`
  opacity: 0;
  width: 0;
  height: 0;
  margin: 0;

  &:checked + ${Slider} {
    background-color: #2196f3;
  }

  &:checked + ${Slider}::before {
    -webkit-transform: translateX(26px);
    -ms-transform: translateX(26px);
    transform: translateX(26px);
  }

  &:focus + ${Slider} {
    box-shadow: 0 0 1px #2196f3;
  }
`;

const Switch = () => {
  const { dispatch } = useContext(Context);

  const handleOnClick = () => {
    // Dispatch action
    dispatch({ type: "TOGGLE_DARK_MODE" });
  };

  return (
    <Container>
      <Input type="checkbox" onClick={handleOnClick} />
      <Slider />
    </Container>
  );
};

export default Switch;
```
Inside the Switch component, we use the dispatch method from the Context to toggle the dark mode theme.
Finally, let's add it to the app.
```js
export default function App() {
  ...

  return (
    <Context.Provider value={{ state, dispatch }}>
      <ThemeProvider theme={state.isDark ? dark : light}>
        <>
          <GlobalStyles />
          <Switch />
        </>
      </ThemeProvider>
    </Context.Provider>
  );
};
```
Dark mode has been a highly requested feature, and we successfully added support for it in our React application by using some of the latest React features. I hope this post will help you add dark mode capability to your app and save the eyes of your users. | https://alterclass.hashnode.dev/how-to-add-dark-mode-to-react-with-context-and-hooks-ck77np7gp08cid9s1kcn5kyp9?guid=none&deviceId=c503e7a5-c9bf-408e-9d32-f00d4ccd571b | CC-MAIN-2020-34 | refinedweb | 1,399 | 56.25 |
Nov 30, 2008 08:06 PM|talk2alie|LINK
Nov 30, 2008 10:17 PM|nvanhaaster@resultstel.com|LINK
2 questions?
After that it should be pretty simple. If you can connect to the SQL server locally and have your aspnet_db installed, simply use the database explorer in VWD 2008 to create and save your connection string. Once saved, your connection string should appear in a file called web.config in your website root directory. From there you can use that connection string in any ASP.NET SqlDataSource with
Asp.Net control
<asp:SqlDataSource
Code Behind
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
private SqlConnection myConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["<connection string name>"].ConnectionString);
Sample Web.config Connection String
<connectionStrings>
  <add name="NorthwindConnectionString"
       connectionString="Data Source=serverName;Initial Catalog=Northwind;Persist Security Info=True;User ID=userName;Password=password"
       providerName="System.Data.SqlClient" />
</connectionStrings>
Dec 02, 2008 05:11 PM|talk2alie|LINK
Thank you for your response. I tried to follow your explanation, but it's like I slipped somewhere, so I am still getting the same message. Here is exactly what I did:
I installed the SQL Server 2008 Express with Advanced Services and set the username to sa and password to VWD@passwd.
Then I installed the VWD 2008 Express Edition. I went to drive C > Windows folder > Microsoft.NET folder > Framework > v2.0.50727 > clicked on aspnet_regsql
A wizard opens. I clicked next; then clicked on Configure SQL Server for Application Services. Then I put in the server name: prospectsacc-pc\sqlexpress. I clicked on SQL Server Authentication; then put in the user name: sa and the password: VWD@passwd; I then typed in the Database name textbox: aspnet_db and then clicked next.
After doing all the above, I still get the same message.
Dec 05, 2008 08:57 AM|beast_b9|LINK
why don't you try like this: in solution explorer click on App_Data and add new item, select SQL database, name it, and it will add it automatically for local working. In web.config find the single line
<connectionStrings />
and replace it with
<connectionStrings>
  <add name="ConnectionString" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient" />
</connectionStrings>
Database.mdf is the name of your database; try it and let me know. This is for local. If you are connecting to a remote database of your host provider, then go to server explorer and click on add new database: data source: Microsoft SQL Server (SqlClient),
server name: the one your provider gave you, enter the user name and password, enter the name of the database that your provider gave you, and that should work. Write back if not.
Dec 05, 2008 03:29 PM|talk2alie|LINK
Dec 06, 2008 08:49 AM|beast_b9|LINK
No need to help me, I am a beginner myself :). web.config only shows when you are starting a new site; then you get default.aspx (and default.aspx.cs if you are working under C#, else ...vb), web.config and App_Data (this is where you add a new item: SQL database).
I am not sure how you set configuration under 2005 web developer, but I am certain that you are by default able to test all your work in localhost.
Read this: Both ASP.NET Development Server and IIS (included with the .NET Framework) can serve all ASPX and associated pages, so at deployment there is no need to make changes to your site. But a number of differences exist between the servers.

The two servers use different security models. IIS is a service, and every service in Windows requires a user. The special user for IIS is named ASPNET. ASP.NET Development Server runs as an application that uses the currently logged-in Windows user. That makes ASP.NET Development Server easier to install because there is no need to create a specific ASPNET account in Windows. In fact, the installation of ASP.NET Development Server is transparent when VWD is installed.

ASP.NET Development Server has three downsides. First, it is a tool for designers to test pages on their development machine and thus it does not scale to more than one user. Second, because of the simplifications to the user model, ASP.NET Development Server cannot support a robust security scheme. ASP.NET Development Server should run only in a closed environment or behind a robust firewall. Third, when you run a page in ASP.NET Development Server, it locks the page back in VWD. In order to unlock the page, you must close the browser, which can be inconvenient when you're making and testing many changes to a site. Therefore, many developers use IIS even on their development machines so they do not have to close a page in the browser before working on it in VWD. The downside is that you have to configure your development machine to provide IIS, set up the appropriate authorizations, establish security controls, and create a virtual root. If you don't want to go through the IIS setup, you can still use ASP.NET Development Server and just close the browser between modifications.
And this is a solution to set up IIS and run VW Developer Express. First, VWD 2005:

This is the step after accepting the terms of the EULA (after you clicked to accept the terms of the license agreement in the installation): you then specify which installation options you want to install in addition to .NET 2.0 and Visual Web Developer Express. The only compulsory option is SQL Server 2005 Express. You check Microsoft SQL Server 2005 Express Edition x86 (you can also choose the Microsoft MSDN 2005 Express Edition, which is good for using the library with help).

Specify where to install VWD. I suggest selecting the default, which is C:\Program Files\Microsoft Visual Studio 8\, and clicking Install to begin the setup. (Don't worry if the default is something other than C on your machine; this makes no difference.)

At the end of this installation, you should see the screen asking you to restart your machine. You should oblige by clicking Restart Now.

After your machine has restarted, the dialog box is displayed, notifying you of a successful install (or any problems that were encountered) and reminding you to register the software within 30 days.

Click Exit and you will now be able to start up both Visual Web Developer and SQL Express.
Visual Web Developer Express comes with its own web server: the ASP.NET Development Server, nicknamed Cassini. However, you might want to use IIS anyway. Note that it isn't possible to run two versions of the web site concurrently, one on IIS and one on the development server, because the database attaches itself to the ASP.NET Development Server and you will find you can't run it with IIS without detaching the database first.

To use IIS, you must have Windows XP Professional, Windows 2000, or Windows 2003 Server installed. Windows XP Home Edition comes without IIS, and it isn't possible to install it on Home Edition if you do manage to get a copy of IIS. To begin the install, follow these steps:
To install IIS, go to the Start menu, navigate to Settings➪Control Panel, and click Add or Remove Programs. From the left-hand menu of the dialog box, select the Add/Remove Windows Components icon. This will bring up the Windows Components Wizard.

Make sure the Internet Information Services option is selected. Click Details and you can select which of the options to install. You won't need all of the options, although you can install them separately at a later point. However, you must make sure that you select the Common Files, the IIS snap-in, the Front Page Extensions, and the World Wide Web Services options, because these are necessary to work with ASP.NET 2.0 and Visual Web Developer Express.

Click OK and make sure you have the Windows CD handy for the installation.
Adding an alias folder:

From the Start menu, select Run. Type MMC in the text box, and click OK. In the MMC dialog box that appears, select File➪Add/Remove Snap In. Click the Add button and select IIS Internet Information Services from the dialog box that appears.

Expand the Internet Information Services node; under that will be your computer's name. Expand that as well as the Web Sites option.

Right-click Default Web Site, and select New➪Virtual Directory from the menu to start the wizard. Type in what you want to call it as the Virtual Directory Name.

Click Next and browse to the local path of the directory you created.

Click Next and make sure the Read and Run Scripts boxes are selected.

Click Finish. Close the MMC Console. A save is optional, and you can supply a name if you so desire.

And that covers it. You don't need to install IIS, but if you do and don't want to make a virtual directory, make sure that all your work is under the Inetpub -> wwwroot folder.
Dec 11, 2008 01:53 AM|beast_b9|LINK
Did you make it ?
Dec 11, 2008 05:52 AM|talk2alie|LINK
Dec 11, 2008 12:54 PM|beast_b9|LINK
here is the link to the same problem and solution
Dec 11, 2008 09:00 PM|roozbeh_noroozi|LINK
Dec 12, 2008 09:37 AM|talk2alie|LINK
Thank you once again.
I followed all the steps explained, and right now I have successfully copied the two files into the App_Data folder and I can see them there. I do not have SQL Server 2008 Developer Edition, but I followed the steps for SQL Server 2005 Express Edition and it went well. But I do not know how to "attach the aspnetdb database from files to SQL Server" (step 6 in your explanation), I do not know where I can find the machine.config file, and I do not know how to "change machine.config file instance to false: instance=false" (the next step after step 6 in your explanation). Please help me out.
Dec 14, 2008 03:26 PM|beast_b9|LINK
right click App_Data and add existing item (browse to the database that you want to add). In solution explorer you have the last icon (hammer and world), ASP.NET Configuration; this is where you can configure it.
This is for SQL 2005
Dec 14, 2008 03:27 PM|beast_b9|LINK
One more thing: when you click debug you will get a web.config file.
Dec 18, 2008 07:43 AM|beast_b9|LINK
any luck?
13 replies
Last post Dec 18, 2008 07:43 AM by beast_b9 | https://forums.asp.net/t/1354796.aspx?Configuring+ASP+NET+to+connect+to+a+SQL+Server+Provider | CC-MAIN-2017-30 | refinedweb | 1,783 | 64.71 |
ADF Faces Dynamic Tags - For a Form that Changes Dynamically
By Shay Shmeltzer-Oracle on Oct 22, 2010
I think this is one of the hidden gems of Oracle ADF - a set of components that can display different data each time.
There is a dynamic form and a dynamic table; both read the metadata of the component you want them to display and create JSF components on the page to show those at runtime.
The following demo shows the basics of how to use them.
We are creating a method in our Application Module that changes the definition of a view object to be based on a provided SQL statement.
Then we bind that VO to our page using the dynamic Form component.
And voila - you'll give the SQL, we'll show the data...
Two notes:
The vo.executeQuery call in the AM method is redundant in this case. The view will be queried when you navigate to the page without this call also.
Dragging a data control as a dynamic table should result in a code that looks like this in our JSF:
If it doesn't, make sure you dropped the right dynamic table. You'll see two options in the drop menu with similar names. You need to choose the "ADF Dynamic Table" one.
Posted by Martin on October 23, 2010 at 01:55 AM PDT #
Posted by Estefanie Canseco on October 25, 2010 at 10:04 AM PDT #
Posted by shay.shmeltzer on October 26, 2010 at 03:59 AM PDT #
Posted by Estefanie Canseco on November 10, 2010 at 02:51 AM PST #
Posted by shay.shmeltzer on November 10, 2010 at 04:24 AM PST #
Posted by Estefanie Canseco on November 11, 2010 at 08:48 AM PST #
Posted by shay.shmeltzer on November 11, 2010 at 08:55 AM PST #
Posted by Estefanie Canseco on November 11, 2010 at 09:18 AM PST #
Posted by sunil on November 29, 2010 at 12:29 AM PST #
Posted by Dario Toginho on January 17, 2011 at 12:14 AM PST #
Posted by shay.shmeltzer on January 18, 2011 at 03:37 AM PST #
Posted by ILya Cyclone on March 09, 2011 at 11:59 PM PST #
hi shay, we are also interested in an editable dynamic form, would you provide some examples? thanks.
Posted by guest on June 19, 2011 at 01:35 PM PDT #
Guest - for an editable form you'll need to have a VO that is updateable - either by being based on EO or creating a programmatic VO.
Posted by Shay on June 20, 2011 at 07:19 AM PDT #
Hi Shay,
I've followed your example step by step, but when I drop the form from the data control panel onto a JSF page, in the drop box list which appears there's no dynamic form to choose - and yes I did add the dynamic form tag library.
Any suggestion ? I'm running JDeveloper 11g Rel. 2.
Thanks,
Sergio.
Posted by guest on August 03, 2011 at 04:13 PM PDT #
Hi,
I've solved the issue with the dynamic form not shown in the drop list - I had to choose a jspx page and not a JSF facelet.
A question though: how can I call the dynamic form when I click on a table element?
I'll try to explain better: I have a table in which different database table names appear. I want to click on a table element, get the table name from the clicked row cell, and with that table name, run the dynamic table. Would that be possible? If so, could you explain how to achieve that?
By the way, very interesting tutorial. Keep on please ;-)
Regards,
Sergio.
Posted by sergio on August 03, 2011 at 04:27 PM PDT #
hi shay,
can you advise on how to implement usage of createViewObjectOnRowSet()
instead of createViewObjectFromQueryStmt?
I intend to implement dynamic table on PlSql procedure with Cursor Out param.
------------------------------------------
import java.sql.ResultSet;
ResultSet rset;
....
rset=((OracleCallableStatement)cst).getCursor(3);
vo=this.createViewObjectOnRowSet("v1", (oracle.jbo.RowSet)rset);
//here runtime error occurs
--------------------------------------------
The problem to call vo=this.createViewObjectOnRowSet("v1", (oracle.jbo.RowSet)rset);
is about RowSet proper cast.
-------------------
Best regards,
Alex Bondarenko
Posted by guest on August 08, 2011 at 06:22 PM PDT #
Alex - you might want to read 42.8.4 How to Create a View Object on a REF CURSOR
Posted by shay on August 09, 2011 at 08:51 AM PDT #
hey Shay,
that is regarding implementing dynamic table on ref cursor.
thank you for the link, it means i've started to dig in the right way, in parallel.
Posted by ebox999 on August 10, 2011 at 02:28 AM PDT #
Hi Shay,
Thanks a lot for the post.
I followed the same procedure and I could create a dynamic read only form. Now, I would like to create a read and write form. That is, the form should be editable and it should post the updated data into the database.
Please help.
Posted by guest on February 14, 2012 at 11:03 PM PST #
guest, for a read/write form you'll need to use another approach - check out this sample/blog -
Posted by Shay on February 16, 2012 at 12:02 PM PST #
Hi!
I have some questions:
1) When I use makeVO from your example and generate a dynamic table, the table is always read only and I can't insert data using the CreateInsert operation. What do I need to do to resolve my problem?
2) If I have a method parameter displayed in a form as an input field with value #{bindings.st.inputValue}, how can I say that this value is 180, or read this parameter from another table's row?
3) How can I read a value from the created dynamic table's selected row?
Hope for Your answer soon, regards, Kristaps
Posted by guest on August 31, 2012 at 09:00 AM PDT #
guest - I don't think the dynamic components will answer your need to do an update or select a record.
For those have a look at the forEach based solution shown here:
For question 2 - you can set the NDValue of a parameter in the binding tab to point to any EL you need it including #{'180'}
Posted by Shay on September 05, 2012 at 11:33 AM PDT #
Hi Shay,
when I create an ADF Dynamic Table in a jspx page it runs fine. but when I create it on a jsff page as a bounded task flow and drag the task flow into a jspx page as a region it gives
java.lang.NullPointerException at oracle.adfinternal.view.faces.dynamic.DynamicForm.isRefreshNecessary(DynamicForm.java:503).
I don't understand what the problem is. please help me.
Posted by Manish Pandey on December 24, 2012 at 11:19 PM PST #
I used your post to create a dynamic table from a dynamically created VO.
However, the table I got doesn't have sorting or filtering options. How can that be achieved?
Posted by Michael Shapira on February 14, 2013 at 05:32 AM PST #
Michael, there is no built in support for sorting since we don't know the structure behind the table. There is however a sortListener on the table - so you should be able to code logic there that will add the sort to your VO and re-execute it. I'm not sure if it will change anything but depending on your use case maybe using the "read only dynamic table option" will make it simpler to implement since you have a column object there.
Posted by Shay on February 14, 2013 at 11:16 AM PST #
Shay,
I am working on a generic file upload utitlity, and I keep getting the error:
java.lang.NullPointerException
at oracle.adfinternal.view.faces.dynamic.AttributeHelper.getFilteredAttributeDefinitions(AttributeHelper.java:60)
this only happens when I add the row to the VO using NameValuePairs and the createAndInitRow function. If I use createRow and insertRow it works fine.
However, in the second case I don't get the Entity validations.
Please advice. (i am working with jdev11.1.1.7.0)
Thanks
Anand
Posted by guest on June 06, 2013 at 04:53 PM PDT #
Hi Shay,
I've created a table with max 6 columns. Its width is fixed at 90% in the CSS file. But when an action takes place like "view only 4 columns", the extra space is visible. I want the width of the table to be dynamic according to the number of columns I select. Can you please help me with this problem?
Posted by guest on June 10, 2013 at 03:09 AM PDT #
Hi Shay,
There is a panelCollection, inside which I am using a table. I've fixed the width of the table in CSS for the max number of columns I want to show. But if the number of columns reduces, extra white space is visible. The width of the table should be dynamic according to the number of columns chosen. Can you please help me with this.
Thanks.
Posted by guest on June 12, 2013 at 11:40 PM PDT #
Hi Shay,
I have to create a page wherein the user can enter a query and get the results of the query on the click of an Execute button, as you've done here, and it has helped me a lot!
I've used createViewObjectFromQueryStmt to create a view object dynamically and implemented a method in my AM that implements this functionality.
I'm getting the results in a dynamic table and displaying them on the page.
But now I have both the query text box and the table on the same page and I'm having problems firing multiple queries. The result of the first query is displayed perfectly but the second query doesn't fetch any results. The dynamic table still shows attributes from the previous query and doesn't show the new attribute defs related to the new query.
E.g., if I fire select * from EMP, the results are populated into the table perfectly.
But now I fire select * from dept; it shows an error that Emp no. returned NULL, which is an attribute of the EMP table and not of dept. It still searches for old attribute defs in the new table.
Any help would be highly appreciated.
Thanks.
Posted by guest on March 20, 2014 at 11:24 PM PDT #
Hi,
i facing problem in calling web service programmatically, I follow all steps of weather app which is available on net,and i create web service over jdeveloper call this by creating web service data control and accessing it in bean using function invokeDataControl.but both times it return me null value in generic type object,
do you have any solution for this.
Posted by Devendra on March 21, 2014 at 05:34 AM PDT #
guest - it depends on what you are actually refreshing. You might need to refresh the containing layout component in which the table is - I haven't tried both on the same page.
Posted by guest on March 24, 2014 at 06:23 PM PDT #
I have replicated the same thing as shown in the video but I am getting the following error:
"Definition Dummy of type Attribute is not found in v1".
Any help would be highly appreciated.
Posted by guest on July 01, 2014 at 10:11 PM PDT # | https://blogs.oracle.com/shay/entry/adf_faces_dynamic_tags_-_for_a | CC-MAIN-2015-48 | refinedweb | 1,886 | 68.5 |
Changing File Creation Dates in OSX
On my last vacation, I took a bunch of pictures, and a bunch of video. The problem is, I hadn't used the video camera in a long time, and it believed that all its videos were taken on the first of January 2012. So in order for the pictures to show up correctly in my picture library, I wanted to correct that.
For images, this is relatively easy: Most picture libraries support some kind of bulk date changes, and there are a bunch of command line utilities that can do it, too. But none of these tools work for video (exiftool claims to be able to do that, but I couldn't get it to work).
So instead, I went about to change the file creation date of the actual video files. And it turns out, this is surprisingly hard! The thing is, most Unix systems (a Mac is technically a Unix system) don't even know the concept of a file creation date. Thus, most Unix utilities, including most programming languages, don't know how to deal with that, either.
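To see concretely what a stat call does and doesn't expose, here is a small probe. The `st_birthtime` field (the real creation time) is macOS-specific, so the snippet checks for it rather than assuming it; on most Linux builds the last line prints False:

```python
import os
import tempfile

# Create a throwaway file and inspect the timestamps stat reports for it
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

st = os.stat(path)
print(hasattr(st, "st_mtime"))  # → True  (modification time: available everywhere)
print(hasattr(st, "st_ctime"))  # → True  (metadata-change time on Unix, creation time on Windows)

has_birth = hasattr(st, "st_birthtime")
print(has_birth)                # macOS: True; most Linux builds: False

os.unlink(path)
```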
If you have Xcode installed, this will come with SetFile, a command line utility that can change file creation dates. Note that SetFile can change either the file creation date, or the file modification date, but not both at the same time, as any normal Unix utility would. Also note that SetFile expects dates in American notation, which is about as nonsensical as date formats come.
Anyway, here's a small Python script that changes the file creation date (but not the time) of a bunch of video files:
import os.path
import os
import datetime

# I want to change the dates on the files GOPR0246.MP4-GOPR0264.MP4
for index in range(426, 465):
    filename = 'GOPR0{}.MP4'.format(index)
    # extract old date:
    date = datetime.datetime.fromtimestamp(os.path.getctime(filename))
    # create a new date with the same time, but on 2015-08-22
    new_date = datetime.datetime(2015, 8, 22, date.hour, date.minute, date.second)
    # set the file creation date with the "-d" switch, which presumably stands for "dodification"
    os.system('SetFile -d "{}" {}'.format(new_date.strftime('%m/%d/%Y %H:%M:%S'), filename))
    # set the file modification date with the "-m" switch
    os.system('SetFile -m "{}" {}'.format(new_date.strftime('%m/%d/%Y %H:%M:%S'), filename))
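As an aside on that date format: the conversion from a Python datetime to the American-style stamp SetFile expects is a single strftime call, which can be checked in isolation (the date here is just the one from the script above):

```python
import datetime

# SetFile wants month/day/year, then a 24-hour time
d = datetime.datetime(2015, 8, 22, 14, 30, 5)
stamp = d.strftime('%m/%d/%Y %H:%M:%S')
print(stamp)  # → 08/22/2015 14:30:05
```

In a newer script, `subprocess.run(['SetFile', '-d', stamp, filename])` would also sidestep the shell-quoting pitfalls of `os.system`, assuming the same SetFile tool.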
I'm struggling to get the pe_account user type module working. I've written up users.pp and groups.pp manifests, but when I add my node to the pe_accounts section in the console and then run the agent, I don't see the users and groups being created. I've got the users.pp and groups.pp in /etc/puppetlabs/puppet/modules/site/manifests. I'm trying to do this in what I understand to be the namespace format, not YAML. I know the connection is valid between master -> node, as I've trialed out the motd module in a similar format. Any help would be appreciated.
- Joe Arnet
You'll work in pairs on a program to do "Mad Libs". Sometimes you'll be working together at a single computer, sometimes separately on different parts of the same project. At all times, you'll be constantly testing and debugging as you go. At the end, you should be well-positioned to start learning the core of this class: Object-Oriented Programming.
git config --global user.email `whoami`@usna.edu
Also, you're going to be sharing some files between one another. To make this possible, you need to give the command
chmod a+x ~/
This doesn't give anyone access to any of your files, but it is required in order for you to later give access to one of your files.
public class StringNode {
    String data;
    StringNode next;
}
You will use this class as the basis for some functions manipulating linked lists of strings. Next create a class
ListStuff that defines the following functions (you are encouraged to define more than this, though):
// addToFront(s,Nold) returns a StringNode reference representing the list obtained by adding s to the front of list Nold
public static StringNode addToFront(String s, StringNode Nold) { ... }

// listToArray(N) returns a reference to an array containing the same strings as in the list N (in order)
public static String[] listToArray(StringNode N) { ... }

You must define a main function to test the code with (of course!).
Note: A "list" is now represented by a
StringNode
reference. The "empty list" is represented by
a
null reference. So, for example:
StringNode N = null; // at this point N *is* an empty list N = addToFront("rat",N); // at this point N *is* the list ("rat") N = addToFront("dog",N); // at this point N *is* the list ("dog","rat") N = addToFront("pig",N); // at this point N *is* the list ("pig","dog","rat")
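To get a feel for the traversal pattern these functions rely on (follow next until you hit null), here is a small helper that counts the nodes in a list. This is just an illustration, not one of the assigned functions; StringNode is repeated here without "public" so the sketch compiles as a single file:

```java
// StringNode repeated so this sketch is self-contained
class StringNode {
    String data;
    StringNode next;
}

class ListDemo {
    // length(N) walks the list, counting nodes until it falls off the end (null)
    static int length(StringNode N) {
        int count = 0;
        while (N != null) {
            count++;
            N = N.next;
        }
        return count;
    }

    public static void main(String[] args) {
        StringNode N = null;            // empty list: length 0
        StringNode rat = new StringNode();
        rat.data = "rat";
        rat.next = N;
        N = rat;                        // the list ("rat"): length 1
        System.out.println(length(N)); // prints 1
    }
}
```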
We're going to use a program called "git" to do this. In order to use it as we will be doing, you and your partner will need a shared directory, i.e. a directory in one or the other of your accounts that both of you can read from and write to. I've set up a script ~wcbrown/bin/cher to take care of that for you. In the shared directory, you will create a git "repository". This is a central repository of all the files you and your partner will be creating and working on. Each of you will checkout ("clone" in git parlance) the repository in a convenient place in your own home directories, and periodically "push" the changes you've made to the repository, and "pull" the changes your partner has made from the repository.
Here's how we'll set up the shared repository and get some files checked into it. On the account of the person whose computer you've been working at:
cd ~                                ← cd to your home directory
mkdir share02                       ← create new directory "share02"
cd share02                          ← cd to the new directory
~wcbrown/bin/cher m17xxxx m17xxxx   ← use the alpha codes of the two partners. This makes the directory "shared"
~wcbrown/bin/git-init
This will create a git repository that you both can share. Note the output line that looks like
checkout the repo with: git clone ssh://mich302csd17u.academy.usna.edu/home/mids/m17xxxx/share02
Save that! That's what you need to do to "checkout" or "clone" a copy of the repository.
git clone ssh://mich302csd17u.academy.usna.edu/home/mids/m17xxxx/share02
If you do an ls, you should see the directory share02 has suddenly appeared. In it, you should find a README file. Copy your .java files into the share02 directory, for example with the command
cp *.java share02
(assuming you're still in your lab02 directory). Cd to share02 and make sure it's all there.
git add . git commit -m "Initial commit of linked list code." git pushThe "git add ." line says that all the files currently in the directory are ones you want subject to version control. The "git commit ..." line commits the new files to your own local clone of the repository. Finally, the "git push" pushes the new stuff that's in your clone to the repository.
git clone ssh://mich302csd17u.academy.usna.edu/home/mids/m17xxxx/share02
A simple ls should let you know whether you got the updated version.
git add . git commit -m "brief description of what you've done" git push... and when your partner has pushed something important, pull that new stuff like this:
git pull
Note: Please edit the README file to include the names and alphas of both partners.
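If you want to rehearse the push/pull cycle on your own before involving your partner, the whole round trip can be simulated locally with a throwaway bare repository standing in for the shared share02. Everything below is illustrative: the temporary paths, the placeholder identity, and the README contents are made up for the rehearsal.

```shell
set -e
tmp=$(mktemp -d)                       # scratch area; stands in for the shared directory
git init --bare -q "$tmp/share02.git"  # the "central" repository

git clone -q "$tmp/share02.git" "$tmp/partner1"   # first partner's clone
cd "$tmp/partner1"
git config user.name "Partner One"                # placeholder identity
git config user.email m17xxxx@usna.edu
echo "names and alphas of both partners" > README
git add .
git commit -q -m "Initial commit of linked list code."
git push -q origin HEAD                           # push whatever the current branch is named

git clone -q "$tmp/share02.git" "$tmp/partner2"   # second partner's clone sees the push
cat "$tmp/partner2/README"
```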
MidLibs that defines a program that, at this point, takes a filename and prints the words in that file within 35 columns. The input filename will come as a command-line argument. The way this works is simple. In running a program like this:
java ClassName foo bar 23The
String[] argsthat is the parameter to main is the array
{"foo","bar","23"}. If there isn't a filename (args has length 0), you should print out a usage message and exit. System.exit(0) will do this for you. Download the file madin01.txt. Here's some sample runs for the program.
~/$ java MidLibs usage: java MidLibs <filename> ~/$ java MidLibs madin01.txt One of my @nounp came to see me in my office. He was @adjective with his @nounp . He asked if there were any optional assignments he could do to @verb things. I told him that I don't believe in @nounp , but that I was happy to help him @verb for the final.
Aand
Bare Strings, you test for equality with
A.equals(B)... which returns true if the strings are equal, and false otherwise. | https://www.usna.edu/Users/cs/wcbrown/courses/S15IC211/lab/l02/lab.html | CC-MAIN-2018-43 | refinedweb | 944 | 72.76 |
Solving differential equations in Julia
[slack] <chrisrackauckas> Graham Smith [1:19 PM]
ginkobab: your message got clipped going from gitter to slack, so I can't see your implementation starting around
update!. But I've thought some about how I would implement a spiking network, so I'll try to answer your questions:
t
V, your membrane potentials? It's just the vector as long as
np.N. You don't define the
np.Nx
timestepsmatrix; that comes out of
solve.
dt(assuming I'm right about what you meant by the matrix);
solvewill take care of that. Eventually you may even want to use a more advanced solver, which would do way more than multiplying by
dt
V[view(V,:, t) .> np.Vₜ, t+1] .= 0.0 # This makes the neurons over the threshold spike V[view(V,:, t) .== 0.0, t+1] .= np.Vᵣ # This reset the neurons that spiked to a low potential
Vwill have a marker for spiking timesteps, and you won't clobber it on the same step (as these lines do now, with 0.0 as the marker)
Graham Smith [2:00 PM]
Oooooo wait you can probably use callbacks to handle spike thresholding
[2:00 PM]
Chris Rackauckas [3:08 PM]
you probably want to link to recent docs. that's v2.0
[slack] <ginkobab> Thanks for your answer!
Just to clarify, the weird block about updating sets the potential of the subsequent timestep equal to Vr or to 0, so the order is important, otherwise since 0 > threshold, the reset wouldn't work. So effectively it marks a spike that we can then extract!
I'm gonna look into callbacks now, thanks again!
Hi all. Is there a way to deal with this problem in Julia?
function explicit(y,p,t)
sqrt(1-y[1]^2)
end
tspan = (0.0,π)
y0 = 0.0
problem = ODEProblem(explicit,y0,tspan)
sol = solve(problem, Rosenbrock23())
In MATLAB, Cleve Moler solve this problem by add a bound like this:
f = @(t,y) sqrt(1-min(y,1)^2).
This could also be used in Python with Scipy.odeint.
def f(t,y): return np.sqrt(1-np.min(y,1)**2)
But Julia don't allow this.
Hi everyone, first off, thanks for developing an awesome DE library/framework. It's really impressive just how much functionality you all have crammed into this library. I hope I'm in the right place to ask this.
I'm attempting to install the
diffeqpy package to utilize the
DifferentialEquations.jl library as a sort of "drop-in" solver to our already existing systems biology models written in Python since we need a speedup and easier parallelization options.
I'm on an Ubuntu 18.04 system and I've
DifferentialEquations.jlpackage through
Pkg
PyCall.jland built it with the Python binary I'm using with my
pipenvvrirtual environment.
diffeqpypackage to the virtual environment via
pipenv install(backended with pip, just in the virtual environment)
DifferentialEquations.jlvia Julia
diffeqpyvia
python-jl
diffeqpyvia
python-jlwith only benchmarking the solving of the problem, not the specification via
de.ODEProblem()
diffeqpyvia
python-jlwith only benchmarking the solving of the problem as in (d.) and also pre-compiling the problem via
numba
These are the results I've obtained:
You can see the full code at
I'm almost certain I've done something wrong here since this performance is vastly different and I know DifferentialEquations.jl to be some of the fastest implementations of ODE solvers out there. Do you all have any idea what I could have done wrong in my setup of the tech stack or if I'm just interfacing with the solver incorrectly?
I seriously appreciate your all's help in all of this! Thank you so much! And I'm happy to provide any extra information as needed!
Hello, I am trying to fit an ODE with 8 external inputs from measurement data which are interpolated and I find BFGS very slow compared to BlackBoxOptim. I also tried ADAM from DiffEqFlux and it is also very slow. Does anyone here have any thoughts on why ?
I am using
LinearInterpolation() objects to get the inputs at each time
t
I have more details here
_itpare
LinearInterpolation()objects from
Interpolations.jl, the rest are constants enclosed in a wrapper function.
function thermal_model!(du, u, p, t) # scaling parameters p_friction, p_forcedconv, p_tread2road, p_deflection, p_natconv, p_carcass2air, p_tread2carcass, p_air2ambient = p fxtire = fxtire_itp(t) fytire = fytire_itp(t) fztire = fztire_itp(t) vx = vx_itp(t) alpha = alpha_itp(t) kappa = kappa_itp(t) r_loaded = r_loaded_itp(t) h_splitter = h_splitter_itp(t) # arc length of tread area theta_1 = acos(min(r_loaded - h_splitter, r_unloaded) / r_unloaded) theta_2 = acos(min(r_loaded, r_unloaded) / r_unloaded) area_tread_forced_air = r_unloaded * (theta_1 - theta_2) * tire_width area_tread_contact = tire_width * 2 * sqrt(max(r_unloaded^2 - r_loaded^2, 0)) q_friction = p_friction * vx * (abs(fytire * tan(alpha)) + abs(fxtire * kappa)) q_tread2ambient_forcedconv = p_forcedconv * h_forcedconv * area_tread_forced_air * (t_tread - t_ambient) * vx^0.805 q_tread2ambient_natconv = p_natconv * h_natconv * (area_tread - area_tread_contact) * (t_tread - t_ambient) q_tread2carcass = p_tread2carcass * h_tread2carcass * area_tread * (t_tread - t_carcass) q_carcass2air = p_carcass2air * h_carcass2air * area_tread * (t_carcass - t_air) q_carcass2ambient_natconv = p_natconv * h_natconv * area_sidewall * (t_carcass - t_ambient) q_tread2road = p_tread2road * h_tread2road * area_tread_contact * (t_tread - t_track) q_deflection = p_deflection * h_deflection * vx * abs(fztire) q_air2ambient = p_air2ambient * h_natconv * area_rim * (t_air - t_ambient) du[1] = der_t_tread = (q_friction - q_tread2carcass - q_tread2road - q_tread2ambient_forcedconv - q_tread2ambient_natconv)/(m_tread * cp_tread) du[2] = der_t_carcass = (q_tread2carcass + q_deflection - q_carcass2air - q_carcass2ambient_natconv)/(m_carcass * cp_carcass) du[3] = der_t_air = (q_carcass2air - q_air2ambient)/(m_air * cp_air) end
would be fine for your problem (just like the other cases)would be fine for your problem (just like the other cases)
function explicit(y,p,t) sqrt(1-y^2) end
[slack] <torkel.loman> Is there something I can do to help solve ?
Happy to do some digging, but unsure what I should be looking for.
((((((1.0 * α₁ * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) / (α₂ + (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0))))))))) - (((1.0 * α₄) * x₁(t)) / (α₅ + x₁(t)))) - ((x₁(t) * α₆₂) / 0.1048)) + (((α₆₅ * x₃(t)) / ((α₆₆ + x₃(t)) * ((0.1048 * ((x₁(t) + x₇(t)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * 2))) / α₇₁))) / 0.1048)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * α₂₃)) - ((x₁(t) * x₅(t)) * α₂₄)
ERROR: Failed to apply rule ~~(z::_isone) * ~~x => ~x on expression (1.0 * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) * α₁
[slack] <Peter J> My code does a lot of parameter-parallel ODE solving, and I'd like to do them on the GPU, but since a lot of julia is not supported on the GPU by DiffEqGPU (broadcast, matrix multiply...) would this idea work?
Given an ode described by
f, an IC
u_0 and a list of parameters
[p_1, p_2,... p_n]. Create a new ode function
big_f(du,u,p,t) , and
w_0. Where
w_0 is a
cuarray consisting of
u_0 concatenated
n times, and
big_f applies
f seperately to each copy of
u_0, each with a different
p_i. | https://gitter.im/JuliaDiffEq/Lobby?at=5f97d87b6c8d484be2aed095 | CC-MAIN-2020-50 | refinedweb | 1,157 | 55.44 |
My form receives data via POST. When I do
puts params
{"id" => "123", "id2" => "456"}
puts params['id'] # => 123
puts params[:id] # => 123
params['id'] = '999'
puts params # => {"id" => "999", "id2" => "456"}
params[:id] = '888'
puts params
{"id" => "999", "id2" => "456", :id => "888"}
params
# => {"id2"=>"2", "id"=>"1"}
params[:id]
# => nil
params['id']
# => "1"
:id
Hashes in Ruby allow arbitrary objects to be used as keys. As strings (e.g.
"id") and symbols (e.g.
:id) are separate types of objects, a hash may have as a key both a string and symbol with the same visual contents without conflict:
irb(main):001:0> { :a=>1, "a"=>2 } #=> {:a=>1, "a"=>2}
This is distinctly different from JavaScript, where the keys for objects are always strings.
Because web parameters (whether via GET or POST) are always strings, Sinatra has a 'convenience' that allows you to ask for a parameter using a symbol and it will convert it to a string before looking for the associated value. It does this by using a custom default_proc that calls
to_s when looking for a value that does not exist.
Here's the current implementation:
def indifferent_hash Hash.new {|hash,key| hash[key.to_s] if Symbol === key } end
However, it does not provide a custom implementation for the
[]=(key, val) method, and thus you can set a symbol instead of the string. | https://codedump.io/share/6fJ2Ot2AAude/1/reading-and-writing-sinatra-params-using-symbols-eg-paramsid | CC-MAIN-2017-09 | refinedweb | 228 | 67.49 |
Chain object
[CAPICOM is a 32-bit only component that is available for use in the following operating systems: Windows Server 2008, Windows Vista, and Windows XP. Instead, use the X509Chain Class in the System.Security.Cryptography.X509Certificates namespace.]
The Chain object represents a certificate trust chain.
This object provides properties and methods to build a certificate trust chain to check the validity of certificates. The chain is built using the CertificateStatus.CheckFlag property value and the policy settings of a CertificateStatus object.
The Chain object exposes the following interfaces:
- IChain2: Introduced in CAPICOM 2.0.
- IChain: Introduced in CAPICOM 1.0.
When to use
The Chain object is used to perform the following tasks:
- Build a certificate trust chain.
- Obtain the OIDs of all the certificate and application policies valid for the chain.
- Verify the status of the certificates in the chain.
- Obtain extended error information.
- Retrieve the collection of certificates in the chain.
The Chain object has these types of members:
Methods
The Chain object has these methods.
Properties
The Chain object has these properties.
Remarks
The Chain object can be created, and it is safe for scripting. The ProgID for the Chain object is CAPICOM.Chain.2.
CAPICOM 1.x: The ProgID for the Chain object is CAPICOM.Chain.1.
Requirements
See also
Send comments about this topic to Microsoft
Build date: 10/26/2012 | http://msdn.microsoft.com/en-us/library/aa377611(v=vs.85).aspx | CC-MAIN-2013-20 | refinedweb | 228 | 60.01 |
automodinit 0.16
Solves the problem of forgetting to keep __init__.py files up to dateautomodinit v0.16 5th March 2017:
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Niall Douglas
See for latest version
Go to to report bugs
This package fixes a small problem which has been bugging me throughout
years of python development: forgetting to keep a module's __init__.py
up to date with new files added. This causes the following, irritating
problems:
1. Test suites don't find docstring tests.
2. Static analysis tools don't see some module content in __all__.
3. Things which scan themselves for plugins mismatch what os.listdir()
returns as against what the module import table has.
4. I waste time over something which should take care of itself.
5. os.listdir() based solutions tend to fail when freezed into
an executable binary because they don't understand running from
inside a ZIP archive.
So here's how to make the problem go away forever:
1. Include the automodinit package into your setup.py dependencies.
2. Replace all __init__.py files like this:
__all__ = ["I will get rewritten"]
# Don't modify the line above, or this line!
import automodinit
automodinit.automodinit(__name__, __file__, globals())
del automodinit
# Anything else you want can go after here, it won't get modified.
3. That's it! From now on importing a module will set __all__ to
a list of .py[co] files in the module and will also import each
of those files as though you had typed:
for x in __all__: import x
Therefore the effect of "from M import *" matches exactly "import M".
Customising:
-=-=-=-=-=-=
automodinit can take the following additional parameters:
filter: This is a callable which will be passed a list of tuples
(loader, modulename, ispkg) which is the output of
pkgutil.iter_modules() for the calling module. Return only
those which you want to be imported.
importFindings: Defaults to True. Set to False to not auto-import
the contents of __all__.
Version history:
-=-=-=-=-=-=-=-=
* v0.16 5th Mar 2017
* Fixed stripping of __init__.py file encoding. Thanks to wtyerogers
for reporting this.
* Removed suggestion this is the smallest package on pypi. Thanks to
asl97 for reporting this.
* Tell PyPi we are under the MIT licence. Thanks to njwhite for
reporting this.
* v0.13 9th Feb 2013
* Fixed a bug where the source distribution would fail to install due
to not including distribute_setup.py. Thanks to kanzure for reporting this.
* v0.12 5th Mar 2012
* Fixed a bug where isinstance would occasionally fail. Turns out the
pkgutil loading mechanism doesn't check to see if the module is already
loaded, so it was loading duplicates whose types wouldn't compare.
* v0.11 5th Mar 2012
* Fixed some typos in Readme.txt
* Typically what worked before packaging did not work after. Fixed!
* v0.10 5th Mar 2012
First release
- Author: Niall Douglas
- Bug Tracker:
- License: MIT
- Categories
- Package Index Owner: ned14
- DOAP record: automodinit-0.16.xml | https://pypi.python.org/pypi/automodinit/ | CC-MAIN-2018-09 | refinedweb | 486 | 68.16 |
Iomega StorCenter px12-400r User Guide D
Iomega StorCenter px12-400r User Guide D
Table of Contents

Setting up Your Device
    Setup Overview
    Set up My Iomega StorCenter If It's Not Discovered
        Discovering with Iomega Storage Manager
        Discovering the Iomega device without the Internet
    Setup Page
    Network Connection
        Connecting the Iomega StorCenter px12-400r to Your Network
        Network Settings
        Manually Configuring the Network
        Bonding NICs
        VLAN Settings
    Naming Your Iomega StorCenter px12-400r
    Configuring Your Iomega StorCenter px12-400r to Use Active Directory
        Enabling Active Directory Trusted Domains
    Obtaining Email Alerts About Your Iomega StorCenter px12-400r
    Using Your Iomega StorCenter px12-400r in Various Time Zones
    Setting the Display Language for Your Iomega StorCenter px12-400r
    Printing Documents
Setting up Personal Cloud, Security, and File Sharing
Sharing Files
    Sharing Overview
    Interfaces for Sharing
    Shares
        What are Shares and How Do I Organize Content with Them?
        Adding Shares
        Managing Shares
        Deleting Shares
    Using Protocols to Share Files
        What Are Protocols and How Do I Use Them to Share Files?
        AFP File Sharing for Macs
        Bluetooth File Sharing
        FTP File Sharing
        NFS File Sharing
        rsync: Synchronizing Files with Another Storage Device or Other Computers
        TFTP
        Monitoring Your Device with an SNMP Management Tool
        WebDAV: Managing Files Using HTTP or HTTPS
        Windows DFS: Creating a Distributed Windows File System
        Windows File Sharing
    Sharing Content through the Home Page
        Sharing Your Content with the World
        Adding a Custom Home Page
    Automatically Sending Content to Multiple People at Once
        How to Set Up an Email Distribution Active Folder
    Sharing Content Using Social Media: Overview
Managing Your Content
    Transferring Content to and from Your Iomega StorCenter px12-400r with Copy Jobs
    Copy Jobs Limitations
    Getting Content from a USB External Storage Device
    Safely removing external storage
iSCSI: Creating IP-Based Storage Area Networks (SAN)
    iSCSI Overview
    Adding iSCSI Drives
    Enabling iSCSI Drives
    Connecting to iSCSI Drives
    Managing iSCSI Drives
    Creating iSCSI Drives and Adding Them to Volumes
    Changing Access Permissions
    Deleting iSCSI Drives
Storage Pool Management
    Understanding How Your Content Is Stored
        Storage Pools
        Volumes
    Adding and Managing Storage Pools
        Cache Pools
        To add a Storage Pool
    Managing Drives
        Setting Write Caching
        Applying Global Drive Management Settings
        Drive status
    Modifying a Storage Pool
    Adding and Managing Volumes
        Shares in Volumes
        To add a new volume
    Deleting a Storage Pool
    Changing RAID Protection Types
    Adding New Drives to Your Iomega StorCenter px12-400r
Drive Management
    Managing Drives
        Setting Write Caching
        Applying Global Drive Management Settings
        Drive status
Backing up and Restoring Your Content
    Backup and Restore Overview
    Backup of Data through RAID Protection
    Backing up to and Restoring from Your Device
    Backing up Macs with Time Machine
    Copy Jobs Overview
    Backing up Your Device
        Copy Jobs
        Backing up with Mozy Backup
        Restoring Files with Mozy Backup
        Registering with Avamar for Backup and Restore
        Backing up with Amazon S3
        Restoring Files with Amazon S3
        Backing up with Iomega Personal Cloud
        Restoring Files with Personal Cloud
Securing Your Device and Contents
    What Is Security and Do I Need It?
    Enabling Security and Creating an Administrator User
    Disabling Security
    Limiting Access to Your Content by Creating Users
Users
    Users Overview
    Adding Users
    Managing Users
    Deleting Users
Groups
    Groups Overview
    Adding Groups
    Managing Groups
    Deleting Groups
Using Active Directory Domain to Manage Users and Groups
    Active Directory Users and Groups Overview
    Managing Users and Groups with Active Directory
    Deleting Active Directory Users and Groups
Personal Cloud: Accessing Your Device From Anywhere in the World
    What Is an Iomega Personal Cloud?
    Iomega Personal Cloud Key Terms
    Is My Content Secure?
    Iomega Personal Cloud Setup Overview
    Creating an Iomega Personal Cloud
    Configuring Router Port Forwarding for Personal Cloud
        Router Port Forwarding
    Configuring Your Iomega Personal Cloud
        Enabling Internet Access to the px12-400r
        Changing Personal Cloud Settings
    Inviting People onto Your Iomega Personal Cloud
    Joining a Trusted Device to an Iomega Personal Cloud
    Managing Trusted Devices on a Personal Cloud
        Disconnecting Trusted Devices
        Deleting Trusted Devices
    Using Copy Jobs with an Iomega Personal Cloud
    Disabling or Deleting Your Iomega Personal Cloud
    Accessing Content Using Your Iomega Personal Cloud
    Informing Users What to Do with Iomega Personal Cloud
Sharing Content Using Social Media
    Sharing Content Using Social Media: Overview
    Facebook
    Flickr
    YouTube
    Share Content through Iomega Personal Cloud
Media Management
    Media Management Overview
        Scanning for media content
        Media Services Capabilities and Limitations
    Sharing Media Content over the Internet
        Enabling Internet Access from the Media Server Page
    Media Aggregation
        Enabling Media Aggregation
    Social Media Sharing
    Streaming Music, Movies, and Pictures
        Example: Setting up iTunes
        Example: Setting up Xbox
Photos
    Photos Overview
    Streaming Pictures
    Creating a Slideshow on the Device Home Page
    Automatically Resizing Your Photos
    Getting Pictures from Your Camera
Music
    Music Overview
    Streaming Music
Torrents
    Torrent Overview
    Enabling Torrent Downloads
    Deleting torrent jobs
    Configuring Your Router for Torrent Downloads
    Torrent Active Folders
Videos
    Video Capabilities Overview
    Streaming Movies
Adding Applications to Your Device
    Application Overview
    Application Manager
        Starting or stopping an application
        Adding applications
        Removing applications
Upgrading Your Device
    Software Updates
        Manual update process: installing a device software update
Recovering Your Device Configuration
    Copying Your Iomega StorCenter px12-400r Settings to Other Devices
    Backing up Device Configuration
    Restoring a Configuration Backup
Hardware Management
    About the Iomega StorCenter px12-400r Components
        Front Panel
        Drive Bay Detail
        Rear Panel
    Energy Saving
        Power Down Drives
        Brightness
        Wake On LAN
    Creating A Power Schedule
    Factory Reset
    UPS Management
    Adding New Drives to Your Iomega StorCenter px12-400r
Troubleshooting Routers
    Enabling the DMZ
    Configuring Port Forwarding on Double NAT Networks
        Bridging the Secondary Router
        Bridging the Primary Router
Additional Support
    How to Get Help
    Support
Legal
    px12-400r Trademark Page
    Regulatory Information
        FCC (United States)
        Canadian Verification
        CE (European Community)
        Manufacturer/Responsible Party
        EU Representative
    Safety Information
    Limited Warranty
        Drives and Media
        Coverage
        Excluded Products and Problems
        Remedies
        Obtaining Warranty Service
        Limitations
    Open Source
Setting up Your Device

Setup Overview

Setup with your Iomega StorCenter px12-400r is easy. Remove it from the box, connect it to your network or computer, and power it up. Then launch a web browser and enter the setup URL identified in the Quick Install Guide. Iomega Setup launches and displays a message that your px12-400r is online and ready to use. You can then install client software that includes:

Iomega Storage Manager
Twonky Media Server for media aggregation
Iomega QuikProtect for backups

Iomega Storage Manager is a management tool that helps you discover your px12-400r on your network to simplify access to content on your px12-400r from your computer. It also allows you to add your computer as a trusted device to an Iomega Personal Cloud. Refer to the Iomega Storage Manager online help for additional information. Twonky Media Server consolidates all media files on devices on your network and presents them in a unified view. Iomega QuikProtect offers file backup of your computer to an Iomega storage device.

From Iomega Setup, you can optionally create a Personal Cloud or begin using your px12-400r by clicking Manage My Device.

How do I...
set up my px12-400r if it's not discovered
create an Iomega Personal Cloud
set up media aggregation
Set up My Iomega StorCenter If It's Not Discovered

If, after you enter the setup URL identified in the Quick Install Guide, your Iomega device is not discovered, you have two ways of discovering it.

Discovering with Iomega Storage Manager

You can install Iomega Storage Manager from Iomega Setup, which helps you discover your Iomega device on your network.

Discovering the Iomega device without the Internet

You can access your Iomega device without internet access using these methods for Windows PCs or Macs:

Windows 7 and Vista: Click Start, Computer, Network. Under Other Devices, you should see your Iomega device listed. For example, if you have an Iomega device, you can double-click the device labeled Iomega device, and you will see the Iomega StorCenter Console for the Iomega device.

Windows XP: If you have not enabled UPnP Discovery, click Start, Help and Support. In the Help and Support browser, search for UPnP and follow the steps from Install the UPnP framework. After UPnP is enabled, open Windows Explorer and, in the Folders view, expand My Network Places. You should see your Iomega device listed. For example, if you have an EZ Media device, you can double-click the device labeled IomegaEZ, and you will see the Iomega StorCenter Console for the Iomega device.

Mac: Browse to your Iomega device through Finder, Shared, All, and use Go, Connect to Server to connect to Shares on your Iomega device.

How do I...
set up my Iomega device
install Iomega Storage Manager
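If the device still does not appear, it can help to type the access paths by hand. The sketch below is a small POSIX-shell helper, offered purely as an illustration (it is not part of the product, and "px12-400r" is a placeholder for your device's actual name); it prints the browser, Finder, and Windows Explorer paths described above:

```shell
#!/bin/sh
# Sketch: print the access paths for a given device name.
# "px12-400r" is a placeholder; substitute your device's actual name.
device_paths() {
    name="$1"
    printf 'Browser:  http://%s.local/\n' "$name"   # Console by mDNS name
    printf 'Finder:   smb://%s.local\n' "$name"     # Mac: Go > Connect to Server
    printf 'Explorer: \\\\%s\n' "$name"             # Windows UNC path
}

device_paths "${1:-px12-400r}"

# To actually probe the device once you know its name:
# ping -c 2 px12-400r.local
```

Note that the .local name relies on mDNS (Bonjour), which Macs have by default; on Windows systems without Bonjour, use the UNC path or the device's IP address instead.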
Setup Page

The Setup page opens when you first access the Iomega StorCenter px12-400r from the Home Page or the Iomega Storage Manager. On this page, you can configure some basic device features by clicking the appropriate link. The current setting of the feature displays above the link. You can also configure all features shown on the Setup page by accessing the specific features directly.
Network Connection

Connecting the Iomega StorCenter px12-400r to Your Network

First, check the package contents. Verify that the box contains the following items:

1. px12-400r (models may vary)
2. Quick Start Guide
3. Power Cables
4. Ethernet Cable
5. Rail kit (models may vary)

Connecting the px12-400r: Initial Setup

If you have purchased more than one px12-400r, complete all steps on one device before setting up additional devices.

1. Use the included network cable to connect the px12-400r to a network hub or switch.

2. Connect the included power cords to the power supply connectors on the back of the px12-400r and to an Uninterruptible Power Supply (UPS). This provides redundant power for your px12-400r.

3. Power on the px12-400r.

4. From a computer on your network, open a web browser and go to the setup URL identified in the Quick Start Guide to set up your px12-400r on your network. For best results, use a computer that is connected to the same subnet or network segment as the px12-400r.

NOTE: You can access the Iomega StorCenter px12-400r Console directly by entering the IP address or model name of your px12-400r in your computer's web browser. To use the model name on a Mac, add .local after the name in the browser (for example, px12-400r.local).

5. OPTIONAL: If desired, install the Iomega Storage Manager, QuikProtect, and Media Aggregation software. If you install Iomega Storage Manager, its icon will appear in the System Tray (Windows) or Menu Bar (Mac). Iomega Storage Manager will automatically scan your network and connect to available Shares. If you receive a message from your operating system's firewall alerting you of network activity, be sure to unblock communications.

Mac Users: Shares on the px12-400r will mount and appear on the Mac Desktop.

PC Users: Shares on the px12-400r will automatically be assigned a drive letter and will be available in the Network Drives section under My Computer.

How do I...
view information about my device components
Network Settings

The Network page of your px12-400r is where you make changes to set up network connectivity. The Network page displays your current network settings and enables those settings to be modified. On this page, you can identify your DNS servers and WINS servers and how your system's IP address is determined. Most system IP addresses and other network settings can normally be configured automatically.

Manually Configuring Your Network

If you are comfortable with network technology and want to configure the network, refer to Manually Configuring the Network.

Bonding NICs

If your px12-400r has multiple network interface cards (NICs), you can bond those NICs. Refer to Bonding NICs.

Enabling Jumbo Frames for Each NIC

You can enable jumbo frames for each NIC in your px12-400r by expanding the Information section for a NIC and selecting a jumbo frame size from the Jumbo Frame drop-down menu. Valid jumbo frame sizes are 4,000 or 9,000 bytes. If you do not want jumbo frame support, select None from the Jumbo Frame drop-down menu.

Jumbo frame support is useful for transferring large files, such as multimedia files, over a network. Jumbo frame support increases transfer speed by placing large files in fewer data packets. It also reduces the demand on the device hardware by having the CPU process more data in fewer data packets. Jumbo frame support should only be enabled if you are sure your network is jumbo-frame compatible and all network devices have been configured to support jumbo frames. It is recommended that you confirm all network interface cards (NICs) are configured to support jumbo frames before enabling this feature.

VLAN Settings

Each NIC in your px12-400r can be added to up to four Virtual LANs (VLANs). For more information on adding a NIC to a VLAN, refer to VLAN Settings.
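One common way to confirm that the whole path can carry jumbo frames is to send a full-size, unfragmented ping. The POSIX-shell sketch below is an illustration under stated assumptions: the -M do flag is the Linux ping option that forbids fragmentation, the 28-byte deduction is the IPv4 plus ICMP header overhead, and px12-400r.local is a placeholder name. It prints the probe command for review rather than running it:

```shell
#!/bin/sh
# Sketch: derive the largest unfragmented ping payload for a given MTU
# and print a probe command. An MTU of 9000 minus the 20-byte IPv4
# header and 8-byte ICMP header leaves an 8972-byte payload.
jumbo_probe() {
    mtu="$1"; host="$2"
    payload=$(( mtu - 28 ))
    # -M do (Linux ping) forbids fragmentation, so any hop that cannot
    # carry the frame fails loudly instead of silently fragmenting.
    echo "ping -M do -s ${payload} -c 3 ${host}"
}

jumbo_probe 9000 px12-400r.local   # prints: ping -M do -s 8972 -c 3 px12-400r.local
jumbo_probe 4000 px12-400r.local   # prints: ping -M do -s 3972 -c 3 px12-400r.local
```

If the printed probe fails when run against a host on your network, some device in the path is not passing jumbo frames, and the feature should stay disabled.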
Manually Configuring the Network

There are various settings in the network setup that you can manually configure.

1. Click Modify network settings.

2. Uncheck Automatically configure DNS, WINS, and all IP addresses (DHCP).

3. DNS Servers: enter the IP addresses of the DNS (Domain Name System) servers. DNS is used for translating domain names to IP addresses.

4. WINS Servers: enter the IP addresses of the WINS server.

5. From the Bonding Mode drop-down menu, choose one of the following:

Transmission Load Balance: increases bandwidth by distributing the load across multiple NICs.
Link Aggregation: increases bandwidth by distributing the load across multiple ports in a switch.
Failover: provides recovery from a failure, so if one NIC should fail, your system still has network connectivity through the other NIC.

6. Click Apply to save your settings. If a DHCP server is unavailable for a network interface card (NIC), the device can auto-assign an IP address, or you can uncheck the Automatically acquire network address (DHCP) checkbox found in the Information section of a NIC.

7. You can change the following settings in the Information section:

IP Address: the static IP address of the px12-400r. Use an available IP address in the range used by the LAN.
Subnet Mask: the subnet that the IP address belongs to.
Gateway: enter the gateway IP address in this field.

8. Click Apply to save your settings.

VLAN Settings

Each NIC in your px12-400r can be added to up to four Virtual LANs (VLANs). For more information on adding a NIC to a VLAN, refer to VLAN Settings.

Bonding NICs

If your px12-400r has multiple network interface cards (NICs), you can bond those NICs. Refer to Bonding NICs.
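Step 7 asks for a static IP address in the range used by your LAN. One way to sanity-check a candidate address is to confirm it falls on the same subnet as the gateway. The POSIX-shell sketch below is only an illustration; the 192.168.0.x addresses and the 255.255.255.0 mask are placeholder values, not defaults taken from the device:

```shell
#!/bin/sh
# Sketch: check whether a candidate static IP shares a subnet with the
# gateway. Addresses and mask below are placeholders, not device defaults.

ip_to_int() {
    # Split a dotted-quad into octets (octets must not have leading zeros).
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

same_subnet() {
    mask=$(ip_to_int "$3")
    # Two addresses are on the same subnet when their network parts match.
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

if same_subnet 192.168.0.50 192.168.0.1 255.255.255.0; then
    echo "candidate is on the gateway's subnet"
else
    echo "candidate is NOT on the gateway's subnet"
fi
```

A candidate that fails this check would leave the device unable to reach its gateway after the change is applied.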
Bonding NICs

Bonding network interface cards (NICs) is a way to provide redundancy for your px12-400r on the network. If one NIC should fail, your px12-400r will remain accessible on the network if that NIC is bonded to others. You can bond two or more NICs in your px12-400r by selecting the NICs and clicking Apply.

Use the following procedure to bond NICs:

1. On the Network page, expand the NIC number and then expand the Bond Network Interface section.

2. Check the checkboxes next to the NICs that you want to bond to the selected NIC. For example, if you selected NIC 1, and your configuration includes four NICs, you could bond NIC 1 to NIC 2, 3, and/or 4.

3. Click Apply to save your settings.

Unbonding NICs

The section updates and displays the NICs that are bonded to the selected NIC.

1. To unbond a NIC, uncheck the box next to the bonded NIC.

2. Click Apply to save your settings.
VLAN Settings

A VLAN (Virtual Local Area Network) is a network of devices that are joined into one broadcast domain, even if the devices are not physically connected to each other. VLANs are useful for creating smaller networks within a larger LAN; for example, a legal department in a company might be on its own VLAN because it has sensitive documents that only certain personnel should have access to. The smaller networks that VLANs create do not require any additional physical resources, such as additional cabling.

Your Iomega StorCenter px12-400r can be configured to support VLANs. VLAN support is configured for each NIC, but it is not supported on bonded NICs. If a NIC is bonded, you must unbond it first to configure it for a VLAN.

Adding a VLAN

1. To add a VLAN, expand the VLAN Settings section of a NIC.

2. Click Add VLAN.

3. Enter a VLAN ID value of 2 or greater. You can enter up to 4 VLAN IDs for each NIC. A VLAN can obtain its network settings from DHCP, or you can uncheck DHCP and enter the IP address, subnet mask, and gateway manually.

4. Refer to Network Settings Overview for information about jumbo frames.

5. Click Apply to save your changes.

Deleting a VLAN

In the VLAN Settings section, click Delete to delete the VLAN.
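For a client to reach a NIC that has been placed on a tagged VLAN, the client's own interface (or its switch port) must carry the same tag. As an illustration only, the POSIX-shell sketch below prints the commands a Linux client would typically use; the parent interface eth0 and VLAN ID 100 are hypothetical values, not settings taken from the device:

```shell
#!/bin/sh
# Sketch: print the commands a Linux client would use to tag its own
# traffic for the device's VLAN. eth0 and VLAN ID 100 are hypothetical.
vlan_cmds() {
    parent="$1"; vid="$2"
    echo "ip link add link ${parent} name ${parent}.${vid} type vlan id ${vid}"
    echo "ip link set ${parent}.${vid} up"
}

vlan_cmds eth0 100
```

On a managed switch, the equivalent step is marking the client's port as a tagged (trunk) member of the same VLAN ID.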
Naming Your Iomega StorCenter px12-400r

You can provide a meaningful name for your px12-400r using the Device Identification page. This page in the Iomega StorCenter px12-400r Console enables you to change the Storage Device Name, the Storage Device Descriptive Name, and the Workgroup Name. Change any of these by editing the text fields, then click Apply to save your changes.

Device Name: Enter a name for the Iomega device. Use a name that will help you identify it on your network.

Device Descriptive Name: Enter a descriptive name for the Iomega device. This name can provide additional detail that identifies the device.

Workgroup Name: Enter a workgroup name for the Iomega device if you need to change the default name. The workgroup name identifies a group of computers that share information with each other. Most users won't need to change the workgroup name unless they have explicitly defined a different workgroup on their network.

How do I...
enable security
Configuring Your Iomega StorCenter px12-400r to Use Active Directory

If you have an existing Active Directory user organization, you can incorporate it into the Iomega StorCenter px12-400r Console.

Note: When you configure Active Directory, you enable security on your px12-400r.

1. To configure Active Directory, manually add the px12-400r to your DNS server, then point the px12-400r DNS setting to your DNS server: on the Network page, uncheck Automatically configure all network settings, type the IP address of your DNS server in the text box, and click Apply to save your settings.
2. Configure the px12-400r to join the Active Directory domain by selecting Active Directory mode.
3. Provide the following connectivity information:
Domain Name: the actual name of your Active Directory domain, for example, sohoad.com.
Domain Controller: the actual name or IP address of your Active Directory server, for example, ad-server.sohoad.com.
Organizational Unit: an optional predefined subset of directory objects within an Active Directory domain.
Administrator Username: the Active Directory username with domain administrator privilege.
Administrator Password: the Active Directory password for the specified Active Directory username.
Users/Groups Refresh Interval: how often the px12-400r should refresh the list of available users and groups from the Active Directory server.
Enable Trusted Domains: enables your px12-400r to allow access to other domains.
4. Click Apply to save your settings.

Enabling Active Directory Trusted Domains

By enabling Active Directory trusted domains on your px12-400r, you enable the importing of users and groups from other trusted domains to your px12-400r device.
Those users and groups from other domains will then have access to features on your px12-400r, including accessing folders and documents in Shares, and joining any Personal Cloud of which the device is a member. Once you have enabled access to all trusted domains, you can add users and groups from those trusted domains to your px12-400r. For more information, refer to Manage Users and Groups with Active Directory.

How do I...
enable security
Obtaining Email Alerts About Your Iomega StorCenter px12-400r

You can configure your px12-400r to send email alerts when problems are detected. This is done through the email notification feature, which provides a destination for emails sent by the px12-400r when problems are detected.

To provide a destination address, enter the following information:

Destination Email Addresses: enter a valid email address or addresses. This address provides a destination for email messages sent by the px12-400r when problems are detected by the system. You can add multiple email addresses by separating them with commas, spaces, or semicolons.

Check Send a test message to confirm that email notification is working properly.

Check Configure custom SMTP settings only if your network blocks SMTP traffic and requires additional credentials, as with a corporate firewall. Most users will not need to check this option. If checked, enter the following additional information to identify your SMTP server:

Email Server (SMTP): enter the address of your SMTP server.
Sender Email Address: enter an email address for the px12-400r to use as the From address when it creates messages.
Email Login: enter the username used to log into the email account you entered above.
Email Password: enter the password for the email account.
Confirm Password: confirm the password for the email account. It must match the password provided above.

Note: If your email application uses a spam blocker, it is recommended that you add the sender email address to your safe list. If you do not define additional credentials, a default sender address is used.

Click Apply to save your changes.
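The destination-address field accepts commas, spaces, or semicolons as separators, as noted above. A minimal sketch of that parsing rule (the function name is hypothetical; the device does this internally):

```python
import re

def parse_recipients(raw):
    """Split a destination-address field on commas, spaces, or
    semicolons, the separators the email notification page accepts."""
    return [addr for addr in re.split(r"[,;\s]+", raw.strip()) if addr]

print(parse_recipients("admin@example.com; ops@example.com,backup@example.com"))
```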
Using Your Iomega StorCenter px12-400r in Various Time Zones

You can set the date and time used on your px12-400r, so that it can appear to be in one time zone when it is actually in another.

1. To change time zones, select a Time Zone from the drop-down menu, and then select how time will be set for the px12-400r: Internet Time Server or Manual.
By default, Automatically synchronize with an internet time server and Use the default time server are selected. To specify a time server, select Specify the time server and type the URL of the internet time server you wish to use in the text box that displays.
To set the time yourself, select Manually set date and time, then click the appropriate icon for calendar and clock settings to set the current date and time.
2. Click Apply to save your changes.
Setting the Display Language for Your Iomega StorCenter px12-400r

You can set the display language for your px12-400r through the Languages page. The language used by the Iomega StorCenter px12-400r Console is based on the preferences configured in your browser; you can change the language used in this program by modifying your browser's preferred language settings.

Click Apply to save your changes.
Printing Documents

Printing documents from your Iomega StorCenter px12-400r is simple once you have attached a compatible printer. The Printers page displays a table of printers that are attached to the px12-400r; for each printer, the table lists the name, model, status, and number of documents waiting.

To attach a printer, simply plug a supported printer's USB cable into a USB port on the px12-400r. Once attached, the printer appears in the table. When the cable is unplugged, the printer is removed from the table.
Setting up Personal Cloud, Security, and File Sharing

After you have configured some basic features of your Iomega StorCenter, you may also want to set up an Iomega Personal Cloud, security, or file sharing.

You can set up a Personal Cloud to allow invited users access to content on your Iomega StorCenter. This content can be in private Shares that are exclusive to the users who join the Personal Cloud, which adds an additional layer of security to your content. In addition, you may want to join other trusted devices to the Personal Cloud so that content on those devices can be made available to Personal Cloud users. For more information about Personal Clouds, refer to the Personal Cloud overview.

You can enable security so you can secure Shares, create users, and allow selected features to be enabled. When you create users, you limit access to your Iomega StorCenter to those specific people, and when you secure Shares, you limit data access to those specific users. For more information on security, refer to What Is Security and Do I Need It?

It is recommended that you set up file sharing so that content can be added to your Iomega StorCenter and made available in a wide variety of ways, both to users of your Iomega StorCenter and to content features such as Active Folders and media sharing. For more information, refer to the Sharing Overview.

How do I...
create an Iomega Personal Cloud
set up security
set up file sharing
Sharing Files

Sharing Overview

Your Iomega StorCenter is set up for storing, retrieving, and accessing files among users, client computers, and applications. File sharing is accomplished by creating Shares; setting up security, which includes creating users; setting up media services; and configuring Active Folders.

Interfaces for Sharing

Your Iomega device has three separate interfaces for file sharing:

Iomega StorCenter Console: You manage the creation of Shares through the Iomega StorCenter Console.

Iomega Storage Manager: Optionally installed on your local computer, Iomega Storage Manager discovers any Iomega storage devices on your subnet, maps device Shares to computers, and provides local access to your content. It provides access to Shares through your computer's file management program, such as Windows Explorer or Mac Finder, allowing you to drag and drop files between your computer and the Iomega device.

Home Page: Serves as a web-accessible interface to your Iomega device. The Home page content is configured using the Iomega StorCenter Console. The Home Page displays any public Shares; it can also display secured Shares accessible only to users who log in to the Iomega device. You can access the Home page of your Iomega device by entering the device name or IP address directly in your browser. If security is enabled and you are an administrator user, you can access the Iomega StorCenter Console from the Home page.

How do I...
create Shares
enable security
create users
Shares

What are Shares and How Do I Organize Content with Them?

Shares are folders that contain all types of content, including documents, pictures, and music files. Shares can be public, meaning anyone accessing your px12-400r can access the content in them. Shares can also be secured, which means access to content in them is limited to a select group of users.

All Shares on a px12-400r are displayed on the Shares page. The Shares page displays a table that contains folders, connected drives, and any cloud storage to which your Iomega StorCenter px12-400r Console is connected. The Properties column displays the features that are enabled for each Share.

Share Information

The Information section displays the Share name, graphically displays the space usage of the Share, and allows you to view the content using the web-based content viewer. To view the content of a Share, click View Content to open the Content Viewer. To learn how to modify your Share information, refer to Managing Shares.

Access Permissions

The Access Permissions section contains a list of users who currently have access to that Share. Access Permissions displays only when the px12-400r is secured; otherwise the section is not included in the Share. If Everyone has access to a Share, your content can be viewed by anyone with access to your network, without that person needing a username or password. To learn how to modify Access Permissions on a Share, refer to Managing Shares.

Active Folders

Follow the link to the Active Folder options for information on configuring each:
Email Distribution
Facebook
Flickr
Photo Resize
Torrents
YouTube

How do I...
add a Share
manage a Share
delete a Share
Adding Shares

1. From the Iomega StorCenter Console, click Shares.
2. To add a new Share, click Add a Share, then type a name for the Share. All Shares must have a name, and names cannot exceed 32 characters. The following are not valid Share names: global, homes, printers.
3. Click Create.

To modify an existing Share, click the Share row to expand the Share.

How do I...
manage Shares
delete Shares
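The Share-naming rules above — a name is required, at most 32 characters, and global, homes, and printers are reserved — can be checked with a tiny helper. This is a sketch (the Console enforces these rules itself); whether the reserved-name check ignores case is an assumption.

```python
RESERVED_SHARE_NAMES = {"global", "homes", "printers"}

def validate_share_name(name):
    """Apply the Share-naming rules listed above."""
    if not name:
        raise ValueError("all Shares must have a name")
    if len(name) > 32:
        raise ValueError("Share names cannot exceed 32 characters")
    if name.lower() in RESERVED_SHARE_NAMES:  # assumption: check ignores case
        raise ValueError(name + " is not a valid Share name")
    return name

print(validate_share_name("Photos"))
```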
Managing Shares

You can change Share information, change access permissions, make a Share an Active Folder, use Share volumes, and modify a Share volume. If available, you can also enable NFS secured access.

Changing Share Information
1. Modify the existing name for the Share.
2. Choose whether to enable media sharing. When media sharing is enabled, the media server scans this Share for any media content and makes it available to anyone with access to your network, even if this Share is secured. If you do not want media content made available to anyone, do not check this option. When media sharing is enabled, a media icon displays in the Properties for that Share.
3. To view the content of a Share, click the View Content link to open the Content Viewer.
4. Click Apply to save your changes.

Changing Access Permissions

Note: You should enable security on your Iomega StorCenter before changing access permissions.

1. Expand Access Permissions to change user permissions to this Share. A security icon displays in Properties, indicating a secure Share. When a secure Share is first created, Everyone has read and write access to that Share by default, which means that everyone on your network can read, write, and delete files to and from that Share. When the user Everyone has Read and Write permissions to a Share, the Share is not secure and is open to all users.
2. Check Allow users to change file level security to allow file and folder permissions to be set through other programs, such as Windows Explorer, independent of the Iomega device. Setting this option allows users to put additional access restrictions on individual files and folders.
3. To limit access to this Share to a specific set of users, click Add access permissions and choose one or more users from the pop-up window.
4. In the Access Permissions section, check Read, Write, or both to set access to this Share for each user. To remove a user, leave both Read and Write unchecked for that user. If you grant Read and Write permissions to Everyone, the list of users is cleared, since all users (Everyone) have access to this Share. If you have created groups, you can also limit access for them in this way.
5. Click Apply to save your changes.

Enabling NFS Secured Access
1. To enable NFS, first turn on the NFS switch on the Protocols page.
2. On the Shares page, select a secure Share and expand the NFS section. You cannot apply a rule to a public Share.
3. Click Add an NFS rule to add a Host Name for the rule. Rules are added to specify the hosts that are allowed to access Shares using NFS; use the rules table to specify access for hosts. For example, *.cs.foo.com matches all hosts in the domain cs.foo.com. To export a Share to all hosts on an IP address or local network simultaneously, specify an IP address and netmask pair as address/netmask, where the netmask can be in dotted-decimal format or given as a contiguous mask length. For example, a dotted-decimal netmask and the equivalent mask length (such as /22) describe identical local networks.
4. When the rule is added, read access is automatically set to the Share. Select Write to allow users to write to that Share. Use the up and down arrows to modify the rule priority for NFS access.
5. Click Apply to save your changes.

Making a Share an Active Folder
1. You can optionally enable Active Folders on a Share to associate the Share with a specific action that happens automatically when files are copied to it. For example, you can enable a Share as a social media active folder to upload a file to a social media site; refer to Sharing Content with Social Media Overview. You can only set one Active Folder option per Share.
2. Expand the Active Folder section and check Enable. Select one of the following Active Folder options and follow the link for details on configuring each:
Email Distribution
Facebook
Flickr
Photo Resize
Torrents
YouTube
3. Click Apply to save your changes.

Using Share Volumes

When you create a Share, you can use an existing volume or create a new one for that Share. After the Share is created, you cannot move it to a different volume. You can modify the volume by changing its size, and you can determine whether the volume can automatically grow in size.

1. To set the volume for a Share as you are creating it, click Change volume allocation in the Information section.
2. Choose whether to use an existing volume or to create a new one. For more information on existing volumes, refer to Volume Management.
3. Select an existing Storage Pool in which to place the volume.
4. If you are selecting from more than one existing volume, select a volume from the Volume drop-down menu.
5. If you are creating a new volume, enter a name for the volume in the Volume text box.
6. Enter a size for the volume. You cannot reduce this size later.
7. Click OK.
8. In the Information section, click Apply to save your changes.

Modifying a Share Volume
1. Click Change volume allocation in the Information section.
2. Enter a new size for the volume.
3. Click OK.
4. In the Information section, click Apply to save your changes.

How do I...
add a Share
delete a Share
share content with social media
Deleting Shares

To delete a Share:
1. From the Iomega StorCenter px12-400r Console, click Shares.
2. Click the Share you want to delete to expand it.
3. In the Information section, click Delete to delete the Share.
4. In the Delete Share confirmation pop-up window, click Yes. If you do not wish to delete the Share, click Cancel to return to the Shares page.

How do I...
add a Share
manage a Share
Using Protocols to Share Files

What Are Protocols and How Do I Use Them to Share Files?

Your Iomega StorCenter px12-400r uses communication protocols to mount file systems and allow files to be transferred between client computers and the Iomega StorCenter. The px12-400r includes the following protocols for file sharing:
AFP
Bluetooth
FTP
TFTP
NFS
rsync
SNMP
WebDAV
Windows DFS
Windows File Sharing
AFP File Sharing for Macs

The Apple Filing Protocol (AFP) enables Apple file sharing, which is the preferred method for Mac users to access Shares. AFP is on by default. To enable AFP, click the switch on.
Bluetooth File Sharing

Once a Bluetooth adapter is detected, files can be uploaded from a Bluetooth device to a configurable destination Share on the px12-400r.

Configuring Bluetooth settings
1. To enable Bluetooth, click the switch on.
2. Once Bluetooth transfer is enabled, check the Enable security checkbox to require Bluetooth users to supply a unique PIN before allowing them to transfer files to the destination Share on the px12-400r. If you have enabled security, you must define a unique PIN, which must then be supplied by devices attempting to upload data using Bluetooth.
3. Set the destination Share.
4. Click Apply to save your settings.

Note: You can return to this section at any time to change Bluetooth settings.
FTP File Sharing

On the Protocols page, click the switch to turn on FTP (File Transfer Protocol) and allow access to your Iomega StorCenter px12-400r. You can select either FTP or secure FTP (SFTP), or both. You must enable security to apply SFTP, and if you select and enable SFTP, you cannot have the secure rsync protocol enabled. When you turn on FTP, you can send files to your px12-400r.
NFS File Sharing

On the Protocols page, click the switch to turn on NFS (Network File System) to allow remote hosts to mount file systems over a network and interact with them as though they were mounted locally to your Iomega StorCenter px12-400r.

Select an option to choose how users on client computers are mapped to the px12-400r:
To have all users, including root, map as guest, select Treat client users as guest (all_squash). All files are owned by user guest, and all users accessing the px12-400r have the same access rights. If you have enabled Active Directory on your px12-400r, only this option is available for mapping client computers.
To have all users map as themselves, but root map as guest, select Allow full access for client users other than root (root_squash).
To have all users map as themselves, including root, select Allow all client users full access.

Once enabled, add NFS access rules for each secure Share from the Managing Shares page. NFS provides another protocol for sharing storage data with Linux hosts. When NFS is enabled, you can configure rules for host-based access to secure Shares. Rules can be added to secure Shares to specify the hosts that are allowed to access Shares using NFS. For example, *.cs.foo.com matches all hosts in the domain cs.foo.com. To export a Share to all hosts on an IP address or local network simultaneously, specify an IP address and netmask pair as address/netmask, where the netmask can be in dotted-decimal format or given as a contiguous mask length. For example, a dotted-decimal netmask and the equivalent mask length (such as /22) describe identical local networks.

You can return to this page at any time to change NFS settings.
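The equivalence of the two netmask notations described above can be demonstrated with Python's standard ipaddress module. The guide's own example values were not preserved in this copy, so the sketch uses 255.255.252.0, which is the dotted-decimal form of a /22 mask; the function name is hypothetical.

```python
import ipaddress

def nfs_network(address, netmask):
    """Normalize an address/netmask pair from an NFS rule, accepting
    either a dotted-decimal netmask or a contiguous mask length."""
    return ipaddress.ip_network(f"{address}/{netmask}", strict=False)

# A dotted-decimal netmask and its mask length describe the same network:
assert nfs_network("192.168.0.0", "255.255.252.0") == nfs_network("192.168.0.0", "22")
print(nfs_network("192.168.0.0", "255.255.252.0"))  # 192.168.0.0/22
```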
rsync: Synchronizing Files with Another Storage Device or Other Computers

When you turn on this protocol, you enable the Iomega StorCenter px12-400r as an rsync server. When the px12-400r is an rsync server, it can be used as a source and/or destination device for rsync Copy Jobs. Because of the fast and efficient nature of rsync, an rsync Copy Job can be faster than a Windows File Sharing Copy Job. For more information on Copy Jobs, refer to Copy Jobs. If you enable the px12-400r as an rsync server, you can optionally set up a user account on the px12-400r for secure rsync Copy Jobs.

Configuring rsync server settings
1. To enable the rsync server, click the switch on.
2. To create a secure user account, check Configure secure rsync credentials.
3. The username is preset as rsync; you can change this to a more meaningful user account name. Enter and confirm a password for the rsync user account. When you create a secure rsync user account on the px12-400r, you allow other devices to securely copy to or from it.
4. By default, rsync uses TCP port 873 for accepting requests. You can change this value to a different port number, if desired.
5. Click Apply to save your settings.

Note: You can return to this page at any time to change rsync server settings. You cannot enable the rsync server if you have already enabled SFTP.
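On the client side, a pull from an rsync server on the default TCP port 873 amounts to an rsync invocation like the one assembled below. This is a sketch only: the hostname and Share name are hypothetical, and it assumes the device exposes each Share as an rsync module of the same name, which the guide does not state.

```python
def rsync_pull(server, share, destination, user=None, port=873):
    """Build the rsync command line for pulling a Share from an
    rsync server listening on the given TCP port (873 by default)."""
    host = f"{user}@{server}" if user else server
    return ["rsync", "-av", f"rsync://{host}:{port}/{share}/", destination]

print(rsync_pull("px12-400r.local", "Backups", "/tmp/restore", user="rsync"))
```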
TFTP

On the Protocols page, click the switch to turn on TFTP (Trivial File Transfer Protocol) and allow access to your Iomega StorCenter. When you turn TFTP on, you can send files to your Iomega device using TFTP.
Monitoring Your Device with an SNMP Management Tool

SNMP (Simple Network Management Protocol) provides information about the state of the Iomega StorCenter px12-400r to various management tools. SNMP should be disabled unless you are specifically providing information to a management system that requires it.

Configuring SNMP settings
1. To enable SNMP, click the switch on.
2. Enter a unique username and password to define the community.
3. Confirm your password.
4. Enter the IP address of the host in the Trap Receivers text box. To grant access to multiple receivers, list all of them in the text box, separating each entry with a space.
5. Click Apply to save your settings.

You can return to this page at any time to change SNMP settings.
WebDAV: Managing Files Using HTTP or HTTPS

WebDAV (Web-based Distributed Authoring and Versioning) is a protocol that provides web-based access to Shares on the Iomega StorCenter px12-400r. With WebDAV enabled on the px12-400r, you can view, add, or delete files through your WebDAV client using either HTTP for unencrypted access or HTTPS for encrypted access. HTTP offers faster performance but is not secured. Access Shares using a WebDAV URL; refer to your operating system's documentation to learn how to access files through WebDAV.

Note: If your px12-400r has a remote access password, you must enter that password and the username webdav to access your device. Your px12-400r has a remote access password only if the device is not secured and a Personal Cloud was created on it.

Configuring WebDAV settings
1. To enable WebDAV, click the switch on.
2. To enable WebDAV for HTTP, check Enable WebDAV Over HTTP.
3. To enable WebDAV for HTTPS, check Enable WebDAV Over HTTPS.
4. Click Apply to save your settings.
Windows DFS: Creating a Distributed Windows File System

Windows DFS (Distributed File System) organizes Shares and files on a network such that they appear to be all in one directory tree on a single px12-400r, even if the Shares reside on many devices.

Windows DFS terms

There are several terms to understand with Windows DFS:
Namespace: a virtual Share containing other folders that are located on different devices throughout a network.
DFS root: an object that consolidates all the folders in your network and makes them available through a single entry point. An example of a DFS root is \\DeviceName\DFSRootName.
DFS link: a folder under the DFS root.

Configuring Windows DFS settings
1. To enable Windows DFS, click the switch on.
2. Enter a DFS root name. The DFS root name is the starting point of a DFS namespace. After entering a DFS root name, you add DFS links, which map to folders on other devices.
3. Click "Click to add a DFS link target" to begin adding DFS links.
4. Enter the DFS link name, which includes the name of the host and Share to which you are linking.
5. Click Apply to save your settings, or click Cancel to discard your changes.
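The DFS terms above compose into ordinary UNC paths: device name, then DFS root, then any DFS links beneath it. A small sketch of that composition (all names here are the guide's placeholders or hypothetical):

```python
def dfs_path(device, dfs_root, *links):
    r"""Compose a UNC path from a device name, a DFS root, and any
    DFS links beneath it, e.g. \\DeviceName\DFSRootName\LinkName."""
    return "\\\\" + "\\".join((device, dfs_root) + links)

print(dfs_path("DeviceName", "DFSRootName"))              # \\DeviceName\DFSRootName
print(dfs_path("DeviceName", "DFSRootName", "Projects"))  # \\DeviceName\DFSRootName\Projects
```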
Windows File Sharing

Windows File Sharing allows you to work in Workgroup mode, using the Iomega StorCenter px12-400r Console to create users and manage access. To enable Windows File Sharing, click the switch on.
Sharing Content through the Home Page

Sharing Your Content with the World

When you set up the Home Page of your Iomega StorCenter px12-400r, you are presenting public content to anyone who accesses your px12-400r. That public content includes a slideshow and public Shares. You can manage the look of the Home Page using the Home Page Settings page, which allows you to display the slideshow, display public Shares, name the Home Page, and turn the Home Page on or off.

1. From the Iomega StorCenter px12-400r Console, click Home Page Settings.
2. Click the slider switch to On to enable the Home Page on your px12-400r.
3. Enter a title for the Home Page. This title displays in the top banner of the Home Page when users access the px12-400r.
4. Check Display Shares to display public Shares. When you select to display Shares, the user sees all public Shares on the px12-400r.
5. Check Display slideshows to display picture slideshows that are in folders on the px12-400r. Click Manage slideshows to configure any slideshows you want to display. The slideshow location can be any folder attached to the px12-400r, including a USB drive or DFS location.
6. Click Apply to save your changes, or click Cancel to discard your changes.

Deleting a Slideshow

To delete a slideshow, remove it from the list of available slideshows. After you delete a slideshow, you can configure a different one.

How do I...
create Shares
add custom home page content
Adding a Custom Home Page

You can customize the look of the home page of your Iomega StorCenter px12-400r to include HTML pages and client-side scripting, such as JavaScript. This customized home page replaces the default home page on the px12-400r. In addition, there are applications available that can enhance your home page content. You add your custom HTML content to a Share on your px12-400r and then specify its location on the Home Page Settings page.

Applying the Customized Home Page
1. Click the Home Page Settings feature from the Iomega StorCenter px12-400r Console.
2. On the Home Page Settings page, select Customized home page settings.
3. In the Home Page Name field, enter the name of the start page of your custom home page. By default, the name is index.html.
4. Specify the destination Share where the start page and your HTML content exist on your px12-400r by navigating to the Share.
Note: You cannot access the destination Share through the WebDAV interface; access through WebDAV is permanently disabled.
5. Select the Share name and click Apply.
6. Click Apply to save your settings.
Automatically Sending Content to Multiple People at Once

You can send content to multiple people at once using an email distribution active folder. You can configure a Share as an Active Folder so that when you add files to that Share, they are automatically emailed to the recipients on the email distribution list. To configure a Share as an Active Folder, access Shares from the Iomega StorCenter px12-400r Console, select or create a Share, and expand the Active Folders section to enable and configure email distribution.

How to Set Up an Email Distribution Active Folder

Note: Email distribution lets you email your files to friends and family right from your Iomega StorCenter px12-400r Console. Use email distribution to share files with an email list.

Note: To prevent email distribution list spamming, the px12-400r allows email lists of 250 or fewer recipients and sends a maximum of six emails in a 24-hour period.

Refer to Managing Shares for more information on managing Shares and Active Folders.

Configuring an Email Distribution Active Folder
1. From the Iomega StorCenter px12-400r Console, click Shares.
2. Select a Share to use as an Email Distribution Active Folder, and click to expand the Active Folder section.
3. Check Enable.
4. Select Email Distribution from the drop-down menu.
5. Include an email address in the Sender Email Address text box. The distribution email is sent from this address.
6. You can add multiple email addresses in the To: text box by separating them with commas, spaces, or semicolons.
7. Add a subject and message for your email recipients.
8. Check Send the file as an attachment, Send a link to the file, or both.
9. Click Apply to save your changes.
10. Once configured, all files added to this Share are sent by email to your recipients. Click View Transfer History to see the transfer activity from this Share to your email account.

How do I...
manage a Share
system is clearly broken. Changes in retirement policies are appropriate. With regard to social security here in the United States, we will raise the age of retirement, increase the levies on workers and employers, and most likely reduce benefits. The only other feasible option is to fund the present system through the general fund; a very poor alternative.
Dear Madam,
I agree. Since most people, including myself, are not very good at long-term planning and do not save enough for retirement, I would like to see greater pressure on individuals from the state by way of forced savings or greater contributions to pension plans. This would allow people to retire with peace of mind.
Dear Madam,
1) I am sensing a strong undercurrent which suggests that our forward models of the economy do not have much information about how the increasingly older and vastly larger population of older people (overall, not in terms of some being fit and others less so) will act as either producers or consumers; one is also aware that they may act differently as voters. For economies to continue to grow I certainly feel that they will need to be active as both, and that they will exercise their choices more passively and spend much more wisely and conservatively than models may suggest. I believe our producer models (technology, automobiles, housing, fashion) are probably way, way out, and that what will emerge is much more 'oldies doing it for themselves'. How this will affect GDP is not clear to me. For this to emerge we need to boost the confidence of this generation to invest, to spend and to produce. I don't think the pension guys have this very well modelled at all; they make too many assumptions based on past populations.
2) This does not invalidate the current model: that the legacy of wealth and relative comfort has been worked for by the old, and that they deserve to be recognised and to enjoy it. Your commentator who derides the ability of his older colleagues to use Excel perhaps forgets how easy his job has been because another older colleague in another industry developed that tool for him to use. I hope he too develops tools the fruits of which will sustain him in his future life, and I am sure that those younger than him, rather than his peers, will use those tools better again.
Dear Madam,
I agree that retirement in its current form should be abolished. Current retirement benefits will not be able to be paid without an increase in the effort that the young generation must make for the old one. The only lucky persons are those in the G-20 countries now living in retirement. I was very lucky to get a permanent job at the age of 61, and one of my proposals was to retire at 70 because I feel that I still have a lot to contribute. A person should be free to retire when he no longer wants to work or is unable to cope due to health or other reasons. Since where I live there is no mandatory age for retirement, my company is preparing the way to accomplish my wishes.
Dear Madam,
Competitive individuals willing to continue working after 65 should be allowed to do so. Offering an "a la carte" retirement after that age would be a starting point, instead of forcing them to retire as is the case in some European countries. Furthermore, increasing the age of retirement will occur regardless of the political views of individuals, as the western population continues to grow older and fewer children are born, rendering the very concept unsustainable. Let us start treating this as a given and constructively prepare individuals for a later retirement instead of losing ourselves in a debate over whether it should still happen or not.
Dear Madam,
Retirement reform is badly needed and long overdue. In the U.S., early retirement should be moved to age 65 or 66, and regular retirement (for social security benefits) should be moved to age 68 and gradually to age 70. There should be no mandatory retirement policies except for those jobs that concern safety, such as pilots.
Mkolbe
Madam Moderator
The knowledge base of many modern professions has become huge. Expertise in synthesizing disciplines such as economics, ecology and medicine depends on a comprehensive knowledge of the discipline and the acquisition of a wide range of skills. For this reason people in these disciplines peak late in their careers, often just as they are forced to retire. The direct and lost-opportunity costs to society of this waste are incalculable, but probably very large. For this reason, as much as for any of the arguments advanced in terms of pensions, the vote must go to the proposer of the motion.
Dear Madam,
Being in my early 60s but in no way 'incapacitated' from working for more years, and having recently been made redundant, I am certainly in favour of a review of the present-day retirement age, to allow greater flexibility and better choices that would match individual cases.
E.g. I read one comment on easing off on working hours as one reaches the 'age of retirement' and applaud that suggestion. There are certainly others that are as viable.
Albeit there is now a law against 'age discrimination', I do not believe this is applied in full force, and age is still very much a factor for many employers, to the detriment of individuals of a 'certain age'....
Dear Madam,
Stating that the life expectancy of the 1800s was 30-40 is incredibly misleading. As a statistical mean this figure is likely accurate, though certain regions (pre-independence New England comes to mind) were notably higher; but it evades the importance of distribution. The low figure was largely due to much higher child and adolescent mortality rates; for those who passed puberty, a lifetime beyond 50 (the American Association of Retired Persons' age criterion) would be quite common. Though formal government pensions in the modern age may have begun with Bismarck's institution of pensioning, this too is a fact that obscures reality. The care and respect of the elderly and infirm is an almost universal aspect of human civilization. For nearly half the world's population it remains enshrined in the Decalogue; for much of the remainder it is similarly supported (though these are of less familiarity to my own person). Though the concept of a government-sponsored cheque upon exceeding a certain age is a relatively new practice in the scope of human civilization, the underlying concept has been practiced by families and communities since long before its inception.
It may be noted, however, that my personal regard is much in line with the conclusion Voltaire expresses in his philosophical novella /Candide/, best expressed by the character of Pangloss, who reasons, "for when man was put into the garden of Eden, it was with an intent to dress it; and this proves that man was not born to be idle." Yet we as societies must accept that conditions, regardless of age, limit or prohibit the capability of individuals to contribute to the workforce. As fellow sufferers of the human condition we are obliged to offer our support unto this class.
Dear Madam,
Mandatory retirement was eliminated a generation ago in the US, in the early 1980s, except for police and firefighters, air traffic controllers and airline pilots. This change did not result in an increase in the average retirement age. Mr Magnus's proposal is so woefully incomplete that it is better described as an exasperated cry for help. That is, simply giving workers an option to work longer is commendable, but not likely to put a dent in the overall affordability of retirement.
Dear Madam,
Continuing with my comments: I still keep myself active by teaching in areas which I find are still neglected in the colleges. I do get some remuneration, but it is more like pocket money. I agree with the view that people should retire only if they have become inefficient. Just fixing any age is wrong. At the same time, younger subordinates should get an opportunity to get to higher levels in management and supervision.
Dear Madam,
I agree that retirement in its current form should be abolished. A person should be free to retire when he no longer wants to work and/or is unable to cope due to health or other reasons. But there should not be a mandatory age for retirement. I also agree that one should plan one's retirement early on by way of savings, investments etc. and not depend on government largesse and that of the organization one worked for. If one plans one's retirement properly and judiciously, I see no reason why one cannot live a happy and fulfilling retired life. One must make alterations in one's lifestyle, curb unreasonable and unnecessary needs and learn to be contented with what one has at one's disposal. That I lived in a big house and drove a Mercedes while employed does not mean that I must have the same things when I retire. I must learn to adjust and adapt.
Dear Madam,
Reform is needed, but keeping old people in the same business can only do harm. If there is one thing that could be done, it would be to provide more incentive to companies to keep elders on reduced working hours (50%-80%) in consulting positions, but that requires that employees and companies grow together and that the company invests in employees' education and development, not dump them in the first downturn.
Dear Madam,
Perhaps retirement should be linked to productivity. The Economist has tracked productivity for the private and public sectors. On this basis those in the private sector could retire at 50, while those in the public sector should be encouraged to become productive well past 65.
Dear Madam,
I haven't read all the comments from the floor, but one aspect does seem to be under-represented in the responses: what does the retiree do all day? I recently retired and, being one of the lucky ones with a company pension, don't have to find paid employment. However, I do need to be fulfilled in my daily life. With a wife somewhat younger than me (and still working) plus two school-age children still at home, cruising the world is not an option. I have turned to casual work which, while not employing all my skills, gives me an interest (as well as a means of paying my wine merchant). It is a bit of a shock to come to the realisation that one can't expect to command the quality of employment enjoyed for the past 30+ years, but flexibility and choice do give some compensation.
Dear Madam,
I think as much as this debate is about the economics of ageing, it is also about the health of older people. In order to stay healthy, older people need a social network as well (be it friends, relatives or colleagues at work). Research has shown that older people engaged in social networks feel healthier than those living in social isolation. Moreover, self-esteem is another important determinant of health: the higher it is, the healthier a person feels. I understand there are people who started working very early in life who will be very tired by the age of 65. They can't wait to retire - let them do it! On the other hand, there are people who can't stop working, and for whom working is a way of life - forcing them to retire just because they've reached 65 would be a mistake. Besides, many of the latter are valuable professionals, who still have a lot to contribute with their experience and knowledge. I believe the best thing to do is to give people a choice: when they reach 65 they can retire if they wish, but let's not make retirement compulsory!
Dear Madam,
The reference to retiring at 65 originates with Bismarck; he fixed that age when most people died in their late 40s. Retaining Bismarck's rule of thumb and adapting it to today's life expectancy in high-income countries, the mandatory separation age should be about 98 years.
So: I am in favor of a legal separation age, and I disagree that 65 is a suitable age nowadays.
At 66, I am technically retired, and I am OK with drawing a pension, but not OK with mandatory separation from employment. I have 2 PhDs and wish to continue doing research and teaching and winning grants, but am told time and again by administrators that they cannot allow me to continue in any capacity other than as a volunteer (with no office, no secretarial support, etc.).
Who is served by such waste? And what justice does this state of affairs offer to the unemployed with less zeal or education or experience, to whom knowledge could be passed on for the younger generation?
The present system is too rigid, and needs flexibility. The basic idea of separation is however valid, and needs to be maintained.
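The Bismarck scaling in the comment above can be reproduced with a line of arithmetic. The life-expectancy figures below are assumptions chosen to match the commenter's premise ("most people died in their late 40's"), not official statistics:

```python
# Hypothetical reproduction of the commenter's rule of thumb: keep the
# retirement age proportional to life expectancy since Bismarck's day.
bismarck_retirement_age = 65
life_expectancy_then = 48   # assumed: "late 40's" in Bismarck's era
life_expectancy_now = 72    # assumed present-day figure, high-income countries

scaled_age = bismarck_retirement_age * life_expectancy_now / life_expectancy_then
print(round(scaled_age))  # prints 98, matching the "about 98 years" in the comment
```

Different assumed life expectancies shift the result, but any plausible pair lands far above 65, which is the commenter's point.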
Dear Madam,
It seems to me, from my personal experience, that age is the primary concern of employers. Academic and technical qualifications as well as working experience do not count. A culture of youth prevails.
Unfortunately, in some cases there are more sinister reasons for this. Young nubile females are attractive to predatory bosses.
The Japanese have a different attitude to age. If a person is healthy and wants to work they are willing to employ him or her.
Dear Madam,
My view is that opinions about demographics and jobs are rubbish. We have to admit our impotence to alter either of those. They are determined either by natural laws or by markets.
The core issue is that we have to rethink the ideal of the welfare state. Whether welfare is a right or a privilege for citizens is the question.
Dear Madam,
Mr Weller says the basics are now cheaper than before. I do not count electronic gadgets and cars as basics (yes, they have become cheaper), but I count a roof over my head as a basic.
Housing is less affordable now than it used to be. More money is spent on it and it takes longer to pay back!
And on the remark about a fixed age for retiring: this is highly outdated!
To retire a construction worker at 65 is probably very hard on that worker, while an office worker can still do his job nicely at 70.
So a differentiation should also be made according to WHAT job you are talking about!
Singapore has the best example of some high-age jobs: McDonald's couldn't find enough people to man their restaurants, so they turned to people aged above 60 for their workforce! (This is the reason why you see so many old people serving at the counter in McDonald's.) This works very well and proves that elderly people can do some jobs very well if they are able and willing to work at a higher age.
Dear Madam,
There are people of 55 who are not fit to work, while others are fit to work until age 75.
Some are willing to continue to work, some are not.
Let them decide, with a pension scheme that does not punish them as drastically as today's schemes do if they retire a few years earlier.
When I took early retirement in 1994 (due to the loss of my wife; I was devastated), I lost a big amount of money because of the penalty imposed (this was in Switzerland). If the penalty were linear and not multiplicative, it would perhaps be fairer. I continued to work until 65 (as a consultant in Shanghai) but had to quit because I had had a bypass operation and could not work 8 hours a day anymore.
The problem with the present fixed-age scheme is that it does not take health and willingness to work into account.
Age, health and willingness should be combined in an index, added to the freedom to retire when you want but with smaller penalties for earlier retirement.
Dear Madam,
While the life expectancy of the population is increasing, that of private companies is decreasing, so the chances of one retiring from a job after 30 years of service are remote. In effect, retirement benefits as seen today will only be applicable to people in government service. (Somali pirates are also earning for their retirement, as there is no effective government.)
Retirement benefits for a higher percentage of people who have opted to become employees are possible only if we shed capitalism and accept socialism/communism. But that is not an option we are discussing.
Finally, if we do not accept socialism, where those who earn more have to look after those who do not, a person has to live his life with what he has earned during it, irrespective of his working life span.
Unfortunately we are expecting too much from the state. That expectation includes preventing the bankruptcy of enterprises with a large workforce, which in turn means keeping the patient in the ICU at taxpayers' cost.
There is no easy solution. Finally, in a democracy people will vote for those who can fulfil their aspirations. In a recession even Hitler got elected.
Dear Madam,
I will be 71 in a few days. I have only skimmed the arguments made from both sides, which seem concerned with the financial ability to retire, government retirement solvency, and the right to retire or not. I was glad to retire at 65 years and 7 months. My employment was not happy, with disrespect from younger persons with extensive computer skills. I have found that I tire more easily. I needed time to exercise, relax, and change my living style to offset the onset of diabetes. All of that was accomplished with a much more satisfying lifestyle. However, I wonder if eliminating the early, 62-years-of-age social security benefit would not be a benefit to all, along with the gradual raising of the regular social security retirement age to 70. We no longer work in a dangerous, debilitating environment. I do believe from my own experience that ageing happens gradually, and that working after age 70 should be only for the exceptional few, with most working part time or retiring. People need to take more responsibility for their own retirement. That means a lower standard of living in most cases. If you must borrow, then you cannot afford it. Why should someone in the financial field take your money while providing nothing substantive in return? A home may be purchased if you have at least a one-third down payment and an adequate proven income. Too many desires or expectations are not necessities. People will say I am unrealistic. Wealth must be created before enjoying its fruits. You cannot expect your employer or the government to give you everything you consider your right to have. You must honestly provide for your own well-being.
Dear Madam,
I don't think people realize just how bankrupt most western nations are.
I have run away to Bulgaria; if gold doubles I might be OK, but promises, come on.
I have a collection of banknotes. One, from during communist rule, is a 20 which would have bought 50 packs of cigarettes. Another, again a 20, 15 years later in 1991 would have bought 25 packs.
I have a 5000lv banknote from 1996 which, the landlord at the pub tells me, would have bought 5 packs.
It all changed when they knocked off the zeros; you had some time to change them at the bank, 5000 for 5. At the time that was $2, I believe.
Don't worry, it can't happen in the west; we have real economies and quantitative easing, phewee.
Dear Madam,
I am a 47-year-old unemployed engineer. Though my profession allegedly was to offer security, I am finding that being an expert at something of enduring value is what is important.
My background is no longer unusual. I was raised in an upper-middle class family of five then upon my parent's divorce I was with a single mother on food stamps. Thus I have learned the value of wise spending and personally not depending on others.
Additionally, I have saved dutifully for retirement since age 22, for which I am thankful, though even with maximized contributions, reaching the retirement goal is by no means certain. In this I find myself unusual. Most of my age peers are woefully unprepared for financial independence. They 1) live predominantly for today, 2) do not understand principles of saving for emergencies and the time value of money and 3) have not been educated in managing funds outside of their day-to-day work activities. Quite an inept combination.
I believe that our current taxed (read: not an entitlement) system works so that workers can focus on their work and not on managing investments, provided we take into account several necessary changes:
- increase the retirement age gradually in line with life expectancies and the decreasing size of families for which future workers will be available to contribute
- allow for and encourage flexible, not penalized, work in later years - doing no work is wholly unhealthy - we all need to contribute in a balanced manner to society throughout life (chasing little balls around a green does not count)
- assume that employer pension plans are long gone
- allow individuals to manage a portion of their social security funds for improved returns with no ability to pull funds under hardship
- educate our populace on the time value of money and the time-honored principles of living within one's means, saving for the future and the basics of fund management
- encourage parents to teach the aforementioned principles to their children - it all starts at home
- assume households will have three generations living with or near one another - we will then learn to get along through life and the elderly will be available to grandchildren to pass on family values over video game time
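The "time value of money" point in the list above can be made concrete with a small sketch. The contribution amount, return rate and time horizons below are illustrative assumptions only, not figures from the comment:

```python
# A minimal illustration of the time value of money: the same annual
# contribution, started 20 years earlier, compounds into far more by 65.
def future_value(annual_contribution, rate, years):
    """Future value of equal end-of-year contributions at a fixed annual rate."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + annual_contribution
    return total

# Assumed: $5,000/year at a 5% annual return.
early_saver = future_value(5000, 0.05, 40)  # starts at 25, stops at 65
late_saver = future_value(5000, 0.05, 20)   # starts at 45, stops at 65
print(round(early_saver), round(late_saver))
```

The early saver contributes only twice as much in total but ends up with several times the balance, which is the educational point the commenter is urging.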
Dear Madam,
I am surprised that so far I have never come across the proposal of a "gradual retirement", meaning the gradual decrease of working hours before full retirement. I suppose that numerous advantages would result, both for employers and employees, including a better psychological climate, quality of work, increased productivity, and the early employment of young people in higher management posts with possibly a gradual introduction to them.
Dear Madam,
To say, as some commenters are, that employers do not want older workers seems simplistic to me. Rather, I would suggest that employers do not want workers who feel old, i.e. who lose their willingness to learn new skills and remain active in the company.
For all those who wonder why we are so much more productive now and yet still work so much, I would guess that we are storing our extra time in these arbitrarily lengthening retirements.
Dear Madam,
People should be allowed to work as long as they wish, provided they are capable of doing so. In general, people who have a work routine that they want to undertake are healthier and thus are less of a strain on the health resources of the nation. With the growing inability of the current working population to support the retired community, it is becoming imperative to review our current policies. Your lead in this is very much applauded.
Dear Madam,
Something certainly needs to change since circumstances have changed over the past 100 years.
But voting either way to "abolish in its current form" without a reasonable vision of what is to supercede or replace current arrangements seems foolhardy too.
I would speculate only a few people would voluntarily leave their current employment or business unless they had some prospect or reasonable expectation of same to what they were heading to.
Why would "retirement" be different. The only thing I would accept is change is necessary, and before I leave what exists I want to see what the choices are to be sure that is a better probable outcome.
Dear Madam,
I agree the 3-legged stool still works but needs adjusting, in theory. I was on that plan for over 20 years. IF I could have invested in stocks that earned a return (401K plans 'protect' you from earning anything over what you put in) and IF the market had not crashed in 2008 and IF I could have continued to work until 65, it would have worked. I probably could have supported my husband and myself until age 90+. These are theories and far from reality.
I essentially got 'retired' in 2007 at the age of 48. By 'retired' I mean I cannot find comparable work or comparable pay to what I had. Maybe it's the economy, or the fact that I have 20-plus years of experience compared to 7-10 years for most hiring managers, or that I'm 25-30 years older than the gatekeepers (although I'm a young 50). From an employer's perspective, I'll probably cost more in health insurance and expect more pay because of my level of experience. For them someone younger makes sense. It certainly is not my skill set, as I'm an expert in interactive marketing and social media.
The reality is that you don't get to choose how long you work. You can't 'stay on' as long as you'd like. You get 'let go' restructured, whatever according to someone else's decision. Or in today's world some economic situation like the industry you work in collapses.
I am following my dream and starting my own business. Something I would not have had the courage to do if the risk of bankruptcy were not already so high. I plan to work well into my 80s, because I like to work. My plan is to stay healthy, and enjoy being my own boss.
Dear Madam, hello. I believe the retirement age should be lowered to 62, because that will give a retiring individual a few more active years to enjoy their sunsetting golden years. Having earned this through proper ethics and discipline, their reward should be added years of mobility. This fallacy that boomers will continually produce fewer offspring, and that support over the years will dwindle instead of increase with population growth, is people trending bad things upon themselves. Best think about what you are doing to yourselves, as you too are aging.
Dear Madam,
The opposition writes, ."
This conclusion can only be reached by someone who has not experienced, first-hand, the difference between wiping a baby's and an octogenarian's butt. Go to an old folks' home and ask them what "triple assist" means and see if you still agree that taking care of the elderly equates to raising kids.
Dear Madam,
There is no doubt whatsoever that retirement is no longer possible for all who would have qualified in the past. The world is still reeling from the most troubling economic situation since the Great Depression. If one third of the population of the world will be comprised of those over the age of sixty, we cannot afford to offer retirement in its current form to those who are now eligible. Debt is already a serious problem. In the United States alone, each person theoretically would need to contribute around $30,400 in order for the scales to even out. We cannot afford to continue this way of life, and dishing out money to one third of the population would certainly not change that. Standards for retirement qualification must be changed dramatically if we aspire to prosperity.
Dear Madam,
The main question is one of economics.
Can the country or nation afford to support an ageing population and can the individual(s) support themselves?
In answer, governments need to make better long-term provisions and the young should be given an increased incentive to save more.
It may take several decades before these questions are resolved, and this will not be an immediate solution to the current crisis - but better to start from somewhere!
Once the economic questions are answered then it should be up to the individual as to how long they wish to work either because of enjoyment/contribution to cause/society or firm or because they still need the savings for a better retirement and a wish to leave an endowment for future generations.
Ultimately it should be an individual choice as to how long someone works and that can only occur when sufficient individual or government saving has become available.
Dear Madam,
It may be worth noting that, if one was to regard our present early retirement as a 'holiday' in that there are fewer working hours in a lifetime, then the considerable difference in annual worked hours between the EU and the US (or many developing countries) may create some flexibility. Whether this is an advantage for the EU (in that retirees can be covered by extra hours pa by those in work) or for the US (in that current practice increases wealth) may be debated!
Dear Madam,
Public retirement ventures need to be abolished altogether. Why do we need a generally inefficient public body to act as a private investment firm? Capitalist nations do not provide resources for their citizens. I've never heard of a government-sponsored video game allocation programme. Why do people feel a need to force resources away from their best uses?
Dear Madam, what Anandakos wrote is totally true. But that is only true under the current retirement programs around the world. Jobs are very scarce (especially with the actual crisis) so that not even qualified and young people can work. What would happen if added to all the current unemployed people, we have all the otherwise retired ones?
Still, we have to rethink the current situation, since it is unsustainable in the long term. First, why older people can not continue working? Lack of qualification, lack of job opportunities, etc. Of course, the actual retirement does not address this situation, this is why it is impossible to ask older people to continue working-under the current conditions.
Yet, there is one possible solution that could address the problem. We can offer retirees their savings when they are 65 if their money is used only for running a business, an investment or any kind of employment-generating (thus, tax-contributing) activity. Otherwise, they can receive their money a few years later.
Dear Madam,
It seems somewhat incongruous to talk about a retirement age when we actively encourage employers to ignore one's age. When the pensions of most people have been devalued to the extent seen in recent years, most people don't have the choice to retire at this romantic age of retirement - that is the money and investment side of the argument.
There is another argument, a more realistic and philosophical idea: as we live to a much older age, pension models should be geared to a system that rewards people for working for longer but on a more flexible and charitable basis. That would take some of the strain off the government in providing care for elderly people who are not in a position to work, which through a 'gentler' tax could subsidise the care system.
Dear Madam,
65 is an age created by the pensions system, is it not? The concept of retiring at this age in current society is an anachronism and needs to be rethought. I would like to choose when I retire and not feel workshy / past my best because I choose before or after 65.
Dear Madam, if older people remain in competition with the young for jobs, the labour market is distorted: there is a large supply of workers against a demand that does not grow at the same pace. Wages would be affected and productivity growth could be compromised.
Dear Madam,
If by retirement in its current form includes the posh retirements given by the governments in the US then I absolutely agree. Teachers who are forced out in what should be their most competent years so that their salaries can be split between two teaching candidates of questionable competency is a serious waste of both talent and budget. Legislatures who get to retire after a few years at obscene cost to the tax paying citizens is criminal. Retirement pay outs after 20 years for military and quasi military public service is another waste of talent and tax money. Early retirement for any reason other than lack of jobs is another waste. Let's get to the root cause of the problem and not just work the short term problem.
Dear Madam,
No matter the question, no matter the age of the victim, no matter the "industry," no program will fit even close to half the target population -- the more than half will just not fit.
Retirement for "disability" is covered in the US by some reasonable programs; forced retirement for age is highly destructive. Despite the "conservatism" of many "experienced" elder workers, the well of experience can save many corporate entities, stabilizing the "wild" ideas of the younger staff. (Maybe those "financial products" offices at AGI should have had an older "veto" officer, to stave off the wild schemes that almost destroyed Western capitalism.)
Those who do choose to retire, for whatever reason, should have the opportunity to continue working, if possible. In my case, my disability keeps me from meeting delivery schedules, from distractions of the disability. But I can still produce items of value to the community; I just can't do it to a schedule, or on a regular basis.
Problem is, there are many blockages to the irregular producer receiving compensation, without "proper" pre-retirement status. I used to build houses, now I write. But no publisher will really look at my product -- I have no "credentials" or paid-under-schedule experience, so I don't count. Oh, well, the writing is still fun, and my friends like it and laugh.
I've looked at some "emerging writer" programs: all want "free" product, without either recognition or compensation. Thanks, but no thanks. If it is worth something, it's worth something to me, not just the corporation.
Dear Madam,
I have read the supporting and demurring arguments and most of the comments. What appears most striking to me is the fatuous assumption held by almost all that corporations WANT older workers. They don't, and they're not about to start.
Older workers (I'm one) incur more illness and tend to be less skilled in new technologies. More importantly from the corporations' point of view, we are "ethnically challenged" in comparison to their target demographic and undeniably "not hip". Sure, there are plenty of exceptions to those statements, but they are broadly true.
Ask nearly anyone at her or his 66th birthday retirement party whether she or he would like to continue working as described by the proponent? Then ask her or his boss whether a "Yes" reply can become reality. It will be so only if the person is a financial, technological, marketing or engineering genius. Everyone else is in the position of a 20 year Major: up or out.
"Oh, but the OECD countries aren't reproducing themselves. Who's going to do the work?" How about, "all those young people in Vietnam, Iran, Iraq, India and Indonesia"?
The result of this is that the only "economic opportunities" available to 75% of the people over 60 who lose a job are Wal-Mart greeter, Burger King tray composer, Regal Cinemas popcorn pusher, or Pizza Hut hustler. These are hardly the sorts of jobs envisioned by the proponent's happy vision of benevolent corporate amakudari.
It's an unfortunate reality that only half of the children in Lake Wobegon are actually above average, no matter how often Garrison Keillor may assert otherwise. Although progressive societies proclaim that "everyone is a valuable part of society", everyone is not valuable to a corporation interested in profit maximization. For most jobs they can hire five or six workers in the young nations of the world for the cost of one in an OECD country. And they most assuredly will do so.
I have great respect for the rigor of both debaters and have no doubt of their genuine humanitarian positions. These gentlemen are not ideologues. But I have to agree with Dr. Weller's position. If the western democracies are to maintain an ethical and egalitarian basis, they must recognize that one of the critical foundations of public pensions is income sharing. They are not the same as employer-provided pensions and personal savings. They exist primarily as insurance against destitution in old age, which is MORE critical with today's advances in gerontology and life-expectancy.
In the US that means "take off the cap", at least for the employer portion of public retirement contributions and extend them to such forms of alternative compensation as stock options and restricted grants. I am not sufficiently familiar with the details of the plans in other countries to make such similar suggestions, but it is likely that they too have similar distortions that lead to omissions of significant income streams as those in the US system.
Dear Madam,
Retirement is related to many bad things, but I would like to link it to reduced purchasing power. People at all levels are worried about the cost of food and shelter and their livelihood; low-paid workers and the old are reduced to going through market rejects. The problem of earning, and of the purchasing power of people young and old, is destroying the credibility of governments everywhere. In many countries, the parties in power have been soundly defeated in local elections because they fail to read the minds of the people, young and old. Life is getting harder for most people, and it is worst for retirees. As long as a man or a woman is alive, he or she needs either a job to earn money to live, or a hospital bed.
Work hard and shorter, live less well. It is clear where we are going. Must it not be changed?
Dear Madam,
At 52, with an engineering degree and an MBA, and 30 years of successful work in manufacturing, I find myself unemployed. I rationalize my situation as "structural unemployment" and realize that perhaps my core competency is closing facilities. I will most likely live into my 80's, would like to work until close to death. This means that I am only 1/2 through with my work. I wish to work longer for the selfish benefits of the improved mental and physical health that result from staying engaged in life and work. I am one of those who has always made my vocation my vacation. But I also have a deeper purpose of not becoming a burden to our children. The baby boomer age wave should not penalize our children with entitlements. We are entitled to nothing.
I will take this pause in employment to go back to a full-time, resident life on a university campus. I will retool for the next 30 years in a field that will be more meaningful. The purpose of being a resident student rather than learning by web or night school is that I will have to downsize. This is my personal version of Schumpeter's "creative destruction": abandoning what is no longer needed, to enable a future. While at school, perhaps I can add value to the classroom through intergenerational and interdisciplinary participation. Hopefully, I would not ruin it for the kids. When I finish, I will need to compete with youth for entry-level jobs. But that's OK, because my expenses are less and I will have renewed geographic and professional mobility to remain functional in society for another 30 years.
Now, the only problem is to get a few million more baby boomers to go back to school with me.
Dear Madam,
Clearly we should not be forced to 'retire' (= be chucked on the scrapheap!) at 60-65 (I am 62!). This much is common ground I think. Interesting items might be:-
It is clearly in the national interest to encourage late retirement. How about declaring all pensions up to some limit (average earnings?) free of income tax? (ie earnings beyond this are taxed but only as 'first income').
How have people come to see retirement as a benefit, like a sort of holiday? It should be seen more like redundancy and something unions should oppose.
Dear Madam, throwing people off the workforce who are both willing and able to work is at best stupid, at worst criminally stupid. If work was always about lugging about heavy objects it would make sense, but since it isn't, we are simply sorting out the MOST experienced, and the only ones to gain are the golf courses and tennis clubs. Go figure.
Dear Madam,
I voted no because I believe we have the capital - not least intellectual and scientific- base and the labour available within the global workforce to support retirement costs. Older people should still be able to work and we need to encourage them to contribute fully, but they also need to have the security to spend.
Policy balance needs to be more effective in encouraging savings during peak earning capacity and in reshaping working lives to adjust to longer lifespans. Eroding pension rights will simply penalise those who have been cautious in the past - including those of us who took middle-income options in public service because of the pension guarantees - and will send out the wrong message entirely at a time when confidence in investment and returns from savings are at a low ebb.
In my experience working environments are poorly adjusted to this and multi national and capitalist institutions are amongst the worst offenders. Working hours should be reduced on a curve from as early as 50 and tax and other policies should encourage this, just as they do child care and other support.
The other major policy development needs to be to encourage the ageing to be both producers and consumers for their own products, and to unlock capital in, for example property, in a sensible and measured manner - an area, in my view, fraught with ethical dubiety.
I would be interested to hear what policy options there are to trigger changes in employment practices - at 51 I am one of thousands struggling to get back into work despite skills, availability etc. While the working economy works to exclude older people, it is premature to move off some of the fundamental expectations of security.
Dear Madam,
I believe that retirement in its current form should be abolished.
We are living in the 21st century, where democracy is widely advocated. People should have the freedom to choose when they want to stop working. We will continue working if our basic needs are in dire straits (for example, Singapore has no pension scheme for your retirement if you exhaust your Central Provident Fund (CPF) before you die). If not, that is, if basic needs are fulfilled, we can stop and do other things that are self-actualizing. This is shown in Maslow's hierarchy of needs, which is successfully used by management when employing people.
People should have the freedom to choose when to retire, as retirement does not mean the end of their work. They will indulge in other activities, such as being a volunteer and spending time sharing history with their grandchildren, which is enriching. All this improves the status quo of society, and they are just like most housewives who stay at home, not working but still providing service and warmth to the family. There is no retirement limit for them.
People who are poor will still have to continue working in order to fill their stomachs and keep a place to live. Those who are capable might continue working to provide brilliant and innovative ideas to raise society's standard of living. As Singapore's MM Lee once said, he will work till the day he dies. I'm not trying to be an extremist by quoting just one example.
This debate, in the end, comes down to an understanding of the hierarchy of needs, motivation and literacy. This means those who have low literacy will be motivated (or rather, compelled) to support basic necessities, and will continue to work until safety and necessities are fulfilled, until they die. Only then can a choice be made to continue working or to realise their dreams.
Dear Madam,
After retirement one gets to work for "self-actualisation" - "self-fulfilment". It is very important that one has time for this while one can still work. Therefore retirement at 65 is perfect. It also gives younger people a chance to go up the ladder.
TS Sinha
Dear Madam,
Being Chinese, I agree with extending retirement, because the situation of China's talent market supports reforming the retirement policy.
Dear Madam,
People should have the freedom to choose if they want to keep working or not. For this reason, I vote in support of the proposition.
I am only concerned that older people who keep their jobs will make it harder for the younger generation to move up the ladder and provide fresher ideas. However, I have observed that experience is something that the younger generation lacks in comparison. That is because the younger generation tends to change roles and jobs more often than the older generation previously did. Alternatively, industries could adapt to that by providing consultancy roles for those who wish to keep working in their field of expertise without the commitment of permanent or full time work.
But what is the definition of work? Is work that only contributes to GDP valuable? A lot of retirees in some countries look after the children of the current working generation. I reckon that is one of the most valuable contributions to society, and that IS considered work.
Dear Madam,
This is the second stupid debate topic in a row. I thought the Economist was all about free markets, a position I support. Now we're being asked whether we should pick a direction for retirement? If 'retirement' as we know it should end, it will wither on the vine of its own accord. We need not debate it.
Dear Madam,
The debate so far appears to tinker with the edges of the problem. Ageing populations in rich countries - it will happen in less developed countries perhaps sooner than people think - is a major opportunity to improve the life of all.
My limited economics was drilled into me a long time ago and is doubtless out of date. However, the idea that there are "factors of production" - they used to be limited to land, labour and capital - may still hold some validity. With the notion of such "factors" came the idea that they combine in a mix to generate an economic output, the quantities of each factor in the mix being determined by its cost, relative to the cost of the others. Changes in the mix - "factor substitution" would come about due to the ups and downs of factor prices.
If the price of labour goes up, entrepreneurs will use less of it. They will use either more capital, more land or both, until they find the cheapest mix that does the job. If the price of labour goes down the entrepreneur will use more, reducing the amount of land or capital accordingly.
Whatever arrangements are adopted in rich countries to help fund retirement will, unavoidably, change the price of labour as a factor of production. If individuals retire early leaving a smaller workforce will not that drive the price of labour up? Presumably that means entrepreneurs will use less of it, substituting capital and or land as appropriate.
In economically developed countries it is assumed that everybody MUST have a job. Why? Most work in these same countries is mindless make-work - work that is only marginally useful (if at all!) and exists only as an excuse to pay someone a wage. This "degradation of work" has reached such a low point that, in rich societies, workers are getting in each other's way: we are falling over each other. We should think about paying people to get out of the workforce - and out of the way! - so such tasks as can be automated can be carried out by capital. THAT is the great opportunity.
Dear Madam,
Changes to rich country retirement arrangements should not be made in isolation from their arrangements for working. Each has implications for the other.
There is a saying (I cannot provide attribution!): "Find work that you love and you will never work a day in your life". There is much truth in this saying but, in developed countries, most jobs are extraordinarily dull and narrowing for the human beings who carry them out. Members of the lucky minority that has found work that it loves will want to work until they drop. Those of the unlucky majority will want to stop as soon as humanly possible.
Rich countries desperately need a new kind of labour mobility. Most already have geographical mobility but they do not have occupational mobility. The kid who knows he wants to be a doctor, or a fireman or a cabinet-maker at a very early age and doesn't change his mind is actually rare. A majority of people in wealthy countries take their first job at the entrance to one of the many division-of-labour tunnels and discover too late - they have a spouse, kids and a mortgage - that they chose the wrong tunnel and are now trapped by their CV. They then clog up the tunnel with their sadness, their disappointment and lack of commitment.
By devising much better support systems - inter-tunnel communication channels - which allow individuals to shift sideways into completely different careers governments may raise productivity dramatically and bring about a situation where individuals work as long as they can for the sheer fun of the task.
Dear Madam,
I live in a third-world country, in which the retirement age was 60 until some years ago. However, due to the recent economic development of the country, the retirement age was postponed to 65 years old. In fact, postponing the retirement age helps to boost, or at least to maintain, the level of tax revenue.
In my opinion, 60 or 65 years is quite early to completely stop working, but at this age the citizen could be required to decide whether he wants to continue his working life. That means: nothing mandatory! Naturally, some rules to protect the stability of the public finances should be introduced. Therefore, I completely agree with the idea of developing flexible retirement and work arrangements.
Dear Madam, I think it's imperative to encourage people to continue to work and be productive as long as they are willing and able. I agree that the transition process should be flexible - in my own situation, I took "early retirement" from a good job at an international institution last year at age 50. I am now working more than 3/4 time as a consultant to the same institution and expect to gradually reduce the amount of time I work to maybe 1/4 time when I'm 70, and to continue to take assignments, if they still want me, for quite a long time after that.
Dear Madam,
Retirement can be at any age, IF, AND ONLY IF,
politicians retire--completely--from politics
after one term of two years.
Dear Madam,
Most of us average folk receive full social security benefits at age 67. The typical government employee receives their retirement pension at age 52. No way in hell do I support increasing the social security retirement age to 69 unless the federal government employee retirement date is also increased to 69.
Dear Madam,
I believe that, fundamentally, this is a sterile debate because it refers to a way of life that is unsustainable. The model that the majority, rightly in my view, wish to abolish has served one generation in the so-called "developed" countries, very well. But times are now changing faster than ever before and we need to take onboard a paradigm shift ... but that's another story.
Meanwhile I put forward an alternative pension plan option: she's almost forty years my junior and has an experienced and understanding mentor to help her through university and steer her clear of many potential pitfalls or pick her up when she stumbles. And if human cloning advances as I believe it will, instead of becoming a youngish widow she may have her beloved return with all his wisdom and experience in a twentyish body.
I fear that wasn't exactly what one speaker from the floor meant by "generational relations" but as a retirement policy it has its attractions!
Dear Madam,
The real truth is that if you are in good health and have a skill or knowledge of a sort which makes you an expert, you should continue to work. My father did not take social security into consideration because he chose to work as a sheet-metal fabricator for a firm owned by my younger brother. When my brother retired at 60, my father retired with extreme reluctance at 92 years of age. Within one year of retiring, my brother died of natural causes related not to his work but to his retirement. My parents, both now 93, remain in great health and may live to be 100. When I asked my dad what he would do when my brother sold the business, he said, "Well, I don't know anyone who will hire a 92-year-old." I would like to continue working because I enjoy it. But I have never used any of my social security money, so Mom and I will now move into a retirement home and try to act our age for a change.
When I travel I never meet people like my parents. I meet retired people with no real money trying to be happy - in other words, running away from children and grandchildren and the happiness my dad found in keeping active. That's the story of retirement in my family. I am not taking social security until they make me take it at 70. But I will keep working. When I work, we pay taxes, employ young people, and put kids through college and grad school while mentoring them (how many young men would like to have a really old mentor who has been around the track a few times, plus a job and a future?).
Dear Madam,
Even after losing part of my pension fund (it will recover), I feel much safer with my mandatory defined-contribution plan in Chile than I was with a pay-as-you-go pension system when working in the US.
Dear Madam,
The arguments for the proposal mostly forget the situation of non-OECD countries and their citizens.
Public and private pension systems are a hard-won right of the citizens of rich countries, along with public subsidies for those who could not work for a substantial part of their lives. For many non-OECD countries, this step was not yet reached.
Yet the proposition argues, bottom line, for returning to the previous, hard, uncertain state from which we have evolved, and put the onus once again on the employee to build a feathered nest, with limited resources and tools of uncertain stability.
I believe that companies and the state, and my (youngest) generation, should not be absolved of the obligation to care, at least in part for the elderly, who have gained their right to a happy old age.
Nor, on the other hand, do I wish to be deprived of that exceptional incentive to work, which is a long and financially secure retirement.
Dear Madam, my 58-year-old wife and I (63) and my 20-year-old daughter are all on SSDI, and while decreasing or stopping it would change our monthly income by about $3,000, I would gladly do my part to help stabilize things so all can share when they run out of steam.
Dear Madam,
I am a recent graduate just entering the workforce in the USA. I am less likely to ever get a pension benefit from an employer than a current retiree, and am less protected from age discrimination than a current retiree. It was the generation before me, the current retirees and the soon-to-retire, who through mismanagement of government spent their social security trust fund on wars. Yet they expect me, the young worker, barely able to make enough to eat, to pay for them to retire at 65. This is social injustice of the worst kind. Money that was wasted by one generation should not be money the next generation is obligated to pay.
Dear Madam, I am voting for the motion for two reasons. First, if one considers employment a fair exchange for one's contribution to an enterprise and society, then there is no reason for that arrangement to end when one's contribution continues to be significant. Second, from a macro perspective, if we continue to nullify great quantities of productive experience and motivation, that can only decrease competitiveness, progress and overall standards of living. The easy life that many have bought into requires a careful re-read of Catch-22 and a re-viewing of the movie Wall-E, preferably with one's children.
Dear Madam,
In the very beginning there was no retirement. Retirement as we know it is the construct of a few, and I will say very few, people in Europe, and has benefited even fewer.
As a public health physician and medical scientist interested in human behavior and life competency, I see retirement as the surest way of accelerating the end of life.
We do need to change the scope, complexity and intensity of our work depending on our skills, flexibility and productivity relevant to the different stages of life.
The trend in the economy matters, but the need to be continuously and creatively engaged in life is critical for the health and wellness of humans.
With rising longevity, we need to look for opportunities to engage our seniors creatively throughout their lives relevant to their interest and competencies.
Life becomes much more interesting in older age if society is able to harness the expertise and experience of seniors and give them opportunities to contribute meaningfully.
Surely, work and engagement is the elixir of life. Let us create opportunities for seniors as they age.
Belai Habte-Jesus, MD, MPH
Dear Madam,
There should be an end to all mandatory retirement; there should be honest guidelines for retaining all aged workers based on their merit. In order not to lose experience and judgment, the aged should be accommodated just as we accommodate the disabled now. The accommodations should include options for part-time work, reduced benefits, insurance coverage associated with the job or profession, and so on. For example, how many valuable medical professionals are lost because they cannot work part-time and still pay for medical malpractice coverage that isn't reduced for part-time practice? Beyond that, social security reform is a must, as is a more enlightened and practical immigration policy.
Dear Madam,
I could not agree more with "Knitting Economist".
There is quite a (growing) number of persons around and, in many cases, above this age who are perfectly capable of working full tilt for quite a number of years beyond 65.
If they are also willing to do so, I do not see any reason for not taking advantage of this fact, both for the society and the "grown-up person" his/herself.
And, besides being productive in cold terms, that person will very probably enjoy for longer a better health (at least mental health).
Dear Madam,
The reason that the retirement age was originally set at 65 years is that this was the expected lifespan at the time. It is only reasonable to expect that, as our lifespan increases, so would our working life.
I do not expect to retire at 65, nor do I wish to. I hope to be productive for far longer than that.
Dear Madam,
As a systems engineer, I am well aware that any tweak to a stable system results in unintended consequences. The job market for this spring's US college graduates was abysmal. Extend each citizen's working life by even ten years, and a whole generation will face under- or unemployment.
Mark Jaeger
Dear Madam,
Those who have worked productively for 40 years should indeed be allowed to retire decently. They have built the wealth that exists today - whether they share in it or not.
Over the past 100 years, technology and productivity have advanced immeasurably more than our bodies. Many people are simply not equipped physically or mentally to work beyond 65.
With all the tools at our disposal, society can easily adapt to producing the mix of goods and services required to support an aging population. This is not an overwhelming challenge, provided selfish kids don't give up before we start.
As with health, a degree of responsibility must remain with the individual to contribute to retirement savings but there also must remain a core support system that allows everyone to live with reasonable comfort and dignity.
That social security is underfunded is another side effect of globalization and free markets. It is partly the result of the dismantling of traditional company pension schemes and the leveling down of US wages to compete globally.
We do not lack the capability for everyone to live a full and comfortable life - just the political will to put the adequate funding mechanisms in place that direct our activity towards producing the goods and services needed to achieve it. Another example of 'free' markets failing to produce the desired result for a majority of people.
All the talk of money is narrow minded. All that's required is for society to organize appropriately and the 'money' will take care of itself.
Dear Madam, I'm 75 years old and officially "retired", but my mind is not retired yet. I think my experience and my love for learning are totally alive. I'm a management professor, and I believe today I can be a better professor than I was ten years ago!
Our governments will clearly need to change policy in response to the demographic changes, and the reality that people are healthier, and hence more able to make sustained productive contributions into their sixties and perhaps beyond.
If only the US government (my own) -- and others -- had the stomach to ask people to sacrifice. And if only voters were more rational, open, and considerate of societal issues (and not only their own personal situations) when they stepped into voting booths. We could effectively solve this retirement issue, education funding, healthcare funding, energy/environmental policy, and so many other matters.
I'd love to see Economist debates on how we achieve such policies in the western world!
Dear Madam,
On December 20, 2010, I will retire after 31 years of employment, working a total of 43 years (started at 14). Over the past 31 years, I put money into my employer's, a local government, retirement account which they matched AND PAID INTO an outside retirement fund. The advantage in working for government? We normally earned 10-20% less than our for-profit counterparts, but had retirement that was matched and managed well, balancing out the loss in immediate income.
At 57 I know I am not working at my peak in this current profession as a well-paid director, at least not to my standards. Also, those below me need to progress, and if we DON'T retire, there are 30- and 40-somethings making lower wages than they should because there are no promotions as long as us long-in-the-tooth folks keep working! Regardless, I have been working on a PhD from an online institution to teach business seasonally, because I like being around people. The point: I don't want to keep working a 40+ hour work week, 52 weeks a year. I want to work on hobbies, volunteer in the community, and spend time with my grandkids as I could not with my son while I was building my career. But I don't mind working seasonally, for example. Now, if full-time work were reduced to 30 hours, four days a week, overall, maybe this discussion would be different!
Bottom line: We avoided extreme personal debt despite the lower salaries, the city has been honorable in managing its retirement portion, and we put other money in other retirement accounts as 40 years ago we were told that U.S. Social Security would not exist, and we took that to heart.
I worry more for my son and his wife. At 30, when I had 4 years into retirement savings, they have no retirement nest eggs forming, school debt is at $70,000, they have a newborn and a 4 year old, she can't get a job (recent grad), and he has an 8% decrease in income over last year due to quarterly furloughs. He has a masters in Electrical Engineering, but little work is available. A waste of education that has to be paid off -- with interest -- for both of them. He may not be able to retire as there is no matching funding from his corporation, and she'll have nothing in social security if this keeps up (not to mention she is a woman unemployed, and she WANTS to work).
Ultimately, this discussion is moot because of this situation, and my son is not alone in this. I see a lot of elderly poverty among the current 30 somethings as I look into the future, and that should be frightening to future government leaders. Or, he will have to work forever as there will be no retirement income available anyway.
That is more frightening than what will happen to my generation which, if they didn't get themselves over their head in debt and put money away, should be relatively comfortable. Some of this is personal responsibility, as always, a point made in Weller's statements.
I think of the 22 year old, a young friend with a newly minted bachelors degree in electrical engineering, who just bought a $35,000 2006 Lexus though he only has 2 weeks of work under his belt. The Dallas auto dealer was irresponsible in selling it to him, and this youth, 38 years from now, will complain that the government won't pay him enough to retire. Financial literacy and savings should be a required course in corporate America. But then, the U.S. wrings its hands when the savings rate exceeds a measly 6%, as we are a consumptive society. The future looks dim for the next generation, and this capitalist society is certainly sending mixed signals in terms of savings and spending!!! Retirement is just a piece of our financial mess and total thinking about work, savings, lending, retirement, etc.
Dear Madam,
When Mr. Weller talks about how much wealthier we are today, I believe he refers largely to an illusion. Wealth can only be stored as tangible goods, or in money which retains value only through investment in enterprises that achieve value through future productivity. In effect, our concept of wealth is largely dependent on an assumption that our children will work hard to generate value for our consumption during retirement. While this may be true, I, for one, do not wish to bet my retirement on it! I'm all for remaining personally productive as late into my life as I can, without arbitrary reference to any personal calendar.
Dear Madam,
We assume that if one saves his/her whole (or even most of a) working life, he/she will have enough money to live off of once retired. Why we need to discuss complicated structures for this is beyond me. Ideally, and I plan on doing this, one would save a little their entire life, put money into stocks at first, then, as they approach retirement, move the money into mutual funds with lower risk. If everyone behaved intelligently, we would not need any public involvement. The only advantage of public plans is that they force one to save. If one does not plan ahead and save, they should have to continue working and retire only once they have the financial ability to. Once one has that ability, one should retire and make way for the next generation.
If the baby boomers did not plan ahead, they should have to work to pay for their retirement.
Dear Madam,
To point out the obvious, defenders of such a motion tend to be older (as Mr. Magnus clearly is) while proponents of abolishing the retirement age tend to be younger (Mr. Weller).
The argument against changing the retirement age simply denies the unfunded-liability issue that will arise in the future; it is not a case of 'whether we should or not', really it is a case of 'when'.
Dear Madam,
Dear Madam,
I do not believe in letting workers remain in their traditional jobs beyond 65. If there is a desire to remain active -- financially, mentally and physically -- then modern tools offer plenty of opportunity with minimal reskilling required.
I myself was snapped up the day I "retired" as an (unpaid) teacher and as a freelance translator, using only my existing skills and knowledge.
What is needed (a huge potential market) are dependable tools that pensioners can maintain without helplines -- cell phones that are easy to handle (olds'mobiles?), computers that are simple, rugged and virus-resistant (microhard?), together with affordable, cheerful and durable cars (greyhounds?) that get them around without a hundred buttons on the dashboard and a handbook of 450 pages. Plus, of course, a self-generated ability to hustle if needed and to communicate with others of all different age groups on topics of mutual interest. Go for it, wrinklies!
Dear Madam,
I don't expect to "retire". Mind, I wouldn't know what to do with myself if I did. State funded pensions are something that my parents, members of the "greatest generation" awarded themselves at my expense. Demographics have made that disgusting Ponzi scheme unworkable in the 21st century.
Dear madam, I think a fixed retirement creates longevity of life.
Dear Madam,
Why should government determine when one may retire? This is a decision best left to the individual, not bureaucrats. The concept of Social Security should be radically reformed and privatized. How could anyone but the most mathematically challenged believe that funds in Social Security, aka government instruments, could deliver the returns needed to 1) outpace inflation and 2) meet the growing demands of payments?
Dear Madam,
From a practical point of view, we must change our corporate and national retirement plans--we're running out of money. Certainly, any individual can save for a retirement and take it whenever they have "enough". But the public cannot afford to provide the current level of retirement benefits given the demographic changes underway in the US and Europe.
Whether the change comes through insolvency or the political process is unknown. But it must come.
Jeff Smith
Dear Madam,
I see two things missing from the debate. First, smaller family size means more opportunity to invest for retirement; googling “cost of raising a child” in the U.S. suggests totals, through college, in the region of $250,000 per child. The website translates child-raising costs into lost investment returns. We should compare this with the traditional Ponzi scheme, which funds retirement by perpetually increasing the total population to maintain a low ratio of retirees to workers.
Smaller family size also means that any inherited wealth brings more to the individual because it is divided among fewer siblings.
Second, “raising the retirement age” doesn’t necessarily mean that employers will allow people to work longer. Some employers kindly give their workers regular raises, until early middle age when they discharge them for costing too much. These workers may spend the rest of their working lives on a much lower career ladder.
Dear Madam,
People are living longer therefore their needs are likely to be extended and so perhaps the need for continued income is becoming again more pressing.
In many areas of study and work, vitality, ingenuity and drive are not the purview of the young alone, nor is the wisdom that comes with experience, which often proves very useful.
Gainfully employed people who are passionate about what they do shouldn't be held hostage to mandatory retirement by age norms that often do not conform to the reality of a workplace and a person's ability to excel.
Financial security is not assured no matter how "cognitively advanced" one may be (with the implication one can successfully predict the future) or how many social support structures may be in place.
People retirement age or not should not have their opportunities curtailed. I would venture to say most of those considering retirement have a good idea of when enough is enough.
regards,
Dear Madam Moderator,
In my country (USA) not only are we living longer, but we're taking longer to enter the work force. In other words women (and men) are not marrying and having children before 20, or even 30. If we are delaying our entry into the workforce (for the most part and not counting part-time work while still in school) then it makes sense to delay our exit, as well. The larger issue here is an overreliance on a public system that was intended to be only a safety net.
I'm 46 and plan on working another 25 years or so, in some capacity. Also hoping to work my way out of debt, finance my child's college years, and build a nest egg for my old age. I'll be working a long time, and thankful to do so. The alternative was poverty or dependency or death.
Dear Mr Magnus,
I'm all for work programs and training for the elderly. Unfortunately, I don't think they will be as effective as you might hope. What your thesis doesn't mention is the cyclical effect of wealth accumulation and economic collapse. After WWII there was an improved standard of living. In the 50's, suburbs boomed, music was new, cars and clothes were fun, flashy and affordable, and babies were popping out like never before. Time goes on and the economy goes up and down. We found that if we put the brakes on the economy when it's hot, we can reduce the collapse. Then came the massive wealth accumulation that started in the late 1980's and continues today. A billion dollars of personal wealth just isn't enough. The middle class didn't mind because times were good, not in terms of improved standard of living but in terms of comfort. We're even in our second biggest baby boom. Most of the wealth came from productivity efficiencies, so we slowly started stripping away all of our economic safety rules. And the wealthy get wealthier. Without some rules to keep a market healthy, big companies step on small companies and get more corrupt and less responsive to the needs of customers. The middle class is still appeased because, without any regulations, banks offer zero-cost, accelerated move-in loans on inflated home prices, which makes everyone happy as long as the spiral is upward. Collapse means people are losing their houses or moving into neighborhoods that are drowning. Large companies have massive layoffs because products made overseas do not have to follow our ecology and labor laws. The question is not who is going to make the products for enormous profits; it is who is going to buy the products with enormous prices. Billionaires don't make good consumers. We have lots of kids; unfortunately, the question isn't who is going to pay for the retired and imprisoned, it is who is going to pay for kindergarten.
One last dreary note, if you followed me this far, life expectancy in a country has a very solid correlation to the wealth of the middle class (not the upper class, even dirt poor countries have wealthy people.) So, if this collapse continues, I will not have to worry about who is going to pay for my retirement.
Dear Madam,
In ancient Indian traditional fourth stage of life, normally beginning at age 65, a Hindu gave up all business or commercial activity and devoted the rest of his life for the betterment of "the family"…. Hindu scriptures say the whole world is one family. In Sanskrit, the language of our scriptures this thought is termed as "Vasudaiva Kuttumbakkam". One has to return to "the family" what it gave one in the first 65 years of life. My interpretation is that one must devote the remaining productive part of his life - after 65 - to activities which are not directed towards commercial or economic activities - but one must remain productive till his dying day.
Ravi Chawla, Editor, Seniors World Chronicle
Dear Madam,
Readers of this paper are doubtless possessed of the cognitive skills to manage funding their own retirement. Yet, when considering the current scheme of public pensions, employer pensions, and private savings/investments, we must also consider those not so well endowed with cognitive skills. As a society, surely we are obligated to provide for those without the skills and discipline to provide for themselves - but we should do so in a way which does not simply consist of allowing those who do not wish to save for retirement to loot the public treasury. How to do so surely is no simple task, but it must be considered. Thanks to all who contribute to this debate.
Dear Madam,
Both proposer and opposition offer compelling arguments. However their focus, particularly in terms of the wider social and economic good, is too limited. Beyond retirement pensions and tax benefits, there is a broader question of generational relations and employment expectations that needs to be addressed. To ask the most basic question: if older persons stay longer in the workforce, what is the impact on younger persons? Working longer (and/or living longer) is an issue that simply can't be seen in isolation from related and far-reaching contemporary changes in life expectancy, employment models, and generational relations.
Dear Madam,
The current pension system is in shambles. Developed nations around the world are slowly going bankrupt trying to cope with more people exiting the workforce through retirement than entering it; fewer workers caring for more retirees. This is worsened by the fact that life expectancy is rising, which means they are collecting their pensions far longer than before. One option is to abolish the legal maximum age of employment. This would allow people to continue working, thereby adding more to government pension plans and withdrawing less. This looks great on paper.
However, if we choose to abolish maximum retirement ages, there are some caveats to consider. Those who will be least willing to retire are high-ranking executives, essentially closing off the best jobs to the more efficient youth. Especially in the world now, everyone needs to be aware that anybody born a few years after them will be more integrated in the global digital system than his or her elder. I am sure everybody under 30 has experienced their boss spending hours explaining how to do some 'new' technique in Excel, just to think about how you figured that out yourself in high school or university. Perhaps in the past it was necessary to keep older people on hand as long as possible because of the nature of education through apprenticeship. Not to mention that until about 1800 technology did not change nearly as quickly, so the longer a person practiced his or her trade, the better they would be at it; they didn't have the concern that at some point down the line everything would be different and they would have to learn the whole system again.
This could also lead to a greater concentration of wealth. If a parent can stay active in a company as a senior executive long enough, it makes it much easier to pass on the position by nepotism to their child; despite a 20 or 30 year age difference. Especially when you add to the fact that salaries and bonuses are quickly rising in their share of the remunerations the wealthy receive. If people were able to collect these funds for an extra 10 - 15 years think about how many more legacies there would be.
Anybody who has given up on his delusions of grandeur but has enough savings to afford him or herself a reasonable retirement will opt to do so instead of working a menial, dead-end job with little chance of further promotion. Add to this list people who work physical labour, or are in decent unions, and have the option to retire early for a percentage of their full pension. Add the people who barely make their respective national governments stipend.
When considering such a radical change, one must consider who it will benefit. Simply allowing people the chance to work longer will not necessarily lead to them doing so. Since anybody can already choose to retire at any age up to the limit, perhaps they will still do so. Those who do keep working may not necessarily be the people we had hoped would continue.
Will this measure be fair and proportional? It doesn't appear so. In fact, it is likely that this would be the first step in destroying a pension system that millions of people rely on. If the maximum retirement age is raised, will the minimum age to receive a pension decrease? Do we choose to coax people back into working with some tax benefit?
Dear Madam,
Reducing the size of the workforce through mandatory retirement will reduce unemployment and increase job opportunities for the next generation.
Dear Madam,
Having read the opening statements in this debate I would suggest that there is no fundamental disagreement between the two writers. They both believe that society needs to make changes to address the funding of retirement, and simply disagree as to how to make these changes.
I would suggest that the solution may lie with elements of both writers' ideas. I for one would not welcome being forcibly retired from my job at 65, and would welcome the opportunity to work longer and continue to contribute. It has long seemed to me absurd that as a society we fail to make greater use of the skills and knowledge acquired over a life time. However, I also agree that reform of private pensions to make it easier for people to make better pensions provision for themselves must be part of the solution.
Whilst I realise that the motion in a debate needs to be contentious, I would suggest the real question is how we wish to fund retirement in future, rather than whether we should abolish it in its present form.
Dear Barbara,
Thanks for prompting a debate on this important topic. I do think a radical proposal like this gets everyone thinking in a more focused manner.
If you think that the problems facing Europe and America and Japan are difficult, take a look at China. I recently wrote a blog on a proposal for a radical overhaul of that system at this link:...
Both sides of this argument have valid points. There are things about the current system that can be preserved. But, that which can be saved may need to be incorporated into a much broader and bolder approach that is sustainable in an aging population. The solution has to be specific to the demographics of each country involved.
I do think working longer is absolutely central to any reform. Unfortunately, requiring people to work longer is as difficult to implement as requiring people to retire at a given age. People also need to learn to live more frugally in order to increase their savings. The virtue of thrift and frugality has been lost in many areas of the Western world. In Asian countries, thrift has increased.
People need to realize that even if the best reform proposals are put into place, just about every approach raises sustainability questions, whether it be funded or not funded or a mix of the two. What governments can do is encourage thrift and create mechanisms that allow people to do what households have done from time immemorial, save money.
Bill Gross of PIMCO captured this new idea with his quip last week that "we have replaced shop 'til you drop with save to the grave." The Western model of growth built on pushing the envelope on consumption has been severely shocked and the response to this shock by households may be as important to building a sustainable future where people can plan for a secure retirement as any other approach.
Dear Madam,
Let's face it: people nowadays live longer, healthier, and better lives. Life expectancy has risen dramatically in the Western world. The current retirement age limit of 65, especially in the public sector, is ridiculous and counterproductive. We are creating for ourselves an army of "incapacitated elders" where we could have invested in their experience and productivity.
Moreover, neuroscientific evidence shows clearly that the best prevention and therapy against senile dementia is exercising the brain. Use it or lose it, they say. Is there a better exercise than practicing your own profession for profit? Is it so bad?
Last, it is an ethical matter: 65+ persons who continue to work are less of a burden on the social system, since they don't need the same pension as mandatorily retired ones.
Dear Madam,
There are two practical matters for politicians and employers:
* How do you avoid throwing to the wolves those skilled for work requiring agile bodies?
* How do you change employment practices to make use of "grey wisdom", as I can think of few, if any, examples of organizations having a "tailing off" career path for valuable staff who have much to contribute but beginning to lack stamina - and such part-time retention would certainly make life hard for the young turks!
If those issues can be addressed (including rules for drawing on superannuation benefits), then a fixed retirement age should be replaced with a functional definition of capacity.
In principle, therefore, I support the motion. | http://www.economist.com/debate/days/view/329/print/all | crawl-003 | refinedweb | 14,567 | 59.84 |
You are here because you decoded some JSON data and expected it to be a dict type, but it came out as a list type.
In other words, you want to parse a JSON file and convert the JSON data into a dictionary so you can access it using its key-value pairs, but when you parse the JSON data, you receive a list, not a dictionary. In this article, we will see how to access data in such situations. Before that, let's first understand why this happens.
- It can happen because your JSON is an array with a single object inside, for example, when somebody serialized a Python list into JSON. So when you parse it, you get a list object in return. In this case, you need to iterate the list to access data.
- Also, if somebody serialized a Python list (which contains a dictionary) into JSON, when you parse it, you will get a list with a dictionary inside. We will see how to access such data.
We will see both examples. But first, understand the scenario with an example.
import json

sampleList = [125, 23, 85, 445]

# Serialization
print("serialize into JSON and write into a file")
with open("sampleFile.json", "w") as write_file:
    json.dump(sampleList, write_file)
print("Done Writing into a file")

# Deserialization
print("Started Reading JSON data from file")
with open("sampleFile.json", "r") as read_file:
    data = json.load(read_file)
print("Type of deserialized data: ", type(data))
Output:
serialize into JSON and write into a file
Done Writing into a file
Started Reading JSON data from file
Type of deserialized data:  <class 'list'>
As you can see, we received a list from the json.load() method because we serialized only a list-type object. Now we can access the data by iterating over it. Just add the following lines at the end of the above example and execute it.
print("Data is")
for i in data:
    print(i)
Output:
Data is
125
23
85
445
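Since the decoded type depends entirely on what was originally serialized, a defensive pattern (my own sketch, not from the original article) is to check the type before assuming dictionary-style key access:

```python
import json

# json.loads()/json.load() return whatever type was serialized, so check
# the decoded type before assuming key access will work.
encoded = json.dumps([125, 23, 85, 445])  # simulate a file holding a JSON array
data = json.loads(encoded)

if isinstance(data, dict):
    # Safe to use key-value access
    values = list(data.values())
else:
    # It is a list: iterate it directly
    values = list(data)

print(values)  # [125, 23, 85, 445]
```

This way the same code handles both a JSON object and a JSON array without raising a TypeError.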
Deserialize a JSON array that contains dictionary inside
Now, Let’s see the second scenario. Let’s assume somebody serialized Python list(which contains a dictionary) into JSON. i.e., The list contains a dictionary inside.
In this example, I am serializing the following MarksList into JSON.
StudentDict = {"id": 22, "name": "Emma"}
MarksList = [StudentDict, 78, 56, 85, 67]
You can access the actual dictionary directly by accessing the 0th item of the list. Let's see the example now.
import json

StudentDict = {"id": 22, "name": "Emma"}
MarksList = [StudentDict, 78, 56, 85, 67]

# Serialization
encodedJson = json.dumps(MarksList, indent=4)

# Deserialization
data = json.loads(encodedJson)  # or you can read from a file using load()
print("Type of deserialized data: ", type(data))

print("JSON Data is")
for i in data:
    if isinstance(i, dict):
        for key, value in i.items():
            print(key, value)
    else:
        print(i)
Output:
Type of deserialized data:  <class 'list'>
JSON Data is
id 22
name Emma
78
56
85
67
Also, you can access individual values directly using a key name, as in the following code.
studentId = data[0]["id"]
studentName = data[0]["name"]
print(studentId, studentName)
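If you don't know in advance at which position the dictionary sits, one approach (my own sketch, using sample data that mirrors the MarksList example above) is to filter the decoded list by type:

```python
import json

# Hypothetical JSON string with the same shape as the MarksList example
encoded = '[{"id": 22, "name": "Emma"}, 78, 56, 85, 67]'
data = json.loads(encoded)

# Separate the dictionaries from the plain values
records = [item for item in data if isinstance(item, dict)]
marks = [item for item in data if not isinstance(item, dict)]

print(records)  # [{'id': 22, 'name': 'Emma'}]
print(marks)    # [78, 56, 85, 67]
```

This avoids hard-coding index 0 and still works if the dictionary appears elsewhere in the array.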
Let’ I know if this doesn’t answer your question in the comment section. | https://pynative.com/python-convert-json-string-to-dictionary-not-list/ | CC-MAIN-2020-10 | refinedweb | 527 | 55.54 |
This is the third tutorial in our series on how to use Voice APIs with ASP.NET.
In previous tutorials, we learned how to make a Text-to-Speech phone call with ASP.NET and how to Play Audio to a Caller in ASP.NET Core. But how about receiving calls? The good news is the Vonage Voice API handles inbound calls as well.
Inbound calls are calls made to a Vonage number from another regular phone anywhere in the world. Both inbound and outbound calls follow the same call flow once answered. This call flow is controlled by an NCCO.
In this tutorial, we will create an ASP.NET app that handles inbound voice calls and returns a dynamic response.
Learning objectives
In this tutorial, we will:
- Create an ASP.NET Core app.
- Use NancyFX with ASP.NET Core.
- Create a Vonage voice application.
- Receive inbound calls within the app.
- Create and return an NCCO.
- Run and test the code using Ngrok.
Prerequisites
- Visual Studio 2017.
- A project set up as shown in the previous tutorials.

Once the application is configured successfully, we are ready to receive an inbound call with the Vonage Voice API!
Receiving a phone call with ASP.NET
When a call is received, the Vonage Voice API will make a request to the application to figure out how to respond to the caller.
To achieve this, we are going to use NancyFX alongside our ASP.NET Core project.
Nancy is a lightweight open-source framework that promotes the "super-duper-happy-path". This means that it has sensible defaults and conventions and tries to stay out of our way as much as possible.
We are all good to go! The next step is to create a Nancy module to handle any requests to
/webhook/answer.
using Nancy;

namespace NexmoVoiceASPNetCoreQuickStarts
{
    public class VoiceModule : NancyModule
    {
        public VoiceModule()
        {
            Get["/webhook/answer"] = x => "Hello happy path";
        }
    }
}
I'm using Postman to test, and as you can see our
/webhook/answer route is returning exactly what's expected.
This is a great start, but Vonage doesn't know what to do with that string. To properly respond to the call, we need to return an NCCO.
using Nancy;
using Newtonsoft.Json.Linq;

namespace NexmoVoiceASPNetCoreQuickStarts
{
    public class VoiceModule : NancyModule
    {
        public VoiceModule()
        {
            Get["/webhook/answer"] = x => GetInboundNCCO();
        }

        private string GetInboundNCCO()
        {
            dynamic TalkNCCO = new JObject();
            TalkNCCO.action = "talk";
            TalkNCCO.text = "Thank you for calling from " +
                string.Join(' ', this.Request.Query["from"].ToString().ToCharArray());
            TalkNCCO.voiceName = "Kimberly";

            JArray jarrayObj = new JArray();
            jarrayObj.Add(TalkNCCO);

            return jarrayObj.ToString();
        }
    }
}
GetInboundNCCO() will create an NCCO object that will use Text-To-Speech to read the caller’s phone number back to them using the
talk action within the NCCO.
We are accessing the phone number via the
from param in the request.
That's all the code we need. To test this properly, some more configuration steps are required.
If you've been following up so far, you've already configured your Vonage account and created a voice app as shown in this post.
We need to link this app to a Vonage phone number, the number we will be calling. If you don't have a number, you can purchase one using the dashboard or the CLI.
vonage numbers:buy PHONE_NUMBER US
Similarly to link the number, you can use the dashboard or the CLI.
vonage apps:link --number=PHONE_NUMBER APP_ID
We need to tell Vonage which URL to make a request to when a call is received (
answer_url). For me, this URL is and that's only running locally.
To expose our
answer_url, we will use our good friend Ngrok.
ngrok http 63286 -host-header="localhost:63286"
This will return a new URL (mine is) that can be used as the
answer_url for the voice application. Update your
answer_url to
http://[id].ngrok.io/webhook/answer
Tada! Run the app and give it a go by calling the Vonage number you purchased. It should thank you for calling, then read out your phone number. | https://developer.vonage.com/blog/2018/11/21/how-to-receive-a-phone-call-with-nexmo-voice-api-asp-core-core-and-nancyfx-dr | CC-MAIN-2022-27 | refinedweb | 676 | 67.65 |
Radix sort for float numbers (2009-08-18 14:31:59)
Radix sort is a linear sorting algorithm. However, it is commonly applied to integral values. This article shows that, under certain circumstances, radix sort can be applied to floating point values as well. read on
Four easy to avoid programming mistakes (2009-08-13 16:41:56)
After my first post about mistakes that are relatively often made but easy to avoid, I collected four more. read on
A nice method for... (2009-08-13 15:05:36)
Consider the following Code (in Python):
def dosomething(a, b):
    result=0
    while (a):
        if (a&1):
            result+=b
        b=b<<1
        a=a>>1
    return result

read on
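For the curious, a brute-force comparison (my own addition, not part of the original post) suggests what the snippet computes: it is binary shift-and-add multiplication of non-negative integers.

```python
def dosomething(a, b):
    result = 0
    while a:
        if a & 1:      # add b whenever the current low bit of a is set
            result += b
        b = b << 1     # double b
        a = a >> 1     # halve a
    return result

# Exhaustive check over a small range: it always matches a * b.
for a in range(50):
    for b in range(50):
        assert dosomething(a, b) == a * b
print(dosomething(6, 7))  # 42
```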
Logarithm of float numbers (2009-08-13 12:52:47)
Some time ago, a teacher asked me how computers logarithmize float numbers.
Of course, one could use the power series

$$\ln(1+x) = \sum_{k=1}^{\infty} (-1)^{k+1} \cdot \frac{x^k}{k}$$

But soon it is clear that the series converges quite slowly.
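A quick numerical illustration of that slow convergence (my own sketch, not from the original article): at x = 1 the series needs a thousand terms to get within roughly 5e-4 of ln 2.

```python
import math

def mercator_ln1p(x, terms):
    # Partial sum of ln(1+x) = sum_{k>=1} (-1)^(k+1) * x^k / k
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

error = abs(mercator_ln1p(1.0, 1000) - math.log(2.0))
print(error)  # still on the order of 5e-4 after 1000 terms
```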
The following article gives a general insight into how floats can be logarithmized faster. It shows an approximative algorithm that can be extended to get more accurate results. read on
Quine (2009-08-11 19:07:30)
A program that prints out its own source code is called a quine. In every Turing-complete programming language it is possible to write such a program. This fact was proven by Stephen Kleene in his so-called recursion theorem.
Today I wrote my first quine (in C):
int main(int argc, char *argv[]){char* s="int main(int argc, char *argv[]){char* s=%c%s%c;printf(s,34,s,34);}";printf(s,34,s,34);}

read on
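For comparison, the same self-reproduction trick can be done in two lines of Python (my own illustration, not part of the original post):

```python
# A minimal Python quine: the string holds a template of the program,
# and %r re-inserts the string's own repr when formatting.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running this as a script prints exactly its own two lines of source.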
Changes Net::GPSD

2010-06-01 0.39
2010-01-02 0.38
  - GPS-PRN moved to GPS-OID
2009-01-18 0.37
  - Added socket caching (Bug RT 38406)
2007-09-15 0.36
  - Added set to Net::GPSD->host and Net::GPSD->port methods
  - Updated Net::GPSD->speed_knots method
  - Added example-googleearth.cgi
2006-01-15 0.35
  - Added Net::GPSD->speed_knots method
2006-01-03 0.34
  - Added extensions to example scripts
2006-12-17 0.31
  - added oid satellite capability from GPS::PRN
2006-12-02 0.30
  - cleaned
2006-12-01 0.29
  - updated test near function
  - Distance function from Geo::Inverse
2006-11-30 0.28
  - documentation
  - getsatellite method supports wantarray
  - moved PI from sub to constant
  - moved stuff around everywhere but no real changes
2006-10-28 0.27
  - changed track formula to use Geo::Forward.
2006-10-11 0.26
  - Error in Net::GPSD->track $||$*$ -> ($||$)*$;
2006-06-19 0.25
  - Change dependency on S[0] first and then O[14]||M[0] to get fix.
  - Added examples
2006-06-14 0.24
  - Change dependency on O[14](new) and M[0] vice S[0] to get fix.
2006-06-11 0.23
  - added logic to handle O=? watcher no fix
2006-06-08 0.22
  - modified subscribe method to use gpsd watcher mode vs. poll mode
2006-06-08 0.21
  - updated example-tracker-text
2006-06-08 0.20
  - Scrapped development concerning Math::Bezier
2006-04-23 0.19
  - added example-tracker-text
  - added Point latlon method
  - added wantarray capability to commands method
  - changed a connection error print from stdout to a warn on stderr
2006-04-08 0.18
  - shift() warning
  - fixed test errors
2006-04-08 0.17
  - Forgot to update versions
  - Updated the CHANGES file
2006-04-08 0.16
  - Error in track function
    > # Heading is measured clockwise from the North. The angles for the math
    > # sin and cos formulas are measured anti-clockwise from the East. So,
    > # in order to get this correct, we have to shift sin and cos the 90
    > # degrees to cos and -sin. The anti-clockwise/clockwise change flips
    > # the sign on the sin back to positive.
    > my $distance_lat_meters=$distance_meters * cos(deg2rad $p1->heading);
    > my $distance_lon_meters=$distance_meters * sin(deg2rad $p1->heading);
  - Added deg2rad function
  - Added Point->latlon function
2006-04-01 0.15
  - Updated pod for GPSD.pm mostly just the examples are linked
2006-03-29 0.14
  - Renamed GPS::gpsd to Net::GPSD
2006-03-22 0.13
  - Internal version numbers were wrong
2006-03-22 0.12
  - Error in point copy for initialization
    < $self->{$_}=$data->{$_};
    > $self->{$_}=[@{$data->{$_}}];
2006-03-21 0.11
  - simplified GPSD default_handler
  - pods for ./bin/ examples
  - hopefully fixed META.yml error
2006-03-19 0.10
  - moved examples to ./bin/ folder
2006-03-17 0.09
  - fixed 1 warning
2006-03-15 0.08
  - CPAN changes. Now automakes with CPAN!
  - moved Report::http under gpsd namespace
  - moved modules to the lib folder
  - renamed tgz file to GPS-gpsd-X.XX.tgz format
2006-03-11 0.07
  - made a bunch of changes to the distance calculations
  - Fixes error in the parse routine
    < $data{$1}=[split(/ /, $2)];
    > $data{$1}=[split(/\s+/, $2)];
  - Updates to CPAN install capability
  - Updates to documentation
  - Update to the subscribe interface
2006-02-23 0.06
  - No user interface changes
  - Updates the pod documentation so that it displays better on CPAN.
  - Moved code from GPS::gpsd::Satellite->list to GPS::gpsd->getsatellitelist.
  - Documentation, Documentation, Documentation...
2006-02-22 0.05
  - Heavy user interface changes
  - Modified a few interface names to meet my tastes register->subscribe
  - Documentation, Documentation, Documentation...
2006-02-21 0.04
  - Heavy user interface changes
  - First CPAN Documentation Begins
  - Added satellite object interface
2006-02-19 0.03
  - Heavy user interface changes
  - Added Point object interface
2006-02-?? 0.02
  - Heavy user interface changes
2006-02-?? 0.01
  - Original version on CPAN.
START 12:01 October 14th, 2012
Following up on my previous post, I'd like to give a more real-world example of how a modern software engineer should be able to write and test code using the vast amount of tools/frameworks available, getting the job done in a timely fashion while producing high-quality code.
To start, of course, we'll need some sort of project to build while writing about how to solve the issues we come across. So for this writing I will attempt to create a key/value store in Python that can be used to mock out a real key/value store such as Redis, LevelDB or Amazon's Dynamo. The current requirements will be:
- Support 3 simple API calls:
- GET key
- SET key value
- DEL key
- The protocol used to communicate should be human readable and really efficient, just like Redis’s communication protocol is.
As part of this writing I will track in real time how many hours I’m spending on this project while writing the post so that at the end you can get an idea of how little overhead writing tests and documentation while developing really has.
STOP: 12:11 on October 14th, 2012
START 14:57 on October 14th, 2012
So the first thing we’ll have to do is to define the exact protocol we’re going to use in a way that can be easily consumed by others who will attempt to talk the same protocol or create clients to talk this custom protocol. We mentioned earlier we’d be using a protocol similar to what Redis uses. You can read up on Redis’s communication protocol here and we’ll be greatly simplifying this protocol for this writing like so:
*[number of arguments] CR LF
[command name] CR LF
[argument 1] CR LF
[argument 2] CR LF
This means that the first thing sent is an indication of how many arguments will follow, with fields separated by a carriage-return/line-feed pair (\r\n). Then each of the arguments is a single line terminated by \r\n.
So sending a SET request for the key A to set it to the value 100 would look like so on the wire:
*3\r\nSET\r\nA\r\n100\r\n
Replies will also be very similar to the way Redis deals with this type of thing and we’ll basically start a response with a + on success followed by a single line response, or we’ll start with - if there was an error followed by a single line with the error message.
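To make the framing concrete, here is a small client-side sketch (hypothetical helper names, not code from this article) that encodes a request and a reply in this wire format:

```python
def encode_command(cmd, *args):
    # *<token count>\r\n, then each token on its own \r\n-terminated line
    parts = [cmd] + [str(a) for a in args]
    out = '*%d\r\n' % len(parts)
    for part in parts:
        out += '%s\r\n' % part
    return out

def encode_reply(ok, payload):
    # '+' marks a successful single-line reply, '-' an error message
    return ('+%s\r\n' % payload) if ok else ('-%s\r\n' % payload)

print(repr(encode_command('SET', 'A', 100)))  # '*3\r\nSET\r\nA\r\n100\r\n'
```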
We now have to decide what to build the protocol server on, and currently one of the most flexible and best-performing options in the Python world is Twisted, which can be used to easily create your own custom protocol or, better yet, to easily build your own HTTP, FTP, etc. server in minutes. I had to brush up on my knowledge of Twisted and how to create my own custom protocol, and after reading through the documentation for some 15 minutes I found that what I wanted to use was the LineReceiver implementation to build my protocol on a per-line reading of any connection. The first example that you may be able to put together using the LineReceiver class may look like this:
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor

class Answer(LineReceiver):

    answers = {'How are you?': 'Fine',
               None: "I don't know what you mean"}

    def lineReceived(self, line):
        if self.answers.has_key(line):
            self.sendLine(self.answers[line])
        else:
            self.sendLine(self.answers[None])

class AnswerFactory(Factory):

    def __init__(self):
        pass

    def buildProtocol(self, addr):
        return Answer()

reactor.listenTCP(9999, AnswerFactory())
reactor.run()
This of course is just an example of how you can use Twisted to make a line-reading protocol handler. Now let's actually use this to read our new custom protocol, which is a multi-line protocol that needs to reconstruct each command from the multiple lines it is broken up into on the wire.
Now even before we start writing the actual server handler code, let's first write up a few very simple unit tests that we can use to verify that we have working set and get commands:
import socket
import unittest

class ProtocolTest(unittest.TestCase):

    def setUp(self):
        self._connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._connection.connect(('localhost', 9999))

    def tearDown(self):
        self._connection.close()

    def _send_cmd(self, cmd, *args):
        cmd_string = '*%d\r\n' % (len(args) + 1)
        cmd_string += '%s\r\n' % cmd

        for arg in args:
            arg = str(arg)
            cmd_string += '%s\r\n' % arg

        self._connection.sendall(cmd_string)
        return self._read_response()

    def _read_response(self):
        status = self._connection.recv(1)

        response = ''
        read = self._connection.recv(1)
        while read != '\n':
            response += read
            read = self._connection.recv(1)

        if status == '+':
            # successful response with a single line response
            return response
        else:
            # unsuccessful response with error message
            raise Exception(response)

    def test_basic_set_and_get(self):
        self._send_cmd('SET', 'A', 100)
        resp = self._send_cmd('GET', 'A')
        assert resp == '100', 'expected 100 got %s' % resp

if __name__ == '__main__':
    unittest.main()
There is quite a lot of test code displayed there, but that's because we had to create those helper methods for sending commands and receiving responses. The actual test itself is just 3 lines: send a SET command, then verify with a GET command that the server recorded our value of 100 correctly.
We’re now back to the server code because we now need to restructure our CommandReader so that it can actually read each command line by line and translate that into the right underlying set/get command. In a first approach at writing our CommandReader what we need to do is to make this LineReceiver act as state machine that transitions between commands in a very well defined manner. Every line that starts with an asterisk is expected to be a new command that is consumed till all arguments are read and the command is dispatched and the response is sent back to the client awaiting a response.
A first approach may look like so:
class CommandReader(LineReceiver):
    """ state machine command reader """

    def __init__(self):
        self.start_command = False
        self.expected_arguments = -1
        self.args = []

    def lineReceived(self, line):
        if line.startswith('*'):
            # new command starting
            self.start_command = True
            self.expected_arguments = int(line[1:])
            self.args = []

        if self.expected_arguments == 0:
            # command complete lets dispatch and return OK
            self.sendLine('+OK')
            self.start_command = False

        if self.start_command:
            # we're still reading the arguments
            line = line.strip()
            self.args.append(line)
            self.expected_arguments -= 1
Of course if we run our test against this server it thinks it has stored data and will fail to retrieve the desired data because we’re always responding with ‘OK’ as you can see here:
Let's take our current implementation and make the CommandReader smart enough to look up the correct handler based on the command name supplied, and return the response that the handling function returns.
After some debugging and restructuring the code a bit as I made changes and re-ran the tests I realized that the checking for a complete command should always be done after processing each line. Then I also figured that socket.sendLine already adds the newline character at the end of each response. Once I fixed the code up like so:
class CommandReader(LineReceiver):
    """ state machine command reader """

    def __init__(self, cmdhandler):
        self._cmdhandler = cmdhandler
        self._cmd_names = dir(cmdhandler)
        self.start_command = False
        self.expected_arguments = -1
        self.args = []

    def lineReceived(self, line):
        if line.startswith('*'):
            # new command starting
            self.start_command = True
            self.expected_arguments = int(line[1:])
            self.args = []
            return

        if self.start_command:
            # we're still reading the arguments
            line = line.strip()
            self.args.append(line)
            self.expected_arguments -= 1

            if self.expected_arguments == 0:
                # lookup the method handler
                command = self.args[0]
                # remove the command name from the arguments
                self.args = self.args[1:]

                if command in self._cmd_names:
                    func = getattr(self._cmdhandler, command)
                    result = func(*self.args)
                    self.sendLine('+%s' % result)
                else:
                    self.sendLine('-unknown command %s' % command)

                self.start_command = False
Our unit test now passes successfully, like this:
Now I have a working prototype that can actually do set and get requests and save that information into memory at runtime. At this point it's 16:30 on October the 14th, 2012, and with writing the unit test and writing the code I've spent just a little over an hour and a half to have a working prototype that could be used by a dependent service to start integrating against.
What we’ll focus on next is using tools such as pylint to identify problems in the code as well as using setuptools to create a setup script that can be used by anyone to easily install this service and start it for others to integrate with while features are being added to the base source.
STOP 16:32 on October 14th, 2012
START 18:59 on October 14th, 2012
So in order to share our code with others we’ll have to create a setup script that can be used to easily install and startup the service as well as being able to upgrade your current installation as further updates are made to the code base. For this specific code being written we’ll create a simple setup.py file like so:
from setuptools import setup, find_packages

setup(
    name='kvs',
    version='0.0.1',
    author='Rodney Gomes',
    author_email='rodneygomes@gmail.com',
    url='',
    test_suite="tests",
    keywords=['keyvalue', 'storage'],
    py_modules=[],
    scripts=['kvsserver.py'],
    license='Apache 2.0 License',
    description='simple key value store',
    long_description=open('README.md').read(),
    packages=find_packages(exclude='tests'),
    install_requires=[
        'twisted',
    ],
    entry_points={
        'console_scripts': [
            'kvs_start = kvsserver:main',
        ],
    },
)
With that we can now check this code in and anyone who wants to run your service can easily do the following on the command line with Python 2.7 installed:
Now your service can easily be started by using the ‘kvs_start’ script that should now be in your path.
Before we proceed to start adding more features and tests to our code, let's introduce the notion of a code style checker and static code analyzer called pylint and how we can use it to make sure our code is clean and lean and a little less prone to errors. Using pylint is super easy: you can install the Python package with a quick 'pip install pylint' and then call it on any code base like so:
There is quite a bit of output that pylint supplies the most important parts are shown above. We can quickly see that we’re missing quite a few docstrings and then there are a few things we can ignore such as the missing members that is obviously just pylint not finding the imports correctly. As with any tool pylint is intended to point you in the direction of a problem and you ultimately have to make the decision to fix something or leave it be and use for example in this case a docstring to tell pylint to be quiet. The score given to your code is an interesting way of showing developers if their code is up to par with how Python code should be written and maintained.
Let's add those docstrings and silence the missing-member warnings for members that we know are in fact there. With a subsequent rerun of pylint I now have a score of 7.45, which is a pretty decent score. Something like pylint should be used to make sure that the code quality doesn't degrade drastically with time and that certain levels of code quality and proper code writing are maintained across the team.
We’re now at a stage where others are already able to install and use our code and need to continue adding features to our existing service while making sure that with each checkin we don’t break any of the older functionality and yet are able to quickly introduce new features or bug fixes.
STOP 20:01 on October 14th, 2012
START 20:22 on October 20th, 2012
At this point I decided to restructure the code a bit by creating a kvs package and moving the CommandHandler logic into its own module. That way I can continue development on the way we're storing/retrieving data without having to muck around in the kvsserver module. While doing this I also created a few more tests to verify that the set, get and new del operations all work correctly.
We now have the full API available with a few additional unit tests that verify the various use cases for the set/get and delete operations. I also spent some time creating a very simple set of performance tests to get an idea of how well this whole thing performs. To create the basic performance tests I first created a BaseTest test case that had the earlier-used send command, to easily send and receive data from the server, and then I built the following very simplistic performance test:
ITERATIONS = 20000

class PerformanceTest(BaseTest):

    def test_1st_set_small_key_performance(self):
        """ """
        start = time.time()
        for index in range(0, ITERATIONS):
            self.send('SET', 'key-%d' % index, 'tiny little value')
        elapsed = time.time() - start
        print('SET %f ops/sec' % (ITERATIONS / elapsed))

    def test_2nd_get_small_key_performance(self):
        """ """
        start = time.time()
        for index in range(0, ITERATIONS):
            self.send('GET', 'key-%d' % index)
        elapsed = time.time() - start
        print('GET %f ops/sec' % (ITERATIONS / elapsed))

    def test_3rd_del_small_key_performance(self):
        """ """
        start = time.time()
        for index in range(0, ITERATIONS):
            self.send('DEL', 'key-%d' % index)
        elapsed = time.time() - start
        print('DEL %f ops/sec' % (ITERATIONS / elapsed))
The performance numbers were above my expectations, as I was expecting a couple of thousand operations per second but got:
I was satisfied with the single-threaded performance of the kvs store at this point and went on making the code easier to read & write, so I spent a little time restructuring the CommandReader class to be a bit smarter in terms of how we parse each command, by switching the lineReceived method implementation at run time. I also fixed up the base test class to be more specific on the SET/GET & DEL methods being used to talk to the server. Here's what the new CommandReader looks like:
class CommandReader(LineReceiver):
    """ state machine command reader """

    def __init__(self, cmdhandler):
        self._cmdhandler = cmdhandler
        self._cmd_names = dir(cmdhandler)

        self.start_command = False
        self.reading_argument_size = False
        self.expected_arguments = -1
        self.args = []

        self.lineReceived = self._start_command

    def _start_command(self, line):
        assert line.startswith('*'), 'got %s' % line

        # new command starting
        if self.start_command:
            self.sendLine('-unexpected start of new command\r')

        self.expected_arguments = int(line[1:])
        self.args = []
        self.lineReceived = self._read_command
        self.start_command = True

    def _read_command(self, line):
        line = line.strip()
        self.command = line
        self.expected_arguments -= 1
        self.lineReceived = self._read_arguments

    def _read_arguments(self, line):
        # remove the command name from the arguments
        line = line.strip()
        self.args.append(line)
        self.expected_arguments -= 1

        if self.expected_arguments == 0:
            # lookup the method handler
            if self.command in self._cmd_names:
                func = getattr(self._cmdhandler, self.command)
                try:
                    result = func(*self.args)
                    self.sendLine('+%s' % result)
                except Exception as e:
                    self.sendLine('-%s' % e)
            else:
                self.sendLine('-unknown command %s' % self.command)

            self.start_command = False
            self.lineReceived = self._start_command
The nice thing at this point is that I’m constantly able to change code quite drastically without having to worry if I broke something because the unit tests are able to give me some confidence in the changes I’m making.
STOP 21:15 on October 21st, 2012
START 15:39 on October 21st, 2012
At this point I’d like to take sometime to analyze how much test code I’ve written vs how much real product code I’ve written. I’ll do this in the simplest way possible by just comparing line count:
So right now we actually have a few more lines of test code than we have of actual product code. The thing to realize though is that as we add more API calls to the service, the amount of test code won't grow by as much as it has till this point. Let's really show how this works by adding a few new APIs:
- SHUTDOWN
- RESET
- KEYS key_reg_ex
The SHUTDOWN command is used to basically shutdown the server and the RESET command is used to reset the store back to empty. The KEYS command is a little trickier since it involves returning all of the keys that match the regular expression specified. This will force us to introduce a new return type to the protocol. What we had in terms of protocol return specification till now was:
- + means the operation succeeded and is followed by OK or the value of what you wanted to return
- - is used before the error message of an operation that failed
- * is used before starting a multi-value response in which the integer after the * is the number of lines to read
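On the client side these three reply types can be told apart by their first byte. Here is a sketch of such a reader (hypothetical helper, not part of the kvs code base), taking an iterator of reply lines that already have their \r\n stripped:

```python
def parse_reply(lines):
    first = next(lines)
    if first.startswith('+'):
        return first[1:]                            # single-line success
    if first.startswith('-'):
        raise Exception(first[1:])                  # error reply
    if first.startswith('*'):
        count = int(first[1:])                      # multi-line reply
        return [next(lines) for _ in range(count)]
    raise Exception('malformed reply: %r' % first)

print(parse_reply(iter(['*2', 'key-1', 'key-2'])))  # ['key-1', 'key-2']
```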
With the multi-line response we can now implement the KEYS command correctly. Having implemented all of those features we now have a few more lines of product code than test code, and a pretty well-working key/value store that is being used by others while we make changes and easily re-verify our code as we make those changes. Here is the lines-of-test-code vs lines-of-product-code comparison:
Now I personally don’t care if I have more product code than test code because to me test code is valuable code that allows me as a developer to actually write code that can be used by others and guarantees my code at least does what I was originally intending.
STOP 16:46 on October 21st, 2012
The interesting thing that I’d like to analyze now is roughly how much time was spent on this little project till now and of that time how much time was spent writing tests vs writing product code.
So the start and stop times tally looks like so:
So after just 4h and 33m of working on this project we have a working service that can be used by others, and we're able to easily and quickly extend this service while making sure to test existing and new features before each and every checkin.
Now there are a few things I should have also worked on but just didn’t feel it would have fitted into the length of the post I was working on writing. The few things I would have focused on next would have included:
Making sure to document the protocol specification with the code in a format that would allow others to easily and quickly write their own clients. This would also make updating the API documentation easier since it resides with the code that implements the API.
Adding more tests that would verify the limits of the API usage such as the max key and value lengths. Not forgetting to test all of the negative scenarios of using the protocol such as invalid integer values, invalid operation names, etc.
I hope that after reading this post you'll see that you can easily apply the same ideas to any of your projects, allowing you to be a more efficient engineer and to produce better code.
The code written during this writing can be cloned from here | http://rlgomes.github.io/work/writings/software/engineering/tutorial/2012/10/21/18.44-From-Prototype-To-Production.html | CC-MAIN-2017-26 | refinedweb | 3,190 | 58.72 |
Now the Compromise:
1) Keep a *SMALL* default (probably switchable) VC/Console system. This IS
needed for booting & fail-safe login etc.
2) Add the rest, like a switchable version, or THE super duper, scrollable,
colourazible, blinking, warning, auto-switched (For the abort retry stuff ;^),
(user level??) VC
3) Create a VC API which will then be used by user-level programs that will add
all these wise and wonderful things to the console like VT220, Wyse 50, QVT 62,
etc. emulation when needed for that odd one out.....
>
> Furthermore, as other people have pointed out, VC switching is
> extraordinarily useful. I have my syslog scrolling up on one VC. I
> don't have to log out to let somebody else use the console.
AHHHHH!!!!!!
Well, then add a module which will use the special API to check the status on a
file, and display it when the specified VC is asked for..
>
> Screen doesn't cut it. Maybe if somebody patches it so it can use a
> sequence of more than one character as its escape it will have a
> chance of being useful. Maybe. It's still a highly inferior system.
Agreed, but not quite: screen is good, but not for a console thing....
------
Groetend / Sincerely Yours
Hendrik Visage
#include <Standard/Disclaimer>
Vector Customer Support
+27 11 315 4330
hendrik@vector.co.za | http://lkml.iu.edu/hypermail/linux/kernel/9510/0278.html | CC-MAIN-2019-35 | refinedweb | 225 | 71.65 |
I have a looping program below. Basically I loop a rolling regression and after that I want to save the results to Excel.

The code below works for the looping itself (without wfsave), but with wfsave the loop fails partway through. The error message is: File IO failed for file 'Could not create entry in zip file.' in "WFSAVE(TYPE=EXCELXML) "C:\USERS\DHITYA\DESKTOP\2\2001.T.CSV.XLSX" @KEEP DATE RETURN G1".

Is there something wrong with the code? It worked before with just 2 files, but when I run it on 50 files, it fails.

If I want to try running the program again with different data, I have to create a new destination folder to save my files, but it will fail again in the middle of the loop.

Thank you
Code:
cd "D:\Thesis_Loop\2002_2\dataset"
%filenames = @wdir("D:\Thesis_Loop\2002_2\dataset")
for !k=1 to @wcount(%filenames)
%file = @word(%filenames, !k)
wfopen %file
import D:\Thesis_Loop\2002_2\dataset\index\Indeks_2002.csv
import D:\Thesis_Loop\2002_2\dataset\index\FF3_2002.csv
genr return=(((close-close(-1))/close(-1))*100)-rf
!window = 60
!step = 1
!length = @obsrange
equation eq1
!nrolls = @round((!length-!window+1)/!step)
matrix(5,!nrolls) coefmat!k
!j=0
for !i = 1 to !length-!window+1-!step step !step
!j=!j+1
smpl @first+!i-1 @first+!i+!window-2
equation eq1.ls return c indeks mkt_rf smb hml
colplace(coefmat!k, eq1.@coefs,!j)
matrix(!nrolls,5) r1
matrix r1=@transpose (coefmat!k)
series rbc
series rbindeks
series rbmkt
series rbsmb
series rbhml
group g1 rbc rbindeks rbmkt rbsmb rbhml
sample s1 @first+59 @last
mtos(r1, g1, s1)
if @isobject("s1") then
delete s1
endif
%path = "C:\Users\Dhitya\Desktop\2\" + %file + ".xlsx"
wfsave(type=excelxml) %path @keep date return g1
'close @all | http://forums.eviews.com/viewtopic.php?f=5&t=19465&p=62467 | CC-MAIN-2019-09 | refinedweb | 304 | 69.38 |
Dear developers, programmers
I would like to write a program that solves this problem:

Create a program that opens a file and counts the whitespace-separated words in that file

Okay, this is my code:

Code:
#include <string>
#include <iostream>
#include <fstream>
#include <iomanip>
using namespace std;

int main()
{
    ifstream in("test.cpp");
    string s, line;
    while (getline(in, line))
    {
        // I don't know what I can put here
    }
    //cout << line << s;
    return(0);
}

Can someone help me with what to put in main so that it "counts the whitespace-separated words in that file"?
Thanks
S | http://cboard.cprogramming.com/cplusplus-programming/119644-i-don%27t-understand-what-main-whitespace-separated-words.html | CC-MAIN-2015-48 | refinedweb | 105 | 64.75 |
01-16-2013 10:38 AM
Is there any way to hide the action bar and show it again when needed, programmatically?
I have seen that the image gallery in the camera application hides the action bar when you tap the screen and show it again with a new tap.
Best regards!
01-16-2013 12:08 PM
Here is one suggestion:
Give IDs to your actions.
Write a slot that listens to the signals you're interested in.
In the slot, you can hide the actions; that will also hide the action bar.
You can access the properties of actions by using their IDs.
I hope it helps
01-16-2013 12:14 PM
Hi hervebags,
thanks for your suggestion. I think it is a good approach.
The problem is that I have a TabbedPane as the root of my QML. Although I have no actions, the action bar is shown...
Any suggestions?
01-16-2013 12:40 PM
You can do this:
import bb.cascades 1.0

TabbedPane {
    id: tabbedPane
    showTabsOnActionBar: true
    Tab {
        title: "Tab 1"
    } // end of first Tab
    Tab {
        title: "Tab 2"
        content: Page {
            Button {
                onClicked: {
                    tabbedPane.showTabsOnActionBar = false
                }
            }
        }
    } // end of second Tab
} // end of TabbedPane
But, you will still have a small icon on the left corner
Good luck finding the solution.
Please let me know when you solve this problem
Cheers,
Herve
01-16-2013 04:02 PM - edited 01-16-2013 04:04 PM
I believe you're referring to the Context Actions menu, which can be hidden by the Context Menu Handler
contextMenuHandler: ContextMenuHandler {
    id: myHandler
    onVisualStateChanged: {
        if (ContextMenuVisualState.VisibleCompact == myHandler.visualState) {
            console.log("ContextMenu Visible");
            console.log("Keyboard Hidden by Context Menu Compact");
        } else if (ContextMenuVisualState.Hidden == myHandler.visualState) {
            console.log("ContextMenu Hidden");
        }
    }
}
contextActions: [
    ActionSet {
        actions: [
            //some context actions here
        ]
    }
]
01-16-2013 04:28 PM
say your main nav id is "mainPage"
put this within:
Page {
    actionBarVisibility: ChromeVisibility.Hidden
}

Then say you want it to be present when page "page2" opens. In page2's code, do something like this:

Page {
    onCreationCompleted: {
    }
}
01-17-2013 02:53 AM
Great!! It was just what I was looking for!
Listing 1 contains a simple filter, called egotrip, to make my name appear in boldface whenever it appears in a Weblog entry. Notice how the plugin must define its own package; this ensures that each plugin's subroutines are kept in a separate namespace and makes it possible for Blosxom to determine whether a package contains a particular method name.
The actual work is done in the story subroutine, which is passed six arguments when invoked by Blosxom, corresponding to a number of items having to do with the entry. In our case, we care about changing only the body of the entry, which is in the final variable, known as $body_ref. As its name implies, this is a scalar reference, which means we can access or modify its contents by dereferencing it, using two $$ signs. With that in mind, it should not come as a surprise that we can boldface every instance of my name with:
$$body_ref =~ s|Reuven|<b>Reuven</b>|g;
Of course, we could make this step even more sophisticated and insert automatic hyperlinks to a number of different items:
$$body_ref =~ s|(Reuven Lerner)|<a href="">$1</a>|g;
$$body_ref =~ s|(Linux Journal)|<a href="">$1</a>|g;
Indeed, a plugin of this sort already exists; it automatically creates links to the community-driven Wikipedia. Any text placed within [[brackets]] automatically is turned into a link to that on-line reference book.
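For readers who don't speak Perl, the same bracket-expansion idea can be sketched in Python. The regex and the Wikipedia URL scheme here are my own illustration, not Blosxom's actual implementation:

```python
import re

def expand_wiki_links(body):
    # Turn [[Term]] into a link to the corresponding Wikipedia page.
    return re.sub(r"\[\[([^]]+)\]\]",
                  r'<a href="https://en.wikipedia.org/wiki/\1">\1</a>',
                  body)

entry = "I read about [[Linux]] on the train."
print(expand_wiki_links(entry))
```

In a real plugin this substitution would run against `$$body_ref` inside the `story` subroutine, exactly like the boldfacing example above.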
Notice how flavours are HTML templates into which we can instantiate Perl variable values, whereas plugins are Perl programs. This division between display and actions takes a little bit of time to grasp, but it shouldn't be too difficult.
As for our paragraph-separating problem from before, there's no need to reinvent the wheel. You simply can download a plugin, Blox, that allows you to separate paragraphs with blank lines when writing your blog entry. The plugin then separates paragraphs with the HTML of your choice. Blox is listed on Blosxom's plugin registry (see the on-line Resources section).
The fact that Blosxom keeps all entries and flavours in a single directory is a bit disturbing to me and makes me wonder about the program's scalability. Even if my filesystem and Perl can handle that many files without too much trouble, do I really want to wade through them all? If and when this becomes a problem, an entries plugin probably can provide the right solution, scooping up files from multiple directories and returning an appropriate hash to Blosxom.
Blosxom is a powerful tool for creating a Weblog; it's more than it might appear at first glance. Blosxom consists of an easy-to-install, easy-to-configure CGI program written in Perl, but its true power lies in the fact that it lets you change every part of the display through a combination of flavours (display templates) and plugin routines. By mixing and matching existing flavours and templates with something of your own, it can be easy to create your own Weblog.
Resources for this article: /article/7454.
One of the interesting data protocols developed in the last century is APT (Automatic Picture Transmission). It is used to transmit images of the Earth's surface from space, and, what is much more interesting for us, receiving APT signals is feasible for radio amateurs.
The NOAA meteorological satellites we'll try to decode belong to the TIROS (Television InfraRed Observation Satellite) series, the first of which was launched in 1960. There are currently 3 satellites in operation (NOAA-15, NOAA-18 and NOAA-19, the oldest of which, NOAA-15, has been in service since 1998). The satellites orbit the Earth at an altitude of about 850 km and make one revolution in about 100 minutes. There are various sensors onboard, but we will be interested in receiving meteorological images, and there are two options available. The simplest way of reception is to get an analogue signal in the APT format at a frequency of 137 MHz. The satellites also transmit images in the HRPT (High-resolution Picture Transmission) format at a frequency of 1.7 GHz. HRPT decoders are available, but a high-gain antenna mounted on a special tracker is required, which is more difficult and expensive.
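As a quick sanity check (my own back-of-the-envelope numbers, not from the article), the roughly 100-minute period follows directly from the 850 km altitude via Kepler's third law:

```python
import math

MU_EARTH = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0         # mean Earth radius, km

a = R_EARTH + 850.0                           # circular orbit radius, km
T = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)  # Kepler's third law, seconds
print(f"period ≈ {T/60:.0f} minutes")         # ≈ 102 minutes
```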
According to international conventions, all meteorological data is open, and anyone can receive NOAA signals. There have already been several articles on Medium about NOAA reception like this, but they all usually boil down to "plug in the receiver, run the program, get a picture", without explaining how it really works. Let's try to go one logical level down and see how the data processing works under the hood.
Let’s get started.
The very first and obvious. The satellite is not geostationary, it moves across the sky, so we need to wait for the reception time. The simplest way is to use the n2yo.com online service, which allows calculating the flyby time for any satellite. Here’s an example for NOAA 19:
Good reception, with the satellite high above the horizon, occurs about once a day and can sometimes be at 5 am; the actual flight across the sky takes about 10 minutes, so it may be useful to schedule an automatic recording at the proper time.
Of course, you need a receiver to get the radio signal. The cheapest RTL-SDR V3 at $ 35 will do the job, I used a better SDRPlay model with the following settings:
As you can see, I set the decimation value to the maximum, which allows getting the maximum dynamic range. The LNA and Gain levels should be selected depending on the antenna. Satellites NOAA-15, NOAA-18 and NOAA-19 transmit data on frequencies 137.620, 137.9125 and 137.100 MHz, respectively. The signal itself has a bandwidth of about 50 kHz, and if everything was done correctly, at the predicted time the signal should appear on the spectrum:
It is interesting to note the slope of the lines due to the Doppler effect — the satellite flies around the globe, and this is another good proof of the curvature of the Earth 😉
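The size of that slope can be sanity-checked with a back-of-the-envelope Doppler estimate (my own numbers, not from the article). The worst case is the full orbital speed along the line of sight, which caps the shift at a few kilohertz on 137 MHz:

```python
import math

MU_EARTH = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0         # mean Earth radius, km
C_LIGHT = 299792.458     # speed of light, km/s

a = R_EARTH + 850.0              # circular orbit radius, km
v = math.sqrt(MU_EARTH / a)      # orbital speed, km/s
f0 = 137.1e6                     # NOAA-19 downlink, Hz

max_shift = f0 * v / C_LIGHT     # upper bound on the Doppler shift, Hz
print(f"v ≈ {v:.2f} km/s, max Doppler ≈ {max_shift/1e3:.1f} kHz")
```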
Speaking of reception, the great thing about NOAA satellites is that reception is affordable for beginners. The signal can, in principle, be heard on any antenna, even on the simplest one from the TV, but to get a good picture the quality of the antenna is very critical. With a bad antenna (like mine :) the picture will have less contrast, but this is enough to demonstrate the decoder's operation. A nice 137 MHz antenna can be bought or made from plumbing and copper pipes; an example can be viewed here. However, the topic of the article is digital signal processing and not DIY; those who wish can find the proper antenna design on their own.
So, we should start the recording at the proper time, using the FM mode and a 50 kHz bandwidth. The result should be a WAV file about 10 minutes long; the periodic pulses should be clearly audible. Now we can start decoding.
Step 1. Let’s load the WAV file using scipy library. I only display a fragment from 20 to 21 seconds, otherwise rendering will be too long.
import scipy.io.wavfile as wav
import scipy.signal as signal
import numpy as np
import matplotlib.pyplot as plt
fs, data = wav.read('HDSDR_20201227_070306Z_137100kHz_AF.wav')
data_crop = data[20*fs:21*fs]

plt.figure(figsize=(12,4))
plt.plot(data_crop)
plt.xlabel("Samples")
plt.ylabel("Amplitude")
plt.title("Signal")
plt.show()
A periodic signal should be visible:
Step 2. To speed up decoding, let’s reduce the sampling rate by 4 times, discarding unnecessary values:
resample = 4
data = data[::resample]
fs = fs//resample
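Note that simply dropping every fourth sample folds any energy above the new Nyquist frequency back into the band. scipy's `decimate` applies an anti-aliasing filter before downsampling; here is a sketch on a synthetic tone (a 2.4 kHz sine, the APT subcarrier rate), not the article's code:

```python
import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 2400 * t)   # 1 s of a 2.4 kHz test tone

q = 4
filtered = signal.decimate(tone, q, zero_phase=True)  # filter, then downsample
fs_dec = fs // q

# The tone survives decimation: the FFT peak is still at 2400 Hz.
# (With N = fs_dec samples over 1 s, bin index equals frequency in Hz.)
peak_bin = np.argmax(np.abs(np.fft.rfft(filtered)))
print(peak_bin * fs_dec / filtered.shape[0])
```

For the APT signal the naive slicing in the article works well enough because the FM-demodulated audio is already band-limited, but `decimate` is the safer default.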
Step 3. The image is transmitted in amplitude modulation; to recover the envelope, let's apply the Hilbert transform:
def hilbert(data):
    analytical_signal = signal.hilbert(data)
    amplitude_envelope = np.abs(analytical_signal)
    return amplitude_envelope

data_am = hilbert(data)
We can display the picture and make sure that we got the signal envelope:
Step 4. The final step. Actually, the decoding is already finished. The data itself is transmitted in analogue format, so the color of each pixel depends on the signal level. We can "reshape" the data into a 2D image; from the format description it is known that one line is transmitted in 0.5 s:
from PIL import Image

frame_width = int(0.5*fs)
w, h = frame_width, data_am.shape[0]//frame_width
image = Image.new('RGB', (w, h))

px, py = 0, 0
for p in range(data_am.shape[0]):
    lum = int(data_am[p]//32 - 32)
    if lum < 0: lum = 0
    if lum > 255: lum = 255
    image.putpixel((px, py), (0, lum, 0))
    px += 1
    if px >= w:
        if (py % 50) == 0:
            print(f"Line saved {py} of {h}")
        px = 0
        py += 1
        if py >= h:
            break
The putpixel function is not the fastest way to work with images, and the code can be sped up about 10 times using numpy.reshape and Image.fromarray, but this way is clearer. To convert the amplitude of the signal to the brightness range 0..255, the values are divided by 32; for another antenna the value may have to be changed.
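A sketch of that vectorized version (with a synthetic stand-in for data_am so the snippet is self-contained; the clip levels mirror the loop version):

```python
import numpy as np
from PIL import Image

fs = 11025
frame_width = int(0.5 * fs)                 # samples per scanline
rng = np.random.default_rng(0)
data_am = rng.uniform(1024, 9000, size=600 * frame_width)  # fake envelope

h = data_am.shape[0] // frame_width
lum = np.clip(data_am[:h * frame_width] // 32 - 32, 0, 255).astype(np.uint8)
frame = lum.reshape(h, frame_width)         # one row per 0.5 s line

rgb = np.zeros((h, frame_width, 3), dtype=np.uint8)
rgb[..., 1] = frame                         # luminance into the green channel
image = Image.fromarray(rgb, mode='RGB')
```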
For ease of viewing, let’s resize the image and display it:
image = image.resize((w, 4*h))
plt.imshow(image)
plt.show()
If everything was done correctly, we should get something like this:
The “old-school” green color was chosen just for fun, those who wish can change the color gamut by changing the parameters of the putpixel method. What do we see on the screen? In APT format, two channels are transmitted. Long-wave IR (10.8 micrometers) is transmitted on one half of the frame, near/mid-wave IR (0.86 or 3.75 micrometers) is transmitted to the other, the mode is selected depending on whether the satellite transmits a night or day image. The data also has timing markers and telemetry, those who wish can see the description of the APT format in more detail. Software decoders can use these markers to adjust the picture, but this may not work on weak signals. The code above never gets out of sync since there is no sync at all, even the weakest signal will be visible, albeit with less contrast..
Of course, it is not necessary to decode NOAA signals with hand-written Python; there are enough software decoders that can do the job, and they even have decent features, like placing country borders and city labels, rendering pseudo colors, etc. But it can also be interesting to do the decoding from scratch.
Dear All,

I'd appreciate it if folks would run the program below on various
machines, especially those whose caches aren't automatically coherent
at the hardware level.

It searches for that address multiple which an application can use to
get coherent multiple mappings of shared memory, with good performance.

I want this information for three reasons:

  1. To check it correctly detects archs which page fault for
     coherency or aren't coherent.
  2. To check the timing test is robust, both for 1. and for
     detecting archs where the hardware is coherent but slows down
     (see Athlon below).
  3. To check this is reliable enough to use at run time in an app.

I already got a surprise (to me): my Athlon MP is much slower
accessing multiple mappings which are within 32k of each other, than
mappings which are further apart, although it is coherent.  The L1
data cache is 64k.  (The explanation is easy: a virtually indexed,
physically tagged cache moves data among cache lines, possibly via L2).

This suggests scope for improving x86 kernel performance in the areas
of kmap() and shared library / executable mappings, by good choice of
_virtual_ addresses.  This doesn't require a cache colouring page
allocator, so maybe it's a new avenue?

Anyway, please, lots of people, run the program and post the output +
/proc/cpuinfo.  Compile with optimisation, -O or -O2 is fine.  (You
can add -DHAVE_SYSV_SHM too if you like):

    gcc -o test test.c -O2
    time ./test
    cat /proc/cpuinfo

Thanks a lot :)

-- Jamie

==============================================================================
/* This code maps shared memory to multiple addresses and tests it
   for cache coherency and performance.
   Copyright (C) 1999, 2001, 2002, 2003 */

#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/signal.h>
#include <sys/mman.h>
#include <sys/time.h>

#if HAVE_SYSV_SHM
#include <sys/ipc.h>
#include <sys/shm.h>
#endif

//#include "pagealias.h"

/* Helpers to temporarily block all signals.  These are used for when
   a race condition might leave a temporary file that should have been
   deleted -- we do our best to prevent this possibility. */

static void
block_signals (sigset_t * save_state)
{
  sigset_t all_signals;
  sigfillset (&all_signals);
  sigprocmask (SIG_BLOCK, &all_signals, save_state);
}

static void
unblock_signals (sigset_t * restore_state)
{
  sigprocmask (SIG_SETMASK, restore_state, (sigset_t *) 0);
}

/* Open a new shared memory file, either using the POSIX.4 `shm_open'
   function, or using a regular temporary file in /tmp.  Immediately
   after opening the file, it is unlinked from the global namespace
   using `shm_unlink' or `unlink'.

   On success, the value returned is a file descriptor.  Otherwise, -1
   is returned and `errno' is set.  The descriptor can be closed using
   simply `close'. */

/* Note: `shm_open' requires link argument `-lposix4' on Suns.  On
   GNU/Linux with Glibc, it requires `-lrt'.  Unfortunately, Glibc's
   -lrt insists on linking to pthreads, which we may not want to use
   because that enables thread locking overhead in other functions.
   So we implement a direct method of opening shm on Linux. */

/* If this is changed, change the size of `buffer' below too. */
#if HAVE_SHM_OPEN
#define SHM_DIR_PREFIX "/"  /* `shm_open' arg needs "/" for portability. */
#elif defined (__linux__)
#include <sys/statfs.h>
#define SHM_DIR_PREFIX "/dev/shm/"
#else
#undef SHM_DIR_PREFIX
#endif

static int
open_shared_memory_file (int use_tmp_file)
{
  char * ptr, buffer [19];
  int fd, i;
  unsigned long number;
  sigset_t save_signals;
  struct timeval tv;

#if !HAVE_SHM_OPEN && defined (__linux__)
  struct statfs sfs;
  if (!use_tmp_file
      && (statfs (SHM_DIR_PREFIX, &sfs) != 0
          || sfs.f_type != 0x01021994 /* SHMFS_SUPER_MAGIC */))
    {
      errno = ENOSYS;
      return -1;
    }
#endif

 loop:
  /* Print a randomised path name into `buffer'.  The string depends
     on the directory and whether we are using POSIX.4 shared memory
     or a regular temporary file.  RANDOM is a 5-digit, base-62
     representation of a pseudo-random number.  The string is used as
     a candidate in the search for an unused shared segment or file
     name. */
#ifdef SHM_DIR_PREFIX
  strcpy (buffer, use_tmp_file ? "/tmp/shm-" : SHM_DIR_PREFIX "shm-");
#else
  strcpy (buffer, "/tmp/shm-");
#endif
  ptr = buffer + strlen (buffer);

  gettimeofday (&tv, (struct timezone *) 0);
  number = (unsigned long) random ();
  number += (unsigned long) getpid ();
  number += (unsigned long) tv.tv_sec + (unsigned long) tv.tv_usec;

  for (i = 0; i < 5; i++)
    {
      /* Don't use character arithmetic, as not all systems are ASCII. */
      *ptr++ = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
        [number % 62];
      number /= 62;
    }
  *ptr = '\0';

  /* Block signals between the open and unlink, to really minimise
     the chance of accidentally leaving an unwanted file around. */
  block_signals (&save_signals);

#if HAVE_SHM_OPEN
  if (!use_tmp_file)
    {
      fd = shm_open (buffer, O_RDWR | O_CREAT | O_EXCL, 0600);
      if (fd != -1)
        shm_unlink (buffer);
    }
  else
#endif /* HAVE_SHM_OPEN */
    {
      fd = open (buffer, O_RDWR | O_CREAT | O_EXCL, 0600);
      if (fd != -1)
        unlink (buffer);
    }

  unblock_signals (&save_signals);

  /* If we failed due to a name collision or a signal, try again. */
  if (fd == -1 && (errno == EEXIST || errno == EINTR || errno == EISDIR))
    goto loop;

  return fd;
}

/* Allocate a region of address space `size' bytes long, so that the
   region will not be allocated for any other purpose.  It is freed
   with `munmap'.

   Returns the mapped base address on success.  Otherwise, MAP_FAILED
   is returned and `errno' is set. */

static size_t system_page_size;

#if !defined (MAP_ANONYMOUS) && defined (MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif
#ifndef MAP_NORESERVE
#define MAP_NORESERVE 0
#endif
#ifndef MAP_FILE
#define MAP_FILE 0
#endif
#ifndef MAP_VARIABLE
#define MAP_VARIABLE 0
#endif
#ifndef MAP_FAILED
#define MAP_FAILED ((void *) -1)
#endif
#ifndef PROT_NONE
#define PROT_NONE PROT_READ
#endif

static void *
map_address_space (void * optional_address, size_t size, int access)
{
  void * addr;
#ifdef MAP_ANONYMOUS
  addr = mmap (optional_address, size,
               access ? (PROT_READ | PROT_WRITE) : PROT_NONE,
               (MAP_PRIVATE | MAP_ANONYMOUS
                | (optional_address ? MAP_FIXED : MAP_VARIABLE)
                | (access ? 0 : MAP_NORESERVE)),
               -1, (off_t) 0);
#else /* not defined MAP_ANONYMOUS */
  int save_errno, zero_fd = open ("/dev/zero", O_RDONLY);
  if (zero_fd == -1)
    return MAP_FAILED;
  addr = mmap (optional_address, size,
               access ? (PROT_READ | PROT_WRITE) : PROT_NONE,
               (MAP_PRIVATE | MAP_FILE
                | (optional_address ? MAP_FIXED : MAP_VARIABLE)
                | (access ? 0 : MAP_NORESERVE)),
               zero_fd, (off_t) 0);
  save_errno = errno;
  close (zero_fd);
  errno = save_errno;
#endif /* not defined MAP_ANONMOUS */
  return addr;
}

/* Set up a page alias mapping using mmap() on POSIX shared memory or
   on a temporary regular file.

   Returns the mapped base address on success.  Otherwise, 0 is
   returned and `errno' is set. */

static void *
page_alias_using_mmap (size_t size, size_t separation, int use_tmp_file)
{
  void * base_addr, * addr;
  int fd, i, save_errno;
  struct stat st;

  fd = open_shared_memory_file (use_tmp_file);
  if (fd == -1)
    goto fail;

  /* First, resize the shared memory file to the desired size. */
  if (ftruncate (fd, size) != 0
      || fstat (fd, &st) != 0
      || st.st_size != size)
    goto close_fail;

  /* Map an anonymous region `separation + size' bytes long.  This is
     how we allocate sufficient contiguous address space.  We over-map
     this with the aliased buffer. */
  if ((base_addr = map_address_space (0, separation + size, 0)) == MAP_FAILED)
    goto close_fail;

  /* Map the same shared memory repeatedly, at different addresses. */
  for (i = 0; i < 2; i++)
    {
      addr = mmap ((char *) base_addr + (i ? separation : 0), size,
                   PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FILE | MAP_FIXED, fd, (off_t) 0);
      if (addr == MAP_FAILED)
        goto unmap_fail;
      if (addr != (char *) base_addr + (i ? separation : 0))
        {
          /* `mmap' ignored MAP_FIXED!  Should never happen. */
          munmap (addr, size);
          save_errno = EINVAL;
          goto unmap_fail_se;
        }
    }

  if (close (fd) != 0)
    goto unmap_fail;

  /* Success! */
  return base_addr;

  /* Failure. */
 unmap_fail:
  save_errno = errno;
 unmap_fail_se:
  munmap (base_addr, separation + size);
  errno = save_errno;
 close_fail:
  save_errno = errno;
  close (fd);
  errno = save_errno;
 fail:
  return 0;
}

/* Set up a page alias mapping using SYSV IPC shared memory.

   Returns the mapped base address on success.  Otherwise, 0 is
   returned and `errno' is set. */

#if HAVE_SYSV_SHM
static void *
page_alias_using_sysv_shm (size_t size, size_t separation)
{
  void * base_addr, * addr;
  sigset_t save_signals;
  int shmid, i, save_errno;

  /* Map an anonymous region `separation + size' bytes long.  This is
     how we allocate sufficient contiguous address space.  We over-map
     this with the aliased buffer. */
  if ((base_addr = map_address_space (0, separation + size, 0)) == MAP_FAILED)
    goto fail;

  /* Block signals between the shmget() and IPC_RMID, to minimise the
     chance of accidentally leaving an unwanted shared segment around. */
  block_signals (&save_signals);

  shmid = shmget (IPC_PRIVATE, size, IPC_CREAT | IPC_EXCL | 0600);
  if (shmid == -1)
    goto unmap_fail;

  /* Map the same shared memory repeatedly, at different addresses. */
  for (i = 0; i < 2; i++)
    {
      /* `shmat' is tried twice.  The fist time it can fail if the
         local implementation of `shmat' refuses to map over a region
         mapped with `mmap'.  In that case, we punch a hole using
         `munmap' and do it again.

         If the local `shmat' has this property, the `shmat' calls to
         fixed addresses might collide with a concurrent thread which
         is also doing mappings, and will fail.  At least it is a safe
         failure.

         On the other hand, if the local `shmat' can map over
         already-mapped regions (in the same way that `mmap' does), it
         is essential that we do actually use an already-mapped
         region, so that collisions with a concurrent thread can't
         possibly result in both of us grabbing the same address range
         with no indication of error. */
      addr = shmat (shmid, (char *) base_addr + (i ? separation : 0), 0);
      if (addr == (void *) -1 && errno == EINVAL)
        {
          munmap ((char *) base_addr + (i ? separation : 0), size);
          addr = shmat (shmid, (char *) base_addr + (i ? separation : 0), 0);
        }

      /* Check for errors. */
      if (addr == (void *) -1)
        {
          save_errno = errno;
          if (i == 1)
            shmdt (base_addr);
          goto remove_shm_fail_se;
        }
      else if (addr != (char *) base_addr + (i ? separation : 0))
        {
          /* `shmat' ignored the requested address! */
          if (i == 1)
            shmdt (base_addr);
          save_errno = EINVAL;
          goto remove_shm_fail_se;
        }
    }

  if (shmctl (shmid, IPC_RMID, (struct shmid_ds *) 0) != 0)
    goto remove_shm_fail;

  unblock_signals (&save_signals);

  /* Success! */
  return base_addr;

  /* Failure. */
 remove_shm_fail:
  save_errno = errno;
 remove_shm_fail_se:
  while (--i >= 0)
    shmdt ((char *) base_addr + (i ? separation : 0));
  shmctl (shmid, IPC_RMID, (struct shmid_ds *) 0);
  errno = save_errno;
 unmap_fail:
  save_errno = errno;
  unblock_signals (&save_signals);
  munmap (base_addr, separation + size);
  errno = save_errno;
 fail:
  return 0;
}
#endif /* HAVE_SYSV_SHM */

/* Map a page-aliased ring buffer.

   Shared memory of size `size' is mapped twice, with the difference
   between the two addresses being `separation', which must be at
   least `size'.  The total address range used is `separation + size'
   bytes long.

   On success, *METHOD is filled with a number which must be passed to
   `page_alias_unmap', and the mapped base address is returned.
   Otherwise, 0 is returned and `errno' is set. */

static void *
__page_alias_map (size_t size, size_t separation, int * method)
{
  void * addr;

  if (((size | separation) & (system_page_size - 1)) != 0
      || size > separation)
    {
      errno = -EINVAL;
      return 0;
    }

  /* Try these strategies in turn: POSIX shm_open(), SYSV IPC,
     regular file. */
#ifdef SHM_DIR_PREFIX
  *method = 0;
  if ((addr = page_alias_using_mmap (size, separation, 0)) != 0)
    return addr;
#endif
#if HAVE_SYSV_SHM
  *method = 1;
  if ((addr = page_alias_using_sysv_shm (size, separation)) != 0)
    return addr;
#endif
  *method = 2;
  return page_alias_using_mmap (size, separation, 1);
}

/* Unmap a page-aliased ring buffer previously allocated by
   `page_alias_map'.  `address' is the base address, and `size' and
   `separation' are the arguments previously passed to
   `__page_alias_map'.  `method' is the value previously stored in
   *METHOD.

   Returns 0 on success.  Otherwise, -1 is returned and `errno' is
   set. */

static int
__page_alias_unmap (void * address, size_t size, size_t separation, int method)
{
#if HAVE_SYSV_SHM
  if (method == 1)
    {
      shmdt (address);
      shmdt (address + separation);
      if (separation > size)
        munmap (address + size, separation - size);
      return 0;
    }
#endif
  return munmap (address, separation + size);
}

/* Map a page-aliased ring buffer.  `size' is the size of the buffer
   to create; it will be mapped twice to cover a total address range
   `size * 2' bytes long.

   On success, *METHOD is filled with a number which must be passed to
   `page_alias_unmap', and the mapped base address is returned.
   Otherwise, 0 is returned and `errno' is set. */

void *
page_alias_map (size_t size, int * method)
{
  return __page_alias_map (size, size, method);
}

/* Unmap a page-aliased ring buffer previously allocated by
   `page_alias_map'.  `address' is the base address, and `size' is the
   size of the buffer (which is half of the total mapped address
   range).  `method' is a value previously stored in *METHOD by
   `page_alias_map'.

   Returns 0 on success.  Otherwise, -1 is returned and `errno' is
   set. */

int
page_alias_unmap (void * address, size_t size, int method)
{
  return __page_alias_unmap (address, size, size, method);
}

/* Map some memory which is not aliased, for timing comparisons
   against aliased pages.  We use a combination of mappings similar to
   page_alias_*(), in case there are resource limitations which would
   prevent malloc() or a single mmap() working for the larger address
   range tests. */

static void *
page_no_alias (size_t size, size_t separation)
{
  void * base_addr, * addr;
  int i, save_errno;

  if ((base_addr = map_address_space (0, separation + size, 0)) == MAP_FAILED)
    goto fail;

  /* Map anonymous memory at the different addresses. */
  for (i = 0; i < 2; i++)
    {
      addr = map_address_space ((char *) base_addr + (i ? separation : 0),
                                size, 1);
      if (addr == MAP_FAILED)
        goto unmap_fail;
      if (addr != (char *) base_addr + (i ? separation : 0))
        {
          /* `mmap' ignored MAP_FIXED!  Should never happen. */
          munmap (addr, size);
          save_errno = EINVAL;
          goto unmap_fail_se;
        }
    }

  /* Success! */
  return base_addr;

  /* Failure. */
 unmap_fail:
  save_errno = errno;
 unmap_fail_se:
  munmap (base_addr, separation + size);
  errno = save_errno;
 fail:
  return 0;
}

/* This should be a word size that the architecture can read and write
   fast in a single instruction.  In principle, C's `int' is the
   natural word size, but in practice it isn't on 64-bit machines. */
#define WORD long

/* These GCC-specific asm statements force values into registers, and
   also act as compiler memory barriers.

   These are used to force a group of write/write/read instructions as
   close together as possible, to maximise the detection of store
   buffer conditions.

   Despite being asm statements, these will work with any of GCC's
   target architectures, provided they have >= 4 registers. */

#if __GNUC__ >= 3
#define __noinline __attribute__ ((__noinline__))
#else
#define __noinline
#endif

#ifdef __GNUC__
#define force_into_register(var) \
  __asm__ ("" : "=r" (var) : "0" (var) : "memory")
#define force_into_registers(var1, var2, var3, var4) \
  __asm__ ("" : "=r" (var1), "=r" (var2), "=r" (var3), "=r" (var4) \
           : "0" (var1), "1" (var2), "2" (var3), "3" (var4) : "memory")
#else
#define force_into_register(var) do {} while (0)
#define force_into_registers(var1, var2, var3, var4) do {} while (0)
#endif

/* This function tries to test whether a CPU snoops its store buffer
   for reads within a few instructions, and ignores virtual to
   physical address translations when doing that.  In principle a CPU
   might do this even if it's L1 cache is physically tagged or
   indexed, although I have not seen such a system.  (A CPU which uses
   store buffer snooping and with an off-board MMU, which the CPU is
   unaware of, could have this property).

   It isn't possible to do this test perfectly; we do our best.  The
   `force_into_register' macros ensure that the write/write/read
   sequence is as compact as the compiler can make it. */

static WORD __noinline
test_store_buffer_snoop (volatile WORD * ptr1, volatile WORD * ptr2)
{
  register volatile WORD * __regptr1 = ptr1, * __regptr2 = ptr2;
  register WORD __reg1 = 1, __reg2 = 0;

  force_into_registers (__reg1, __reg2, __regptr1, __regptr2);
  *__regptr1 = __reg1;
  *__regptr2 = __reg2;
  __reg1 = *__regptr1;
  force_into_register (__reg1);

  return __reg1;
}

/* This function tests whether writes to one page are seen in another
   page at a different virtual address, and whether they are nearly as
   fast as normal writes.  The accesses are timed by the caller of
   this function.

   Alternate writes go to alternate pages, so that if aliasing is
   implemented using page faults, it will clearly show up in the
   timings. */

static int __noinline
test_page_alias (volatile WORD * ptr1, volatile WORD * ptr2,
                 int timing_loops)
{
  WORD fail = 0;

  while (--timing_loops >= 0)
    fail |= test_store_buffer_snoop (ptr1, ptr2);

  return fail != 0;
}

/* This function tests L1 cache coherency without checking for store
   buffer snoop coherency.  To do this, we add delays after each store
   to allow the store buffer to drain.  The result of this function is
   not important: it is only used in a diagnostic message. */

static int __noinline
test_l1_only (volatile WORD * ptr1, volatile WORD * ptr2)
{
  static volatile WORD dummy;
  int i, j;
  WORD fail = 0;

  for (i = 0; i < 10; i++)
    {
      *ptr1 = 1;
      for (j = 0; j < 1000; j++)
        /* Dummy volatile writes for delay. */
        dummy = 0;
      *ptr2 = 0;
      for (j = 0; j < 1000; j++)
        /* Dummy volatile writes for delay. */
        dummy = 0;
      fail |= *ptr1;
    }

  return fail != 0;
}

/* Thoroughly test a pair of aliased pages with a fixed address
   separation, to see if they really behave like memory appearing at
   two locations, and efficiently.  We search through different values
   of `separation' searching for a suitable "cache colour" on this
   machine. */

static inline const char *
test_one_separation (size_t separation)
{
  void * buffers [2];
  long timings [3];
  int i, method, timing_loops = 64;

  /* We measure timings of 3 different tests, each 128 times to find
     the minimum.

       0: Writes and reads to aliased pages.
       1: Writes and reads to non-aliased pages, to compare with 1.
       2: Doing nothing, to measure the time for `gettimeofday' itself.

     The measurements are done in a mixed up order.  If we did 64
     measurements of type 0, then 64 of type 1, then 64 of type 2, the
     results could be mislead due to synchronisation with other
     processes occuring on the machine. */

  /* A previously generated random shuffle of bit-pairs.  Each pair is
     a number from the set {0,1,2}.  Each number occurs exactly 128
     times. */
  static const unsigned char pattern [96] = {
    0x64, 0x68, 0x9a, 0x86, 0x42, 0x10, 0x90, 0x81, 0x58, 0x91, 0x18, 0x56,
    0x12, 0x44, 0x64, 0x89, 0x29, 0xa9, 0x96, 0x05, 0x61, 0x80, 0x82, 0x49,
    0x02, 0x16, 0x89, 0x12, 0x9a, 0x45, 0x41, 0x12, 0xa9, 0xa6, 0x01, 0x99,
    0x88, 0x80, 0x94, 0x20, 0x86, 0x29, 0x29, 0x1a, 0xa5, 0x46, 0x66, 0x25,
    0x42, 0x20, 0xa4, 0x81, 0x20, 0x81, 0x50, 0x44, 0x01, 0x06, 0xa5, 0x19,
    0x4a, 0x56, 0x28, 0x89, 0x88, 0x14, 0x94, 0x88, 0x1a, 0xa4, 0x95, 0x15,
    0x82, 0x99, 0x84, 0x64, 0x52, 0x56, 0x69, 0x64, 0x00, 0x95, 0x9a, 0x89,
    0x48, 0x01, 0x58, 0x88, 0x60, 0xa6, 0x29, 0x06, 0x64, 0xa0, 0x56, 0x85,
  };

  buffers [0] = __page_alias_map (system_page_size, separation, &method);
  if (buffers [0] == 0)
    return "alias map failed";

  buffers [1] = page_no_alias (system_page_size, separation);
  if (buffers [1] == 0)
    {
      __page_alias_unmap (buffers [0], system_page_size, separation, method);
      return "non-alias map failed";
    }

 retry:
  timings [2] = timings [1] = timings [0] = LONG_MAX;

  for (i = 0; i < 384; i++)
    {
      struct timeval time_before, time_after;
      long time_delta;
      int fail = 0, which_test = (pattern [i >> 2] >> ((i & 3) << 1)) & 3;
      volatile WORD * ptr1 = (volatile WORD *) buffers [which_test];
      volatile WORD * ptr2 = (volatile WORD *) ((char *) ptr1 + separation);

      /* Test whether writes to one page appear immediately in the
         other, and time how long the memory accesses take. */
      gettimeofday (&time_before, (struct timezone *) 0);
      if (which_test < 2)
        fail = test_page_alias (ptr1, ptr2, timing_loops);
      gettimeofday (&time_after, (struct timezone *) 0);

      if (fail && which_test == 0)
        {
          /* Test whether the failure is due to a store buffer bypass
             which ignores virtual address translation. */
          int l1_fail = test_l1_only (ptr1, ptr2);
          __page_alias_unmap (buffers [0], system_page_size,
                              separation, method);
          munmap (buffers [1], separation + system_page_size);
          return l1_fail ? "cache not coherent" : "store buffer not coherent";
        }

      time_delta = ((time_after.tv_usec - time_before.tv_usec)
                    + 1000000 * (time_after.tv_sec - time_before.tv_sec));

      /* Find the smallest time taken for each test.  Ignore negative
         glitches due to Linux' tendancy to jump the clock backwards. */
      if (time_delta >= 0 && time_delta < timings [which_test])
        timings [which_test] = time_delta;
    }

  /* Remove the cost of `gettimeofday()' itself from measurements. */
  timings [0] -= timings [2];
  timings [1] -= timings [2];

  /* Keep looping until at least one measurement becomes significant.
     A very fast CPU will show measurements of zero microseconds for
     smaller values of `timing_loops'.

     Also loop until the cost of `gettimeofday()' becomes
     insignificant.  When the program is run under `strace', the
     latter is a big and this is needed to stabilise the results. */
  if (timings [0] <= 10 * (1 + timings [2])
      && timings [1] <= 10 * (1 + timings [2]))
    {
      timing_loops <<= 1;
      goto retry;
    }

  __page_alias_unmap (buffers [0], system_page_size, separation, method);
  munmap (buffers [1], separation + system_page_size);

  /* Reject page aliasing if it is much slower than accessing a
     single, definitely cached page directly. */
  if (timings [0] > 2 * timings [1])
    return "too slow";

  /* Success!  Passed all tests for these parameters. */
  return 0;
}

size_t page_alias_smallest_size;

void
page_alias_init (void)
{
  size_t size;

#ifdef _SC_PAGESIZE
  system_page_size = sysconf (_SC_PAGESIZE);
#elif defined (_SC_PAGE_SIZE)
  system_page_size = sysconf (_SC_PAGE_SIZE);
#else
  system_page_size = getpagesize ();
#endif

  for (size = system_page_size; size <= 16 * 1024 * 1024; size *= 2)
    {
      const char * reason = test_one_separation (size);

      printf ("Test separation: %lu bytes: %s%s\n",
              (unsigned long) size,
              reason ? "FAIL - " : "pass",
              reason ? reason : "");

      /* This logic searches for the smallest _contiguous_ range of
         page sizes for which `page_alias_test' passes. */
      if (reason == 0 && page_alias_smallest_size == 0)
        page_alias_smallest_size = size;
      else if (reason != 0 && page_alias_smallest_size != 0)
        {
          /* Fail, indicating that page-aliasing is not reliable,
             because there's a maximum size.  We don't support that as
             it seems quite unlikely given our model of cache
             colouring. */
          page_alias_smallest_size = 0;
          break;
        }
    }

  printf ("VM page alias coherency test: ");
  if (page_alias_smallest_size == 0)
    printf ("failed; will use copy buffers instead\n");
  else if (page_alias_smallest_size == system_page_size)
    printf ("all sizes passed\n");
  else
    printf ("minimum fast spacing: %lu (%lu page%s)\n",
            (unsigned long) page_alias_smallest_size,
            (unsigned long) (page_alias_smallest_size / system_page_size),
            (page_alias_smallest_size == system_page_size) ? "" : "s");
}

//#ifdef TEST_PAGEALIAS
int
main ()
{
  page_alias_init ();
  return 0;
}
//#endif
Question:
In my Swing application, I want the ability to switch between decorated and undecorated without recreating the entire frame. However, the API doesn't let me call
setUndecorated() after the frame is made visible.
Even if i call
setVisible(false),
isDisplayable() still returns true. The API says the only way to make a frame not-displayable is to re-create it. However, I don't want to recreate the frame just to switch off some title bars.
I am making a full-screenable application that can be switched between fullscreen and windowed modes; It should be able to switch while maintaining the state, etc.
How do I do this after a frame is visible?.
Solution:1
You can't. That's been my experience when I tried to achieve the same.
However if you have your entire UI in one panel which is in your frame, you can create a new frame and add that panel to the frame. Not so much work.
Something like this:
// to start with JPanel myUI = createUIPanel(); JFrame frame = new JFrame(); frame.add(myUI); // .. and later ... JFrame newFrame = new JFrame(); newFrame.setUndecorated(); newFrame.add(myUI);
In Swing a panel (and indeed any instance of a component) can only be in one frame at a time, so when you add it to a new frame, it immediately ceases to be in the old frame.
Solution:2
Have you tried calling
Frame.dispose() and then changing it? Haven't tried it myself, but it might work.
If not, then what you can do is have the frame an inconsequential part of the class, with only the most minimal hooks to the highest level panel or panels necessarily, and just move those to the new frame. All the children will follow.
Solution:3
Have a look at
In Method
switchFullscreenMode():
dispose(); ... setFullScreenWindow(...); setUndecorated(true/false); setBounds(mXPos, mYPos, mWidth, mHeight); ... setVisible(true);
Actually there's a lot more stuff going on to hide various sidepanels that reappear if the mouse touches the sides.
Also note that you must explicitly set the bounds.
Window.setExtendedState(MAXIMIZED_BOTH) interferes badly in timely vicinity of dispose(), because they both rely on multiple native events of the operating system, that are lost, should the window no be displayable at that split second.
I don't recommend taking the default screen directly:
GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
and instead use the Screen, your JFrame is currently on:
setBounds(getGraphicsConfiguration().getBounds()); getGraphicsConfiguration().getDevice().setFullScreenWindow(this);
Though it's currently the same, it might change in the future.
Solution:4
calling
dispose() releases the native window resources. then you can edit properties like undecorated and so on. then just call
setVisible(true) to recreate the window resources and everything works fine (the position and all compoenents won`t be changed)
dispose(); setUndecorated(true/false); setVisible(true);
Solution:5
Well, you are going to need different frame instance. You may be able to move the contents of your frame over without recreating that. The key here is to make your code not be reliant on a specific frame. This is a basic good practice in any case.
Solution:6
Try:
dispose(); setUndecorated(true); setVisible(true);
Check it Out. Hope it will help !!
Solution:7
Here is a code in how to make ALT + Enter enters Full Screen without the bar mode and Minimize with showing the Title bar and the Start bar:
public class myTest extends JFrame{ //Your codes... //if "ALT" key on hold and "Enter" key pressed with it if (evt.isAltDown() && evt.getKeyCode() == evt.VK_ENTER) { //if the JFrame has Title bar if (isUndecorated()) { //this will dispose your JFrame dispose(); //here to set it with no Title bar setUndecorated(false); pack(); setLocationRelativeTo(null); //as you dispose your JFrame, you have to remake it Visible.. setVisible(true); } else { dispose(); setUndecorated(true); setExtendedState(MAXIMIZED_BOTH); setVisible(true); } } //your codes }
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2018/10/tutorial-how-to-call-setundecorated.html | CC-MAIN-2019-04 | refinedweb | 659 | 55.84 |
Super simple framework for Kakaotalk auto-reply bot based on aiohttp
Project Description
Super simple framework for Kakaotalk auto-reply bot based on aiohttp
Example
From examples/simple_echo.py:
from kakaobot import KakaoBot bot = KakaoBot() @bot.on_message async def handle_message(message, **args): if args['type'] == 'text': return 'Echoing: %s' % message else: return 'Cannot understand' if __name__ == '__main__': bot.run(host='0.0.0.0', port=8080)
Nice, isn’t it?
Why not Flask or Django?
Because setting up full Flask/Django development environment is a bit of pain, especially for this kind of tiny application with no front-end view. It is well-known using default development server for Flask/Django is not the way you run your web application. But in aiohttp, it is. That’s why I chose aiohttp.
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/python-kakaobot/ | CC-MAIN-2018-17 | refinedweb | 156 | 58.99 |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
One of the most common uses of the graph abstraction in computer science is to track dependencies. An example of dependency tracking that we deal with on a day to day basis is the compilation dependencies for files in programs that we write. These dependencies are used inside programs such as make or in an IDE such as Visual C++ to minimize the number of files that must be recompiled after some changes have been made.
Figure 1 shows a graph that has a vertex for each source file, object file, and library that is used in the killerapp program. The edges in the graph show which files are used in creating other files. The choice of which direction to point the arrows is somewhat arbitrary. As long as we are consistent in remembering that the arrows mean ``used by'' then things will work out. The opposite direction would mean ``depends on''.
A compilation system such as make has to be able to answer a number of questions:
In the following examples we will formulate each of these questions in terms of the dependency graph, and then find a graph algorithm to provide the solution. The graph in Figure 1 will be used in all of the following examples. The source code for this example can be found in the file examples/file_dependencies.cpp.
Here we show the construction of the graph. First, these are the required header files:
#include <iostream> // std::cout #include <utility> // std::pair #include <boost/graph/graph_traits.hpp> #include <boost/graph/adjacency_list.hpp> #include <boost/graph/topological_sort.hpp>
For simplicity we have constructed the graph "by-hand". A compilation system such as make would instead parse a Makefile to get the list of files and to set-up the dependencies. We use the adjacency_list class to represent the graph. The vecS selector means that a std::vector will be used to represent each edge-list, which provides efficient traversal. The bidirectionalS selector means we want a directed graph with access to both the edges outgoing from each vertex and the edges incoming to each vertex, and the color_property attaches a color property to each vertex of the graph. The color property will be used in several of the algorithms in the following sections." }; typedef std:) }; using namespace boost; typedef adjacency_list<vecS, vecS, bidirectionalS, property<vertex_color_t, default_color_type> > Graph; Graph g(used_by, used_by + sizeof(used_by) / sizeof(Edge), N); typedef graph_traits<Graph>::vertex_descriptor Vertex;
On the first invocation of make for a particular project, all of the files must be compiled. Given the dependencies between the various files, what is the correct order in which to compile and link them? First we need to formulate this in terms of a graph. Finding a compilation order is the same as ordering the vertices in the graph. The constraint on the ordering is the file dependencies which we have represented as edges. So if there is an edge (u,v) in the graph then v better not come before u in the ordering. It turns out that this kind of constrained ordering is called a topological sort. Therefore, answering the question of compilation order is as easy as calling the BGL algorithm topological_sort(). The traditional form of the output for topological sort is a linked-list of the sorted vertices. The BGL algorithm instead puts the sorted vertices into any OutputIterator, which allows for much more flexibility. Here we use the std::front_insert_iterator to create an output iterator that inserts the vertices on the front of a linked list. Other possible options are writing the output to a file or inserting into a different STL or custom-made container.
typedef std::list<Vertex> MakeOrder; MakeOrder make_order; boost::topological_sort(g, std::front_inserter(make_order)); std::cout << "make ordering: "; for (MakeOrder::iterator i = make_order.begin(); i != make_order.end(); ++i) std::cout << name[*i] << " "; std::cout << std::endl;The output of this is:
make ordering: zow.h boz.h zig.cpp zig.o dax.h yow.h zag.cpp \ zag.o bar.cpp bar.o foo.cpp foo.o libfoobar.a libzigzag.a killerapp
Another question the compilation system might need to answer is: what files can be compiled simultaneously? This would allow the system to spawn threads and utilize multiple processors to speed up the build. This question can also be put in a slightly different way: what is the earliest time that a file can be built assuming that an unlimited number of files can be built at the same time? The main criteria for when a file can be built is that all of the files it depends on must already be built. To simplify things for this example, we'll assume that each file takes 1 time unit to build (even header files). For parallel compilation, we can build all of the files corresponding to vertices with no dependencies, e.g., those that have an in-degree of 0, in the first step. For all other files, the main observation for determining the ``time slot'' for a file is that the time slot must be one more than the maximum time-slot of the files it depends on.
We start be creating a vector
time that will store the
time step at which each file can be built. We initialize every value
with time step zero.
std::vector<int> time(N, 0);
Now, we want to visit the vertices against in topological order, from those files that need to be built first until those that need to be built last. However, instead of printing out the order immediately, we will compute the time step in which each file should be built based on the time steps of the files it depends on. We only need to consider those files whose in-degree is greater than zero.
for (i = make_order.begin(); i != make_order.end(); ++i) { if (in_degree (*i, g) > 0) { Graph::in_edge_iterator j, j_end; int maxdist = 0; for (tie(j, j_end) = in_edges(*i, g); j != j_end; ++j) maxdist = std::max(time[source(*j, g)], maxdist); time[*i]=maxdist+1; } }
Last, we output the time-slot that we've calculated for each vertex.
std::cout << "parallel make ordering, " << std::endl << " vertices with same group number" << std::endl << " can be made in parallel" << std::endl << std::endl; for (boost::tie(i, iend) = vertices(g); i != iend; ++i) std::cout << "time_slot[" << name[*i] << "] = " << time[*i] << std::endl;The output is:
parallel make ordering, vertices with same group number can be made in parallel time_slot[dax.h] = 0 time_slot[yow.h] = 1 time_slot[boz.h] = 0 time_slot[zow.h] = 0 time_slot[foo.cpp] = 1 time_slot[foo.o] = 2 time_slot[bar.cpp] = 2 time_slot[bar.o] = 3 time_slot[libfoobar.a] = 4 time_slot[zig.cpp] = 1 time_slot[zig.o] = 2 time_slot[zag.cpp] = 2 time_slot[zag.o] = 3 time_slot[libzigzag.a] = 5 time_slot[killerapp] = 6
Another question the compilation system needs to be able to answer is whether there are any cycles in the dependencies. If there are cycles, the system will need to report an error to the user so that the cycles can be removed. One easy way to detect a cycle is to run a depth-first search, and if the search runs into an already discovered vertex (of the current search tree), then there is a cycle. The BGL graph search algorithms (which includes depth_first_search()) are all extensible via the visitor mechanism. A visitor is similar to a function object, but it has several methods instead of just the one operator(). The visitor's methods are called at certain points within the algorithm, thereby giving the user a way to extend the functionality of the graph search algorithms. See Section Visitor Concepts for a detailed description of visitors.
We will create a visitor class and fill in the back_edge() method, which is the DFSVisitor method that is called when DFS explores an edge to an already discovered vertex. A call to this method indicates the existence of a cycle. Inheriting from dfs_visitor<> provides the visitor with empty versions of the other visitor methods. Once our visitor is created, we can construct and object and pass it to the BGL algorithm. Visitor objects are passed by value inside of the BGL algorithms, so the has_cycle flag is stored by reference in this visitor.
struct cycle_detector : public dfs_visitor<> { cycle_detector( bool& has_cycle) : _has_cycle(has_cycle) { } template <class Edge, class Graph> void back_edge(Edge, Graph&) { _has_cycle = true; } protected: bool& _has_cycle; };
We can now invoke the BGL depth_first_search() algorithm and pass in the cycle detector visitor.
bool has_cycle = false; cycle_detector vis(has_cycle); boost::depth_first_search(g, visitor(vis)); std::cout << "The graph has a cycle? " << has_cycle << std::endl;
The graph in Figure 1 (ignoring the dotted line) did not have any cycles, but if we add a dependency between bar.cpp and dax.h there there will be. Such a dependency would be flagged as a user error.
add_edge(bar_cpp, dax_h, g); | http://www.boost.org/doc/libs/1_43_0/libs/graph/doc/file_dependency_example.html | CC-MAIN-2017-26 | refinedweb | 1,504 | 55.34 |
Agenda
See also: IRC log
<shadi>
SAZ: check for open action items
SAZ: what about RDF schema for pointer methods?
CI: have an early draft
... based on the last model we discussed about
... some comments about open issues in the schema
SAZ: CV what about content draft?
CV: work in progress...
SAZ: TP next week, try to get some advance in
the meanwhile if BTW allows
... DR would you be able to read the last HTTP in RDF?
<shadi> ACTION: Shadi finalize the updated "EARL 1.0 Schema Editors Draft" [recorded in]
DR: OK
<shadi> ACTION: CarlosV and Johannes prepare first "Content" Editors Draft [recorded in]
<shadi> ACTION: David peer-review HTTP Vocabulary in RDF Editors Draft and send coments to the list [recorded in]
SAZ: interesting to know relation with MWBP checker moki vocabulary
<shadi>
DR: can't help, not in this TF
<shadi>
<shadi>
SAZ: some open comments in HTTP in RDF
JK: 2.2.1 body property
... you can have various representations of the same content
<shadi>
JK: byte, text, xml...
... how to model?
... could use rdf:Alt?
<shadi> "The first member of the container, i.e. the value of the rdf:_1 property, is the default choice."
SAZ: what should be the default representation may be another discussion
JK: maybe Bag too generally
SAZ: certainly not a rdf:Seq
... options are rdf:Alt rdf:Bag or no list at all
CV: prefer not list at all
SAZ: what is your reasoning to avoid using the semantic of a list
CI: the key question is they are equivalent not
alternative
... you can choose one or several
... musn´t be forced to choose just one
JK: can create a new object, a sort of equivalent container
SAZ: not sure about the value
... let's keep it for now and put an editorial's note asking for feedback
JK: is your proposal havin multiple bodies properties?
SAZ: yes
... looks good, just a little bit of clean up and wait for the Content
<shadi>
<shadi>
<shadi> s/
JK: references for files, not namespaces
... rdf, not rdfs
<shadi>
CI: subclass rdf:Bag and reuse it for specific
purpose
... to use the rdf:li items
JK: think that should be legal, yes
CI: easier to understand this approach, and
also more logic
... have containers with groups of pointers
... we can think about Sequences or Alternative lists too
JK: problem with RDF lists is that you can't
describe the types of object in the lists
... it would be legal to add *anything* into the group
CI: can't subclass the rdf:li property
... to change the range of the property?
SAZ: did you try running this code through some of the RDF validators and parsers?
CI: it is valid, haven't tried subclassing the rdf:li property
<JohannesK> <>
JK: there is no real rdf:li, it is a variable property
<JohannesK> <>
<JohannesK> <>
SAZ: the RDF validator focuses primarily on the
schema and validity, others try to explore the semantics
... mainly OWL validators, they spot important issues such as if certain restrictions can not be met
... there is a mail about some of these parsers somewhere on the mailing list
JK: no range for the reference property?
SAZ: no range, it could be anything
[discussion if should have a range, and what it would be]
<scribe> ACTION: CarlosI to finalize draft RDFS (check with validators/parsers), and send comments/questions to the mailing loist [recorded in]
<scribe> ACTION: all to review draft RDFS proposal by CarlosI and send comments to the list [recorded in]
CI: should we make equivalent pointer class a
subclass of single pointer?
... all the equivalents point to the same place
SAZ: sound good, would be easier to see in the hierarchy model (see thread on the mailing list)
JK: reference properties could be different between the container and the pointers
CI: we have the same problem in many other instances
JK: we removed it from the equivalent pointer class to avoid this problem, we are reintroducing it by using single pointer
CI: we have the same problem in compound pointer and elsewhere
JK: agree, we also agreed that it may sometimes
make sense to have different references (although the pointers are
equivalent)
... however, what is the logic in this case?
CI: we have to think more about the reference and see how we can provde more guidance
next meeting tentatively 14 Novemeber (regrets from Shadi)
meeting after on 21 November | http://www.w3.org/2007/10/31-er-minutes | CC-MAIN-2014-52 | refinedweb | 741 | 67.59 |
Save and Retrieve Dynamic TextBox Values in GridView to SQL Database
As the title says it all. So without wasting anytime let’s quickly go through the theme of this article—How we are going to implement the desired functioning.
Getting Started
Step 1:
First of all, let’s create a simple sample Table in SQL Server. In my case am going to name it ‘DemoTab’ having attribute ID.
(Just like that..)
Step 2:
Now let’s proceed to ASPX source and add a Button for saving the data to the database.
(Just like that..)
Step 3:
Now, am going to create a method for saving the data to the database. But before that, we need to establish the database connection. (Am not showing how to do that, but if you’re beginner in that case you can go through my article- Database Connectivity )
Just go to the web.config and paste below connection string there. Change some credentials such as per your SQL server.
<connectionStrings> <add name= "DBCon" connectionString= "Data Source= ABHI-PC; Initial Catalog=DemoDB; Integrated Security=SSPI;" providerName="System.Data.SqlClient"/> </connectionStrings>
Step 4:
As you can see we are done with establishing connection string. Now we can proceed to the next step which is the creation of the method for saving the data to the database. For that just add the following namespaces as shown below:
using System.Text; using System.Data.SqlClient; using System.Collections.Specialized;
(These namespaces are for SqlClient, String Collections and StringBuilder built-in methods. We are going to use their functionality later.)
Step 5:
Now let’s create the method for calling the connection strings.
private string GetConnectionString() { // "DBCon" is the name of the Connection String return System.Configuration.ConfigurationManager.ConnectionStrings["DBCon"].ConnectionString; }
Step 6:
Now, am going to show the code for INSERT method. Take a look-
// An Insert Method private void InsertRecords(StringCollection sc) { SqlConnection conn = new SqlConnection(GetConnectionString()); StringBuilder sb = new StringBuilder(string.Empty); string[] splitItems = null;
foreach (string item in sc) { const string sqlStatement = "INSERT INTO DemoTab (Column1,Column2,Column3,Column4,Column5) VALUES"; if (item.Contains(",")) { splitItems = item.Split(",".ToCharArray()); sb.AppendFormat("{0}('{1}','{2}','{3}',’{4}’,’{5}’); ", sqlStatement, splitItems[0], splitItems[1], splitItems[2]); } }
try { conn.Open(); SqlCommand cmd = new SqlCommand(sb.ToString(), conn); cmd.CommandType = CommandType.Text; cmd.ExecuteNonQuery(); Page.ClientScript.RegisterClientScriptBlock(typeof(Page), "Script", "alert('Records are Saved Successfuly!');", true); }
catch (System.Data.SqlClient.SqlException ex) { string msg = "Insert Error:"; msg += ex.Message; throw new Exception(msg); }
finally { conn.Close(); } }
Step 7:
This is the final step.
Now, am going to create a Button Click event. So that we can call the method “InsertRecords” after extracting the dynamic TextBox values.
Here’s a code."); TextBox box4 = (TextBox)Gridview1.Rows[rowIndex].Cells[4].FindControl("TextBox4"); TextBox box5 = (TextBox)Gridview1.Rows[rowIndex].Cells[5].FindControl("TextBox5");
// Getting Values from TextBoxes sc.Add(box1.Text + "," + box2.Text + "," + box3.Text + “,”+ box4.Text + "," + box5.Text); rowIndex++; }
// Call the method InsertRecords(sc); } } }
That’s all guys.
Just Run the code and it will directly take you here—
Now enter the values and enjoy.
Wrapping-Up
Guys that’s all from ” Save and Retrieve Dynamic TextBox Values in GridView to SQL “. I hope it helps.
Keep visiting for more SQL and C# stuff. :idea 💡
I gotta favorite this web site it seems extremely helpful very beneficial
Hey Techie,
There you go. Keep Visiting. Cheers! 🙂
Good site! I really love how it is easy on my eyes and the data are well written. I am wondering how I might be notified when a new post has been made. I’ve subscribed to your feed which must do the trick! Have a great day!
Hey,
Thanks, you rock man and in parallel you amazed me too.. I dunno howcome, you’d subscribe to our feeds as we don’t have any. xo Cheers! 🙂
You have brought up superb points, appreciate it for the post. | http://www.letustweak.com/tweaks/save-and-retrieve-dynamic-textbox-values-in-gridview-sql-to-database/ | CC-MAIN-2019-30 | refinedweb | 651 | 60.72 |
//**************************************
// Name: Civil Status Checker in C++
// Description:In this article, I would like to share with you a simple program that is being requested of me by a student who visited my website. It is a civil status checker that I wrote using C++ as our programming language.
What the program does is to ask the user to give his or her name and then it will ask again for the civil status of the person and the program will display the result on the screen. The code is very simple and easy to understand.
I am currently accepting programming work, it projects, is the following jakerpomperada@gmail.com, jakerpomperada@aol.
My personal website is
// By: Jake R. Pomperada
//**************************************
// civil_status.cpp
// Written By Mr. Jake Rodriguez Pomperada, MAED-IT
// Date : September 20, 2018
// Tool : Dev C++ 5.11
//
// jakerpomperada@aol.com and jakerpomperada@gmail.com
#include <iostream>
#include <cctype>
#include <cstring>
#include <cstdio>
using namespace std;
int main()
{
string status;
char civil_status;
char option,reply;
char name[200];
do {
system("cls");
cout <<"\n\n";
cout <<"\tCivil Status Checker in C++";
cout <<"\n\n";
cout << "\tWhat is your name : ";
gets(name);
cout <<"\n\n";
cout <<"\t1-Single,2-Married,3-Annulled,4-Separated and 5-Widow";
cout <<"\n\n";
cout <<"\tWhat is your Civil Status? : ";
cin >> civil_status;
if (civil_status == '1') {
status ="SINGLE";
}
else if (civil_status == '2') {
status ="MARRIED";
}
else if (civil_status == '3') {
status ="ANNULLED";
}
else if (civil_status == '4') {
status ="SEPARATED";
}
else if (civil_status == '5') {
status ="WIDOW";
}
else {
status ="Invalid Civil Status Try Again.";
}
for (int i=0; i<strlen(name); i++) {
name[i] = toupper(name[i]);
}
cout <<"\n\n";
cout <<"\t===== DISPLAY RESULT =====";
cout <<"\n\n";
cout <<"\tHi " <<name <<".";
cout <<"\n\n";
cout <<"\tYour Civil Status is " << status <<".";
cout <<"\n\n";
cout <<"\tDo you want to continue Y/N? : ";
cin >> reply;
option = toupper(reply);
} while(option=='Y');
cout <<"\n\n";
cout <<"\tThank you for using this software";
cout <<"\n\n";
}
Other. | http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=13997&lngWId=3 | CC-MAIN-2018-43 | refinedweb | 320 | 52.7 |
Introduction
- This tutorial shows how to use ASP.NET SignalR to create a real-time chat application
- You will add SignalR to an MVC 4 application and create a chat view to send and display messages
What is SignalR?
- SignalR is an open-source .NET library for building web applications that require live user interaction or real-time data updates
- You can develop real-time applications such as social applications, multiuser games, business collaboration, and news, weather, or financial update applications
SignalR Interactions

- SignalR manages the connection between server and clients for you: it uses WebSockets where the browser and server support them, and transparently falls back to older transports (such as Server-Sent Events or long polling) where they do not
SignalR with ASP.NET MVC 4

- Here I have used Visual Studio 2012 and an ASP.NET MVC 4 Web Application
Step 1 :
- In Visual Studio 2012 create an ASP.NET MVC 4 web application, name it SignalRChatApp, and click OK
Step 2 :
- Select the Internet Application template and click OK
Step 3 :
- Select the Tools ==> Library Package Manager ==> Package Manager Console
Step 4 :
- Run the following command
- install-package Microsoft.AspNet.SignalR
Step 5 :
- In Solution Explorer expand the References and Scripts folder. Note that References and script libraries for SignalR have been added to the project
Step 6 :
- In Solution Explorer, right-click the project, select Add ==> New Folder, and name the new folder as Hubs
Step 7 :
- Right-click the Hubs folder, click Add ==> New Item, select SignalR Hub Class, and name the class as ChatHub
Step 8 :
- Replace the code in the ChatHub class with the following code
using Microsoft.AspNet.SignalR;

namespace SignalRChatApp.Hubs
{
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // Call the addNewMessageToPage method to update clients
            Clients.All.addNewMessageToPage(name, message);
        }
    }
}
Key points of the above code
ChatHub class
- The ChatHub class derives from the SignalR Hub class
- Clients call the ChatHub.Send method to send a new message
- The hub in turn sends the message to all clients by calling Clients.All.addNewMessageToPage
Send method
- Send calls the dynamic method addNewMessageToPage (which is resolved at run time to a matching client-side JavaScript function) to update clients
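The one-to-many dispatch behind Clients.All can be modeled in a few lines of plain JavaScript (an illustration of the pattern only; createChatHub, connect, and the client objects are names invented for this sketch, not SignalR APIs):

```javascript
// Minimal model of the hub's fan-out: send(name, message) invokes
// addNewMessageToPage on every connected client, just as the C# hub's
// Clients.All.addNewMessageToPage(name, message) does.
function createChatHub() {
    var clients = [];
    return {
        connect: function (client) {
            clients.push(client);
        },
        send: function (name, message) {
            clients.forEach(function (client) {
                client.addNewMessageToPage(name, message);
            });
        }
    };
}
```

Every client that has registered an addNewMessageToPage function receives each message, which is why a comment typed in one browser instance shows up in all of them.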
Step 9 :
- Open the Global.asax file for the project
- Add a call to the method RouteTable.Routes.MapHubs(); as the first line of code in the Application_Start method
- The completed Application_Start method is as below
using System.Web.Http;
using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;

namespace SignalRChatApp
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Register the default route for SignalR hubs
            // (must be called before any other route registration)
            RouteTable.Routes.MapHubs();

            AreaRegistration.RegisterAllAreas();

            WebApiConfig.Register(GlobalConfiguration.Configuration);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
        }
    }
}
Key points of the above code
- This code registers the default route for SignalR hubs
- It must be called before you register any other routes
Step 10 :
- Add a Chat action method to the HomeController class found in Controllers/HomeController.cs
- This method returns the Chat view
public ActionResult Chat()
{
    return View();
}
Step 11 :
- Right-click within the Chat method you just created, and click Add View to create a new view file
Step 12 :
- Edit the new view file named Chat.cshtml
- The complete code for the chat view is as below
@{ ViewBag.Title = "Chat Application"; } <h2>Chat Application<.1.3.min> }
Key points of the above code
- The script creates a proxy object for the hub (var chat = $.connection.chatHub) and registers the addNewMessageToPage callback that the server-side hub calls to push new messages to the page
The following code shows how to create a callback function
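A sketch of that callback and the html-encoding it relies on (htmlEncode and formatMessage are standalone helper names introduced here for illustration; in the view the markup is appended with jQuery):

```javascript
// HTML-encode user input before it is appended to the page, so a chat
// message containing markup cannot inject script into other clients.
function htmlEncode(value) {
    return String(value)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
}

// Build the list item that the callback appends to the discussion list.
function formatMessage(name, message) {
    return '<li><strong>' + htmlEncode(name) + '</strong>: '
        + htmlEncode(message) + '</li>';
}

// In the view, the callback is registered on the generated proxy:
//   chat.client.addNewMessageToPage = function (name, message) {
//       $('#discussion').append(formatMessage(name, message));
//   };
```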
- The following code shows how to open a connection with the hub and handle the Send button's click event
- This approach ensures that the connection is established before the event handler executes
$.connection.hub.start().done(function () {
    $('#sendmessage').click(function () {
        // Call the Send method on the hub.
        chat.server.send($('#displayname').val(), $('#message').val());
        // Clear text box and reset focus for next comment.
        $('#message').val('').focus();
    });
});
Step 13 :
- Press F5 to run the project
- In the browser address line, append /home/chat to the URL of the default page for the project
- The Chat page loads in a browser instance and prompts for a user name
- Enter a user name
Step 14 :
- Copy the URL from the address line of the above browser and use it to open another browser instance
- Enter a unique user name
Step 15 :
- In each browser instance, add a comment and click Send
- The comments should display in both browser instances
- This simple chat application does not maintain the discussion context on the server
- The hub broadcasts comments to all current users
- Users who join the chat later will see messages added from the time they join
Where is the script file named hubs?
- The SignalR library dynamically generates this file at run time
- This file manages the communication between jQuery script and server-side code
- In Chrome, click Tools ==> Developer Tools ==> Resources to see it
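As a rough mental model (an illustrative sketch only, not SignalR's actual generated code), the proxy turns a call like `chat.server.send(...)` into a message describing the hub name, method name, and arguments, which is then sent over the connection:

```javascript
// Illustrative only: a toy stand-in for the generated "hubs" proxy.
// It exposes server hub methods as client-side functions that package
// the invocation into a message for the underlying connection.
function makeHubProxy(hubName, sendOverConnection) {
  return {
    server: new Proxy({}, {
      get: (_target, methodName) => (...args) =>
        sendOverConnection({ hub: hubName, method: methodName, args }),
    }),
  };
}

// Example: chat.server.send(...) becomes an invocation message.
const sent = [];
const chat = makeHubProxy('chatHub', (msg) => sent.push(msg));
chat.server.send('Alice', 'Hello');
```

The real generated file also handles camel-casing, callbacks, and connection state, but the core idea is this method-name-to-message translation.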
The source code for this article
Conclusion
- You learned that SignalR is a framework for building real-time web applications
- You also learned how to add SignalR to an ASP.NET MVC 4 application, how to create a hub class, and how to send and receive messages from the hub
- So use and enjoy the benefits of this awesome framework
Nice tutorial. But I've a problem: when I follow step #9 and put the line "RouteTable.Routes.MapHubs();" in the Application_Start method, it shows me an error. Basically MapHubs() shows a red mark — is there any way to solve this?
Thanks in advance
Hi Mahiuddin,
Thanks. I have tested the above code, so try the source code which I put on GitHub, and let me know if you have further queries with that source code as well.
Ok @Sampath, where is the source code?
Check the 'The source code for this article' section in the above article (just before the conclusion) :)
Sorry! (I got it) :)
Great tutorial mate +1 for you
Hi Mary,You're warmly welcome.
nice article Sampath, keep it up :)
Hi Sandeep, Thanks, I’m glad you enjoyed it.
Nice Tutorial! Thanks for posting, I'll be playing with this over the weekend . . .
My question would be, why host your source on Rapidshare, which throttles the download, instead of Github or Bitbucket?
Hi John, you're warmly welcome. I'm glad you found it useful.
Yup, I will do that and let you know. Thanks for your feedback.
Hi John, I have put the source code on GitHub. Please check that. It reduced the zip file size by nearly 3 times. Really awesome. Thanks for the information.
Nicely done Sampath. I forked your Github repo, added a suggested README file, and sent a pull request. Always a good idea to add a README, especially for repos which are intended to accompany a tutorial.
Keep up the great work!
Hi John, I really appreciate your work. I have learnt a lot about GitHub today. Thanks a lot for your support. Keep in touch. Have a nice day :)
Great tutorial! :)
How would you implement security? For example, password protected chat rooms where users can only enter if they know the password.
I know this is just a tutorial, but I'd suggest following best practices and putting all your JavaScript into a separate .js file so that people that copy and paste don't put huge amounts of JavaScript into their view in production code. The DOM manipulation and SignalR communication are tightly coupled which is something I'd also advise against in production code :)
Hi Daniel, thanks for your feedback. Answer for your Q1: Yup, you can easily implement security for your Hub by using the well-known [Authorize] attribute. It's like this:

```csharp
[Authorize]
public class ChatHub : Hub
{
}
```

For more info check this :
Answer for your Q2: The points which you mentioned are right. But since this is a "Hello Chat" kind of tutorial, I didn't implement best practices here. When we use this in a real-world app, we do have to take care of such things.
Good article. Thanks to author. It would be great if you write tutorial for integrating customer support chat applications like olark in MVC app.
@Mukesh Thanks, I’m glad you enjoyed it!.Yup,Good idea for another article.Will see. | http://tech.pro/tutorial/1491/chat-application-with-signalr-and-aspnet-mvc-4 | CC-MAIN-2014-15 | refinedweb | 1,250 | 64.1 |
Change the "align" parameter to align_ptr() to an unsigned long. On IA-64, the returned pointer looses it's upper half, which causes a copy_to_user() call in results_to_user() to fail, which eventually causes DM_TARGET_STATUS ioctls to fail. Patch is against 2.4.20-dm-10, but also applies cleanly to 2.5.66. --- linux-2.4.20a/drivers/md/dm-ioctl.c Fri Mar 28 15:27:32 2003 +++ linux-2.4.20b/drivers/md/dm-ioctl.c Fri Mar 28 15:26:38 2003 @@ -401,7 +401,7 @@ * Round up the ptr to the next 'align' boundary. Obviously * 'align' must be a power of 2. */ -static inline void *align_ptr(void *ptr, unsigned int align) +static inline void *align_ptr(void *ptr, unsigned long align) { align--; return (void *) (((unsigned long) (ptr + align)) & ~align); | http://www.redhat.com/archives/dm-devel/2003-March/msg00015.html | CC-MAIN-2016-22 | refinedweb | 132 | 67.25 |
Portals
The `Portal` components render DOM nodes elsewhere on the page. This is useful for things like modals, tooltips, and dropdowns, when you want to define the content near the trigger, but have it display at the bottom of the page (generally to solve z-index and overflow incompatibilities).

For example, modals can be rendered at the bottom of `<body>`, but the React component that creates the modal content (e.g. a `<button>`) does not have access to `<body>` directly. If a `PortalDestination` is put at the bottom of `<body>`, a `PortalSource` can then be used anywhere without knowing about `<body>`.
Basic example
Props
Imports
Import React components (including CSS):
import {PortalSource, PortalDestination} from 'pivotal-ui/react/portals'; | https://styleguide.pivotal.io/components/portals/ | CC-MAIN-2021-04 | refinedweb | 123 | 51.48 |
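The source/destination relationship can be pictured with a tiny framework-free sketch (illustrative only, not Pivotal UI's actual implementation): a named destination registers a render callback, and any source can push content to it without knowing where in the tree it lives.

```javascript
// Toy model of the PortalSource / PortalDestination relationship.
function createPortalBus() {
  const destinations = new Map();
  return {
    // A destination says: "render anything sent under this name here".
    registerDestination(name, render) {
      destinations.set(name, render);
    },
    // A source pushes content toward the named destination.
    sendFromSource(name, content) {
      const render = destinations.get(name);
      if (render) render(content);
    },
  };
}

const bus = createPortalBus();
const rendered = [];
bus.registerDestination('modal', (content) => rendered.push(content));
bus.sendFromSource('modal', '<button>Open</button>');
```

In the real components, the "destination" is a mounted DOM node and "sending" is a React render, but the decoupling is the same.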
Java provides a way to avoid thread polling by using inter-thread communication. The wait(), notify(), and notifyAll() methods of the Object class are used for this purpose. These methods are implemented as final methods in Object, so all classes have them. All three methods can be called only from within a synchronized context.
wait() and sleep()
Polling is usually implemented with a loop, i.e. checking some condition repeatedly. Once the condition is true, appropriate action is taken. This wastes CPU time.
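To make the contrast concrete, here is a minimal sketch (my own illustrative example, not taken from the tutorial) of a guarded block that uses wait() and notifyAll() instead of a polling loop; the waiting thread sleeps until it is notified rather than burning CPU:

```java
class MessageBox {
    private String message;
    private boolean ready = false;

    // Producer: publish a message and wake up any waiting consumer.
    synchronized void put(String m) throws InterruptedException {
        while (ready) {
            wait(); // wait until the previous message has been consumed
        }
        message = m;
        ready = true;
        notifyAll(); // no polling: waiting threads are woken up explicitly
    }

    // Consumer: block (without spinning) until a message is available.
    synchronized String take() throws InterruptedException {
        while (!ready) {
            wait();
        }
        ready = false;
        notifyAll();
        return message;
    }
}
```

Both methods loop around wait() to re-check the condition after waking up, which is the standard guard against spurious wakeups.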
Deadlock is a situation of complete lock, where no thread can complete its execution because of a lack of resources. In the picture above, Thread 1 is holding resource R1 and needs another resource, R2, to finish execution, but R2 is locked by Thread 2, which needs R3, which in turn is locked by Thread 3. Hence none of them can finish, and they are stuck in a deadlock.
```java
class Pen {}
class Paper {}

public class Write {
    public static void main(String[] args) {
        final Pen pn = new Pen();
        final Paper pr = new Paper();

        Thread t1 = new Thread() {
            public void run() {
                synchronized (pn) {
                    System.out.println("Thread1 is holding Pen");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        // do something
                    }
                    synchronized (pr) {
                        System.out.println("Requesting for Paper");
                    }
                }
            }
        };

        Thread t2 = new Thread() {
            public void run() {
                synchronized (pr) {
                    System.out.println("Thread2 is holding Paper");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        // do something
                    }
                    synchronized (pn) {
                        System.out.println("requesting for Pen");
                    }
                }
            }
        };

        t1.start();
        t2.start();
    }
}
```
```
Thread1 is holding Pen
Thread2 is holding Paper
```
For more details, visit the following: Deadlocks | https://www.studytonight.com/java/interthread-communication.php | CC-MAIN-2020-05 | refinedweb | 256 | 50.84 |
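A common fix, sketched below as a hypothetical rework of the example (class and variable names are mine), is to make every thread acquire the locks in the same global order; if both threads take the pen before the paper, the circular wait can never form:

```java
import java.util.concurrent.atomic.AtomicInteger;

class SafeWrite {
    static final Object pen = new Object();
    static final Object paper = new Object();

    // Both threads acquire pen first, then paper: one global lock order
    // means the circular wait needed for a deadlock can never occur.
    static int runBoth() throws InterruptedException {
        AtomicInteger finished = new AtomicInteger();
        Runnable task = () -> {
            synchronized (pen) {
                synchronized (paper) {
                    finished.incrementAndGet();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return finished.get(); // 2 when both threads completed
    }
}
```

Unlike the Pen/Paper example above, whichever thread gets the pen first finishes both critical sections before the other proceeds, so both threads always run to completion.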
How I wrote my own React wrapper for Google Map
Gabriel Wu
・4 min read
A few months ago, when I started the Neighborhood Map project of Udacity, I first checked the libraries available. There were quite a few choices:
However, none of them could meet my requirements (it is also possible that I did not figure out the proper way to deal with the problems). I want the components to be flexible, e.g., the `Marker` component does not necessarily have to be placed within a `Map` component. This flexibility is essential when designing layouts, and also when structuring components so as not to trigger unnecessary rerenders.
What they provide (generally):
```
<Map>
  <Marker />
  <InfoWindow />
</Map>
```
What I want:
```
<Map />
<ComponentA>
  <Marker />
  <ComponentB>
    <InfoWindow />
  </ComponentB>
</ComponentA>
```
The idea came into my mind that I could write my own React wrapper for Google Map. This sounded a bit audacious because I had never written a React component library before. As the deadline of the Udacity project came closer and closer, I finally made up my mind to write my own Google Map library, with React hooks and TypeScript, and TDD.
Although I had not written a React component library, I had written a very simple Vue component library (following instructions of a blog). I had been writing TypeScript for several months, and had written a React app with context and hooks. And I had used TDD in several projects. These experiences gave me confidence.
Yet challenges did come, one after another. I had written some tests, but I had not written library mocks. I managed to mock `loadjs`, which I used to load the Google Map scripts.

Another problem was that hooks live with functional components, and functional components do not have instances. Other Google Map libraries all use class components, and implement methods on class instances to surrogate Google Map objects' methods. I could not do so. In the end, I chose to maintain an id-object Map to store references to all Google Map objects. It worked fluently, and could be used without React `ref` (class instances rely on `ref`). The only price was that things like `Marker` and `Polygon` would require a unique `id` when using my library.
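The id-object Map idea can be sketched in a few lines (a hypothetical reconstruction, not the library's actual code): components register their Google Map objects under a unique id, and anything else can look them up later without holding a React ref.

```javascript
// Hypothetical sketch of an id -> Google Map object registry.
function createObjectRegistry() {
  const objects = new Map();
  return {
    // A component registers its underlying Google Map object on mount.
    register(id, obj) {
      if (objects.has(id)) throw new Error(`duplicate id: ${id}`);
      objects.set(id, obj);
    },
    // Any other code can retrieve the object by id.
    lookup(id) {
      return objects.get(id);
    },
    // Cleanup on unmount.
    unregister(id) {
      objects.delete(id);
    },
  };
}
```

The duplicate-id check is why each `Marker` or `Polygon` needs a unique `id`: the registry has no other way to tell two instances apart.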
At first, I just thought about my own needs, and the API design was way too casual (you can check my original repo and time-travel to earlier versions). Later, my personal project was finished, but I knew a lot of things were still up in the air.
lucifer1004 / boycott
A map app.
Boycott
This is a Udacity project. It is statically deployed here via Now.
To run it locally
```
git clone
cd boycott
yarn install
yarn start
```
You can then visit it at
localhost:3000
Features
- `cors-anywhere` is used to address the CORS issue
- Filter options: All/Open/High Rating/Low Price
- Use of the Google Map API is via `@lucifer1004/react-google-map`, which is a React wrapper for Google Map written by myself.
It is a simple React app, using Google Map and Yelp to implement basic place search.
After submitting the project at Udacity, I went on with my library. For my personal project's needs, I had only implemented `MapBox`, `Marker`, `InfoWindow`, `HeatMap` and `Polygon`. There were around 20 more Google Map components.
It happened several times that I had to refactor the whole library when trying to implement a new component. Luckily, I wrote unit tests for each component, and those tests helped a lot during refactors.
I spent about two weeks' spare time implementing:
- other shapes: Circle, Polyline, Rectangle
- layers: BicycleLayer, TrafficLayer, TransitLayer
- search: SearchBox, StandaloneSearchBox
- streetview: StreetView, StandaloneStreetView
- overlays: CustomControl, GroundOverlay, KmlLayer, OverlayView
- drawing: DrawingManager
The library started from `create-react-app`. I used a separate `package.json` in `src/lib` to configure the library, while the app was configured by the root-level `package.json`. As the library grew, I felt I should set up a monorepo properly.
The week of refactoring project structure was rather tough. I read many blogs and posts on monorepos, but still could not get everything work as I hoped. I gave up once, and nearly gave up again the second time, and I made it.
With `lerna`, `yarn workspaces`, and a custom symlink, I was finally pleased with the new structure. By running `yarn dev:lib` and `yarn dev:CRA` at the same time, the example CRA app would reload each time I changed the code of the library.
I am really glad that I decided to write my own wrapper library a month ago, otherwise I would not have learnt so much (I am going to write more posts in this series to talk about the things I have learnt in detail). And I will try to further improve my library. It has not been tested in real projects. Compared to existing libraries, some functions are missing. Also, there are some known issues, or limitations.
I am prepared.
Recently, I moved this project to a separate organization. Below is the repo.
googlemap-react / googlemap-react
Easier Google Map Integration for React projects.
googlemap-react
Easier Google Map Integration for React projects.
Why a new package
There has been similar packages such as tomchentw/react-google-maps google-map-react/google-map-react fullstackreact/google-maps-react so why bother writing a new library?
The aim is to make an easier-to-use Google Map library for React users, empowered by React's latest features (`React >= 16.8.0` is required) and TypeScript.
What is different
- Component position is free (generally)
- Direct access to Google Map objects
- More uniform API
- Type safe
Example usage
Prerequisites
- npm or yarn
```
yarn add @googlemap-react/core
# Or you can use
npm install --save @googlemap-react/core
```
- a valid Google Map API key (to replace the placeholder in the code snippet below)
```jsx
import {
  GoogleMapProvider,
  HeatMap,
  InfoWindow,
  MapBox,
  Marker,
  Polygon,
} from '@lucifer1004/react-google-map'

// In your component
return (
  <GoogleMapProvider>
    <MapBox apiKey="YOUR_GOOGLE_MAP_API_KEY…
```
Any advice or suggestions are welcome! If you want to use my library and run into any problem, just ask me!
If you want to join, that would be great!
Hi Gabriel,
Do you have a recommended implemenatation of MarkerClusterer and / or Spiderfier?
Currently i'm looking at adding my marker clusterer script generator inside my MapContainer componentDidMount, but i'm not sure how to reference the 'map' item.
Many thanks,
James
If you are using my wrapper, you can get the reference to `map` via `useContext`.

Since you mentioned `componentDidMount`, you are using class components instead of functional components, so you should just wrap your component with `GoogleMapConsumer`.
If I want to use the context outside of the render method, am I able to use, for example:

```jsx
import {GoogleMapContext} from '@googlemap-react/core'

class MapContainer extends React.Component {
  static contextType = GoogleMapContext;
  // etc
}
```

and then access the map by `this.context.state.map`?
Hm, I tried the way above and also refactored to a functional component and just used:

```jsx
import React, {useContext} from 'react'
import {
  GoogleMapProvider,
  MapBox,
  Marker,
  InfoWindow,
  GoogleMapContext,
} from '@googlemap-react/core'

const MapContainer = props => {
  const map = useContext(GoogleMapContext)
  console.log(map)
  // etc
}
```

and I'm just seeing `{state: undefined, dispatch: undefined}` as my console output for map, any idea why? :/
Great little wrapper for Google Maps - only problem I've run into is activating the InfoWindow on Marker click - am I missing something obvious here?
Thanks,
James
Hi James, here is an example of InteractiveMarker.
Amazing, thanks Gabriel :)
I'd actually made something slightly different - an iterator that creates the InfoWindows and their anchor positions, and then an onClick for each marker that passes out the marker ID as an action to a Redux store, changing the relevant infoWindow's visibility.
Really enjoying working with your library!
Actually, there was one other thing - is there a list of event handlers for each component?
For example, I can set an onClick handler on Mapbox and that deals with any click events on the map itself - are there handlers for onLoad, onZoomChange, etc?
Many thanks, James
See the PROPS & METHODS section for each component in the documentation.
Thanks :)
Amazing!Thanks! Fast and simple!!!! | https://dev.to/lucifer1004/how-i-wrote-my-own-react-wrapper-for-google-map-15ei | CC-MAIN-2020-16 | refinedweb | 1,384 | 61.87 |
LINQPad: More control over your queries
Eros Fratini
・5 min read
Keep exploring
Following my previous post of this serie, I'll keep exploring the tool, to show you some useful trick you can use to get a better control of the query.
If you're an experienced LINQPad user a lot of this stuff will look trivial, but I actually used it for years without knowing them, and I think some of you will appreciate a kickstart.
Control the Dump
As you already noticed the result of the Dump() extension is varying from type to type. Simple types and strings, for example, are just dumped as HTML strings on the output panel.
Objects are dumped as HTML table with their properties exploded on rows:
While arrays, lists and other enumerable types are rendered as tables in which properties are exploded in columns:
The Dump() extension is automatically searching for public properties, recursively, nesting the results where is needed and rendering them is its standard way.
But if you look at the overloads of the method you'll see 5 parameters:
- description
- depth
- toDataGrid
- exclude
- alpha
Writing something like:
DateTime.Now.Dump("Current date:");
Will use the description param, nesting the result under a label, useful if your query is outputting a lot of variables.
With the depth parameter you can control how much deep the Dump() is exploding the object on the first render. By default is 5, that means the Dump is preparing 5 levels of nested HTML tables upfront, and it can be a little too much, especially on large and complex objects (you will notice that the execution of the query will slow considerably if you try to render hundreds of results together). Setting the depth to a lower value will speed up the first render, and you'll still be able to manually explode the collapsed properties on the output panel just clicking on them.
Try for example:
System.Globalization.CultureInfo.GetCultures(System.Globalization.CultureTypes.AllCultures).Dump(1);
You'll notice a faster rendering, as this collection is quite big.
With the toDataGrid setted to true the output panel will switch to a WPF control instead of the standard HTML, while exclude allows you to omit from the rendering specific properties:
System.Globalization.CultureInfo.GetCultures(System.Globalization.CultureTypes.AllCultures).Dump(toDataGrid: true, exclude: "Parent");
Finally, the alpha params setted to true changes the order of appearence of the properties. By default LINQPad write them as they appear in the class, with this you'll see them alphabetically ordered.
Implementing ToDump()
You may need to specify a custom way to show data of specific classes, for example explicitating the properties to show or even the HTML format to use.
The fastest way is implementing a ToDump() method in the class, so assuming the class is defined in the query itself, you can write:
```csharp
class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    object ToDump()
    {
        return new
        {
            FirstName = this.FirstName,
            LastName = Util.WithStyle(this.LastName, "color:green")
        };
    }
}
```
In this case ToDump() is returning an anonymous object with the same properties, but LastName will be rendered in green (notice the use of `Util`, which we'll talk more about later).

Implementing this kind of method in your domain model is not optimal (in this case impossible: `Util` is defined only in LINQPad queries), so an alternative would be to use a static method in the "My Extensions" query.

"My Extensions" is a special query you'll find in the My Queries tab that's always included during the execution of the other queries, so whatever you link or define there will be available everywhere.

Still not optimal: you'd have to move your `User` class into My Extensions (or reference there the assembly which contains the class). That, as said, will import the class into EVERY other query you run.

In the end the best approach is working with reflection and the `dynamic` keyword.
```csharp
public static object ToDump(object input)
{
    if (input.GetType().FullName == "User") // e.g. "MyApp.DataModel.User"
    {
        dynamic output = input;
        return new
        {
            FirstName = output.FirstName,
            LastName = Util.WithStyle(output.LastName, "color:green")
        };
    }

    return input;
}
```
Another good approach, assuming you're producing your own NuGet packages, is to include in them a set of LINQPad example to be imported in your queries, as I'll show in future posts.
DumpContainer examples
A simple
.Dump() invocation can be limiting in some scenario: each one is going to create a new dumped output row, in a sequence, without the ability to change the actual dumped result during the execution of the query. But if you need to show a progress or an updating information in general is not the good thing to use.
DumpContainer is going to help you.
As the name suggests is a sort of container for dynamically updated contents, you Dump() it once and update its
Content as you need during the lifetime of the query:
```csharp
void Main()
{
    DumpContainer dc = new DumpContainer();
    dc.Content = "Starting the application...";
    dc.Dump();

    Thread.Sleep(500);

    for (int i = 1; i <= 10; i++)
    {
        dc.Content = $"{i}/10 - Processing...";
        Thread.Sleep(200);
    }

    dc.Content = "Completed!";
}
```
If you just need a progress bar to be shown you can use, guess what, `Util.ProgressBar`.

As I said before, each LINQPad query references the LINQPad.Util namespace, a utility belt full of useful tools to interact with LINQPad itself, and ProgressBar is a class to work with simple but effective HTML progress bars.

We can replace the previous code with something like:
```csharp
void Main()
{
    Util.ProgressBar pb = new LINQPad.Util.ProgressBar();
    pb.Caption = "Progress";
    pb.Dump();

    for (double i = 1; i <= 10; i++)
    {
        pb.Fraction = i / 10;
        Thread.Sleep(200);
    }
}
```
In this case you can interact with the `Fraction` or `Percent` property to advance the bar.
Manage input for interactive queries
In some scenarios it can be useful not to hardcode everything, but to ask for user input(s), the same way you would with a console application.
Again you can access the Util namespace ad use the ReadLine() method. This will prompt an input box at the bottom of LINQPad window, its input can be assigned to a string variable and be used in the execution code:
Exploring the Util.*
Explore the Util namespace! Is really full of useful stuff that'll help you pushing your queries to the next level.
Some examples:
- .Cmd() : Will execute any system command
- .Dif() : A graphical comparer for objects, allows you to easily show differences between two given inputs
- .KeepRunning() : creates a cycle to keep the query alive, useful when working with secondary threads
- .CurrentQuery : is the object representing the current query, allows you to access the raw code and assembly references
Conclusions
LINQPad is really an amazing piece of software, and knowing it well is going to improve your productivity in the .NET ecosystem by a factor of ten.

I'm going to write at least one more post about it in the future, because there is still interesting stuff to talk about.
Stay tuned!
You have convinced me to get a LINQPad license!
I have tried the free version, but the lack of autocomplete slows me down too much.
Looking forward to the next post! Maybe a comparison of the licenses?
I never regretted my license, hope will be the same for you :D
For the next post I'm not sure yet... Maybe the NuGet and Visual Studio integrations... | https://dev.to/tanathos/linqpad-more-control-over-your-queries-2hh6 | CC-MAIN-2019-43 | refinedweb | 1,257 | 52.39 |
My code didn't get accepted because the 2nd note constraint was not obeyed when checking my submissions
```python
from collections import Counter as ctr

class Solution(object):
    def getDeg(self, nums, start, end):
        ct = {}
        for i in xrange(start, end):
            ct[nums[i]] = ct.get(nums[i], 0) + 1
        return max(ct.values() + [0])

    def find(self, nums, val, rev=False):
        it = xrange(len(nums) - 1, -1, -1) if rev else xrange(len(nums))
        for i in it:
            if nums[i] == val:
                return i
        return -1

    def findShortestSubArray(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        n = len(nums)
        freqs = {}
        first, last = [-1] * 50000, [-1] * 50000
        for i, v in enumerate(nums):
            if first[v] < 0:
                first[v] = i
                freqs[v] = 1
            else:
                freqs[v] += 1
            last[v] = i
        d = max(freqs.values())
        mvd = set([i for i in freqs if freqs[i] == d])
        ans = n
        for v in mvd:
            if first[v] >= 0:
                ans = min(ans, 1 + last[v] - first[v])
        return ans
```
@hliu94 Me too. I had two WA because of the second constraint: 0 <= nums[i] <= 49999. Then I supposed that the right constraint was 0 <= abs(nums[i]) <= 49999. Fortunately my solution assuming that nums[i] is a 32-bit integer got accepted.
Same here.
Solution with the constraint is simpler, which made me wondering:
how did those people who scored without retries knew that input could be an arbitrary int..
@hliu94 @balabasik Extremely sorry about this. The inputs have been changed to be valid, and the question will be rejudged.
@balabasik Some solutions use HashMap instead of array. For an example, please visit .
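For illustration, a dictionary-based version (one possible sketch, written for Python 3; the function name is mine) tracks the count, first index, and last index per value, so it makes no assumption about the range of nums[i]:

```python
def find_shortest_subarray(nums):
    """Shortest contiguous subarray with the same degree as nums (LeetCode 697)."""
    count, first, last = {}, {}, {}
    for i, v in enumerate(nums):
        count[v] = count.get(v, 0) + 1
        first.setdefault(v, i)  # remember only the first occurrence
        last[v] = i             # keep overwriting with the latest occurrence
    degree = max(count.values())
    # Among all values reaching the degree, take the tightest first..last span.
    return min(last[v] - first[v] + 1
               for v in count if count[v] == degree)
```

Because every index is visited once and lookups are O(1), this stays O(n) regardless of how large or negative the values are.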
All submissions for question 1 have been rejudged and the scoreboard has also been recalculated. Your solution is accepted now.
@1337c0d3r @awice how about this post: Partition to K Equal Sum Subsets lacks some cases.
@1337c0d3r @awice thanks for the effort guys.
Pity though that precious 10 minutes were wasted for rewriting the code :)
Fixing a bug is a whole lot easier when you know how it occurred, but that may not always be the case. Once the software has been shipped, you are left at the mercy of customers, who may not always report the crash.
When the code crashes, you log the errors in the log file, and hence continues the journey of a developer to trace the occurrence of the bug by looking through the log files. Guessing the root cause of the crash from the log file may take a lot of your valuable time.
Is there an easier way to troubleshoot the cause of an error in your software application? Raygun offers a set of interesting solutions to keep an eye on errors when they arise in your web and mobile applications.
From the official documentation, Raygun offers:
Complete visibility into problems your users are experiencing and workflow tools to solve them quickly as a team.
Raygun offers four tools to make it easier to deal with errors and crashes in your application:
In this tutorial, you'll learn how to integrate Raygun tools with your web application to monitor and trace bugs. For this tutorial, you'll be integrating Raygun tools with an Angular web application.
You can use Raygun with a number of programming languages and frameworks. For the sake of this tutorial, let's see how to get started using Raygun with an Angular web application.
To get started, you need to create an account on Raygun. Once you have created the account, you will be presented with a screen to select the preferred language or framework.
In this tutorial, you'll learn how to get started with using Raygun on an Angular web application.
From the list of frameworks, select the Angular framework. You will be presented with a screen to select Angular (v2+) or Angular1.x.
Since you are going to learn how to integrate Raygun with Angular 4, focus on the tab Angular (v2+).
Before integrating Raygun with Angular, you need to create an Angular application. Let's get started by creating an Angular application.
First, you'll need to install the Angular CLI globally.
npm install -g @angular/cli
Create an Angular app using the Angular CLI.
ng new AngularRaygun
You will have the Angular application created and installed with the required dependencies.
Navigate to the project directory and start the application.
cd AngularRaygun npm start
You will have the application running on.
raygun4jslibrary using the Node Package Manager (npm).
npm install raygun4js --save
Inside the
src/config folder, create a file called
app.raygun.setup.ts.
Copy the setup code from Step 2 of the Angular (v2+) instructions and paste it into the `app.raygun.setup.ts` file.
Import the `RaygunErrorHandler` in the `app.module.ts` file inside the Angular application, and add the custom error handler. Here is how the `app.module.ts` file looks:
```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { ErrorHandler } from '@angular/core';
import { RaygunErrorHandler } from '../config/app.raygun.setup';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule],
  providers: [{ provide: ErrorHandler, useClass: RaygunErrorHandler }],
  bootstrap: [AppComponent],
})
export class AppModule {}
```
Now you have added a custom error handler, `RaygunErrorHandler`, which will handle the errors.
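Conceptually, such a handler just intercepts every error Angular catches and forwards it to a reporting function such as rg4js('send', ...). Here is a framework-free sketch of that pattern (illustrative only, not Raygun's actual source; the createErrorHandler name is mine):

```javascript
// Toy model of a custom error handler that forwards to a crash reporter.
function createErrorHandler(report) {
  return {
    handleError(error) {
      // Forward the error payload to the reporter (e.g. rg4js('send', ...)).
      report({ error: error instanceof Error ? error.message : String(error) });
      // A real handler would usually also log the error locally.
    },
  };
}

const reported = [];
const handler = createErrorHandler((payload) => reported.push(payload));
handler.handleError(new Error('No provider for Router!'));
```

Registering such a handler as Angular's ErrorHandler provider (as in the module above) is what routes uncaught application errors into the reporting pipeline.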
Let's add some code to create an error. Import the `Router` in the `app.component.ts` file.
import { Router } from '@angular/router';
Modify the constructor method as shown:
constructor(private router: Router) {}
The above code will throw an error when you run the application since it hasn't been imported in the AppModule. Let's see how Raygun captures the errors. Save the above changes and restart the application.
Point your browser to. Check the browser console and you will have the errors logged.
When you run the application, you will have an error logged in the browser console.
NullInjectorError: No provider for Router!
From the Raygun application, click on the Dashboard tab on the left side, and you will have detailed information about the requests logged by Raygun.
As seen in the Raygun dashboard, it shows the session count, recent request, error instance counts, etc., related to the Angular application which you configured with Raygun.
Click on the recent requests displayed on the right side of the dashboard, and you will have detailed information related to the particular request.
Application crashes are a common scenario when dealing with software applications. Lots of these crashes occur in real-time scenarios and are hence difficult to track without a proper crash reporting system in place.
Raygun provides a tool called Crash Reporting which provides a deeper insight into application crashes. Let's have a look at how Crash Reporting works.
You have a bug in your Angular app which is crashing it. Let's see how it gets reported using Raygun Crash Reporting.
Click on the Crash Reporting tab from the menu on the left-hand side. You will have the error report listed.
In the Raygun crash reporting tab, it shows the errors that occurred in the application. In the tabs shown above, the errors have been categorized into Active, Resolved, Ignored, and Permanently ignored.
The error that you encountered while running the application has been logged under the Active tab.
On clicking the listed error, you'll be redirected to another page with detailed information related to the error. On this page, you'll have information such as the error summary, HTTP information, environment details in which the error occurred (such as the OS, browser, etc.), raw error information, and the error stack trace.
When displaying information related to a particular error, Raygun provides you with the features to change the state of the errors as per your fix. You can change the status to active, resolved, ignored, etc.
Raygun's crash reporting tool provides a feature to add comments to the errors, which is really helpful in discussing details about the bug when working in a team.
Crash reporting comes with a couple of settings which make it easier for the user to manage the errors that have occurred in the application.
It provides you with the option to enable live refresh, first seen date on an error group, and the user count on the dashboard.
You have the option to make bulk error status changes and the option to remove all the errors that occurred in the application.
Raygun provides an option to filter requests based on the IP address, machine name, etc. If you don't want to track an error from a particular IP address, you can create an inbound filter, and the error from the application running on that IP address won't be tracked further.
Let's try to add a filter for your application running on 127.0.0.0.1 and see if it gets tracked.
From the left side menu, under the Crash Reporting tab, click on the Inbound Filters link. Add the IP address
127.0.0.0.1 to the filter list.
Now, if you try to run the application, on crashing it won't get tracked in the crash reporting screen since it's been filtered out.
You can also add filters based on machine names, HTTP, build versions, tag, and user agent.
Most of the issues encountered when the user is using the software go unreported. The probability of a frustrated user reporting an issue is quite low. Hence, you tend to lose the user feedback to improve the quality of your software.
Raygun provides an affected user tracking report. This report shows the list of users from your application who have encountered errors. It gives a full view of how that particular user encountered that particular error. You can view this report by clicking on the Users tab on the left side of the screen.
In your Angular application, you haven't yet used the affected user details feature of Raygun. Hence in the affected user tracking report you will find the user details as anonymous along with the error details.
Click on the Anon User link from the user tracking information, and you'll see the detailed information related to that particular anonymous user. Detailed such as the active error info, user experience, sessions, devices used by the user, etc., will all be displayed in the user report.
You can add the user info details to the Raygun config file. Add the following code to the
config/app.raygun.setup.ts file to send the user info details to Raygun.
rg4js('setUser', { identifier: 'roy_agasthyan_unique_id', isAnonymous: false, email: 'royagasthyan@gmail.com', firstName: 'Roy', fullName: 'Roy Agasthyan' });
Here is how the
config/app.raygun.setup.ts file looks:
import * as rg4js from 'raygun4js'; import { ErrorHandler } from '@angular/core'; const VERSION_NUMBER = '1.0.0.0'; rg4js('apiKey', 'FehB7YwfCf/F+KrFCZdJSg=='); rg4js('setVersion', VERSION_NUMBER); rg4js('enableCrashReporting', true); rg4js('enablePulse', true); rg4js('setUser', { identifier: 'roy_agasthyan_unique_id', isAnonymous: false, email: 'royagasthyan@gmail.com', firstName: 'Roy', fullName: 'Roy Agasthyan' }); export class RaygunErrorHandler implements ErrorHandler { handleError(e: any) { rg4js('send', { error: e, }); } }
Save the above changes and reload the Angular web application. Go to the Raygun application console and click on the Users tab from the left side menu. You will be able to see the new users displayed in the list of affected users.
Click on the user name to view the details associated with the particular user.
Raygun's Real User Monitoring tool gives you an insight into the live user sessions. It lets you identify the way the user interacts with your application from the user environment and how it affects your application's performance.
Let's run your Angular application and see how it's monitored in the Real User Monitoring tool. Click on the Real User Monitoring tab in the menu on the left-hand side. You will be able to view the live user details and sessions.
By clicking on the different tabs, you can monitor the performance of the requested pages.
It gives information on the slowest and the most requested pages. Based on a number of metrics, you can monitor the pages with high loading time and fix them to improve the application's performance.
There are a number of other tabs in Real User Monitoring which give useful insight into user information based on different parameters like browsers, platforms, and user locations.
When you release a new version of your software, it's expected to be a better version with bug fixes and patches for the issues reported in earlier versions.
Raygun provides a tool to track the deployment process and to monitor the releases. Click on the Deployments tab from the left side menu and you will be presented with information on how to configure Raygun with your deployment system. Once you have it configured, you'll be able to view the detailed report related to each release.
Setting up a deployment tracking system will enable you to get deeper insight into each of the releases. You can monitor the trend and see whether you are improving the build quality or taking it down. With each new release, you can compare the error rate and track any new errors that cropped up in the releases.
I recommend reading the official documentation to see how to integrate Raygun deployment tracking with your deployment system.
In this tutorial, you saw how to get started using Raygun with an Angular web application. You learnt how to use the Crash Reporting tool to monitor and trace the occurrence of a crash. Using the Real User Monitoring tool, you saw how to understand the user experience details such as the page load time, average load time, etc.
The User Tracking tool lets you monitor and categorise errors and crashes based on the application users. The Deployment Tracking tool helps you track each release of your application for crashes and errors and lets you know how it's affecting the overall health of your application.
For detailed information on integrating Raygun with other languages and frameworks, I would recommend reading the official Raygun documentation.
If you have any questions and comments on today's tutorial, please post them… | https://www.4elements.com/nl/blog/read/error_and_performance_monitoring_for_web_mobile_apps_using_raygun/ | CC-MAIN-2019-18 | refinedweb | 2,021 | 55.44 |
I have an action called
'toggle_pause' which pauses a sequence if it is playing, and plays the sequence if it is paused. My accept statement looks like
def update_key_map(self, action_name, action_state): self.keymap[action_name] = action_state def other_func(self): ... self.accept('space', self.update_key_map, 'toggle_pause', True) self.accept('space-up', self.update_key_map, 'toggle_pause', False)
Meanwhile I have a task that watches the state of
'toggle_pause' and if
True, it pauses the sequence if its playing and it plays the sequence if its pausing:
def shot_animation_task(self, task): if self.keymap['toggle_pause']: if self.sequence.isPlaying(): self.sequence.pause_animation() else: self.sequence.resume_animation() ... return task.cont
In a sense, this works exactly as I would expect. But the problem is that between
space and
space-up, the task is called multiple times. So the effect is that the toggle undoes itself if the task is called an even number of times, and only functions if the task is called an odd number of times.
Does anyone have advice for solving this? I have considered creating a quarter second timeout whenever
toggle_pause is activated, but that doesn’t seem very elegant. I’m sure there is a better way.
Thanks in advance.
EDIT: Perhaps setting a timeout isn’t that inelegant after all? I just started messing around with
task.again. For example, I can do this:
def pause_task(self, task): if self.keymap['toggle_pause']: if self.sequence.isPlaying(): self.sequence.pause_animation() else: self.sequence.resume_animation() task.delayTime = 0.25 return task.again return task.cont
This will give the user 0.25 to release the spacebar. All this really requires is making this task a dedicated method so no other code in my original task method will be affected by this delay. | https://discourse.panda3d.org/t/accept-watches-for-a-toggle-pause-message-but-condition-is-met-over-multiple-frames/27077 | CC-MAIN-2022-27 | refinedweb | 291 | 52.36 |
You can discuss this topic with others at
Read reviews and buy a Java Certification book at
Determine the effect upon objects and primitive values of passing variables into methods and performing assignments or other modifying operations in that method.
This objective appears to be asking you to understand what happens when you
pass a value into a method. If the code in the method changes the variable, is
that change visible from outside the method?. Here is a direct quote from Peter
van der Lindens Java Programmers FAQ (available at)
//Quote
All parameters (values of primitive types and values that are references to objects) are passed by value. However this does not tell the whole story, since objects are always manipulated through reference variables in Java. Thus one can equally say that objects are passed by reference (and the reference variable is passed by value). This is a consequence of the fact that variables do not take on the values of "objects" but values of "references to objects" as described in the previous question on linked lists.
Bottom line: The caller's copy of primitive type arguments (int, char, etc.) _do not_ change when the corresponding parameter is changed. However, the fields of the caller's object _do_ change when the called method changes the corresponding fields of the object (reference) passed as a parameter.
//End Quote
If you are from a C++ background you will be familiar with the concept of passing parameters either by value or by reference using the & operator. There is no such option in Java as everything is passed by value. However it does not always appear like this. If you pass an object it is an object reference and you cannot directly manipulate an object reference.
Thus if you manipulate a field of an object that is passed to a method it has the effect as if you had passed by reference (any changes will be still be in effect on return to the calling method)..
Take the following example
class ValHold{ public int i = 10; } public class ObParm{ public static void main(String argv[]){ ObParm o = new ObParm(); o.amethod(); } public void amethod(){ ValHold v = new ValHold(); v.i=10; System.out.println("Before another = "+ v.i); another(v); System.out.println("After another = "+ v.i); }//End of amethod public void another(ValHold v){ v.i = 20; System.out.println("In another = "+ v.i); }//End of another }
The output from this program is
Before another = 10 In another = 20 After another = 20
See how the value of the variable i has been modified. If Java always passes by value (i.e. a copy of a variable), how come it has been modified? Well the method received a copy of the handle or object reference but that reference acts like a pointer to the real value. Modifications to the fields will be reflected in what is pointed to. This is somewhat like how it would be if you had automatic dereferencing of pointers in C/C++.
When you pass primitives to methods it is a straitforward pass by value. A method gets its own copy to play with and any modifications are not reflected outside the method. Take the following example.
public class Parm{ public static void main(String argv[]){ Parm p = new Par }
The output from this program is as follows
Before another i= 10 In another i= 20 After another i= 10
Given }
1) 10,0, 30
2) 20,0,30
3) 20,99,30
4) 10,0,20
4) 10,0,20
Last updated
11 Jan 2000 | http://www.jchq.net/tutorial/05_04Tut.htm | crawl-001 | refinedweb | 597 | 62.58 |
RubyOSA is a bridge that lets developers control scriptable applications, including the Finder, using the Ruby scripting language. An application is called scriptable when it makes its operations and data available in response to messages called Apple events. RubyOSA provides a bridge between Ruby and the Open Scripting Architecture (OSA), an infrastructure for interprocess communication that uses Apple events as its mechanism for event dispatching and data transport. (AppleScript is the original OSA scripting language, and is still quite popular.)
A scriptable application specifies the set of scripting terms it understands and its scriptable interface in an XML dictionary called an
sdef file (“sdef” for scriptable definition). At runtime RubyOSA parses the scriptable definition of a given application and populates a new namespace with classes, methods, constants, enumerations, and all other symbols described by the definition. It also dynamically creates Ruby proxy objects to represent these symbols and uses OSA mechanisms to build and send Apple events to applications and receive their responses.
RubyOSA has some obvious advantages, especially for Ruby programmers. With it you can control applications on Mac OS X and get requested objects back from them. You can do anything with these object that you can do in regular Ruby code, such as string manipulations and regular expressions. Your code also has access to all installed Ruby modules and libraries. Finally, you can combine RubyOSA and RubyCocoa in the same script to apply the technologies of the Mac OS X frameworks to the access to scriptable applications that OSA makes possible.
Installing RubyOSA
The Basics
The OSA Class
Conversions and Conventions
Some Examples
Documenting Application Dictionaries
You can download the latest version of RubyOSA from its open-source repository and install it on your system by running the following command in a Terminal shell:
The essential idea behind using RubyOSA is to get a proxy instance of a scriptable application and then send messages to it. The messages that you can send are described in the application’s scriptable definition, or dictionary. Let’s start by looking at a simple example (Listing 1).
Listing 1 The
iTunes_inspect.rb script
When you run this script from the command line, it prints information similar to the following lines:
The first thing to notice about the script in Listing 1 is
require ‘rbosa’. This statement loads the rbosa library, which includes the OSA class. The next line of the script is equally important:
This line returns a proxy Ruby object representing a scriptable application, in this case iTunes. (Note that all you have to do specify the name of the application; you don’t have to include its file-system location or its extension.) From this point on, the script sends messages to the application object and the objects it “contains,“ and performs Ruby operations on the results. In RubyOSA’s internal representation of a scriptable application, a hierarchy of objects descends from the application object; sending a message to the application object may return a collection objects, each of which may be a collection of subordinate objects. You can send appropriate messages to each of these objects. Take these lines as an example:
The
sources message to the iTunes proxy object returns an object that implements the Ruby Array interface. proceeds the script.
This might seem simple and straightforward—and it is—but a question might arise: where do you find out which messages you can send to a scriptable application’s hierarchy of objects? RubyOSA includes a documentation tool,
rdoc-osa. Using this you tool you can generate a set of HTML pages that document the scriptable definition of a Mac OS X application. “The Basics” shows the opening page of the iTunes documentation.
If you were to use this documentation, you would find that sending
sources to a proxy object representing the iTunes application returns an array (or list) of OSA::iTunes::Source objects. Sending
playlists to one of these objects returns an array of OSA::ITunes::Playlist objects. And sending
tracks to one of these objects returns an array of OSA::ITunes::Track objects. You can then send
name to one of these objects to get the name of the track.
You might have wondered about the following line in the sample script in Listing 1:
OSA is a Ruby class in its own right, and has other methods besides
app, among them
utf8_strings. Listing 2 describes the methods of the OSA class.
All RubyOSA objects inherit from the
OSA::Element class, which is completely opaque to the user.
With the RubyOSA
app method you can identify scriptable applications in several ways:
By name, simply by putting the application name (minus the app extension) between single quotation marks.
Example:
OSA.app(‘Finder’)
This simple style of argument is a convenience for
:name => ‘AppName’. RubyOSA uses Launch Services to locate the scriptable application to launch and use.
By file-system path, using the
:path key.
Example:
OSA.app(:path => ‘/Users/jdoe/Applications/BBEdit.app’)
By the application’s bundle ID, using the
:bundle_id key.
Example:
OSA.app(:bundle_id => ‘com.apple.iTunes’)
By an application’s four-character creator signature (if any), using the
:signature key.
Example:
OSA.app(:signature -> ‘woof’)
The
app method also lets you specify applications on remote machines as well as locally—thus you can control and get data from applications that aren’t even installed on your local system. After specifying the application by name, you add one to three key-value pairs identifying the machine, the user name, and the password. For each pair, use the
:machine,
:username, and
:password keys, respectively. For example:
There are a few things to be aware of when calling the
app method to get proxy instances of remote applications: First, you may only specify the remote-access key-value pairs when the first argument specifies the application by name. Second, if you omit the
:username or
:password keys (or both), RubyOSA prompts for the user name and password (or both).
The Remote Apple Events checkbox in the Sharing pane of System Preferences on the remote machine should be checked for your RubyOSA script to control its applications.
You may only specify the remote-access key-value pairs when the first argument specifies the application by name.
if you omit the
:username or
:password keys (or both), RubyOSA prompts for the user name and password (or both).
When you send a message whose name has a plural form (for example,
sources), what you get in return may look and behave like an Array, but it is actually an list element (OSA::ObjectSpecifierList ) containing object specifiers—that is, references to real objects. Although the Ruby Array class is not directly used in this case, the OSA::ObjectSpecifierList class conforms to the Array interface; in other words, it mixes the Enumerable module. Therefore you can call most of the methods on an object-specifier list that you can call on an Array.
Methods with names such as
title and
name refer to properties in a scriptable definition and return the appropriate Ruby objects (in both these cases, String objects). On the other hand, methods such as
current_track return an object specifier, in this case an object specifier of the OSA::ITunes::Track class. The rule that RubyOSA follows to distinguish between these two general types of properties is that when the type of the property is defined within the target application's scriptable definition (as
current_track is), it returns an object specifier. Otherwise it assumes the object is of a primitive type (
String,
Integer,
Date, and so on) and it resolves the return value directly by querying for the type with an extra Apple event.
To better appreciate the varieties of ways in which you might use RubyOSA, let’s examine a few of the examples installed in
/Developer/Examples/Ruby/RubyOSA. The script in Listing 3 creates a proxy instance of the Finder application and from it requests the current contents of the Desktop. Using Ruby regular expressions and string-manipulation methods, it formats and prints these items.
Listing 2 The
Finder_show_desktop.rb script
Listing 3 is a script that displays the album artwork associated with the iTunes track that is currently playing. Note that it creates a temporary file to hold the image data and then makes a
system call to open this file in the Preview application. With the
system call your script can do anything that can be done at the command line.
Listing 3 The
iTunes_artwork.rb script
What is noteworthy about the script in Listing 4 is that it exchanges data between proxy instances of two applications, TextEdit and Mail. It gets the selected messages in all current Mail viewers and copies each the content of each message to a TextEdit window..
You can use the
rdoc-osa tool to generate HTML or
ri documentation for the dictionary (that is, scriptable definition) of an application. Using
rdoc-osa is simple. For example, to generate HTML documentation of the iTunes dictionary, you would enter the following command on a shell’s command line:
The
ruby-osa tool generates the documentation from the application’s dictionary and puts in in a folder named
doc in the current working directory. Instead of identifying the application by name, you can identify it by path, bundle ID, or four-character creator signature. To generate
ri documentation instead of HTML, append “
--ri“ to the command.
Note:
ri is a Ruby tool for viewing documentation in a format familiar to Ruby programmers. To learn more about
ri, type “
ri --h“ at the command line.
To get help on
rdoc-osa, enter “
rdoc-osa --h“ at the command line. The
rdoc-osa tool accepts all options used in
rdoc, the documentation generator for Ruby classes and modules. Enter “
rdoc --h“ at the command line to learn about the options for that tool.
Last updated: 2007-10-31 | http://developer.apple.com/documentation/Cocoa/Conceptual/RubyPythonCocoa/Articles/Using%20RubyOSA.html | crawl-002 | refinedweb | 1,646 | 50.97 |
hello i would like to know how can i use these isString,isInt,isDouble functions.
i really dont have a clue if these functions do exist.
but i want to create a program that lets me input anything.
and it would identify if it is a string an int or a double.
i would be using a String variable.
here's my code.
import java.util.*; class program { public static void main(String[]args) { Scanner input=new Scanner(System.in); String a; System.out.print("Enter anything: "); a=input.next(); if(a.isString) System.out.println("this is a String"); else if(a.isDouble) System.out.println("this is a Double"); else if(a.isInt) System.out.println("this is an Int"); else System.out.println("no comment"); } }
when i runned this. the error says:
cannot find symbol: variable isString
cannot find symbol: variable isDouble
cannot find symbol: variable isInt.
i need help on how to use these functions.
or any function that could let me identify a string a double and an int
Thanks in advance. | https://www.daniweb.com/programming/software-development/threads/295236/isstring-isint-isdouble-function | CC-MAIN-2018-43 | refinedweb | 177 | 62.24 |
Print a usage message (QNX Neutrino)
use [-aeis] [-d directory] [-f filelist] files
QNX Neutrino, Linux, Microsoft Windows
The use utility displays a usage message for the specified executable programs or shell scripts.
The use utility searches for files, using the default command search (PATH), and displays the usage message (if any) that it finds in the load files or shell scripts.
If the LANG environment variable is set, a usage message of that language is displayed, if available. Alternate language usage messages are not available within shell scripts. However, it's easy to edit shell script messages. While usage messages included with standard versions of QNX Neutrino are in English only, it's possible to add alternate language usage messages by placing the revised usage message in a separate file, and using the usemsg utility to insert the usage message in the executable in question.
Usage messages in shell scripts
Usage messages are implemented in binary executable programs using a special form of resource record in the load modules. Usage messages are implemented in shell scripts using a format similar to that used in the C source code and interpreted by the usemsg utility.
In shell scripts, the use utility scans each line from the beginning of the script, looking for a line starting with the # character (i.e. a comment) and containing the string __USAGE. The usage message begins on the next line and consists of all subsequent lines up to, but not including, the first line that either starts with #endif or starts with a character other than #.
Here's a sample usage message in a shell script:
#ifdef __USAGE #%C thread_id #Where: # thread_id is the thread ID you want to act on #endif
If the shell script is called foo, and you invoke use foo, the following message is displayed:
foo thread_id Where: thread_id is the thread ID you want to act on
In the above shell script fragment, the message starts with:
#%C thread_id
and ends with:
# thread_id is the thread ID you want to act on
Within the body of the usage message, the leading #s are stripped by the use utility and don't form part of the message that's displayed. As with the C language usage message convention (see usemsg), a %CTab at the start of a line is replaced by the program name (or filename of the shell script) and a tab character at the start of a line spaces over the same number of spaces as the last previous occurrence of %CTab.
You can place the usage message almost anywhere in most shell scripts. Placing it at the beginning results in quicker response for extracting the usage message at the expense of a very slight slowdown in execution of the shell script. If you're running a shell that doesn't recognize lines beginning with # as comments, you should place the usage message after an explicit exit.
Display a usage message for the ls utility:
use ls | http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.utilities/topic/u/use.html | CC-MAIN-2020-10 | refinedweb | 498 | 59.16 |
import "go.chromium.org/luci/starlark/starlarktest"
Package starlarktest contains utilities for running Starlark tests.
It knows how to run all *.star tests from some particular directory, adding 'assert' module to their global dict and wiring their errors to testing.T.
HookThread makes a Starlark thread report errors and logs to the 't'.
RunTests loads and executes all test scripts (testdata/**/*.star).
type Options struct { TestsDir string // directory to search for *.star files Skip string // directories with this name are skipped Predeclared starlark.StringDict // symbols to put into the global dict // Executor runs a single starlark test file. // // If nil, RunTests will simply use starlark.ExecFile(...). Executor func(t *testing.T, path string, predeclared starlark.StringDict) error }
Options describe where to discover tests and how to run them.
Package starlarktest imports 10 packages (graph). Updated 2019-08-17. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/starlark/starlarktest | CC-MAIN-2019-35 | refinedweb | 144 | 62.04 |
Hi everyone.
I've just started with C++, and everything's going well so far. The book I'm using has some generally good examples/exercises. I was hoping someone could proof-check this simple program for me though. Not asking anyone to do my homework, the program compiles and functions correctly (I believe). I just wanted some professional advice/opinions.
I'm supposed to request input from a user for the amount of time in seconds for any event to occur. here's what I've come up with:
#include <iostream>
using namespace std;
int main()
{
// Declare variables.
int total, hours, seconds, minutes;
// Prompt user for input.
cout << "Enter time in seconds:" << endl;
cin >> total;
// Identify variables.
hours = total / 3600;
hours = hours % 3600;
minutes = total / 60;
minutes = minutes % 60;
seconds = total % 60;
// Output conversion, including colons as separators.
cout << hours << ":" << minutes << ":" << seconds << endl;
return 0;
} | http://forums.devshed.com/programming/951384-prove-program-isnt-100-accurate-last-post.html | CC-MAIN-2018-17 | refinedweb | 146 | 67.25 |
The .NET SDK for Lucidtech AI Services (LAS) can be downloaded from nuget
$ dotnet add package Lucidtech.Las
After Lucidtech.Las is installed and you have received credentials, you are ready to enhance your document-flow with the las client:
using System;using Lucidtech.Las;var client = new Client();var models = client.ListModels();var documents = client.ListDocuments();var workflows = client.ListWorkflows();
If you are new to LAS we recommend you to check out the key concepts for a better understanding of what is possible with LAS.
If you are in the need of explicit examples on how to create complex workflows, check out the tutorials
The .NET SDK is open-source, and the code can be found here. Contributions are more than welcome. | https://docs.lucidtech.ai/reference/dotnet | CC-MAIN-2021-25 | refinedweb | 123 | 60.31 |
After doing some reading on FILE data type I decided to practice, I came up with this little program that reads any part of any file:
I'd like some suggestions, critics, tips on how to make my code better.I'd like some suggestions, critics, tips on how to make my code better.Code:#include <iostream> #include <string> #include <stdio.h> using namespace std; int main() { FILE * file; char * file_name = new char[255]; int read_from, lenght, file_size; char * output; cout << "What file do you want to be read? "; cin >> file_name; if(!(file = fopen(file_name, "r+"))) { cout << "Could not open file."; return 0; } fseek(file, 0, SEEK_END); file_size = ftell(file); rewind(file); cout << "File " << file_name << "(" << file_size << "bytes) sucessfully opened.\n\n"; cout << "Read from offset: "; cin >> read_from; fseek(file, read_from, SEEK_SET); if(read_from > file_size) { cout << "Given file does not reach that far!\n"; return 0; } cout << "How many bytes to be read? "; cin >> lenght; if((read_from + lenght) > file_size) { cout << "Given file does not reach that far!\n"; return 0; } output = new char[lenght]; for(int x = 0; x < lenght; x++) { output[x] = getc(file); } cout << "Output: " << output; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/139262-dealing-files-thru-file-*.html | CC-MAIN-2015-18 | refinedweb | 189 | 73.07 |
JSF 2.0 Views: Hello Facelets, Goodbye JSP
Facelets Demo Application
Now, you are ready to use Facelets in a web application. For the rest of this tutorial, we will use a simple online quiz application as the demo example. The application allows a registered user to answer five simple questions that test his or her general knowledge, and it displays the score towards the end of the quiz. The user can take the quiz only after a successful login.
Consider the Facelets page simplelogin.xhtml, an XHTML page that contains certain JSF tags as well as the following simple login form:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">
<head>
    <title> Test your knowledge!! </title>
    <link rel="stylesheet" type="text/css" href="./css/default.css"/>
</head>
<body>
    <div class="header">Online Examination Application</div>
    <h:form>
        <table>
            <tr>
                <td>e-Mail ID</td>
                <td>
                    <h:inputText id="email" value="#{loginBean.email}" required="true"/>
                    <h:message for="email"/>
                </td>
            </tr>
            <tr>
                <td>Password:</td>
                <td>
                    <h:inputSecret id="password" value="#{loginBean.password}" required="true"/>
                    <h:outputLink value="forgotPassword.xhtml">
                        <h:outputText value="Forgot password?"/>
                    </h:outputLink>
                    <h:message for="password"/>
                </td>
            </tr>
            <tr>
                <td>
                    <h:commandButton value="Login" action="#{loginBean.login}"/>
                </td>
                <td><h:commandButton value="Reset" type="reset"/></td>
            </tr>
        </table>
        Want to
        <h:outputLink value="register.xhtml">
            <h:outputText value="Register"/>
        </h:outputLink> ??
    </h:form>
    <div class="footer"> &#169; E-commerce Research Labs, E &amp; R, Infy</div>
</body>
</html>
Notice the JSF tag libraries declared on the <html> tag (denoted by the prefixes h and f). The page defines a header, a login form, links to the registration form, and a footer. As is evident from the code, knowledge of HTML and JSF tags is sufficient to build views using Facelets.
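In a working version of this form, the input components would be bound to a managed bean through EL expressions (for example, value="#{loginBean.email}"), and the Login button's action attribute would invoke an action method on that bean. The article does not show this class, so the bean, property, and navigation-outcome names in the following sketch are illustrative assumptions:

```java
import java.io.Serializable;

// Illustrative backing bean for the login form; all names here are assumptions,
// since the article does not list this class. In a real JSF 2.0 application the
// class would also carry @ManagedBean and a scope annotation such as @SessionScoped.
class LoginBean implements Serializable {

    private String email;
    private String password;

    // JSF calls these setters with the submitted form values before the action runs.
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }

    // Action method bound to the Login button. The returned string is a JSF
    // navigation outcome; with JSF 2.0 implicit navigation, returning "quiz"
    // would forward to quiz.xhtml, while "login" keeps the user on the form.
    public String login() {
        boolean valid = email != null && email.contains("@")
                && password != null && !password.isEmpty();
        return valid ? "quiz" : "login";
    }
}
```

On a form submit, JSF populates the bean's properties from the request and then evaluates the action expression, so login() runs only after setEmail() and setPassword() have been called.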
Facelets Page-Templating
As mentioned earlier, the main advantage of Facelets technology is the page-templating feature. The header and footer information is common to all pages, and including it in every individual page would quickly become a maintenance nightmare. To avoid this, you define a Facelets template that provides a generic layout for all the view pages while leaving scope for customization.
A Facelets template is again a simple XHTML file that uses Facelets tags to define various logical divisions of the view, such as header, footer, and content. You will create a template file named layout.xhtml and place it in a templates folder to differentiate between templates and other Facelets files. Here is the layout.xhtml file:
<!-- XHTML DOCTYPE Declaration --> <html xmlns="" xmlns: <h:head> <title> Test your knowledge!! </title> <link rel="stylesheet" type="text/css" href="./css/default.css"/> <ui:insert</ui:insert> </h:head> <h:body> <div class="header">Online Examination Application</div> <ui:insertDefault Content</ui:insert> <div class="footer"> © E-commerce Research Labs, E & R, Infy</div> </h:body> </html>
The first thing to notice is the use of the Facelets namespace identified by the prefix
ui. You define the header and footer content that will be common across all views in the application. In addition, you also define a logical division named "
content" using a
<ui:insert> tag.
A
<ui:insert> tag is a Facelets tag used to define a logical division in a template that is associated with a unique name. The client of the template will provide the content of these divisions. If a client fails to do so, then the content present in the body of the
<ui:insert> tag will be used. Let us rewrite the login form shown earlier using this template in
login.xhtml:
<!-- XHTML DOCTYPE Declaration --> <html xmlns="" xmlns: <ui:composition <ui:define <h:form> <!-- Rebuild the form as before --> </h:form> </ui:define> </ui:composition> </html>
A Facelets page can use a template by using the
<ui:composition> tag available in the Facelets namespace. The
template attribute of this tag represents the location of the template file. Anything outside the
<ui:composition> tag will be ignored. The
html tag is also used mainly to declare different namespaces. The
<ui:define> tag is used to provide the content for the logical divisions as defined in the template. The
name attribute identifies a logical division, and the content is defined as the body of the tag. Note that the
<ui:define> tag has to be defined within the
<ui:composition> tag.
Facelets Replace JSP
Facelets are one of most interesting features in the JSF 2.0 specification. Facelets are considered simpler and more powerful than JavaServer Pages because they support the features discussed in this article, namely page templating, faster execution, and dynamic tag attributes. In fact, Facelets are recommended as the default view for JavaServer Faces from this release onward.
Stay tuned as we will explore more notable new features in JSF 2.0 in upcoming articles.<< | http://www.developer.com/java/web/article.php/10935_3867851_2/JSF-20-Views-Hello-Facelets-Goodbye-JSP.htm | CC-MAIN-2017-09 | refinedweb | 756 | 55.54 |
The previous Command Line Rocks! article focused on creating an app with the smallest possible amount of code. This article will show you how to:
- Compile multiple source files into a single executable binary
- Leverage the power of external libraries
- Create an easy-to-manage project structure
A quick recap
We have an app called GoodBye IDE that prints “Goodbye IDE!” to the stdout.
To create our app we have 2 files in a single folder (called $PROJECT_DIR):
- main.c – A C source file
- bar-descriptor.xml – A descriptor file which contains information about our app
And to compile, package and deploy our app to the simulator we have to run the following commands:
Compile: qcc -Vgcc_ntox86 main.c -o main
Package: blackberry-nativepackager -package GoodbyeIDE.bar bar-descriptor.xml
Deploy: blackberry-deploy -installApp 192.168.2.128 GoodbyeIDE.bar
Note that 192.168.2.128 is the simulator IP address, yours will be shown in the bottom left of the simulator window.
Improve our app
Our current app isn’t very realistic. How many projects do you know which have a single source file?
Let’s make our app do something more interesting, like fooling our friends into thinking they’ve got a new message! We’ll introduce multiple source files, a library dependency and a build process that is closer to a real-world project.
Add source files
Add these two files to your project:
prank_notifier.h
#ifndef PRANK_NOTIFIER_H #define PRANK_NOTIFIER_H #include <bps/soundplayer.h> void notify(); #endif
prank_notifier.c
#include "prank_notifier.h" void notify() { soundplayer_play_sound("notification_general"); }
Let’s also update main.c to call our new function:
#include <stdio.h> #include "prank_notifier.h" int main() { printf("Goodbye IDE!\n"); bps_initialize(); notify(); bps_shutdown(); return 0; }
So, what do these files do?
- prank_notifier.h
- Uses preprocessor directives to avoid multiple includes of the same code
- Includes soundplayer.h which provides functions for playing system sounds
- Declares the notify() function which is defined in prank_notifier.c
- prank_notifier.c
- Includes prank_notifier.h
- Defines notify() which plays the default notification sound
- main.c
- Includes prank_notifier.h
- Initialises and shuts down BlackBerry Platform Services
- Calls notify()
Update folder structure
We could keep all our files in the same folder, however, things are going to get messy when we start generating temporary files during the build process. To keep things tidy create the src, include and build folders and move our existing files into them. Here’s what each folder should contain:
Building our app
The build process now requires more steps as we have multiple source files. It can be summarised as:
- Convert the source files (prank_notifier.c and main.c) into object files (.o). This process actually comprises three steps, all handled by qcc:
- Preprocessing – Processing the preprocessor directives (e.g. #include and #define) to create preprocessed source code
- Compiling – Converting preprocessed source code into assembly language for our target architecture
- Assembling – Converting assembler code into machine code which can be executed by the target CPU
- Link the object files and the dependent BPS library to create an executable binary
- Package the binary into a BAR file using our executable binary and bar-descriptor.xml
Compile the source files
Use the following commands to compile our source files into object files:
qcc -Vgcc_ntox86 -Iinclude -c src/prank_notifier.c -o build/prank_notifier.o qcc -Vgcc_ntox86 -Iinclude -c src/main.c -o build/main.o
These commands are the same as those we used in the first article but we have some new command flags:
-I<folder> – Adds a folder to the search path for include files. This is so that when preprocessing prank_notifier.c and main.c, the preprocessor can find the required header file prank_notifier.h.
-c – Do not link, just create an object file. This stops the linker from running and producing errors. We need this because we’re not ready to link yet, we’ll do that in the next step.
Link the object files
Linking is achieved by specifying all the object files and any dependent libraries:
qcc -Vgcc_ntox86 -lbps build/prank_notifier.o build/main.o -o build/main
New command flags:
-I<library_name> – Tells the linker to link against the named lib<library_name> in this case it’s libbps. Note that you don’t have to specify lib before the library name.BlackBerry 10 built in libraries can be found in $QNX_TARGET/<platform_architecture>/lib and $QNX_TARGET/<platform_architecture>/usr/lib.
More information about the built in libraries can be found here.
We should now have an executable binary main, so now let’s package it into an app.
Package and run the app
Update the <asset> tag in bar-descriptor.xml to point to the new binary path:
<asset path="build/main" entry="true">main</asset>
Create the BAR package:
blackberry-nativepackager -package build/GoodbyeIDE.bar bar-descriptor.xml
Upload the BAR file to the simulator:
blackberry-deploy -installApp 192.168.2.128 build/GoodbyeIDE.bar
The app icon should appear on the simulator:
Now, if you tap the icon you should hear the default notification sound. Perfect for sneaking up behind people and fooling them into thinking they have a new message!
Summary
OK, so our new app might not make us many friends but our project does now have a logical structure that we can easily add more source files to. Plus, we’ve been able to leverage an external library. This opens a world of opportunity because we can now use the built in BlackBerry APIs as well as many open source libraries available for BlackBerry 10.
In the next article we’ll look at how the build process can be managed and improved by using Makefiles. Say goodbye to all these long compiler commands and hello to super-fast build and deployment!
Read the next article in the series, here. | http://devblog.blackberry.com/2013/05/command-line-rocks-part-2/?relatedposts_exclude=14899 | CC-MAIN-2017-39 | refinedweb | 965 | 57.98 |
Apache Camel 2.19 Released - What's New
The latest release of Apache Camel brings improvements to Spring Boot, Java, CamelCatalog, documentation, and much more.
Join the DZone community and get the full member experience.Join For Free
Apache Camel 2.19 was released on May 5th, 2017, and it's about time I do a little blog about what this release includes in terms of noteworthy new features and improvements.
1. Spring Boot Improvements
The Camel 2.19 release has been improved for Spring Boot in numerous ways. For example, all the Camel components now include more details in their Spring Boot metadata files for autoconfiguration. components have improved autoconfiguration, which makes it even easier to use, such as camel-servlet where you can easily setup the context-path from the application.properties file. We have made available to configure many more options on CamelContext as well, so you can tweak JMX, stream caching, and many other options.
2. CamelCatalog Improvements
The Camel Catalog now includes fine-grained details of every artifact shipped in the release, and for the other kinds such as camel-hystrix, camel-cdi, etc. The catalog now also includes automatically generate and keep a full list of all the artifacts on the website, and when each artifact was added. Therefore, you can see whether it's a new artifact in this release, or was introduced in Camel 2.17.
There is a specialized runtime version of the CamelCatalog provided in camel-core RuntimeCamelCatalog, which allows you to tap into the catalog when running Camel. The offline catalog is camel-catalog, which is totally standalone.
3. Camel Maven Plugin Improvements
There is a new validate goal on the camel-maven-plugin that automatically update the routes on the fly. I have previously blogged and recorded a video of this.
5. Service Call EIP Improvements
Luca has been busy, it's now easier as we add a producer side to the Rest DSL. This means you can call REST services using the REST component that can then plugin and use any of the HTTP based components in Camel such as restlet, http4, and undertow.
We also added a new camel-swagger-rest component that makes it even easier to call Swagger REST APIs, where you can refer to their operation id, and then let Camel automatically map to its API.
7. CDI With JEE Transactions
The camel-cdi component now supports JEE transactions so you can leverage that out of the box without having to rely on spring transactions anymore.
8. Example Documentation Improved
We now generate a table with all the examples sorted by category. This allows users to find the beginner examples, rest, cloud, etc., and also ensure that we keep better documentation for our examples in the future as the generator tool will WARN if we have examples without documentation. Also, all examples have a readme file with information about the example and how to run.
9. Spring Cloud Components
There that the URI syntax has changed in a backwards-incompatible way. So, if you are upgrading, make sure to change your URIs. However, the new syntax resembles how other messaging components do it by using kafka:topicName?options.
Also, the component can now automatic convert to the Kafka serializer and deserializer out of the box, so you don't have that hassle. We provide converts to the typically used such as byte[] and string types.
The component also has been upgraded to the latest Kafka release and it's now possible to store the offset state offline so you can resume from this offset in case you stop and later start your application.
It's route is expected as input and what it returns. For example, you can specify that a route takes in XML and returns JSON. With XML, you can even specify the namespace. Likewise, you can specify Java types for POJO classes. Based on these contracts, Camel is able at runtime to automatically be able to type-covert the message payload (if possible) between these types if needed.
We will continue with more improvements in this area. For example, we hope we can add such capabilities to Camel components so they will be able to provide such information so your Camel routes are more type-safe with the message payloads during routing. Tooling will also be able to tap into this formation and then for example “flag” users with hints about routes not being compatible etc. You can find more details in this example (we have them. Claus is in talks with the vert.x team about this.
13. Java 8 DSL Improvements knee deep in the Java 8 style, then help us identify where we can improve the DSL.
14. Camel Connector
We have introduced a new concept called Camel Connector. However, it is still in the know when someone mentions you on Twitter, then you can use the camel-twitter component. But, it can do ten things and it can take time to understand how to use the component and make it work. So instead, you can build a connector that can just do that, a camel-twitter-mention connector. It’s pre-built and configured to just do that, so all you need to do is to configure your Twitter credentials, and off you go. At runtime, the connector is a Camel component, so from a Camel point of view, they are all components, and therefore it runs as first-class in Camel. We have provided some connector examples in the source code.
15. Many More Components
As usual, there are a bunch of new components in every Camel release and this time we have about twenty new components. You can find the list of new components in the release notes, or on the Camel components website where you can search by the 2.19 release number..
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/apache-camel-219-released-whats-new | CC-MAIN-2022-27 | refinedweb | 988 | 62.07 |
NUnit is a Unit Test framework for the .NET languages. NUnit 1.x was straight port of the JUnit 3.8. With the 2.0 version NUnit was rewritten and redesigned as .NET application making use of Attributes instead of special methods and base classes.
Five versions and over five years later version 2.5 is in Alpha. This release includes support for:
- Data Driven tests - using [TestCase] and [DataSource] allows data to passed into the test case via the Attributes.
- Parallel and Distributed testing - a new test runner (PUnit) allows tests to run in parallel over several machines. This test runner was developed to help with the stress testing of a server.
- Additionnal Asserts: Support for comparing file paths or directories without accessing the file system. More support for testing whether code does or doesn't throw an exception.
- Run CSUnit tests: The CSUnitAddin supports running tests from another one of the major .NET test frameworks.
- RowTestExtension: Allows developers to write RowTests for NUnit an alternative to NUnits [TestCase].
- In addition the documentation has been updated.
Other Major Features
- Constraint based Assert model: In addition to the traditional assertions, NUnit allows you to say: Assert.That(myString, new EqualConstraint("Hello")); giving the user the flexibility to add their own constraints that are full participants in the NUnit eco-system.
- Attributes that support declaring: Tests, Setup, Teardown, Fixture Setup/Teardown (per namespace setup/teardown), ...
- Console and GUI test runners
Charlie Poole has also written to clear up confusion around the different versions of NUnit:
A.
Other .NET Unit Test tools include: MBunit, CSUnit, xUnit.Net, NBehave and Gallio - an open, extensible, and neutral test runner designed to support all .NET test tools.
Community comments | https://www.infoq.com/news/2008/05/NUnit_2.5/ | CC-MAIN-2021-04 | refinedweb | 284 | 59.5 |
.
Creating our Project
I will be using the Vue CLI to scaffold out a project for us to start with. To do that you need to have the Vue CLI installed on your system. If you DO NOT have it installed, you can install it globally with this command:
npm install -g @vue/cli
Now we can use the Vue CLI to create our project. Create a new project using this command:
vue create vue-authentication-auth0
You will be asked to pick a preset. Choose “Manually select features” and then select “babel”, “Router” and “Linter / Formatter”.
You will be asked if you want to use history mode for router. Choose “Yes” (should be the default).
You can select any linter you want but for this tutorial I will be selecting “Eslint + Prettier”.
After the Vue CLI is finished, it will give you the commands to change into the new directory that was just created and the command to start the server. Follow those directions. Once the server is started you can open your browser to
localhost:8080. You should see this:
How to Set Up An Auth0 Account
The first thing you will need to do is to create an account with Auth0 if you don’t already have one. It is free to create an account. You can create your free account here.
How to Create our.
How to
How to Install the Auth0 SDK
Go back to your Vue application and add the Auth0 Client SDK with this command:
npm install @auth0/auth0-spa-js
How to that won't work.:
import Vue from "vue"; import createAuth0Client from "@auth0/auth0-spa-js"; /** Define a default action to perform after authentication */ const DEFAULT_REDIRECT_CALLBACK = () => window.history.replaceState({}, document.title, window.location.pathname); let instance; /** Returns the current instance of the SDK */ export const getInstance = () => instance; /** Creates an instance of the Auth0 SDK. If one has already been created, it returns that instance */ export const useAuth0 = ({ onRedirectCallback = DEFAULT_REDIRECT_CALLBACK, redirectUri = window.location.origin, ...options }) => { if (instance) return instance; // The 'instance' is simply a Vue object instance = new Vue({ data() { return { loading: true, isAuthenticated: false, user: {}, auth0Client: null, popupOpen: false, error: null }; }, methods: { /** Authenticates the user using a popup window */ async loginWithPopup(options, config) { this.popupOpen = true; try { await this.auth0Client.loginWithPopup(options, config); } catch (e) { // eslint-disable-next-line console.error(e); } finally { this.popupOpen = false; } this.user = await this.auth0Client.getUser(); this.isAuthenticated = true; }, /** Handles the callback when logging in using a redirect */ async handleRedirectCallback() { this.loading = true; try { await this.auth0Client.handleRedirectCallback(); this.user = await this.auth0Client.getUser(); this.isAuthenticated = true; } catch (e) { this.error = e; } finally { this.loading = false; } }, /** Authenticates the user using the redirect method */ loginWithRedirect(o) { return this.auth0Client.loginWithRedirect(o); }, /** Returns all the claims present in the ID token */ getIdTokenClaims(o) { return this.auth0Client.getIdTokenClaims(o); }, /** Returns the access token. 
If the token is invalid or missing, a new one is retrieved */ getTokenSilently(o) { return this.auth0Client.getTokenSilently(o); }, /** Gets the access token using a popup window */ getTokenWithPopup(o) { return this.auth0Client.getTokenWithPopup(o); }, /** Logs the user out and removes their session on the authorization server */ logout(o) { return this.auth0Client.logout(o); } }, /** Use this lifecycle method to instantiate the SDK client */ async created() { // Create a new instance of the SDK client using members of the given options object this.auth0Client = await createAuth0Client({ ...options, client_id: options.clientId, redirect_uri: redirectUri }); try { // If the user is returning to the app after authentication.. if ( window.location.search.includes("code=") && window.location.search.includes("state=") ) { // handle the redirect and retrieve tokens const { appState } = await this.auth0Client.handleRedirectCallback(); // Notify subscribers that the redirect callback has happened, passing the appState // (useful for retrieving any pre-authentication state) onRedirectCallback(appState); } } catch (e) { this.error = e; } finally { // Initialize our internal authentication state this.isAuthenticated = await this.auth0Client.isAuthenticated(); this.user = await this.auth0Client.getUser(); this.loading = false; } } }); return instance; }; // Create a simple Vue plugin to expose the wrapper object throughout the application export const Auth0Plugin = { install(Vue, options) { Vue.prototype.$auth = useAuth0(options); } };
How to Create a Config File
The options object passed to the plugin is used to provide the values for clientId and domain which I mentioned earlier and said we would get.
How to Add the ); } });
How to:
<div id="nav"> <router-linkHome </router-link>| <router-linkAbout</router-link> | <div v- | <button @ Login </button> | <button @ Login Popup </button> |</div> </div> I.
How to Implement Logout
The next step is to implement a Logout. Open up the App.vue file. Add a button for logout like this:
<button @ Logout </button>.
How to:
<router-linkAbout</router-link>
How to Add a Route Guard
We have hidden the link to the About page in the nav if a user is not currently authenticated. But a user can type in the url /about to go directly to the page. This shows that an unauthenticated user can access that page. You can avoid authentication to your Vue application.
Get The Code
I have the complete code in my GitHub account here. If you get the code please do me a favor and star my repo. Thank you!
Using Other Authentication Methods
I have written several follow up articles on adding Authentication to your Vue application using other authentication methods.
Want to use Firebase for authentication, read this article.
Want to use AWS Amplify for authentication, read this article.
Conclusion
Auth0 is an Authentication-As-A-Service product that you can add to your application. It provides very easy to use Authentication.
Hope you enjoyed this article. If you like it please share it. Thanks for reading. And you can read more of my tutorials on my personal website. | https://www.freecodecamp.org/news/how-to-add-authentication-to-a-vue-app-using-auth0/ | CC-MAIN-2022-21 | refinedweb | 943 | 50.12 |
Profiling commands; get the
line_profiler and
memory_profiler extensions, which we will discuss in the following sections.
Timing Code Snippets:
%timeit:
% pre-sorted list is much faster than sorting an unsorted list, so the repetition will skew the result::
import random L = [random.random() for i in range(100000)] print("sorting an unsorted list:") %time L.sort()
sorting an unsorted list: CPU times: user 40.6 ms, sys: 896 µs, total: 41.5 ms Wall time: 41.5 ms:
%¶
A program is made of many single statements, and sometimes timing these statements: .
For more information on
%prun, as well as its available options, use the IPython help functionality (i.e., type
%prun? at the IPython prompt).
Line-By-Line Profiling with
%lprun¶
The function-by-function profiling of
%prun is useful, but sometimes it's more convenient:
%load_ext line_profiler
Now the
%lprun command will do a line-by-line profiling of any function–in this case, we need to tell it explicitly which functions we're interested in profiling:
) for j in range(N)]).
Profiling Memory Use:
%memit and
%mprun¶:
,:
from mprun_demo import sum_of_lists %mprun -f sum_of_lists sum_of_lists(1000000)
The result, printed to the pager, gives us a summary of the memory use of the function, and looks something like this:). | https://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html | CC-MAIN-2018-39 | refinedweb | 210 | 62.38 |
All opinions expressed here constitute my (Jeremy D. Miller's) personal opinion, and do not necessarily represent the opinion of any other organization or person, including (but not limited to) my fellow employees, my employer, its clients or their agents.
The title is a mouthful and accurately implies an alarmingly high jargon-to-code ratio, but I just didn't see any way to write this post without straying
into all of these different subjects. When you try to write an explanatory
article you have to walk a tightrope between a sample problem that's simple
enough to work through in no more than a handful of pages and a sample problem
that's just too simplistic to be valuable. I'll leave it up to you to decide
which end of the spectrum this one falls on. I also had a rough time trying to
decide on the best way to order the topics in the narrative. All I can do is
ask you to scan the following headers if something seems to be missing or I'm
lurching ahead.
Last week I had a flood of people follow Martin Fowler's
link into this series. I thought it was pretty cool because most of my UI
patterns material and terminology is transparently based on Fowler's work. I'm
especially glad that Martin didn't mention the fact that I volunteered to help
with the UI patterns writing three years ago then disappeared...
If you missed something, here's the Build your own CAB series in its
entirety.
The end is in sight. In traditional developer style I blew the estimate for
how long "Build your own CAB" would take. I thought all I needed to do was copy
n'paste a bunch of verbiage and code from my DevTeach talks into Live Writer and
that would be that - but my usual wordiness kicked in and I ended up submitting
a completely new talk to DevTeach Vancouver on this subject.
A couple years ago I remember reading Stephen King say that he always found
the Dark Tower books difficult to write as a way of explaining why there was so
much lag between new books, much to my exasperation at the time. I'm obviously
not Stephen King, and there's no line of folks wrapped around the corner of
CodeBetter waiting for the next installment, but I know how Stephen King felt
now. I do promise that the end of this series won't be as
shocking/disappointing/brilliant(?) as the end of the Dark Tower.*
Again, I am not a licensed namer of patterns, and some of the people I've
shown this code to have commented that it's reminiscent of Smalltalk UI's, so
it's a good bet that some of you have already seen or used similar approaches.
That said, I use the term "MicroController" to refer to a controller class that
directs the behavior of a single UI widget. By itself, a MicroController isn't
really that powerful, but like an army of ants, a group of MicroController's
working cooperatively (with some external direction) can accomplish powerful
feats.
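To make the MicroController idea concrete before we get to the menu machinery, here's a minimal sketch of a controller that babysits exactly one menu item. Treat every name in it as illustrative rather than real: FakeMenuItem stands in for a WinForms MenuItem or ToolStripMenuItem, and ICommand/DelegateCommand are hypothetical stand-ins for the Command classes that show up later in the post.

```csharp
using System;
using System.Linq;

// Stand-in for a real menu widget (e.g. a WinForms ToolStripMenuItem).
public class FakeMenuItem
{
    public bool Enabled { get; set; }
    public event EventHandler Click;
    public void PerformClick() { Click?.Invoke(this, EventArgs.Empty); }
}

// The Command pattern: the thing a menu item does when clicked.
public interface ICommand
{
    void Execute();
}

// Simple ICommand implementation that wraps a delegate.
public class DelegateCommand : ICommand
{
    private readonly Action _action;
    public DelegateCommand(Action action) { _action = action; }
    public void Execute() { _action(); }
}

// The MicroController: one little controller per widget. It owns all the
// behavior for a single menu item: wiring Click to a Command and deciding
// whether the item is enabled for the current user's roles.
public class MenuItemMicroController
{
    private readonly FakeMenuItem _item;
    private readonly ICommand _command;
    private string[] _allowedRoles = new string[0]; // empty means everybody

    public MenuItemMicroController(FakeMenuItem item, ICommand command)
    {
        _item = item;
        _command = command;
        _item.Click += (sender, e) => _command.Execute();
    }

    public void RestrictToRoles(params string[] roles)
    {
        _allowedRoles = roles;
    }

    // Called whenever the active screen or the current user changes.
    public void Update(string[] userRoles)
    {
        _item.Enabled = _allowedRoles.Length == 0
            || _allowedRoles.Intersect(userRoles).Any();
    }
}
```

By itself this is trivial, and that's the point: the Form never touches MenuItem.Enabled or MenuItem.Click directly, and an army of these little guys can be coordinated from one place.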
Here's a common scenario:
- Some menu items should be enabled all of the time, regardless of the active screen
- Other menu items should only be available to users in certain security roles
- Still other menu items should only be enabled when the active screen says they apply
- Clicking a menu item should launch some sort of command
There's nothing in that list that's particularly hard or even unusual, but I
want to explore an alternative approach. Off the top of my head, I've got four
goals in mind for the design of my menu state management:
Much like the screen state machine sample from my
last post, I'm going to try to use a Domain Specific Language (DSL) / Fluent
Interface to express and define the menu behavior. By hiding the mechanics
of menu management behind an abstracted Fluent Interface I'm hoping to compress
the code that governs the menu state to a smaller area of the code. I want to
be able to understand the menu behavior by scanning a cohesive area of the
code. It's my firm contention that this type of readability simply cannot be
accomplished by using the designer to attach bits and pieces of behavior.
Leaning on the designer will scatter the behavior of the screen all over the
place. One of the main reasons I don't like to use the designer or wizards is
because you often can't "see" the code and the way it works.
Before zooming in on the individual components of the solution, let's keep
the man firmly behind the curtain and look at my intended end state. Inside
some screen (probably the main Form) is a piece of code that expresses the
behavior of the menu items like this fragment below:
private void configureMenus()
{
    _menuController.MenuItem(openItem).Executes(CommandNames.Open).IsAlwaysEnabled();
    _menuController.MenuItem(saveItem).Executes(CommandNames.Save)
        .IsAvailableToRoles("BigBoss", "WorkerBee");
    _menuController.MenuItem(executeItem).Executes(CommandNames.Execute)
        .IsAvailableToRoles("BigBoss");
    _menuController.MenuItem(exportItem).Executes(CommandNames.Export);
}
And in each individual screen presenter you might see some additional code to
set screen-specific menu settings like this that would be called upon activating
a different screen:
public class SomeScreenPresenter : IPresenter
{
    public void ConfigureMenu(MenuState state)
    {
        state.Enable(CommandNames.Save, CommandNames.Export);
        state.Enable(CommandNames.Execute).ForRoles("SpecialWorkerBee", "BigBoss");
    }

    // the other IPresenter members are elided here
}
So what's going on here? There isn't a single call in this code to
MenuItem.Enabled or any definition of MenuItem.Click, so it's safe to assume
that there's somebody behind the curtain. So, what is the man behind the
curtain? Before I talk about each piece in detail, here's a rundown of the
various moving parts:
- A MicroController for each MenuItem that keeps its single widget up to date
- Command classes that do the actual work when a menu item is clicked
- A Layer SuperType (the IPresenter interface) implemented by every screen Presenter
- A MenuState class that carries each screen's menu rules from the Presenter to the menu
- Some StructureMap pixie dust to wire the pieces together
The complete "Build
your own CAB" Table of Contents is now up if you've missed some of the
earlier missives.
Continuing where I left off in Build
your own CAB #14: Managing Menu State with MicroController's, Command's, a Layer
SuperType, some StructureMap Pixie Dust, and a Dollop of Fluent Interface,
I'll show how to build a Fluent Interface API to configure menu state management
in a WinForms application while using as many buzzwords as humanly possible.
Going backwards "Memento" style, the end state is shown in the first post (I had
to split the content because Community Server whined at me).
From PEAA,
a Layer SuperType is
A type that acts as the supertype for all types in its
layer.
In all of the WinForms applications I've worked on with Model View Presenter
the Presenter's have implemented some sort of Layer Supertype interface or base
class. The details differ quite a bit from project to project, but the pattern
seems to always be there. The methods on the common interface usually relate to
setting up screen state or to transitioning between screens. Here's a sample
IPresenter interface that's pretty typical to my projects:
public interface IPresenter
{
    void ConfigureMenu(MenuState state);
    void Activate();
    void Deactivate();
    bool CanClose();
}
The Presenter interface is mostly a set of hook methods for the ApplicationController
to call to set up or tear down a screen. While I'll revisit this topic in much
more detail later, for now let's focus in on the ConfigureMenu() method in the IPresenter
interface above.
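Even though the ApplicationController deserves (and will get) its own discussion later, a thumbnail sketch helps show where ConfigureMenu(MenuState) fits into screen activation. The IMenuDriver interface and all the details here are illustrative assumptions, not the actual implementation:

```csharp
using System;

// Stub types standing in for the classes discussed in this post.
public class MenuState { /* fleshed out below */ }

public interface IPresenter
{
    void ConfigureMenu(MenuState state);
    void Activate();
    void Deactivate();
    bool CanClose();
}

// Hypothetical: whatever applies a MenuState to the real menu widgets.
public interface IMenuDriver
{
    void Apply(MenuState state);
}

public class ApplicationController
{
    private readonly IMenuDriver _menu;
    private IPresenter _current;

    public ApplicationController(IMenuDriver menu)
    {
        _menu = menu;
    }

    public bool ActivateScreen(IPresenter next)
    {
        // Let the active screen veto the transition (unsaved work, etc.)
        if (_current != null)
        {
            if (!_current.CanClose()) return false;
            _current.Deactivate();
        }

        // Ask the new screen for its menu rules, then hand the resulting
        // MenuState off to whatever is driving the actual menu.
        MenuState state = new MenuState();
        next.ConfigureMenu(state);
        _menu.Apply(state);

        next.Activate();
        _current = next;
        return true;
    }
}
```

The Presenter never knows what consumes the MenuState, and the menu never knows which screen produced it.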
It's safe to assume that nearly every screen is going to have a different set
of rules for which menu items are available and valid for user roles. By
implementing a common interface across all screen Presenter's we can establish a
standard way to query a Presenter for its particular menu state. What we need
next is an easy way to transmit the screen specific business rules from each
screen to the menu. The logic and business rules to determine the menu state
really fit
into each Presenter, but we don't want the Presenter to know about the
concrete Menu because we don't want to bind our presentation logic to UI
machinery. We could wrap the Menu in some sort of abstracted IMenu
interface that we could mock while testing the Presenter, but I think there's a
better way. By and large I think that state-based testing is generally easier
than interaction-based testing. In this case I've opted to use a class called
MenuState to configure and transfer screen state from the Presenter to the
Menu. MenuState looks something like this:
public class MenuState
{
    private Dictionary<CommandNames, string[]> _enabledByRoleCommands
        = new Dictionary<CommandNames, string[]>();

    public void Enable(params CommandNames[] names)
    {
        foreach (CommandNames name in names)
        {
            _enabledByRoleCommands.Add(name, new string[0]);
        }
    }

    public EnableByRoleExpression Enable(CommandNames name)
    {
        return new EnableByRoleExpression(name, this);
    }

    public class EnableByRoleExpression
    {
        private readonly CommandNames _names;
        private readonly MenuState _state;

        internal EnableByRoleExpression(CommandNames names, MenuState state)
        {
            _names = names;
            _state = state;
        }

        public void ForRoles(params string[] roles)
        {
            _state._enabledByRoleCommands.Add(_names, roles);
        }
    }

    public bool IsEnabled(CommandNames name)
    {
        return _enabledByRoleCommands.ContainsKey(name);
    }

    public string[] GetRolesFor(CommandNames name)
    {
        if (_enabledByRoleCommands.ContainsKey(name))
        {
            return _enabledByRoleCommands[name];
        }

        return null;
    }
}
Much like the
Notification class from the earlier post on validation, the MenuState class
helps us to keep the coupling between the menu system and each screen to a
minimum. Also like a Notification, creating a MenuState object from the
Presenter makes it relatively easy to unit test the menu state logic in each
Presenter. We'll write interaction tests with mock objects to make sure that
the navigation and screen coordination code is correctly applying the result of
a call to IPresenter.ConfigureMenu(MenuState) method first. After that, we can
concentrate on just testing the result of a call to
IPresenter.ConfigureMenu(MenuState). The steps to unit test are pretty
simple.
On my previous project we created a custom assertion for testing the
MenuState calculation that I thought made the test code fairly descriptive.
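To make that concrete, a test for a Presenter's menu logic might look something like this (InvoicePresenter and the specific rules are illustrative names, not from the original project):

```csharp
[Test]
public void ConfigureMenu_enables_save_for_editors_only()
{
    // InvoicePresenter is a hypothetical Presenter under test
    InvoicePresenter presenter = new InvoicePresenter();
    MenuState state = new MenuState();

    // let the Presenter apply its screen-specific rules
    presenter.ConfigureMenu(state);

    // assert directly on the MenuState -- state-based, no mocks or widgets
    Assert.IsTrue(state.IsEnabled(CommandNames.Save));
    Assert.AreEqual("Editor", state.GetRolesFor(CommandNames.Save)[0]);
    Assert.IsFalse(state.IsEnabled(CommandNames.Execute));
}
```

A custom assertion like the one mentioned above would simply wrap the last three lines behind a single intention-revealing method name.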
Now, let's talk about how to tell a MenuItem what to do. Each MenuItem is
going to do very different things and interact with different services and
modules of the application, but we still want to have a consistent mechanism for
attaching actions to MenuItem's. We could use anonymous delegates (and I'm
doing this quite happily in parts of StoryTeller), but that syntax can quickly
lead to ugly code. Instead, let's adopt a Command pattern approach to wrap up
each unique action.
I think one of the fundamental truths of software development is that every
codebase wants, nay, demands, an ICommand interface like this one:
/// <summary>
/// I think I've had some sort of ICommand interface
/// in almost every codebase I've worked on in the last
/// 5 years
/// </summary>
public interface ICommand
{
    void Execute();
}
Using a Command pattern comes with several advantages. Foremost in my mind
is the ability to detach the action into a small concrete unit divorced from a
particular screen or UI widget that is easy to test in isolation. Each of our
Command classes should be relatively simple to test. The ICommand classes are
likely manipulating and interacting with various services and other parts of the
application. For easy unit testing, we're probably going to use some sort of Test Double to take the
place of these dependencies. I typically use Constructor Injection to attach
the test doubles.
Here's an example command:
public class SaveCommand : ICommand
{
    private readonly IRepository _repository;
    private readonly IEventPublisher _publisher;

    // SaveCommand needs access to the Singleton instance of both
    // IRepository and IEventPublisher. We'll let StructureMap
    // deal with wiring up the dependencies
    public SaveCommand(IRepository repository, IEventPublisher publisher)
    {
        _repository = repository;
        _publisher = publisher;
    }

    public void Execute()
    {
        // Save whatever it is that we're saving
    }
}
Inside the unit test harness for SaveCommand I'll simply use RhinoMocks to
create mock objects for IRepository and IEventPublisher:
[TestFixture]
public class SaveCommandTester
{
    private MockRepository _mocks;
    private IRepository _repository;
    private IEventPublisher _publisher;
    private SaveCommand _command;

    /// <summary>
    /// In this method, set up all of the mock objects,
    /// and construct an instance of SaveCommand using
    /// the two mock objects
    /// </summary>
    [SetUp]
    public void SetUp()
    {
        _mocks = new MockRepository();
        _repository = _mocks.CreateMock<IRepository>();
        _publisher = _mocks.CreateMock<IEventPublisher>();
        _command = new SaveCommand(_repository, _publisher);
    }
}
Assuming that you're comfortable with mock objects, SaveCommand is now
relatively easy to unit test. Of course we're still left with the problem of how
SaveCommand gets the proper instances of IEventPublisher and IRepository in the
real application mode.
If you've got yourself a reference to an ICommand object you know exactly
what to do to make it work. Without knowing the slightest thing about its
internals, you just call Execute() on the ICommand and get out of the way.
Let's stress part of that again. The MenuItem and its associated controllers
don't need to know anything about the internals of an ICommand object, and they
especially don't need to know how to construct and configure an ICommand
object. Looking again at the configuration code, all we do is "tell" each
MenuItem controller the name of an ICommand to run.
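The configuration code being referred to didn't survive onto this page; with the fluent API shown later in the post it would read roughly like this (the MenuItem fields are hypothetical designer-generated widgets):

```csharp
// wire each designer-generated MenuItem to a command name -- and nothing else
menuController.MenuItem(saveMenuItem)
    .Executes(CommandNames.Save)
    .IsAvailableToRoles("Editor", "Admin");

menuController.MenuItem(exportMenuItem)
    .Executes(CommandNames.Export);

menuController.MenuItem(openMenuItem)
    .Executes(CommandNames.Open)
    .IsAlwaysEnabled();
```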
Looking back at our SaveCommand object above, we see that it has a dependency
upon both an IEventPublisher and an IRepository interface, but the code above
doesn't need to specify these two things. To make things a little more
complicated, both of these interfaces are probably stand-ins for singleton
concrete instances (I use Robert
C. Martin's Just Create One pattern for "Managed
Singleton's" with StructureMap instead of using traditional Singleton's).
Tracking and attaching dependencies doesn't have to be a terrible chore because
we can use tools like StructureMap to help us out.
The first step is to register or configure the proper instances of the
underlying services with StructureMap in one of the normal ways like this code
below:
public class ServiceRegistry : Registry
{
    protected override void configure()
    {
        BuildInstancesOf<IEventPublisher>()
            .TheDefaultIsConcreteType<EventPublisher>()
            .AsSingletons();
    }
}
Now that we've got our services configured we can turn our attention to the
ICommand classes. When we configure the ICommand objects with StructureMap we
also need to associate the ICommand Type's with the correct CommandNames
(CommandNames is just a strongly typed enumeration; the code is at the very
bottom of the post) instance. I use a separate Registry class for the
ICommand's to put the configuration into a common spot and also to create a
custom syntax specific to registering ICommand's.
public class CommandRegistry : Registry
{
    protected override void configure()
    {
        // Wire up the ICommand's
        Command(CommandNames.Save).Is<SaveCommand>();
        Command(CommandNames.Open).Is<OpenCommand>();
    }
}
For the most part, all I need to do is just say that an instance of
CommandNames on the left is the concrete class on the right. It's important to
associate the ICommand classes with an instance of CommandNames because we're
going to retrieve ICommand's in the controller classes with this code:
ICommand command = ObjectFactory.GetNamedInstance<ICommand>(_command.Name);
This Fluent Interface grammar is just a thin veneer over the StructureMap
configuration API. The grammar is implemented in additional members and an
inner class of the CommandRegistry:
private RegisterCommandExpression Command(CommandNames name)
{
    return new RegisterCommandExpression(name, this);
}

internal class RegisterCommandExpression
{
    private readonly CommandNames _name;
    private readonly CommandRegistry _registry;

    public RegisterCommandExpression(CommandNames name, CommandRegistry registry)
    {
        _name = name;
        _registry = registry;
    }

    public void Is<T>()
    {
        // Register the ICommand type with StructureMap
        _registry.AddInstanceOf<ICommand>().UsingConcreteType<T>().WithName(_name.Name);
    }
}
Wait, you might say. How do the IEventPublisher and IRepository
dependencies get into SaveCommand? We didn't make any kind of definition or
configuration between SaveCommand and its services. The short answer is that we
don't have to do anything else because StructureMap supports "auto wiring" of
dependencies. StructureMap knows what SaveCommand needs by looking at its
constructor function.
If you don't explicitly configure an instance of IRepository/IEventPublisher
for SaveCommand StructureMap will happily substitute the default instance of
both types into the constructor function of SaveCommand. While you can always
take full control of the dependency chaining, I find it very convenient just to
let StructureMap deal with it.
* Come on, I'm not the only person that screamed and threw the book across
the room when Roland ends up back at the very beginning. I'm going to swear off
fantasy books permanently if Rand doesn't win a clear cut victory in book 12
whenever that comes out. Almost 20 years worth of waiting better come with a
really solid ending.
There's a lot of commonality between the menu items. Sure, the individual
actions and rules are different, but there's a finite set of things we need to
do with and to the individual menu items. You could just use the visual
designer to generate one off code for each of the menu items and hard code the
menu on/off rules, but that's going to lead to sheer ugliness. Instead of one
off code, let's create a MicroController class for a single MenuItem.
In a stunning fit of creativity I've named this class MenuItemController. In
this design, MenuItemController has just two responsibilities:
First, let's setup a single MenuItemController and see the code that sets the
Click event.
public class MenuItemController
{
    private readonly MenuItem _item;
    private readonly CommandNames _command;
    private bool _alwaysEnabled = false;
    private List<string> _roles = new List<string>();

    public MenuItemController(MenuItem item, CommandNames command)
    {
        _item = item;
        _command = command;

        _item.Click += new EventHandler(_item_Click);
    }

    private void _item_Click(object sender, EventArgs e)
    {
        ICommand command = ObjectFactory.GetNamedInstance<ICommand>(_command.Name);
        command.Execute();
    }

    // Other methods are below...
}
We construct a MenuItemController first with a MenuItem and a CommandNames
key. The constructor simply sets a field for both values then adds an event
handler to the MenuItem's Click event. Inside the _item_Click() event handler
the MenuItemController simply fetches the named ICommand from StructureMap (the
call to ObjectFactory.GetNamedInstance()) and calls Execute() on the ICommand
that comes back. Great, that's the easy part. Now
we can tackle the responsibility for enabling or disabling the MenuItem's.
The MenuItemController class uses three sources of information to make the
enabled determination. The first two sources are optional setters on
MenuItemController.
public bool AlwaysEnabled
{
    get { return _alwaysEnabled; }
    set { _alwaysEnabled = value; }
}

public void AddRoles(string[] roles)
{
    _roles.AddRange(roles);
}
In some cases you have MenuItem/Command's that should be available in all
states. The "AlwaysEnabled" flag on MenuItemController will short circuit any
other logic and force the MenuItem to be enabled. The second determination is
role-based authorization. Our MenuItemController class keeps a list of the
roles that have access to this action. If there are no roles defined, we'll
assume that the command is accessible to all users.
The third piece of information the MicroController uses to determine menu
state is the screen specific rules that are transmitted in the MenuState object
created by each Presenter. The code to enable or disable the internal MenuItem
inside of a MenuItemController is below. The entry point is Enable(MenuState)
at the top.
public void Enable(MenuState state)
{
    _item.Enabled = IsEnabled(state);
}

public bool IsEnabled(MenuState state)
{
    if (AlwaysEnabled)
    {
        return true;
    }

    if (!state.IsEnabled(_command))
    {
        return false;
    }

    return HasRole(state);
}

public bool HasRole(MenuState state)
{
    List<string> roles = new List<string>(_roles);
    roles.AddRange(state.GetRolesFor(_command));

    return hasRole(roles);
}

private static bool hasRole(List<string> roles)
{
    // no roles defined means the command is accessible to all users
    if (roles.Count == 0)
    {
        return true;
    }

    IPrincipal principal = Thread.CurrentPrincipal;
    foreach (string role in roles)
    {
        if (principal.IsInRole(role))
        {
            return true;
        }
    }

    return false;
}
By itself, the MicroController classes don't know very much. The
MenuController below aggregates the MenuItemController objects and would
ostensibly give you access to a particular MenuItemController upon demand. In
the SetState(MenuState) method it simply iterates through all of the
MenuItemController objects and calls each individual
MenuItemController.Enable(MenuState) method.
public class MenuController
{
    private Dictionary<CommandNames, MenuItemController> _items =
        new Dictionary<CommandNames, MenuItemController>();

    public void SetState(MenuState state)
    {
        foreach (KeyValuePair<CommandNames, MenuItemController> item in _items)
        {
            item.Value.Enable(state);
        }
    }

    // Other methods to support the Fluent Interface configuration
}
MenuController above also includes the code for the configuration API. I'm
not sure how every one else is building these things, but I typically use
"Expression" classes that encapsulate the configuration. You can recognize
these things pretty quickly by looking for a lot of "return this;" calls. The
Expression classes are typically pretty dumb. All I do with these classes is
set properties on some sort of inner object that does the actual work. The
MenuItemExpression class below sets properties on a single MenuItemController as
additional methods are called in the configuration. I tend to use inner classes
for the Expression's to get easy access to the private members of the class
being configured. MenuItemExpression is an inner class of MenuController.
public MenuItemExpression MenuItem(MenuItem item)
{
    return new MenuItemExpression(this, item);
}

public class MenuItemExpression
{
    private readonly MenuController _controller;
    private readonly MenuItem _item;
    private MenuItemController _itemController;

    internal MenuItemExpression(MenuController controller, MenuItem item)
    {
        _controller = controller;
        _item = item;
    }

    public MenuItemExpression Executes(CommandNames name)
    {
        _itemController = new MenuItemController(_item, name);
        _controller._items.Add(name, _itemController);

        return this;
    }

    public MenuItemExpression IsAlwaysEnabled()
    {
        _itemController.AlwaysEnabled = true;
        return this;
    }

    public MenuItemExpression IsAvailableToRoles(params string[] roles)
    {
        _itemController.AddRoles(roles);
        return this;
    }
}
I think MicroController's and Fluent Interface's go together quite well. The
MicroController's do the work, but the Fluent Interface API can make the code so
much more readable. What I'm finding is that it does take a bit of upfront work
to get a Fluent Interface put together, but once it's set, it's relatively easy
to work with. There's some Architect
Hubris lurking in that statement perhaps. I suppose I might caution you to
utilize a Fluent Interface mostly in situations where you can recoup the upfront
investment with lots of reuse or the API pays off when that code changes
frequently.
I will write at least one more post on this subject just to present the data
binding replacement we're using on my current project (it sounds nuts, but
actually, I'm feeling pretty good about that right now). I wrote about the
larval stages in My
Crackpot WinForms Idea.
I've received a gratifying number of compliments on this series, but I've
consistently heard a common refrain in the negative -- there's no code to
download and it's not clear how all the pieces fit together. To address that
problem I'll change my direction a bit, but it means that "Build your own CAB"
is going on hiatus for at least a month. I'm going to concentrate my "stuck on
the train" time with getting StoryTeller into a usable state. I ended up
scrapping big pieces of the StoryTeller UI and rebuilding with some of the ideas
that I've developed while writing this series. As soon as it's released, I can
use StoryTeller for more complete examples with code that's freely
available.
I used CommandNames several times in the course of the post but didn't really
explain it. All I've done is create a Java-style strongly typed enumeration for
the command names. StructureMap only understands strings for instance
identification within a family of like instances, so CommandNames exposes a Name
property. I suppose that you could just use an enumeration and do ToString()
on the keys as appropriate, but I chose this approach for some reason that
escapes my mind at the moment. The code for CommandNames is:
public class CommandNames
{
    public static CommandNames Open = new CommandNames("Open");
    public static CommandNames Execute = new CommandNames("Execute");
    public static CommandNames Export = new CommandNames("Export");
    public static CommandNames Save = new CommandNames("Save");

    private readonly string _name;

    private CommandNames(string name)
    {
        _name = name;
    }

    public string Name
    {
        get { return _name; }
    }

    public override bool Equals(object obj)
    {
        if (this == obj) return true;
        CommandNames commandNames = obj as CommandNames;
        if (commandNames == null) return false;
        return Equals(_name, commandNames._name);
    }

    public override int GetHashCode()
    {
        return _name != null ? _name.GetHashCode() : 0;
    }
}
Pingback from Build your own CAB #14: Managing Menu State with MicroController's, Command's, a Layer SuperType, some StructureMap Pixie Dust, and a Dollop of Fluent Interface - Jeremy D. Miller -- The Shade Tree Developer
The entire content is now online as a page on my site at: Build your own CAB #14: Managing Menu State
Interesting articles, but:
en.wikipedia.org/.../Apostrophe
;)
Time for another weekly roundup of news that focuses on .NET and MS development related content: VS 2008
Hi Jeremy,
I've noticed it's been a while since your last post on this subject. Any news on when you plan on finishing this fantastic CAB article set?
@Mark:
It's writer's block and motivation. I'm going to talk to a publisher about the content first before doing anything else.
One way or another, I'll get back to it.
After a bit of a hiatus and a fair amount of pestering, I'm back and ready to continue the "Build
Pingback from The Build Your Own CAB Series Table of Contents - Jeremy D. Miller -- The Shade Tree Developer
To everybody that attended one of my talks at DevTeach this week. All of the materials are now online
I haven't written anything here for a few months (apart from the previous unplanned post, of course). As you can easily
I have a question about your use of StructureMap to create commands:
ICommand command = ObjectFactory.GetNamedInstance<ICommand>(_command.Name);
command.Execute();
Can this approach work when the command has an argument such as an Id? If the concrete command class takes the argument as a parameter to its constructor, it doesn't seem like StructureMap can create the command instances. If instead the command has a setter for the argument, that setter won't be available through the ICommand interface.
How do you structure your code in this situation?
Thanks,
Ben
How would you control the state of a menu item after the view is shown - i.e. toggle state of a button
AND, how would you control the fact that 1 button might want to invoke different commands based on different contexts? (i.e. Save button - saves a client in certain contexts or a "product" in other?
Thanks! Great series of articles and would love to see a sample using all these concepts!
@David,
You would go to a different model. You could bind the menu items to a Command object that has enabled/visible type properties like WPF for the first.
You could recreate the MenuState object each time the menu needs to change (this is workable btw, because I've done it and it worked out well).
For the button doing multiple things, you can "register" for a menu button by passing in a Lambda for what gets called, which gives you the ability to "point" a menu item at a specific presenter.
Thanks!
So I'm "seeing" the following: Have a "DelegateCommand" class that's got a property that contains the actual delegate that must be executed when the command gets executed.
This new delegate property will be configured / set in the "ConfigureMenu" method?
For the state, I can hold on to the instance of the MenuState Object and change as necessary in the Presenter and fire off an event so that the menu can be updated?
Do you think this can work?
I've also got a question around your "Command" in this article. The way that I want to implement my repositories, is to have a method "Save" that takes as parameter the actual object that needs to be persisted. How would this work in your setup? Would you register the object that needs to be saved with your IoC container and resolve it in the constructor (as you do with the repository)? Or can StructureMap do a lot more than Unity? Please help me with some ideas!
Dawid
Hi Jeremy, i have been reading your series and i must say it has been a very good introduction to MVC; something i haven't really looked into before now.
While i was reading this article i noticed that you seem to be dealing with your menu items as the item to be enabled or disabled, however in most applications you end up with two menu items providing certain functionality (think a New button on a toolbar and on the File menu). This would mean that you have to put the (almost) identical code for controlling them against both items, which seemed a little odd.
I decided to have a go at implementing my own Command-centric version of this, which can be seen on my website:
If you had the time to have a quick look and see what you think of my version i would be very appreciative.
Thanks
(ps, any chance you can finish off the series | http://codebetter.com/blogs/jeremy.miller/pages/build-your-own-cab-14-managing-menu-state-with-microcontroller-s-command-s-a-layer-supertype-some-structuremap-pixie-dust-and-a-dollop-of-fluent-interface.aspx | crawl-002 | refinedweb | 4,729 | 50.36 |
I have some encrypted data in an HDFS CSV that I've created a Hive table for, and I want to
run a Hive query that first encrypts the query param, then does the lookup. I have a UDF
that does encryption as follows:
public class ParamEncrypt extends UDF {
    public Text evaluate(String name) throws Exception {
        if (name == null) { return null; }
        String result = ParamData.encrypt(name);
        return new Text(result);
    }
}
Then I run the Hive query as:
select * from cc_details where first_name = encrypt('Ann');
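(This assumes the UDF has already been packaged and registered as `encrypt`, along these lines -- the jar path and package name here are illustrative, not the actual ones:)

```sql
ADD JAR /path/to/param-encrypt.jar;
CREATE TEMPORARY FUNCTION encrypt AS 'com.example.ParamEncrypt';
```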
The problem is, it's running encrypt('Ann') across every single record in the table. I want
it to do the encryption once, then do the matchup. I've tried:
select * from cc_details where first_name in (select encrypt('Ann') from cc_details limit
1);
But Hive doesn't support **IN** or select queries in the where clause.
What can I do?
Can I do something like:
select encrypt('Ann') as ann from cc_details where first_name = ann;
That also doesn't work because the query parser throws an error saying **ann** is not a known
column
Thanks,
Sam | http://mail-archives.apache.org/mod_mbox/hadoop-user/201210.mbox/%3C162E1B440CE3B24293E81D9C8923C38548F0A60B45@hqmailsvr01.voltage.com%3E | CC-MAIN-2014-23 | refinedweb | 182 | 63.93 |
Workspace listed in API listing but not in UI? Publishing allowed me to see it! Is this right?
I have a workspace TCGA_BRCA_ControlledAccess_V1-0_DATA_wgs_10_pairs
in project/namespace broad-firecloud-benchmark
I'm able to see this in the API listing (see attached PDF) when I visit the swagger page.
I'm able to see it when I use fissfc:
wm8b1-75c:wgs_wdl_cmp esalinas$ /usr/local/bin/fissfc space_info -w TCGA_BRCA_ControlledAccess_V1-0_DATA_wgs_10_pairs -p broad-firecloud-benchmark | head -11
{
  "workspaceSubmissionStats": {
    "lastSuccessDate": "2017-02-16T10:14:42.160Z",
    "lastFailureDate": "2017-02-14T14:42:43.291Z",
    "runningSubmissionsCount": 0
  },
  "accessLevel": "PROJECT_OWNER",
  "owners": ["[email protected]"],
  "workspace": {
    "workspaceId": "6e92fceb-ef44-4648-a5b0-71cacc0248eb",
    "name": "TCGA_BRCA_ControlledAccess_V1-0_DATA_wgs_10_pairs",
I'm also able to list the bucket contents:
wm8b1-75c:wgs_wdl_cmp esalinas$ gsutil ls gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb | head
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/10a7883c-975c-4d44-a80d-6bee90713758/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/120acd57-5ea9-43b4-849c-0f5f3d04756f/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/2c075e39-39e5-426e-b55a-f0d2b144c2d6/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/727dc7b1-02b3-4eed-acc0-b9bbe446c8d6/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/9fd8c5ac-c708-427e-b020-65c6691845d3/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/aadd08ee-0074-4fbc-92a7-3dbafc11c34d/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/b4c08c2f-ac36-45c2-87e3-8e0a98b1676c/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/b9948486-47e6-4556-b351-ddecc82ed55a/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/c646970d-8935-4ad7-a361-754f1b983f2e/
gs://fc-6e92fceb-ef44-4648-a5b0-71cacc0248eb/ca23703e-4ac2-404b-891e-a7fb275fc054/
wm8b1-75c:wgs_wdl_cmp esalinas$
But when I search for the workspace (using "wgs" as a search term, which is a substring of the workspace name) I don't see the workspace listed (see attached PDF).
If I type in the entire workspace name "TCGA_BRCA_ControlledAccess_V1-0_DATA_wgs_10_pairs" as the search term, I see "No workspaces to display." Furthermore, I note that I have "All" selected, not "Complete", etc.
So I scrolled over to the "Include..." box and clicked the box next to "Published", and once I did that the workspace appeared in the UI.
Is my workspace "Published"? I don't recall having "Published" it. What does "Published" mean? Is this behavior correct behavior?
Best Answer
That's correct - if you had publishing permissions, you'd see an "Unpublish" (or "Publish in Library") button on the workspace's summary tab. Thanks for the info to help figure out this issue!
Answers
Hi Eddie - did you by any chance clone your TCGA_BRCA_ControlledAccess_V1-0_DATA_wgs_10_pairs workspace from a pre-existing workspace? If you did clone it, is your previous workspace published in Library? Thanks for the info - this will help to debug the issue.
Hi @davidan, yes, I did clone my workspace from TCGA_BRCA_ControlledAccess_V1-0_DATA, which is both a pre-existing workspace and an "official TCGA controlled access workspace". I have guessed that the "Published" flag/tag/checkbox is inherited/propagated from the source of the clone. The source of the clone (which is "TCGA_BRCA_ControlledAccess_V1-0_DATA" as I mentioned) I believe is part of the Library.
Eddie - yes. We are currently tracking an issue (GAWB-1585 if you want to follow along) in which cloned workspaces partially inherit the published status of the workspace from which they are cloned. The new workspace will NOT be present in Library, but for the purposes of the "Include..." filter on the workspace listing and the summary page of the workspace, it will act as if it was published.
The quick fix for this is to unpublish the workspace from its summary page, assuming you have permission to do so or can ask someone with permission to do it for you. Since it's not actually published into Library, this won't change anything in Library, but it will reset that workspace's published status. The longer-term solution is of course to wait for the issue to be fixed!
@davidan you wrote:
Assuming I do have permission, would I see a button on the summary tab for my workspace TCGA_BRCA_ControlledAccess_V1-0_DATA_wgs_10_pairs that says "Unpublish"? I don't see such a button, so I guess I don't have permission. Right now, since it's not in the library, I guess it's not critical.
That's correct - if you had publishing permissions, you'd see an "Unpublish" (or "Publish in Library") button on the workspace's summary tab. Thanks for the info to help figure out this issue!
Thanks for the update | https://gatkforums.broadinstitute.org/firecloud/discussion/comment/36425 | CC-MAIN-2019-30 | refinedweb | 742 | 54.63 |
ncl_gset_clip_ind man page
gset_clip_ind (Set clipping indicator) — controls whether data are displayed outside the boundaries of the world coordinate window of the current normalization transformation.
Synopsis
#include <ncarg/gks.h>
void gset_clip_ind(Gclip_ind clip_ind);
Description
- clip_ind
(Input) - A flag to turn clipping on or off.
- GIND_NO_CLIP
Clipping is off. Data outside of the window will be plotted.
- GIND_CLIP
Clipping is on. Data outside of the window will not be plotted. This is the default.
Usage
If the clipping indicator is off, and you make GKS output calls to plot world coordinate data outside your defined world coordinate window (and your viewport is smaller than the full plotting surface), those data will appear with your plot. If the clipping indicator is on, the data will be clipped to fit your window.
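For example, a fragment along these lines (workstation open/activate calls omitted; the window, viewport, and line coordinates are illustrative, not taken from this page) draws the same polyline twice, first clipped to the window and then unclipped:

```c
#include <ncarg/gks.h>

Glimit window   = {0.0, 1.0, 0.0, 1.0};   /* world coordinate window */
Glimit viewport = {0.3, 0.7, 0.3, 0.7};   /* viewport in NDC space   */
Gpoint pts[2]   = {{-0.5, -0.5}, {1.5, 1.5}};
Gpoint_list line = {2, pts};

gset_win(1, &window);
gset_vp(1, &viewport);
gsel_norm_tran(1);

gset_clip_ind(GIND_CLIP);      /* default: data outside the window are clipped */
gpolyline(&line);

gset_clip_ind(GIND_NO_CLIP);   /* the parts of the line beyond the window now appear */
gpolyline(&line);
```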
Access
To use the GKS C-binding routines, load the ncarg_gks and ncarg_c libraries.
See Also
Online: set(3NCARG), gsup(3NCARG), gset_win(3NCARG), gsel_norm_tran(3NCARG), ginq. | https://www.mankier.com/3/ncl_gset_clip_ind | CC-MAIN-2017-26 | refinedweb | 156 | 58.99 |
Check if a given Binary Tree is SumTree in C++
In this tutorial, we are going to learn how to check for a SumTree in a Binary Tree using C++. If the value of each node in a Binary Tree is equal to the sum of all the nodes present in its left and right subtrees then the tree is called a SumTree.
Example of a SumTree:
If the root of a subtree in a SumTree has at least one child then the sum of the values of all nodes in the subtree is equal to twice the value of the root. In the above example, as the binary tree is a SumTree the sum of all the nodes is equal to twice the value of the root.
20*2 = 20 + 3 + 7 + 1 + 2 + 3 + 4
Using the above observation we will implement a recursive function that checks if a given Binary Tree is a SumTree.
Implementing a function to check if a Binary Tree is Sum Tree
Recursive Function
isSumTree(root)
- If the node is empty or if the node is a leaf node return True.
- Assume that the left subtree and right subtree are SumTrees and find the sum of the nodes in the left subtree and the right subtree.
- Check if the current subtree is a SumTree.
- Verify if the left subtree and right subtree are SumTrees by making the recursive function call on them.
- If the above two conditions are satisfied return True or else return False.
#include <bits/stdc++.h>
using namespace std;

struct Node{
    int data;
    Node* left;
    Node* right;
    Node(int val){
        data = val;
        left = NULL, right = NULL;
    }
};

int sumOfSubTree(Node* node){
    if(node == NULL){
        return 0;
    }
    // Checking if the node is a leaf node
    else if(node->left == NULL && node->right == NULL){
        return node->data;
    }
    else{
        return 2*(node->data);
    }
}

bool isSumTree(Node* node){
    if(node == NULL || (node->left == NULL && node->right == NULL)){
        return true;
    }
    int l,r;
    /* Assume that the left and right subtrees are SumTrees and get the
       sum of nodes in left and right subtrees respectively */
    l = sumOfSubTree(node->left);
    r = sumOfSubTree(node->right);
    /* Check if the current subtree is a SumTree and also verify if the
       left and right subtrees are also SumTrees */
    if(node->data == (l+r) && isSumTree(node->left) && isSumTree(node->right)){
        return true;
    }
    else{
        return false;
    }
}

int main(){
    struct Node* root = new Node(20);
    root->left = new Node(3);
    root->right = new Node(7);
    root->left->left = new Node(1);
    root->left->right = new Node(2);
    root->right->left = new Node(3);
    root->right->right = new Node(4);

    if(isSumTree(root)){
        cout<<"The given Binary Tree is a SumTree"<<endl;
    }
    else{
        cout<<"The given Binary Tree is not a SumTree"<<endl;
    }
}
Output:
The given Binary Tree is a SumTree
The sumOfSubTree() function takes the root of a subtree that is assumed to already be a SumTree and returns the sum of its nodes in O(1): a leaf contributes its own value, while an internal node's subtree sums to twice the node's value.
Time Complexity: O(n)
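To see the check reject a tree, here is a self-contained negative example (repeating the structures above, with the if/else chains collapsed into boolean expressions); changing the last leaf from 4 to 5 breaks the property:

```cpp
// A tree that is NOT a SumTree: the last leaf is 5 instead of 4, so the
// right child's value 7 no longer equals the sum 3 + 5 of its children.
struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int val) : data(val), left(nullptr), right(nullptr) {}
};

int sumOfSubTree(Node* node) {
    if (node == nullptr) return 0;
    // a leaf contributes its own value; an internal SumTree node's
    // subtree sums to twice the node's value
    if (node->left == nullptr && node->right == nullptr) return node->data;
    return 2 * node->data;
}

bool isSumTree(Node* node) {
    if (node == nullptr || (node->left == nullptr && node->right == nullptr)) {
        return true;
    }
    int l = sumOfSubTree(node->left);
    int r = sumOfSubTree(node->right);
    return node->data == (l + r) && isSumTree(node->left) && isSumTree(node->right);
}

bool checkDemoTree() {
    Node* root = new Node(20);
    root->left = new Node(3);
    root->right = new Node(7);
    root->left->left = new Node(1);
    root->left->right = new Node(2);
    root->right->left = new Node(3);
    root->right->right = new Node(5);   // was 4 in the SumTree example
    return isSumTree(root);             // false: 7 != 3 + 5 at the right child
}
```

Note that the root's own check still passes here (2*3 + 2*7 == 20); it is the recursive call on the right subtree that returns false.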
We hope you got a clear idea of how to check if a given Binary tree is SumTree in C++. | https://www.codespeedy.com/check-if-a-given-binary-tree-is-sumtree-in-cpp/ | CC-MAIN-2022-27 | refinedweb | 508 | 58.49 |
On Thursday, 8 January 2009, at 01:31, David Schonberger wrote:
> Hello all. This is my first post and I'm essentially a rank beginner with
> Haskell, having used it briefly some 7 years ago. Glad to be back but
> struggling with syntax and understanding error messages.
>
> This post actually ties to a prior post,
>
> My troubles are with exercise 3.10, p45 of HDIII's "Yet Another
> Haskell Tutorial" (YAHT). The exercise states:
>
> Write a program that will repeatedly ask the user for numbers until she
> types in zero, at which point it will tell her the sum of all the numbers,
> the product of all the numbers, and, for each number, its factorial. For
> instance, a session might look like:
>
> Note that the sample session accompanying the exercise suggests a session
> such as:
>
> Give me a number (or 0 to stop):
> 5
> Give me a number (or 0 to stop):
> 8
> Give me a number (or 0 to stop):
> 2
> Give me a number (or 0 to stop):
> 0
> The sum is 15
> The product is 80
> 5 factorial is 120
> 8 factorial is 40320
> 2 factorial is 2
>
> The following code handles the sum and products pieces--I built the module
> incrementally--but fails on the factorial part. In fact my current code, if
> it worked, would only output something like:
>
> The sum is 15
> The product is 80
> 120
> 40320
> 2
>
> But I'm not even getting that much.
Here's the code: > > --begin code > > module AskForNumbers whereimport) > > fact n = if n == 0 then return 1 else return foldr (*) 1 [1..n] f = do > nums <- askForNums putStr ("Sum is " ++ (show (foldr (+) 0 nums)) ++ "\n") > putStr ("Product is " ++ (show (foldr (*) 1 nums)) ++ "\n") listFactorial > nums > > --end code > > Here is the error msg I get when I load into WinHugs (Sept 2006 version): > > ERROR file:.\AskForNumbers.hs:22 - Ambiguous type signature in inferred > type *** ambiguous type : (Num a, Num [a], Monad ((->) [b]), Num c, Num (b > -> [a] -> [a]), Enum a, Monad ((->) (c -> c -> c))) => a -> [b] -> [a] *** > assigned to : fact > > Ambiguous type assigned to fact. Ok. Not sure what to make of that or how > to correct it. I though I was passing fact an Integer, since I think nums > is a list of Integers, since the sum and product lines in f work ok. > > Help? Thanks. Okay, that's a nice one :) Don't be disappointed by the rather prosaic reason for it. First, if you don't already know what ambiguous type means: in the context "(Num a, ..., Monad ((->) (c -> c -> c)))" there appears a type variable, c, which doesn't appear on the right hand side a -> [b] -> [a]. I will not explain how that monstrous type arises, though it is a nice exercise in type inferring. module AskForNumbers where import) ^^^^ There are several things that can be improved here. First, don't use "if length l == 0". That has to traverse the whole list and if that's long (or even infinite), you're doing a lot of superfluous work. To check for an empty list, use "null", so "if null l then ...", that's far more efficient because null is defined as null [] = True null _ = False Also, it would be better to define listFactorial by pattern matching: listFactorial [] = ... listFactorial (k:ks) = do fact k listFactorial ks Next, why do you return 1 for an empty list? Probably to match the type of the other branch, but that that isn't so well chosen either. 
And since fact only returns something and does nothing else, listFactorial actually is "do something without any effects and then return 1", that isn't what you want. What you want is that for each k in the list it prints k factorial is ... That suggests that you either let fact do the output, so that fact would get the type Int(eger) -> IO () or let fact be a pure function calculating the factorial and have listFactorial (k:ks) = do putStrLn (show k ++ " factorial is " ++ show (fact k)) listFactorials ks f = do nums <- askForNums putStr ("Sum is " ++ (show (foldr (+) 0 nums)) ++ "\n") putStr ("Product is " ++ (show (foldr (*) 1 nums)) ++ "\n") listFactorial nums There are library functions "sum" and "product", you could use them. Instead of putStr (some string ++ "\n") you could use putStrLn (some string) but all this is just a matter of personal taste. Now comes the culprit: fact n = if n == 0 then return 1 else return foldr (*) 1 [1 .. n] The else-branch should be return ( foldr (*) 1 [1 .. n] ) or return $ foldr (*) 1 [1 .. n] Without the parentheses or ($), it is parsed as ( ( (return foldr) (*) ) 1 ) [1 .. n], which leads to the interesting type of the error message. If you're interested, I could explain how hugs infers that. > > David HTH, Daniel | http://www.haskell.org/pipermail/beginners/2009-January/000678.html | CC-MAIN-2014-41 | refinedweb | 798 | 66.67 |
This is related to my previous post:
"Dev C++ linker error: undefined reference"
But I have progressed past those symptoms and figured I should start a new thread.
I have imported an MSVC TextEditor project named NoteXPad2 into Dev-C++. Compiling the thing has tunred out to be a real pain in the butt. I just updated all of my header files using microsofts Platform SDK update. This completely changed the set of errors I am seeing. So....
90 percent of the errors seem to trace back to two issues:
The first one begins with a warning:
C:\Dev-Cpp\include\oaidl.h:442
[Warning] pasting "/" and "/" does not give a valid preprocessing token
NOTE: oaidl.h is not one of my files. This is a header file.
Line 442 of oaidl.h is just
_VARIANT_BOOL bool;
_VARIANT_BOOL is defined in wtypes.h as:
#if !__STDC__ && (_MSC_VER <= 1000)
/* For backward compatibility */
typedef VARIANT_BOOL _VARIANT_BOOL;
#else
/* ANSI C/C++ reserve bool as keyword */
#define _VARIANT_BOOL /##/
#endif
Im not sure what my _MSC_VER is. Im on Win2k pro.
Obviously its hitting the slashes in the else statement and not liking it. This is only a warning but it leads to
442 C:\Dev-Cpp\include\oaidl.h
parse error before `/' token
at the point _VARIANT_BOOL is actually used. Again, the line in question is simply:
_VARIANT_BOOL bool;
so the bad '/' token HAS to come from the #define _VARIANT_BOOL /##/.
The second problem source is much easier to explain:
Every parse error (and there are a lot) that is not explained by the above, just happens to have __unaligned before it.
__unaligned is defined in ShTypes.h:
#if defined(_M_IX86)
#define __unaligned
#endif // __unaligned
But I can find nowhere that _M_IX86 is defined. (Ive done a search of my entire hard drive and found nothing)
Nowhere in my errors does it actually state "__unaligned undeclared" or anything of that nature, but everywhere __unaligned is used it is followed by a parse error.
I cant believe its coincidence.
So ive traced it all back to two damned lines of code, and I STILL cant figure it out!!!!! ARGH!!
Any help would be greatly appreciated.
-Josh | https://cboard.cprogramming.com/windows-programming/50470-fun-_variant_bool-__unaligned.html | CC-MAIN-2017-51 | refinedweb | 360 | 66.03 |
Hi Strato,
Some questions are more suitable for forum and the other to submit a ticket.
If you expect the community can help with some of your problem, then here is the place to see various...
Type: Posts; User: josipradnik
Hi Strato,
Some questions are more suitable for forum and the other to submit a ticket.
If you expect the community can help with some of your problem, then here is the place to see various...
Error messages are here:
Hi Georg,
You need to create your database and table(s) in it.
I did it and implemented in lianjademo app.
I added Lianja's message "Record was updated." in original field and my translation in...
Hi Strato
Here is a collection of information regarding memo fields in one place:
Hi Strato,
Show us some of your code so we can find what is wrong.
Or upload the package (lpk) of your app so we can reproduce your problem.
Josip
In documentation:
Hi tekhong,
I have just tried VFPxWorkBookXLXS.VCX and it works well as a COM in Lianja. It seems like a nice VFP class.
But I would rather that you try it first with Python and then show us...
Hi Fabio,
You should show me error text or a screenshot.
I suppose your C# does not start with
using System.Runtime.InteropServices;
I need not to worry about registering it because C# project has set in its Properties,
Tab "Build Events",
in "Post-build event command line" something like that:
"C:\Program Files...
It's red all the time.
20:58:26 start, 20:59:16 - red
21:00:56 restart, 21:02:41 -red
2220
Nothing shows. No .F. result.
Debug_client.txt:
Beta5. Red while runnung.
2219
If I properly understand where to put int(),
?ox2.dynamicCall("foxproc5(int(),int()",3,4)
.F.
Beta5, the same result in all four variants.
I made 4 variants of procedure's returned value (foxproc2,foxproc3,foxproc4,foxproc5):
2217
According debug_client.txt It is irrelevant for dynamicCall():
After?
Here it is:
SET DEBUG ON
before ox2 = CreateObject("vfpcomserver2.vfpcom2")
then exiting Lianja to see debug_client.txt
This is the content of debug_client with signature of foxproc2:
Opening Lianja...
Nope. This does not work for me either:
?ox2.dynamicCall("foxproc2(int,int)",3,4)
.F.
I was playing here with QString instead of int, too. Function dynamicCall works only if there are no...
Wow, there are lot of information in debug_client.txt on ox2 = CreateObject("vfpcomserver2.vfpcom2")
(needed to exit Lianja to open the file):
In my case I wrongly declared foxproc2...
Hi Barry,
I had a problem with your second link (maybe because this is MSDN blog, and my MSDN subsription is expired).
For others interested in, here is the correct link:...
Steps:
In VFP
2209
... after DO BUILDVFPCOM.PRG in VFP console (see time shown)
...testing the object in VFP:
Hi Fabio,
This solution is for
DO vfpcomserver.prg
on the same machine where you will CREATEOBJ and use it.
VFP was doing COM registration with regsvr32 for you.
Otherwise, you need to...
Hi Fabio,
Here is how I managed to use C# DLL with a little help of VFP (desktop app):
Josip
Hi,
You have 4 demo apps with attachments to study:
I suppose you are looking for something like sysmetric() , sys() and _screen stuff, i.e. to know client's screen geometry.
Lianja is primary web- and mobile-oriented, to be independent of screen...
Hi,
Page:
Section:
Osection = createobject("section")
or Lianja.addObject | https://www.lianja.com/community/search.php?s=260fd23a89a75fc23e3b8d5aa8e5ee4c&searchid=973647 | CC-MAIN-2020-50 | refinedweb | 585 | 68.06 |
Map a Mediator to a Baseclass.
Hello guys!
I've just run into a problem as building my robotlegs application. I hope you'll be able to get me some help :) Let me explain.
I have a world map within an external swc, that contains all the countries in the world in specific movieclips. I need to be able to map each and every one of them to a common mediator, say "CountryMediator". My first idea was to create a baseclass that every movieclip extends, but nothing was mapped to these classes when they were added to the stage (and they surely were). I think it's kind of normal as the concrete class that gets exported every time is"France" "United Kingdom" "Spain", based on the class "Country".
Is there any workaround so I just have to write one mapping rule allowing me to map all the countries base on the Country class to the Country mediator?
Thanks for you help!
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
Support Staff 1 Posted by Stray on 23 Oct, 2012 02:05 PM
Hi Thomas,
If you make sure that Country implements an interface, you can use the ViewInterfaceMediatorMap utility to have one rule manage all instances of Country.
The util is here:...
hth, if you have any problems with it do come back,
Stray
Support Staff 2 Posted by Ondina D.F. on 23 Oct, 2012 02:08 PM
Hi Thomas,
Please take a look at the discussion and the utility I linked below, and if you can’t find an answer in there, come back with more questions:)-......
Cheers,
Ondina
3 Posted by thomas.pujolle on 23 Oct, 2012 02:33 PM
Thanks for that ! i'm at this moment trying this ViewInterfaceMediator.
I have 6 WorldZones in my swc, based on the class WorldZone which implements IWorldZone.
But for my 6 views, I get 5 errors (after the first instantiation) saying:
Warning: Injector already has a rule for type "com.worldmap.view::WorldZoneMediator", named "".
If you have overwritten this mapping intentionally you can use "injector.unmap()" prior to your replacement mapping in order to avoid seeing this message.
I have this in my mediator:
[Inject] public var worldZone:IWorldZone;
And this in my mediator map:
mediatorMap.mapView(IWorldZone, WorldZoneMediator);
Am I doing this right? It's weird because it works once, and then fires a warning.
Support Staff 4 Posted by Ondina D.F. on 23 Oct, 2012 04:47 PM
You’re getting those warnings because you’re (probably) using robotlegs 1.5+ and SwiftSuspenders 1.6.
ViewInterfaceMediatorMap worked well with robotlegs v1.4, where robotlegs and swiftsuspenders were in sync.
Stray had a patch for robotlegs’ Context.as, but I can’t find any swcs reflecting the changes.
Try using Stray’s fork of robotlegs.
Support Staff 5 Posted by Ondina D.F. on 23 Oct, 2012 04:52 PM
Context.as:......
6 Posted by thomas.pujolle on 25 Oct, 2012 08:53 AM
Thanks a lot for you help!
I've try to add the previous fixes to my code but it does not seem to help avoiding the warnings. I'm sorry my comprehension of RL is not good enough to allow me to fix this all by myself :/
Do you think I may construct my context the wrong way? I have this:
public class ApplicationContext extends InterfaceEnabledMediatorMapContext {
And this:
public class InterfaceEnabledMediatorMapContext extends Context {
Thanks a lot!!!
Support Staff 7 Posted by Ondina D.F. on 25 Oct, 2012 10:13 AM
Hey Thomas,
No problem:)
It would be easier if you used robotlegs source and replaced Context.as with the patched one. It would work with SwiftSuspenders 1.6, too. I’ll attach rl source with those changes.
Let us know if it worked.
Ondina
8 Posted by thomas.pujolle on 25 Oct, 2012 10:31 AM
All right!
So if I understand the .zip contains the RL sources with the patched Context.as, which will prevent me from getting stuff like:
Warning: Injector already has a rule for type "com.worldmap.view::WorldZoneMediator", named "".
If you have overwritten this mapping intentionally you can use "injector.unmap()" prior to your replacement mapping in order to avoid seeing this message.
And I should just add SwfSuspender 1.6 at the top of this, right?
Support Staff 9 Posted by Ondina D.F. on 25 Oct, 2012 10:33 AM
Exactly.
10 Posted by thomas.pujolle on 25 Oct, 2012 10:36 AM
It does not seem to work :( I didn't find the required adapters in your .zip so I used so ones in the latest RL (1.5.2).
What do you think could cause this? My context extends this one, which extend the one in your .zip:
Support Staff 11 Posted by Ondina D.F. on 25 Oct, 2012 10:39 AM
Have you deleted the rl swc and refreshed the project?
12 Posted by thomas.pujolle on 25 Oct, 2012 12:44 PM
I have.
I have the swf suspender SWC + your .zip source code + the two adapters found in the latest robotlegs (master branch).
Still something seems to be wrong :(
13 Posted by thomas.pujolle on 25 Oct, 2012 12:48 PM
I need to tell my SWF works perfectly anyway, I just wonder if it's going to impact performances, maybe not?
Support Staff 14 Posted by Ondina D.F. on 25 Oct, 2012 01:10 PM
One moment, please. I'm about to answer.
Support Staff 15 Posted by Ondina D.F. on 25 Oct, 2012 01:41 PM
I’m back.
I guess it’s because you’re initializing your context in actionscript.
Try initializing your context inside of the declaration tag, like in piercer’s example:
<fx:Declarations>
<mvcs:InterfaceEnabledMediatorMapContextExample
</fx:Declarations>
See if this works with the patched Context. If it doesn’t work either, then maybe I gave you the wrong patch. I don’t know whether there is a newer version or another solution.
Stray would know more about this.
The warnings aren’t that important, and won’t affect your app’s performance, so, I guess, you could live with it;)
16 Posted by thomas.pujolle on 25 Oct, 2012 01:51 PM
I'm not using flex so I won't be able to try this out, but I think I'm going to stand by what I have right now if you say it does not impact the application, thanks anyway for your time :) RL rules!
Support Staff 17 Posted by Ondina D.F. on 25 Oct, 2012 02:28 PM
No problem, Thomas.
I’ll close this discussion for now. You can re-open it, if need be.
Cheers,
Ondina
Ondina D.F. closed this discussion on 25 Oct, 2012 02:28 PM.
thomas.pujolle re-opened this discussion on 12 Nov, 2012 02:18 PM
18 Posted by thomas.pujolle on 12 Nov, 2012 02:18 PM
Hello guys,
I think I'll have to get through this another time, I have issues when deleting views.
No errors thrown but the flow gets broken at some point, after a mediator deletion. After that, no other mediator of this type is created.
Has anyone in one of his project a version of RL core + the context fix + ViewInterfaceMediatorMap working?
Otherwise I'll have to create this by myself, although I didn't get it right last time.
Thanks for you help :)
Support Staff 19 Posted by Ondina D.F. on 14 Nov, 2012 04:40 PM
Sorry, that you didn't get any answers. I don't know how to help you.
Support Staff 20 Posted by Ondina D.F. on 11 Dec, 2012 11:51 AM
Hi Thomas,
Have you found a solution?
If not, have you considered using robotlegs 2? I don’t know if you can afford to migrate your project to rl2, but rl2 offers easier and cleaner solutions for your use case. Just saying:)
Ondina
P.S. As you know, you can reopen the discussion, if you want to answer
Ondina D.F. closed this discussion on 11 Dec, 2012 11:51 AM. | http://robotlegs.tenderapp.com/discussions/problems/664-map-a-mediator-to-a-baseclass | CC-MAIN-2019-13 | refinedweb | 1,380 | 76.01 |
Java has two memory areas which are Heap and Stack
————————————————–
Objects live in Heap ( Garbage collectible Heap)
Variables and methods live in Stack
Instance variables the variables declared in a class and are associated with an Object.They live on Heap.
Local variables are the one declared in a method which include the method parameters as well.They are live only as long as the method is on the stack.
When you call a method, the method gets pushed on top of the stack frame, which holds the line of code and values of the local variables.The method on top of the stack is always the one that is currently running.In case there is a method foo() which is calling method bar(), then bar is currently running hence that will be on top of the stack.
Values of an Object Instance variables live inside the Object in the Heap.
If the Instance variable is primitive type then Java makes space for the instance variable based on the primitive type, example int needs 32 bits.
If the Instance variable is of type object
example
cellphone()
public class cellphone
{
private antenna ant = new antenna();
}
When you create a cell phone object , Java will create an object in Heap and when the antenna is created it will be linked to the cellphone object.
duck myduck = new duck();
We have instantiated so many objects before but haven’t talked about this. What does new duck(); do ?
yes it creates an object in heap. But how? duck() looks like a method.
Even though duck() looks like a method it is not. It is called as a constructor. constructor is the code that gets called to create an object.constructor gets called when new keyword is used. JVM finds the class and invokes the constructor of that class. Every class has a constructor even if you dont write it.
wait a minute i haven’t written the code, so who did it and how is it available to me?
Well, The compiler writes one for you.
Compilers default constructor will look like this
public duck()
{
}
See there is no return type which is present for methods. The name is same as the class and that is mandatory.
Constructor runs before the object gets assigned to the reference. Which means we can get in middle of it and make some changes if needed.
Constructor is used to initialize the state of an Object which is Instance variables. It is possible to have the method with same name as the class.Constructor is not inherited.
Fun practice – Euler problems
Object should not be used unless it is correctly initialized.
One way to do is by placing the initialization code in the constructor.
public class duck
{
int size;
public duck(int ducksize)
{
System.out.println(“quack”);
size = ducksize;
System.out.println(“size is”+size);
}
}
public class useaduck
{
public static void main(String[] args)
{
duck d = new duck(42);
}
}
Let say in the example of a duck we need to set the size, what if the person who invoked it does not know the value in which case there has to be a default value.
To fulfill the above criteria we can have a default constructor and parameterized constructor.
public class duck
{
int size;
public duck()
{
size = 20;
}
public duck(int ducksize)
{
size = ducksize;
}
}
if you write a parameterized constructor then the compiler wont provide default constructor.If you were to create any constructor, compiler will back off from its duty.
overloading constructors. Constructors can be overloaded , provided they all have different argument list. Constructors can be public, private or default.
Side note – When an object is created that object will have all the instance variables associated with it and up-to the inheritance curve. That means Object will have a copy of variables from Parent class etc.
All the constructors in an objects inheritance tree must run when you make a new object. Technically even Abstract class will have its constructor run. For an object to be fully formed super class constructor must run to build the super class parts of the object. All the instance variables from every class in the inheritance tree have to be declared and initialized. Even if Super class has instance variables that are private, because the methods will be inherited by the sub class.
When a constructor runs, it immediately calls its superclass constructor, all the way lip the chain until you get to the class Object constructor.
The Order in which a constructor gets called is Parent and then the child.
The Parent class constructor can be called using the keyword super.
public class duck extends animal
{
int size;
public duck(int newsize)
{
super();
size = newsize;
}
}
key point to note is, even though you would not call the constructor of the parent class the compiler will still call, But if the compiler is gonna call, it will call only the default constructor ( Even if the parent class has a overloaded constructor the default constructor is the only one that will be called)
call to the super class constructor must be the first statement.
Below will not compile
Public cat(int csize)
{
size = csize;
super();
}
public abstract class animal
{
private string name;
public string getname()
{
return name;
}
public animal(string thename)
{
name = thename;
}
}
public class hippo extends animal
{
public hippo(string name)
{
super(name);
}
}
public class makehippo
{
public static void main(String[] args)
{
hippo h = new hippo(“Buffy”);
System.out.println(h.getname());
}
}
Above example, Hippo has to call the super class constructor with name because Hippo does not have the name instance variable. Hippo depends on Animal to return the name through getname().
this() refers to the current object and it can be used only within a constructor. and it must be the first statement.every constructor can have either super or this but not both.
I dont get it why use this over super.
Local variables , the ones in the method will be alive only within the method.
Instance variables are associated with Object. So as long as the object is in scope the variables will be alive.
An Objects life depends on the life of references to it.
Life
A local variable is alive as long as its Stack frame is on the Stack. In other words,
until the method. completes.
Scope
A local variable is in scope only within the method in which the variable was declared.When its own method calls another, the variable is alive, but not in scope until its method resumes. You can use a variable only when it is in scope.
reference variables can be used only when they are in scope. Object is alive as long as there are live references to it.
An object becomes eligible for GC when its last live reference disappears.
Three ways to get rid of an object’s reference:
1) The reference goes out of scope, permanently
void go ()
{
Life z = new Life ()
}
z dies at the end of the method
2) Reference is assigned to another object
Life z = new life();
z = new life();
3) The reference is explicitly set to null
life z = new life();
z = null;
If you use the dot operator on a null reference. you’ll get a Null Pointer Exception at runtime.
When you are looking into the object creation and references, there could be multiple references to the object so as long as all the references are de referenced the object will continue to exist.
let say
a = new class();
b = a;
a = null;
Here even though the reference a is being de referenced the variable b is still pointing to the old object, so the object will not be flagged to garbage collection.
next page 293
More in next part.
References:
Head First Java 2nd Edition | https://knowingofnotknowing.wordpress.com/2016/06/17/java-beginners-part-12/ | CC-MAIN-2018-22 | refinedweb | 1,299 | 71.34 |
Please make the class "XamlLoader" public. It is found in the namespace Xamarin.Forms.Xaml.
Or...
Make the "LoadFromXaml" method in Xamarin.Forms.Xaml.Extensions public.
One of these is required to dynamically load UI from Xaml at runtime.
We will be storing Xamarin Forms Xaml in our database. We will be dynamically loading this Xaml in to the UI at runtime. We have already proved that we can use the "LoadFromXaml" method in Xamarin.Forms.Xaml.Extensions to achieve this through reflection. However, the method is not currently public. We need to load UI dynamically from Xaml at runtime. No ifs, no buts. This is an absolute, and unwavering requirement. If Xamarin Forms cannot do this, the technology can not be used to achieve our goals.
I started this thread here:
People all over the web are requesting the same thing: that UI be loaded dynamically from Xaml. Our Silverlight product is based heavily on dynamically loaded UI being loaded from Xaml, and we wish to continue this paradigm. There appears to be no impediment to making the method to load UI from Xaml a public method. Here is an article about this:.
So, please make these APIs public so that there is no need for a work around. If there is a strong reason not to make them public, please explain why this is the case. If there is a strong reason not to make it public, we need to know now so that we do not go too far down the path of building our solution around this workaround.
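For concreteness, the reflection workaround we are running with looks roughly like the sketch below. To be clear, everything internal here is an assumption observed from the current release - the assembly name, the type name `Xamarin.Forms.Xaml.Extensions`, and the generic `LoadFromXaml<TXaml>(TXaml, string)` signature could all change without notice, which is exactly why we want the method made public:

```csharp
// Sketch of the reflection workaround - NOT a supported API.
// Assumes the internal static class Xamarin.Forms.Xaml.Extensions exposes a
// generic LoadFromXaml<TXaml>(TXaml view, string xaml); this is an internal
// detail of the current release and may change between versions.
using System.Linq;
using System.Reflection;
using Xamarin.Forms;

public static class DynamicXaml
{
    public static TView LoadFromXamlString<TView>(TView view, string xaml)
        where TView : BindableObject
    {
        // Locate the internal extensions type inside the Xamarin.Forms.Xaml assembly.
        var extensionsType = Assembly.Load(new AssemblyName("Xamarin.Forms.Xaml"))
            .GetType("Xamarin.Forms.Xaml.Extensions");

        // Find the internal LoadFromXaml(view, string) overload and close the generic.
        var method = extensionsType.GetRuntimeMethods()
            .First(m => m.Name == "LoadFromXaml"
                        && m.GetParameters().Length == 2
                        && m.GetParameters()[1].ParameterType == typeof(string));

        method.MakeGenericMethod(typeof(TView)).Invoke(null, new object[] { view, xaml });
        return view;
    }
}
```

Usage is then a one-liner, e.g. `var page = DynamicXaml.LoadFromXamlString(new ContentPage(), xamlFromDatabase);`.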
I can see this being useful for white-labelled applications like my own to offer more specific customisation; however, there is a performance reduction when using non-compiled xaml. If there is a way to pre-render a xaml file in a background thread and reuse that "pre-rendered view", that could be interesting.
i.e. provide the xaml to a pre-renderer which parses the xaml as a view that can be reused in some way (i.e. the binding context will change).
Please... for the love of God, do not turn this in to a debate. Everyone is entitled to their opinion, and if you believe that the performance hit is too problematic, then by all means, feel free not to use dynamic loading of UI with Xaml. For my two cents, there may or may not be a performance hit, but this performance hit is irrelevant considering the massive benefits that come along with storing your UI in the database. The bottom line is that people are welcome to build apps however they want. In our case, we have no choice. We have to dynamically load UI based on Xaml. That Xaml absolutely must be stored in the database. Our entire infrastructure works this way in Silverlight. It always has, and is the core strength of our system.
All data binding will be done at the Xaml level so that data binding is configured in the database and not hard coded in to the application. But yes, you are right, the navigation engine will manage the BindingContext. We have already shared the code across Silverlight and Xamarin Forms so that the engine for swapping in the BindingContext is the same between the two platforms.
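To illustrate what that looks like in practice (the markup, view model, and names below are hypothetical stand-ins, not our actual screens): the database holds markup with bindings declared inline, the view is inflated at runtime, and the navigation engine then supplies the BindingContext, so the bindings resolve just as they would for compiled-in XAML:

```csharp
// Hypothetical example: markup of this shape lives in the database with its
// bindings declared inline; the navigation engine assigns the BindingContext
// after the view is inflated. Loading goes through the internal
// Xamarin.Forms.Xaml.Extensions.LoadFromXaml, invoked via reflection today
// (signature assumed from the current release).
using System.Linq;
using System.Reflection;
using Xamarin.Forms;

public class CustomerViewModel
{
    public string CustomerName { get; set; } = "Acme Pty Ltd";
}

public static class ScreenFactory
{
    const string StoredXaml = @"
<ContentPage xmlns=""http://xamarin.com/schemas/2014/forms""
             xmlns:x=""http://schemas.microsoft.com/winfx/2009/xaml"">
  <Label Text=""{Binding CustomerName}"" />
</ContentPage>";

    public static Page Build()
    {
        var page = new ContentPage();

        // Invoke the internal generic LoadFromXaml<TXaml>(TXaml, string) via reflection.
        var extensions = Assembly.Load(new AssemblyName("Xamarin.Forms.Xaml"))
            .GetType("Xamarin.Forms.Xaml.Extensions");
        extensions.GetRuntimeMethods()
            .First(m => m.Name == "LoadFromXaml" && m.GetParameters().Length == 2
                        && m.GetParameters()[1].ParameterType == typeof(string))
            .MakeGenericMethod(typeof(ContentPage))
            .Invoke(null, new object[] { page, StoredXaml });

        // Swapping in the BindingContext makes {Binding CustomerName} resolve.
        page.BindingContext = new CustomerViewModel();
        return page;
    }
}
```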
It's not a debate. This might affect the performance of your application, and that will hurt the platform reputation
Oh wow, it might affect performance and that will hurt platform reputation? Surely you are not serious?
Anything misused will affect performance. Why not throw an exception in the xaml compiler as soon as nesting reaches 4 levels deep? That too will affect performance and "hurt platform reputation".
Why is Xaml compilation opt-in/opt-out then? Are you planning on making it mandatory?
That's just a silly stance. You can not stop inexperienced or rushed developers from misusing your platform and your platform reputation should not stand on what the worst or even the average developers do with it, as long as the best ones are able to use it in a powerful way, the reputation should be good.
This feature is very powerful when used sparingly and optimally. I vote to make it public.
Settle down, I was not starting a debate. I was stating the obvious implications of running dynamic Xaml, and that is a performance hit on complex views: it slows down page load, otherwise why would they have bothered with a precompilation system? And for when that performance impact is problematic, offering a possible solution (i.e. expanding your idea). When did I say that you shouldn't do any of this? Maybe take the carrot out of your ..? if you want people to take you seriously.
In general I like this idea as I can see it being useful for my apps too, as everything is white-labelled and configurable. But it would be nice to have something that allows pre-rendering of the dynamic content for more complex views.
Can the workaround currently render full views AND controls (i.e. a view cell)?
This could also be done using JSON in some instances rather than Xaml but that wouldn't be as ideal.
@MarkRadcliffe
I didn't want this to degrade in to what you are dragging this in to, but your point is moot anyway. You obviously don't understand how Xaml works. There is no performance hit anyway. You can view the code here: . Regardless of whether you compile the Xaml files in to resources in your project, or the Xaml is passed in from some other source via the LoadFromXaml method, the Xaml is still parsed at runtime. I think that's where you're getting confused. You think that Visual/Xamarin Studio magically does this work for the app before the app is run. This is not correct. That is only the case if you use XAML Compilation. The Xaml is parsed at runtime, and the code behinds merely traverse the visual tree of controls to find the objects for the variable names. The code behinds actually call the same method LoadFromXaml that I am referring to.
@StephaneDelcroix, and @MarkRadcliffe, can I assume that you are using XAML Compilation for all of your Xamarin Forms ()? Would you have it that all Xamarin Forms programmers be forced to use this feature for all forms? If not, then you've misunderstood something.
So please, if you don't want to use the feature, don't use it. Meanwhile, others are in need of the feature.
Even if this does cause some kind of performance hit, I want to hear it from the horse's mouth - the Xamarin Forms Team. If performance is the reason why this method has not been exposed publicly, could the Xamarin Forms Team please explain this?
I also second @Irreal 's comments.
Whenever someone builds an application, the onus is on that programmer to pay attention to performance. A programmer who does not pay attention to performance on a good platform will write a slow, non-responsive app. A programmer that pays attention to detail can make the best of performance even when the platform is designed poorly in this respect. Any platform's features can be misused to make performance poor. I am telling you clearly here that dynamically loading Xaml at run time - with the use of the methods I am referring to - is not a performance issue. I've tested it. It works, and it does not cause a degradation of performance in any meaningful way.
So, the request is for the method(s) to be made public and to be fully supported by the Xamarin Forms Team. If there is some reason for this not to happen, I'd like to know why.
Is there a chance anyone from the Xamarin Forms team could provide some information on the status of this? We are running with the reflection hack. If there's a reason we should not be doing this, we want to know about this now rather than later. If it turns out that this feature will never be directly supported, we need to know about this.
I vote to make it public!
Yes. Has the Xamarin Forms team got anything to say about this?
Could we at least get a word or two on why this hasn't been made public yet?
@StephaneDelcroix is the one responsible for what XAML in Xamarin Forms is today. Take note when he chimes in.
Thanks @ChaseFlorell
You mean responsible for XAML inside Xamarin Forms? Or, you mean literally the entire technology?
right, in Xamarin Forms... I'll edit.
@StephaneDelcroix , if I could get a word in quickly...
We've been powering forward with sharing our UI engine across Silverlight and Xamarin Forms. With the use of the hack that I've mentioned previously, we are well on our way to being able to dynamically load a customer's UI at runtime from the database. I've tested this and dynamically loading the Xaml works really well. Even binding works correctly.
If this method isn't made public, we'll have to go with the hack that I've implemented because this isn't an option for us. The customer's UI needs to be stored in the database because each customer has a different set of screens and customization in our software.
So, I'm begging you to make the method public and allow it be fully supported. However, if there is a devastating reason as to why we shouldn't be using dynamic loading of Xaml, I'll listen and evaluate what you say.
@MelbourneDeveloper are you able to create your custom fork and open it up for your needs? I do see Stephane's point about performance.
@ChaseFlorell , you see Stephane's point? How, in your opinion do you think that this would impact performance?
Well obviously, Xaml Compilation is the future and is to be preferred by almost everyone, because of performance reasons. When it reaches full feature partity, it can be enabled by default and every app will benefit from it, except yours - yours will break because it requires the dynamic loading, which will become obsolete. And for many people, it already is obsolete as xamlc it already so much better.
I understand your platform works the way it does, but that doesn't necessarily mean that your goals are future-proof if dynamic loading is not the way Xaml is going to go.
BTW, our company is doing something very similar, but based on JSON. I recommend you grab the source code and package your own Xaml loader (independent of Xamarin.Forms), that's really not a complicated task since everything is open source. Or, maybe do your own UI format and load it at runtime.
I think that for most use cases, Xaml Compilation is the future and I can understand it very well if Xamarin doesn't want to support dynamic xaml loading in the long term.
We have another use case for this: We are defining styles (e.g. colors) in App.xaml and use them, among other places, in our view models. For those view models we have unit tests and we would like to be able to load the definitions in the unit tests.
@gahms Isn't that what
ValueConverters are for? I'm curious how a color can be useful in a ViewModel.
@TobiasSchulz.9796
Not obviously. Performance optimization is always a consideration in app development. When there's a clear performance benefit from using some feature, it's usually worthwhile implementing it. In regards to dynamic loading of Xaml, there's no clear performance hit from parsing the Xaml at runtime. If it's really that important to you, I could make some objective measurements of time to expose exactly how long it takes. But, I can tell you that it's nothing that that is noticeable at the user experience level.
Saying that though - I am open to looking in to compiled Xaml. I'm not sure if compiled Xaml will ever have the full feature set of runtime parsed Xaml, but assuming it does, I'd be interested in using it.
The problem with compiled Xaml though - is the overhead of having to precompile the Xaml in to physical DLLs, and then somehow have those DLLs be dynamically loaded at runtime. We are talking about store apps here. So, if each customer has a different UI, they Xaml will need to be compiled in to separate DLLs for each customer. And, I'd have to create a tool which handles compiling to DLLs, and deploying those DLLs to some place where they can be grabbed by the app at runtime. At this point, I'm not even sure if that is possible on all the different platforms. I have not tested this yet, and I know that Apple are very strict about security so I am not sure if the iOS platforms will allow us to dynamically load DLLs at runtime.
The other option of having a different store app for each customer is completely out of the question...
Note: @TobiasSchulz.9796 - that as yet - it seems that it is not possible to load assemblies dynamically at runtime, so your point is moot unless there has been some change in the framework recently.
@gahms
This is another typical use case for dynamically loaded Xaml.
So, anyway, to recap - Xaml compiled Xaml may be an option in the future as long as it is possible to implement properly. Perhaps someone could tell me how easy it is to dynamically load DLLs at runtime on all the platforms. But, for now, we need this method public. We are already using the method and it is not a performance hit for what we are doing.
If the Xamarin Forms team has an opinion on this topic, I'd really like to hear it.
@DavidDancy - ValueConverters can be part of the solution - but only part of.
A lot of people that use Xamarin Forms aren't used to the raw power that Xaml can provide. If you come from a Silverlight/WPF/UWP background, you can see that styling can be stored in the database on a customer by customer basis. This is easy to achieve on those platforms. This is part of the reason why I'm pushing for this to be done. If the method for parsing Xaml were made public, styles could be easily applied at runtime like the other platforms, and there would be no need to write ValueConverters in to the binding of each piece of Xaml.
@MelbourneDeveloper I've handled this scenario slightly differently than you, I think. We have an app that we've turned into a white-label offering that can be re-skinned for different clients. In addition we offer customisations that go beyond simple changes of colour and font.
To achieve this I've made use of the DI container. It happens that we use FreshMvvm as our tool to automatically connect a ViewModel to each of our Views, but any DI container should work.
Normally, the FreshMvvm system constructs a
Pageto match the
PageModel(aka ViewModel) that we're navigating to by using a naming convention and reflection to find the right pieces. However I can override this mechanism by registering in the DI container client customised
Page/PageModelpairs that I want to use instead of my "stock"
Page/PageModelpairs. These customisations I typically put into a project (DLL) of their own, one per client.
The end result is that all the code for all interfaces can be built at compile time, and we use all the facilities of the IDE for designing the UI. We can take advantage of XamlC and the only runtime hit is loading things into the DI container and finding them there when they're needed. There's no dynamic XAML loading, and no need for it either. Plus the linker can get rid of stuff we don't use.
I'm not saying that dynamic XAML loading wouldn't be useful - just that we found a different way of achieving the same end result without needing it.
This doesn't sound like a bad solution. But, I have to ask the obvious question. How are you managing these customizations? Are you compiling every single customer's customized UI in to the same projects, and deploying all that to all your customers? Or, are you doing selective builds with a different physical app package for each customer?
@MelbourneDeveloper selective builds, one per customer. My solution file is pretty unwieldy (I have a v2 solution that's a bit better, and has to rely on build scripts to adjust things like bundle ids / provisioning profiles and the like) but I can build a client's app with full XamlC on everything, and all they get is what applies to them. There's a core DLL that contains the
AppDelegate/
MainActivityplus all the functionality that applies to all the clients; but then with the custom DLLs I can provide whole screens and extra functionality for individual clients without affecting anyone else. I kind of fell into it but I'm now working on a template that will enable me to rubber-stamp the solution structure for new apps too.
How many customers do you have? We have at least 10 and we are gaining more all the time. That means at least 30 apps (Android, iOS, UWP) in the app store to maintain. How do you manage this?
Xamarin Forms Team, @StephaneDelcroix
Could you please chime in on this? I agree that compiled Xaml is a great concept. But, the point is that we need to load Xaml dynamically at runtime. I'm happy to look in to pre-compiling Xaml, but if that's what we've got to do, then we're going to need dynamic loading of assemblies at runtime which becomes a totally different feature request that may not even be feasible because of Apple's strict security policies (No Emit)
Could you please indicate one way or the other what your recommendations are, and where you think the technology will head in the future?
@MelbourneDeveloper I fear we may be getting off-topic
Happy to discuss in a new thread if it suits. Shall I start it or will you?
@MelbourneDeveloper our solution doesn't involve dynamic loading of DLLs. Everything is compiled and the only runtime mechanism is to register custom pages / functionality in the DI container.
@DavidDancy
Understood. The question is how this is managed...
Do you have a separate app for each customer in each app store?
@MelbourneDeveloper Yes we do. Our customers create their own Apple / Google developer accounts, and then make us admins of those accounts so we can publish on their behalf. Then we manage the whole of the rest of the process for them.
@DavidDancy , how many customers in how many stores do you manage? How often are UI changes made? Doesn't this become unwieldy?
We have about 10 customers who will want to have apps in several stores. That number is growing. We have periods where the UI changes on a daily basis as we fine tune the UI based on constant feedback from the users. Redeploying to all those app stores would be an absolutely massive overhead for us, and it would also discourage us from making small tweaks to the UI at the drop of a hat.
For example, currently, if a user says that a scrollbar is too thin to use in a touch environment, we tweak the Xaml in one spot, and then the scrollbar width is rolled out to all users instantly without needing to redeploy anything. This is the whole point that I am getting at. There is no reason why this can't be done with Xamarin Forms. Xamarin Forms has this feature built in. It's just not made public for some reason that has yet to be established.
@MelbourneDeveloper I can see that your scenario pretty much demands the use of dynamic XAML. We're a bit more static than that with nowhere near as many UI updates. In addition although it's a white-label app we don't (yet) release updates of the app to all clients at the same time. Your update loop would be much longer with our way of doing things as you'd have to wait for app store approval all the time. So your solution is a great way to short-circuit that process.
@DavidDancy , thankyou.
I wish that I worked in an environment where we could deploy an app to a several customers at once and expect that the UI not change until the next release. However, this is the not the reality of our industry. Our customers expect change quickly, and not just in the development phase. They demand that the UI be tweaked at the drop of the hat, and this has been our core strength in our industry. Our customers are tired of vendors that promise that they can customise the application for them, but then charge $10,000+ to change a label or whatever. We regularly add fields, or entire panels of fields on the fly during production usage. This is one of the reasons our customers remain loyal - because we are able to deliver a truly tailored solution without maintaining 10 different branches of code. What we deliver to our customers is the ability to fine tune UI as the needs of the business changes, and without app redeployment.
This may not be a requirement for every development house, but this is certainly a requirement for us. And so, we need this method made public.
@MelbourneDeveloper - I have done quite a bit of white labelling and customisation of views but use Json to do it. It could make sense to do this using dynamic xaml or Json for your views and styles. You wouldn't be able to do any viewmodel alterations though so I guess you'd have to have this as a backend service with a URL link to get a view model each time.
Have you thought about automating you deployment process anyway using something like visual studio team services which let you deploy on checkin for instance to each of the stores? A mixture of that and dynamic content may still work well. Static with auto deployment where performance is a requirement and dynamic where it isn't. It'd still be really good if they were able to have a mechanism for pre-compiling dynamic views in a background thread so they are pre-readied for when its needed.
@MarkRadcliffe , I don't understand your point. What are you suggesting I do?
I'm hearing lots of people suggesting that Xaml be compiled in to DLLs and then be included as part of a special build. I hear these suggestions, but the whole point of this feature request is to not have to do that. The whole point of this feature request is to be able to store the UI in the database as opposed to compiling DLLs.
In our existing app, we save Xaml in to the database. As soon as the record has been saved, every single user gets the change. There's no need to recompile or redeploy. The benefits of this should be obvious, but somehow the message is being lost here...
If it turns out that dynamic parsing of Xaml at runtime is too much of a performance hit, pre-compiled Xaml is an option. But, my tests simply do not show this to be the case, and phones are getting faster and faster every day. The point is that maintaining UI in a DLL is onerous and requires extra maintenance from the configuration team. Perhaps it could be done, but why? The existing method LoadFromXaml works well, and doesn't cause performance problems.
Pre-compiling Xaml in to a DLL is like trying to swat a fly with a sledge hammer.
That is one way to resolve it, yes. It will introduce an extra level of abstraction between what the ViewModel decides and what will be shown on screen. The styles from App.xaml are what I see as the necessary abstraction between what the ViewModel decides (e.g. "CompanySecondaryLightColor") and what should be shown on the screen (e.g. Color.FromHex("123456")).
That same abstraction ("CompanySecondaryLightColor") is also used in the various views (xaml).
If I add a ValueConverter I will get another level of indirection that (in my opinion) does not add any value. Actually I think it adds unnecessarily to the complexity and readability of the xaml.
One of the great forces of Xamarin.Forms compared to, say MVVMCross, is that you can have this kind of stuff in the ViewModel.
Are we going to hear anything from the Xamarin Forms team on this?
Anything at all?
Why are you being so reticent? You've built this technology. The ability to dynamically load UI components with Xaml exists and it's a great feature. If you're not planning on making the feature public, why don't you clear up your reasons?
@MelbourneDeveloper we’ve been monitoring the discussion and seeing where the conversation went. It sounds like several people commenting on this thread have offered some other reasonable solutions to achieve your goals, and I hope those prove fruitful for your situation. At this point the discussion appears to have run its course.
We are not going to make it public. It comes down to stability and performance. We continue to hear this mandate loud and clear from the community.
We have been making great improvements with XAMLC which is getting us further down the path of resolving bugs and improving performance. Supporting both runtime XAML loading and compiled XAML is effectively 2 frontends to the XAML engine. We are investing in the latter.
A bit of advice for anyone putting forth proposals:
From the beginning, this proposal was positioned as inflexible and the tone of the comments that followed demonstrated that healthy discussion would be difficult. If you want to engage people and win them to your point of view, lay out your goals and vision and embrace discussion from other viewpoints. You’ll win more advocates this way. | https://forums.xamarin.com/discussion/comment/252842 | CC-MAIN-2021-17 | refinedweb | 4,461 | 72.56 |
running sys.exit on an engine results in an error trying to handle SystemExit
Bug #400600 reported by Vishal Vatsa on 2009-07-17
Bug Description
To replicate, start a cluster and do:
from IPython.kernel import client
tc = client.
code = "import sys; sys.exit(0)"
task = client.
id = tc.run(task)
r = tc.get_
r.failure.
"Traceback (most recent call last):\nFailure: exceptions.
I know I should not be calling sys.exit, but if you do, this error makes the actual problem really hard to diagnose.
Would the attached patch make more sense?
Since the Interpreter obj. indeed does not have a resetbuffer() method.
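For anyone hitting this: the underlying mechanics are plain Python. sys.exit() just raises SystemExit, so an execution loop can catch it and report it instead of letting it confuse or kill the engine. A generic sketch, not IPython's actual engine code:

```python
# Generic sketch (not IPython's engine code): exec'd user code that calls
# sys.exit() raises SystemExit, which propagates like any other exception
# and can be caught and reported instead of tearing down the process.
def run_user_code(code):
    try:
        exec(code, {})
        return "ok"
    except SystemExit as e:
        return "caught SystemExit(%r)" % (e.code,)

print(run_user_code("x = 1"))                    # ok
print(run_user_code("import sys; sys.exit(0)"))  # caught SystemExit(0)
```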
Brian Granger (ellisonbg) on 2010-01-30
Brian Granger (ellisonbg) on 2010-04-27
Brian, I'm changing this one to 'in progress' because there seems to be work on it already, including Vishal's patch. If by any chance you've already merged it, just set it to 'fix committed'.
I'm doing this because I want to leave behind on LP all bugs that are incomplete, invalid, or fix committed/released, so we start on GH only with active bugs. So if I leave this one as 'incomplete', it would get left behind with other ones that are marked incomplete because they are simply missing enough information to even know if they are real bugs. | https://bugs.launchpad.net/ipython/+bug/400600 | CC-MAIN-2017-43 | refinedweb | 225 | 66.13 |
RMI Section Index | Page 12
How can I log my remote server calls?
If you start the server with the java.rmi.server.logCalls system property set to true (java -Djava.rmi.server.logCalls=true Server), you'll be able to monitor server activity.
How do I send a ResultSet back to a client using RMI?
java.sql.ResultSet is not serializable, so it cannot be sent over an RMI connection. You will need to extract the data from the ResultSet and encapsulate it in a serializable object to send back ...more?
Say I have a remote interface: public interface Hello extends Remote { public String sayHello() throws RemoteException; } and an implementation like: public class HelloImpl extends Unicast...more
Is there a mailing list for RMI discussions?
Yes, Sun's RMI-USERS mailing list. To subscribe, send an email to listserv@java.sun.com which contains the message: subscribe RMI-USERS Note that the archives of the mailing list are here....more
Is there another RMI FAQ that I can look at?
Yes, check out: Sun's RMI FAQ.
How does Java RMI differ from Jini?
Java RMI: RMI clients use the Naming.lookup() method for locating the requested RMI service.
Jini: Jini clients use the discovery process to locate Jini lookup services. Dis...more
What's new in RMI under Java 2?
Java 2 SDK adds significant enhancements to the RMI implementation found within JDK 1.1. The most important changes are: Under JDK 1.1, RMI servers have to be up and running all the time, and co...more
What is the purpose of the java.rmi.server.useCodebaseOnly property?
When the property java.rmi.server.useCodebaseOnly is set to true, then the JRE will load classes only from either a location specified by the CLASSPATH environment variable or...more
When would I use the java.rmi.server.codebase property?
The property java.rmi.server.codebase is used to specify a URL. This URL points to a file:, ftp:, or http: location which supplies classes for objects that are sent from this ...more
What are the different RMI system configurations possible?
Why is that my remote objects can bind themselves only with a rmiregistry running on the same host?
Although an RMI application can perform a lookup on any host, it can bind, rebind or unbind remote object references only with a registry running on the same host. This is mainly for security rea...more
I get the exception "java.net.SocketException: Address already in use" whenever I try to run rmiregistry. Why?
The exception means that there is already an rmiregistry process running on the default port 1099 on that machine. You can either choose to kill it and restart rmiregistry, or start it up on a dif...more
Is there a servlet implementation of the java-rmi.cgi script for enabling call forwarding when using RMI across firewalls?
What's the cleanest way to have a client terminate a RMI server that is no longer needed?
The cleanest way to exit is to convert your remote object into an activatable remote object and then a client can invoke the Activatable.unexportObject() method to get rid of it. Things are a lit...more | http://www.jguru.com/faq/server-side-development/rmi?page=12 | CC-MAIN-2016-22 | refinedweb | 563 | 59.9 |
Are you a developer? If you are, then answer a question: in the last couple of years, have you heard anyone saying “Hey, I will be using HTML and CSS for my next project…”?
Have you?
I at least haven’t heard anything like this. I often hear people saying they will be using React, maybe Angular or Vue. Even WordPress is becoming a thing of the past for developers; new blogs are being built on top of Gatsby, Next.js, Gridsome, 11ty…
For now, keep Angular and Vue aside and let’s take a look at React, because exploring all the frameworks in a single article is not possible, at least for me. We will talk about them some other day.
What is React?
React is a JavaScript library for building user interfaces, at least the official website says so. Go ahead and take a look at it… the docs also say it is Declarative, Component-Based, Learn Once, Write Anywhere. Ahhhh….. stop !!!
Let’s take a simple approach. Have you ever written HTML? I know you have. React looks much the same but is something different; it is like a JavaScript function returning HTML. We will dive deep and cover everything, but for now just keep these things in mind.
import React from 'react';

export default () => {
  return (
    <h1>Hello World</h1>
  );
};
This is how Hello World looks in React. You might be wondering why modern applications use React or another JavaScript library rather than traditional HTML and CSS, right? The problem with the old way is something like this:
Let’s say you have a blog, and every few minutes some users comment on it. To see every new comment you have to manually refresh the page, and reloading takes time. Similarly, for every small change on your website the whole webpage needs to be reloaded, which makes the website slower than it should be.
In React everything is a component, from a button to a paragraph to every single thing you write all are components.
As you see in the above image, the webpage has many areas – Like the sidebar, Ad section, Logo, Navbar, Nav Menu, etc.. All these can be separate components or a big giant component. It depends on the developer how he/she designs the page or how comfortable they are in breaking down and abstracting things in React.
Installing React:
If you don’t have React installed in your system, just follow the steps below:
Install Node (if you are on Windows, either use WSL or download Node from the official website):

# Add the NodeSource APT repository for Node 12
curl -sL | sudo -E bash
# Install Node.js
sudo apt-get install -y nodejs
Install React (Linux / Mac / WSL):
sudo npm -g i create-react-app
Install React (Windows):
Run Powershell or Command Prompt as Administrator.
npm -g i create-react-app
creating a project:
npx create-react-app <folder-name>

Put the folder name you want to create instead of "<folder-name>".
Now open the folder in your favorite code editor.
Understanding the default folder structure:
Now, after opening the folder you may see a folder structure like this:
my-app/ ├── README.md ├── node_modules/ ├── package.json ├── public │ ├── index.html │ └── favicon.ico └── src/ ├── App.css ├── App.js ├── App.test.js ├── index.css ├── index.js └── logo.svg
package.json: The package.json is the heart of the entire project, it contains all the metadata of your project like what packages are used in your project, which version they are, do they need any update, etc…
node_modules: This folder is generated from the package.json file; it is never shared between developers, and is regenerated when you run npm i after cloning a React project.
src: Mainly all your code lives in this folder. When you run npm start or npm run build, everything in this folder is compiled by React; a production build puts its output in the build folder.

public: When you load your website, the files in this folder are served as-is in the browser (index.html, the favicon, and other static assets); the compiled output from src is injected into index.html.
Basics of react:
When you open the App.js file inside the src folder you will see some code like this:;
At the very top there are some import statements: the first imports the React library, and the next two import the logo and the CSS file (yes, CSS is imported like this in React).
Then there is a class; in modern React we can use a function instead, which enables a newer React feature called hooks. We may also assign an arrow function to a const instead of using a class or function declaration; many developers do that, including me.
Then there is the render() method: in class components it returns the component's JSX, and it is most useful for dynamic output, such as conditional rendering with if/else. Function components return their JSX directly, without render().
Then there is the section that returns something looking like HTML, right? As I told you earlier, it looks like HTML but is not; it is called JSX, and it works quite differently.
And finally, we export the App class. The export statement is used when creating JavaScript modules to export functions, objects, or primitive values from the module so they can be used by other programs with the import statement.
This is how a react code looks like.
JSX:
JSX is one of the reasons why some developers dislike React; not most of them, but some do…
JSX may look like HTML but it is not HTML, You can write JSX the same way you write HTML, but there are some things you should keep in mind.
in JSX
<br>,
<hr>,
<img> are declared like this:
<br /> <hr /> <img src="example.com/img.png" />
Look at them carefully and you will understand the difference. One more thing: have you noticed in the App.js file how classes are assigned? Have a look once more:

<img src={logo} className="App-logo" alt="logo" />
class is called className in JSX, because class is a reserved keyword in JavaScript.
Variables and Expressions:
Variables are added in JSX using the bracket
{}, now somebody says it curly brackets, second bracket but the standard name of this bracket is bracket, google it if you don’t know.
Anyway, but how the variable is assigned? Take a look at code below you will understand.COPY
function App() {
  const name = 'Ram';
  return (
    <h1>Good Morning {name}</h1>
  );
}
URLs, SVGs, even other components can be used as variables. We can also use objects as variables when we need to pass more than one string like this:
function App() {
  const person = { name: 'Ram', age: 16 };
  return (
    <h1>Hi {person.name}, I guess you are {person.age} years old.</h1>
  );
}
Styles:
We can use inline styles inside a component. This becomes useful when we are developing a small-scale app, or when we need some unique styling for just one component, like when one particular button needs a special look; for that, this can come in handy.
function App() {
  const person = { name: 'Ram', age: 16 };
  return (
    <div>
      <h1>Hi {person.name}, I guess you are {person.age} years old.</h1>
      <button style={{ color: 'blue', margin: '10px' }}>
        click it
      </button>
    </div>
  );
}
Conditional Rendering:
I will not talk much about this; I will just write some code, and I hope you will understand.
function App() {
  const person = { name: 'Ram', age: 16 };
  const loggedIn = false; // pretend this comes from your auth state
  let loginButton;
  if (loggedIn) {
    loginButton = <LogoutButton />;
  } else {
    loginButton = <LoginButton />;
  }
  // LoginButton and LogoutButton are assumed to be components defined elsewhere
  return (
    <div>
      <h1>Hi {person.name}, I guess you are {person.age} years old.</h1>
      {loginButton}
    </div>
  );
}
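Strip the JSX away and the underlying pattern is ordinary JavaScript: compute a value with if/else (or a ternary) and drop it into the output. A minimal plain-JavaScript sketch of the same pick-a-value idea (the function name and labels are invented for illustration):

```javascript
// Pick-a-value pattern: the same logic React uses above, minus JSX.
function loginLabel(loggedIn) {
  let label;
  if (loggedIn) {
    label = "Logout";
  } else {
    label = "Login";
  }
  return label;
}

console.log(loginLabel(true));  // Logout
console.log(loginLabel(false)); // Login
```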
Props in React:
In React there are different approaches to data flow and manipulation than in other JavaScript libraries and frameworks. One of them is “props”, short for properties. Props are unidirectional, which means data flows from a parent component to a child component.

Let’s see how to use props in React:
Consider this scenario: we are developing a blog and it needs an author section or page. The problem is that we cannot manually write a template for every author, especially when our blog scales and has a lot of users and guest posts.

So what we can do instead is make a component for the author section or page. In this case, let’s say we need a page. The component will look like this:
import React from "react";

const Author = ({ authName, authBio }) => {
  return (
    <div>
      <h2>Author: {authName}</h2>
      <p>Bio: {authBio}</p>
    </div>
  );
};

export default Author;
I know it is a bad author page, but anyway. Now here is our base component for the author page, and say we have 100 authors sourced from a database. I am not going to cover how to pull those authors from the database; let’s keep that for another day.

Imagine we have to generate a page from the Author component above. Assuming we have a way to pull data from the database, I have written this template:
import React from "react";
import Author from "./Author.js";

const AuthorPage = ({ data }) => {
  return (
    <div>
      <Header />
      <Body />
      <Author authName={data.author.name} authBio={data.author.bio} />
      <Footer />
    </div>
  );
};

export default AuthorPage;
Here we are passing the author name and bio from the database down to the child component, which renders our author data and returns the expected result without complicating or bloating our codebase. Just think about what you would do if there were no way to pass data from one component to another.
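Under the hood, props behave like plain function arguments: the parent calls the child with an object, and the child destructures what it needs. Here is a stripped-down, React-free sketch of that flow (the string output and function shapes are invented for illustration):

```javascript
// A "component" is just a function of its props here.
function Author({ authName, authBio }) {
  return `Author: ${authName} | Bio: ${authBio}`;
}

// The "parent" passes data down; the child never reaches back up.
function AuthorPage(data) {
  return Author({ authName: data.author.name, authBio: data.author.bio });
}

const page = AuthorPage({ author: { name: "Ram", bio: "Writes about React" } });
console.log(page); // Author: Ram | Bio: Writes about React
```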
Now we can pass any user from the database to our template as many times as we want from our backend, without thinking much. Want to change the design? It will feel like a breeze…
Conclusion:
We have learnt a lot about React today. We first installed Node and React on our system, then we looked at what the default React starter contains, then we explored JSX and understood how it works, and finally we got into props.
But there are a few things we didn’t cover, like for-loops, state, and refs in React.
State and refs in React are complicated for a beginner, but I promise I will write a great article next week on what we didn’t cover today. Explaining and understanding these concepts takes time and more effort, and may require real examples. If you had to read all of that in a single article, I don’t think you would be able to grasp it all. So have a nice day, and I hope you stay safe…
Make python objects persistent with Redis.
Persistent python objects with Redis backend.
pip install rob
JsonObject
An object that does a JSON dump of the dictionary and save it in a Redis hash.
Needs to define HASH_KEY - the key to the hash.
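As a rough illustration of the idea (this is an invented sketch with a fake in-memory Redis stand-in and an assumed field layout, not rob's actual code): the object's state is dumped to JSON and stored as one field of the hash.

```python
import json

class FakeRedis:
    """Tiny in-memory stand-in for a Redis hash (HSET/HGET only)."""
    def __init__(self):
        self.store = {}
    def hset(self, key, field, value):
        self.store.setdefault(key, {})[field] = value
    def hget(self, key, field):
        return self.store.get(key, {}).get(field)

HASH_KEY = "exampleobject"
r = FakeRedis()

# Save: dump the object's dict as JSON into one field of the hash.
state = {"name": "alice", "count": 3}
r.hset(HASH_KEY, "alice", json.dumps(state))

# Load: read the JSON back and rebuild the dict.
loaded = json.loads(r.hget(HASH_KEY, "alice"))
print(loaded["count"])  # 3
```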
HashObject
An object that saves its dictionary in a Redis hash. Using the HMSET. It uses a list to keep track of saved objects.
Needs to define HASH_KEY - a key that is used as prefix to the list and as the key to the hash.
Mixins
The mixins below will work with all the object types.
Autosave mixin
A mixin that calls save every time an attribute is set.
Examples
Simple object
from redis import Redis

class ExampleObject(JsonObject):
    HASH_KEY = 'exampleobject'
    redis = Redis()
Autosave object
from redis import Redis

class ExampleAutosaveObject(JsonObject, AutosaveMixin):
    HASH_KEY = 'exampleobject'
    redis = Redis()
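Since the classes above need a running Redis server, here is a dependency-free sketch of the JsonObject idea. A plain dict stands in for Redis, and the method names (save, load) are illustrative only, not necessarily the package's real API:

```python
import json

# Stand-in for a Redis server: hash key -> {object id -> JSON string}
FAKE_REDIS = {}

class MiniJsonObject:
    HASH_KEY = "exampleobject"

    def save(self, obj_id):
        # JSON-dump the instance dictionary under HASH_KEY,
        # like an HSET on a real Redis hash
        FAKE_REDIS.setdefault(self.HASH_KEY, {})[obj_id] = json.dumps(self.__dict__)

    @classmethod
    def load(cls, obj_id):
        # Rebuild an instance from the stored JSON, like an HGET
        obj = cls()
        obj.__dict__.update(json.loads(FAKE_REDIS[cls.HASH_KEY][obj_id]))
        return obj

obj = MiniJsonObject()
obj.name = "example"
obj.save("1")
restored = MiniJsonObject.load("1")
```

A real version would replace the FAKE_REDIS dict operations with calls on a redis.Redis connection, which is exactly what the HASH_KEY and redis class attributes above are for.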
#include <curses.h>
The erasechar() function returns the current ERASE character from the tty driver. This character is used to delete the previous character during keyboard input. The returned
value can be used when including deletion capability in interactive programs.
The killchar() function is similar to erasechar(). It returns the current KILL character.
The erasewchar() and killwchar() functions are similar to erasechar() and killchar() respectively, but store the ERASE
or KILL character in the object pointed to by ch.
For erasechar() and killchar(), the terminal's current ERASE or KILL character is returned.
On success, the erasewchar() and killwchar() functions return OK. Otherwise, they return ERR.
getch(3XCURSES), getstr(3XCURSES), get_wch(3XCURSES) | http://www.shrubbery.net/solaris9ab/SUNWaman/hman3xcurses/erasechar.3xcurses.html | CC-MAIN-2014-41 | refinedweb | 112 | 60.61 |
01 December 2011 05:53 [Source: ICIS news]
By Dolly Wu and Nurluqman Suratman
(adds comments from analysts, with recasts throughout)
SHANGHAI (ICIS)--China's manufacturing activity contracted in November, with the country's official purchasing managers' index (PMI) falling below the 50% mark that separates expansion from contraction, official data showed.
The PMI reading of the world’s second biggest economy was last seen in a contraction mode, with a reading below 50%, in March 2009.
Data from the China Federation of Logistics and Purchasing (CFLP) showed that the country’s barometer of manufacturing activities had continuously fallen from April to July. The PMI reading rebounded in August and September, before resuming falls in October.
The monthly declines in PMI indicate that the country's economic momentum is slowing down, said Zhang Liqun, an analyst at CFLP.
In November, the purchasing sub-index within the PMI declined by 1.8 percentage points month on month to 44.4%, indicating softer domestic demand. Business enterprises may be faced with a shortage of orders from this month to early next year, said Zhang.
Domestic industries are hurting from a credit crunch borne of Beijing's monetary tightening.
“The decline in the [PMI] index reading was due largely to the weaker foreign demand, as well as the softening domestic demand amid the tight credit conditions and the cooling property sector in China.”
The PMI is based on a survey of 820 manufacturers across 20 industries. While the overall number registered a contraction in November, half of the industries surveyed, including chemicals, oil refining and coking, pharmaceuticals, papermaking, garments and general machinery, had a reading above 50%, the research firm said.
Within the November PMI, the new orders index fell by 2.7 percentage points from October to 47.8%, while the production index dropped by 1.4 percentage points to 50.9% in November, according to the official data from CFLP.
Chemicals, garments, footwear and related products, oil refining and coking, smelting of non-ferrous metals and tobacco were the industries that recorded imports growth in November, according to Li & Fung Research Centre in a report.
“Given the deepening crisis in the eurozone, an expected deceleration in exports growth to 10% from 20% last year will weigh on the economy, slowing down China's growth to 8.5% next year from a projected 9% clip in 2011,” it said.
Against this backdrop, the People’s Bank of China (PBoC) announced late on Wednesday that the reserve requirement of banks will be cut for the first time in three years on 5 December.
The banks' reserve requirement, or the portion of deposit that must be parked with the central bank, will be reduced by 50 basis points to 21%. The move is aimed at boosting lending and shoring up the economy.
“The November PMI final reading points to a sharp deterioration in business conditions across the Chinese manufacturing sector,” said Hongbin Qu, HSBC chief economist for China.
HSBC's PMI for China – a composite indicator that gives a single-figure snapshot of operating conditions in the manufacturing economy – gave a reading of 47.7 for China in November, down from a 51.0 reading in October.
The cuts in reserve requirement indicated that economic growth has now become the top priority of the Chinese government, instead of controlling inflation, said Qu.
“This [cuts in reserve requirements] is likely to invite an across-the-board policy easing, which is likely to come as early as [the] year-end,” he said.
If the monetary easing measures can filter through the economy in the coming | http://www.icis.com/Articles/2011/12/01/9513024/china-november-pmi-signals-contraction-in-industry-output.html | CC-MAIN-2014-35 | refinedweb | 565 | 51.89 |
So, I am trying to have spawned (using "new") items set to NULL after the player picks them up. I am using custom events to relay that the player has picked up the item and after I want to delete the object. Using "delete this" didn't work, so I figured I would pass by reference the object to the player to have it set it to NULL after picking it up. I can't seem to figure how to pass the object by reference over the custom event though. Thanks.
//emit the event source that the food collided with the player
m_AlEvent.user.type = CUSTOM_EVENT_ID(FOODPICKUP_EVENT);
m_AlEvent.user.data1 = (intptr_t)this; <<<---"this" is the item
al_emit_user_event(&m_FoodPickupEventSource, &m_AlEvent, NULL);
delete this is valid C++ - when you say 'didn't work', what happened?
Is the problem that you're holding a reference to a deleted object?
Pete
You won't be able to "pass by reference" through a pointer type. By definition, you have to pass by pointer. A reference in C++ is just compile-time syntactic sugar over a pointer. I think we're going to need a better explanation of what you're trying and why it's not working. As a side note, have you considered smart pointers?
A smart pointer will be hard to use in an ALLEGRO_EVENT which is a plain C struct without any copy-constructor rules - it gets duplicated to each event queue but bypasses any C++ magic. So in this very special case I'd say just use the reference counting and destructor callback provided by al_emit_user_event.
Ah, yeah, sounds like a smart pointer would be a bad idea.
The only way to pass a pointer through a custom event is using an "intptr_t" so I was wondering how to do it with that. Yes, when I said it didn't work its because something will still be holding the reference to the object and since delete doesn't set to NULL I have no way of knowing that it should not be accessed.
I'm trying to figure out a way to have a the object created dynamically in game at some point and then become independent. This can't truly happen in Allegro because you need something that is calling its draw function and its functions dealing with events, so a object will have its reference and be checking it. So I was going to try having the created object destroyed by whatever object receives its emitted event. I need to pass a reference to it though to do this so that when I set it to NULL the reference in the script that dropped the item will know too.
Something like this should work:
void delete_callback(ALLEGRO_USER_EVENT *event)
{
    Type *self = static_cast<Type *>(event.data1);
    delete self;
}

al_emit_user_event(&m_FoodPickupEventSource, &m_AlEvent, delete_callback);
And where you receive the event:
if (event.type == CUSTOM_EVENT_ID(FOODPICKUP_EVENT)) {
    // handle the event, then drop our reference to its user data
    al_unref_user_event(&event.user);
}
I suppose a smart pointer actually would work as well. You would still have to cast it to intptr_t and back, and the C code would make copies of it - but the al_emit_user_event reference counting would make sure only one of those copies is actually passed to the callback.
That code doesn't seem to compile unless I'm doing something wrong.
void FoodPickup::delete_callback(ALLEGRO_USER_EVENT *event)
{
FoodPickup *self = static_cast<FoodPickup*>(event.data1);
delete self;
}
al_emit_user_event(&m_FoodPickupEventSource, &m_AlEvent, delete_callback);
The "(event.data1)" part as an error on "event" and the "al_emit_user_event" as an error on the "delete_callback" argument. Thanks.
event above is a pointer so I'm guessing you probably wanted event->data1, but perhaps you're also missing a header include or something?
The method is probably non-static so it has an implicit signature more like (Type * this, ALLEGRO_USER_EVENT * event). Normally you'd need an object instance to invoke that properly. My guess is that's what the compiler is complaining about.
It might work if you explicitly mark the method as static, which basically is like a global function that is just namespaced by the class. You may or may not have to qualify it with the type name: Type::method.
The main thing is you need to make sure that you don't store references to that object anywhere else. If you do need references elsewhere then you'll need to abstract the interface more so that you can signal that elsewhere to forget about the object too. That sounds to me like you don't actually want Allegro to clean up after itself with that data, but perhaps you'd still like to use Allegro's API to flag the object somehow (i.e., obj->alive = false) for later.
Or perhaps you can get away with smart pointers as I originally suggested if C is never going to access the data at all. You would pass a pointer to a smart pointer, not to the object, and then destroying the smart pointer would handle other instances of the underlying object properly...
But that's kind of silly in a way mixing smart pointers with manual memory management. Append: Of course, then you'd have to allocate the smart pointer with new so that it doesn't destruct when leaving the scope of its origin, and I'm not sure if there are rules about not doing that... I recall smart pointers being particular. | https://www.allegro.cc/forums/print-thread/615891 | CC-MAIN-2019-09 | refinedweb | 895 | 69.52 |
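Summing up the ownership pattern discussed in this thread, here is a self-contained C++ sketch with a stand-in event struct instead of Allegro's types (UserEvent, FoodPickup and run_demo are invented for illustration): the sender packs the object's pointer into an intptr_t payload along with a destructor callback, and whoever drops the last reference frees it.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for ALLEGRO_USER_EVENT: an opaque integer payload plus the
// destructor callback that al_emit_user_event() would store.
struct UserEvent {
    std::intptr_t data1;
    void (*dtor)(UserEvent *);
};

struct FoodPickup {
    bool *alive_flag;                      // lets the caller observe destruction
    ~FoodPickup() { *alive_flag = false; }
};

// Destructor callback: the event system, not the sender, frees the object.
static void delete_callback(UserEvent *ev) {
    delete reinterpret_cast<FoodPickup *>(ev->data1);
}

// Simulates emit -> handle -> unref; returns true if the pickup was freed.
bool run_demo() {
    bool alive = true;
    FoodPickup *item = new FoodPickup{&alive};

    // "Emit": pack the pointer into the integer payload, register the dtor.
    UserEvent ev{reinterpret_cast<std::intptr_t>(item), delete_callback};

    // "Unref": the last holder drops its reference, so the dtor runs,
    // mirroring what al_unref_user_event() triggers.
    ev.dtor(&ev);

    return !alive;
}
```

The key point is the one made above: nobody calls "delete this". Ownership travels with the event, and any other code that knows about the object should track liveness separately (for example via a flag) instead of keeping a raw pointer it might dereference after the callback fires.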