User talk:Col.swordman/Archive 2 From Uncyclopedia, the content-free encyclopedia edit Stop Thief! Shhhh... Hey there! I noticed my recent UnNews article has moved to your userspace. I can appreciate that some people may be over emotional on issues like recent deaths, but I would like to think that we approached the subject with some decorum. Was my article against Uncyclopedia rules? - Well, your article is pretty much a.o.k., but, after all, since we don't want to intimidate those who are mourning for our deceased hero (see here), the authority has decided to keep it in my userspace until further notice. - -- The Colonel (talk) 02:49, 5 September 2006 (UTC) - Admins, nice? Is there something wrong with him? Looks like the article has moved back to the UnNews namespace. I am glad that you found it OK. I was hoping to write something that would fill the void without some of the nastiness that a few of the other articles seemed to have. Steve was a larger than life hero for a great many people worldwide, with a great sense of humour. I'd like to think he'd be laughing at all this right now though I imagine he would be more concerned about Terri, the kids and his other family. Thanks for keeping my article safe. edit Marree Man I made this userbox. If you don't want it then you can delete it. . -- 14:55, 4 September 2006 (UTC) - Awesome! Now we have our official userbox. -- The Colonel (talk) 16:19, 4 September 2006 (UTC) edit American Fundie Magazine Kelly Copeland anointingly advises you to do your laundry in the love of God. In fact, that's how my uncle sanctifies his income. This was in my Inbox; you really need to get your address sorted out, I'm tired of getting your mail. - Dear Swordman: - Thank you for expressing an interest in our fine publication. As for your criticism that geocentrism is purely an early modern Catholic doctrine, let it be said that even the RC's get it right once in a while. While we rarely advertise the truth of a geocentric universe, as people laugh at us when we do, it is our belief that the Earth is the center of the universe. For more information please visit Biblical Astronomy, "Fixed Earth" or simply Google "biblical astronomy" for a wide variety of truths that agree completely with a literal interpretation of the Bible. While most of the websites are officially non-denominational, with your help (and your generous donation) we will convert the blasphemers away from their idolatry of the Virgin Mary, thus rendering the need for the veneer of tolerance moot and finally allowing us to remove the "non" from non-denominational. To thank you for taking the time to vote on VFH, then taking even more time to unvote, also on VFH, we've arranged for two free tickets for you and a partner of the opposite sex to whom you are legally wed at Dinosaur Adventure Land. While there be sure and visit our new planetarium. It's out back, by the garbage cans and construction waste (watch out for rusty nails!). Send money. Seriously, you'd be surprised at the weird shit that you can find on the internet; sadly I got rid of most of the links that I'd accumulated while researching the AFM page as the links were making my computer sad. There was more: much more. If anything I left too much stuff out. That the idiocy of one group happens to match up with the ignorance of another is serendipity.--Sir Modusoperandi Boinc! 14:29, 28 August 2006 (UTC) - Aw, screw it. I cut Astronomy as it was the weakest part of the page (You're right, in a way. 
It's only the fundiest fundies that are geocentric. Their rarity makes them less funny/sad than the regular literalist.). Some minor decrufting too...and Astro was replaced by Creationism which, I hope we can agree, is a fundie trademark. I didn't have it in initially because there's already a creationism page, and it's not half bad. But I digress...--Sir Modusoperandi Boinc! 03:08, 29 August 2006 (UTC) Incidentally, the conversation that your valid criticism generated is here if you want to add your two cents.--Sir Modusoperandi Boinc! 04:33, 30 August 2006 (UTC) I'm sorry that I put this at the bottom of your talkpage, rather than the top. I was so busy with what I was say that I didn't pay attention to where I was saying it. That I had showtunes stuck in my head after teaming up with Shandon to make HMS Potatore probably didn't help...--Sir Modusoperandi Boinc! 08:41, 1 September 2006 (UTC) - Chill out, dude. Life is just too short for me to get mad at trival matters like this. - I was watching a comedy show last night and those hosts there were talking about a fundie show that Channel 10 Australia broadcasts every weekday morning. At some point there was a footage of the show [1] on how a housewife could be blest by diligently doing laundry. Thanks to these faithful Christian comedians, who selflessly fought their sinful desire of sleeping in order to bring such good news to everyone, now I have realised why my uncle is so blest for his laundering effort. --The Colonel (talk) 10:05, 2 September 2006 (UTC) - We get Ken Copland here too. 500 channels and nothing on...Note to self: do more laundry. Too bad they didn't show whatever obscure/out of context biblical passage proves this blessedness. It's like the Simpson's, there's a quote for everything in there somewhere. Send money. - Is that the same series where they once went to a Westboro Baptist Church rally and the interviewer started hitting on the men? I saw a clip of that...whoever it was, he had balls. - Thanks, you've gone a made me mad about nuttery again; Benny Hinn is the lowest form of life. People who profit from the desperation of others...oooo! Mad! --Sir Modusoperandi Boinc! 15:28, 2 September 2006 (UTC) - Yep! That's the same series where they ticked off those Westboro jerks [2]. -- The Colonel (talk) 18:06, 2 September 2006 (UTC) - Religion, sadly, is for the most part off the table for criticism here. Not that we've got much here, because here, in whatever country it is that I am from, we pride ourselves on standing for whatever the opposite is of what the US stands for. They talk about it. Us, not so much. They like guns, us not so much. They like violence but not sex in film/tv, we like sex but not violence... - Aw screw it, I'm going fundie! <pops up out of chair> Wooo hooo! <runs around in circles> Life is so much simpler now that someone else is in charge of my life! <falls down stairs...> --Sir Modusoperandi Boinc! 18:59, 2 September 2006 (UTC) ...and if you don't mind, could you take a look at the changes to AFM and render an opinion? Since you're the first (and, to date, only) person to critique the page (and find it wanting), I'd like to hear what you think of the changes. I like the new "Creationism" section, it's to-the-point and shows that their view of things is coming from a whole different angle.--Sir Modusoperandi Boinc! 14:19, 3 September 2006 (UTC) - I saw VFH, glad you approve. Can I move on now?--Sir Modusoperandi Boinc! 16:11, 3 September 2006 (UTC) - Be on your way. 
-- The Colonel (talk) 16:19, 4 September 2006 (UTC) edit Lance bass player Your version was better, but I just got a photochop knockoff called "The Gimp." Not very good, but I thought you would like:40, 14 August 2006 (UTC) - Hmmm... An eerie grin with fish eyes - what a tribute! - Thanks by the way. - -- The Colonel (talk) 17:01, 14 August 2006 (UTC) edit How come?! How come you put my Cat Nation page in the Australia category? And linked it to a wikipedia (of all things) article on that band I didn't intend the article to represent? Not that I'm complaining...extra traffic is extra traffic. —The preceding unsigned comment was added by Magnificat (talk • contribs) - How am I supposed to answer this? "Meow"? - Ok! This is what I actually did: I went to the search page, did a query on Australia and put whatever I thought was Australian in the "Australia" category. If you think the article has been put in a wrong category, just undo the change. -- The Colonel (talk) 11:00, 14 August 2006 (UTC) "How am I supposed to answer this? 'Meow'"? - -Pretty much, yes. I just wanted to make sure that there wasn't some deeply logical and valid reason that I'd missed before I changed it. Thanks for replying. --Magnificat 11:08, 16 August 2006 (UTC) edit Congratulations on the featured) - Crikey! Featured image? Well, at least those four hours I spent on it really paid off. - Kakun does deserve some credit, too. - -- The Colonel (talk) 15:11, 12 August 2006 (UTC) edit Cookie For acting the way Nintendorulez acts toward Euroipods towards Benson. -- §. | WotM | PLS | T | C | A21:47, 7 August 2006 (UTC) edit "Popular writers always get all the attention" Thats not a reference to me, is it? If you'd care to have a look at the VFH Hall of Shame you'll see that I've only had three articles featured in the year that I have been here. Sometimes VFH and VFP does become a popularity contents (though it shouldn't) and sometimes people have to whore for votes, but we all feel passionate about what we are doing here, and if your article is close to been featured you do get all excited. Having said that we've all had the disappointment of seeing some of our work voted off, my list of the fallen includes Uncyclopedians, Fire and Brimstone, Wildeboys and quite a few more. Hopefully it makes you strive to write better articles and hopefully become a better writer. I hope that you stick around, I know first hand how difficult it is juggling multiple careers, family and social commitments and still finding time for this place. But I also think its good to have a place like this to unwind, and lets not forget, all this is supposed to be fun. :) Anyway, thanks for the vote on Voynich Manuscript, even if I'm not in my pre-featuring frenzy, I much appreciate just a rant... for fun. My one and the only attempt on VFH is still on the page, although, I must say, it is not getting much attention despite it is still unblemished of Against votes. Yes, Uncyc is much a fun place to be, but, again, there is always a limit as to how much one can handle, and, unfortunately, Uncyc is just one of the things that have to go, whether my article will get featured or not. I am not saying that I won't be back ever again, but you won't see me as often here as you do now. - That's just life. :) -- The Colonel (talk) 14:52, 5 August 2006 (UTC) edit Pee Review of Oz war stuff Hey, Colonel. 
Are you a sword(s)man in the literal sense, or in the metaphorical corned-beef cutlass sense (as in: "I escorted Lady Penelope to her chambers and as she was clearly gagging for it felt obliged to put her to the sword")? - Colonel: colonel of the flamewar business - Swordman: A word spelt wrong can't be meaningful, can it? (except drwing attetion) -- The Colonel (talk) 14:50, 2 August 2006 (UTC) In any case,:29, 2 August 2006 (UTC) - Problem fixed. The page has been moved to PR, although it is not quite what that place is for. -- The Colonel (talk) 14:50, 2 August 2006 (UTC) PS: I moved ferret racing out of the Oz sport section to its own page. For better or (probably) worse it has since spawned a spin-off Un:30, 2 August 2006 (UTC) - Hmmm... not bad. -- The Colonel (talk) 14:50, 2 August 2006 (UTC) edit Re: Oz stuff Hey Colonel, Glad my Oz stuff made it out the other end of the sheep dip (though there's already been a bit of revert action going on, IP-style). As for it being too long, sure - fold, spindle, manipulate and/or huff. I've been thinking of doing a separate piece on the Great Bookie Robbery and Ferret Gunk Heist, which was a watershed moment in our nation's history and led to a wholesale reorganisation of the Victoria Police Ferret Squad and the loss of gallons of priceless historical DNA from some of the greatest rabbit-botherers the track has ever seen, including 1947 Wangaratta Cup winner Trouser Python. In any case, if you'd like me to have a bash at anything else, let me know. Oh, and thanks for the info about the sig stuff. --Armando 11:44, 30 July 2006 (UTC) edit Oz page G'day. I signed up for that "rescue Australia" mission but then I was just sitting on me date and thought "Struth! No point being on the bludge when there's a bit of hard yakka to be done." So I had a bash at rewriting the sports bit. Hope it passes muster. Not the funniest thing you'll ever read, but hopefully better than a poke in the eye with a burnt stick. I've kept all the old stuff on me desktop, so I can put it back if you prefer it the way it was. As we say in Australia, I hope your chooks turn into emus and kick your dunny down. --Armando 16:23, 29 July 2006 (UTC) edit Leaving - If your article gets featured, r u really gonna leave 4 good? --!] 20:33, 23 July 2006 (UTC) - For good?! I am rather offended by your question but, yes, that's true. -- The Colonel (talk) 20:44, 23 July 2006 (UTC) - No, I'm not saying I want u gone...That FFS page is essentially a joke anyway (u nominated yourself). J/w about!] 03:37, 24 July 2006 (UTC) - Ok, that's cool then. -- The Colonel (talk) 11:29, 24 July 2006 (UTC) edit ref:Pentium 4 - "For, because I'm one of the poor bastards that bought a Pentium Pro and I'm still holding a grudge. Modusoperandi 00:54, 21 July 2006 (UTC) - Then I hope you are not still mad at the fact that Pentium Pro is weak in 16-bit processing. -- The Colonel (talk) 08:35, 22 July 2006 (UTC)" While it was wicked fast at the time, I'm mad because I dropped 2G's on P-Pro/ram/board a couple of weeks before Intel abandoned it. $2,000 was (and continues to be) a lot of money to me. Of course, now that they've got real competition, I can (and did) get most of a compu for half that. It wasn't Intel. 
Modusoperandi 08:55, 22 July 2006 (UTC) - In the olde days AMD CPU's were sucky little things that no one would like to buy, but now since Intel has pretty much abandoned the technological side of the market and focused itself on hypes (like Pentium 4 Extreeeeeme Editions and other useless scat), AMD is given all the opportunity to show the whole world what it's truly capable of. In fact, I was kind of amused when I heard the news that Intel was going to throw the entire P4 line into the waste basket perhaphs due to the reason that even an Athlon 64 running at 2.6GHz could easily outperform a Prescott/Cedar Mill running at 4.0GHz. So, if Intel doesn't strive to be the first again, it will become the last. - Anyway, thanks for the vote. -- The Colonel (talk) 10:00, 22 July 2006 (UTC) - I dunno, the 386/40 was good back in the day. Votewise I'm here for you, man (as long as it's good). I hope Intel catches up; competition is good. It keeps all sides on their toes (look what ATI/Nvidia have done for vid cards. Granted the top end stuff is stupidly expensive, but the tech filters down quick). Modusoperandi 10:50, 22 July 2006 (UTC) edit VFH You don't have to comment on something, you know, you can just put in a plain ol' for or against. - 14:55, 19 July 2006 (UTC) - This is was I said there: - "it would be even nicer if you make a comment or two..." - So, the bottomline is that I won't go after you with a claymore even if you are reluctant to comment on the article, but since I was the one who wrote it I'd like to see if there is any room for improvement. That's all. -- The Colonel (talk) 15:02, 19 July 2006 (UTC) edit Comments When I vote "For", I usually *don't* comment, because I don't find reason too. It was all very funny, with the best part being that they couldn't print shiny labels that said "Pentium IIII" because it came out as "Pentium ██" Also, somehow Apple was squeezed in there, with iHumor, which wasn't *the best*, but either than that, the article was funny. What else is there to say? 17:25, 19 July 2006 (UTC) - I am just a bit anxious with this one, you know, since it's my first article on VFH, and perhaps the last. -- The Colonel (talk) 17:44, 19 July 2006 (UTC) edit Chat It was only a kick... You were supposed to come back straight away... :( --⇔ Sir Mon€¥$ignSTFU F@H|CM|+S 14:11, 16 July 2006 (UTC) - Why am I supposed to come back? I haven't sunk to that low yet. Hee-hee... -- The Colonel (talk) 14:13, 16 July 2006 (UTC) - Gotcha! (Actually, I have to go back to my work or I'll get my butt kicked by my supervisor.) -- The Colonel (talk) 14:14, 16 July 2006 (UTC) - (I mean, tomorrow.) -- The Colonel (talk) 14:17, 16 July 2006 (UTC) - No rush, though... Take your time... I'm just saying... It's not like you were banned or anything... --⇔ Sir Mon€¥$ignSTFU F@H|CM|+S 14:35, 16 July 2006 (UTC) edit About the BENSON forum. You reallly shouldn't be posting in there until you've left a reply here. Either way, BENSON's logic has proven how shallow you are by not dignifying his flawless logic with a response, and still imaturely posting in the rest of the forum, thereby showing you have lost to his logic. --User:Nintendorulez 00:35, 8 July 2006 (UTC) - If your bozoBenson is at all flawless then let him defend himself. Don't fight for him. - Don't you see the big picture here? Any time spent on praising Benson was wasted. My time spent on trolling Benson, on the other hand, was investment. Ask yourself this question: where is Benson now? 
Look - I guess that idiot is still trying to figure out a way to answer my eight questions! *laugh out loud* - Stop wasting your life on this, kid! Benson is history. Find something else to do and make yourself useful. - -- Colonel Swordman 18:24, 8 July 2006 (UTC) edit Touché You have now bested me and my own aggressive bullet point on VFH, and in Greek even. Well played, indeed. -- Imrealized 22:04, 7 July 2006 (UTC) - Ok. I think I'll take that as a compliment. -- Colonel Swordman 18:24, 8 July 2006 (UTC) - You should you funny, funny bastard. No really, that FFS thing is great. Took me a little while to find it, but nice work and all. You really leaving if your article is featured? What if it isn't, will you stick around then? Maybe I'll go back and change my vote — I'm just starting to liketolerate you. ;) -- Imrealized ...hmm? 05:22, 27 July 2006 (UTC) - FFS is a gag voting page started by a guy called Jack Mort, and, yes, I'm going to leave because there is a better life for me out there and I just can't keep spending my time on this wiki. -- The Colonel (talk) 13:41, 27 July 2006 (UTC) - Oh, I got the gag and the fact that Jack Mort started it up; what I meant was what I refered to on VFH — all your votes against yourself signing others names and with comments that pertained to each. It was bloody brilliant and had me laughing more than a lot of featured articles. Well played, indeed, umm, again. Sorry to see ya go (no really, that's not a joke, or is it?) but can understand the need to break your addiction and get educated, or whatever it is you said you were doing. Pleasure sparring with you, for however briefly. Good luck. -- Imrealized ...hmm? 20:05, 27 July 2006 (UTC) edit Newspeak Plusgood edits. ~ 17:09, 7 July 2006 (UTC) - Cool. Let's celebrate with a cold glass of Victory Gin! -- Colonel Swordman 17:23, 7 July 2006 (UTC) - Don't like gin. Can I have a coke instead? ~ 17:25, 7 July 2006 (UTC) - Coke is for proles.-- Colonel Swordman 17:28, 7 July 2006 (UTC) - Lemonade? Orange Juice? ~ 17:31, 7 July 2006 (UTC) - Doubleplusungood crimethinker! Those drinks are for dogs and proles. Now go to Miniluv and confess your crime against our great BB. -- Colonel Swordman 17:34, 7 July 2006 (UTC) - You crimethinker! You unspeak Newspeak. You confess crimes Miniluv! ~ 17:42, 7 July 2006 (UTC) - Hey, we haven't released the 11th edition Newspeak Dictionary yet! Not harmonising yourself with the Party's agenda is ownlife. So, who's the is crimethinker here, huh? -- Colonel Swordman 17:45, 7 July 2006 (UTC) - *releases doubleplusgood 11th Newspeak Dictionary* - you, crimethinker! ~ 17:47, 7 July 2006 (UTC) - (Thoughtpolice storm in.) *change clothes* *sip wine* You, unperson. Muhahaha.... -- Colonel Swordman 17:53, 7 July 2006 (UTC) edit That Csabbah vanity trash Could you please tell me what is wrong with the article csabba? I mean I make fun of a guy called csabba and dont know what's with the vanity you are talking about. We ironized him a lot and it seems to me a pretty funny article, with some nice jokes. Reply pls [shatterstar] —The preceding unsigned comment was added by shatterstar (talk • contribs) Ha-ha, very funny! -- Colonel Swordman 19:52, 2 July 2006 (UTC) - "ironized??" --Donut Buy one!|Get one Free!|F@H|MUN 22:00, 12 July 2006 (UTC) - Bah! This moron made a vanity page called "Csabbah", only to get deleted by Tompkins. Then he/she/it started the whole thing again under a sightly different title. 
I sabotaged the work by means of vandalism, and he/she/it retaliated by mucking up my userpage - bad move! Mhaille reverted the damage and gave this retard a good ban. Then, Tompkins stepped in, removed all the vanity scat and gave yet another vandal a week ban. I laughed. -- The Colonel (talk) 01:21, 13 July 2006 (UTC)
http://uncyclopedia.wikia.com/wiki/User_talk:Col.swordman/Archive_2
I'm pretty angry at blogs.msdn.com right now (or maybe I'm just angry at myself), as it completely nuked a post I had composed because my session had timed out. I went to post, it asked me to log in, and in the process it destroyed a lot of work. I'll try to put my frustrations out of mind and continue with my increasingly tardy discussion of C++ DF. I've already discussed the CLR Dispose pattern and the C++ destructor syntax. Originally, I'd planned for this third part to be about the interaction between those two models, but I think I'll save that for later, and instead start talking about the benefits of C++ DF, by way of discussing a comment on my previous blog post.

A bit of discussion about Dispose brought up an interesting point: in C++, now, we've disabled the ability to call the Dispose function directly. Instead, you call it thusly:

ref class MyClass {
public:
    ~MyClass() {} // overrides IDisposable::Dispose()
};

int main() {
    MyClass^ mc = gcnew MyClass();
    // do stuff
    mc->Dispose();    // error!
    mc->~MyClass();   // legal, calls Dispose()
    delete mc;        // also legal, also calls Dispose()
}

The options left open to the user are, I think, a little more familiar. However, there's a large caveat included in this discussion. Imagine that, in that innocuous comment titled "do stuff," I throw an exception. I haven't handled it in my code snippet, so I'm likely to run into issues. The most obvious way to handle this would be to wrap our instantiation in a try/finally:

int main() {
    MyClass^ mc = gcnew MyClass();
    try {
        // do stuff, maybe even throw an exception
    }
    finally {
        delete mc; // dispose, regardless of execution path
    }
}

That works, but it's likely to get nasty very quickly when you have multiple disposable objects, as you should be nesting try/finallys (to guard against exceptions in an object's Disposer, for example). C# has a nifty piece of syntax set up to handle this, the using statement (not to be confused with the using directive). You can scope out the code example in that article for more information on it. In C++, however, we've developed our own bit of syntactic sugar to handle disposable objects cleanly: ref types on the stack. With a nod to the example above, here's a quick look at ref types on the stack:

#using <System.Drawing.dll>
using namespace System::Drawing;

int main() {
    Font f1("Arial", 10.0f);
    Font f2("Courier New", 8.0f);
    // do stuff with these fonts
} // Dispose called automatically, even with exceptions

This is a syntax that should be intimately familiar to C++ users; what's more, if you look at the IL, it actually behaves the way they will expect. That is, your disposer gets called even if an exception is thrown during the "do stuff" phase. Ref types on the stack aren't really anything special; they're still a ^ underneath, it's just a lightweight "shim" with a few added bonuses. However, this syntax does represent a slight problem: there are no BCL methods that take a font-on-the-stack, because this whole "on the stack" thing is a C++ convention. To handle this (pun intended), we introduced the % (Anders-of) operator. This basically gets you the handle underneath our implementation. So you can have a ref type on the stack and still pass it to functions that expect a handle to that type. It's used exactly like the & (address-of) operator, which is pleasantly analogous, in light of the & reference type and the % tracking reference type.
#using <System.Drawing.dll>
using namespace System;
using namespace System::Drawing;

void foo(Font^ f);

int main() {
    Font f1("Arial", 10.0f);
    Console::WriteLine(f1.FontFamily);
    foo(%f1); // foo requires a Font^
}

Why not use something similar to the boxing conversion? (Is it some kind of coincidence that my post on boxing was also apparently nuked at some point?) Anyhow, there's a very good reason for not using a boxing-style conversion, and it's something new we've implemented for Whidbey C++: deep copy semantics. (That is, copy constructors for ref types.) We're starting to stray outside the bounds of this conversation, however. We'll save copy ctors for another day. Next time, I'll get into more benefits of DF. Also still to come: how C++ DF interacts with the CLR Dispose pattern.

[Comment] Of course the C++ way to handle this is a smart pointer:

auto_gcptr<Font> font = gcnew Font();

Then when the auto pointer goes out of scope it will be destructed. The implementation of auto_gcptr should be pretty easy, but it would be nice to have this in the Microsoft extensions to std.

[Comment] Now that is a big improvement over nesting usings to Dispose of a number of elements, and we get deterministic finalization by combining the scoped destructor with the Dispose pattern.

[Comment] Why not allow the compiler to automatically insert the % when calling a function that expects a handle? Having to write foo(%f1) complicates the abstraction needlessly. The compiler knows that f1 is a handle in reality, so it could do this without any intervention from the programmer.

[Author] I didn't want to give the impression that I was suggesting the use of nested "usings"; our solution to handling disposable items is to have ref types on the stack. As for not allowing the compiler to insert the %, it comes back to the copy constructor. As we are going to have copy constructors for ref types, it will be possible to have a foo(R r) function signature. It would confuse matters to have this sort of implicit boxing conversion for ref types as well.
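To make the "nesting try/finallys" pain mentioned above concrete, here is a minimal sketch of what just two disposable objects cost you without stack semantics; it reuses the System.Drawing Font type from the post:

#using <System.Drawing.dll>
using namespace System::Drawing;

int main() {
    Font^ f1 = gcnew Font("Arial", 10.0f);
    try {
        Font^ f2 = gcnew Font("Courier New", 8.0f);
        try {
            // do stuff with both fonts
        }
        finally {
            delete f2; // Dispose f2, even if "do stuff" throws
        }
    }
    finally {
        delete f1; // Dispose f1, even if creating f2 throws
    }
}

Compare that to the four-line stack-semantics version above, where the compiler generates the equivalent guards for you.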
https://blogs.msdn.microsoft.com/arich/2004/10/18/deterministic-finalization-iii-benefits-part-1/
Viu Launchpad

Launchpad is our proven frontend build toolkit and asset pipeline to get projects off the ground quickly. It uses Gulp, Webpack and an assortment of handy tasks to develop, optimize and deploy frontends or whole static sites.

Getting Started

Prerequisites

Launchpad requires NodeJS and NPM 12.16.x or higher. We recommend using NVM to manage node versions.

Installing

Install VIU Launchpad into your project:

npm install @viu/launchpad --save-dev

Update from v2 to v3

Not a lot has changed, and if you were able to run v2, you should be able to run v3. The following steps will get you there.

- Update the Node engine in your project's package.json to 12.16.x or higher and NPM to 6.13.4 or higher. Update your "gulp" dependency to 4.0.2 or higher. If you have build scripts or an .nvmrc file, update them too.
- Update your environment to NodeJS and NPM 12.16.x or higher.
- Update your devDependency to Launchpad "^3.0.0".
- Remove any node_modules folder or NPM lock files (package-lock.json, yarn.lock) and then run a complete npm install.
- Autoprefixer's list of target browsers is no longer controlled by tasks.css.autoprefixer.browsers in package.json but checks for a .browserslistrc file in your project root. Check the demo for an example. If you do not provide such a file, autoprefixer is disabled.
- If you have been relying on Launchpad's design system, you will need to include the required dependencies in your project, as Launchpad does not come with them anymore. See the demo for a working example.
- If you choose to keep the default on Launchpad 3 and prefix the SVG sprites, ensure your sprite helper (and all calls to SVG sprites) include the prefix. Check src/templates/macros/helpers.html in the demo for an example.
- Run npm start and your project should be running.
- Run npm run lint to check if the updated linting rules trigger new warnings or errors.

Using

File Structure

Create the following folder structure or use the example configuration from the ./demo/ folder.

src/assets
- fonts/ (files are copied to the root folder of the output destination)
- icons/ (files are merged into an icon font)
- images/ (files will be optimized)
- sprites/ (files are merged into an SVG sprite)
- static/ (files are copied to the root folder of the output destination)
- webapp/ (files are copied to the root folder of the output destination and updated with revisioned URLs)

src/components
- {component name}
  - {component name}.js
  - {component name}.scss
  - {component name}.html / .hbs
  - {component name}.json

src/data
- global.json (file that holds all global data)
- [...] here you add a {template name}.json for every template that needs specific data

src/design (contains the single source of truth regarding the design)
- breakpoints.json
- colors.json
- sizes.json
- typography.json

src/javascripts
- global.js (if you follow the example configuration below)

src/stylesheets
- globals.scss (if you follow the example configuration below)

src/templates
- layouts/
- **/
  - {filename}.html / .hbs

Configuration

Copy the scripts and config object from below into your package.json file. Adapt the config as required. The initial configuration for all tasks is found in node_modules/@viu/launchpad/gulpfile.js/config.json and a working demo config can be found in demo/package.json.
"scripts": { "start": "npm run develop", "develop": "npm run gulp", "gulp": "gulp --gulpfile node_modules/@viu/launchpad/gulpfile.js/index.js", "lint": "npm run gulp lint", "production": "npm run gulp production", "demo": "npm run gulp production && npm run gulp server" }, "config": { "root": { /* allows you to specify your own source and destination folders. the following values are defaults and can be omitted in your file, unless you want to change them */ "base": "../../../../", "src": "../../../../src", "dest": "../../../../public" }, "tasks": { "js": { "entries": { "global": [ "./global.js" ] }, "plugins": [ { "reference": "jQuery", "name": "jquery" } ] }, "browserSync": { "open": false, "cors": true, "rewrites": { "from": ".", "to": "/index.hbs"} /* for projects that use push-state and need to send the same html file for all requests */ }, "iconFont": { "fontHeight": 1001, "normalize": true }, "production": { "rev": true }, "lint": { "js": { "config": ".eslintrc.yml" /* here we specify a custom eslint-config, that sits in the root-folder of your project */ } }, } } ModesModes Development (Default)Development (Default) npm start The npm start command runs the default task, defined in gulpfile.js/tasks/default.js. All files will compile in development mode (uncompressed with source maps). BrowserSync will serve up files to localhost:3000 and will stream live changes to the code and assets to all connected browsers. Additional BrowserSync tools are available on localhost:3001. The npm run develop command has the same effect. Launchpad can only be run within a project. (ex. demo) LintingLinting npm run lint Runs linters on JavaScript (with eslint), TypeScript (with tslint) and SCSS (with sass-lint) and outputs warnings and errors according to the specified linter config. By default the subfolders vendor and generated will be excluded and files in there will not be checked. The eslint-config inherits from the Airbnb JavaScript Style Guide and adds the environments for browser and node. Additionally the eslint-plugin import is installed to check failed imports. SCSS is linted via sass-lint. The default config extends the default rules and adds stricter configuration for classic BEM conventions. Feel free to configure according to your project's specifications. ProductionProduction npm run production This will compile revisioned and compressed files to ./public. DemoDemo npm run demo In addition to generating production files, this will start a static server to serve the files from. This is primarily meant as a way to preview your production build locally, not for use as a live production server. DebugDebug npm run debug This will compile and revision files to ./public, in the same way that npm run production does. However, this command will not minify / uglify your code, but rather keep the readable source. FeaturesFeatures JSJS Modular ES2020 and/or Typescript with Babel and Webpack. Also: - Async requires - Multiple bundles - Shared modules - Source Maps CSSCSS Sass (indented, SCSS, or both) with JSON importer and autoprefixer based on browserlist. HTML / DataHTML / Data Static templating with either Nunjucks or Handlebars and Webpack. Components automatically load the datafile (in the same folder and with the same name as the component) and combine this data object with data that can be provided directly in the template where the component is included. Data provided via the layout is overwriting the default data taken from the component folder. 
Components are used via the component tag, which is not standard Nunjucks functionality but is added in VIU Launchpad. Usage:

{% component 'test', exampleComponentData %}

Components can also be used from within node modules. For that to work, use the following format:

{% component '::test', exampleComponentData %}

For this to work, the node module needs to be named "test" and it needs to contain an 'index.html' and an 'index.json' file (make sure to list these files in the files property of the package.json). To be able to use macros from node packages, reference them with their path:

{% import '../node_modules/example-component/index.html' as example %}

and then call them as with other macros.

You may configure this task to use gulp-cached to only rebuild if the file has actually changed during development. In projects with many html templates, this may improve performance. As a side effect, the templates will not be rebuilt if only the json is edited.

"html": {
  "cache": true
},

The data/global.json and the package.json are merged recursively with the page template of the current page, and the merged object is available in the template.

How to Use Handlebars

To use the handlebars task, just create .hbs files in your components / layouts. The handlebars task can either be run in parallel to the html task (the handlebars task only picks up *.hbs files), or it can completely replace the html task. The demo project shows the tasks running in parallel. To disable the html task, update the package.json:

"config": {
  "tasks": {
    "html": false,
    // ...
  }
}

To disable the handlebars task, update the package.json:

"config": {
  "tasks": {
    "handlebars": false,
    // ...
  }
}

Images

Compression with imagemin (based on mozjpeg).

IconFont

SVGs added to src/icons will be automatically compiled into an icon font and output to ./public/fonts. At the same time, a .sass file will be output to src/stylesheets/generated/_icons.sass. This file contains mixins and classes based on the SVG filename. If you want to edit the template that generates this file, it's at gulpfile.js/tasks/iconFont/template.sass

Usage:

With generated classes:

<span class="icon -twitter"></span>

With mixins:

.lil-birdy-guy
  +icon--twitter

.lil-birdy-guy {
  @include icon--twitter;
}

<span class="lil-birdy-guy"></span>

Don't forget about accessibility!

<span aria-</span>
<!-- or -->
<div class="icon -twitter"><span class="screen-reader">Twitter</span></div>

SVG Sprites

gulpfile.js/tasks/svgSprite

SVG sprites are super powerful. This particular setup allows styling 2 different colors from your CSS. You can have unlimited colors hard coded into your SVG. In the following example, the first path will be red, the second will be white, and the third will be blue. Paths without a fill attribute will inherit the fill property from CSS. Paths with fill="currentColor" will inherit the current CSS color value, and hard-coded fills will not be overwritten, since inline styles trump CSS values.

.sprite
  fill: red
  color: white

<svg xmlns="http://www.w3.org/2000/svg">
  <path d="..."/>
  <path fill="currentColor" d="..."/>
  <path fill="blue" d="..."/>
</svg>

We've included a helper to generate the required SVG markup in src/templates/macros/helpers.html, so you can do:

{{ sprite('my-icon') }}

Which generates HTML output like this:

<span class="sprite sprite--my-icon">
  <svg viewBox="0 0 500 500"><use xlink:href="#svgicon-my-icon"></use></svg>
</span>

Feel free to update the helper to your liking and add additional classes etc.
We recommend setting up your SVGs on a square canvas, centering your artwork, and expanding/combining any shapes of the same color. This last step is important. The IDs of the generated sprite-symbol references will be prefixed with svgicon- (configurable in the svgSprite task config). It is recommended to always use a prefix for the sprite icons, to prevent issues with ID collisions in HTML (for example, an input field elsewhere on the page could otherwise use the same id).

Static Files (favicons, app icons, etc.)

There are some files that belong in your root destination directory that you won't want to process or revision in production. Things like favicons, app icons, etc., should go in src/static, and will get copied over to public as a last step (after revisioning in production). Nothing should ever go directly in public, since it gets completely trashed and re-built when running the default or production tasks.

Webapp Files (service workers, .htaccess)

There are some files that belong in your root destination directory that you want to go into the webroot and be updated with revisioned URLs. Files like service workers, web workers, your manifest or .htaccess configuration should go in src/webapp and will get copied over to public. Nothing should ever go directly in public, since it gets completely trashed and re-built when running the default or production tasks.

Target-specific variables

Adds the capability to define build-target-specific variables. These variables can be used in JavaScript files and will be replaced with the configured value at compile time.

config.tasks.js.targetBuildVariables

Usage:

1. List the build-target-specific variables you would like to use in your scripts in tasks.js.targetBuildVariables.
2a. Define new npm scripts for all the target environments you would like to have, setting the environment variables to a specific value. Or:
2b. Set the environment variables (e.g. 'API_USERS') directly via CLI (platform-specific command).
3. Use the variables in your JavaScript files. Note that 'API_USERS' will be available as __API_USERS__ in your script.

A basic scenario would be that you want to have different builds which test different APIs, for instance:

Test API on: "test.example.com"
Develop API on: "dev.example.com"

Example package.json (only with a specific 'develop' script; you probably have to define the other scripts like "production_test", "production_develop" etc. as well):

"tasks": {
  "js": {
    "targetBuildVariables": ["API_USERS", "API_PRODUCTS"]
  }
},
"scripts": {
  "develop": "gulp --gulpfile node_modules/@viu/launchpad/gulpfile.js/index.js",
  "develop_test": "cross-env API_USERS=test.example.com/getusers API_PRODUCTS=test.example.com/getproducts npm run develop",
  "develop_dev": "cross-env API_USERS=dev.example.com/getusers API_PRODUCTS=dev.example.com/getproducts npm run develop"
}

IMPORTANT: Note the cross-env call at the beginning of the npm scripts develop_test and develop_dev. The cross-env package will make sure that setting the environment variable (e.g. API_USERS) works cross-platform. With this configuration, you can use your custom variables in your JavaScript files like the following:

console.log(__API_USERS__);
console.log(__API_PRODUCTS__);

npm run develop_test
  => 'test.example.com/getusers'
  => 'test.example.com/getproducts'

npm run develop_dev
  => 'dev.example.com/getusers'
  => 'dev.example.com/getproducts'

Note: If you have set the environment variable directly via CLI, you would run 'npm run develop'.
Release Notes / Breaking Changes

We use SemVer for versioning. For the versions available, see the tags on this repository.

- 3.2.1: Launchpad Release 3.2.1
  - Improved error logging (especially significant for TypeScript compilation)
  - Reverted unnecessary engine requirements
  - Added sane defaults to all tasks to not watch temporary extensions, as per Gulp-Watch #238
- 3.2.0: Added handlebars support
  - Added support for handlebars templates using the .hbs extension.
  - Added support for Node 14 LTS.
- 3.1.0: Config cleanup
  - Moved to JS for eslint config in base and demo. Fewer rules exceptions where possible.
  - Streamlined npm scripts available in demo
  - Fixed default eslint config referred to from config.json. Minor updates to tslint config.
  - Updated dependencies (no majors).
- 3.0.0: Launchpad Release 3.0
  - Based on Node 12 LTS / NPM 6
  - Updated to Gulp 4, Babel 7, Webpack 4, MozJPEG and more. Refactored all gulp tasks
  - New features and fixes
    - Support for TypeScript 3.7 and EcmaScript 2020, including dynamic imports/exports
    - Config now allows regular expressions when defining loaders for webpack
    - SVG icon ids are now prefixed with svgicon- (configurable) to prevent id collisions with other HTML elements
    - Autoprefixer and cssnano have replaced gulp-autoprefixer and gulp-cssnano. This means fewer gulp dependencies and thus more up-to-date code. Added gulp-postcss to process both autoprefixer and cssnano at the same time.
    - Fixed revision task. HTML files were revisioned (they shouldn't be) and CSS files weren't (but they should be)
    - Excludes (for linting and html tasks) are working across all platforms
  - Removed features
    - Dropped all code for automated tests, as many launchpad users reported not using it. They don't use automated testing (sadly) or build their own extensive test suites (fantastic!). Feel free to build your own; Launchpad won't get in the way.
    - Removed HTML linting, as it is not widely practiced in general and none of the known launchpad projects ever used it.
    - Removed optional custom development and production tasks, as none of the known launchpad projects used any.
    - Removed the design system from launchpad but added it to the demo project (where it is actually used) as a starting point.
  - Improvements to documentation and code quality
    - Introduced modern (ES2020) linting to the project and removed any linting issues in launchpad and the demo
    - Introduced the /src/ folder for the demo and in documentation, as it has become the de-facto default for all projects at VIU and for many other such build systems
    - Updated .browserslistrc for /demo to be based on the widely accepted default
- 2.4.0: Sass includePaths
  - Fixes a bug where includePaths were not prefixed with the project path and didn't work in libSass
  - Adds ./node_modules as a default includePath, as is generally expected of libSass
- 2.3.0: Separate design definition from design implementation
  - Adds the possibility to have the design live in json files
  - Adds json-to-sass parsing
- 2.2.0: Components from multiple node_modules
  - New componentFolders config allows components to be stored in multiple folders, e.g. one local src/components folder and one or many folders in node_modules.
- 2.1.0: Process arguments available in Nunjucks data.json
  - The HTML task will expose all arguments in an npm run ... statement to Nunjucks via an {{ args }} object. Example: npm run develop -- --myarg=hello -x 15 can now be referenced in Nunjucks as {{ args.myarg }} and {{ args.x }}.
- 2.0.3 - 2.0.6: Components from node_modules & housekeeping
- 2.0.0 - 2.0.2: Release 2.0 brings fundamental (breaking) updates to VIU Launchpad
  - Based on Node 8 LTS / NPM 5
  - All major dependencies updated. Webpack updated to v3.10. See the webpack 3 release notes for how to configure webpack in order to improve JavaScript performance.
  - Improved support for TypeScript v2.7, including TSLint.
  - Switched to babel-preset-env
  - Updated /demo project
- 1.0.0: VIU is proud to release version 1.0.0 of the VIU Launchpad. Highlights are:
  - Starting from our first 'stable' release, we will be following strict SemVer naming conventions.
  - Therefore we will not have any breaking changes without clear labelling.
  - This release is meant for Node LTS 6.*.

Authors

- Michael Keller - Maintainer - VIU
- Andreas Nebiker - Maintainer - VIU
- Raphael Ochsenbein - Maintainer - VIU
- Steffen Rademacker - Contributor - Webgefrickel
- The Team at VIU - Contributors - VIU

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

- Originally based on the popular GulpStarter (now 'Blendid') by Viget
- Thank you, dear customers. We use Launchpad in many of our projects. It is fuel for - and the result of - great things built together with our customers.
- Media Credit: camera.mp4 is used in our ./demo project
https://www.npmjs.com/package/@viu/launchpad
Paul Eggert writes: > True, but my question is what the symbol C_CTYPE_ASCII means. That > is, I am trying to understand the implementation, not trying to > understand the API. It means: The character set is ASCII or one of its variants or extensions, not EBCDIC. I've corrected the comment now. > From your remarks, apparently you mean for C_CTYPE_ASCII to mean "the > character set is upward compatible with JIS X 0201:1997 left half > (Japanese JIS Roman)". Sorry I must have expressed myself wrong. If the character set has '\\' but it has a different codepoint than in ASCII then the ASCII optimizations should _not_ apply. > Conversely, the #if doesn't test for '$' or '@', even those two > characters are in JIS Roman and your remarks suggest that you intended > to test for '$' and '@'. My earlier remarks were wrong. '$' and '@' are not tested, precisely because these characters are not part of the "basic character set". ISO-646-CN is probably not a problem, can be handled like ASCII. (Whether C_CTYPE_ASCII gets set to 1 on a system with ISO-646-CN or ISO-646-JP, will depend on the source code conversions that have been performed on the source file before compilation, maybe converting backslash to YEN SIGN or maybe not etc. - however it's not a problem for the c_* functions.) > Would you be convinced by an efficiency argument? > On my host (GCC 2.95.3 with -O2, sparc), the unportable code: > > int f (int x) { return (x & ~0x7f) == 0; } > > requires 4 instructions, but the portable code: > > int g (unsigned x) { return x <= 0x7f; } > > requires only 3. OK, why not. On x86 also, the generated code for int g (int x) { return x >= 0 && x <= 0x7f; } is smaller. > Besides, a few ones-complement hosts with C compilers are still in use > (Unisys mainframes) Let's hope that they get out of business soon :-) (They would already have, if they didn't succeed in extorting money from people who believe in patent threats.) > Anyway, if it's easy, it's better to avoid code that assumes two's > complement, since such code is a bit trickier to read On the contrary, such code is good teaching material for bit operations. Did you know that for every x ((x - 1) & (- x - 1)) + 1 == x & -x > > For debugging it is best to use -O0, and in this case "c-ctype.h" > > will use the external functions, not the macros. > > But that's two copies of the code, which have to maintained > separately. With inline functions you have one less copy of the code, > so it should be less error-prone. In general, I agree. In this case here, the functions won't change in 10 years. Bruno
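A quick spot-check of that identity in standalone C; unsigned arithmetic wraps mod 2^N, so -x - 1 is ~x and the test is well-defined:

#include <assert.h>
#include <stdio.h>

int main (void)
{
  /* Verify ((x - 1) & (- x - 1)) + 1 == (x & -x), i.e. that the
     left-hand side isolates the lowest set bit, on sample values.  */
  unsigned int tests[] = { 1, 2, 3, 12, 40, 0x7f, 0x80, 0xffff, 0x80000000u };
  size_t i;

  for (i = 0; i < sizeof tests / sizeof tests[0]; i++)
    {
      unsigned int x = tests[i];
      assert ((((x - 1) & (- x - 1)) + 1) == (x & -x));
    }
  printf ("identity holds for all tested values\n");
  return 0;
}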
http://lists.gnu.org/archive/html/bug-gnulib/2003-01/msg00104.html
Hi all, I am relatively new to Python and trying to play around to understand how to build an algorithm. I have a really simple question, but don't understand why I cannot figure this out. This part:

if self.macd > 0:
    self.Debug(str("macd is over 0"))

is causing me a lot of headache. Why is this not working when I have a variable > 0 in the if statement? I have tried to change it from self.macd to other variables without success, but replacing the variable with a number, like the number 1, would work. Would anyone know why and help me out with this? Appreciate the help!

class BasicTemplateAlgorithm(QCAlgorithm):

    def Initialize(self):
        self.SetStartDate(2017,01,01)  # Set Start Date
        self.SetEndDate(2017,10,10)    # Set End Date
        self.SetCash(10000)            # Set Strategy Cash
        self.AddEquity("TSLA", Resolution.Daily)
        self.smaFast = self.SMA("TSLA", 10, Resolution.Daily);
        self.smaSlow = self.SMA("TSLA", 20, Resolution.Daily);
        self.macd = self.MACD("TSLA", 12, 26, 9);

    def OnData(self, data):
        #self.Debug(str(self.macd))
        if self.smaFast > self.smaSlow:
            self.SetHoldings("TSLA", 1)
        elif self.smaFast < self.smaSlow:
            self.SetHoldings("TSLA", 0)
        if self.macd > 0:
            self.Debug(str("macd is over 0"))
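For what it's worth, a likely explanation, assuming the standard QuantConnect LEAN indicator API this snippet appears to target: self.macd holds an indicator object rather than a number, so comparing it directly against 0 does not compare its numeric value. Reading the value explicitly (and waiting for the indicator to warm up) would look something like this:

def OnData(self, data):
    # Indicators need enough bars before their values are meaningful.
    if not self.macd.IsReady:
        return
    # Current.Value is the indicator's numeric reading.
    if self.macd.Current.Value > 0:
        self.Debug("macd is over 0")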
https://www.quantconnect.com/forum/discussion/3148/need-help-simple-question/p1
operator64

By czardas, in AutoIt Example Scripts

Recommended Posts

Similar Content

- By Rhazz

Hello! I have a "syntax error" but I can't understand where it is. Can you help me with this? There are two text files. I want to write three lines of the first file into each line of the second file. Then, if the first file has 600 lines, the second file should have 200 lines. I hope I have explained well what I want.

#include <File.au3>

Local $sFile1
$sFile1 = "File1.txt"
Local $sFile2
$sFile2 = "File2.txt"
Local $sThreeLinesIntoOne

FileOpen($sFile1, 0)
FileOpen($sFile2, 2)
For $i In ( _FileCountLines($sFile1) / 3 )
    For $a = ( ( $i - 1 ) * 3 ) + 1 ) To ( $i * 3 )
        $sThreeLinesIntoOne = $sThreeLinesIntoOne & FileReadLine($sFile1, $a)
    Next
    FileWriteLine($sFile2, $sThreeLinesIntoOne)
Next
FileClose($sFile2)
FileClose($sFile1)

AutoIt returns:

- By guestscripter

I'm getting mixed up! What's the difference between:

While NOT Func1() AND Func2()

and

While NOT Func1() AND NOT Func2()

?? I'm using lots of these and it works fine, but now that I'm rewriting my script I'm mixing things up and getting unsure.

- By careca

Hi ppl, got a problem with this. It's for a small project I have. It was suggested that I should include an error check, but now I'm stuck here. If I only use one string to compare, it works, so I'm wondering: am I missing some operator for this? I could do multiple checks with each of them, but this way it would be simpler. What am I missing here?

$LastKey1 = ClipGet()
$string = StringRegExp($LastKey1, "HKEY_CLASSES_ROOT" Or "HKEY_CURRENT_USER" Or "HKEY_LOCAL_MACHINE" Or "HKEY_USERS" Or "HKEY_CURRENT_CONFIG", 0)
If $string = 0 Then
    MsgBox(4096, "Error", "Clipboard content is not a registry key!")
    Exit
ElseIf $string = 1 Then
    MsgBox(4096, "ClipGet", "Clipboard content is a registry key!")
    ;function
EndIf

error check.au3
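On that last question: AutoIt evaluates the chain of Or's into a single boolean before StringRegExp ever sees it, so only one meaningless pattern is tested. A sketch of the usual fix, assuming the goal is simply "does the clipboard start with any registry root key", is a single PCRE alternation:

$sLastKey = ClipGet()
; One pattern with alternation instead of Or'ed strings:
If StringRegExp($sLastKey, "^HKEY_(CLASSES_ROOT|CURRENT_USER|LOCAL_MACHINE|USERS|CURRENT_CONFIG)") Then
    MsgBox(4096, "ClipGet", "Clipboard content is a registry key!")
Else
    MsgBox(4096, "Error", "Clipboard content is not a registry key!")
EndIf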
https://www.autoitscript.com/forum/topic/176620-operator64/
Here is a basic concept of date and time.

* DateTime is a class which is already defined in the system.
* The general format of a date is year, month and day. For example:

DateTime Date1 = new DateTime(2014, 02, 01);

In the above example, the inputs defined by the programmer are:

Year: 2014
Month: (02) February
Day: 01

* Representation of the date: the programmer can represent the date in his/her own format, keeping these basic format specifiers in mind:

MMMM (in caps): full name of the month, e.g. September
MMM (in caps): short name of the month, e.g. Sep
MM (in caps): month in number
dddd (without caps): full name of the day, e.g. Saturday
ddd (without caps): short name of the day, e.g. Sat
dd (without caps): day in number
yyy & yyyy (without caps): represent the year, e.g. 2014
yy (without caps): represents the last two digits of the year, e.g. for 2014 it gives 14

* In formatting operations, custom date and time format strings can be used either with the ToString method of a date and time instance or with a method that supports composite formatting.
* Syntax for the programmer to represent the date in his/her own format:

Console.WriteLine("Today is " + Date1.ToString("ddd yy, MMMM"));

* DateTimeOffset is a class which is used for both date and time.
* Its general format is year, month, day, hour, minute, second and time span. For example:

DateTimeOffset Date2 = new DateTimeOffset(2011, 6, 10, 15, 24, 16, TimeSpan.Zero);

* The programmer can represent the date and time in his/her own format, keeping these basic format specifiers in mind:

H: hour (24-hour clock)
mm: minute
ss: second
zzz: time-zone offset (from the TimeSpan)

* Syntax for the programmer to represent date and time in his/her own format:

Console.WriteLine(" date and time: {0:MM/dd/yy H:mm:ss zzz}", Date2);

Here is the complete program to implement the date and time logic:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication12
{
    class Program
    {
        static void Main(string[] args)
        {
            DateTime date1 = new DateTime(2016, 09, 14); // year, month and day
            Console.WriteLine("Today is " + date1.ToString("ddd yy, MMMM"));

            DateTimeOffset Date2 = new DateTimeOffset(2016, 09, 14, 15, 24, 16, new TimeSpan(1, 0, 0));
            Console.WriteLine(" date and time: {0:MM/dd/yy H:mm:ss zzz}", Date2);
            Console.ReadLine();
        }
    }
}
https://www.mindstick.com/forum/34177/what-is-a-basic-concept-of-date-and-time
snd_pcm_open_preferred()

Create a handle and open a connection to the preferred audio interface

Synopsis:

#include <sys/asoundlib.h>

int snd_pcm_open_preferred( snd_pcm_t **handle,
                            int *rcard,
                            int *rdevice,
                            int mode );

Arguments:

- handle - A pointer to a location where snd_pcm_open_preferred() can store a handle for the audio interface. You'll need this handle when you call the other snd_pcm_* functions.
- rcard - If non-NULL, this must be a pointer to a location where snd_pcm_open_preferred() can store the number of the card that it opened.
- rdevice - If non-NULL, this must be a pointer to a location where snd_pcm_open_preferred() can store the number of the audio device that it opened.
- mode - The mode in which to open the connection.

The preferred device is located as follows:

- Read /etc/system/config/audio/preferences.
- If this file doesn't exist or has no entry, check PCM device 0 of card 0 for a software mixing overlay device. If this overlay device is found, it's opened.
- Open the default device 0 of card 0.

If all of the above fail, you don't have an audio system running.

Returns:

Zero on success, or a negative value on error.

Errors:

- -EINVAL - Invalid mode.
- -SND_ERROR_INCOMPATIBLE_VERSION - The audio driver version is incompatible with the client library that the application uses.
- -ENOMEM - No memory available for data structures.

Examples:

See the example in Opening your PCM device in the Playing and Capturing Audio Data chapter.

Classification:

QNX Neutrino
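Since the referenced example isn't reproduced here, below is a minimal usage sketch. The SND_PCM_OPEN_PLAYBACK mode flag is assumed from <sys/asoundlib.h>; capture would use SND_PCM_OPEN_CAPTURE instead:

#include <stdio.h>
#include <sys/asoundlib.h>

int main(void)
{
    snd_pcm_t *handle;
    int card = -1, device = -1;

    /* Open the preferred playback device; the card and device
       numbers that were actually opened are stored for us. */
    int rc = snd_pcm_open_preferred(&handle, &card, &device,
                                    SND_PCM_OPEN_PLAYBACK);
    if (rc < 0) {
        fprintf(stderr, "snd_pcm_open_preferred failed: %d\n", rc);
        return 1;
    }
    printf("opened card %d, device %d\n", card, device);
    snd_pcm_close(handle);
    return 0;
}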
http://developer.blackberry.com/playbook/native/documentation/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_open_preferred.html
Outlook is perhaps the most utilized application within Microsoft Office, as it is the application that provides tools for communication, organization, and collaboration. I keep Outlook open all day because it's my main tool not only for communication, but also for tracking my tasks and keeping a record of the time I spend on different projects. In this article I will show you a method for tracking time by developing a custom appointment form that replaces the default appointment form provided by Outlook. This solution is implemented as a VSTO-Outlook add-in. The entire add-in consists of three objects: a WinForm (and its related code), the VSTO-created ThisApplication class, and an XML file.

NOTE: The following prerequisites are required for building the projects in this article:
1. Microsoft .NET Framework 2.0
2. Visual Studio 2005 Beta 2
3. Visual Studio 2005 Tools for Office, Outlook (Beta)
4. Office Professional 2003
5. Office Primary Interop Assemblies (PIAs)

In fact, building an Outlook form is easier than building a typical WinForm, because all you have to do is drag an Outlook field to the form and the control matching the field's data type is drawn. For example, drop the Contacts field on a form in design mode and Outlook will draw a Button and a matching TextBox control on the form. One key drawback to modifying Outlook forms is that you are limited to the forms provided. If you want to build a custom Task form (or any Outlook form), you must start with Outlook's Task form and modify its design. This may work for most solutions, but I have always wanted to build my own form and substitute it in place of the form Outlook provides. This desire is strengthened by the fact that many of the forms do not allow modifications to their primary tab... such as the two forms I most want to customize for my solutions: the Appointment and Task forms!

WARNING: Replacing Outlook's default forms with a WinForm is not supported by Microsoft. This means there is no guarantee this strategy will work with future versions of Outlook or that it will work with all supported Outlook 2003 scenarios. For example, this form does not support many of the features available with the default form.

Outlook, like all Office applications, has an Application object at the top of the object hierarchy. Beneath this object are the Explorers and Inspectors objects, each containing collections of Explorer and Inspector objects, respectively. These two collections can be confusing, as they provide access to the same types of objects within Outlook. That said, they have two separate purposes. To keep it straight, just remember that an Explorer is used to "explore" the items contained within Outlook's folder structure and that an Inspector is used to "inspect" individual Outlook items. In this article, we will focus on the Inspectors collection because it raises the NewInspector event, which is triggered every time an Outlook item is opened or created. This applies to tasks, appointments, contacts, e-mails, etc.

Hopefully your form resembles something similar to Figure 2. She certainly isn't the most beautiful form out there, but we're not through yet.
Using this form, I will now explain the secret to using a custom form in place of any Outlook form.

Step 2: Write Code to Replace Outlook's Default Appointment Form (ThisApplication.vb)

The trick to substituting the default Outlook Appointment form is to capture the NewInspector event. This event resides in the Inspectors collection. By capturing this event, you can intercept the form Outlook would like to open, cancel it, and open your form instead. It's pretty simple once you see how it's done. Switching the form is the easy part; the hard part is the code required to handle opening and saving a "switched" item. Even this isn't too difficult, as you will soon find out. Once you complete Step 2, you will have all the code required for the ThisApplication class. To get started, open ThisApplication.vb and insert the following variable declarations:

Private WithEvents tyINS As Outlook.Inspectors
Private WithEvents tyAI As Outlook.AppointmentItem

These variables provide the needed access to the Inspectors collection and any created AppointmentItem.

ThisApplication_Startup Event

In order to listen for the NewInspector event, the add-in needs access to Outlook's Inspectors collection. The best place to set up a reference to the Inspectors collection is the Startup event of the ThisApplication object.

NOTE: When you created the add-in earlier, VSTO created a class named ThisApplication that inherits from Microsoft.Office.Tools.Outlook.Application and provides project-wide access to the Outlook object model. There is more to it than that, but this is the net result. ThisApplication is contained in a partial class. If you want to check it out further, just click ShowAllFiles in the Solution Explorer.

Insert the following code into the ThisApplication class:

Private Sub ThisApplication_Startup(ByVal sender As Object, _
    ByVal e As System.EventArgs) Handles Me.Startup
    tyINS = Me.Inspectors
End Sub

This method will fire as part of Outlook's initialization and store a reference to Outlook's Inspectors collection. Since tyINS was declared WithEvents, it will allow us to listen for, and respond to, the NewInspector event.

The NewInspector Event

The NewInspector event fires every time an Outlook form opens. It doesn't matter if the form is opened by a user or through code; this event will be triggered. Therefore, the NewInspector event is the place for the code that displays the TyApptItem form.

Private Sub tyINS_NewInspector(ByVal Inspector As _
    Microsoft.Office.Interop.Outlook.Inspector) _
    Handles tyINS.NewInspector
    tyAI = Inspector.CurrentItem
    If tyAI.MessageClass = "IPM.Appointment" Then
        Dim f As New TyApptItem(tyAI)
        f.ol = Me
        f.Show()
    End If
End Sub

The event utilizes the passed Inspector object and assigns its CurrentItem to the tyAI variable. This is important, as it allows us to code against this item's Open event (discussed next). We only want to respond to appointment items, so we check the MessageClass property of the item to determine if it is an appointment. If so, we create and show a new instance of the add-in's custom appointment form. While opening the form, we also pass a reference to the opened appointment item (tyAI) as well as a reference to Outlook (Me). Both of these objects are required by the custom form and will be explained in Step 3.

The tyAI_Open Event

The NewInspector event displays the TyApptItem form, but it does not prevent the newly created Inspector object from displaying. To complete the switch, all that is required is one line of code in an event handler attached to the tyAI variable.
Remember, this variable contains a reference to the CurrentItem of the active Inspector.

Private Sub tyAI_Open(ByRef Cancel As Boolean) Handles tyAI.Open
    Cancel = True
End Sub

Setting Cancel equal to True prevents Outlook from opening the default appointment form and allows the custom TyApptItem to take the reins of the user's calendar experience.

Step 3: Code the Custom Appointment Form (TyApptItem.vb)

In Step 1, I showed you how to design a custom WinForm that will replace the default Outlook appointment form. In Step 2, I explained the code required to prevent Outlook's appointment form from opening and to display the add-in's appointment form in its place. In this step, I will cover the code required to turn the TyApptItem form from just another pretty face into something truly useful. I will cover the different methods and events according to the form's workflow. Add the following namespace directives above the form's class definition:

Imports Outlook = Microsoft.Office.Interop.Outlook
Imports Office = Microsoft.Office.Core
Imports System.Xml

Inside the class definition, insert the following variable declarations:

Friend ol As Outlook.Application
Private ai As Outlook.AppointmentItem

These two variables will store references to Outlook and to the form's related Outlook appointment item, respectively.

New

The New method accepts an Outlook appointment item as a parameter. The appointment is needed by the form in order to read and write appointment data to the default Calendar folder.

Friend Sub New(ByVal Appointment As Outlook.AppointmentItem)
    InitializeComponent()
    ai = Appointment
    OpenAppointment()
End Sub

After storing a local reference to the passed appointment item, the procedure calls the OpenAppointment method.

TyApptItem_Load

Every time the form displays to the user, tscboProjects must be filled with a listing of available projects. The form's Load event calls the LoadProjects method.

Private Sub TyApptItem_Load(ByVal sender As Object, _
    ByVal e As System.EventArgs) Handles Me.Load
    LoadProjects()
End Sub

LoadProjects

The LoadProjects method takes care of filling the tscboProjects combo box with items found in the solution's project file, Projects.xml. This XML file contains a listing of all active projects and is an easy way to demonstrate the dynamic nature of the form's project list. You could, with just a bit more code, utilize a Web service that publishes the project data or access a database directly.

Private Sub LoadProjects()
    'This is hard coded for convenience.
    'Be sure to change the file location to match the location on your system.
    Dim xtr As New _
        XmlTextReader("E:\Articles\Devx05\Code\Devx05\Projects.xml")
    Do While xtr.Read
        Select Case xtr.NodeType
            Case XmlNodeType.Element
                If xtr.Name = "project" Then
                    xtr.Read()
                    tscboProjects.Items.Add(xtr.Value.ToString)
                End If
            Case Else
        End Select
    Loop
    xtr.Close()
End Sub

The method opens the Projects.xml file (via a hard-coded path here; a startup-path-based variation is sketched at the end of this article). Once the file is opened, the data is read and added to the combo box's Items collection. These values can then be selected from the TyApptItem form to assign a project to the time entry record.

SaveAppointment

The SaveAppointment method takes the information entered into the TyApptItem form and saves it as an Outlook appointment item in the default calendar folder. The method accesses the default calendar folder and then copies the data from the TyApptItem to the form's referenced AppointmentItem (ai), which serves as the storage object for our custom form.
Private Sub SaveAppointment()
    Dim apptFolder As Outlook.MAPIFolder = _
        ol.GetNamespace("MAPI").GetDefaultFolder _
        (Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderCalendar)
    Dim up As Outlook.UserProperty
    ai.Subject = Me.txtSubject.Text
    ai.Start = CDate(Format(Me.dtpStartDate.Value, "Short Date") & " " & _
        Format(Me.dtpStartTime.Value, "t"))
    ai.End = CDate(Format(Me.dtpDueDate.Value, "Short Date") & " " & _
        Format(Me.dtpDueTime.Value, "t"))
    ai.Body = Me.txtBody.Text
    'Look for the custom project property; create it if it does not exist yet
    up = ai.UserProperties.Find("TyProject")
    If IsNothing(up) Then
        up = ai.UserProperties.Add("TyProject", Outlook.OlUserPropertyType.olText)
    End If
    up.Value = Me.tscboProjects.SelectedItem
    ai.Save()
End Sub

OpenAppointment

Saving the appointment is only half the battle. What happens when the user attempts to open an appointment item? Well, there are two possibilities. Either they are attempting to open an existing appointment, or they are attempting to create a new one. In the former case, we need to read the data from the saved appointment item and load it into the custom TyApptItem form instance. In the latter case, we can skip reading data as none exists and stick with only creating an instance of the TyApptItem form. The OpenAppointment method handles both scenarios as follows:

Private Sub OpenAppointment()
    If Not IsNothing(ai.EntryID) Then
        Me.txtSubject.Text = ai.Subject
        Me.dtpStartDate.Value = ai.Start
        Me.dtpStartTime.Value = ai.Start
        Me.dtpDueDate.Value = ai.End
        Me.dtpDueTime.Value = ai.End
        Me.txtBody.Text = ai.Body
        Dim up As Outlook.UserProperty = ai.UserProperties.Find("TyProject")
        If Not IsNothing(up) Then
            Me.tscboProjects.Text = up.Value
        End If
    End If
End Sub

The method determines whether it is an existing appointment by checking the EntryID property of the form's AppointmentItem (ai). If the item has an EntryID (which is the item's unique identifier within the Outlook data store), then it is an existing item and we can proceed to read its data into TyApptItem.

tsbSaveandClose_Click

The Save and Close button saves the appointment and then closes the form.

Private Sub tsbSaveandClose_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles tsbSaveandClose.Click
    SaveAppointment()
    Me.Close()
End Sub

tsbSave_Click

The tsbSave button calls the SaveAppointment method and leaves the form open.

Private Sub tsbSave_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles tsbSave.Click
    SaveAppointment()
End Sub

tsbDelete_Click

The Delete button deletes the current appointment item by calling the Delete method of the referenced appointment item. Once deleted, the method closes the open instance of the TyApptItem form.

Private Sub tsbDelete_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles tsbDelete.Click
    ai.Delete()
    Me.Close()
End Sub

Step 4: Miscellaneous Setup Issues

Add User-Defined Property

The form requires a user-defined field named TyProject in Outlook's default Calendar folder; create this field in the folder before running the solution.

Projects.xml

The project depends on the existence of an XML file residing in the application's startup path. This file is named Projects.xml and has a simple schema describing active projects. Insert an XML file into the project (Project > Add New Item > XML File) and name it Projects.xml.
Insert the following text as the file's contents:

<?xml version="1.0" encoding="utf-8" ?>
<projects>
    <project>Accounting</project>
    <project>Consulting</project>
    <project>Sales and Marketing</project>
    <project>Project Management</project>
    <project>Development</project>
    <project>Testing</project>
</projects>

As previously discussed, this file is utilized by the LoadProjects method located in the TyApptItem form. Outlook is a powerful platform for building custom applications and can be customized to fit your solution's needs. As this article demonstrated, you don't even have to utilize the collection of forms provided by Outlook if they do not suit the needs of your solution. This strategy for building custom GUIs within Outlook opens up all kinds of new possibilities for your Outlook-based solutions.
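As an aside, here is one way to resolve Projects.xml relative to the add-in assembly instead of hard-coding the path, as mentioned in the LoadProjects discussion above. This is a sketch rather than code from the original article; it assumes the same control and file names.

Private Sub LoadProjects()
    'Illustrative variation: resolve Projects.xml next to the add-in
    'assembly instead of using a hard-coded path.
    Dim folder As String = System.IO.Path.GetDirectoryName( _
        System.Reflection.Assembly.GetExecutingAssembly().Location)
    Dim xtr As New XmlTextReader(System.IO.Path.Combine(folder, "Projects.xml"))
    Do While xtr.Read
        If xtr.NodeType = XmlNodeType.Element AndAlso xtr.Name = "project" Then
            xtr.Read()
            tscboProjects.Items.Add(xtr.Value.ToString)
        End If
    Loop
    xtr.Close()
End Sub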
http://www.devx.com/MicrosoftISV/Article/29261
One of Django's nice "batteries included" features is the ability to send emails when an error is encountered. This is a great feature for small sites where minor problems would otherwise go unnoticed. Once your site starts getting lots of traffic, however, the feature turns into a liability. An error might fire off thousands of emails in rapid succession. Not only will this put extra load on your web servers, but you could also take down (or get banned from) your email server in the process. One of the first things you want to do when setting up a high-traffic Django site is replace the default error email functionality with an error reporting service like Sentry. Once you've got Sentry set up, what's the best way to disable error emails? Unfortunately, there isn't one simple answer. Let's dig into why...

How does Django Send the Emails?

First let's look at how Django sends those emails. If you look at the source, you'll find it is defined in the default logging config using standard Python logging:

'django': {
    'handlers': ['console', 'mail_admins'],
    'level': 'INFO',
}

As you can guess, the mail_admins handler is the one that does the work. It is defined as:

'mail_admins': {
    'level': 'ERROR',
    'filters': ['require_debug_false'],
    'class': 'django.utils.log.AdminEmailHandler'
}

In plain English, this says that any log messages that are ERROR or higher in the django Python module will get passed to django.utils.log.AdminEmailHandler when the require_debug_false filter evaluates to True. Here is an example of the default Django logging tree.

Disabling the Logger

Simply removing mail_admins from the handlers should be easy, right? Unfortunately, the answer involves diving into the idiosyncrasies of Python logging configuration. If you search around the internet, you'll come up with a few different options, but rarely an explanation of why they work or the potential drawbacks.

⛔️ Option 1: disable_existing_loggers

One option is to set disable_existing_loggers to True in your LOGGING setting. This parameter is the source of a lot of confusion and the results of using it can be unintuitive. From the Python docs:

The default is True because this enables old behaviour in a backward-compatible way. This behaviour is to disable any existing loggers unless they or their ancestors are explicitly named in the logging configuration.

See the effect this has on the logging tree here. Note: all loggers are flagged as "Disabled". The issue with this approach shows up if you later define a logger for a submodule of django, say django.db: as per the docs, you'll find the default django logger is now re-enabled, mail_admins handler and all. That's pretty unintuitive behavior, and for that reason, I don't recommend this approach. When setting up logging, always set disable_existing_loggers to False to avoid this issue.

⛔️ Option 2: Redefine the Logger

There is a second option which is a little more surgical than option #1. By redefining the logger for django, we will replace the handlers that are defined for it in the default config. Here's a full logging config which will accomplish that:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Redefining the logger for the `django` module
        # prevents invoking the `AdminEmailHandler`
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    }
}

See the effect this has on the logging tree here. Unfortunately, this change wipes out any handlers set on child loggers.
One such potentially helpful handler is the request logging used by Django's runserver command. For that reason, I don't recommend this approach either.

✅ Option 3: Copy the Default Logger

If you'd like to keep Django's default config and just remove the mail_admins handler, your best bet is to simply modify the default dictionary directly.

from copy import deepcopy
from django.utils.log import DEFAULT_LOGGING

logging_dict = deepcopy(DEFAULT_LOGGING)
logging_dict['loggers']['django']['handlers'] = ['console']
LOGGING = logging_dict

See the effect this has on the logging tree here. This accomplishes the goal, but also introduces some fragility into the code. A change to DEFAULT_LOGGING in a future version of Django could break this code. In practice, you may also find you want to make more changes to the default config (for example, logging to console when DEBUG = False).

✅ Option 4: LOGGING_CONFIG = None

Setting LOGGING_CONFIG = None in your settings prevents Django from setting up any logging at all. This is the nuclear option.

import logging.config

LOGGING_CONFIG = None

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
})

See the effect this has on the logging tree here. This is a valid approach and probably the best one to take once you want any sort of fine-grained control over your logging. You wouldn't want to use this exact config in practice, but instead, take it as a starting point for building your own custom configuration to suit the needs of your project.

In the process of writing this, I discovered a few "features" in Python logging that surprised me. Brandon Rhodes' logging_tree package was invaluable for peeking under the hood after Django initialized the logging tree. I strongly recommend dropping it into your project to verify logging is set up as you expect. While this post focuses on the mail_admins functionality, in the real world you'll want a more robust logging config (including a catch-all root logger and a logger for your project module), but we'll leave that for another post; a minimal sketch follows below. If you have any Django logging tips, we'd love to hear them!
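To illustrate what that more robust configuration might look like, here is a minimal sketch building on Option 4. It is not from the original post: the myproject name is a placeholder, and the handler and level choices are only examples.

import logging.config

LOGGING_CONFIG = None  # stop Django from configuring logging itself

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "verbose",
        },
    },
    "loggers": {
        # Placeholder: replace with your project's top-level module name.
        "myproject": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "django": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
    # Catch-all root logger; note there is no mail_admins handler anywhere.
    "root": {"handlers": ["console"], "level": "WARNING"},
})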
https://lincolnloop.com/blog/disabling-error-emails-django/
Hello, I am trying to configure OSPF so that in case of a network split OSPF does not advertise a connected route anymore. I am trying to make this work like this: in case IP address 172.20.240.5 does not ping, OSPF withdraws the route for 172.20.240.0/29. The configuration is as follows.

ip vrf AJ-IP
 rd 65000:215
 import map AJ-IP-IN
 export map AJ-IP-OUT
 route-target export 65000:215
 route-target import 65000:215
 route-target import 65000:666

ip sla monitor 1
 type echo protocol ipIcmpEcho 172.20.240.5 source-interface FastEthernet0/1.2
 timeout 20
 vrf AJ-IP
 frequency 5
ip sla monitor schedule 1 start-time now recurring

track 1 rtr 1 reachability

interface FastEthernet0/1.2
 description Switches
 encapsulation dot1Q 2
 ip vrf forwarding AJ-IP
 ip address 172.20.240.2 255.255.255.240
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 standby 1 ip 172.20.240.1
 standby 1 priority 110
 standby 1 preempt

router ospf 1 vrf AJ-IP
 router-id 172.20.0.6
 log-adjacency-changes
 redistribute static subnets route-map AJ-IP
 network 172.20.0.0 0.0.0.255 area 0

ip route vrf AJ-IP 172.20.240.0 255.255.255.240 Null0 tag 98 track 1

route-map AJ-IP permit 10
 match tag 98

This does not work. I think it is because the connected route is still sitting in the routing table due to its lower administrative distance, and therefore the static cannot be advertised. Is there a way around this?

Armin, a couple of things here. 1. It would be preferable to have an L2 design that would prevent a network split. 2. You first refer to 172.20.240.0/29 and later use 172.20.240.0 255.255.255.240, which is a /28. 3. You could use more specific static routes instead to make it work:

ip route vrf AJ-IP 172.20.240.0 255.255.255.248 fa0/1.2 tag 98 track 1
ip route vrf AJ-IP 172.20.240.8 255.255.255.248 fa0/1.2 tag 98 track 1

Regards,

hritter, in this case it is hard to create an L2 design like this. It is indeed 172.20.240.0/28, my mistake :). I suppose using a smaller mask could fix this, but I wonder if there is a better way.

What is your current L2 design? I have built redundant L2 designs in the past. What do you see as the main hurdle? As for easier ways to achieve what you wanted with the static route, I can't think of one, but the one I recommended uses two lines instead of one. It doesn't seem too complicated to me.

Our L2 design is like this:

BB-SW1<-Router_1----L2sw1----L2sw2----Router2->BB-SW2

Connections between Router_1, L2sw1, L2sw2, and Router2 are done on dark fibre with spans of 80 km and more. Behind every L2sw is a subnet that is to be terminated to a router using HSRP on Router1 and Router2. Routers are connected to BB switches. On the BB there is a VLAN where OSPF is used to distribute routing information. This complexity comes from the need to avoid blackholed routes in case of a fibre break, for example between L2sw1 and L2sw2. The static route method is probably the easiest way to achieve this, but the problem is that now we have to announce every subnet between Router1 and Router2 as two subnets. This could create a rather big routing table, considering that there will be 8 subnets per L2sw and a total of 97 L2sws.

It seems a bit awkward that the L2 connectivity extends over two sites. It would usually extend to a single site and would therefore be easier to make redundant (i.e. EtherChannel between the two L2sws). Maybe changing the design is a more appropriate approach than resorting to a hack.
Think of the long term and the growth.

hritter, thank you for your answer; perhaps you misunderstood me. Usually the L2 span between two routers is 2 to 3 switches. 97 is the total number of deployments. EtherChannel is not an option at the moment because those links between L2sws are long and usually run on the same fiber cable. Basically we have two design choices: since those local subnets that reside behind the L2sws are in remote locations (power stations) and the BB switches are not located in all of them, we either place routers and L2sws in every location between BB switches and connect the routers via dark fibre, or do it like this.
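Pulling the thread together: combining the original poster's SLA/track configuration with hritter's more-specific statics, the relevant pieces on Router1 would look roughly like the sketch below. This is a reassembly for illustration, not a verified configuration from the thread.

ip sla monitor 1
 type echo protocol ipIcmpEcho 172.20.240.5 source-interface FastEthernet0/1.2
 timeout 20
 vrf AJ-IP
 frequency 5
ip sla monitor schedule 1 start-time now recurring
!
track 1 rtr 1 reachability
!
! Two /29 statics cover the /28; both are withdrawn when the track fails,
! and their tag 98 matches the route-map used for redistribution into OSPF.
ip route vrf AJ-IP 172.20.240.0 255.255.255.248 fa0/1.2 tag 98 track 1
ip route vrf AJ-IP 172.20.240.8 255.255.255.248 fa0/1.2 tag 98 track 1
!
route-map AJ-IP permit 10
 match tag 98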
https://supportforums.cisco.com/t5/lan-switching-and-routing/track-and-connected-route/td-p/969184
The wikiHow Administrator Notice Board, often referred to as the ANB, is a place for users to report a problem that needs attention from an administrator. For wikiHow administrators, the ANB is the place to handle the concerns of these users. However, for new wikiHowians, it can be confusing learning to use it, and new trial administrators may be unsure of how to handle reports on the ANB. Using the ANB is a simple task that doesn't require a lot of learning to do, and is crucial to any wikiHowian.

Steps

Method One of Two: As a Non-Admin

1. Know when to file a report on the ANB. There are no hard and fast rules as to when you should file a report on the ANB, and some trolling behavior doesn't get reported immediately. However, it's considered best practice to make an ANB report when the user has been coached and is continuing to make the same kinds of edits or engage in the same behavior. Some examples of situations that would be good to report on the ANB would be when:
- Somebody's username or real name is in violation of the Username Policy, and they've received {{username}} or {{realname}} (or any derivative of it) on their Talk page.
- Somebody is vandalizing or blanking wikiHow pages (whether they're articles, Talk pages, or so forth) and does not appear to be listening to coaching. This is inclusive of removing NFD tags when the user is not a New Article Booster or administrator.
- Somebody is leaving abusive comments (e.g. telling someone to harm themselves or threatening them) or spam comments (e.g. linking a service on people's Talk pages). This also applies to threads on the wikiHow forums.
- Somebody is impersonating another user or an administrator.
- Somebody is uploading inappropriate images (e.g. pornographic images) or is uploading many images that they do not have a license to.
- Somebody who was blocked previously is using a different account to evade that block. This type of account is referred to as a malicious sockpuppet.

2. Know when not to file an ANB report. There are times when filing an ANB report is unnecessary, as nothing will be done. To avoid making an edit to the ANB that you don't need to make, it's important to understand when you shouldn't file an ANB report. There are many situations that don't warrant an ANB report. Some examples of these situations are when:
- An anonymous user has stated that they're under 13. Anonymous users are not blocked for their age, as IP addresses are often used by multiple people.
- Somebody is making bad faith edits, but doesn't appear to have been coached. The policy on wikiHow is to assume good faith; the contributor may just be messing around or testing out editing software. Check the history on their Talk page to make sure they didn't remove the coaching message; if they did, leave a coaching message and then file an ANB report.
- Somebody's last abusive edit was more than a few days ago. While some vandals disappear for several days and then return to vandalize, most of them get bored quickly. Chances are, if their last contribution was days ago, they're not interested in continuing to vandalize.
- The person responded to coaching and is now making good faith edits.
- The person has a sockpuppet account, but neither account is making bad faith edits. There's no problem with having a sockpuppet in and of itself; the problem is when those sockpuppets are used with malicious intent.
- Somebody has stated their grade level. Grade levels vary among countries.
Somebody saying they're in "year five" doesn't automatically mean they're around the age of an American fifth grader (ten or eleven) and doesn't warrant a block.
- The user is already blocked. This may seem like it's obvious, but it's possible to file an ANB report without realizing the user has already been blocked. If you check the user's contribution history, there will be a notice if the user is currently blocked. If you get this notice, don't report them.

3. Coach the user before filing an ANB report. In cases where the vandalism, chatting, spamming, or so forth isn't extremely severe, and the user hasn't been coached before, consider coaching the user before making an ANB report. People on wikiHow are often very receptive to positive, friendly coaching, and many people who started out as bad faith editors can evolve into wonderful contributors (and even administrators!). Take a few minutes to help coach the user so that they know what they're doing wrong and have a chance to change their behavior.
- If you don't know how to coach the user, templates like {{test}}, {{warning}}, {{chat}}, {{exlinks}}, {{Nfdtag}}, {{username}}, and {{speedy}} can help out. These automatically generate a message when posted on a Talk page.
- If a user has stated their age and they are under the age of 13, remove their age information from where they posted it, if possible. You can either "coach" them by writing a message yourself, or use {{age}}. However, in this case, file an ANB report after notifying them - you can't coach someone's age.
- If the user has been coached more than fifteen minutes ago or you've sent them a coaching message, and they're still making the same types of edits, file an ANB report.

4. Access the Administrator Notice Board. In order to make an ANB report, you'll need to go to the ANB. The page is in the wikiHow namespace, but simply putting only "Administrator Notice Board" in the URL will redirect you to the ANB. You may want to bookmark the page if you think you're going to forget where it is.

5. Pick the proper category to file a report under. Once you've accessed the ANB, you may be confused by all the different categories for reports. However, it's only twelve categories in total, and they're fairly straightforward. The sections that you use to report a user for violating policy are:
- Username Violations. This is for those who have a username or real name set that is in violation of the Username Policy.
- Spammers and Vandals. This is for those who are spamming or vandalizing pages on wikiHow or the forums and haven't responded to coaching. This section is also often used for those who are leaving abusive comments towards other wikiHow users.
- Users who are Chatting. This is for users who have been continuously chatting excessively on wikiHow, without contributing to articles, and have received a {{chat}} template, but are still chatting.
- Malicious Sockpuppets. This is for users who are suspected to be a malicious sockpuppet of a troll, vandal, or chatter. These accounts are often used to evade blocks or patrol in their own bad edits.
- Users Under the Age of 13. This is for users who have explicitly stated that they are under the age of 13.
- Spam & Sites to be Blacklisted. This is for sites that are being repeatedly advertised, despite the user being coached on the External Links policy.
- Patrollers on the Rise. This is for new wikiHowians that are starting off, but are making mistakes and getting multiple messages from the Patrol Coach.
- Suggested Sites to be Whitelisted. This is for websites that are currently on the Spam Blacklist, but should be allowed for use.
- Pages Needing Protection. This is for pages that are repeatedly being vandalized or spammed and that need to be protected to prevent spamming.
- Image/Video Uploading Spree. This is for users who are uploading images without licenses or images that they do not have permission to use, or are uploading inappropriate images (e.g. gore or pornographic images).
- NFD Review. This is for articles that are on the NFD list, but have been sufficiently edited to no longer meet the NFD criteria. It can also be used if an article has been NFD'd over a week ago, but no consensus for deletion has been reached.
- Miscellaneous. This is for reports that do not fit any of the above sections, or that fit multiple of the above sections (for example, a suspected malicious sockpuppet without any proof of sockpuppetry, or a user who both vandalizes and chats).

6. Click on the Edit button. To file an ANB report, you'll need to edit the page. There are multiple Edit buttons on the ANB - there's one at the top of the page, and one at every section header. Click on one of the Edit tabs, which will bring you to the editing page.
- Using the Edit tab above the title will pull up the whole page, which is useful if you need to file multiple reports under different categories. However, clicking Edit on a section header will allow you to edit just that section, which may be a better choice if you're new to wikiHow and don't know how wikiHow formatting and syntax works quite yet.

7. Write the report in the proper format. When reporting a user on the ANB, there's a certain way of formatting the report. The proper form for this is to type two opening curly brackets ({{), write the word userlinks, place a pipe (|) after "userlinks", and then place the contributor's name after the pipe. Afterwards, you should place another pipe, where you write the reason for the report and include any specifics if necessary (such as diff links or forum threads); afterwards, you would close off the report with two closing curly brackets (}}). For example, to report a user who has the username "TrollName", you would write {{userlinks|TrollName|trolling}} to file a report. Then, sign your post with four tildes, so that it looks like ~~~~ at the end.
- Sometimes, putting a pipe after the username and writing the reason doesn't work, and the report will cut off at the username. If this happens, simply delete the second pipe and move the brackets to the end of the username so that the report shows up properly. For example, {{userlinks|TrollName|trolling}} would be written as {{userlinks|TrollName}} - trolling ~~~~ instead.
- Include diff links if possible. When making an ANB report, including diff links (which is proof of an edit, and is logged with every change of a page) is very important, especially if you're reporting something like an underage user or a malicious sockpuppet. Administrators will want to see proof of the reason you're making the report, and while it's possible for them to look at the user's contributions, giving them the link right from the start makes it easier for them to review the user.
- If you're not reporting a user, and are requesting NFD review or page protection, simply format the report as something like *Please review/protect [[Article Title]] ~~~~.
If you are requesting page protection, explain whether you feel the page should be semi-protected or fully protected, and how long the protection should last for.

8. Click Publish. Publishing the report will make it visible to everyone who checks the ANB, whether or not they're an administrator. However, it's necessary in order for an administrator to see the report and act on it. Once you publish the report, the contributor you've reported is more likely to be noticed and helped (or blocked, in severe cases).

9. Wait for an administrator to respond. After you have published your ANB report, you'll need to wait for an administrator to check the ANB and look at your report. This can take time. In the meantime, do whatever you need to, even if that involves rolling back a vandal's edits. An administrator will get to it - don't worry. Often, they log on at just the right time!
- Be patient. It might take some time for an administrator to get online and check the ANB.

Method Two of Two: As an Admin

1. Understand your responsibility as a wikiHow administrator. As a wikiHow administrator, one of the ways you can be involved is to check the ANB and watch out for anything that's going on. Since non-admins can't do anything to prevent other users from contributing abuse or spam, it's important to have some selection of members from the Admin team checking the ANB on a regular basis.

2. Search for reports. Because two important categories are at the top of the ANB, it's easy to forget about checking the other ones. However, these reports are still just as important as username violations or spammers. Scan the entirety of the ANB and see if there's something listed.

3. Perform requested maintenance on the site. Some ANB reports require minor maintenance that is up to an admin to tackle. If you see one of these on the ANB and want to take care of it, you can dive in with activities such as:
- Add links to the Spam Blacklist or Whitelist if appropriate. If you're unsure about whether you should blacklist or whitelist a link, check with another admin or a staff member such as Anna or Chris H.
- Review any articles in NFD Review. If it's appropriate to do so, remove the NFD tag.
- Protect any pages that are getting repeatedly vandalized. Make sure to follow the guidelines for doing so.
- Delete pages that have been reported and qualify for speedy deletion. Additionally, delete any photos that have been reported and remove any inappropriate user images.

4. See if a reported user has been coached. Before taking any action on a user who has been violating policy, it's important to see if another user has made any attempt towards coaching them or leaving them a warning. If someone has been chatting, for example, see if anyone has left them the {{chat}} template, or has sent a friendly coaching message of their own to encourage them to contribute.
- If someone has seemingly coached them with a personal message, but the message comes off as rude or harsh, leave the user a friendly message or a template. While the other user may have meant well, harsh criticism may discourage users and drive them away from being helpful contributors.

5. Take appropriate action on a reported user. Coach the user, apply any needed patrol throttles, or block them if necessary, for the proper block times. If you're unsure about the situation, check with another administrator.
Generally, though, it's best to avoid blocking users unless they're clearly out to do more harm than good, and aren't stopping after firm warnings.
- If you aren't a checkuser, contact a checkuser if there is evidence of a suspected sockpuppet. You can also just leave the report there and handle any other reports that don't have to do with sockpuppets.
- You don't need to immediately resort to blocking the user; if they've been inactive and you don't know if they've done enough to warrant being blocked, you can simply watch the user's contributions for some time rather than blocking them.
- If blocking or patrol-throttling a user, you may leave them a note so that they know. The {{blocked|Reason for block}} template will work fine in most block cases (unless the user is under 13, in which case {{Under13}} should be used), but you'll have to write a message to the user yourself if you're patrol-throttling them.
- Anonymous users under the age of 13 should not be blocked solely for being under 13. IP addresses are often used by multiple people, and blocking an underage anonymous user may block several good faith editors as well. Additionally, some IP addresses are connected to places like schools or libraries, meaning that blocking an under-13 anonymous user may inadvertently block those who are not connected to the underage user in any way.

6. Remove or respond to the ANB report. In some cases, such as when you've blocked a user, you can simply remove the report from the ANB with an edit summary such as "blocked". However, in some cases, it's not in the best interest for the report to be removed - such as if a checkuser has been contacted for a sockpuppet, or a vandal has been inactive for a bit. In this case, it may be better to leave a note on the ANB so that it's clear an admin (you) has seen it. When it comes to either option, though, write a proper edit summary so that others can see that you've responded to or removed the report.
- You don't have to respond to an ANB report, but this can be a courtesy to the user who made the report. Non-admins don't know what admins are doing behind the scenes, and another admin might not know that you checked out the situation. A simple "Watching out for further activity" or "Contacted a checkuser - thanks for the report" can do a lot.

7. Follow up with the reporter if necessary. In some cases, you may want to follow up with the user who placed an ANB report. If they were heavily involved in a situation, you may want to follow up with them and let them know how you handled the report. It's a courtesy to them so that they know what happened without having to check the ANB.

Community Q&A

Q: What do I do if I'm patrolling, and I see a non-administrator remove something from the administrator notice board?
A (answered by ExoticComet): Look at the edit summary. If there is a probable reason for removing something, investigate and determine whether the reason was true. If an experienced community member removed something with a fair cause, you may mark it as patrolled. If a new contributor has made an edit to the ANB, be suspicious but assume good faith. If the new contributor made a bad edit to the ANB, roll the edit back and coach them in a kind and civil manner.

Tips

- Many users respond well to coaching and will listen to it. However, if a user is acknowledging coaching and is still struggling, an ANB report under "Miscellaneous" may be a good idea.
- Administrators that do not have checkuser powers can leave suspected sockpuppet reports alone so that a checkuser can look at them.
- Non-admins can comment on a report, even if it's not their own, to clarify or add new information. Don't be afraid to do so if you know more about the situation.
- It's better to include diff links than not, if possible. However, it's not necessary, so there are no repercussions if you can't add diff links or if there's no on-wikiHow evidence.
- Don't delete any troll or spam emails that were sent via wikiHow. If users are harassing you via email, you will be requested to forward the emails to an administrator.
- Avoid engaging with trolls. If somebody is leaving you abusive messages, don't respond to them. Responding to trolls only makes them more inclined to keep harassing you, which escalates the situation.
- Remember that the admin team is made up of volunteers. They're not all online all the time and aren't all able or willing to respond immediately to messages about potential issues. There is never any big rush, since anything done on the wiki can be undone easily, so it's best to leave your report on the ANB and wait for an active admin to help, rather than putting pressure on any individuals who may be online at the time.

Warnings

- Administrators are the only ones allowed to use the {{blocked}} or {{Under13}} templates. If a non-admin uses these, they may get blocked for impersonating an administrator.
- Some vandals will remove reports from the ANB. If you're dealing with a particularly belligerent user, keep an eye on the ANB and roll back their removal of reports.
- Filing a report against a user that you dislike for your own personal reasons is unacceptable.
- Don't file an ANB report against someone with the intent of getting them in trouble. Things on wikiHow like blocks or patrol-throttles are not set with the intent to punish.
https://m.wikihow.com/Use-the-wikiHow-Administrator-Notice-Board
From the IBM WebSphere Developer Technical Journal.

If you are the author of a Service Component Architecture (SCA) module, your primary responsibility is to provide a reliable implementation of the service interfaces that you export from your SCA module. Automated testing of the module interfaces enables the SCA component developer to use a repeatable and efficient means of verifying the quality of the delivered components. This article looks at the implementation of sets of tests of module interfaces; testing that for some specified input data, a particular response is obtained. A follow-on article will discuss testing more complex components, such as BPEL processes, where you also need to consider the testing of side effects.

We will be testing a very simple component that validates a UK postcode and returns details of the addresses that correspond to that postcode. The files that are applicable to this example are included in a downloadable zip file for your convenience. This postcode component has its interface defined in library L_MailService and the component is delivered in module MP_MailService in the download file. Figures 1 through 3, below, show the IBM WebSphere Integration Developer assembly diagram, component interface, and the definition of the PostCodeDetails data object returned by the service operation we want to test.

Figure 1. Assembly diagram: Component and Export
Figure 2. getDetailsForPostCode operation
Figure 3. PostCodeDetails business object

If we were developing such a component for actual use, we would likely write quite a few different tests that provide many combinations of input data. For the purpose of this article, though, we will write just two:

- A test that invokes the service with a valid postcode and checks that a known set of data is returned.
- A test that supplies an invalid postcode and checks that a fault occurs.

Before we can actually begin to write our tests, there are some preparatory steps you must perform. At a high level, these are:

- Install JDK update
- Import test utilities
- Create test module
- Set module dependencies
- Add the Cactus framework to the test project
- Assemble module

Using the Cactus framework exposes a defect in the version of the JDK for Windows® platforms delivered with WebSphere Integration Developer V6.0.1, which is fixed in J2RE 1.4.2 IBM Windows 32 build cn142-20050929 (SR3) and later versions of the J2RE. You can either apply this fix to your WebSphere Process Server test environment using the appropriate IBM SDK installer, or upgrade your test environment to WebSphere Process Server V6.0.1.2, which includes J2RE 1.4.2 IBM Windows 32 build cn142ifx-20060209 (SR4-1).

Utilities for working with Service Data Objects (SDO) have been included with this article. These utilities must be imported into the WebSphere Integration Developer workspace. The Project Interchange File (PIF) included in the download zip file contains these three libraries (among others):

- J_ScaUtilities
- LT_ScaJUnitTest
- LT_ScaTest

If you plan to work through this article, then you will also need to import the module we are testing, MP_MailService, and its interface library L_MailService. As you will see, this contains only a trivial implementation of the interface, just sufficient to enable our tests to execute. To import these modules into your workspace using WebSphere Integration Developer, select File => Import.
In the Import dialogue, select Project Interchange, and then Next. Browse to the download zip file. Select the modules to import and click Finish. In the Business Integration view of the Business Integration perspective, you should now see something similar to Figure 4.

Figure 4. MailService with PostCode implementation

Our tests will be created in a module that will then be deployed to a server. We must create this test module. In the Business Integration perspective, go to the Business Integration view. Right-click, then select New => Module. In the New Module dialogue, enter a name such as MT_TestMailService, and click Finish. We are going to create our tests in a dynamic Web application in the MT_TestMailService module. However, we should note that the standard J2EE projects associated with an SCA module are generated by the tooling and are effectively transient artifacts. Therefore, we will create a new dynamic Web application that will hold our tests. Switch to a Web perspective. In the Project Explorer view, expand Enterprise Applications. Right-click on MT_TestMailService, then select New Dynamic Web Project. In the New Dynamic Web Project dialogue, enter a name such as MT_TestMailServiceJUnitWeb, and click Finish.

D. Set module dependencies

We need to ensure that the association between this new Web project and the test module is recorded, and that the correct library dependencies are created. Switch back to the Business Integration perspective, Business Integration view. Select MT_TestMailService and double-click to open the Dependency editor. Expand Libraries and use the Add function to make the following libraries available to our tests:

- L_MailService (giving us access to the module we are testing)
- J_ScaUtilities
- LT_ScaJUnitTest
- LT_ScaTest

Expand J2EE and use the Add function to add MT_TestMailServiceJUnitWeb to the SCA module. Your dependency editor should now look like Figure 5.

Figure 5. Test module dependencies

Save your changes. We now need to explicitly add the libraries to the Web project build and runtime classpaths. Return to the Web perspective. In the Project Explorer, right-click on the MT_TestMailServiceJUnitWeb project, then select Properties. Select Java JAR Dependencies, and then check each of the four libraries we just added (Figure 6).

Figure 6. JUnit Web project, dependent JARs

Click OK to save these changes. We have now completed setting up our module and library dependencies. Next, we need to make the Cactus test framework available to our tests.
You will need these libraries delivered with the Cactus download (listed here are the versions we used):

- aspectjrt-1.2.1.jar
- cactus-1.7.1.jar
- commons-httpclient-2.0.2.jar
- commons-logging-1.0.4.jar

We need to make these JAR files available to our test module, and also configure two servlets that are used by the Cactus runtime. The Cactus libraries can be packaged with our test Web application by adding them to the WEB-INF/lib directory. To do this, switch to the Web perspective, Project Explorer view, and expand Dynamic Web Projects => MT_TestMailServiceJUnitWeb => WebContent => WEB-INF, revealing the lib directory. You can now drag the five JAR files to the lib directory, as shown in Figure 7.

Figure 7. Cactus and JUnit JARs

It is easiest to enter the Cactus servlet definitions by cutting and pasting directly into the Web application deployment descriptor. In the same expanded view of MT_TestMailServiceJUnitWeb, right-click the entry DeploymentDescriptor: MT_TestMailServiceJUnitWeb, then select Open with => Text Editor, which will open a text editor view. Paste the servlet and servlet mapping entries (see the reconstructed example at the end of this section) into the text editor, between the existing description and welcome file elements. Save and close the text editor. You could now re-open the deployment descriptor using the more typical Deployment Descriptor editor. If you do, you should see the two servlets you added in the Servlets tab (Figure 8).

Figure 8. Cactus servlets

Finally, our test module will need access to the interface library of the module we are testing. We also need our test module to invoke the components we are testing, which we will do by importing suitable interfaces to our test module and wiring them properly. Switch back to the Business Integration perspective, Business Integration view, expand the MT_TestMailService project, and open the Assembly editor. Add an import to the assembly diagram, rename it to PostCode, and then add the I_PostCode interface to the import. Generate an SCA binding for the import. Your diagram should now look like Figure 9.

Figure 9. Import with interface to test, SCA binding

In the Binding tab of the Properties pane, browse and select PostCodeExport from the MailService module, hence binding this import to the specific module we are testing (Figure 10).

Figure 10. Import bound to module under test

You need to make this import available to your test Web application, which is a non-SCA project, so you will need a standalone reference. Add a standalone reference icon to your diagram and wire it to your import. You may be asked two questions:

- First, whether you want to make a matching reference: click OK.
- Second, whether you want to convert the WSDL interface to a Java™ interface. It is important that you accept this option by selecting OK. Doing this simplifies the code needed to inspect the results of the service invocation.

Your assembly diagram should now look like Figure 11.

Figure 11. Standalone reference to PostCode service, with Partner Name

Take a note of the Partner Reference name; you will need this when your test invokes the service. Save the Assembly diagram.
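The deployment descriptor entries referenced above did not survive in this copy of the article. For Cactus 1.7, the standard servlet and servlet-mapping entries look like the following; treat this as a reconstruction based on the usual Cactus configuration rather than the article's exact listing (the ServletTestRunner is the second servlet commonly configured, and lets you run tests from a browser).

<servlet>
    <servlet-name>ServletRedirector</servlet-name>
    <servlet-class>org.apache.cactus.server.ServletTestRedirector</servlet-class>
</servlet>
<servlet>
    <servlet-name>ServletTestRunner</servlet-name>
    <servlet-class>org.apache.cactus.server.runner.ServletTestRunner</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>ServletRedirector</servlet-name>
    <url-pattern>/ServletRedirector</url-pattern>
</servlet-mapping>
<servlet-mapping>
    <servlet-name>ServletTestRunner</servlet-name>
    <url-pattern>/ServletTestRunner</url-pattern>
</servlet-mapping>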
Create tests for SCA components

With our preparations completed, we can now begin to write unit tests. We're going to begin by creating a test invoking the getDetailsForPostCode() service operation with what we expect to be a valid postcode and then checking the response data. We will use standard SCA and SDO programming techniques. As you will see, these are quite verbose, and writing tests like this would be relatively time-consuming. However, we want to exercise the preparatory work we've carried out before we consider other, simpler ways of writing these tests. We will first create a JUnit test class, add a single test to the class, and then, finally, execute the test.

The usual programming model with JUnit version 3.8 is to create a test class that extends one of the TestCase classes provided by JUnit. In our case, we are using the Cactus framework to execute our tests on the server, and so we must extend the Cactus test class org.apache.cactus.ServletTestCase. Go to the Web perspective, Project Explorer view, and expand Dynamic Web Projects => MT_TestMailServiceJUnitWeb => Java Resources => JavaSource. Right-click on Java Source and select New => Package to create a suitable package. Right-click the package and select New => Class to create the class (a reconstructed listing appears at the end of this section). We can then add one or more test methods to this class. Because we have added the class to the Web application, when we add our module to the server our test class will be available for execution in the server environment.

The test class represents the usual unit of test execution that occurs while you are developing, and so we should consider carefully how many tests to include in any one class. Too few, and the management of the classes becomes onerous. Too many, and the time to execute a class becomes wearisome. As a rule of thumb, five to ten test cases per class seems about right. The JUnit 3.8 naming convention states that all test method names should begin with the word test. The methods take no parameters and are void. Test failures are indicated by throwing unchecked exceptions, typically by using JUnit-provided assertion and failure methods. Test success is indicated by completion of a test method. Our test method, then, looks like the one in the reconstructed listing at the end of this section. The code shows:

- Use of the service manager to access the service via the Partner Reference name: I_PostcodePartner.
- Invocation of the service operation with postcode BR1 0AB.
- A succession of assertions about the data returned by the operation. Note in particular the XPath expressions used to navigate the DataObject. We use expressions such as addressList[1]/address to access the individual address strings within the addressList; the first element in the array has an index of 1.

Paste this code into your test class and save it. With this code in place, we are now ready to execute the test. First, we need to add the test module to the server. We will assume that you already have a suitable WebSphere Process Server test environment in your WebSphere Integration Developer installation. Make sure that this server is started, then in the Web perspective, Servers view, right-click on the server, select Add/Remove Projects, and add both the MP_MailServiceApp and MT_TestMailServiceApp applications to the server. We now have our test class available for execution in two contexts:

- It is a class deployed to a server and running in an SCA environment. The class is available to a Cactus servlet and so can be executed when suitable requests are delivered to that servlet.
- It is a Java class that can be executed standalone as a JUnit test. When executed standalone, the Cactus framework will use the test class as a proxy for the class deployed in the server and invoke the Cactus servlet.

The Eclipse JUnit facilities enable us to execute a standalone JUnit test class by right-clicking and selecting JUnit Test.
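The article's class and test-method listings were lost in this copy. The following reconstruction is consistent with the surrounding description (the org.djna.mailservice.test package, the Cactus ServletTestCase base class, the I_PostcodePartner reference, and the addressList[1]/address XPath), but the generated interface name, the DataObject property names, and the expected address values are assumptions introduced for illustration.

package org.djna.mailservice.test;

import org.apache.cactus.ServletTestCase;
import com.ibm.websphere.sca.ServiceManager;
import commonj.sdo.DataObject;

public class PostCodeTest extends ServletTestCase {

    public void testGoodPostCode() {
        // Locate the service through the standalone reference's partner name.
        // I_PostCode is the Java interface generated from the WSDL interface;
        // its exact name in the original project is an assumption.
        I_PostCode service =
            (I_PostCode) ServiceManager.INSTANCE.locateService("I_PostcodePartner");

        // Invoke the operation with a postcode we expect to be valid.
        DataObject details = service.getDetailsForPostCode("BR1 0AB");

        // Assert on the returned data, navigating with XPath expressions.
        // The property names and expected values below are placeholders.
        assertNotNull("no details returned", details);
        assertEquals("wrong postcode", "BR1 0AB", details.getString("postCode"));
        assertEquals("wrong first address",
                "1 Example Road, Bromley",
                details.getString("addressList[1]/address"));
    }
}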
However, our test uses Cactus to communicate with the server, and so we must provide a command line argument to specify where the server is running. In the Web perspective, Project Explorer view, select the PostCodeTest class and then, from the Run menu, select Run... to display the Run dialogue. Select JUnit in the list of possible Configurations and click New. Observe in the Test pane that the project and test class values are filled in. Also observe that you can use this dialogue to initiate a family of tests; all classes in a chosen directory. Select the Arguments pane. Under VM arguments, enter the following (Figure 12):

-Dcactus.contextURL=http://localhost:9080/MT_TestMailServiceJUnitWeb

Figure 12. Run configuration for Cactus test

Note that this specifies the localhost and port 9080; if your server or port for HTTP requests are different, you will need to adjust this string to match your choices. Select Run to initiate the test. Now that you have established this launch configuration, in future you will be able to repeatedly execute this class simply by selecting it and clicking Run => JUnit Test. The JUnit view then shows the result of running the test (Figure 13).

Figure 13. Results of test execution

You can re-execute the tests selectively from the JUnit view; right-click the class or test, then select Run.

Externalising the test data

We mentioned earlier that the creation of the details of the test case is somewhat laborious, requiring extensive use of the SDO programming APIs. In our simple example, the degree of difficulty may not seem excessive, but for realistic applications with data objects that may well be very much larger and involve many layers of nesting, the work of creating input data for a service and then validating the results promises to be very significant. It seems wise, then, to drive the tests from external configuration files holding XML representations of the data objects. When creating families of tests, it then becomes a simple matter of copying and modifying XML files; in practice, this proves to be very much simpler and less error prone than writing SDO code.

We have created a framework that enables the rapid development of such test cases. In our setup instructions above, we made the framework libraries available for use. The process for creating a test using the framework is:

- Modify the test code to use a set of utility classes
- Create a directory hierarchy for the configuration files
- Create a configuration file defining the test to be executed
- Create configuration files to describe the payload and the expected result
- Create data object XML files

A. Modify the test code to use a set of utility classes

The SCA framework provides an ScaTestExecutor utility class that manages the loading of test data and executes the corresponding test. The code for test methods, then, becomes very simple (see the sketch at the end of this section). The ScaTestExecutor constructor takes the test class and the name of the test as parameters. (The JUnit framework provides the getName() method to enable programmatic access to the test method name.) The ScaTestExecutor then applies naming conventions to determine the location of the configuration files; we will describe that convention next.

You might feel that the very repetitive nature of the test methods indicates that some refactoring should be possible. Indeed, the JUnit framework would let us create a test suite dynamically at run time, the suite being derived from the available test data.
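As a sketch of such a test method, before moving on: the ScaTestExecutor constructor signature is described in the text above, but the name of its execution method is an assumption.

public void testGoodPostCodeExternal() throws Exception {
    // The constructor takes the test class and the test name;
    // execute() is an assumed name for the method that loads the
    // .tspec data and runs the corresponding test.
    new ScaTestExecutor(this.getClass(), this.getName()).execute();
}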
In practice, though, we found that the Eclipse-JUnit integration did not work as well with such dynamically generated suites; we were not able to quickly re-run single failed tests. We therefore prefer to create the explicit test methods as shown. B. Create a directory hierarchy for the configuration files We anticipate that when testing a realistic module, with many service operations - and especially when testing BPEL processes - we will have many tests to manage. So, if we are to have many corresponding XML files, it is important to organise them in some manner. The ScaTestExecutor class implements a simple naming convention, specifying a directory hierarchy for organising the XML files. To construct this directory hierarchy, right-click then select New => Other => Simple => Folder (Figure 14). Figure 14. Test data hierarchy The elements of the hierarchy are: The test data directory hierarchy is rooted in the test class package directory, in our case: org.djna.mailservice.test. This enables the XML files to be loaded as resources by the test class classloader. There is one directory per test class, whose name is derived from the test class name by appending the suffix -Data. Within that directory, we have one directory per test method, whose name exactly matches the method name. Within the method directory we have test definitions. These are the XML files defining the tests to be executed, and they have the extension .tspec (described next). You may have several .tspec files for each test method; they will be executed in alphabetic order. C. Create a configuration file defining the test to be executed The test definitions are serialised instances of a business object. The ScaTestSpecification data type is delivered in the library LT_ScaTest, which you added to your workspace at the beginning of this exercise. Its structure is shown in Figure 15. Figure 15. Test specification data type You can specify the service partner and method to be invoked, the payload, and the expected outcome. Both the payload and expected outcome may be simple integers or strings, or can reference further XML files containing serialised data types. If your test specification needs are more sophisticated (perhaps you need additional primitive types, or you invoke services with multiple parameters), the ScaTestExecutor class can be extended; its source is included with this article. The .tspec files contain XML conforming to the ScaTestSpecification type. The specification corresponding to the PostCode test we wrote earlier (sketched at the end of this section) names the service partner and method, I_PostcodePartner and getDetailsForPostCode, and the input string BR1 0AB. The expected result, a data object with many fields, is specified in a separate file (discussed in the next section). Also notice that the test has a textual description; this will be written to the logs when the test executes, which can be helpful, in the event of errors occurring, to correlate between errors and a specific test. Our testBadPostCodeExternal can be specified in a similar way: we specify a non-existent postcode, and a specific fault should result. To create the .tspec files in the test data folders, right-click on the folder, then select New => Other => Simple => File and paste the contents shown. You may notice that the creation of this second test required only a simple copy and modification of the previous test.
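The article's .tspec listings are not reproduced here; what follows is only a hypothetical sketch of the good-postcode specification. Only the expectedResultFile element is confirmed by the text; the other element names and the namespace are invented to match the fields described above.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical element names; the authoritative schema is the
     ScaTestSpecification business object in the LT_ScaTest library -->
<scaTestSpecification xmlns="http://org.djna/scatest">
    <description>Good postcode returns full address details</description>
    <partnerReference>I_PostcodePartner</partnerReference>
    <method>getDetailsForPostCode</method>
    <inputString>BR1 0AB</inputString>
    <expectedResultFile>GoodPostCodeDetails.xml</expectedResultFile>
</scaTestSpecification>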
D. Create configuration files to describe the payload and the expected result The testGoodPostCodeExternal test requires the specification of the expected return data, a PostCodeDetails data type. The test specification includes the line: <expectedResultFile>GoodPostCodeDetails.xml</expectedResultFile> referencing the file GoodPostCodeDetails.xml, which needs to be placed in the same location as the test specification. Create the expected result file by right-clicking on the folder, then selecting New => Other => Simple => File, and pasting in the contents shown (a sketch of such a file appears at the end of this section). A technique for creating such files is described in the next section. With this file in place, we are now ready to execute our tests. When we changed our test code earlier, you may have noticed that the Web application restarted on the server, and the console showed a message such as: [22/05/06 12:40:35:533 BST] 00000051 ApplicationMg A WSVR0221I: Application started: MT_TestMailServiceApp Hence the new version of the application -- and its associated test data -- is automatically deployed to the server. There are several ways in which you can now launch the test: Right-click on the class, and select Run => JUnit Test. From the Run drop-down in the toolbar (Figure 16), select PostCodeTest. Figure 16. Test execution from toolbar From the JUnit view, select the Rerun Last Test icon. Figure 17 shows this icon, along with the results of the test execution. Figure 17. JUnit view, test execution and results E. Create data object XML files Data types for realistic business applications are often both large and complex, having many properties and many levels of nesting. Creating XML representations of these is tedious and error-prone. A utility that will write a sample XML file for any given data type is included with this article: com.ibm.issw.archive.ut.data.TestDataCreator. The utility is packaged as a test class and is invoked as a JUnit/Cactus test. You will find it in our example Project Interchange File in the MT_TestMailJUnitWeb project. To use this class with your own application, you will need to include your own data types on its classpath. That is most easily achieved by copying it into your own test project, as we have. If you want to experiment with the TestDataCreator, you can import our version of the MT_TestMailJUnitWeb from the interchange file supplied in the download file, and replace your own version. However, this will replace all of your work in this project, so back up anything you want to keep. To create a sample XML file, you need to modify the code to reference the namespace and data type you want to use (Figure 18). Figure 18. Data type name and namespace In our case, these are the namespace of our mail service module and the type PostCodeDetails, and we modify the TestDataCreator to reference these values. Next, execute this class as a JUnit test, remembering to set the Cactus argument on the run configuration. When TestDataCreator executes successfully, it will write a message to the application server console, indicating the path of the sample file that was created: [22/05/06 10:21:50:343 BST] 00000055 TestDataCreat I Written C:\IBM\WID601\pf\wps\PostCodeDetails.xml The contents of the generated file take the same form as the expected-result file sketched below. We can use such a file as the basis for our expected test data, and indeed this is how we created the sample in the previous section.
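As with the .tspec file, the following is only a hypothetical sketch of GoodPostCodeDetails.xml. The repeating addressList/address structure follows the XPath used in the earlier test; the remaining element names, values, and namespace are invented for illustration.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical serialised PostCodeDetails data object -->
<PostCodeDetails xmlns="http://MP_MailService/PostCodeDetails">
    <postCode>BR1 0AB</postCode>
    <addressList>
        <address>1 High Street</address>
    </addressList>
    <addressList>
        <address>Bromley, Kent</address>
    </addressList>
</PostCodeDetails>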
This article demonstrated how a set of tests for an SCA module can be defined and executed from simple XML definitions using a small framework. That framework and the ideas it represents are open to adaptation for more complex scenarios. Perhaps because we spelled out the specifics of test construction, the resulting exercise might seem rather lengthy, but in practice the amount of work to build and execute the tests is far from excessive. The tests described here were created in under 30 minutes, including taking screen shots as we progressed. Automated testing in this style is an important technique for developing high-quality reusable components, and I would strongly recommend that all serious development efforts using WebSphere Integration Developer define and adopt an approach of this kind. In a future article, I plan to explain how to apply these ideas to the testing of long-running business processes and other components with side-effects. - Part 2: Create repeatable unit tests for SCA modules that implement business processes - Part 3: Testing business processes that use human tasks Information about download methods - Service Component Architecture - Apache Jakarta Cactus framework - SCA application development - Building SOA solutions with the Service Component Architecture David Artus is a member of the IBM Software Services for WebSphere team, working out of the IBM Hursley Lab in the UK. He has provided WebSphere consultancy and mentoring since he joined IBM in 1999. Prior to joining IBM, David worked in a variety of industries, including investment banking, travel, and IT consultancy. His interests include the design of distributed systems, object technologies, and service-oriented architectures.
http://www.ibm.com/developerworks/websphere/techjournal//0608_artus/0608_artus.html
crawl-003
refinedweb
4,164
52.6
I'm trying to change this modified blackjack game so that there is no computer. A choice of either two or three players is asked for, with three cards dealt instead. How can I change this current code to 1) prompt the user for either 2 or 3 players (no computer), and 2) have a 2- or 3-player game that works? Any idea or input is welcomed, thanks!

from random import choice as rc

def total(hand):
    # how many aces in the hand
    aces = hand.count(11)
    # to complicate things a little the ace can be 11 or 1
    # this little while loop figures it out for you
    t = sum(hand)
    return t

# a suit of cards in blackjack assume the following values
cards = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]
total_deck = cards * 4

    if tp > 31:
        print "--> The player is busted!"
        pbust = True
        break
    elif tp == 31:

# 27 or 28
while True:
    tc = total(comp)
    if tc < 28:
        comp.append(rc(cards))
    else:
        break
print "the computer has %s for a total of %d" % (comp, tc)

# now figure out who won ...
if tc > 31:

print "Thanks for playing blackjack with the computer!"
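For the first part of the question, a sketch of how the player-count prompt and the three-card deal might look (Python 2, to match the print statements above; all names are illustrative and would still need wiring into the rest of the game):

def ask_player_count():
    # keep asking until the user types 2 or 3
    while True:
        choice = raw_input("How many players (2 or 3)? ")
        if choice in ("2", "3"):
            return int(choice)
        print "Please enter 2 or 3."

num_players = ask_player_count()

# deal three starting cards to each player instead of two
hands = [[rc(cards) for _ in range(3)] for _ in range(num_players)]

for i, hand in enumerate(hands):
    print "Player %d has %s for a total of %d" % (i + 1, hand, total(hand))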
https://www.daniweb.com/programming/software-development/threads/389217/modified-blackjack-game
CC-MAIN-2016-50
refinedweb
196
75.74
2008-05-11  Ulrich Mueller  <ulm@gentoo.org>

	* emacs.c: Handle gap between end of BSS and heap, backported
	from Emacs 22.  Original changes by Jan Djarv and Masatake YAMATO.
	This fixes dumping on Linux 2.6.25.
	(my_heap_start, heap_bss_diff, MAX_HEAP_BSS_DIFF): New variables
	and constant.
	(main): Calculate heap_bss_diff.  If we are dumping and the
	heap_bss_diff is greater than MAX_HEAP_BSS_DIFF, set PER_LINUX32
	and ADD_NO_RANDOMIZE and exec ourself again.
	* lastfile.c (my_endbss, my_endbss_static): New variables.
	* s-linux.h (HAVE_PERSONALITY_LINUX32): Define.

2007-01-30  Ulrich Mueller  <ulm@kph.uni-mainz.de>

	* x11term.c (internal_socket_read): Handle XK_BackSpace key.

#endif

#ifdef HAVE_PERSONALITY_LINUX32
#include <sys/personality.h>

#ifndef O_RDWR
#define O_RDWR 2

int xargc;
#endif /* HAVE_X_WINDOWS */

/* The address where the heap starts (from the first sbrk (0) call). */
static void *my_heap_start;

/* The gap between BSS end and heap start as far as we can tell. */
static unsigned long heap_bss_diff;

/* If the gap between BSS end and heap start is larger than this we try
   to work around it, and if that fails, output a warning in
   dump-emacs. */
#define MAX_HEAP_BSS_DIFF (1024*1024)

#ifdef USG_SHARED_LIBRARIES
/* If nonzero, this is the place to put the end of the writable segment
   at startup. */

int skip_args = 0;
extern int errno;
extern void malloc_warning ();
extern char *sbrk ();

  if (!initialized)
    {
      extern char my_endbss[];
      extern char *my_endbss_static;

      if (my_heap_start == 0)
        my_heap_start = sbrk (0);

      heap_bss_diff = (char *)my_heap_start
        - (my_endbss > my_endbss_static ? my_endbss : my_endbss_static);
    }

  /* See if there is a gap between the end of BSS and the heap.
     In that case, set personality and exec ourself again. */
  if (!initialized
      && strcmp (argv[argc-1], "dump") == 0
      && heap_bss_diff > MAX_HEAP_BSS_DIFF)
    if (! getenv ("EMACS_HEAP_EXEC"))
      {
        /* Set this so we only do this once. */
        putenv ("EMACS_HEAP_EXEC=true");

        /* A flag to turn off address randomization which is introduced
           in linux kernel shipped with fedora core 4 */
#define ADD_NO_RANDOMIZE 0x0040000
        personality (PER_LINUX32 | ADD_NO_RANDOMIZE);
#undef ADD_NO_RANDOMIZE

        execvp (argv[0], argv);

        /* If the exec fails, try to dump anyway. */
        perror ("execvp");
      }
#endif /* HAVE_PERSONALITY_LINUX32 */

/* Map in shared memory, if we are using that. */
#ifdef HAVE_SHM
  if (argc > 1 && !strcmp (argv[1], "-nl"))

char my_edata = 0;

/* Help unexec locate the end of the .bss area used by Emacs (which
   isn't always a separate section in NT executables). */
char my_endbss[1];

/* The Alpha MSVC linker globally segregates all static and public bss
   data, so we must take both into account to determine the true extent
   of the bss area used by Emacs. */
static char _my_endbss[1];
char * my_endbss_static = _my_endbss;

#define HAVE_SYS_SIGLIST          /* we have a (non-standard) sys_siglist */
#define SYS_SIGLIST_DECLARED
#define HAVE_GETWD                /* cure conflict with getcwd? */
#define HAVE_PERSONALITY_LINUX32  /* personality LINUX32 can be set */
#define NO_SIOCTL_H               /* don't have sioctl.h */
#define SYSV_SYSTEM_DIR           /* use dirent.h */
https://bugs.gentoo.org/attachment.cgi?id=152799&action=diff
CC-MAIN-2022-27
refinedweb
435
58.38
# What happens behind the scenes in C#: the basics of working with the stack

I propose to look at the internals behind the simple lines that initialize objects, call methods, and pass parameters. And, of course, we will use this information in practice: we will read the stack of the calling method.

### Disclaimer

Before proceeding with the story, I strongly recommend you read the first post about [StructLayout](https://habr.com/en/post/446478/); it contains an example that will be used in this article. All the low-level code shown behind the high-level one is presented for **debug** mode, because it shows the conceptual basis. JIT optimization is a separate big topic that will not be covered here. I would also like to warn that this article does not contain material that should be used in real projects.

### First — theory

Any code eventually becomes a set of machine commands. The most understandable representation of them is Assembly language instructions that directly correspond to one (or several) machine instructions.

![](https://habrastorage.org/r/w780q1/webt/ya/yv/k7/yayvk7f2o3tfr5flwaybim4u1m8.jpeg)

Before turning to a simple example, I propose to get acquainted with the stack. The **stack** is primarily a chunk of memory that is used, as a rule, to store various kinds of data (usually it can be called *temporary data*). It is also worth remembering that the stack grows towards smaller addresses. That is, the later an object is placed on the stack, the lower its address will be.

Now let's take a look at the next piece of code and its Assembly language counterpart (I've omitted some of the calls that are inherent in debug mode).

C#:

```
public class StubClass
{
    public static int StubMethod(int fromEcx, int fromEdx, int fromStack)
    {
        int local = 5;
        return local + fromEcx + fromEdx + fromStack;
    }

    public static void CallingMethod()
    {
        int local1 = 7, local2 = 8, local3 = 9;
        int result = StubMethod(local1, local2, local3);
    }
}
```

Asm:

```
StubClass.StubMethod(Int32, Int32, Int32)
    1: push ebp
    2: mov ebp, esp
    3: sub esp, 0x10
    4: mov [ebp-0x4], ecx
    5: mov [ebp-0x8], edx
    6: xor edx, edx
    7: mov [ebp-0xc], edx
    8: xor edx, edx
    9: mov [ebp-0x10], edx
    10: nop
    11: mov dword [ebp-0xc], 0x5
    12: mov eax, [ebp-0xc]
    13: add eax, [ebp-0x4]
    14: add eax, [ebp-0x8]
    15: add eax, [ebp+0x8]
    16: mov [ebp-0x10], eax
    17: mov eax, [ebp-0x10]
    18: mov esp, ebp
    19: pop ebp
    20: ret 0x4

StubClass.CallingMethod()
    1: push ebp
    2: mov ebp, esp
    3: sub esp, 0x14
    4: xor eax, eax
    5: mov [ebp-0x14], eax
    6: xor edx, edx
    7: mov [ebp-0xc], edx
    8: xor edx, edx
    9: mov [ebp-0x8], edx
    10: xor edx, edx
    11: mov [ebp-0x4], edx
    12: xor edx, edx
    13: mov [ebp-0x10], edx
    14: nop
    15: mov dword [ebp-0x4], 0x7
    16: mov dword [ebp-0x8], 0x8
    17: mov dword [ebp-0xc], 0x9
    18: push dword [ebp-0xc]
    19: mov ecx, [ebp-0x4]
    20: mov edx, [ebp-0x8]
    21: call StubClass.StubMethod(Int32, Int32, Int32)
    22: mov [ebp-0x14], eax
    23: mov eax, [ebp-0x14]
    24: mov [ebp-0x10], eax
    25: nop
    26: mov esp, ebp
    27: pop ebp
    28: ret
```

The first thing to notice is the **EBP** and **ESP** registers and the operations on them. A misconception that the **EBP** register is somehow related to the pointer to the top of the stack is common among my friends. I must say that it is not. The **ESP** register is responsible for pointing to the top of the stack. Correspondingly, with each **PUSH** instruction (putting a value on the top of the stack) the value of the **ESP** register is decremented (the stack grows towards smaller addresses), and with each **POP** instruction it is incremented.
Also, the **CALL** command pushes the return address onto the stack, thereby decrementing the value of the **ESP** register. In fact, the **ESP** register changes not only when these instructions are executed; for example, interrupt handling does the same kind of thing as the **CALL** instruction. We will consider *StubMethod()*. In the first line, the content of the **EBP** register is saved (it is pushed onto the stack). Before returning from the function, this value will be restored. The second line stores the current address of the top of the stack (the value of the **ESP** register is moved to **EBP**). Next, we move the top of the stack by as many positions as we need to store local variables and parameters (third line). This is something like memory allocation for all local needs: the **stack frame**. At the same time, the **EBP** register is the starting point in the context of the current call; addressing is based on this value. All of the above is called **the function prologue**. After that, variables on the stack are accessed via the stored **EBP** register, which points to the place where the variables of this method begin. Next comes the initialization of local variables. *Fastcall* reminder: in .NET, the *fastcall* calling convention is used. The calling convention governs the location and the order of the parameters passed to a function. The first and second parameters are passed via the **ECX** and **EDX** registers, respectively; the subsequent parameters are transmitted via the stack. (This is for 32-bit systems. In 64-bit systems, four parameters are passed through registers: **RCX**, **RDX**, **R8**, **R9**.) For non-static methods, the first parameter is implicit and contains the address of the instance on which the method is called (the this address). In lines 4 and 5, the parameters that were passed through the registers (the first two) are stored on the stack. Next comes clearing space on the stack for local variables (the *stack frame*) and initializing the local variables. It is worth mentioning that the result of the function is returned in the **EAX** register. In lines 12-16, the addition of the desired variables occurs. I draw your attention to line 15. Here we access a value at an address above the beginning of the current frame, that is, in the stack of the previous method. Before calling, the caller pushes a parameter onto the top of the stack; here we read it. The result of the addition is taken from the **EAX** register and placed on the stack. Since this is the return value of *StubMethod()*, it is placed into **EAX** again. Of course, such absurd instruction sequences are inherent only in debug mode, but they show exactly how our code looks without the smart optimizer that does the lion's share of the work. In lines 18 and 19, both the pointer to the top of the stack and the previous **EBP** (that of the calling method, saved when our method was entered) are restored. The last line is the return from the function. About the value 0x4 I will say a bit more later. Such a sequence of commands is called the function epilogue. Now let's take a look at *CallingMethod()*. Let's go straight to line 18. Here we put the third parameter on the top of the stack. Please note that we do this using the **PUSH** instruction, that is, the **ESP** value is decremented. The other two parameters are put into registers (*fastcall*). Next comes the *StubMethod()* call. Now let's recall the **RET 0x4** instruction. A natural question arises here: what is 0x4?
As I mentioned above, we pushed the parameters of the called function onto the stack, but now we do not need them any more. 0x4 indicates how many bytes need to be cleared from the stack after the function call. Since there was one stack parameter, we need to clear 4 bytes. Here is a rough image of the stack:

![](https://habrastorage.org/r/w1560/webt/vz/eo/vz/vzeovzr2rvkuetuzi4xyp4iuxye.png)

Thus, if we turn around and look at what lies on the stack right after the method call, the first thing we will see is the **EBP** that was pushed onto the stack (in fact, this happened in the first line of the current method). The next thing will be the return address. It determines where execution resumes after our function is finished (it is used by **RET**). And right after these fields we will see the parameters of the current function (starting from the third; the first two parameters are passed through registers). And behind them the stack of the calling method hides! The first and second fields mentioned before (**EBP** and the return address) explain the +0x8 offset when we access parameters. Correspondingly, the parameters must be at the top of the stack in a strictly defined order before the function call. Therefore, before calling the method, each parameter is pushed onto the stack. But what if the caller does not push them, and the function still tries to take them?

### Small example

So, all the above facts gave me an overwhelming desire to read the stack of the method that calls my method. The idea that I am only one position away from the third argument (it will be closest to the stack of the calling method), the cherished data that I want so much to get, did not let me sleep. Thus, to read the stack of the calling method, I need to climb a little further than the parameters. When referring to parameters, the calculation of the address of a particular parameter is based only on the assumption that the caller has pushed them all onto the stack. And the implicit passing of a parameter through **EDX** (for those who are interested: [previous article](https://habr.com/en/post/447254/)) makes me think that we can outsmart the compiler in some cases. The tool I used to do this is called StructLayoutAttribute (all its features are in [the first article](https://habr.com/en/post/446478/)). //One day I will learn a bit more than only this attribute, I promise

We use the same favorite trick with overlapped reference types. If the overlapping methods have a different number of parameters, the compiler does not push the required ones onto the stack (at least because it does not know which ones). However, the method that is actually called (the one at the same offset in the other type) turns to positive addresses relative to its stack frame, that is, to the places where it expects to find its parameters. But nobody passed any parameters, so the method starts reading the stack of the calling method. And the address of the object (with the Id property that is used in *WriteLine()*) sits exactly where the third parameter is expected.
**Code is in the spoiler**

```
using System;
using System.Runtime.InteropServices;

namespace Magic
{
    public class StubClass
    {
        public StubClass(int id)
        {
            Id = id;
        }

        public int Id;
    }

    [StructLayout(LayoutKind.Explicit)]
    public class CustomStructWithLayout
    {
        [FieldOffset(0)]
        public Test1 Test1;

        [FieldOffset(0)]
        public Test2 Test2;
    }

    public class Test1
    {
        public virtual void Useless(int skipFastcall1, int skipFastcall2, StubClass adressOnStack)
        {
            adressOnStack.Id = 189;
        }
    }

    public class Test2
    {
        public virtual int Useless()
        {
            return 888;
        }
    }

    class Program
    {
        static void Main()
        {
            Test2 objectWithLayout = new CustomStructWithLayout
            {
                Test2 = new Test2(),
                Test1 = new Test1()
            }.Test2;

            StubClass adressOnStack = new StubClass(3);
            objectWithLayout.Useless();
            Console.WriteLine($"MAGIC - {adressOnStack.Id}"); // MAGIC - 189
        }
    }
}
```

I will not give the assembly language code here; everything is pretty clear there, but if there are any questions, I will try to answer them in the comments. I understand perfectly that this example cannot be used in practice, but in my opinion, it can be very useful for understanding the general scheme of work.
https://habr.com/ru/post/447274/
null
null
1,928
69.01
Frequently Asked Questions

Programming

Can I pass a function as an argument to a jitted function?

As of Numba 0.39, you can, so long as the function argument has also been JIT-compiled:

@jit(nopython=True)
def f(g, x):
    return g(x) + g(-x)

result = f(jitted_g_function, 1)

However, dispatching with arguments that are functions has extra overhead. If this matters for your application, you can also use a factory function to capture the function argument in a closure:

def make_f(g):
    # Note: a new f() is created each time make_f() is called!
    @jit(nopython=True)
    def f(x):
        return g(x) + g(-x)
    return f

f = make_f(jitted_g_function)
result = f(1)

Improving the dispatch performance of functions in Numba is an ongoing task.

Numba doesn't seem to care when I modify a global variable

Numba considers global variables as compile-time constants. If you want your jitted function to update itself when you have modified a global variable's value, one solution is to recompile it using the recompile() method. This is a relatively slow operation, though, so you may instead decide to rearchitect your code and turn the global variable into a function argument.

Can I debug a jitted function?

Calling into pdb or other such high-level facilities is currently not supported from Numba-compiled code. However, you can temporarily disable compilation by setting the NUMBA_DISABLE_JIT environment variable.

How can I increase integer width?

By default, Numba will generally use machine integer width for integer variables. On a 32-bit machine, you may sometimes need the magnitude of 64-bit integers instead. You can simply initialize relevant variables as np.int64 (for example np.int64(0) instead of 0). It will propagate to all computations involving those variables.

How can I tell if parallel=True worked?

If the parallel=True transformations failed for a function decorated as such, a warning will be displayed. See also Diagnostics for information about parallel diagnostics.

Performance

Does Numba inline functions?

Numba gives enough information to LLVM so that functions short enough can be inlined. This only works in nopython mode.

Does Numba vectorize array computations (SIMD)?

Numba doesn't implement such optimizations by itself, but it lets LLVM apply them.

Why is my loop not vectorized?

Numba enables the loop-vectorize optimization in LLVM by default. While it is a powerful optimization, not all loops are applicable. Sometimes, loop-vectorization may fail due to subtle details like memory access pattern. To see additional diagnostic information from LLVM, add the following lines:

import llvmlite.binding as llvm
llvm.set_option('', '--debug-only=loop-vectorize')

This tells LLVM to print debug information from the loop-vectorize pass to stderr. Each function entry looks like:

LV: Checking a loop in "<low-level symbol name>" from <function name>
LV: Loop hints: force=? width=0 unroll=0
...
LV: Vectorization is possible but not beneficial.
LV: Interleaving is not beneficial.

Each function entry is separated by an empty line. The reason for rejecting the vectorization is usually at the end of the entry. In the example above, LLVM rejected the vectorization because doing so would not speed up the loop. In this case, it can be due to the memory access pattern. For instance, the array being looped over may not be in contiguous layout.
When the memory access pattern is non-trivial, such that LLVM cannot determine the accessed memory region, it may reject vectorization with the following message:

LV: Can't vectorize due to memory conflicts

Another common reason is:

LV: Not vectorizing: loop did not meet vectorization requirements.

In this case, vectorization is rejected because the vectorized code may behave differently. This is a case to try turning on fastmath=True to allow fastmath instructions.

Does Numba automatically parallelize code?

It can, in some cases:

- Ufuncs and gufuncs with the target="parallel" option will run on multiple threads.
- The parallel=True option to @jit will attempt to optimize array operations and run them in parallel. It also adds support for prange() to explicitly parallelize a loop.

You can also manually run computations on multiple threads yourself and use the nogil=True option (see releasing the GIL). Numba can also target parallel execution on GPU architectures using its CUDA and HSA backends.

Can Numba speed up short-running functions?

Not significantly. New users sometimes expect to JIT-compile such functions:

def f(x, y):
    return x + y

and get a significant speedup over the Python interpreter. But there isn't much Numba can improve here: most of the time is probably spent in CPython's function call mechanism, rather than the function itself. As a rule of thumb, if a function takes less than 10 µs to execute: leave it. The exception is that you should JIT-compile that function if it is called from another jitted function.

There is a delay when JIT-compiling a complicated function; how can I improve it?

Try to pass cache=True to the @jit decorator. It will keep the compiled version on disk for later use. A more radical alternative is ahead-of-time compilation.

GPU Programming

How do I work around the "CUDA initialized before forking" error?

On Linux, the multiprocessing module in the Python standard library defaults to using the fork method for creating new processes. Because of the way process forking duplicates state between the parent and child processes, CUDA will not work correctly in the child process if the CUDA runtime was initialized prior to the fork. Numba detects this and raises a CudaDriverError with the message CUDA initialized before forking.

One approach to avoid this error is to make all calls to numba.cuda functions inside the child processes or after the process pool is created. However, this is not always possible, as you might want to query the number of available GPUs before starting the process pool. In Python 3, you can change the process start method, as described in the multiprocessing documentation. Switching from fork to spawn or forkserver will avoid the CUDA initialization issue, although the child processes will not inherit any global variables from their parent.

Integration with other utilities

Can I "freeze" an application which uses Numba?

If you're using PyInstaller or a similar utility to freeze an application, you may encounter issues with llvmlite. llvmlite needs a non-Python DLL for its working, but it won't be automatically detected by freezing utilities. You have to inform the freezing utility of the DLL's location: it will usually be named llvmlite/binding/libllvmlite.so or llvmlite/binding/llvmlite.dll, depending on your system.

I get errors when running a script twice under Spyder

When you run a script in a console under Spyder, Spyder first tries to reload existing modules.
This doesn't work well with Numba, and can produce errors like TypeError: No matching definition for argument type(s). There is a fix in the Spyder preferences. Open the "Preferences" window, select "Console", then "Advanced Settings", click the "Set UMR excluded modules" button, and add numba inside the text box that pops up. To see the setting take effect, be sure to restart the IPython console or kernel.

Miscellaneous

Where does the project name "Numba" come from?

"Numba" is a combination of "NumPy" and "Mamba". Mambas are some of the fastest snakes in the world, and Numba makes your Python code fast.

How do I reference/cite/acknowledge Numba in other work?

For academic use, the best option is to cite our ACM Proceedings: Numba: a LLVM-based Python JIT compiler. You can also find the sources on github, including a pre-print pdf, in case you don't have access to the ACM site but would like to read the paper.
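As a small illustration of the global-variable answer above, here is a sketch of the freeze-at-compile-time behaviour and the recompile() workaround (the function and variable names are invented for the example):

from numba import njit

FACTOR = 2

@njit
def scale(x):
    # FACTOR is baked in as a constant at compile time
    return x * FACTOR

print(scale(10))   # 20 (compiles here, capturing FACTOR == 2)

FACTOR = 5
print(scale(10))   # still 20: the compiled code kept the old value

scale.recompile()  # recompile to pick up the new global value
print(scale(10))   # 50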
https://numba.readthedocs.io/en/stable/user/faq.html
CC-MAIN-2021-10
refinedweb
1,284
57.06
How to Capture all NSE Bhavcopy (EOD Stock Prices) Files into One File?

6 min read

Welcome Readers 🤩 to the second article in the series "NSE EOD Stock Prices". This article will take as input the NSE Bhavcopy files we downloaded for 2020; we will grab the data from all these files and convert it into one single file where the index is in time-series format. If you haven't already read the previous article, I would recommend you do so; otherwise, this article will not make sense. By the way, if you prefer watching videos instead of reading articles, we have a video on our youtube channel to cover this article. So, let's get on with it.

What does time-series format mean?

Let's look at the sample below. Usually, when you create an Excel spreadsheet for any data, you have a column like "Sr. No" which runs from 1 to n; in a time-series format, that column is "Date". So, right now, with the files we downloaded in the last article, "KOTARISUG" will have a line of data in each file for as long as the stock is traded/listed on the National Stock Exchange of India.

How do we get to one file for each stock in time-series format and then save that file? It is simple, but there are a couple of nuances to deal with; let's see them by loading just one sample file first. I will load cm01Apr2020bhav.csv for demonstration purposes. Once you know the nuances, we will do this for all the files in a loop and produce a final data file.

Data Cleaning using Python

We will be using the existing pandas, os, shutil, glob Python libraries, which come with the Anaconda environment, for this step.

import pandas as pd
import os, shutil, glob

Let's load the file into Python:

df = pd.read_csv('path/to/cm01Apr2020bhav.csv')
df.head()  # loading first 5 rows of the data

Let's see some more information about this dataset we have.

df.info()

The .info() function gives you really important information about the dataframe, like the data type of each column, the total no. of entries, and how much memory it is consuming in your RAM. You can read a nice article about different data types here. You will notice we have an additional column Unnamed: 13 which is just an empty column, so let's delete it.

if 'Unnamed: 13' in df.columns:
    df.drop(['Unnamed: 13'], axis=1, inplace=True)
    # if condition because in old files, you won't find this empty column

Let's look at all the final columns in our dataframe.

df.columns  # getting all columns in a list-like format

Whitespace, i.e., leading and trailing spaces, is very common, and you will come across it from time to time; it is best to get rid of it at the start, so you don't see any issues later on. The below image shows the unique entries in the SERIES column from another full_bhavcopy_file which has leading whitespace. The file we are dealing with does not have this problem, but let's still get rid of any whitespace in the columns and in the data in those columns, just in case it appears in the future and breaks our code.

Removing all Whitespaces in the DataFrame

df.columns = df.columns.str.replace(' ', '')

# We can only strip whitespace from 'object' dtype columns,
# hence the if condition below; otherwise you will get an error
df = df.apply(lambda x: x.str.strip() if x.dtype == 'object' else x)

Now, let's finally clean our dataframe for the below points:

- Convert the TIMESTAMP column to datetime64 dtype; right now it is in object format, which is not correct.
- Let's also filter our dataframe for the SERIES categories ['EQ', 'BE', 'SM'], as all other categories are not really important from a retail investor point of view.
- Finally, let's also set the TIMESTAMP column as our index, which turns this into time-series data.

# Converting date column to datetime dtype
df['TIMESTAMP'] = pd.to_datetime(df['TIMESTAMP'])

# Setting TIMESTAMP column as index
df.set_index(['TIMESTAMP'], inplace=True)

# Filtering only for EQ, BE & SM series
new_df = df[df['SERIES'].isin(['EQ', 'BE', 'SM'])]

# Grabbing the first 5 rows of the new_df
new_df.head()

Also, run new_df.info() to get additional details about the dataframe. You will notice the following things:

- The no. of entries has gone down because we filtered for particular series.
- The Unnamed: 13 column we deleted is not present.
- The memory usage is less because we now store fewer pieces of data.

Great stuff 🤟, now let's build a loop to do this for all the files we downloaded.

Building a Loop

Finally, it is time to build the final loop, which will do the data cleaning exercise we did above for each and every file, and then append each clean dataframe to an initially empty dataframe. But, as you will notice, the pd.read_csv() function expects you to give it the file path; let's quickly see how we can create a list of filepaths of the .csv files we have, using the glob library.

file_list = glob.glob('path/to/folder/where/files/saved/*.csv')
# Notice the * which acts as a wildcard.
# This will give you the path of all files with .csv extension in that folder

Ensure you do not have any other .csv files in that folder, as glob will pick up all of them and pass them to the loop, which can crash if it gets an incorrect file.

Final Loop Code

final_df = pd.DataFrame()  # empty dataframe

for csv_file in file_list:
    df = pd.read_csv(csv_file)
    csv_file_name = csv_file.split('\\')[7]
    print('Processing File : {}'.format(csv_file_name))
    df.columns = df.columns.str.replace(' ', '')
    df['TIMESTAMP'] = pd.to_datetime(df['TIMESTAMP'])
    df.set_index(['TIMESTAMP'], inplace=True)
    if 'Unnamed:13' in df.columns:
        df.drop(['Unnamed:13'], axis=1, inplace=True)
    df_trim = df.apply(lambda x: x.str.strip() if x.dtype == 'object' else x)
    new_df = df_trim[df_trim['SERIES'].isin(['EQ', 'BE', 'SM'])]
    final_df = final_df.append(new_df)

final_df.sort_index(inplace=True)  # to sort by dates

This shouldn't take very long to run, but once it's complete, call final_df.info() to see our final dataframe and final_df.tail() to see the last 5 rows.

final_df.info()
final_df.tail()

Finally, let's save these 428,000+ rows of data to a .csv file for now (we will discuss a more efficient way of saving this in future articles).

final_df.to_csv('path/to/folder/bhavcopy_2020_data.csv')

You will get a 38MB .csv file in the path you defined, which should look like the below. Now, if you filter for KOTARISUG in the Excel spreadsheet, you will see the time-series we showed you at the start.
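As a quick sanity check, here is a sketch of reloading the combined file and pulling out one symbol's time series in pandas. The column names below are the standard bhavcopy headers, but verify them against your own file:

df = pd.read_csv('path/to/folder/bhavcopy_2020_data.csv',
                 index_col='TIMESTAMP', parse_dates=True)

# pull one stock's full-year time series
kotari = df[df['SYMBOL'] == 'KOTARISUG']
print(kotari[['OPEN', 'HIGH', 'LOW', 'CLOSE']].head())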
You can also access the GitHub link here to view the whole code in one single file directly. That's it for today, folks; I hope you got a good insight into how to clean the data and why it's important. In the next article, we will discuss how to factor stock symbol changes into your dataframe and where to get that data from. Also, do not worry about how to do this every day; we will refactor all the code in the 6th article, on Object-Oriented Programming, which will make it all clear. Remember to subscribe to my newsletter 📬 to get regular updates. If you need any help with the above or need some more information, you can ping me on Twitter or Linkedin. If you have liked it up till now, consider buying me a coffee ☕ by clicking here or the button below. Did you find this article valuable? Support Trade With Python by becoming a sponsor. Any amount is appreciated!
https://tradewithpython.com/how-to-capture-all-nse-bhavcopy-eod-stock-prices-files-into-one-file
CC-MAIN-2022-21
refinedweb
1,308
73.68
OK, so I have a scene in Maya that contains a model and some animation information. What I would like to do is to get the vertex, normal, uv-coord, bone, and vertex weight data, and also possibly the animation information, although at the moment I am mostly just interested in replicating different frames of the animation in a static pose. I feel stupid, because after poring over the Maya 2012 Python documentation for a day or so, I still have hardly any idea of how to go about doing this. I realize that I have to get the appropriate nodes from the scene, so the ls() command does a nice job of getting me the node names. Then, I can use these names to iterate over the nodes to get the nodes of type 'mesh', which I am pretty sure are the ones containing the geometric information (at this point, I would be happy just to get triangle, normal, and texture coord information; bones and weights can come later). I gather that I should be using the Maya API to somehow coerce this node into some kind of instantiation of MFnMesh, but this is the point where my comprehension breaks down. I can't seem to get from node and name to actual geometry data. If anyone can point me to a discussion of how to access the data from a scene using Python, I would be grateful. Otherwise, it will have to be plan B, which involves a Glock 27 and someone cleaning up an incredible mess from the wall of my cubicle. Thanks, David

This is probably a little weird, replying to my own post, but I have made some progress on getting information out, so I thought I would throw some snippets up on the off-chance someone else would find them useful. Of course, if you are experienced with Maya Python, you realize by now that the problem I was having was that I was not properly accessing the nodes in the DAG. So, the solution was to turn the names of the nodes, which are quite easy to get using maya.cmds.ls(), into actual objects that you can query. This, unfortunately, involves quite a bit of casting and subcasting, and seems like an awful lot of work, but it is what it is. Let me just put up a couple of snippets that helped me a bunch. First, since I am working with names, I find that the easiest way to start the ball rolling is to use an MSelectionList. So, I have the following code:

import maya.standalone
maya.standalone.initialize(name='python')

import maya.cmds as cmds
import maya.OpenMaya as OpenMaya

# this will give us the names of all the nodes in the scene
all_nodes = cmds.ls()

# use an MSelectionList to drive our queries
s_list = OpenMaya.MSelectionList()

for node in all_nodes:
    # this gives us the node type
    node_type = cmds.nodeType(node)

    # we can now do some selection and processing
    if node_type == 'joint':
        print 'Take a hit off this joint, dude!'
        # ok, bad pun

    # this ensures that the list will only have the single member
    s_list.clear()
    # add our current node to the list
    s_list.add(node)

    # get the depend node
    depend_node = OpenMaya.MObject()
    s_list.getDependNode(0, depend_node)

    # let's say we are a mesh, let's get the vertex data
    if node_type != 'mesh':
        continue

    # select the mesh of interest
    cmds.select(node)

    # get the vertex data
    num_verts = cmds.polyEvaluate(vertex=True)
    vertex_data = [cmds.pointPosition("%s.vtx[%d]" % (node, i)) for i in range(num_verts)]

    # now barf it out to the screen
    print 'Vertices'
    print str(vertex_data)  # in [x, y, z] format

So, that is an extremely simpleminded way of getting some information out of the scene, just in case anyone else was as confused as I was. I may be adding some more snippets to this later.
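For anyone landing here with the original MFnMesh question, a sketch of the API route (Maya Python API 1.0; treat the exact calls as a starting point rather than gospel) might look like this:

import maya.OpenMaya as OpenMaya

def dump_mesh_points(node_name):
    # turn the name into a DAG path, then wrap it in MFnMesh
    s_list = OpenMaya.MSelectionList()
    s_list.add(node_name)
    dag_path = OpenMaya.MDagPath()
    s_list.getDagPath(0, dag_path)

    mesh_fn = OpenMaya.MFnMesh(dag_path)

    # pull every vertex position in world space in one call
    points = OpenMaya.MPointArray()
    mesh_fn.getPoints(points, OpenMaya.MSpace.kWorld)

    for i in range(points.length()):
        p = points[i]
        print '%f %f %f' % (p.x, p.y, p.z)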
http://area.autodesk.com/forum/autodesk-maya/python/getting-data-out-of-a-scene-file/page-last/
crawl-003
refinedweb
653
68.81
// Do I replace the IF statement I have, or add another? The goal is to just get the response to say Goodbye
// when I type Exit or exit. Right now it responds the same as when I type in any string.
// I read the input, do the output, but maybe only then test for "exit"?
// Would I be better off with a While loop vs a Do While Loop?
// TOTALLY Confused.

/** Homework assignment that prompts for user input */
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Homework1 {
    public static void main(String[] args) {
        BufferedReader dataIn = new BufferedReader(new InputStreamReader(System.in));
        String name = "";
        do {
            System.out.print("Hi, what's your name?");
            try {
                name = dataIn.readLine();
                if (!(name.compareToIgnoreCase("quit") == 0));
                    System.out.println(name + " it's so nice to meet you");
            } catch (IOException e) {
                System.out.println("error");
            }
        } while (!(name.compareToIgnoreCase("exit") == 0));
        System.out.println("Goodbye!");
    }
}

This post has been edited by binarimon: 19 January 2009 - 05:09 PM
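For what it's worth, two things stand out in the code above: the stray semicolon after the if makes the greeting run unconditionally, and the if tests "quit" while the loop tests "exit". A sketch of one way to restructure it (read the input first, test for exit, then greet), so that typing Exit or exit goes straight to "Goodbye!":

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Homework1 {
    public static void main(String[] args) {
        BufferedReader dataIn = new BufferedReader(new InputStreamReader(System.in));
        String name = "";
        while (true) {
            System.out.print("Hi, what's your name? ");
            try {
                name = dataIn.readLine();
            } catch (IOException e) {
                System.out.println("error");
                continue;
            }
            if (name.equalsIgnoreCase("exit")) {
                break;  // leave the loop without greeting "exit"
            }
            System.out.println(name + " it's so nice to meet you");
        }
        System.out.println("Goodbye!");
    }
}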
http://www.dreamincode.net/forums/topic/81703-making-an-if-statement-work-in-a-while-loop/
CC-MAIN-2013-20
refinedweb
166
53.47
Some like it loud …

Posted by Paul in code, The Echo Nest, web services on October 6, 2010.

For example, here's a bit of Python that will show you the loudest songs for an artist:

from pyechonest import song as songAPI
from pyechonest import artist as artistAPI

def find_loudest_songs(artist_name):
    artists = artistAPI.search(artist_name, results=1)
    if artists:
        songs = songAPI.search(artist_id=artists[0].id, sort='loudness-desc')
        for song in songs:
            print song.get_audio_summary().loudness, song.title

Here are the loudest songs for some sample artists:

- The Beatles: Helter Skelter, Sgt. Pepper's Lonely Hearts Club Band
- Metallica: Cyanide, All Nightmare Long
- The White Stripes: Broken Bricks, Fell in Love with a Girl
- Led Zeppelin: Rock and Roll, Black Dog

We can easily change the code to help us find the softest songs for an artist, or the fastest, or the shortest. Some more examples:

- Shortest Beatles song: Her Majesty, at 23.2 seconds
- Longest Beatles song: Revolution #9, at 8:35
- Slowest Beatles song: Julia, at 57 BPM
- Softest Beatles song: Julia, at -27 dB (Blackbird is at -25 dB)

I think it is interesting to find the outliers. For instance, here's the softest song by Muse (which is usually a very loud artist): Bedroom Acoustics by Muse.

We can combine these attributes too, so we can find the fastest loud Beatles song (I Feel Fine, at -7.5 dB and 180 BPM) or the slowest loud Beatles song (Don't Let Me Down, at -6.6 dB and 65 BPM).

The search songs API is a good example of the power of the Echo Nest platform. We have data on millions of songs that you can use to answer questions about music that have traditionally been very hard to answer.
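For instance, flipping the sort order gives the softest songs instead. A sketch, assuming the same pyechonest API as above and 'loudness-asc' as the ascending sort key:

def find_softest_songs(artist_name):
    artists = artistAPI.search(artist_name, results=1)
    if artists:
        songs = songAPI.search(artist_id=artists[0].id, sort='loudness-asc')
        for song in songs:
            print song.get_audio_summary().loudness, song.title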
https://musicmachinery.com/tag/songs/
CC-MAIN-2020-40
refinedweb
296
65.15
What is Cordova id? Where can I get it?

λ quasar init quasar_demo
Running command: vue init 'quasarframework/quasar-starter-kit' quasar_demo
? Project name (internal usage for dev) quasar_demo
? Project product name (official name) Quasar App
? Project description A Quasar Framework app
? Author
? Check the features needed for your project: ESLint, Vuex, Axios, Vue-i18n, IE11 support
? Pick an ESLint preset Standard
? Cordova id (disregard if not building mobile apps) org.cordova.quasar.app

s.molinari: Hi @kanjiushi. I think I answered this on Discord. The Cordova id is a reverse domain you create yourself, or rather, one based on where you'll put your app later. Something like "org.my-cool-site.app". Scott

wishinghand: The way it used to be was a namespace made of your company name and app name. So if my company's name was Acme and the app is "Delivery Box", I'd probably enter com.acme.delivery-box. Digging around in the documentation I found this, on the cordova create page:

cordova create myapp com.mycompany.myteam.myapp MyApp
https://forum.quasar-framework.org/topic/2666/what-is-cordova-id-where-can-i-get-it
CC-MAIN-2021-39
refinedweb
179
70.6
We will be solving for the Length of the Longest Increasing Subsequence (LIS) in Python. Let's take an example first:

0 7 5 11 3 10 6 14 1 9 5 13 3 11 7 15

Here the length of the longest increasing subsequence is 6: [0, 3, 6, 9, 11, 15]. Now we will solve this problem using a dynamic programming solution. Let's first look at the code, and then we will discuss what it is doing.

def longest_increasing_subsequence(numbers):
    # one slot per input number, each starting at 1
    temp_list = [1 for x in range(0, len(numbers))]
    i, j = 1, 0
    while i < len(numbers) and j < len(numbers):
        if numbers[j] < numbers[i]:
            if temp_list[j] + 1 > temp_list[i]:
                temp_list[i] = temp_list[j] + 1
        j = j + 1
        if j == i:
            j, i = 0, i + 1
    return max(temp_list)

if __name__ == '__main__':
    n = int(input())
    numbers = []
    for i in range(0, n):
        numbers.append(int(input()))
    print longest_increasing_subsequence(numbers)

How to execute the code?

Make sure you have Python installed. Open a terminal and run:

python filename.py

And then give input as below:

16  // Number of elements to be entered
0   // list of numbers
8
4
12
2
10
6
14
1
9
5
13
3
11
7
15

The output will be like this:

6  // Length of Longest Increasing Subsequence (LIS)

What algorithm did we use here?

We used dynamic programming for this problem. Let's find the overlapping subproblems, the ones that a naive recursion would solve again and again. Take the example of a list with length 3. Below is the tree of subproblems that we would be solving:

        lis(3)
       /      \
  lis(2)     lis(1)
    /
 lis(1)

We can see there is a recurring subproblem, lis(1). We can save its result and reuse it instead of calculating it every time. Let's see how we implemented it.

Solution Explanation

We will keep a list of the same length as the list that is provided, say n = 5, and fill it with ones:

temp_list = [1, 1, 1, 1, 1]

Now we will start to loop through the given numbers with two of our variables, i = 1 and j = 0. If numbers[i] is greater than numbers[j], and the value at temp_list[j] + 1 is greater than temp_list[i], we will change the value of temp_list[i] to temp_list[j] + 1. What this says is that if there is a number at the ith position which is greater than the number at the jth position, then this number can extend the increasing subsequence ending at j, so we take the length stored at the jth position, increase it by one, and place it at the ith position. We only do so if the value at the jth position + 1 is greater than the value at the ith position, to make sure that position i always holds the length of the longest subsequence ending there. In this way, at each location in our temp_list we get the length of the longest increasing subsequence that ends with the element at that position. Now, to get the longest overall, we just have to find the maximum of temp_list.

If you like the article please share and subscribe to keep us motivated. You can find more algorithms in Python here: Algorithms: Coding a linked list in python; Level order traversal of a binary tree in python.
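As a postscript: the loop above is O(n²). For larger inputs, the patience-sorting approach brings this down to O(n log n); here is a sketch using Python's bisect module (it tracks smallest possible tail values rather than a per-position table):

import bisect

def lis_length_fast(numbers):
    # tails[k] holds the smallest possible tail value of an
    # increasing subsequence of length k + 1 seen so far
    tails = []
    for x in numbers:
        pos = bisect.bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)   # x extends the longest subsequence found
        else:
            tails[pos] = x    # x gives a smaller tail for length pos + 1
    return len(tails)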
https://www.learnsteps.com/length-longest-increasing-subsequence-lis-in-python/
CC-MAIN-2019-51
refinedweb
574
70.33
Hi, have a look at the program given below.

public class Test {
    public static void main(String argv[]) {
        Test refTest = new Test();
        refTest.method(null);
    }

    public void method(String aStrTest) {
        System.out.println("Hello from method(String aStrTest)");
    }

    public void method(Object aObjTest) {
        System.out.println("Hello from method(Object aObjTest)");
    }
}

The code compiles fine and gives the output

Hello from method(String aStrTest)

at runtime. My doubt is: why is method(String aStrTest) called over method(Object aObjTest)? Both are basically Objects, so why does no ambiguity arise? Someone please explain this to me. Regards, Abhilash

Java Objects (2 messages)
- Posted by: Abhilash John
- Posted on: February 06 2003 08:46 EST

Reply by Gal Binyamini on February 06 2003 09:01 EST:

The two methods are both applicable and accessible. The String version is the more specific match, so it is used. If you had:

method(String s, Object o) { ... }
method(Object o, String s) { ... }

public static void main(String[] args) {
    method("foo", "bar");
}

then there would be no "most specific match", so a compile-time error would arise. Gal

Reply by Pranav Soni on February 06 2003 13:29 EST:

The most specific method is chosen when more than one method declaration is accessible and applicable. A method is considered more specific than another if any invocation handled by it could be passed to the other without a compile-time type error. Here, both

public void method(String s) { ... }
public void method(Object o) { ... }

are accessible and applicable. Every invocation handled by method(String s) could be passed to method(Object o) without a compile-time type error, but the reverse is not true. So method(String s) is the most specific and is called. Pranav
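A compilable sketch of the ambiguous case Gal describes; uncommenting the first call makes javac report that the reference to method is ambiguous:

public class Ambiguous {
    static void method(String s, Object o) {
        System.out.println("method(String, Object)");
    }

    static void method(Object o, String s) {
        System.out.println("method(Object, String)");
    }

    public static void main(String[] args) {
        // method("foo", "bar");        // error: reference to method is ambiguous
        method("foo", (Object) "bar");  // the cast leaves only one applicable overload
    }
}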
http://www.theserverside.com/discussions/thread.tss?thread_id=17801
CC-MAIN-2014-52
refinedweb
334
56.15
US7010534B2 - System and method for conducting adaptive search using a peer-to-peer network

Publication number: US7010534B2 (application US10298967). Authority: US. Grant status: Grant.

The present invention generally relates to the field of electronic commerce (or e-commerce) over a network such as a peer-to-peer network. More particularly, this invention pertains to a system and associated method for creating an active marketplace with real-time price comparisons within the peer-to-peer network. Specifically, this invention provides a mechanism whereby messages from nodes in the peer-to-peer network can be adaptively modified (or updated), returned to the originator, and transmitted to other nodes in the network. The World Wide Web (WWW or the Web) results of the search on the WWW. An outgrowth of the popularity of Internet or on-line shopping is the advent of on-line comparison shopping engines. Price comparison tools, often promoted by Web portals such as Yahoo!®, AltaVista®, Shopping.com®, or shopping services such as Bluefly® or MySimon.com®, are essentially web search engines that allow a user to search a population of web merchants for the lowest price for a desired item. These search engines allow a shopper to enter a key word that is usually a description of the desired item. In response to the shopper's query, the search engines return a set of corresponding Web-based matches listing the merchants or merchants' Web sites that offer the desired item. Typically, the user must undertake these searches on an item-by-item basis. The search is performed against a set of retailers determined by the search engine owner. The population of merchants searched may be open-ended, as in the case of search engines that use agents or "bots" to scan the Web for such items, or closed, as in search engines that search only across a group of subscribed merchants. To create a database of items and their prices, the price crawlers typically go to each merchant's web site, extract the price information from that web site, and create a database of items, prices, and other supporting information. However, it is difficult to acquire the price data from the merchant's web site. Technologies exist that prevent a price crawler or other such service from extracting any information from a Web site. Price information obtained over the Web can be incomplete, inaccurate, or out of date. In addition, the centralized approach for price comparison used by current price comparison web sites could be unduly manipulated by merchants. In addition, current comparison shopping solutions rely on price crawlers capturing information from the merchant. There is currently no mechanism for allowing merchants and customers to interact in a marketplace format, in that the current comparison shopping solutions available to the customers are limited. What is therefore needed is a system and associated method for direct communication between buyers and sellers, allowing a free marketplace interaction. The need for such a system and method has heretofore remained unsatisfied.
The present invention satisfies this need, and presents a system, a computer program product, and an associated method (collectively referred to herein as "the system" or "the present system") for conducting an adaptive search using a peer-to-peer network. In a preferred embodiment, the present system uses a peer-to-peer network to conduct distributed comparison shopping. The present system is based on a decentralized, distributed architecture utilizing a peer-to-peer network, creating an active marketplace with real-time price (features or criteria) comparisons. Standard peer-to-peer infrastructures such as Gnutella, Freenet, or Sun Microsystems JXTA® can be used to implement the present system. Sellers either enter price information for products or services in a graphical user interface with electronic forms or use a gateway to provide access to an existing product/price database. The peer-to-peer node coordinates connectivity with other peers, building a dynamic network. A user/buyer can enter specific search requests using complex search criteria based on XML. Each node on the peer-to-peer network can participate simultaneously in selling and buying activities. The search request is broadcast from node to node by the peer-to-peer network. The present system uses an adaptive search approach. A starting node is the origination of the search request. Messages are described, for exemplification purposes, in XML, using "channels," e.g., using XML namespaces. Each message can include, for example, a subject component (or section) and an adaptive update component (or section). In one embodiment, the subject component is preferably fixed by the user and does not change. The subject component can include an identifier, such as a product or service identification that uniquely identifies the product or service of interest. The adaptive update component can be adaptively changed, either in part or in whole, as the message propagates or travels through the network. In a preferred embodiment, the adaptive update component can comprise any one or more of a search criterion (or criteria) and a search status field (or fields). It should be clear that the message may also contain other fields or information of interest to the user and/or merchants, and required by the network. An aspect of the present system is that the adaptive update component of the message changes (or is updated) while traveling in the peer-to-peer network. Nodes that receive a search request will interpret the search criteria and apply those criteria to a local search. If no results are found by the node, the node stops the search, and forwards the message, unchanged, to the next node or nodes in the peer-to-peer network. Otherwise, if the node finds one or more results satisfying the search criteria, the node can take, for example, one of two actions, as determined by the user and set as instructions in the message. According to a first embodiment, the node updates the adaptive update component of the message, resulting in a modified message. The node then forwards the modified message to the next node or nodes in the peer-to-peer network. For example, the merchant responds to the buyer with a lower price or better shipping terms. This new information is encoded in the original search request, which reflects the dynamically changing nature of the adaptive search.
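To make the message structure concrete, a hypothetical XML search message following the description above might look like this (all element and namespace names are invented; the patent does not prescribe a schema):

<searchRequest xmlns="urn:p2p-market:book-channel">
    <!-- subject component: fixed by the originator, never modified -->
    <subject>
        <productId>ISBN-0-123456-78-9</productId>
    </subject>
    <!-- adaptive update component: rewritten as the message travels -->
    <adaptiveUpdate>
        <criteria>
            <currentMinimumPrice currency="USD">24.95</currentMinimumPrice>
        </criteria>
        <status>
            <bestOfferNode>node-A</bestOfferNode>
            <bestOfferUrl>http://merchant.example/book</bestOfferUrl>
        </status>
    </adaptiveUpdate>
</searchRequest>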
According to a second embodiment, the node returns the response back to the source or originating node, requesting confirmation or authorization to update the message. If the authorization request is approved by the source node, the node updates the message and forwards the modified message to the next node or nodes in the peer-to-peer network.

Additional optimizations for query routing are possible. The use of channels for communication between nodes provides rich expressiveness in queries because the underlying format is XML. Digital signatures may be used to verify data integrity properties. The present system provides a marketplace for merchants and customers that does not require price crawlers. Because the connection between merchant and customer is "real time," the information provided to the customer is current. The present system has unlimited scalability; millions of nodes can be supported concurrently. Users can buy and sell products or services simultaneously. The present system is readily integrated into the existing Internet infrastructure.

For example, a user other than a merchant wishes to sell an item such as a book. The user chooses a shopping channel. Once the information is entered, it is available for the adaptive search of the present invention. A merchant may provide products or services by providing a gateway to their legacy product database. This makes the information in the database available to the peer-to-peer network. The gateway performs the transcoding work required to communicate with other nodes in the network.

To purchase a product, such as a book, the user enters a specific search request within a graphical user interface using a "book channel." The present system searches for the lowest available price for that item by transmitting the request to its neighborhood nodes on the peer-to-peer network. The nodes that wish to respond return the request with their offer and a URL to the product site.

The following definitions are used throughout:

Channel: Communications category within the peer-to-peer network. Nodes can form their own channels that they then broadcast to other nodes. These other nodes might or might not adopt this new channel.

Node: A processing location in a network. In a peer-to-peer network, a node can be a computer, server, or a gateway.

Peer-to-peer Architecture: A type of network in which each workstation has equivalent capabilities and responsibilities. This differs from client/server architectures, in which some computers are dedicated to serving the others. Peer-to-peer networks are generally simpler, but they usually do not offer the same performance under heavy loads.

The cloud-like peer-to-peer network 20 is comprised of communication lines and switches connecting servers such as servers 25, 30, to gateways such as gateway 35. The servers 25, 30 and the gateway 35 provide the communication access to the WWW or Internet. Users, such as remote Internet users, are represented by a variety of computers such as computers 40, 45, 50, and can query the host server 15 for desired information through the peer-to-peer network 20. Computers 40, 45, 50 each include software that will allow the user to browse the Internet and interface securely with the host server 15. The host server 15 is connected to the peer-to-peer network 20 via a communications link 55 such as a telephone, cable, or satellite link. The servers 25, 30 can be connected via high-speed Internet network lines 60, 65 to other computers and gateways. System 10 could use the Internet for communication between computers and servers.
Rather than using a server-client approach as used in the Internet, the peer-to-peer network 20 uses nodes. Each node can operate either as a server or as a client, publishing or receiving information. The host server 15 and computers 40, 45, 50 can be viewed as nodes in the peer-to-peer network 20. The high-level architecture for system 10 is shown in the accompanying figures.

The request preprocessor 205 verifies the integrity of the message at block 315 by, for example, validating the contents and the electronic signatures. If method 300 determines at decision block 320 that the message is invalid, system 10 forwards it to the next node in the network 20 (block 325). Otherwise, system 10 proceeds to block 330, forwarding the message to the main decision logic 210. The main decision logic 210 retrieves the subject ID (e.g., product and/or service identification) and search criteria from the message at block 335, then forwards the subject ID and search criteria to the query engine 215 at block 340. At block 345, the query engine 215 formulates the query using the subject ID and search criteria, then queries the local database 230. The local database 230 returns the query results back to the query engine 215 at block 350, which, in turn, forwards the query results to the main decision logic 210 at block 355.

The main decision logic 210 compares the query results with the search criteria at decision block 360. If the search criteria are met, i.e., the merchant has the item and can meet the price presented in the message, the node A, 406, can take, for example, one of two actions, as determined by the user and set as instructions in the message.

According to a first embodiment, the node updates the message as described earlier; the request forwarder 225 then sends the modified message to the peer-to-peer communication core 235 at block 368, which, in turn, forwards the modified message to the next node or nodes in the peer-to-peer network 20 at block 369. For example, the merchant responds to the buyer with a lower price or better shipping terms. This new information is encoded in the original search request, which reflects the dynamically changing nature of the adaptive search.

According to another embodiment of the present invention, the node requests authorization from the source node before updating the message. If method 300 determines at decision block 373 that the authorization request has been approved by the source node, such as if the source node returns the authorization to the main decision logic 210, via the request pre-processor 205, at block 374, method 300 proceeds to block 365 and repeats the steps at blocks 366, 367, 368, and 369, as described earlier, forwarding the modified message through the network. If, however, method 300 determines at decision block 373 that the source node did not grant the request authorization, the source node B, 408, sends an instruction to node A, 406, to either (1) forward the unmodified message to succeeding nodes in the network 20, or (2) not forward the message to any other node in the peer-to-peer network 20.

One function of the updater 220 is to negotiate a modified message from the search result and the original message. Three exemplary responses are possible. First, the merchant can provide the item for less than the current minimum. In which event, the main decision logic 210 instructs the updater 220 to modify the message, to replace the current minimum with the new minimum available from the merchant, and to update the status field of the message. Second, the merchant can provide the item for the same value as the current minimum.
In which event, the main decision logic 210 instructs the updater 220 to update the status part of the message. Third, the merchant cannot match or beat the price value in the message, but can match one or more other criteria in the message, such as the shipping time, etc. In which event, the main decision logic 210 may instruct the updater 220 to modify the search criteria portion of the message, resulting in a modified message.

An example further illustrates the operation of system 10. In this example, node B, 408, is the source node, and wishes to request quotes for an item (represented by the letter "X") such as a book, and sets the price limit for that book at $20. System 10 creates the request as a structured query, shown as original message 418. Message 418 and subsequent modified (or updated) messages preferably comprise two components: a fixed component 505 and an adaptive update component 510. In turn, the fixed component 505 comprises a subject identification (ID) 515, which is a product or service identification encoded in XML. The adaptive update component 510 comprises a search criteria field (or fields) 520, encoded in a Boolean expression query language, and a search status field 525 that contains metadata collected as the message travels throughout the peer-to-peer network 20. The product or service identification may be very specific, e.g., "book; ISBN #1123413". Exemplary search criteria include price limits and delivery date limits. Message 418 comprises the structured message "X" and the criteria limit "20".

The search status field 525 monitors the number of modifications the message receives, and includes values such as the number of nodes traveled by the message, a time stamp, etc. The search status field 525 is a bookkeeping value, and is not part of the search criteria. However, the search criteria 520 of the message can be formulated to include the search status. For example, the user at node B, 408, may limit the travel time of message 418 through the network 20 to a few hours, such as 4 hours. In which case, system 10 (at each node) will not rebroadcast the message after the time limit expires.

System 10 at node A, 406, determines whether the merchant at node A, 406, has the product that the source node B, 408, is requesting by querying a local database 230 (or any other suitable database to which node A has access) at node A, 406 (block 345). If the merchant at node A, 406, has the product, system 10 at node A, 406, determines whether the search criteria goal of message 418 can be met. If not, node A, 406, forwards message 418 to one or more nodes in neighborhood 402. If node A, 406, can satisfy the criteria of message 418, node A, 406, modifies the search criteria 520 and/or the search status 525, as described earlier, resulting in modified message 555 that contains a modified search criteria component 520′ and/or a modified search status component 525′.

A feature of system 10 is the ability to change the criteria goal of message 418 to reflect the new criteria 520. For example, the price at node A, 406, for the product requested by node B, 408, is $18. System 10 at node A, 406, changes the price criteria of message 418 to $18, as shown by the modified message 555. Node A, 406, then broadcasts (or rebroadcasts) the modified message 555 to node D, 412, via path 424, node C, 410, via path 426, and back to node B, 408, via path 428.
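For illustration, here is a sketch of what original message 418 and modified message 555 could look like as XML. The patent specifies only the two-component structure (a fixed subject ID, search criteria in a Boolean expression language, and a status field); the element names, the namespace, and the particular status metadata shown here are invented for this example:

    <msg:message xmlns:msg="urn:example:book-channel">
      <msg:subject>
        <msg:id>book; ISBN #1123413</msg:id>   <!-- fixed component 505/515: never changes -->
      </msg:subject>
      <msg:adaptiveUpdate>
        <msg:criteria>price &lt;= 20</msg:criteria>                <!-- field 520, message 418 -->
        <msg:status nodesTraveled="0" modifications="0"/>          <!-- field 525 -->
      </msg:adaptiveUpdate>
    </msg:message>

After node A matches at $18, only the adaptive update component changes, giving modified message 555:

      <msg:adaptiveUpdate>
        <msg:criteria>price &lt;= 18</msg:criteria>
        <msg:status nodesTraveled="1" modifications="1"/>
      </msg:adaptiveUpdate>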
Node D, 412, searches its local database for the product and price in modified message 555. Node D, 412, finds that it has the product, but the price is $24. However, the merchant at node D, 412, may be able to match or beat some other criteria such as shipping time or shipping cost. Node D, 412, then changes the modified message 555, creating another modified message 430. Node D, 412, returns the modified message 430 to node B, 408, via path 432 and forwards modified message 430 to other nodes in its neighborhood, as indicated by path 434.

Node C, 410, also searches its local database for the product and price in modified message 555. The merchant at node C, 410, can match the price in modified message 555. Node C, 410, then sends a modified message 436 to node B, 408, via path 438, matching the search criteria of the modified message 555. Node C, 410, also sends the modified message 436 to node E, 414, via path 440 in neighborhood 404. Node E, 414, forwards the modified message 436 to node F, 416, via path 442. Node F, 416, can route a response back to node B, 408, through node C, 410, via path 444 and path 438 if the merchant at node F, 416, can meet the criteria of modified message 436.

Node B, 408, is waiting for incoming search results. These incoming messages could take one of three modified message forms. First, the originator of the modified message may offer the product for more than the current minimum (node D, 412). Node B, 408, would update the search status component 525 of the modified message and replace it with the current minimum, then return the modified message to the originator of the modified message. Second, the originator of the modified message offers the product for the same price as the current message (i.e., node C, 410). Node B, 408, would update the status part of the incoming message and replace it with the current minimum. Node B, 408, would then add the merchant at that node (node C, 410) to the response list in the local database 230 at node B, 408. Third, the originator of the modified message offers the product for less than the current minimum (node A, 406). Node B, 408, updates the search status component of the obtained message, replaces it with the current minimum, and adds the seller to the list in the local database 230 at node B, 408.

The user at node B, 408, now has quotes from two merchants stored in the local database 230: the merchant at node A, 406, for $18 and the merchant at node C, 410, for $18. In addition, the original message 418 is stored in the local database for reference to incoming quotes. The user may now select either offer by using the URL included in the message to contact the merchant.

In another embodiment, node C, 410, does not change the message but matches the search criteria. According to one embodiment, node C, 410, sends an authorization request to modify the message, informing node B, 408, that node C, 410, can offer the best price for the item. Node B, 408, then decides whether or not to accept node C's offer, as explained earlier. The user at node B, 408, may investigate the credibility of the merchant at node C, 410, and find that the merchant at node C, 410, has a reputation for poor service or unethical business practices, etc. The user at node B, 408, may then refuse to allow node C, 410, to update the message. Otherwise, the user at node B, 408, chooses to accept the update from the merchant at node C, 410, and returns the appropriate authorization to node C, 410.
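The following is a compact, runnable sketch of the receiving-node decision logic described above (blocks 315-369), written in Python for illustration. The patent names the components (request preprocessor, main decision logic, query engine, updater, request forwarder) but does not prescribe an implementation, so the types, field names, and the single price-limit criterion here are all invented for the example; it shows only the first embodiment (update and forward):

    from dataclasses import dataclass

    # Minimal, illustrative message shape: one fixed subject ID, one price
    # criterion, and two status counters standing in for field 525.
    @dataclass
    class Message:
        subject_id: str       # fixed component: product/service ID
        max_price: float      # search criteria: current minimum price
        hops: int = 0         # search status metadata
        modifications: int = 0

    def handle_message(msg, local_prices, forward):
        """One node's decision logic, first embodiment: update and forward."""
        price = local_prices.get(msg.subject_id)   # query engine + local database 230
        msg.hops += 1
        if price is None or price > msg.max_price: # criteria not met locally
            forward(msg)                           # forward unchanged (block 325)
            return
        msg.max_price = price                      # updater 220: new current minimum
        msg.modifications += 1
        forward(msg)                               # request forwarder 225 (blocks 368-369)

    # Example: node A holds book X at $18 against a $20 limit.
    msg = Message(subject_id="book;ISBN#1123413", max_price=20.0)
    handle_message(msg, {"book;ISBN#1123413": 18.0}, forward=lambda m: print("forwarding", m))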
It is to be understood that the specific embodiments of the invention that have been described are merely illustrative of certain applications of the principles of the present invention. Numerous modifications may be made to the system and method described herein for conducting an adaptive search using a peer-to-peer network to accommodate distributed comparison shopping, without departing from the spirit and scope of the present invention.
https://patents.google.com/patent/US7010534B2/en
This article covers the exact steps that you can follow to upload files to a Django server. Most web applications and websites allow users to upload their profile pictures, or files from their local computers, to the server. We'll replicate the same in this tutorial. Let's find out how to upload and handle files and images on the webserver using Django and ModelForms.

Uploading Files to Django

Let's get right down to what we need to allow file uploads in Django.

1. Prerequisite Knowledge

In the last article on Django Forms, we have seen that to get the form data, we use request.POST in the Form object. But to upload files to Django, we need to include another attribute, request.FILES, as well, because the uploaded files are stored in the attribute request.FILES instead of request.POST. Here's what the code will look like:

form = ReviewForm(request.POST, request.FILES)

Django has separate model fields to handle the different file types – ImageField and FileField. We use the ImageField when we want to upload only image files (.jpg/.jpeg/.png etc.)

To allow file uploads, we need to add the following attribute to the <form> tag:

enctype="multipart/form-data"

In the end, the form HTML tag should look like this:

<form method='post' enctype="multipart/form-data">

2. Modify settings.py to store uploaded files

Now in settings.py, add the following lines at the end of the file (os must be imported at the top of the file):

MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

Here:

- MEDIA_URL: This mentions the URL endpoint. This is the URL prefix through which the uploaded files can be accessed from the browser.
- MEDIA_ROOT: We have seen this earlier in the Django Templates article under the DIR settings for Templates. If you don't understand it right now, you will understand it later in the article!

The second line tells Django to store all the uploaded files in a folder called 'media' created in the BASE_DIR, i.e., the project directory. We need to create the folder manually, so that all the uploaded files will be stored in the media folder we create next.

3. Creating the Media Folder in the Django Project

Now in the project folder, create a new folder with the name 'media.' Once the folder is created, we will move on to creating the eBook upload webpage.

Creating an E-Book upload webpage

Now let us make a webpage through which clients can upload PDF files of the books they have.

1. Creating an E-Book Model in models.py

In models.py, create a new Django Model "EBooksModel" and then add the following code:

from django.db import models

class EBooksModel(models.Model):
    title = models.CharField(max_length=80)
    pdf = models.FileField(upload_to='pdfs/')

    class Meta:
        ordering = ['title']

    def __str__(self):
        return f"{self.title}"

Here:

- We have used the well-known CharField, which will store the name of the pdf that the client submits.
- FileField is used for files that the client will upload.
- The upload_to option specifies the path where the file is going to be stored inside the media folder. For example, I have used 'pdfs/', which implies that the files will get stored in a folder named pdfs inside media.
- class Meta and def __str__: we covered these in the Django models article.

Note: The uploaded file won't be saved in the database; only the instance of the file will be saved there. Hence even if you delete that particular instance, the uploaded file will still be inside the media folder. You will know what is meant by an instance of a file later in this article, so hold on!!
2. Creating the UploadBookForm in forms.py

We will now import the EBooksModel into forms.py and then create a new ModelForm "UploadBookForm." Create the form using the knowledge we learned in Django Forms:

from django import forms
from .models import EBooksModel

class UploadBookForm(forms.ModelForm):
    class Meta:
        model = EBooksModel
        fields = ('title', 'pdf',)

3. Creating BookUploadView in views.py

The code here will be similar to the one we wrote in Django Forms. But here, we need to accommodate the uploaded files (placed in request.FILES instead of request.POST). For that, simply add request.FILES along with request.POST, as shown below:

form = UploadBookForm(request.POST, request.FILES)

Therefore the full code will be:

from django.shortcuts import render
from django.http import HttpResponse
from .forms import UploadBookForm

def BookUploadView(request):
    if request.method == 'POST':
        form = UploadBookForm(request.POST, request.FILES)
        if form.is_valid():
            form.save()
            return HttpResponse('The file is saved')
    else:
        form = UploadBookForm()
    context = {
        'form': form,
    }
    return render(request, 'books_website/UploadBook.html', context)

4. Creating the UploadBook.html Template

Now we need to create the <form> element in the template file. Hence, create a template file "UploadBook.html" and add the following:

<form method='post' enctype="multipart/form-data">
    {% csrf_token %}
    {{form}}
    <input type="submit" value="Submit">
</form>

Don't forget to add enctype="multipart/form-data"; otherwise, the form won't work.

Now finally, let's map the view to a URL (book/upload).

5. Creating a URL path for BookUploadView

Now in urls.py, add the path to link BookUploadView to 'book/upload' using the method we saw in Django URL mapping:

path('book/upload', BookUploadView, name='BookUploadView')

Now that we have created a new model, we must perform the migrations again. So in the terminal, enter the following commands one by one:

python manage.py makemigrations
python manage.py migrate

That's it! Now let's run the server and check the browser. Voila, the upload form is up!

Now choose a pdf and click the submit button. When you hit the submit button, the "The file is saved" page will appear.

If you go to the media folder, you will see a pdfs folder, and in it the pdf that you submitted.

Register the newly made model in the admin site, using:

admin.site.register(EBooksModel)

Then load the admin site in the browser, go to EBooksModel, and select the element we just submitted. Now here, if you observe the pdf field, you will see a Currently: option. The path that is written in front of it, pdfs/cprogramming_tutorial.pdf, is called an instance. Therefore pdfs/<pdf_name> is an instance of the <pdf_name> file. Django saves only the instance of the file and not the file itself. Hence even if you delete the model from the admin site, the pdf file will still be there in the media folder.

View the uploaded files from the browser front-end

In the above webpage, the instance appears as a link. But if you click on it, you will get an error message. This happens because the endpoint is not mapped. To correct this error, we need to map this endpoint to the particular file. To do that, go to urls.py and add:

from django.conf import settings
from django.conf.urls.static import static

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

If you read the lines, you will get a rough idea of what we are doing here:

- In settings.py, we have already set DEBUG = True, so settings.DEBUG will always be true.
- Inside the if block, the code adds static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) to the urlpatterns present above.

The line static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) can be thought of in this way: to the host website, we are adding the endpoints –

- MEDIA_URL (which we set to '/media/' at the start of this article),
- and then document_root (which is the location of the pdf file inside the media folder).

Hence if I want to view the cprogramming_tutorial.pdf file that I uploaded earlier, I will go to <host>/media/pdfs/cprogramming_tutorial.pdf (observe how MEDIA_URL ('/media/') is being used). This is what the above code in urls.py does.

That's it; now if you reload the server and click on the instance we saw earlier on the Django admin page, you won't get the error. Click on the instance link and check!! Hence we can now see the pdf via the browser!

Conclusion

That's it!! We hope you have learned everything you need to upload files to Django. Also, you can learn more about the topic from the official documentation.

Practice Problem: Using the knowledge gained from previous articles, try to make a webpage showing all the E-books available, along with a link to view each of them. A minimal sketch of one possible approach follows.

Stay tuned for more advanced tutorials on Django topics!
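For the practice problem, here is a minimal sketch of one possible solution, reusing the EBooksModel from this tutorial. The view name, template path, and URL are arbitrary choices, not something prescribed by the article:

from django.shortcuts import render
from .models import EBooksModel

def BookListView(request):
    # All uploaded e-books; ordering comes from the model's Meta class.
    return render(request, 'books_website/BookList.html',
                  {'books': EBooksModel.objects.all()})

In the hypothetical BookList.html template, the FileField's url attribute builds the /media/... link for us, so each title can link straight to its pdf:

{% for book in books %}
    <p><a href="{{ book.pdf.url }}">{{ book.title }}</a></p>
{% endfor %}

Wire it up with a path such as path('books/', BookListView, name='BookListView') in urls.py, and the media-serving lines we added above will take care of serving the files during development.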
https://www.askpython.com/django/upload-files-to-django
Vue.js responsive grid system

Vue.js Grid

A responsive grid system with smooth sorting, drag-n-drop and reordering features for Vue.js 2.x. A good example of using the plugin would be rendering macOS' Launchpad or Dock.

The plugin does not modify the source data array. Every time a permutation is performed, you will get a new sorted array in the event (items). The same works for removing elements: you will get a new "cleaned" array in your @remove event handler. Currently there is no way to extend the data array after event handling. But hopefully I'll come up with a clean way to do it in the near future.

Example

Install:

npm install --save vue-js-grid

Import it in your project globally:

import Vue from 'vue'
import Grid from 'vue-js-grid'

Vue.use(Grid)

You can set the data you would like to be sorted as seen below:

data () {
  return {
    items: [
      'a',
      'b',
      'c'
    ]
  }
}

In your templates:

<grid :items="items">
  <template slot="cell" scope="props">
    <div>{{props.item}}</div>
  </template>
</grid>

For available events & props, please see the project page. You can also check the Demo Page. If you are interested in this project, you can find more on GitHub. Cheers!

Created & submitted by @yevvl.
https://vuejsfeed.com/blog/vue-js-responsive-grid-system
Good Afternoon Developers! Today I'm continuing my discussion of printing from webpages. For my first post, see here. Another option for networked printers is using HTTP POST to send raw print data directly to the printer. This can be done in Javascript to print locally from any HTML webpage to most Zebra printers. It gives the website developers, rather than the users, control over setup details like label size, margins, and print darkness. The only thing needed from the user is the local printer IP address. The printer IP could be set up once and saved to minimize end user interaction. Thankfully, with HTTP you can print from a mobile device, smartphone, tablet, or PC. No drivers are necessary, so Android devices without drivers can use this to print. No one has to install or run any software besides a browser. All this makes it very flexible as to the devices your web app can be run on.

There are several limitations to this mode of printing. One is that you as a developer have to create the label yourself. Zebra printers primarily use a print language called ZPL. Normally the driver handles the conversion from document to print language. Another is that some browser security settings block communicating with the printer. The third limitation is that there is no bi-directional capability. There is no way to verify if a printer is online or is capable of printing before or after sending a print job. Use this feature only in situations where it is not critical that the print job gets done, or where you can easily reprint if the printer was off or out of paper. Another limitation with HTTP printing is that the printer must be networked on the same network as the user. USB and Bluetooth connectivity are not options with this technology.

The easiest way to handle creating a print job is to install the Zebra Driver and set it up for how you want the labels to be printed. Then take a label design tool like Zebra Designer to create a reusable document format. If you are printing in a specific application, say shelf labels, most of the time you want the price always in one area of the label, and the UPC in another area. You can use Zebra Designer to lay this template out. When you want to print, you just send the basic information of the UPC and price for each label. This speeds up time to print and ensures the best quality images. You can also manipulate the ZPL directly as a simple text file. The Programming Guide has examples of how to do this. It also explains how to set up printers for Wi-Fi, as well as document formatting. Don't worry, most Zebra printers are set up to accept this type of communication by default, so no setup is really necessary.

There are two HTML documents attached. One of them is a single page, very basic document showing the exact Javascript needed to print. The other page, with associated documents, is more complicated, but shows multiple ways of handling printing from a webpage and includes several types of documents to print.

- Name badge demo: Shows how to take basic user input and embed it directly in a raw ZPL print string and print it.
- Type your own ZPL Script: Essentially the same as the other basic html page; take what's in the textbox and send it as raw data to the printer.
- Print Configuration Label: Simple string to print status and setup information about the printer.
- Print Zebra Demo labels: Shows how to take document templates residing on a server, send them to the printer to be stored, then read them within Javascript, pull out variable fields where user input is needed, and display those fields. When the user has finished entering their data, they can then hit a print button to print it.
- Print Labels from your Computer: Does the same thing as demo #4, but gets the templates from the local user's PC instead of the server.

For the demos that print formats, there are two different types of formats: standard ZPL and XML. XML printing is useful if data is already formatted in XML, say from a database. The files that are in this Zip file are:

- Basic_HttpPOST.html: A very basic webpage with embedded Javascript showing the script to print directly to a Zebra printer.
- HTTPPost_printing.html: A more advanced webpage with embedded Javascript showing several demos.
- ZCloudDemo.css: A standard css to make the webpage look nice and Zebra-ish.
- Cloud_Connect.png, Zebra_Tag_Horizontal.png, and icon.ico: images to make the page look nice and Zebra-ish.
- File_catalog.xml: used by the "Print Zebra Demo labels" functionality to provide a list of zpl files for users to choose from (without a PHP or ASP backend).
- Test.txt, test2.txt, and herbert2.txt: basic ZPL print format files used by the "Print Zebra Demo labels" demo.

To try it yourself, simply extract the zip folder anywhere on your computer and double click the HTTPPost_printing.html file. This will open your favorite internet browser. You will need to input the IP address of your Zebra printer, then you should be ready to print. If you are using Chrome or IE10+, the "Print Zebra Demo labels" demo won't work unless you are running it from IIS.

Edit: If you are having a problem with print-jobs not going through, check the console log. If you see an error along the lines of "Access-Control-Allow-Origin header not found", please see my follow-up post.

Other Articles on this topic: Printing from Websites part 1, Printing From Websites Part 3

Robin West, ISV Engineer - Zebra Technologies

Hi Robin, Thanks for the valuable info - we're really excited about finally being able to print from cloud apps to Zebra printers. We were able to print directly from the browser to a printer on the same network, and we're now trying to print from an app hosted on a different network. We're a little stuck and hoping you might be able to give us some direction. In the ISV newsletter and the Link-OS programming guide, it mentions using weblink to connect to the "zebra servlet". Is there an existing servlet that the printer is supposed to connect to? Or do we have to build our own servlet first? And in that case, would we be able to get access to the Zebra servlet source code example or documentation on how to build it? Really appreciate any help you could provide. Thanks, Enam

From the newsletter - "During this process, the printer will request the servlet's certificate to verify it's a Zebra Servlet, while the servlet requests the printer's certificate for verification. After both certificates are authenticated, the printer sends the servlet its discovery packet."
The server security cert needs to be signed by us, but the directions on how to do that are in the documentation with the SDK. Robin The Zebra Weblink Servlet is already written and is downloadable as part of the Zebra Link-OS SDK. When you download the SDK, there are actually SDK's for PC, Android, iOS, etc. The servlet is in the PC section Edit:the section is named Webservices and the servlet is specifically in the lib folder and named "zebra.war". There is documentation and a sample app included. We also have it running as a demo here. The servlet is very basic and just provides access to printers connected to it with the expectation that the SDK will be used to provide functionality for your website. Hope this helps and let us know if you have more questions, Robin Thanks for this Robin very useful. Looking to build a prototype for a manufacturing facility who will need to print QR codes, my aim is to use nodejs to output the codes directly to the printer. For testing purposes could you recommend the cheapest Zebra printer which will fulfill the requirements (QR code and a LAN connection)? Hopefully I can pick one up secondhand. Do you think I might be able to use a USB version if I get windows to share the printer (will it give it a pseudo IP address I wonder?) Thanks Mark Huh... very cool! How big a label do you need and how many do you think will print a day? If you only need a label that is a max 2" wide and only need to print occasionally, the 2824plus is a good option. If you need labels a little larger, the GC420 is the entry level, but it might be easier to get a used GK420. The GC does not offer Ethernet even as an option. Once you are in production and need high throughput, then you are looking at the Tabletop line of products. Entry level there is the ZT210. The nice thing is your raw ZPL for creating the QR code will work the same across all these products. You can also set up your driver to take and send raw data, but this will still be slower. Another option is to create a small app to run on the PC (Java applet, or something similar) that will take the raw data and send it directly to the printer. Hope this helps, Robin West In the post you said "most Zebra printers are set up to accept this type of communication by default". Does the GK420D with ethernet support printing over http? Is there a way to tell which printers support this method of printing? Hi Jared, The GK420d does support HTTP printing. To determine if it is currently enabled (or available) send the following string over an open communication channel (TCP or Serial) to the printer: ! U1 getvar "ip.http.enable" Followed by a carriage return (hit Enter). The printer will respond with "on" or "off". Make sure your quotes are straight rather than slanted. if it responds with a "?" or there is no response, it's probably not an option. If it's off and you want to turn it on, send: ! U1 setvar "ip.http.enable" "on" followed by a carriage return again. These commands can be found in the ZPL manual page 1023. Robin EDIT: I forgot, you can also check if HTTP post is running by going to the printer webpage. Just open your favorite web browser and type the printer's IP address followed by /index.html like this: The printer has a mini webserver that you can use do set up, see previews of labels, get a directory listing, check real-time status, etc. This is all working off the HTTP system that it uses to handle POST messages for printing. 
Hi Robin, This article has been useful in giving us the hopes that we can achieve our usecase of being able to print from our web based application to a label printer in a diagnostic laboratory. However after spending a couple of days i have not yet been successful. The way you explained it sounded very simple and promising. Right now i am trying on zebra GC420t as the vendor we talked to suggested that's the cheapest model available and would serve our usecase. However this model does not come with a ethernet port. 1. What are the options i have to achieve what i want? 2. I tried attaching a print server hardware to the above mentioned address hoping to get the IP address. To save some cost and for testing purposes I have bought a Digisol USB print server hardware. When i try to print from the html part of the demo you have attached by specifying the IP of the print server, it gives a 404 as the pstprint is not found. My guess is that the print server is not forwarding the http request to the printer. So will this not work? How do i provide this model an IP address. Do i need a zebra print server only? I read something about USB masking to IP address in a comment in this post. Will that help and how does that work? Is there any resource that might help me in understanding and implementing that? 3. The usecase we have is of printing labels for laboratory on a size 10mm * 25mm. There would be approximate 500-700 labels printed per day. And as mentioned we need to print it from our application which is served within a browser. Which model would you recommend? This is to help a NGO hospital so we would like to reduce cost as much as possible. Hi Arjun, Have you tried to view the printer webpage? That should give you an idea if the print server is behaving correctly. I believe you are probably right in that it's not forwarding the http request to the printer. I don't have knowledge of that print server so I can't give much insight. I don't necessarily agree that the GC will fit your use case based on what you've stated here. You will probably spend more on print server hardware (and print head replacements) than if you went with a slightly higher model. The TLP2824 Plus with Ethernet is only slightly more expensive than the GC and it's space saving as well. Also, just to clarify, you are using labels that are longer than wide as they feed out of the printer? I ask because there is a minimum label length to these desktop printers. I'm looking it up now, but I believe it's 16mm, maybe more with the ribbon models. I'll add the length when I find it. Hi Robin, after we found your solution to send ZPL code directly from a website to a printer via javascript and http-post, we implemented this into our webbased software to print price labels in our central warehouse. The website is hosted in the cloud, but the client (Windows 8 tablet with Google Chrome browser) and the printer are in the same local network. So far we have 6 Zebra GK420T with network device (LAN) and everything works as expected with Chrome, Firefox and Internet Explorer 11. However, what we've realised now is that your code is not working with Safari on OS X and with Safari on iOS. I think the reason for this is Access-Control-Allow-Origin. Except Safari all browsers send the ZPL code to the printer although it is throwing an error / warning concerning Access-Control-Allow-Origin on ALL browsers! As far as I know, async ajax requests rely on Access-Control-Allow-Origin. The problem is the webserver of the printer. 
It should return a correct 'Access-Control-Allow-Origin' header! So my question is: How can we tell the webserver of the printers to respond with a correct 'Access-Control-Allow-Origin' header? E.g. " Access-Control-Allow-Origin: *"or with an IP range or with a domain name! I hope, you have an idea. Thanks! Jan Robin, this is the error message in Google Chrome: XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access. Jan Hi Robin, this is the error code in Safari for OS X: "Error" Failed to load resource: the server responded with a status of 501 (Not Implemented) (pstprnt, line 0) "Error" Failed to load resource: Origin is not allowed by Access-Control-Allow-Origin. (pstprnt, line 0) "Error" XMLHttpRequest cannot load. Origin is not allowed by Access-Control-Allow-Origin. (printlabel.php, line 0) Sorry about the several comments, but I could not include all the error messages in my first post. Jan Hi Jan, Thanks for your detailed question and sorry it's not working for you. I did a little testing and it appears that this isn't really an issue with Safari and the XMLHttpRequest to print. I was able to print fine with OS X (v10.10.4) Safari on the same network with the basic code. The issue, I believe, is with cross-origin security done by several browsers. I'm working on some javascript that I hope will help, but it'll take a day to set up and test. I'll post results when I get them. Robin Hi Robin, thanks for your replay. The error actually is displayed on all browsers! The difference is that except of Safari, all browsers still send the zpl code and the label is printed. So you will not notice at first glance. I only noticed this error because it did not work on my iPhone. I then started searching and then I also noticed that it is not working on Safari on OS X and that the errors are displayed on all browsers. To my mind the reason for the error is that the webserver of the printer does not send a Access-Control-Allow-Origin header at all. We use this printing feature on two different apps which run in a normal browser. One app is totally local, which means it is within the intranet (server, clients and printers) and we are working with static IPs. The second app is run in the cloud with the tablets and the printers being in the same local network and subnet. We also tried to work with local domain-names e.g. server.example.local, tablet1.example.local and printer1.example.local for the local app, but this does not help either. So as stated several times, a correct Access-Control-Allow-Origin header would solve it. One workaround is sending the httprequest to a local server which responds with a correct header and forwards the request to the printer by using telnet. The code would look like this: <?php /* Set country settings */ header("Access-Control-Allow-Origin: *"); header("Content-Type: text/html; charset=utf-8"); mb_internal_encoding("UTF-8"); setlocale (LC_ALL, "de_DE"); date_default_timezone_set("Europe/Berlin"); /* Receive data for printing */ $PrinterIP = $_POST["PrinterIP"]; $PrintData = $_POST["ZplCode"]; /* Open a telnet connection to the printer, then push all the data into it. */ try { $fp=pfsockopen($PrinterIP,9100); fputs($fp,$PrintData); fclose($fp); echo 'Successfully Printed'; } catch (Exception $e) { echo 'Caught exception: ', $e->getMessage(), "\n"; } ?> The problem is, we are loosing speed by using this detour... 
Hi Robin, I did not read your last comment thoroughly at first. Sorry about that. Can you post the IP addresses of your setup so I can get an idea of how they connect together? On my local setup the server runs on 10.88.21.10, the printer on 10.88.21.121 and the tablet on 10.88.21.101. So they are all within the same network. The webserver is IIS 7.5 with PHP 5.3. The client OS is either Windows Server 2008 R2 as a terminal server or Mac OS X 10.10.4, both with the latest browser versions (Chrome 44.0.2403.130 (64-bit), Firefox ESR 38.1.1 and Safari 8.0.7). Hope this helps. Jan

Hi Jan, I understand your frustration. I agree that the printer needs to send a specific CORS or Access-Control-Allow-Origin header for Safari to let it through. I'm trying to put together a setup to test if the printer is capable of providing this. Unfortunately it's taking some time as I research options and fight the corporate network here. I initially tested PC Chrome, IE10, and Firefox using a similar setup to what you have (server on 10.80.127.137, PC on 10.80.127.43, and printer on 10.80.127.96). I'm sorry to say I never noticed the Access-Control-Allow-Origin header error on those browsers, although going back and looking, it is definitely there. During my initial testing I was not able to get a Mac on the same network, so I put the html file on the Mac and tested Safari that way. Of course it went through, as the request was not cross-origin at that point. Now I'm going through several ideas on how to get the printers' print server to send the correct response. I should know in the next couple of days. The reality is it may not be able to right now. The web-servers on our printers are older and not updated much. Based on my testing, if it turns out that it isn't capable, I will submit a bug. It may take some time for a fix to be released. Another option I'm looking into, if it turns out the printer is incapable, is creating a browser extension. The security on these plug-ins is much less, and it looks like most of them allow this type of communication through. In the end, you are the third person in the last week to have found this issue, so it has my highest priority to figure out how to handle it. I will respond when I know a little more. Thanks for your patience. Robin

There is a very simple change that Zebra could make in the firmware to deal with the CORS issue being discussed here. Your document HTTP Post AppNote 2456949.667535 of October 19, 2014, on page 4, describes the response the printer gives after receiving the POST request. It always responds with a fixed string:

HTTP/1.1 200 OK
Content-Type: text/html
Expires: Sat, 01 Jan 2000 00:00:00 GMT

All Zebra needs to do to fix the problem is append the following line to this fixed string:

Access-Control-Allow-Origin: *

Hi Jan and anyone else having this issue, I posted my response as a separate post here:

I ran your demo code with partial success. I think it is great to get one working. We want both printers to print through web pages. I have two kinds of Zebra label printers: one is a ZDesigner 110XiIII Plus 600DPI and the other model is a ZDesigner ZM400 300 dpi (ZPL). The latter one could print labels through the POST URL, but the former (110XiIII) did not, since printer-ip-address/pstprnt or printer-ip-address/printer/pstprnt responded with 404 Not Found. Do you think my 110XiIII printer needs an upgrade to have the pstprnt program, or could I somehow find pstprnt from the printer web server?
The Chrome console message shows the 404. Note: the one which did not work serves a different printer web page. Thanks, Henry

Hi Malcolm, We don't really have a prepackaged solution for this. Due to browser security, I doubt this will be an option any time soon. As far as I know, the only way to print over Bluetooth from an iOS browser is to have a printing app also installed on the device. One of our partners, Arrowhead, has a very nice one that will take print data or images from a browser on iOS. Another option is that you can write your own app in Xcode. I suggest using our Link-OS SDK and registering a URL scheme to send the print data to the app. The third option is to write a RhoMobile app (in javascript) as a light webclient and use the printing APIs. To clarify, your RhoMobile app would act as a custom browser that would let you have access to hardware features like Bluetooth printers. In theory, this should work, though I haven't tried it on iOS. Hope this helps, Robin

Hi Robin, Thanks for your post. I have another question: is it possible to print via a Bluetooth connection? I have a web app, the client opens it from an iPad, and the iPad connects to a Zebra iMZ320 via Bluetooth. I am trying to implement printing a ticket from the iPad by using javascript. Do you have any solution to make this happen? Thanks, Malcolm

Hi Robin, Thank you very much for your reply. I have installed MobiPrint, and I can see the printer when I print from the browser menu. But I want to customize my own print, such as using window.print() in javascript to print out a div. window.print() can bring up a dialog box for the user to find AirPrint, but I am not able to find MobiPrint anymore. Any ideas? Thanks, Malcolm

Hi Robin, Thank you very much for your instruction, but what I see from their document is just a URL; I have no idea how to use it. Is there any sample code which implements this feature, or a more detailed example? Thanks, Malcolm

Hi Malcolm, To clarify, AirPrint does not support Bluetooth printing, so it is not an option. I think there is a way to custom print through MobiPrint (URL scheme) that you can call from Javascript. Please check out their integration guide: Thanks, Robin

Hi Malcolm, I don't have any sample code, but you could contact Arrowhead. I would try something like just using window.open() or window.location.assign():

window.open("Arrowhead://malcom_xu.com/print?&v1=This&v2=is&v3=a&v4=Zebra&v5=printer&v6=test&v7=works&xsource=googlechrome&x-success=googlechrome%3A%2F%2Fmalcom_xu.com%2F", false);

Hi Malcolm, To clarify how this works, you are calling on the browser to open a webpage, but rather than the standard http://, you use the custom URL Arrowhead://. Arrowhead has registered that URL scheme (in your phone, when the app was installed) as its own. The browser then knows to open the Arrowhead app, just like if you have facebook or many other apps installed and you link to them. You are also adding in your link what you want to print, and that you would like the Arrowhead app to reopen your webpage when it's done. That's what the callback URL is for. Does this make sense? Robin

You are absolutely right, the principle should be like that, but if there were sample code that would be better, so that I know how to process errors and don't have to spend time finding out if it works. Thank you very much; I will contact Arrowhead to see if they have some sample code.

For the iPhone (at least), you need to specify the Content-Type header as "text/plain":
request.setRequestHeader("Content-Type", "text/plain");

Without that you get a blank POST request.

Hi, can I print images from a website? If possible, could you please provide a snippet?

As I begin my investigations into a Zebra printer, I want to be sure I'm selecting a model that supports this functionality. Is there an easy way to identify the printers you sell that support printing via HTTP POST? I can't seem to find this information on your site. Thanks, Matt

I want to convert dynamic images into ZPL. Is there any way to convert images into ZPL code for a .NET application?

Hi Robin! Is it possible to send a command to an IP address to check if a Zebra printer is connected to that IP? In my application I want to show the user whether he or she has filled in the correct IP. I would like to send some sort of command to the filled-in IP so I can get a response back from the Zebra printer. My application is in javascript. Thx! I basically want to check whether

Hi Robin, I have the same issue as Matt. I am working on a small project that requires HTTP POST printing. Could you or anyone recommend an inexpensive receipt printer? Cheaper is better; I just need a basic receipt printer that supports HTTP POST printing. Thanks!!!

Hi Matt, This post was from a while ago... curious if you received the information you were requesting. I am interested in HTTP POST printing but have no idea which Zebra printers support this functionality. Any help would be much appreciated!

Hi Robin, My Zebra label printer is a ZDesigner ZM400 300 dpi (ZPL). I want to send ZPL to the printer from the browser. The printer's IP is 10.64.57.105 (the same network as my PC), but the printer responded with 404 NOT FOUND. Can you tell me whether the solution is available now? Or does the printer need an upgrade to have the pstprnt program, or could I somehow find pstprnt from the printer web server? Thanks! Russell

Hi Robin, I am using a QLn420 Bluetooth printer and trying to print a zpl file from a Chrome page on an Android device. We are trying to do it using the "Zebra Print Connect" app with Android intents. We were able to call search Printer via NFC using the link below, and it works:

<a href="intent://;package=com.zebra.printconnect;scheme=ht..."> Launch Zebra Intent </a>

But we are unable to send a zpl file to the printer the same way. How can that be done? Thanks, David

Hi Mark, HTML generally uses UTF-8 encoding as well, so if you don't make sure the request is being sent with the same code page the printer is expecting (CP850), then you run into these types of issues. At least I think that's what is wrong. Either switch the printer to UTF-8 (^CI28) or make sure the terminal is sending CP850. You can put the printer into dump mode to verify what the terminal is sending to the printer. Add ~JD to the beginning of your format and send the whole thing the way you have been. The printer will print the hex for each byte it receives and the ASCII representation, if any. That way you can verify the bytes are correct CP850. Turn the printer off and on to reset it to normal mode.

Hi Robin, Hope this thread is still monitored. I've encountered a problem with extended characters that I can't seem to get to the bottom of.
If I send a document containing this ZPL line directly to our GX430t (lpr -o raw) it prints correctly:

^FO547,52^A0B,45,45^CI13^FB1065,1,0,L,0^FDK<81>ssaberg<94>, 79790^FS

(That's hex 81 for ü and hex 94 for ö from code page 850; my terminal expects UTF-8 so the characters don't display as printed.) But if I send the same document to the same printer via XMLHttpRequest, an extra character is printed in front of the ü and the ö. It appears to be the graphic character ┬, hex c2 in code page 850. Does this make any sense? Thanks, Mark

Hi Robin, Thanks for suggesting the ~JD diagnostic. It allows me to see that there is indeed an extraneous hex c2 sent to the printer just before the non-ASCII character. And since I can see what is fed into XMLHttpRequest, I can be sure that XMLHttpRequest is adding the byte. Since we don't generate the ZPL, I'm not sure what to do next, but I understand the problem better. Mark

-----------

Just a follow-up note for anyone else that might have encountered the same problem. The problem appears to be passing an unsupported character set through the browser: character set IBM850 and the Firefox browser, in my case. Even if I set the Content-Type header parameter charset to IBM850, Firefox will still try to interpolate a non-ASCII character into UTF-8. As a workaround I have included a javascript download button alongside the print button. So when a label contains a non-ASCII character, the label can be downloaded and sent to the label printer via CUPS. Hopefully this problem will disappear as label vendors migrate to UTF-8. MC

Hi Russell, Please verify you can get to the printer webpage at all: enter the printer's IP address in your usual browser. If you see the page, then the /pstprnt header might not be correct. If the page does not show up, then it has been disabled. Use the Setup utility to turn on HTTP: send ! U1 setvar "ip.http.enable" "on" followed by a carriage return.

Hi Robin, We have developed a solution in our application to print automatically from a browser, similar to your example. It got through all of our testing environments, but then failed when we moved this solution to our Production environment. The Production environment uses HTTPS while the test environments were HTTP. Is there any way to get this to work by sending a POST through HTTPS? Our server is not on the same network as our Zebra printer, so I believe we have to send a request client-side in Javascript. We have the IP of the printer on the client side as well. Thanks! Ryne

Hi Robin, thank you for your useful examples. I'm trying to print using an HTTP POST like in your example, on a printer model ZTC TLP2844-Z-200dpi, but the address set in your html page HTTPPost_printing.html doesn't work. The printer web server responds with a 404 error for the URL http://<my_ip_address>/pstprnt. I was wondering if you know the correct page to send my own ZPL script to on the ZTC TLP2844-Z printer. Kind regards, Rinaldo

Anyone having issues with CORS or wanting to POST print via HTTPS, please contact Zebra Technical Support. It is the best method to get these concerns escalated.

Hello Robin, Your post on printing via HTTP was very helpful. I know the question below might be redundant to your explanation above with regards to getting feedback, but I want to make sure whether all doors are closed in the HTTP approach for getting an appropriate response from the printer.
We would like to receive actual error feedback in case of any error while printing due to any hardware / ribbon / tag related issues. Please suggest how this is possible via HTTP. If the response as requested above is not possible, it is also OK if there is a way to receive the value of "odometer.rfid.valid._resettable" via: 1.) HTTP communication, OR 2.) some integrated communication via our SAP system (which is our backend ERP system) on the same network. We cannot use any desktop applications or extensions for our requirement. Any help here is very much appreciated!! Regards, Raja. P.S.: I have reached out to Zebra technical support but I was asked to reach out to the developer community for such requests.

Robin, can I use HTTP POST with a Zebra installed on a networked PC? I use this address: XX.X.X.XX/zebra (shared printer), or does it only work with the Zebra's own IP? I tried to add this path in your examples, but it doesn't work. Is there another method? Thanks. Paula.

Robin, can you please help with another issue about Zebra and EPL? It's about printing special characters via HTTP POST. We could chat by email, if it pleases you. Thanks already, Richard.

Hi Robin, Excellent article! Your example works fine for me, but when I try to implement it in my web page, it doesn't work because my web page is HTTPS and I get the message "Mixed Content: The page at '' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint ''. This request has been blocked; the content must be served over HTTPS." How can I have the printer working over an HTTPS service instead of HTTP? Thanks, Gastón

Hello, the zip file is no longer accessible. Can I still get it somewhere?
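Since the original attachments are no longer available (see the last comment above), here is a minimal sketch of the direct-POST technique the article describes, written in Python rather than the in-browser JavaScript the demos used. The printer IP and label contents are placeholders; it assumes a networked Zebra printer with HTTP printing enabled (the /pstprnt endpoint and the text/plain content type are both confirmed earlier in this thread):

import urllib.request

PRINTER_IP = "192.168.1.50"  # placeholder: use your printer's IP
zpl = "^XA^FO50,50^A0N,36,36^FDZebra HTTP POST test^FS^XZ"  # trivial one-line label

req = urllib.request.Request(
    url=f"http://{PRINTER_IP}/pstprnt",
    data=zpl.encode("cp850"),              # send the code page the printer expects
    headers={"Content-Type": "text/plain"},
    method="POST",
)
# The printer replies 200 OK whether or not the label actually printed;
# as the article notes, there is no bi-directional status over this channel.
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status, resp.reason)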
https://developer.zebra.com/comment/1711
DS1307.swift

A Swift library for the DS1307 (also DS1302 and DS3231 w/o enhanced functions) I2C Real-Time Clock.

Summary

This library can configure a DS1307 RTC clock and, once it's running, can retrieve the current date and time. This component is powered with 5V and requires a 32.768kHz quartz crystal oscillator to operate. It exposes an I2C interface that this library uses to interact with it. If you are not using a backup battery with this RTC, remember to connect Vbatt to GND.

Usage

The first thing we need to do is to obtain an instance of I2CInterface from SwiftyGPIO and use it to initialize the DS1307 object:

import SwiftyGPIO
import DS1307

let i2cs = SwiftyGPIO.hardwareI2Cs(for: .RaspberryPi2)!
let i2c = i2cs[0]
let ds = DS1307(i2c)

Once you have an instance of the DS1307 object, you can set and get the time as individual components (day, month, year [0-99], hours, minutes, seconds), as a Foundation Date, or all the components in one go:

//Properties of DS1307 (all gettable/settable individually):
//year, month, date, hours, minutes, seconds

ds.setTime(hours: 16, minutes: 25, seconds: 00, date: 10, month: 5, year: 17)
ds.start()

var (hours, minutes, seconds, date, month, year) = ds.getTime()
print("\(year)/\(month)/\(date) \(hours):\(minutes):\(seconds)")

sleep(10)

(hours, minutes, seconds, date, month, year) = ds.getTime()
print("\(year)/\(month)/\(date) \(hours):\(minutes):\(seconds)")

//Also available:
//
//ds.setDate(Date())
//let now = ds.getDate()

The functions start() and stop() resume or pause the clock.

A complete example is available under Examples/. To build against the library with the Swift Package Manager, declare it as a dependency in Package.swift:

import PackageDescription

let package = Package(
    name: "TestRTC",
    dependencies: [
        .Package(url: "https://github.com/uraimo/DS1307.swift.git", majorVersion: 1),
    ]
)

The directory Examples contains sample projects that use SPM; compile one and run the sample with ./.build/debug/TestRTC.
https://swiftpack.co/package/uraimo/DS1307.swift
Naming Style

Configuring naming rules

To modify a default naming rule
- In Rider settings (Ctrl+Alt+S), go to the code style settings and open the Naming tab.
- Select the desired rule in the list on the left.
- On the right of the page, check the existing style for the rule.
- If the existing style is acceptable, but you would like to allow other styles for this rule, click Add. When there are several styles for a single rule, JetBrains Rider does not detect a code style violation if a corresponding symbol name matches at least one of these styles. Otherwise, JetBrains Rider reports a naming style violation. If you do not want JetBrains Rider to detect violations of this rule, clear the Enable inspections check box.
- Click Save to apply the modifications and let JetBrains Rider choose where to save them, or save the modifications to a specific settings layer using the Save To drop-down list.

If a symbol name contains an all-uppercase abbreviation, JetBrains Rider detects it as inconsistent camel casing and reports a problem. In this case, you can configure the list of abbreviations to be ignored by the naming style inspection. It is important to note that uppercase abbreviations should not contradict the naming style defined for a specific kind of identifier. For example, if you added MS to the ignored abbreviations, MSBuilder will be an acceptable name for identifiers with the UpperCamelCase style, but not for identifiers with lowerCamelCase or all_lower naming styles. Similarly, myMSBuilder will be OK for lowerCamelCase- but not for UpperCamelCase-styled identifiers.

JetBrains Rider uses the 'Inconsistent Naming' code inspection to detect violations of naming rules in your code. By default, this inspection is always applied during the design-time code inspection and highlights the detected violations as warnings in the editor. For example, according to the default style, names of interfaces should have the 'I' prefix. If an interface name does not match this rule, JetBrains Rider highlights it as a warning.
- Click the action indicator to the left of the caret to open the action list.
- Choose Inspection 'Inconsistent Naming' | Find similar issues in file and then select the desired scope.
- All found naming style violations will be displayed in the Inspection Results window.
- You can double-click the detected problems to navigate to the code of the corresponding symbols.

For most of the naming style violations highlighted in the editor, JetBrains Rider suggests a quick-fix with a conforming name, e.g., you can press Alt+Enter and choose Rename to [conforming name] in the action list. Automatic correction of naming style violations can also be performed in the current file, project or solution using the fix in scope feature. However, if a symbol with the calculated conforming name already exists in the same namespace, the quick-fix is not suggested. You can fix the naming of such a symbol with the Rename refactoring.

Configuring and disabling the inspection

To disable automatic checkup of naming style
- On the inspections page of JetBrains Rider settings (Ctrl+Alt+S), start typing 'Inconsistent Naming', and then clear the check box next to the corresponding code inspection.

To disable automatic checkup of a specific naming rule
- In Rider settings (Ctrl+Alt+S), go to the code style settings and open the Naming tab.
- Select the desired rule in the list on the left.
- In the Edit Rule Settings dialog that opens, clear the Enable inspections check box.
- Click Save to apply the modifications and let JetBrains Rider choose where to save them, or save the modifications to a specific settings layer using the Save To drop-down list. For more information, see layer-based settings.
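To make the abbreviation rule concrete, here is a small hypothetical C# illustration (not taken from the JetBrains documentation), assuming "MS" has been added to the ignored abbreviations:

// With "MS" ignored, the inspection treats the abbreviation as a
// single unit when it checks casing against the rule's style.
public class MSBuilder { }       // accepted under UpperCamelCase
internal class myMSBuilder { }   // accepted under lowerCamelCase,
                                 // flagged under UpperCamelCase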
https://www.jetbrains.com/help/rider/2017.3/Coding_Assistance__Naming_Style.html
qnx_segment_raw_free - add a contiguous segment of memory to the system memory map

#include <sys/seginfo.h>

unsigned qnx_segment_raw_free( struct _seginfo *buf );

The qnx_segment_raw_free() function adds a contiguous segment of memory to the system memory map. The physical address and the size of the memory are provided in the buf parameter. The size must be a multiple of the page size (4K for 32-bit QNX). This function is typically used to return memory acquired by the qnx_segment_raw_alloc() function; however, it may be used to add memory that exists in your computer but wasn't reported by the BIOS. There are no checks made on the validity of the added memory.

Returns: See qnx_segment_raw_alloc().

Classification: QNX

See also: errno, qnx_segment_raw_alloc()
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/qnx_segment_raw_free.html
#include <poppler-page-transition.h>

Describes how a PDF file viewer shall perform the transition from one page to another.

In PDF files there is a way to specify if the viewer shall use certain effects to perform the transition from one page to another. This feature can be used, e.g., in a PDF-based beamer presentation. This utility class represents the transition effect, and can be used to extract the information from a PDF object.

Member documentation:
- Construct a new PageTransition object from a page dictionary. Users of the library will rarely need to construct a PageTransition object themselves. Instead, the method Poppler::Page::transition() can be used to find out if a certain transition effect is specified.
- Get the duration of the transition in seconds as an integer.
- Assignment operator.
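A short sketch of how this class is usually reached from poppler-qt5. Document::load(), numPages(), page() and transition() are standard poppler-qt5 calls, and duration() is the integer getter described above; treat the error handling and file name as illustrative rather than authoritative:

#include <poppler-qt5.h>
#include <iostream>
#include <memory>

// Walk a PDF and report which pages declare a transition effect.
// Poppler::Page::transition() returns a null pointer when the page
// specifies no transition.
int main()
{
    std::unique_ptr<Poppler::Document> doc(
        Poppler::Document::load("slides.pdf"));
    if (!doc || doc->isLocked())
        return 1;

    for (int i = 0; i < doc->numPages(); ++i) {
        std::unique_ptr<Poppler::Page> page(doc->page(i));
        if (!page)
            continue;
        if (const Poppler::PageTransition *t = page->transition())
            std::cout << "page " << i << ": transition, "
                      << t->duration() << " s\n";
    }
    return 0;
}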
https://poppler.freedesktop.org/api/qt5/classPoppler_1_1PageTransition.html
For some reason, this doesn't work, and I can't figure out why. I'm probably missing something easy, but here it is. This is only part of the code I am testing, because it passes every time no matter what I put in:

Code:
#include <cstdlib>
#include <iostream>
#include <iomanip>
using namespace std;

int main(int argc, char *argv[])
{
    double minutes, total, rate, num, num2;

    cout << "Enter time call was started in HH.MM format\n\n\n";
    cout << "Example: 07.51 is 7:51 \n\n";
    cout << "Time: ";
    cin >> fixed >> setprecision(2) >> showpoint;
    cin >> num;
    cout << fixed << setprecision(2) << showpoint;
    num2 = num - static_cast<int>(num); // should give the fractional part
    cout << endl;
    cout << num2;
    cout << endl;
    if (num2 < 0.6)
        cout << "\nok, this worked.\n";
    else
        cout << "ok, this didn't work.\n";
    return 0;
}

thanks for any advice.
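One plausible explanation, offered as a guess: 'fixed', 'setprecision' and 'showpoint' are formatting manipulators that have no effect when applied to cin, and the extracted fraction is a binary floating-point value that is not exactly 0.51, so comparisons near a boundary like 0.6 can behave unexpectedly. A sketch (names illustrative) that sidesteps the boundary by converting the minutes field to an integer first:

#include <cmath>
#include <iostream>

int main()
{
    double num;
    std::cout << "Time (HH.MM): ";
    std::cin >> num;

    // Recover MM as whole minutes; std::lround absorbs representation
    // error such as 0.51 being stored as 0.50999999....
    int minutes = static_cast<int>(
        std::lround((num - static_cast<int>(num)) * 100.0));

    if (minutes < 60)
        std::cout << "valid minutes field: " << minutes << "\n";
    else
        std::cout << "invalid minutes field: " << minutes << "\n";
}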
http://cboard.cprogramming.com/cplusplus-programming/106222-problem-simple-if.html
>> in decline I hear quite often from GOP candidates, if that's so they should move to China where they have shipped all the jobs.

What a stupid comment. America IS in decline with the vast debt that has been accumulated over the past 4 years and now exceeds the entire GDP of the country at $16 trillion. There are now 22.5 million Americans combined who are on the unemployment rolls, who are discouraged and have left the jobs market, and who are underemployed (McD jobs). There are now 107 million Americans on some form of government assistance like welfare, food stamps, extended unemployment and Medicaid. What this country needs instead of its productive citizens leaving the country as this ignoramus suggests, is to kick Obama onto that waiting helicopter on the WH lawn on January 20, 2013. That will be the first step to healing this country and getting people back to work. That will start with the elimination of overlapping regulations and certainty for the private sector in tax policy.

Ignoramus? Wow. Why be so rude in your commentary? One need not call another's comment "stupid". I would say that greater demand for goods and services will put more Americans back to work, not reducing regulations, which does little but create higher profits and reduce environmental stewardship. Same with taxes. Taxes are no different than any other expense line item: they don't drive the business model, they are the result of the business model.

The comments of ignoramuses are stupid. Get it? "I would say that greater demand for goods and services will put more Americans back to work, not reducing regulations" Try to think. Where does the greater demand for services come from? It comes from companies expanding their operations because they can make a profit from the expansions, both from sales within the US and by exporting goods. And companies who are suffocating from overlapping regulations don't expand. They stash their cash. As they are doing. Same applies to taxes. I agree, they are part of the cost of doing business. So, like excessive regulations, they suppress profits and encourage companies to be cautious and not to expand. The US, having the highest corporate taxes in the industrialized world, is chasing away US based operations. It is very simple. Those little lessons in economics are free. If you would like to have the same knowledge, try an MBA from Harvard.

The bottom line doesn't drive the top line, it's just the opposite. Put another way, would increasing taxes, and thereby reducing the bottom line from, say, 1m to 900k, compel you to fire workers that you need to fill existing order volume? I wouldn't.

If there is one truism in commerce, it is that corporations thrive on certainty of taxes and regulations. That is, they like to know roughly what taxes and regulations are going to be in the period five years out for their 5 year business plan. I know this because I am recently early retired from being CFO for a major public energy mining company. Increasing overlapping regulations can render a new project, that would otherwise be profitable, totally dead in the water. Time is of the absolute essence in developing a new project, as invested capital during this time earns nothing at all until the project is given approval to proceed by regulators. The Internal Rate of Return (IRR), which is calculated before each new project is approved, is based on none other than time. Outflows of cash before they can be offset by inflows are punished severely in the IRR.
We have been in the position a number of times over the past 4 years (I have remained as a board director) where we have received approval from the particular state to proceed, but have waited over a year more to receive approval from the Department of Energy. People who have been hired and trained to start at the new project are idled... waiting. And then the approval from the federal agencies is only given in installments. Do this, and contact us. Do that, and contact us. This is the game they play. This game has escalated dramatically since the Obama administration has put its czars in place in the overseeing government departments, such as the Department of Energy and the Environmental Protection Agency. These federal departments now impose similar but slightly different regulations to those the states have long imposed. One set of regulations we could cope with, two is overpowering. But that is their mandate: slow down non-green energy projects. And donations by green energy (3% of US energy) to the democratic party have been repaid by the Obama administration using OUR money (borrowed from places like China) for massive subsidies to the green energy industry. An industry which cannot survive without a steady stream of OUR money given to them by the government. The result of this is that energy projects which supply 97% of the energy the US uses are put on hold. We have three major projects which have not proceeded, and will not until we see more certainty. As an alternative, we are now looking abroad for places to invest our capital. This will change if Romney is elected and we see the dismantling of these overlapping regs - as we expect. Under the Obama term, virtually all the vast federal lands have been off limits for leasing and drilling. The drilling that has been occurring on private lands over the past 4 years was approved during the Bush administration. Uncertainty of the tax code is another bane of business. With the highest corporate taxes among industrialized nations, the US is ensuring that investments occur elsewhere. Generally, money goes where it is wanted. These are my personal observations. This country is desperate for a change quite different from the change that was promised in 2008.

I would agree that uncertainty delays investment but I suspect your industry is the outlier with respect to increased regulation, yours being in the middle of global warming concerns and significant, irretrievable environmental degradation. Most companies don't operate in that kind of regulatory environment and as such the regulations you speak of are not really onerous. And I also suspect that your industry doesn't take into account the external economies related to your operation. I'm also guessing that extraction industries such as yours are relatively less labor-intensive than a hospital or light manufacturing that makes up a much greater part of our national employment picture. That aside, you still didn't make the case that lowering costs compels a company to hire people it doesn't need or that increasing costs compels you to fire workers you do need.

Look, there is a limit to the times I will put valid ideas to you and have them marginalized. I will instead concentrate on a wider audience for discussion at the head of this forum and others. Your approach may be a "tie up the eloquent opposition in endless questions and prevarication".
While I agree that the extraction industry has been particularly affected by the Obama overlapping regulations, the general thread running through Obama's push has been driven by a fanatical obsession with the environment. This impacts everything where soil is moved, the construction industry being the major one. A great example of this is the Keystone pipeline, which Obama has been sitting on for two years. The pipeline has been moved to another location to address his concerns, but now there is some other reason for his fence sitting. I am pretty sure he will take his environmental votes in this election and then, if he were to be successful, afterwards approve it so that the 40,000 jobs in its construction are gained. It is called having your cake and eating it. His excessive regulations also impact the ability of commerce to hire and fire people, his meddling in the power of unions, etc. This again persuades multinationals that operations in the US are too much trouble. I have a board seat on my local Chamber of Commerce - I see it all.

"That aside, you still didn't make the case that lowering costs compels a company to hire people it doesn't need or that increasing costs compels you to fire workers you do need." With that statement, I am not sure you are particularly gifted in English comprehension, or perhaps you lack sufficient business understanding. Although I would have thought that the average bright person with an open mind would have "got it". I expounded at length on corporate expansions, which involved additional required hiring, and what motivated companies to proceed or not proceed. If excessive taxes and/or regulations impede companies from reaching their hurdle rate of return (ROR), they don't proceed. If you don't get it the first time, you are probably not going to get it. Or don't want to get it. I suspect it is the latter.

The concern with the environment is hardly "fanatical"; global warming isn't a hoax and will cost us far more in the long run than the costs of mitigating the problem as best we can. The Economist agrees with that statement. Great example, Keystone pipeline. Oil is a commodity sold on the world-wide market at the best price available. The pipeline would add an amount that, given all the other variables, won't move the price one cent either way. The 40k construction jobs is a lie, maybe 10k, and even those are temporary. It does however endanger one of the largest aquifers in North America, and for what, so oil companies can extract more oil while ignoring the external economies. Good trade-off for Big Oil, bad one for the people who would like to preserve the environment. External economies, they are rarely accounted for. I run a company, I see it all, and I see that the CoC is as short-sighted and narrow-minded as an organisation can be. Income taxes reduce investment? If I told my owners that I'm not buying a new piece of equipment we need and that will generate a 20% ROI because I'm uncertain what the tax rate will be 5 years from now, I wouldn't be here for much longer. Investments are made based on good ideas; the tax rate takes part of what's left over, it doesn't drive the top line. Put another way, as CFO, would you fire people if the company's tax rate increased 5 percentage points? I wouldn't. Long term investments by world-wide companies may incorporate that into their formulas but small to midsize don't. BTW, I was wondering how quickly you would begin with the insults. Didn't take too long, did it.
"global warming isn't a hoax and will cost us far more in the long run than the costs of mitigating the problem as best we can. The Economist agrees with that statement." That the inhabitants of the world had anything to do with global warming is certainly a hoax. Give me one iota of scientific evidence that supports that. You won't be able to because it doesn't exist. But the "chattering classes" just swallow it right up. According to the vast majority of scientists and meterologists, global warming is very real. Common sense tells us that putting millions of tons of pollutants into the atmosphere every day over the course of a couple hundred years will produce unfavorable results. In addition, the melting glaciers and ice-packs around the poles tell us that something is most definitle happening. I do agree that it's difficult if not impossible to know with absolute certainty but any good risk manager will tell you that the statistics are not in our favor and prudence requires some action. You probably need LITHIUM !! I hope you are at least smart enough (no child left behind was designed for guys like you) to understand what I mean, if not GOD BLESS USA. Looks like you see only the shit you want to see.... keep dreaming the shit all your life or just grow up!! Ah, I see US growth for the 2Q 2012 has today been revised down from 1.7% to 1.3%. Now here is the path into the toilet: growth 4Q 2011 4.1%,growth 1Q 2012 2.0%, growth 2Q 2012 1.3%. Another thing you will notice about the chart in the attached link is that there was negative growth in 1Q 2009 of 4.5% the quarter that Obama took office. But in the 2Q 2009 the negative growth had been cut to 0.3% and in 3Q 2009 there was positive growth of 1.3% and in the 4Q there was positive growth of 4%. All before anything Obama could have done to have effected anything. So much for the huge mess according to dems that Obama had inherited.... So the overall economic picture gets much worse under Obama. And he and his handlers say give us another 4 years. A shake of the head at this ludicrous suggestion. But the boys and the girls who are in love with Obama say he's so cute, we gotta keep him in our lives.." Obama had sizable majorities in congress for the first two years of his administration. Which is more than half his term to date. And what did he do for that period of time. He passed George Bush' stimulus program and forced Obamacare through congress without a single GOP vote up, and against the wishes of a majority of the American people, who are still opposed. He could have done anything he wanted for the economy during that time. Within the first 10 days of his term, in January 2009, Obama called the opposition into the WH for a meeting on the stimulus plan. The minority whip, Eric Cantor, came with ideas. When he provided them, Obama shut him down with the line - I won the election. A great way to win friends and influence enemies.-... Look Europe was doing fine for the longest part of Obama's term - so was China. Besides, the domestic economy is weak because of joblessness and lack of money in people's pockets unrelated to exports. Obama has been in office for almost 4 years, and the recession ended in June 2009, with gradual GDP growth, so he really didn't inherit as bad a mess as dems make out. 
You can make up all the excuses you want for Obama but if you are honest, you are looking the other way with the country collapsing around you - vast debt, huge unemployment and an economy in the 2Q 2012 that was downgraded yesterday from growth of 1.7% to 1.3%. The manufacturing sector news yesterday was devastating. The Canadian economy has been doing very well for most of Obama's term and when I last checked they are a trading nation as well. If Obama was supposed not to have had effective majorities in both houses of congress enabling him to pass whatever legislation he wanted in the first two years, tell me how he was able to ram through Obamacare without a single GOP vote, and against the wishes of a majority of the Americans people, who still object to it. I am sick of the lies.

The Americans people. What a stupid thing to say.

It was a typo, you idiot. Is that your best shot? What a fool.

I'm the idiot for the typo? Are you a college student? You sound like one.

Silly boy. You are the idiot for commenting on a single typo where you could have expounded on something potentially interesting. But then you would need the smarts to do so.................. "Are you a college student?" I am a very early retired financial executive, living extremely well on a significant corporate pension and extensive investments. I am about to undertake my 39th foreign trip, ninth since retiring, all first class. You?

The Canadian economy has been doing very well for most of Obama's term and when I last checked they are a trading nation as well: yes, that's because they have Universal Health Care and a very highly regulated banking system...

Excellent graph! Thanks! I see that Obama is again in the lead in a Southern State, i.e., North Carolina. Breaks my heart. :)

If you believe that you will believe anything. Romney leads Obama 51% to 45% in N Carolina. According to Rasmussen polls, which are the only ones not basing their sample on the record, hysterical turnout by dems in 2008. This will not be repeated after the terrible 4 years we have been through with Obama....

You mean according to the Rasmussen Inc. that did paid consulting work for the Bush campaign in 2003-04? Or the Rasmussen Inc. whose president headlines GOP fundraisers?...

My boy, you can make up all the reasons you like about Rasmussen being biased. But, my boy, they were the most accurate pollster in both the last two presidential elections in 2008 and 2004.......

And they were by far the least accurate in 2010. Your point? Did you actually call me "boy"? This is a British website. Try to be civil.

Why do people continue to trot out that outdated Fordham report that is based on bad preliminary data from immediately after the election? Rasmussen blew the 2008 election by nearly a full percentage point. Take a good look at the FINAL Fordham report which proves that Rasmussen's accuracy was just mediocre....

"Just mediocre." Really. They may be in the middle of that list, but that doesn't mean they're mediocre. If half the kids in the class get A's or A-minuses, is an A-minus student merely mediocre? Or are you just dealing with a good class? Rasmussen predicted the final result as 52-46 Obama. The final result was 52.9-45.7 Obama. So, he calls the race for Obama by six, and Obama wins by roughly seven. That is well within Rasmussen's margin of error. That is well within any reasonable margin of error.
Additionally, he called 49 of 50 states correctly, and the only one he missed was the one nearly every reputable pollster/aggregator (including Nate Silver at FiveThirtyEight) missed: Indiana. That's a pretty solid track record. And Rasmussen came closer to the actual final results than several firms/organizations (such as Democracy Corps and Fox News) that are ranked above him, which leaves me wondering just how accurate the methodology of the Fordham report is. As for the 2012 election, Rasmussen is clearly an outlier from the other major pollsters. Romney has been up when other surveys have him down. Romney has been close when other surveys have him further behind. All you have to do is look at the polling data to see that. And if you want to default to partisan hackery as the explanation, fine. But the question you have yet to answer is whether it's Rasmussen or virtually everyone else that's guilty of deliberately skewing polls for partisan purposes. We'll learn more about that as we watch the polls over the next month, and we'll definitely see which in November -- when I expect Rasmussen will be vindicated.

Ad hominem circumstantial. "Rasmussen is Republican, and has done work for Republicans, therefore we shouldn't trust his polling firm." Being an accurate pollster is not a function of one's partisan identification. And it's delicious that you accuse Rasmussen of being a partisan by citing Media Matters, a demonstrably partisan organization for the other side. By your own worldview, your argument should be rendered invalid. If you actually look at the numbers, Rasmussen was one of the most accurate presidential pollsters in both 2004 and 2008... more accurate in 2008 than several pollsters that Fordham actually rated above him, if you look at the candidates' percentages when compared to the actual election outcome. (The most accurate pollster in 2008, according to Fordham, was Democracy Corps, who projected the final results at 50-43 for Obama. Rasmussen projected the race at 52-46 for Obama. Final result? 52.9 to 45.7, Obama. And yet Democracy Corps, which missed both candidates' totals by about three percentage points, is considered more accurate than the firm that came within a point of hitting it on the nose? Ha!)

As an Economist subscriber I do believe that there is truth in numbers, not complete truth, not all the truth, but large grains of truth, nonetheless. That is why I think the North Carolina polls are so significant. First, here are the latest polls showing that the state is a tossup trending toward Obama... Secondly, I live in NC, and the Republicans this time around don't seem so enthused with Romney/Ryan. For instance, I was talking recently with a fiscally conservative Republican who is very high up in the local Chamber of Commerce, and he was distressed about Ryan's convention speech, all the blatant misrepresentations like berating Obama for not supporting Simpson-Bowles, a plan Ryan actually cast a vote against. I remember my friend asking, "Why in [heck] are they doing this stuff?"

Because their moral compass is completely out of whack.

And how out of whack is Obama's compass when he refuses to distance himself from the dem ad about Romney being charged with the cancer death of an ex-employee's wife. And how out of whack is Obama's compass when he chooses to appear on The View as "eye candy" as he put it, as opposed to meeting with world leaders when they were in NY for the UN? He let his "girl" Hillary cover that while he was out campaigning.
As I recall, George W. Bush met with world leaders during similar circumstances, and we all know how that turned out. Which is all the more reason to stay away.

During the year in the runup to an election, a president is supposed to preside as well as whine for another term. Which is why he has the use of Air Force One, which gets him quickly where he needs to go. His obligation is to meet with world leaders - not delegate it to his girl, Hillary.

You all keep wanting to beat up Obama on tactics. "Obama uses a teleprompter! He smokes! He went golfing! He didn't meet with world leaders!" What the US electorate cares about is outcomes: The economy is growing, Osama is dead, and GM is alive. Once you all start figuring that out, you'll begin winning elections again.

American citizens in the nation's heartland have seen the overwhelming evidence of a hollow man whose moral underpinnings are that of a shape-shifter, a political chameleon who makes a used car salesman seem like a priest. Romney's lack of transparency, clear deception and fraud are being increasingly recognized across this entire country. In Romney they see something quite familiar, echoes of Enron Executive Ken Lay, Ponzi Schemer Bernie Madoff and his American Psycho like son, Andrew "Patrick Bateman" Madoff, and Philanthropic Racketeer Jeffrey Picower, who was found swimming upside down in his Palm Beach swimming pool after the Feds discovered he had extracted nearly $8 Billion from the Madoff Ponzi. Folks in America's heartland have figured out that Mitt Romney is the quintessential American Fraud, much like Joseph Smith, the founder of his fraudulent LDS Mormon Cult. Put a fork in the fraudster, cause he is done. On a positive note, Romney is up in the polls in Utah, Switzerland and the Cayman Islands.

The homosexuals are out in force in support of Romney.

Unless you are a homosexual clued in on factual stats, I might stay out of it if I were you.

A cartogram with the size of the states adjusted based on electoral votes would be nice.

NC is leaning to Obama? According to whom? This is the state that just passed a gay marriage ban by a whopping margin! Obama won NC by the slimmest margin four years ago. Don't see him win again.

Ohio passed a gay marriage ban in 2004. Obama won Ohio in 2008. UP by 10 points now. Ergo.... There goes your "theory." NPWFTL Regards

Michigan passed a gay marriage ban and is handily on Obama's side.

What's shocking about the NC gay marriage ban vote was the margin it got passed. It was so massive. Hard to believe it will swing back to Obama so quickly. I still believe it will take a miracle for Obama to win NC.

Game almost over for Romney

Like choosing between a Rock and a Hard place, The 99% Occupiers have to put up with the more likeable of 2 choices anointed by the 1% Plutocrats. Extending tax cuts to the 1% plutocrats is nothing compared to Obama's massive transfer of wealth from the poor and Middle class to the plutocrats under the Obama-Bernanke QEs to infinity.

If there's a silver lining here, it's that a Romney defeat will allow the Republicans to put up a more strongly conservative candidate in a greenfield election in 2016.

Exactly. If at first you don't succeed,....

Hillary will be thrilled to hear that.

"it's that a Romney defeat will allow the Republicans to put up a more strongly conservative candidate in a greenfield election in 2016." Yes, and lose to the Democrats by an even larger margin in 2016 than in 2012. The country's demographics are moving in the direction of the Democrats.
The only way the republicans will win in 2016 is to put a less hate-filled candidate on the ballot, not a more hate-filled one (i.e., strongly conservative). If the republicans were intelligent, they would run John Huntsman....but of course, even Santorum admitted that the smart people do not vote republican anymore.

Alaric, DO NOT forget that demographics have changed dramatically in the U.S. whether republicans like it or not. Ticketing a more "conservative" team in 2016 is nothing more than going against a bigger tide. Clean the GOP from the garbage (i.e. bigots, radical evangelicals from the South, and special interests), and then learn how to ask Americans for support. Then and only then, the "party" can get some kind of credibility.

Cosmic, you read my mind with Huntsman. Maybe this is a start. Maybe we are not the only two thinking of a Huntsman-??? ticket?

Oh, I'd be happy to vote for a guy like Huntsman, though I remain deeply skeptical of his party. The sad thing is that there are good arguments for fiscally conservative positions, like eliminating the deficit, reducing the national debt, addressing how to pay for Medicare, Medicaid and Social Security. But no one on the political scene sticks with those issues. Even Ryan, who put out a budget, drags himself into hate speech and personal attacks when the process of compromise and gaining consensus on the set of national priorities starts. Norquist and the signers of his pledge stand against tax increases, which does nothing but block discussion of tax restructuring and close the discussion, compromise and consensus building on the proper size and financing of the Federal Government. Conservatives need a champion who can address their principles without getting lost in defamation and social issues. I just don't see how the Republican party will put up such a candidate in this or the next several election cycles.

Ah, gentlemen, I never said it would be a silver lining for the Republicans,,, ;)

Yea, the more radical and ignorant the better...

election should be about facts. since both sides have been making up theories and not facts, it's hard to actually believe who is right. The Politics of Fact Checking

True. I do hope you keep looking for facts till you find the ones that you like.

Facts are not hard to find. In fact, google nowadays can give you a better "gut check" than any media outlet. And absolutely better than Fox News. As a voter it should be YOUR responsibility to study the issues a little before casting a vote on Nov 6. But if you let biased media and lying campaign managers make a decision for you, then it is YOUR fault and nobody else's. There is a galactic difference between an educated and informed voter and a "birther".

Could Be An Upset

There's a quiet voice out there. This voice is not heard in the polls. This voice is anti Afro-American. These are sad people with bigotry in their hearts disappointed by where they've landed. This voice, valued at 2.7% of the election votes, could surprise and decide this election. It feels like President Obama is way ahead. But, this voice is silent but could be deadly. And it's out there. The debates will decide if this voice swings the election for Governor Romney. Warmest, Richard Michael Abraham, Founder, The REDI Foundation

Rather than an Afro, many in teacups could not have made it more than plain that they believe the dude to be a Muslim. In fact, even the American Ummah mostly considers the dude an Apostate.
So sorry to hear that you keep hearing voices nobody else can hear.

The bigotry part is true, and in fact they are very sad people in need of emotional and psychological shelter. Whether they can swing a national election or not (very difficult at 2.7%), demographics are changing and these "2.7%" are nothing more than reacting to a reality. But do not fear; "ethnic" people will not savagely riot your town or home and destroy suburbia "America"; if it hasn't happened already then why fear now?

Yeah, I heard a lot about that 4 years ago as well. All these racists who didn't vote against "the black guy" four years ago have awakened and decided to get interested in politics? Not buying it. In my experience, the garden variety racist is more likely to say "to hell with them all" than vote in a way to make sure an African American doesn't get in the oval office. By the way, where are you getting the 2.7% from? Seriously, I'm curious. Well, most aggregates have Obama up by more than 2.7% anyway.

"So sorry to hear that you keep hearing voices nobody else can hear." Remember you're talking to Richard Michael Abraham, the Failed Predictions salesman.

Ahh my friend from the REDI Foundation... where we disagreed in economics, we agree in politics. Though I am hoping Obama wins (and by a serious margin) and that the Democrats take back seats in Congress (I actually wouldn't mind Republicans winning, as long as it is the tea party losing), there is still a large number of voters who hate Obama for absolutely no reason aside from their inability to accept the reality that the President of the USA is not white. My parents live in a small Florida town on the east coast where Obama is considered the anti-Christ. Why? Because he hates Israel of course, is what the folks say. Ohh yes, and he is secretly a Muslim not born in this country. Do they like his policies? Of course not, they yell. Well, which ones do they disagree with? For once, they are silent. Will these people actually make a difference? I don't think so, but the fact that they are in key swing states like Florida increases their potency. The only slight disagreement I have with you here is that I don't think the debates matter for these people. Their minds were made up four years ago. However, I do think the debates matter for the center, and if Romney does well he can pull voters back. That is a remote possibility of course. Romney struggled with morons like Gingrich, Bachmann, Perry, and Cain. Obama will walk all over him. Cheers?

Mr. Pelican, you should watch this talk. Just because someone disagrees with you does not mean they are ignorant, stupid, or evil.

"So .... Prince Mitt wants to be King." So King Obama wants to hang onto his crown? Even though in 2009 he said if he couldn't fix the economy, he would be a one-term president?

My God. Is there a web site on earth that you haven't plastered this copypasta all over? Reported.

stanchaz is right. It's appropriate, well written, timely and will be copy pasted by me, as this is the first time I've seen it.

How can someone argue Obama hasn't fixed the economy? Have you not noticed the turnaround from free fall to stability? Sure there are problems like debt, competitiveness, growth, and jobs, but this isn't your damn cell phone that you just bring into the shop. It's an economy, the biggest in the world. The problems take years to fix, and to stop the downward momentum and turn it around does not happen overnight. And the conservatives say, "Oh but we always come roaring back to growth".
Few things here. This economic crisis was global (without the problems in Europe and China, growth would be roaring in the US). It was a financial crisis (they last longer). And it is the large corporations' fault (they are making record profits while refusing to hire while the government attempts in vain to sustain adequate liquidity levels). "Ohh but the administration is causing so much uncertainty". Really this shouldn't even qualify as a legitimate complaint, because it is not like there has ever been a time without economic uncertainty, but we've heard it so much so here we go. The largest example is the fiscal cliff (which you can't really blame on Obama since it was the tea party who said it was a hostage worth taking). There is also healthcare uncertainty (the plan is really not that hard to understand). There is financial uncertainty (only because the House is trying to repeal Dodd-Frank). There is uncertainty over tax policy (again, when has there ever not been). But hey, let's just blame Obama. I really hope he gets re-elected because he has truly done a wonderful job, and not just on economic issues but also on foreign policy, healthcare, immigration reform (though he needs to do more), energy policy, and higher education. If he gets a second term, and if it is as good as his first, he may go down in history on the same level as FDR.

Obama made lots of promises. No one ever keeps every promise, especially when you say something like "I'll have unemployment under 8%" when you don't really know how bad the problem is.

But the problem is really bad. I mean, look at Romney. Look at the inequality.

your post should be posted on all media

a question to the republicans out there: first, i agree with you (the true republicans, not teapublicans) on some issues -- such as 'traditional family values', stance against homosexuals, fiscal conservatism (although its been a while since a republican embraced this concept -- even reagan failed on that point)... but, suppose, just suppose, come november, the current president wins his re-election bid. i know when he won the first time, the mantra was "he will be a one termer". well, if he wins, then you failed. in that case, will you be man enuff to acknowledge your failure? will u be man enuff to become the party of "let's compromise", instead of the party of 'no'? will u be man enuff to work with this president in helping america and americans move past this ditch and onto higher planes? just a question. as for romney.. where are the 'hanging chads' when you need them?

"i know when he won the first time, the mantra was "he will be a one termer". well, if he wins, then you failed." No, we will not have failed. He will have been re-elected by a bunch of Americans who adore him, will lie for him, and know squat about the main issues facing the country.

Are you seriously trying to imply that Democrats are all ignorant sheep and Republicans are discerning, genuinely concerned citizens of the world? Aside from the fact that post-graduate educational attainment is positively correlated with voting Democrat, the fact that you went on such an irrelevant tangent poses an interesting question: why would you rather engage in meaningless, childish ad-hominem attacks instead of explaining, to the best of your ability to be objective, why the Republican party is what America needs?

Pelican, your tantrum only proves that IF you really represent republicans, the GOP has been in serious trouble for a while.
Aside from reminding you how uneducated and uninformed you are, review the polls from every media source. It is NOT a coincidence that every poll source except FOX is close in numbers. If you cannot debate simple issues and resort to childish insults, then PLEASE do not vote!

Brian, please stop your politicking here. There's plenty of room elsewhere on the Internet for that. If you have something reasonable or factual to say, please do, but it seems apparent your obvious hatred of Obama precludes you from saying anything useful to the Economist crowd.

This article is a complete fabrication, especially where I see N Carolina as a lean Obama state. According to Rasmussen: N Carolina is Romney 51% Obama 45%. That should be lean Romney... Colorado is Romney 47% Obama 45%. That should be neck and neck because it is within the margin of error.... Florida is Romney 46% Obama 48%. That should be neck and neck because it is within the margin of error.... Ohio is Romney 46% Obama 47%. That should be neck and neck as it is within the margin of error... That is just 4 states. All other pollsters, except Rasmussen, use the 2008 GOP/Dem voting percentages. This skews the results of the poll because there is no way that democrat voters are anywhere near as enthusiastic or motivated to vote as they were with the hope and change hysteria that swept the country in 2008. In the past two presidential elections, Rasmussen was the most accurate of all pollsters. There is going to be a lot of wailing and gnashing of teeth when the real poll is taken on November 6.

Rasmussen is the least accurate polling agency in the nation. That's why no one pays attention to it or your silly arguments citing it.

"Rasmussen is the least accurate polling agency in the nation." You are a blatant liar. Are you happy about that? One thing about this campaign process that has shocked me is the number of Americans who will basically lie through their teeth in their adoration of Obama. "His research shows Rasmussen Reports tied for first place of 23 firms for accuracy. Politico and the Wall Street Journal also ranked Rasmussen as one of the most accurate in the last presidential race." You can see from their day-by-day results in the 2008 election that they were almost exactly mirroring the final outcome. In 2004 Rasmussen again was closest...

In 2010, though, Rasmussen did not do well. And since we are much closer to 2010 than to 2004, that's the part people look at. There might, just might, be a reason other than "lying through their teeth" that all the pollsters, including Rasmussen, are showing Obama up.

"In 2010, though, Rasmussen did not do well." Liar.

It is amazing to me how many Americans will lie through their teeth in their bland adoration for Obama. Not a very laudable American trait.

"Calm down buddy. I wouldn't want you to have an aneurysm. I know how hopped-up you right-wingers get while spewing your Fox News propaganda." Calm down buddy, I wouldn't want you to have an aneurysm. I know how hopped-up you lefties get while spewing your mainstream media (ABC, NBC, MSNBC, CNN) propaganda.

lol, nice job, you have resorted to an adult equivalent of "I know you are, but what am I?" Also, I never said anything of elevated tone. Why would I need to calm down? You are the one calling everyone a liar. Third, I don't have cable TV, nice try though. You are as unimaginative as you are misinformed.
I'm just going to quietly puzzle over whether your inability to tell 2004 from 2010 is that time travel thing, or the take the math out behind the barn and shoot it thing. Also, it's interesting that his whole dismissal of other pollsters hinges on the condition that Democrats will not turn out in large numbers due to diminished enthusiasm. He's just praying that Democrats will not vote -- because his trust in Democracy is inversely correlated with how many citizens actually exercise their right to vote. Much to his dismay, though, Democrats are waking up from their sleep and have all but closed the enthusiasm gap. If anything, it's reopening in their favor...

"There is going to be a lot of wailing and gnashing of teeth when the real poll is taken on November 6." I quite agree. I hope you remember to wear your night guard on November 6th.

Brian, This is the Economist. Only educated adults read this outlet. Fox.com is only a click away. I have read all your posts, and you truly sound like the little league kid who just lost a game and went home crying. Dude, calm down and accept the truth; Romney is losing and that is his fault, period. Make your voice heard with real UNBIASED facts, but you have to wait until 2016. And lastly, insulting others in a blog because you are losing your argument sounds like a "tweeting" chick that just got dumped for a cheerleader.

I read MANY comments on different news sites... I see one comment over and over.... "I'm unemployed"... and these are republicans saying this..... that they are UNEMPLOYED and are mad that Obama has not done enough to get them employed again.... WTH???? I'm a liberal.... and I have NEVER relied on my President to make sure I am employed.... it is really the most ridiculous line of thinking, that the President should find you a job..... I guess when they say Pull Yourself up by the bootstraps, that means OTHER people....

The Economist has kept this COMMENT THREAD the same since 23Jul12 => OVER TWO MONTHS! This is an important article and valuable data, but a LOT has changed since July12 in the Presidential Sweepstakes. Perhaps it is time to RESET THE ODOMETER?

ah... I see... so many of the comments here are quite old????

"and these are republicans saying this..... that they are UNEMPLOYED" Total rubbish. There is no evidence at all that republicans are more likely to be unemployed than democrats. If you have a reliable link to support this, provide it. Otherwise I will assume that you are an Obama shilling liar. Plain and simple.

first off.... perhaps you should brush up on your READING comprehension .... I did NOT say they were MORE likely to be unemployed..... and secondly... being called an Obama shilling liar, coming from YOU is RICH...... didn't expect your shilling to be challenged did you?

And, silly gurl, when I post negative issues about Obama, I provide links supporting that. Unlike you, where it is all rhetoric.

You fail horribly at deductive reasoning. I would not try to attend law school if I were you.

Anybody with even basic logical reasoning skills can tell that you are right. Of course Mr Brian Pelican, who accused you of something that you did not do, is probably going to ask you to try to prove that you did not do something, or try to prove a negative, in his next bout of failed logic.

Thank you.... I do find his logical reasoning skills to be a bit, shall I say.... Lacking.... Tho, it seems, par for the course, for him, and others like him.... :-) by the way, your post above, made me laugh....
Silly boy, I have an MBA from Harvard, class of 1982. I would consider studying law - lawyers lie through their teeth daily. It is what they do and is a reason lawyers are despised by a majority of Americans.

Didn't Bush also have an MBA from Harvard??? lol

Silly boy, I am actually Mitt Romney and I have a JD/MBA from Harvard, class of 1975. I Googled "Brian Pelican Harvard" and "Brian Pelican HBS" and nothing came up. Usually people who have MBAs from HBS leave a large online footprint. Either "Brian Pelican" is not his real name or he is lying. My guess is on the latter. Judging by his logical reasoning skills and his poor grasp of the English language, there's no way he could have scored well on the GMAT.

Doesn't Obama have a law degree from Harvard. lol

But HBS and HLS are not the same school, are they? (See you fail at logic again!) Anyways I am not actually Mitt Romney (sorry forgot!), but instead Tony Stark, the billionaire philanthropist super hero.

^ Stupid people in Seattle

Links full of irrelevant vomit. You have an MBA alright. A Major Bad Attitude.
http://www.economist.com/comment/1655261
Why won't this code work? On an online compiler it returns a segmentation fault; on my Linux VPS, a memory leak...

#include <ctype.h>
#include <stdio.h>
#include <string.h>

char *a_foo(char *str)
{
    unsigned char *p = (unsigned char *)str;
    while (*p) {
        *p = 'a';
        p++;
    }
    return str;
}

int main()
{
    char *test = "TestTest";
    a_foo(test);
    printf("result: %s\n", test);
}

The string literal "TestTest" is probably stored in read-only memory in your environment, so the code in a_foo that attempts to write to it would fail. In C++ the type of a string literal is an array of const char, and the compiler should warn you if you try to assign it to a non-const pointer variable; in C the literal's type is a plain char array, but modifying it is still undefined behavior.
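A minimal sketch of the standard fix, keeping the question's code but changing one line: initialize a writable array from the literal instead of pointing at the literal itself.

#include <stdio.h>

char *a_foo(char *str)
{
    unsigned char *p = (unsigned char *)str;
    while (*p) {
        *p = 'a';
        p++;
    }
    return str;
}

int main(void)
{
    /* An array initialized from a literal is a writable copy;
       a char * pointing at the literal aliases read-only storage. */
    char test[] = "TestTest";
    a_foo(test);
    printf("result: %s\n", test);  /* prints: aaaaaaaa */
    return 0;
}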
https://codedump.io/share/doFJEjbGcuiM/1/operation-on-pointers-in-c
Hi. I have a site with HTML5 and CSS styles. I want to put on the right side an SWF bar with scrolling images. When I click an image inside this SWF, I want to open the image in fullscreen mode. The fullscreen is another SWF that shows all images in fullscreen with a button for the next image. My question is: how can I call another SWF when I click a button in the first SWF? Do I need to put this fullscreen SWF inside a div? Or create a page and call it when the button is clicked? Thanks, and sorry for my English.

I could use the same movie, but my stage is very small, the same size as my small images... when I click a small image, I could gotoAndStop to a frame with the big image, but it is shown at the same size as the stage...

I have another problem. I can't put all the full-sized images in one SWF, because to show one image I would load all the images. I need to call the specific image and show it in fullscreen. I think I need to use JavaScript too. I still have to verify, but I think using the marquee tag is the best option: I have a scroll effect, and I load the full images only when I click the button. I need to know if I can load in fullscreen now...

OK, I managed to do my slide. But I have a new problem: when I am in fullscreen and press ESC to exit, I use this code to go back to the photo menu:

stage.addEventListener(KeyboardEvent.KEY_DOWN, mKey);
function mKey(event:KeyboardEvent):void {
    trace("ok");
    var key:uint = event.keyCode;
    switch (key) {
        case Keyboard.ESCAPE :
            mc.gotoAndStop(1);
            break;
    }
}

It works, but after pressing ESC to exit fullscreen mode, I need to click on the movie and press ESC again for the key to be detected. It is as if it lost focus when I exited fullscreen.

OK, I used this in an ENTER_FRAME handler:

if (stage.displayState == StageDisplayState.FULL_SCREEN) {
} else {
    mc.gotoAndStop(1);
}

do you have any more problems?

I don't understand what is wrong in my code:

import flash.events.MouseEvent;

stop();

avalanche.addEventListener(MouseEvent.CLICK, avalancheUP);
function avalancheUP(e:MouseEvent):void {
    MovieClip(root).mc.gotoAndStop(2);
    trace("avalanche");
    //stage.displayState = StageDisplayState.FULL_SCREEN;
}

The code above I use in mc_Buttons, a movie clip with all my buttons, placed on the stage. The mc is another movie clip, also placed on the stage. The test trace works perfectly, but with gotoAndStop nothing happens...

EDIT1: If I use MovieClip(root).gotoAndStop(2); it works correctly. But I need to change the current frame in the mc movie clip and not on the stage.

EDIT2: If I use trace(MovieClip(parent).mc.totalFrames); it works perfectly, I get all the frames in my mc movie... Only the gotoAndStop doesn't work... crazy...

... copy and paste the trace output from the following to confirm mc is a child of root at the time your code executes:

import flash.events.MouseEvent;

stop();

avalanche.addEventListener(MouseEvent.CLICK, avalancheUP);
function avalancheUP(e:MouseEvent):void {
    trace(MovieClip(root).mc);
    MovieClip(root).mc.gotoAndStop(2);
    //stage.displayState = StageDisplayState.FULL_SCREEN;
}

[object mc_1] Is good?

EDIT1: When this happens to me, I solve it by starting the project from zero. I did this and the problem is solved. Nothing has changed, but it works, so thanks for your attention.

does the movieclip that you want to direct to its frame 2 have class "mc_1"? if so, your code is ok. to confirm your code is working, remove the above trace and add a trace(this.name,this.currentFrame) to frame 2 of your movieclip with class mc_1 and retest.

Ohh, of course, Kglad, I'm making a basic error.
If I put the code in the first frame and the movie clip in the second frame, the code does not find the movie clip; it does not return an error, but it does not work...

so, this thread is closed?

finished, and thanks. the result can be viewed in ...

you're welcome.
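For the ESC/focus problem earlier in this thread, an alternative to polling stage.displayState in an ENTER_FRAME handler is to listen for the stage's FullScreenEvent, which is dispatched whenever the display state changes, including when the user presses ESC. A sketch using the thread's mc movie clip:

import flash.events.FullScreenEvent;

stage.addEventListener(FullScreenEvent.FULL_SCREEN, onFullScreen);
function onFullScreen(e:FullScreenEvent):void {
    // e.fullScreen is false when fullscreen was just exited.
    if (!e.fullScreen) {
        mc.gotoAndStop(1); // back to the menu frame
    }
}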
http://forums.adobe.com/thread/1215642
Encryption, List Ranges, and Python... - AtomBombed

Here is my module:

char_dict = [ ... ]  # character list elided

def encrypt(msg, key):
    end_msg = ""
    through = 0
    for char in msg:
        end_msg = end_msg + char_dict[char_dict.index(char) + key]
        through = through + 1
    return end_msg

Here is the file that uses the module:

import complexx
print(complexx.encrypt("zebra", 15))

My problem is that when I use the key to offset the character to a new one by adding the index, and getting the resulting index and using its string from the list, if you go too far (as demonstrated in the example code, you can try it out) it will say "string index out of range" because I am using a key that hops too many spaces ahead, and ends up going out of bounds (or indexes) of the list. I need to know how to be able to make the loop (if the index hits the end of the dictionary) start back at the beginning. - AtomBombed

An example: If I had the string I wanted to encrypt which was "z", and I was using the key "8", I would technically be going out of range by 1 index, but I want it to return the value of "a" (because I want it to loop back to the beginning). Is any of this making sense? Sum-up: If the list index is out of range, start the index back at the beginning.

To resolve the string index out of range could you not do something like:

for char in msg:
    size = (char_list.index(char) + key) % len(char_list)
    end_msg = end_msg + char_list[size]
    through = through + 1
return end_msg

Also for purposes of naming it should be char_list not char_dict due to the fact that you have used a list not a dict.

You can also generate your char_list if you want. I wanted to try for fun; it was a little more convoluted than I thought it would be. But I learnt something by trying. Maybe it can be expressed a lot easier than this. But ok, this is what I did.

extra_chars = ["?","!",".",",","_"," "]

# 2 steps
x = [(chr(ind + 32), chr(ind)) for ind in range(65,91)]
y = [tp for tupl in x for tp in tupl]
char_list1 = y + extra_chars
print char_list1, '\n'

# 1 step
char_list2 = [tp for tupl in [(chr(ind + 32), chr(ind)) for ind in range(65,91)] for tp in tupl] + extra_chars
print char_list2

Use the Force Luke! == Use the Standard Library

@Phuket2 ...

import string

char_list2 = []
for i in xrange(len(string.ascii_lowercase)):
    char_list2.append(string.ascii_lowercase[i])
    char_list2.append(string.ascii_uppercase[i])
print(char_list2)

import itertools

char_list3 = list(itertools.chain(*zip(string.ascii_lowercase, string.ascii_uppercase)))
print(char_list3)

@ccc , LOL, I noticed you did an edit 😋 I was also using the string module in the first place. But because in the end my solution didn't need it, I didn't use it. But yes, I am a little list comp happy at the moment. But I really wanted to solve it using a list comp. I tried all sorts of crazy things with zip and inverse zip (*). But I had to find the answer on stackoverflow. But still comfortable enough with list comps now to combine the outer answer with my inner answer to the problem. I have never used itertools. That will come. But I hope we have not overwhelmed @AtomBombed. But @AtomBombed, this is a great place to learn because of the various inputs from people. I am still new to Python, but the guys here have taught me so much, in the right way. I hope you get something from our comments.

Oh, @ccc I was also using xrange vs range. I think on this small list, either would be OK. Larger ranges, for sure use xrange.

@AtomBombed , also good reading for you.
In this case we are getting 32 ints (overhead excluded), but if the range was range(1, 1000000), for example, range would make a list of all the numbers and return it. A lot of memory and time. xrange, on the other hand, is known as a generator. It will return each number as required. A very small memory footprint. Maybe you are thinking this is too much info for you at the moment, but really, the earlier you learn about this stuff the better. I hope I was not a little wrong in my explanation above, otherwise I am sure I will hear about it 😁😛 but it will be in a constructive way
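Putting the thread's pieces together, here is a self-contained sketch of the wraparound cipher; encrypt and the interleaved character list follow the posts above, while the rest of the scaffolding is illustrative:

import string

extra_chars = ["?", "!", ".", ",", "_", " "]
# a, A, b, B, ... z, Z followed by the punctuation characters
char_list = [c for pair in zip(string.ascii_lowercase,
                               string.ascii_uppercase) for c in pair] + extra_chars

def encrypt(msg, key):
    end_msg = ""
    for char in msg:
        # the modulo wraps the index back to the start of the list
        end_msg += char_list[(char_list.index(char) + key) % len(char_list)]
    return end_msg

print(encrypt("zebra", 15))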
https://forum.omz-software.com/topic/2480/encryption-list-ranges-and-python/?
I'm trying to get the 'hello world' of streaming responses working for Django (1.2). I figured out how to use a generator and the yield keyword. But the response is still not streaming. I suspect there's a middleware that's mucking with it -- maybe the ETag calculator? But I'm not sure how to disable it. Can somebody please help? Here's the "hello world" of streaming that I have so far:

import time
from django.http import HttpResponse

def stream_response(request):
    resp = HttpResponse(stream_response_generator())
    return resp

def stream_response_generator():
    for x in range(1, 11):
        yield "%s\n" % x  # Returns a chunk of the response to the browser
        time.sleep(1)

You can disable the ETag middleware using the condition decorator. That will get your response to stream back over HTTP. You can confirm this with a command-line tool like curl. But it probably won't be enough to get your browser to show the response as it streams. To encourage the browser to show the response as it streams, you can push a bunch of whitespace down the pipe to force its buffers to fill. Example follows:

import time
from django.http import HttpResponse
from django.views.decorators.http import condition

@condition(etag_func=None)
def stream_response(request):
    resp = HttpResponse(stream_response_generator(), content_type='text/html')
    return resp

def stream_response_generator():
    yield "<html><body>\n"
    for x in range(1, 11):
        yield "<div>%s</div>\n" % x
        yield " " * 1024  # Encourage browser to render incrementally
        time.sleep(1)
    yield "</body></html>\n"

A lot of the django middleware will prevent you from streaming content. Much of this middleware needs to be enabled if you want to use the django admin app, so this can be an annoyance. Luckily this has been resolved in the django 1.5 release. You can use StreamingHttpResponse to indicate that you want to stream results back, and all the middleware that ships with django is aware of this and acts accordingly, not buffering your content output but sending it straight down the line. Your code would then look like the following, using the new StreamingHttpResponse object:

import time
from django.http import StreamingHttpResponse

def stream_response(request):
    return StreamingHttpResponse(stream_response_generator())

def stream_response_generator():
    for x in range(1, 11):
        yield "%s\n" % x  # Returns a chunk of the response to the browser
        time.sleep(1)

Note on Apache

I tested the above on Apache 2.2 with Ubuntu 13.04. The apache module mod_deflate, which was enabled by default in the setup I tested, will buffer the content you are trying to stream until it reaches a certain block size, then gzip the content and send it to the browser. This will prevent the above example from working as desired. One way to avoid this is to disable mod_deflate for the site by putting the following line in your apache configuration:

SetEnvIf Request_URI ^/mysite no-gzip=1

This is discussed more in the "How to disable mod_deflate in apache2?" question.
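To verify the streaming behavior from a shell, curl's no-buffer flag prints each chunk as it arrives (the URL is whatever your urlconf maps to stream_response; the one below is only an example):

curl -N http://localhost:8000/stream/

If the numbers show up one per second instead of all at once after ten seconds, the response is streaming.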
https://pythonpedia.com/en/knowledge-base/2922874/how-to-stream-an-httpresponse-with-django
On Saturday 10 January 2004 14:08, Christophe Saout wrote:
> Hi,
>
> I found some bio_list functions in dm-raid1.c and I thought they could
> be made available for other users too.
> There's probably a better place for this.

Heh...maybe include/linux/bio.h? :)

Now that you've pointed out this stack vs. fifo problem a couple times, I'm beginning to wonder why the bi_next field in the bio is a struct bio * instead of a struct list_head. It would seem that changing it to a list_head would eliminate the need for the bio_list stuff, as well as eliminate the original problem of accidentally reordering the bios. Of course, it's probably way too late to make this kind of change, since it would involve auditing every driver that ever holds a pointer to a bio to see if it's treating that pointer as the head of a list.

> --- linux.orig/drivers/md/dm-bio-list.h	1970-01-01 01:00:00.000000000 +0100
> +++ linux/drivers/md/dm-bio-list.h	2004-01-10 20:43:32.442353960 +0100
> @@ -0,0 +1,70 @@
> +/*
> + * bio queue functions
> + *
> + * Copyright (C) 2001, 2002 Sistina Software
> + *
> + * This file is released under the LGPL.
> + */
> +
> +#ifndef DM_BIO_LIST_H
> +#define DM_BIO_LIST_H
> +
> +#include <linux/bio.h>
> +
> +struct bio_list {
> +	struct bio *head;
> +	struct bio *tail;
> +};
> +
> +extern inline void bio_list_init(struct bio_list *bl)
> +{
> +	bl->head = bl->tail = NULL;
> +}
> +
> +extern inline void bio_list_add(struct bio_list *bl, struct bio *bio)
> +{
> +	bio->bi_next = NULL;
> +
> +	if (bl->tail)
> +		bl->tail->bi_next = bio;
> +	else
> +		bl->head = bio;
> +
> +	bl->tail = bio;
> +}
> +
> +extern inline void bio_list_merge(struct bio_list *bl, struct bio_list *bl2)
> +{
> +	if (bl->tail)
> +		bl->tail->bi_next = bl2->head;
> +	else
> +		bl->head = bl2->head;
> +
> +	bl->tail = bl2->tail;
> +}
> +
> +extern inline struct bio *bio_list_pop(struct bio_list *bl)
> +{
> +	struct bio *bio = bl->head;
> +
> +	if (bio) {
> +		bl->head = bl->head->bi_next;
> +		if (!bl->head)
> +			bl->tail = NULL;
> +
> +		bio->bi_next = NULL;
> +	}
> +
> +	return bio;
> +}
> +
> +extern inline struct bio *bio_list_get(struct bio_list *bl)
> +{
> +	struct bio *bio = bl->head;
> +
> +	bl->head = bl->tail = NULL;
> +
> +	return bio;
> +}
> +
> +#endif

--
Kevin Corry
kevcorry us ibm com
http://www.redhat.com/archives/dm-devel/2004-January/msg00035.html
The server: ChatServer.java

package edu.lmu.cs.networking;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.HashSet;

/**
 * A multithreaded chat room server. When a client connects the
 * server requests a screen name by sending the client the
 * text "SUBMITNAME", and keeps requesting a name until
 * a unique one is received. After a client submits a unique
 * name, the server acknowledges with "NAMEACCEPTED". Then
 * all messages from that client will be broadcast to all other
 * clients that have submitted a unique screen name. The
 * broadcast messages are prefixed with "MESSAGE ".
 *
 * Because this is just a teaching example to illustrate a simple
 * chat server, there are a few features that have been left out.
 * Two are very useful and belong in production code:
 *
 *     1. The protocol should be enhanced so that the client can
 *        send clean disconnect messages to the server.
 *
 *     2. The server should do some logging.
 */
public class ChatServer {

    /**
     * The port that the server listens on.
     */
    private static final int PORT = 9001;

    /**
     * The set of all names of clients in the chat room. Maintained
     * so that we can check that new clients are not registering a name
     * already in use.
     */
    private static HashSet<String> names = new HashSet<String>();

    /**
     * The set of all the print writers for all the clients. This
     * set is kept so we can easily broadcast messages.
     */
    private static HashSet<PrintWriter> writers = new HashSet<PrintWriter>();

    /**
     * The application main method, which just listens on a port and
     * spawns handler threads.
     */
    public static void main(String[] args) throws Exception {
        System.out.println("The chat server is running.");
        ServerSocket listener = new ServerSocket(PORT);
        try {
            while (true) {
                new Handler(listener.accept()).start();
            }
        } finally {
            listener.close();
        }
    }

    /**
     * A handler thread class. Handlers are spawned from the listening
     * loop and are responsible for dealing with a single client
     * and broadcasting its messages.
     */
    private static class Handler extends Thread {
        private String name;
        private Socket socket;
        private BufferedReader in;
        private PrintWriter out;

        /**
         * Constructs a handler thread, squirreling away the socket.
         * All the interesting work is done in the run method.
         */
        public Handler(Socket socket) {
            this.socket = socket;
        }

        /**
         * Services this thread's client by repeatedly requesting a
         * screen name until a unique one has been submitted, then
         * acknowledges the name and registers the output stream for
         * the client in a global set, then repeatedly gets inputs and
         * broadcasts them.
         */
        public void run() {
            try {

                // Create character streams for the socket.
                in = new BufferedReader(new InputStreamReader(
                    socket.getInputStream()));
                out = new PrintWriter(socket.getOutputStream(), true);

                // Request a name from this client. Keep requesting until
                // a name is submitted that is not already used. Note that
                // checking for the existence of a name and adding the name
                // must be done while locking the set of names.
                while (true) {
                    out.println("SUBMITNAME");
                    name = in.readLine();
                    if (name == null) {
                        return;
                    }
                    synchronized (names) {
                        if (!names.contains(name)) {
                            names.add(name);
                            break;
                        }
                    }
                }

                // Now that a successful name has been chosen, add the
                // socket's print writer to the set of all writers so
                // this client can receive broadcast messages.
                out.println("NAMEACCEPTED");
                writers.add(out);

                // Accept messages from this client and broadcast them.
                // Ignore other clients that cannot be broadcasted to.
                while (true) {
                    String input = in.readLine();
                    if (input == null) {
                        return;
                    }
                    for (PrintWriter writer : writers) {
                        writer.println("MESSAGE " + name + ": " + input);
                    }
                }
            } catch (IOException e) {
                System.out.println(e);
            } finally {
                // This client is going down! Remove its name and its print
                // writer from the sets, and close its socket.
                if (name != null) {
                    names.remove(name);
                }
                if (out != null) {
                    writers.remove(out);
                }
                try {
                    socket.close();
                } catch (IOException e) {
                }
            }
        }
    }
}

The client: ChatClient.java

package edu.lmu.cs.networking;

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;

/**
 * A simple Swing-based client for the chat server. Graphically
 * it is a frame with a text field for entering messages and a
 * textarea to see the whole dialog.
 *
 * The client follows the Chat Protocol which is as follows.
 * When the server sends "SUBMITNAME" the client replies with the
 * desired screen name. The server will keep sending "SUBMITNAME"
 * requests as long as the client submits screen names that are
 * already in use. When the server sends a line beginning
 * with "NAMEACCEPTED" the client is now allowed to start
 * sending the server arbitrary strings to be broadcast to all
 * chatters connected to the server. When the server sends a
 * line beginning with "MESSAGE " then all characters following
 * this string should be displayed in its message area.
 */
public class ChatClient {

    BufferedReader in;
    PrintWriter out;
    JFrame frame = new JFrame("Chatter");
    JTextField textField = new JTextField(40);
    JTextArea messageArea = new JTextArea(8, 40);

    /**
     * Constructs the client by laying out the GUI and registering a
     * listener with the textfield so that pressing Return in the
     * listener sends the textfield contents to the server. Note
     * however that the textfield is initially NOT editable, and
     * only becomes editable AFTER the client receives the NAMEACCEPTED
     * message from the server.
     */
    public ChatClient() {

        // Layout GUI
        textField.setEditable(false);
        messageArea.setEditable(false);
        frame.getContentPane().add(textField, "North");
        frame.getContentPane().add(new JScrollPane(messageArea), "Center");
        frame.pack();

        // Add Listeners
        textField.addActionListener(new ActionListener() {
            /**
             * Responds to pressing the enter key in the textfield by sending
             * the contents of the text field to the server. Then clear
             * the text area in preparation for the next message.
             */
            public void actionPerformed(ActionEvent e) {
                out.println(textField.getText());
                textField.setText("");
            }
        });
    }

    /**
     * Prompt for and return the address of the server.
     */
    private String getServerAddress() {
        return JOptionPane.showInputDialog(
            frame,
            "Enter IP Address of the Server:",
            "Welcome to the Chatter",
            JOptionPane.QUESTION_MESSAGE);
    }

    /**
     * Prompt for and return the desired screen name.
     */
    private String getName() {
        return JOptionPane.showInputDialog(
            frame,
            "Choose a screen name:",
            "Screen name selection",
            JOptionPane.PLAIN_MESSAGE);
    }

    /**
     * Connects to the server then enters the processing loop.
     */
    private void run() throws IOException {

        // Make connection and initialize streams
        String serverAddress = getServerAddress();
        Socket socket = new Socket(serverAddress, 9001);
        in = new BufferedReader(new InputStreamReader(
            socket.getInputStream()));
        out = new PrintWriter(socket.getOutputStream(), true);

        // Process all messages from server, according to the protocol.
        while (true) {
            String line = in.readLine();
            if (line.startsWith("SUBMITNAME")) {
                out.println(getName());
            } else if (line.startsWith("NAMEACCEPTED")) {
                textField.setEditable(true);
            } else if (line.startsWith("MESSAGE")) {
                messageArea.append(line.substring(8) + "\n");
            }
        }
    }

    /**
     * Runs the client as an application with a closeable frame.
     */
    public static void main(String[] args) throws Exception {
        ChatClient client = new ChatClient();
        client.frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        client.frame.setVisible(true);
        client.run();
    }
}
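A possible session for trying the pair out, assuming the sources live in directories matching the package name:

javac edu/lmu/cs/networking/ChatServer.java edu/lmu/cs/networking/ChatClient.java
java edu.lmu.cs.networking.ChatServer
java edu.lmu.cs.networking.ChatClient    # run once per chatter, in separate terminals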
https://www.scribd.com/document/328334863/Sisteme-distribuite
Leverage is a thin wrapper around the redis client that integrates your lua scripts as methods AND supports reliable and fault-tolerant Pub/Sub on top of redis.

Leverage is an abstraction on top of the fabulous redis client for Node.js. It makes it much easier to work with lua scripting in Redis, as well as providing some missing features in Redis through the power of lua scripting.

The package should be installed through npm, which is installed by default when you download node.js:

npm install leverage --save

Because we are introducing the scripts as methods on the Leverage.prototype, there are a couple of names that are blocked for usage, as they would destroy the module's internals. We've made sure that most of the internals of this module are namespaced under the _ property, but there are however a couple of methods exposed on the prototype:

- _ Our private internal namespace for logic and options.
- readyState Which indicates if everything is loaded correctly.
- publish For our improved Pub/Sub.
- subscribe For our improved Pub/Sub.
- unsubscribe Unsubscribe from our Pub/Sub channel.
- destroy For closing all used/wrapped Redis connections.

And just to be safe, don't use methods that are prefixed with an underscore, which will protect you from possible private node internals. Other than these properties and methods, you're safe to use anything you want, as we will just remove all forbidden chars and numbers from your script name and transform it to lowercase.

To initialize the module you need to provide it with at least one active Redis connection:

var Leverage = require('leverage')
  , redis = require('redis').createClient();

var leverage = new Leverage(redis, { /* optional options */ });

If you want to leverage the improved Pub/Sub capabilities you should supply 2 different clients. 1 connection will be used to publish the messages and execute the commands, while the other connection will be used to subscribe and will therefore block the connection for writing.

var Leverage = require('leverage')
  , pub = require('redis').createClient()
  , sub = require('redis').createClient();

var leverage = new Leverage(pub, sub, { /* optional options */ });

It might be possible that you want to add scripts from a different folder than our pre-defined folder locations. We've added Leverage.introduce, which you can use to add scripts. The scripts that are added should be added to the Leverage.scripts array, and you should add the scripts BEFORE you construct a new Leverage instance.

var Leverage = require('leverage');

//
// Give the method the path of your lua files and the object, or in our case the
// prototype, where you want to introduce the methods.
//
var scripts = Leverage.introduce('/path/to/your/custom/directory',
                                 Leverage.prototype);

//
// IMPORTANT: Add the returned array of added scripts to our Leverage.scripts, as
// they are checked during the bootstrapping of the Leverage instance.
//
Leverage.scripts = Leverage.scripts.concat(scripts);

FYI: The Leverage.introduce method returns an array with the following data structure:

{
  name: 'hello',
  args: {
    KEYS: 2,
    ARGV: 2
  },
  path: '/usr/wtf/path/to/file/hello.lua',
  code: 'local foo = KEYS[0]\nlocal bar = KEYS[1] .. etc ..'
}

When we attempt to load the lua scripts into the Redis server, we attempt to parse each script to automatically detect how many keys should be sent to the server. If your code isn't too magical it should just parse correctly and set the amount of KEYS and ARGV's of your script. There might be edge cases where you are iterating over the keys and args, or where we just fail to correctly parse your lua code because you're a frigging lua wizard.

For these edge cases you can supply every generated method with a number. This number should represent the amount of KEYS you are sending to your script:

leverage.customscript(2, 'KEY1', 'KEY2', 'ARGS', 'ARGS', fn);

But doing this every time can be a bit wasteful; that's why you can also just tell us once, and the module will memorize it for you, so all other calls will just use the same amount of keys:

leverage.customscript(2);
leverage.otherscript(10);
leverage.anotherscript(3);

//
// You can now call the scripts without the needed key amount argument.
//
leverage.customscript('KEY1', 'KEY2', 'ARGS', 'ARGS', fn);

The following options are available; most of these apply to the improved Pub/Sub system:

- namespace: Used to prefix all keys that are set by this module inside of your redis installation. This way you can prevent conflicts from happening. It defaults to leverage.
- SHA1: Can be provided with a preconfigured object that contains references to all method -> SHA1 mappings. Only change this if you know what the fuck you are doing. If this is not set, we will just check your redis server to find out if the script has been loaded into the internal cache.
- backlog: How many messages we can store for pub/sub connection reliability. If you are sending a lot of messages per second you might want to set this to a higher number than you would with lower-rate messages. It defaults to 10000. The messages are stored using FIFO, so if you are storing too many messages it will automatically override older keys.
- expire: To make sure that we don't leave too much crap in your database, all stored messages are given an expire value. So the messages can be killed in 2 ways: either by an overflow of the backlog or by an expired key. The default expire is 1000.

Our Pub/Sub wrapper provides a reliable Pub/Sub implementation on top of the fire-and-forget Pub/Sub implementation of redis. This is done by leveraging (ooh, see what I did there ;)) lua scripts.

Publishing is as easy as:

leverage.publish(channel, message, function (err, id) {
  // optional: the error and the unique id of the message
});

The callback is optional, but I would advise you to use it so you know which id your message has and whether it was sent without any issues. When you publish a message, the following takes place: a unique id is generated for the message, a packet is created which contains the message and the id of the message, and the packet is stored under that id.

The subscription command has a bit of a different syntax than you are used to. It accepts a second argument which can be used to configure the reliability of the Pub/Sub channel:

leverage.subscribe('channel', { /* options */ });

The subscription command can be configured with:

- ordered: Force ordered delivery of messages. If a message is dropped, all received messages will be queued until the missing message is retrieved again, and then the queue is flushed again. Defaults to false.
- A bailout flag: when we receive an error while processing and receiving messages, we can stop the subscription, as we can no longer give any guarantees; the sensible thing to do is give up, unsubscribe from the channel and stop all processing. Defaults to true.
- A replay count: how many events should we retrieve when we join the channel for the first time, as it might happen that we've received a message right before we subscribed. Defaults to 0.

Once you are subscribed to a channel, the messages will be emitted on the leverage instance. There are a couple of different events emitted:

- <channel>::message A message has been received.
- <channel>::bailout We've received an error and are bailing out.
- <channel>::error The channel received an error.
- <channel>::online The channel has started processing messages.
- <channel>::unsubscribe The channel has been unsubscribed.

<channel> is the name of the channel that you've subscribed to.

leverage.subscribe('foo').on('foo::message', function (message, id) {
  console.log('Received the following message: ', message);
  console.log('The message had the following id: ', id);
});

leverage.on('foo::bailout', function (e) {
  console.log('The following error caused the bailout', e);
});

leverage.on('foo::error', function (e) {
  console.log('We received an error', e);
  console.log('This was emitted before a bailout, if bailouts were enabled');
});

Unsubscribing from the channel is nothing special:

leverage.unsubscribe('foo');

This also triggers the <channel>::unsubscribe event.

All these operations happen atomically and are namespaced under the namespace that you configured Leverage with. So you cannot (and should not) publish to channels that are not wrapped by Leverage.

MIT
https://www.npmjs.com/package/leverage
The most important reason that people lock worksheets when performing tasks in Excel is to protect the original Excel file from being edited or modified by other people. In Microsoft Excel, you can make an entire worksheet Read Only by setting its property, or make just a region of cells Read Only by protecting the worksheet. How to lock a worksheet with C# and VB.NET is the core topic of this section.

Spire.XLS for .NET, a fast and reliable Excel component, enables you to lock your worksheet by setting a Worksheet class property: Worksheet.Range.Style.Locked = true. With this component, you can lock any worksheet you need. In this solution, worksheet one and worksheet two are locked, as shown in the picture below.

Before the detailed code, you have to add the Spire.Xls dll by downloading Spire.XLS for .NET.

using Spire.Xls;

namespace lock_excel_worksheet
{
    class Program
    {
        static void Main(string[] args)
        {
            Workbook workbook = new Workbook();
            workbook.LoadFromFile(@"..\lock worksheet.xls");
            workbook.Worksheets[0].Range.Style.Locked = true;
            workbook.Worksheets[1].Range.Style.Locked = true;
            workbook.SaveToFile("result.xls");
        }
    }
}

Imports Spire.Xls

Namespace lock_excel_worksheet
    Class Program
        Private Shared Sub Main(ByVal args() As String)
            Dim workbook As Workbook = New Workbook
            workbook.LoadFromFile("..\lock worksheet.xls")
            workbook.Worksheets(0).Range.Style.Locked = True
            workbook.Worksheets(1).Range.Style.Locked = True
            workbook.SaveToFile("result.xls")
        End Sub
    End Class
End Namespace

Spire.XLS allows users to operate on Excel documents directly, such as saving to a stream, saving as a web response, copying, locking/unlocking worksheets, setting up workbook properties, etc. As a professional .NET/Silverlight Excel component, it can insert content into Excel documents, format cells and convert Excel documents to popular office file formats. Spire.XLS for .NET supports Excel 97-2003, Excel 2007 and Excel 2010.
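As a closing note on granularity: because Locked is a style flag on a range, the same call can in principle target a sub-range instead of the whole sheet. The range address below is only an illustration and, following Excel semantics, the flag takes effect once the worksheet is protected:

// Lock only A1:B5, leaving the rest of the sheet editable
workbook.Worksheets[0].Range["A1:B5"].Style.Locked = true;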
http://www.e-iceblue.com/Tutorials/Spire.XLS/Spire.XLS-Program-Guide/Excel-Lock-Easily-Lock-Excel-Worksheet-in-C-VB.NET.html
Hello, I have a doubt. In windows, when coding using C/C++ you can use the windows API (concretely LoadLibrary to load a .dll into the virtual memory of the process, and GetProcAddress to get the address of the function you want from that .dll).

1. Once I have the address, how do I call the function? Let's say, for example:

#include <Winsock2.h>   // I can now call send(), recv(), etc.
                        // (may not be the best example since you have to link
                        // ws2_32.lib too, and I don't know why)

LoadLibrary("Ws2_32.dll");
GetProcAddress(handle, "send");
// Now how do I call send() / recv()?

2. What advantages does each method offer? Why would someone dynamically load a (system) .dll when you can just #include it? (I understand it for user-created .dlls where no headers/source code is provided.)

3. What happens if you load a .dll more than once? Is it as bad as when you #include an .h file more than once if it's not definition/pragma protected? Also, those functions explained above are contained in Kernel32.dll; can someone reload Kernel32.dll?

And probably lots of other questions, but those are the basics. Thanks!
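For question 1, the usual pattern is to cast the FARPROC returned by GetProcAddress to a function-pointer type matching the target's documented signature; send() is used here, with error handling kept minimal:

#include <winsock2.h>
#include <windows.h>

/* Function-pointer type matching send()'s signature (stdcall on Win32). */
typedef int (WINAPI *send_fn)(SOCKET, const char *, int, int);

int main(void)
{
    HMODULE ws2 = LoadLibraryA("ws2_32.dll");
    if (ws2 == NULL)
        return 1;

    /* GetProcAddress returns a generic FARPROC; cast it before calling. */
    send_fn my_send = (send_fn)GetProcAddress(ws2, "send");
    if (my_send != NULL) {
        /* my_send(sock, buf, len, 0) can now be called exactly like send(). */
    }

    FreeLibrary(ws2);
    return 0;
}

As for question 2: linking against the import library (ws2_32.lib) simply makes the loader resolve the same addresses for you at startup, so #include plus the .lib is the convenient route when the DLL is always present, while runtime loading suits optional or plugin DLLs.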
http://www.rohitab.com/discuss/topic/42773-use-headers-or-dynamically-loaded-modules-dll/
Introduction: How to Graph Home Router Metrics

Isn't the tech-world wonderful? All these great services and free software popping up everywhere. I would like to share my first experience with Grafana and InfluxDB, with the purpose of making persistent, beautiful, flexible graphs of your router stats.

The problem: My router metrics are to be found all over the place in the Web-UI. Some measurements offer real-time graphs over the last couple of minutes, some don't. All stats are reset when I reboot. I want to go back in time and see throughput, temperatures, etc.

The solution: InfluxDB for persistent storage and Grafana for visualization. InfluxDB is a so-called Time Series Database (TSDB), which is specialized for storing data history, tag-value-time. Grafana is a free, self-contained, web-based graphing tool for InfluxDB (and other TSDBs). Both are easy to get started with, yet powerful. By the way, I am in no way affiliated with either project.

This project is just to get you going. Once you get started you may find yourself graphing all sorts of things. This is very well suited for IoT applications. E.g., I am using the same setup for home automation sensor metrics (temperatures, humidity, etc.) in conjunction with Openhab. Flames and praise in the comments, please. And let me know if something doesn't work so I can fix it!

Step 1: Prerequisites

The router. In order to get started with this project you need a Linux-based router with the following features:

- Command-line root login access (telnet or ssh)
- Cron support
- Local file store on internal JFFS or USB storage. This is for storing scripts.

The above features usually don't come in stock firmware, so you probably have to go with DD-WRT, Tomato or similar. In my case I use ASUSWRT Merlin. ASUS had the good sense to open-source their stock firmware, and the Merlin build adds minimal but crucial features. I used an ASUS RT-N66U for this project.

The server. The second prerequisite is an x86-based Linux server. It doesn't have to be super powerful. For this project I used an HP microserver with Ubuntu Server 13.04 and 4GB RAM. In theory you could run this off a different processor architecture (e.g. ARM), but you would not be able to use the pre-built packages. The server doesn't have to be dedicated to InfluxDB.

A PC. You need a PC or Mac with terminal software (e.g. Putty or MobaXterm; I prefer the latter).

Some knowledge. This Instructable is for people who have a basic understanding of command-line Linux.

Step 2: Preparing the Router

This is valid for the ASUS RT-N66U, i.e. you can't follow the instructions to the letter if you have a different router.

Disclaimer! Don't load custom firmware on your router unless you know what you are doing, or at least accept that there is a slight risk that you mess your router up to a point of no return (bricking). I have loaded lots of custom firmware on routers and never had any problems, but I know problems can happen. However, with Merlin for Asus the risk is low, since it is based on stock firmware.

- On a PC, download MerlinWRT
- Extract the downloaded zip. The .trx file holds the firmware.
- Browse to the router admin interface (usually found at)
- Go to: Administration->Firmware Upgrade. Choose the downloaded .trx file and upload.
- After the router reboots you are on Merlin. Log in again.
- Go to: Administration->System. Enable all the JFFS stuff. Press Apply.
- Reboot router.

Verify:

- Using your terminal software, log in to the router using the same user and password as in the Web admin.
- Verify that /jffs is there and contains "configs" and "scripts" directories (see screenshot).

Now the router is ready for custom scripts!

Step 3: Preparing the Server

As noted in the prerequisites, you need a small x86-based (Intel, AMD) Linux server for InfluxDB and Grafana. The instructions below work on Ubuntu. Check the Grafana and InfluxDB documentation for installation guides for other distributions.

InfluxDB 0.8 is not the newest version, but at the time of writing it offers the best compatibility with applications. This is how to install:

- Log in to a command line on the server
- If on a 64-bit OS:
  $ wget
  $ sudo dpkg -i influxdb_0.8.8_amd64.deb
- If on a 32-bit OS:
  $ wget
  $ sudo dpkg -i influxdb_0.8.8_i686.deb
- Start the daemon: $ sudo /etc/init.d/influxdb start
- Make it start on reboot: $ sudo update-rc.d influxdb defaults

Grafana 2.1.1 install:

- Install Grafana as described here:

Verify:

- Verify influxdb by browsing to the influxdb admin GUI. Log in as user root, password root
- Verify grafana by browsing to it. Log in as user admin, password admin

Step 4: InfluxDB Preparation

- Log in to the influxdb admin (root/root)
- Create a metrics database (mydb) with default settings. See screenshot. You can use a different name, but you will have to change references to mydb later in the instructable.

Step 5: Decide on Metrics

Now would be a good time to decide what you want to measure and how. I decided on the below (for which I will provide script examples). For router stats I started exploring the wl command, and I will continue to do so. It is vast. It seems to be the main command-line interface to router functionality. In fact, I think you can do everything you can do in the Web UI and a lot more. I think it is a Broadcom proprietary command, so you will probably have to look for alternatives if you are on a different chipset.

- CPU use. The vmstat command (which I would have preferred to use) is not installed on my router, but the top command is. CPU use can be extracted from the following command output: $ top -bn1 | head -3
- Memory use. Free and used memory can also be extracted from the top command. See CPU.
- Temperature. There are temperature readings per wifi chip to be found, deeply hidden in the wl command. The result has to be converted, however (see script). Example: $ wl -i eth1 phy_tempsense
- Ping. I decided to benchmark my external access by measuring ping round trips to a couple of established web sites. Example: $ ping -c1 -W1
- Throughput. I don't think throughput is available without calculation. Counters, however, can be found in more than one location. /proc/net/dev is a good counter source. Example: $ cat /proc/net/dev | tail +3
- Wireless Clients. Number of connected wireless devices per interface/chip (2.4GHz and 5.0GHz) through the wl command: $ wl -i eth1 assoclist

I have ideas for other measurements, but I think this is a pretty good set to get started with. Something to keep in mind is that a small home router is not a powerful processing device. Too many, too frequent or too complex measurements will affect router performance adversely. If you come up with a killer metric, please share (comment)!

Step 6: Router Scripting

The attached zip archive contains:

- router_assoclist.sh - Sample script for reporting the number of attached wireless clients
- router_cpu.sh - CPU utilization script
- router_mem.sh - Memory utilization script
- router_net.sh - Network statistics script. Uses traffic counters. Compensates for counter rollover.
- router_ping_ext.sh - Ping round trip for one or more destinations
- router_temp.sh - Temperatures on the 2.4GHz and 5GHz chips in Celsius. If you prefer Fahrenheit, do the math here.
- routerstats.sh - main script which fires off the others
- todb.sh - a script which takes three arguments: series name, columns and data points. String data is automatically quoted, and data is formatted and sent to the target database.

This is how you install the sample scripts:

- Log in to the router command line
- Create a directory on the persistent (jffs) file system: # mkdir /jffs/scripts/routerstats
- Extract the zip archive and move the files to the new directory on the router. There are several ways to transfer the files; e.g., you can use a USB stick or activate SSH in the admin GUI (which also activates SCP file transfer). If you decide on SSH/SCP, use Filezilla, MobaXterm or similar to transfer files to the router.
- If you don't know how the vi editor works, now would be a good time to look it up. It is the only file editor on the router. Edit the settings in todb.sh: vi /jffs/scripts/routerstats/todb.sh. Edit the following lines to match your setup. You probably only need to change the dbhost line:

  dbname="mydb"
  dbhost="srv4:8086"
  user="root"
  passwd="root"

  If you don't want to learn vi, edit the file before moving it to the router.
- Make the script run every 30s. The following adds two lines to the services-start script that insert two cron scheduler entries at bootup. On regular Linux distros cron entries are persistent, and you would just run the cru commands once and be done with it. Cron only executes on minute intervals, so to get 30s intervals the second line is needed. The second line triggers at the same time as the first, but waits 30s before doing anything useful.

  # echo 'cru a routerstats "* * * * * /jffs/scripts/routerstats/routerstats.sh"' >> /jffs/scripts/services-start
  # echo 'cru a "routerstats+30" "* * * * * (sleep 30; /jffs/scripts/routerstats/routerstats.sh)"' >> /jffs/scripts/services-start

Verify:

- Log in to the InfluxDB admin GUI ()
- Click on "Explore Data" next to mydb
- Enter "list series" as the query (see screenshot)

Step 7: Grafana Visualization

Grafana visualization is fun to play around with. I will provide you with a sample dashboard, but I encourage you to play around with it and tweak it. There are links to good video tutorials on the Grafana home page.

Creating the data source:

- Log in to grafana ()
- Create datasource mydb (see screenshot)

Import the sample dashboard:

- Download the attached dashboard file. It contains the json definition of the entire dashboard. Check it out if you want. It is human-readable and can be edited. For instance, if you created a datasource with any other name than mydb, you can either search-replace in this file or change the datasource in the GUI later.
- Import the dashboard (see screenshot)

Now you should have a dashboard that will gradually be filling up with data! Good luck!

8 Discussions

Thanks for the super useful instructions! I'm running into a problem with the todb.sh script. I used vi to create all of these scripts in the /jffs/scripts/routerstats/ directory on my RT-A87U router. After making them all executable I am able to run ./routerstats.sh without receiving any errors. However, for some reason the data from these scripts is not reaching my influxDB database... I updated the dbname and dbhost variables as required, but otherwise I can't really figure out what the problem is. Any ideas how I can figure this out? Thanks!

Just revisiting this instructable after playing with it about a year ago. The influxDB APIs have changed since this was created; try this in todb.sh to insert the payload:

wget --quiet --post-data "$payload" "" -O /dev/null

Hi, Unfortunately the router I did this project on has been thrown out by now, so I can't really look into it. However, one thing to check might be the influx data format. They evolve fast and I am pretty sure that the input format has changed. Unless you installed the old version used in the guide.

Has anyone created any additional scripts to go along with the six provided?

First thing: thanks for the clear instructions! Quick note for others: be sure to install the versions mentioned for Influx; then, when Grafana is installed, you will need the Influx 0.8.x plugin. My router is a different model, the RT-AC68U. I was able to install the scripts, test them, add them to cron, and get results in Influx. Looks good, but I will need to fiddle with the Internet Throughput; the numbers are off, which is surely due to the different model. My actual question is: I imported the included json into Grafana. I now have a Router Stats Dashboard with 6 panels for CPU, memory, temp, etc. My problem is that all panels have a warning message "Cannot read property '_seriesQuery' of undefined". Possibilities: the Grafana version is much higher than the posted version. If I find an answer I will post!

2 things: I don't have nice available on my router.

/jffs/bin/routerstats.sh: line 7: nice: not found

I can fix this by just removing the nice part of the script and running the sh file directly, but I am still getting a permission error no matter what I try. All scripts are executable.

/jffs/bin/router_mem.sh: line 8: /jffs/bin/todb.sh: Permission denied

(I only enabled the one script for testing.)

Are you still around having issues, mate?

@ehsmaes, are you still around mate? If so, I'd love to chat with you regarding this guide!
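For reference, against the InfluxDB 0.8 API this guide targets, the write performed by todb.sh boils down to a single HTTP POST to the series endpoint. The series name and value below are made up, while the host and credentials are the defaults from Step 3:

payload='[{"name":"router.cpu","columns":["usage"],"points":[[17.5]]}]'
wget --quiet --post-data "$payload" \
     "http://srv4:8086/db/mydb/series?u=root&p=root" -O /dev/null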
http://www.instructables.com/id/How-to-graph-home-router-metrics/
Cutting Edge - CQRS and Message-Based Applications

By Dino Esposito | July 2015

At the end of the day, Command and Query Responsibility Segregation (CQRS) is a software design that separates the code that alters state from the code that just reads state. That separation can be logical and based on different layers. It can also be physical and involve distinct tiers. There's no manifesto or trendy philosophy behind CQRS. The only driver is its simplicity of design. A simplified design, in these crazy days of overwhelming business complexity, is the only safe way to ensure effectiveness, optimization and success.

My last column (msdn.microsoft.com/magazine/mt147237) offered a perspective of the CQRS approach that made it suitable for any type of application. The moment you consider using a CQRS architecture with distinct command and query stacks, you start thinking of ways to separately optimize each stack. There are no longer model constraints that make certain operations risky, impractical or perhaps just too expensive. The vision of the system becomes a lot more task-oriented. More important, it happens as a natural process. Even some domain-driven design concepts, such as aggregates, stop looking so irksome. Even they find their natural place in the design. This is the power of a simplified design.

If you're now curious enough about CQRS to start searching for case studies and applications close to your business, you may find most references refer to application scenarios that use events and messages to model and implement business logic. While CQRS can happily pay the bills with far simpler applications (those you might otherwise label as plain CRUD apps), it definitely shines in situations with greater business complexity. From that, you can infer greater intricacy of business rules and a high inclination to change.

Message-Based Architecture

While observing the real world, you'll see actions in process and events that result from those actions. Actions and events carry data and sometimes generate new data. And that's the point: it's just data. You don't necessarily need a full-fledged object model to support executing these actions. An object model can still be useful. As you'll see in a moment, though, it's just another possible option for organizing business logic.

A message-based architecture is beneficial as it greatly simplifies managing complex, intricate and frequently changing business workflows. These types of workflows include dependencies on legacy code, external services and dynamically changing rules. However, building a message-based architecture would be nearly impossible outside the context of CQRS, which keeps command and query stacks neatly separated. Therefore, you can use the following architecture for the sole command stack.

A message can be either a command or an event. In code, you'd usually define a base Message class and, from that, define additional base classes for commands and events, as shown in Figure 1.

public class Message
{
  public DateTime TimeStamp { get; protected set; }
  public string SagaId { get; protected set; }
}

public class Command : Message
{
  public string Name { get; protected set; }
}

public class Event : Message
{
  // Any properties that may help retrieving
  // and persisting events.
}

From a semantic perspective, commands and events are slightly different entities and serve different but related purposes. An event is nearly the same as in the Microsoft .NET Framework: a class that carries data and notifies you that something has occurred.
A command is an action performed against the back end that a user or some other system component requested. Events and commands follow rather standard naming conventions. Commands are imperative, like SubmitOrderCommand, while events are in past tense, such as OrderCreated. Typically, clicking any interface element originates a command. Once the system receives the command, it originates a task. The task can be anything from a long-running stateful process to a single action or a stateless workflow. A common name for such a task is saga.

A task is mono-directional, proceeds from the presentation down through the middleware and likely ends up modifying the system and storage state. Commands don't usually return data to the presentation, except perhaps some quick form of feedback such as whether the operation completed successfully or the reasons it failed. Explicit user actions aren't the only way to trigger commands. You can also place a command with autonomous services that asynchronously interact with the system. Think of a B2B scenario, such as shipping products, in which communication between partners occurs over an HTTP service.

Events in a Message-Based Architecture

So commands originate tasks, and tasks often consist of several steps that combine to form a workflow. Often, when a given step executes, a results notification should pass to other components for additional work. The chain of sub-tasks triggered by a command can be long and complex. A message-based architecture is beneficial because it lets you model workflows in terms of individual actions (triggered by commands) and events. By defining handler components for commands and subsequent events, you can model any complex business process. More important, you can follow a working metaphor close to that of a classic flowchart. This greatly simplifies understanding the rules and streamlines communication with domain experts. Furthermore, the resulting workflow is broken into myriad smaller handlers, each performing a small step. Every step also places async commands and notifies other listeners of events.

One major benefit to this approach is that the application logic is easily modifiable and extensible. All you do is write new parts and add them to the system, and you can do so with full certainty they won't affect existing code and existing workflows. To see why this is true and how it really works, I'll review some of the implementation details of message-based architecture, including a new infrastructural element: the bus.

Welcome to the Bus

To start out, I'll look at a handmade bus component. The core interface of a bus is small: it lets you register sagas and handlers, send commands and raise events. Processes that handle commands and related events are usually referred to as sagas. During initial bus configuration, you register handler and saga components. A handler is just a simpler type of saga and represents a one-off operation. When this operation is requested, it starts and ends without being chained to other events or pushing other commands to the bus.
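A minimal IBus contract, sketched here from what the implementation in Figure 2 (below) requires (the generic constraints are a best guess), looks like this:

public interface IBus
{
  void RegisterSaga<T>() where T : Saga;
  void RegisterHandler<T>();
  void Send<T>(T message) where T : Message;
  void RaiseEvent<T>(T theEvent) where T : Message;
}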
Figure 2 presents a possible bus class implementation that holds saga and handler references in memory.

public class InMemoryBus : IBus
{
  private static IDictionary<Type, Type> RegisteredSagas =
    new Dictionary<Type, Type>();
  private static IList<Type> RegisteredHandlers = new List<Type>();
  private static IDictionary<string, Saga> RunningSagas =
    new Dictionary<string, Saga>();

  void IBus.RegisterSaga<T>()
  {
    var sagaType = typeof(T);
    var messageType = sagaType.GetInterfaces()
      .First(i => i.Name.StartsWith(typeof(IStartWith<>).Name))
      .GenericTypeArguments
      .First();
    RegisteredSagas.Add(messageType, sagaType);
  }

  void IBus.Send<T>(T message)
  {
    SendInternal(message);
  }

  void IBus.RegisterHandler<T>()
  {
    RegisteredHandlers.Add(typeof(T));
  }

  void IBus.RaiseEvent<T>(T theEvent)
  {
    EventStore.Save(theEvent);
    SendInternal(theEvent);
  }

  void SendInternal<T>(T message) where T : Message
  {
    // Step 1: Launch sagas that start with given message
    // Step 2: Deliver message to all already running sagas that
    //         match the ID (message contains a saga ID)
    // Step 3: Deliver message to registered handlers
  }
}

When you send a command to the bus, it goes through a three-step process. First, the bus checks the list of registered sagas to see if any registered saga is configured to start upon receipt of that message. If so, a new saga component is instantiated, passed the message and added to the list of running sagas. Next, the message is delivered to any already running saga whose ID it carries. Finally, the bus checks to see if any registered handler is interested in the message.

An event passed to the bus is treated like a command and routed to registered listeners. If relevant to the business scenario, however, it may also be logged to some event store. An event store is a plain append-only data store that tracks all events in a system. Use of the logged events varies quite a bit. You can log events for tracing purposes only, or use them as the sole data source (event sourcing). You could even use the store to track the history of a data entity while still using classic databases for saving the last-known entity state.

Writing a Saga Component

A saga is a component that declares the following information: a command or event that starts the business process associated with the saga, the list of commands the saga can handle, and the list of events in which the saga is interested. A saga class implements interfaces through which it declares the commands and events that are of interest. Interfaces like IStartWith and ICanHandle are simple generic contracts with a single Handle method; they are sketched, together with the signature of a sample saga class, right after this section.

In this case, the saga represents the checkout process of an online store. The saga starts when the user clicks the checkout button and the application layer pushes the Checkout command to the bus. The saga constructor generates a unique ID, which is necessary to handle concurrent instances of the same business process. You should be able to handle multiple concurrently running checkout sagas. The ID can be a GUID, a unique value sent with the command request or even the session ID.

For a saga, handling a command or event consists of having the Handle method on the ICanHandle or IStartWith interfaces invoked from within the bus component. In the Handle method, the saga performs a calculation or data access. It then posts another command to other listening sagas or just fires an event as a notification. For example, imagine the checkout workflow is as shown in Figure 3.

Figure 3 The Checkout Workflow

The saga performs all steps up to accepting payment. At that point, it pushes an AcceptPayment command to the bus for the PaymentSaga to proceed.
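The interfaces and the saga shape described above can be sketched as follows. The names Checkout, AcceptPayment, PaymentCompleted and PaymentDenied come from the text, while the constructor and the AcceptPayment constructor argument are illustrative assumptions:

public interface IStartWith<T> where T : Message
{
  void Handle(T message);
}

public interface ICanHandle<T> where T : Message
{
  void Handle(T message);
}

public class CheckoutSaga : Saga,
  IStartWith<Checkout>,
  ICanHandle<PaymentCompleted>,
  ICanHandle<PaymentDenied>
{
  private readonly IBus _bus;
  private readonly string _sagaId;

  public CheckoutSaga(IBus bus)
  {
    _bus = bus;
    _sagaId = Guid.NewGuid().ToString();  // the constructor generates a unique ID
  }

  public void Handle(Checkout message)
  {
    // Check availability and create the order, then hand payment off
    // to the payment saga by placing the next command on the bus.
    _bus.Send(new AcceptPayment(_sagaId));  // AcceptPayment's constructor is assumed
  }

  public void Handle(PaymentCompleted theEvent)
  {
    // Advance to delivery through the shipping partner's saga.
  }

  public void Handle(PaymentDenied theEvent)
  {
    // Cancel the order and notify the user.
  }
}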
The PaymentSaga will run and fire a PaymentCompleted or PaymentDenied event. These events will again be handled by the CheckoutSaga. That saga will then advance to the delivery step with another command placed against another saga interacting with the external subsystem of the shipping partner company. The concatenation of commands and events keeps the saga live until it reaches completion. In that regard, you could think of a saga as a classic workflow with starting and ending points.

Another thing to note is that a saga is usually persistent. Persistence is typically handled by the bus. The sample Bus class presented here doesn't support persistence. A commercial bus such as NServiceBus, or even an open source bus like Rebus, might use SQL Server. For persistence to occur, you must give a unique ID to each saga instance.

Wrapping Up

For modern applications to be truly effective, they must be able to scale with business requirements. A message-based architecture makes it incredibly easy to extend and modify business workflows and support new scenarios. You can manage extensions in total isolation. All it takes is adding a new saga or a new handler, registering it with the bus at application startup and letting it know how to handle only the messages it needs to handle. The new component will automatically be invoked only when it's time and will work side by side with the rest of the system. It's easy, simple and effective.

Thanks to the following technical expert for reviewing this article: Jon Arne Saeteras
https://msdn.microsoft.com/en-us/magazine/mt238399.aspx
Ticket #4491 (closed Bugs: fixed)

asio : WinError.h case incorrect?

Description

On my Linux system, I'm cross-compiling for windows. The w32api provides lower-case header files; I think it's a standard for windows headers. Here's a working patch:

diff --git a/boost/asio/error.hpp b/boost/asio/error.hpp
index 5663b69..69a8a24 100644
--- a/boost/asio/error.hpp
+++ b/boost/asio/error.hpp
@@ -19,7 +19,7 @@
 #include <boost/cerrno.hpp>
 #include <boost/system/error_code.hpp>
-# include <WinError.h>
+# include <winerror.h>
 #else
 # include <cerrno>
 # include <netdb.h>

Attachments

Change History

comment:2 Changed 3 years ago by chris_kohlhoff

comment:3: My opinion is that this is really a MinGW bug. Lower-case is not a standard for windows headers and MSDN does document mixed-case header names. Of course, this is a straightforward change to asio so I will just go ahead and fix it. However, do feel free to take up the issue with MinGW, as it would improve that compiler's ability to handle valid Windows programs :)
https://svn.boost.org/trac/boost/ticket/4491
On Tue, Jan 9, 2018 at 7:16 AM, Richard Guy Briggs <r...@redhat.com> wrote:
> Containers are a userspace concept. The kernel knows nothing of them.
>
> The Linux audit system needs a way to be able to track the container
> provenance of events and actions. Audit needs the kernel's help to do
> this.

Two small comments below, but I tend to think we are at a point where you can start cobbling together some prototype/RFC patches. Surely there are going to be a few changes, and new comments, that come out once we see an initial implementation, so let's see what those are.

> The registration is a u64 representing the audit container identifier
> written to a special file in a pseudo filesystem (proc, since PID tree
> already exists) representing a process that will become a parent process
> in that container. This write can only happen once per process.
>
> Note: The justification for using a u64 is that it minimizes the
> information printed in every audit record, reducing bandwidth, and limits
> comparisons to a single u64, which will be faster and less error-prone.

I know Steve generally worries about audit record size, which is a perfectly valid concern in this case; I also worry about the additional overhead when we start routing audit records to multiple audit daemons (see my other emails in this thread).

> ...
> When a container ceases to exist because the last process in that
> container has exited, log the fact to balance the registration action.
> (This is likely needed for certification accountability.)

On the "container ceases to exist" point, I expect this "container dead" message to come from the orchestrator and not the kernel itself (I don't want the kernel to have to handle that level of bookkeeping). I imagine this should be similar to what is done for VM auditing with libvirt.

--
paul moore
https://www.mail-archive.com/netdev@vger.kernel.org/msg215239.html
Array Index Out Of Bounds Exception

Cary Tanner, Greenhorn
Joined: Feb 08, 2005  Posts: 4

posted Feb 08, 2005 12:43:00

I'm getting an error when I try to run this program:

package java146.project1;

import java.io.*;
import java.util.*;

/**
 * Takes input from an input file and creates an array of OrderWithCharge
 * objects. Then writes each object to an output file.
 */
public class ProcessOrders extends Order {

    private List list1 = new ArrayList();
    Order[] orders;

    /**
     * The main processing method for the ProcessOrders class.
     *
     * @exception Exception Description of the Exception
     */
    public void run() throws Exception {
        System.out.println("in the runApp method");
        readOrderFile();
        readChargesFile();
        createOrders();
        writeOutput();
    }

    /**
     * Reads orderin.txt one line at a time, using a BufferedReader object, and
     * adds the lines to an ArrayList object.
     *
     * @exception Exception Description of the Exception
     */
    public void readOrderFile() throws Exception {
        System.out.println("in the readOrder method");
        BufferedReader in = new BufferedReader(new FileReader("data/orderin.txt"));
        String line = "";
        while (in.ready()) {
            line = in.readLine();
            list1.add(line);
        }
        in.close();
    }

    /**
     * Reads the charges file and loads information into two parallel arrays;
     * one for rand limits, and one for handling charges.
     *
     * @exception Exception Description of the Exception
     */
    public void readChargesFile() throws Exception {
        System.out.println("in the readCharges method");
        BufferedReader in = new BufferedReader(new FileReader("data/charges.dat"));
        String[] parsedLine = null;
        String line = null;
        int[] quantityOrdered = new int[4];
        double[] handlingChargeAmount = new double[4];
        int i = 0;
        while (in.ready()) {
            line = in.readLine();
            parsedLine = line.split(" ");
            quantityOrdered[i] = Integer.parseInt(parsedLine[0]);
            handlingChargeAmount[i] = Double.parseDouble(parsedLine[0]);
            i += 1;
        }
        in.close();
        super.setQuantityOrdered(quantityOrdered);
        super.setHandlingChargeAmount(handlingChargeAmount);
    }

    /**
     * Creates an array of OrderWithCharge objects with length equal to the
     * size of the ArrayList object. Processes each string in the ArrayList
     * object, splitting each line using the split() method, and passing the
     * resulting fields to each OrderWithCharge object.
     *
     * @exception Exception Description of the Exception
     */
    public void createOrders() throws Exception {
        System.out.println("in the createOrders method");
        orders = new Order[list1.size()];
        Order o;
        String str = "";
        String[] pl = null;
        int i = 0;
        for (Iterator it = list1.iterator(); it.hasNext(); ) {
            str = (String) it.next();
            pl = str.split(" ");
            o = new Order();
            o.setCustName(pl[0]);
            o.setCustNum(Integer.parseInt(pl[1]));
            o.setQuantity(Integer.parseInt(pl[2]));
            o.setUnitPrice(Double.parseDouble(pl[3]));
            o.setItem(pl[4]);
            orders[i] = o;
            i += 1;
        }
    }

    /**
     * Writes the order information to a file called orderout.txt, in the
     * output directory.
     *
     * @exception Exception Description of the Exception
     */
    public void writeOutput() throws Exception {
        System.out.println("in the output method");
        PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("output/orderout.txt")));
        for (int i = 0; i < 5; i++) {
            out.println(orders[i]);
        }
        out.close();
    }
}

package java146.project1;

import java.io.*;

/**
 * Driver class for Project 1
 */
public class OrderDriver {

    /**
     * The main program for the OrderDriver class
     *
     * @param args The command line arguments
     * @exception Exception Description of the Exception
     */
    public static void main(String[] args) throws Exception {
        System.out.println("in the driver method");
        ProcessOrders po = new ProcessOrders();
        po.run();
    }
}

Here's what I get:

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 4
    at java146.project1.ProcessOrders.writeOutput(ProcessOrders.java:135)
    at java146.project1.ProcessOrders.run(ProcessOrders.java:29)
    at java146.project1.OrderDriver.main(OrderDriver.java:22)

I don't understand why this is happening.

[ February 08, 2005: Message edited by: Cary Tanner ]

Steven Bell, Ranch Hand
Joined: Dec 29, 2004  Posts: 1071

posted Feb 08, 2005 13:01:00

Your orders array is only of size 4. Why the magic number 5 in the writeOutput for loop? It should be i < orders.length.

Cary Tanner, Greenhorn
Joined: Feb 08, 2005  Posts: 4

posted Feb 08, 2005 13:03:00

Silly me, that's what I did at first, but I guess I wrote "i <= orders.length", and the equals is what screwed me up. Thanks!

Ray Stojonic, Ranch Hand
Joined: Aug 08, 2003  Posts: 326

posted Feb 08, 2005 13:05:00

ArrayIndexOutOfBoundsException indicates you have asked for an element of the array that doesn't exist. If your array was:

int[] array = {1,2,3};

and you asked for array[3], you would get this exception. (Remember, Java arrays are zero based, so array[3] would refer to the 4th element, if it existed.)

That said, it is always better to use the length of the array to control a for loop when iterating through an array, like so:

for( int i = 0; i < array.length; ++i )

This way, you can't overrun the array because the array size itself is controlling the loop. In your case, you need to fix line 135 in your program (as per the error message).

I agree. Here's the link:
http://www.coderanch.com/t/375837/java/java/Array-Index-Bounds-Exception
How Hashes Really Work It’s easy to take hashes for granted in Perl. They are simple, fast, and they usually “just work,” so people never need to know or care about how they are implemented. Sometimes, though, it’s interesting and rewarding to look at familiar tools in a different light. This article follows the development of a simple hash class in Perl in an attempt to find out how hashes really work. A hash is an unordered collection of values, each of which is identified by a unique key. A value can be retrieved by its key, and one can add to or delete from the collection. A data structure with these properties is called a dictionary, and some of the many ways to implement them are outlined below. Many objects are naturally identified by unique keys (like login names), and it is convenient to use a dictionary to address them in this manner. Programs use their dictionaries in different ways. A compiler’s symbol table (which records the names of functions and variables encountered during compilation) might hold a few hundred names that are looked up repeatedly (since names usually occur many times in a section of code). Another program might need to store 64-bit integers as keys, or search through several thousands of filenames. How can we build a generally useful dictionary? Implementing Dictionaries One simple way to implement a dictionary is to use a linked list of keys and values (that is, a list where each element contains a key and the corresponding value). To find a particular value, one would need to scan the list sequentially, comparing the desired key with each key in turn until a match is found, or we reach the end of the list. This approach becomes progressively slower as more values are added to the dictionary, because the average number of elements we need to scan to find a match keeps increasing. We would discover that a key was not in the dictionary only after scanning every element in it. We could make things faster by performing binary searches on a sorted array of keys instead of using a linked list, but performance would still degrade as the dictionary grew larger. If we could transform every possible key into a unique array index (for example, by turning the string “red” into the index 14328.), then we could store each value in a corresponding array entry. All searches, insertions and deletions could then be performed with a single array lookup, irrespective of the number of keys. But although this strategy is simple and fast, it has many disadvantages and is not always useful. For one thing, calculating an index must be fast, and independent of the size of the dictionary (or we would lose all that we gained by not using a linked list). Unless the keys are already unique integers, however, it isn’t always easy to quickly convert them into array indexes (especially when the set of possible keys is not known in advance, which is common). Furthermore, the number of keys actually stored in the dictionary is usually minute in comparison to the total number of possible keys, so allocating an array that could hold everything is wasteful. For example, although a typical symbol table could contain a few hundred entries, there are about 50 billion alphanumeric names with six or fewer characters. Memory may be cheap enough for an occasional million-element array, but 50 billion elements (of which most remain unused) is still definitely overkill. (Of course, there are many different ways to implement dictionaries. 
For example, red-black trees provide different guarantees about expected and worst-case running times that are most appropriate for certain kinds of applications. This article does not discuss these possibilities further, but future articles may explore them in more detail.) What we need is a practical compromise between speed and memory usage; a dictionary whose memory usage is proportional to the number of values it contains, but whose performance doesn’t become progressively worse as it grows larger. Hashes represent just such a compromise.

Hashes

Hashes are arrays (entries in them are called slots or buckets), but they do not require that every possible key correspond directly to a unique entry. Instead, a function (called a hashing function) is used to calculate the index corresponding to a particular key. This index doesn’t have to be unique, i.e., the function may return the same hash value for two or more keys. (We disregard this possibility for a while, but return to it later, since it is of great importance.) We can now look up a value by computing the hash of its key, and looking at the corresponding bucket in the array. As long as the running time of our hashing function is independent of the number of keys, we can always perform dictionary operations in constant time. Since hashing functions make no uniqueness guarantees, however, we need some way to resolve collisions (i.e., the hashed value of a key pointing to an occupied bucket). The simple way to resolve collisions is to avoid storing keys and values directly in buckets, and to use per-bucket linked lists instead. To find a particular value, its key is hashed to find the index of a bucket, and the linked list is scanned to find the exact key. The lists are known as chains, and this technique is called chaining. (There are other ways to handle collisions, e.g. via open addressing, in which colliding keys are stored in the first unoccupied slot whose index can be recursively derived from that of an occupied one. One consequence is that the hash can contain only as many values as it has buckets. This technique is not discussed here, but references to relevant material are included below.)

Hashing Functions

Since chaining repeatedly performs linear searches through linked lists, it is important that the chains always remain short (that is, the number of collisions remains low). A good hashing function would ensure that it distributed keys uniformly into the available buckets, thus reducing the probability of collisions. In principle, a hashing function returns an array index directly; in practice, it is common to use its (arbitrary) return value modulo the number of buckets as the actual index. (Using a prime number of buckets that is not too close to a power of two tends to produce a sufficiently uniform key distribution.) Another way to keep chains short is to use a technique known as dynamic hashing: adding more buckets when the existing buckets are all used (i.e., when collisions become inevitable), and using a new hashing function that distributes keys uniformly into all of the buckets (it is usually possible to use the same hashing function, but compute indexes modulo the new number of buckets). We also need to re-distribute keys, since the corresponding indices will be different with the new hashing function.
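To make chaining concrete before looking at Perl's own function, here is a minimal sketch of a chained lookup (the seven-bucket array is an arbitrary choice for illustration, and perlhash() is the hashing function shown in the next section):

# A sketch of chained lookup: hash the key, pick a bucket by taking the
# hash modulo the bucket count, then scan that bucket's (short) chain.
my @buckets = map { [] } 1 .. 7;    # seven empty chains

sub chain_lookup {
    my ($key) = @_;
    my $chain = $buckets[ perlhash($key) % @buckets ];
    foreach my $entry (@$chain) {                  # each entry is [key, value]
        return $entry->[1] if $entry->[0] eq $key;
    }
    return undef;                                  # key not present
}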
Here’s the hashing function used in Perl 5.005:

# Return the hashed value of a string: $hash = perlhash("key")
# (Defined by the PERL_HASH macro in hv.h)
sub perlhash {
    $hash = 0;
    foreach (split //, shift) {
        $hash = $hash*33 + ord($_);
    }
    return $hash;
}

More recent versions use a function designed by Bob Jenkins, and his Web page (listed below) does an excellent job of explaining how it and other hashing functions work.

Representing Hashes in Perl

We can represent a hash as an array of buckets, where each bucket is an array of [$key, $value] pairs (there’s no particular need for chains to be linked lists; arrays are more convenient). As an exercise, let us add each of the keys in %example below into three empty buckets.

%example = ( ab => "foo", cd => "bar", ef => "baz", gh => "quux" );
@buckets = ( [],[],[] );

while (($k, $v) = each(%example)) {
    $hash  = perlhash($k);
    $chain = $buckets[ $hash % @buckets ];
    $entry = [ $k, $v ];
    push @$chain, $entry;
}

We end up with the following structure (you may want to verify that the keys are correctly hashed and distributed), in which we can identify any key-value pair in the hash with one index into the array of buckets and a second index into the entries therein. Another index serves to access either the key or the value.

@buckets = (
    [ [ "ef", "baz" ] ],                    # Bucket 0: 1 entry
    [ [ "cd", "bar" ] ],                    # Bucket 1: 1 entry
    [ [ "ab", "foo" ], [ "gh", "quux" ] ],  # Bucket 2: 2 entries
);

$key = $buckets[2][1][0];   # $key = "gh"
$val = $buckets[2][1][1];   # $val = $hash{$key}
$buckets[0][0][1] = "zab";  # $hash{ef} = "zab"

Building Toy Hashes

In this section, we’ll use the representation discussed above to write a tied hash class that emulates the behavior of real Perl hashes. For the sake of brevity, the code doesn’t check for erroneous input. My comments also gloss over details that aren’t directly relevant to hashing, so you may want to have a copy of perltie handy to fill in blanks. (All of the code in the class is available at the URL mentioned below.)

We begin by writing a tied hash constructor that creates an empty hash, and another function to empty an existing hash.

package Hash;

# We'll reuse the perlhash() function presented previously.

# Create a tied hash. (Analogous to newHV in hv.c)
sub TIEHASH {
    $h = {
        keys    => 0,                           # Number of keys
        buckets => [ [],[],[],[],[],[],[] ],    # Seven empty buckets
        current => [ undef, undef ]             # Current iterator entry
    };                                          # (Explained below)
    return bless $h, shift;
}

# Empty an existing hash. (See hv.c:hv_clear)
sub CLEAR {
    ($h) = @_;
    $h->{keys} = 0;
    @{ $h->{buckets} } = ([],[],[],[],[],[],[]);
    @{ $h->{current} } = (undef, undef);
}

For convenience, we also write a function that looks up a given key in a hash and returns the indices of its bucket and the correct entry within. Both indexes are undefined if the key is not found in the hash.

# Look up a specified key in a hash.
sub lookup {
    ($h, $key) = @_;
    $buckets = $h->{buckets};
    $bucket  = perlhash($key) % @$buckets;
    $entries = @{ $buckets->[$bucket] };

    if ($entries > 0) {
        # Look for the correct entry inside the bucket.
        $entry = 0;
        while ($buckets->[$bucket][$entry][0] ne $key) {
            if (++$entry == $entries) {
                # None of the entries in the bucket matched.
                $bucket = $entry = undef;
                last;
            }
        }
    }
    else {
        # The relevant bucket was empty, so the key doesn't exist.
        $bucket = $entry = undef;
    }

    return ($bucket, $entry);
}

The lookup function makes it easy to write EXISTS, FETCH, and DELETE methods for our class:

# Check whether a key exists in a hash. (See hv.c:hv_exists)
sub EXISTS {
    ($h, $key) = @_;
    ($bucket, $entry) = lookup($h, $key);

    # If $bucket is undefined, the key doesn't exist.
    return defined $bucket;
}

# Retrieve the value associated with a key. (See hv.c:hv_fetch)
sub FETCH {
    ($h, $key) = @_;
    $buckets = $h->{buckets};
    ($bucket, $entry) = lookup($h, $key);

    if (defined $bucket) {
        return $buckets->[$bucket][$entry][1];
    }
    else {
        return undef;
    }
}

# Delete a key-value pair from a hash. (See hv.c:hv_delete)
sub DELETE {
    ($h, $key) = @_;
    $buckets = $h->{buckets};
    ($bucket, $entry) = lookup($h, $key);

    if (defined $bucket) {
        # Remove the entry from the bucket, and return its value.
        $entry = splice(@{ $buckets->[$bucket] }, $entry, 1);
        return $entry->[1];
    }
    else {
        return undef;
    }
}

STORE is a little more complex. It must either update the value of an existing key (which is just an assignment), or add an entirely new entry (by pushing an arrayref into a suitable bucket). In the latter case, if the number of keys exceeds the number of buckets, then we create more buckets and redistribute existing keys (under the assumption that the hash will grow further; this is how we implement dynamic hashing).

# Store a key-value pair in a hash. (See hv.c:hv_store)
sub STORE {
    ($h, $key, $val) = @_;
    $buckets = $h->{buckets};
    ($bucket, $entry) = lookup($h, $key);

    if (defined $bucket) {
        $buckets->[$bucket][$entry][1] = $val;
    }
    else {
        $h->{keys}++;
        $bucket = perlhash($key) % @$buckets;
        push @{ $buckets->[$bucket] }, [ $key, $val ];

        # Expand the hash if all the buckets are full. (See hv.c:S_hsplit)
        if ($h->{keys} > @$buckets) {
            # We just double the number of buckets, as Perl itself does
            # (and disregard the number becoming non-prime).
            $newbuckets = [];
            push(@$newbuckets, []) for 1..2*@$buckets;

            # Redistribute keys
            foreach $entry (map {@$_} @$buckets) {
                $bucket = perlhash($entry->[0]) % @$newbuckets;
                push @{$newbuckets->[$bucket]}, $entry;
            }
            $h->{buckets} = $newbuckets;
        }
    }
}

For completeness, we implement an iteration mechanism for our class. The current element in each hash identifies a single entry (by its bucket and entry indices). FIRSTKEY sets it to an initial (undefined) state, and leaves all the hard work to NEXTKEY, which steps through each key in turn.

# Return the first key in a hash. (See hv.c:hv_iterinit)
sub FIRSTKEY {
    $h = shift;
    @{ $h->{current} } = (undef, undef);
    return $h->NEXTKEY(@_);
}

If NEXTKEY is called with the hash iterator in its initial state (by FIRSTKEY), it returns the first key in the first occupied bucket. On subsequent calls, it returns either the next key in the current chain, or the first key in the next occupied bucket.

# Return the next key in a hash. (See hv.c:hv_iterkeysv et al.)
sub NEXTKEY {
    $h = shift;
    $buckets = $h->{buckets};
    $current = $h->{current};
    ($bucket, $entry) = @{ $current };

    if (!defined $bucket || $entry+1 == @{ $buckets->[$bucket] }) {
        FIND_NEXT_BUCKET: do {
            if (++$current->[0] == @$buckets) {
                @{ $current } = (undef, undef);
                return undef;
            }
        } while (@{ $buckets->[$current->[0]] } == 0);
        $current->[1] = 0;
    }
    else {
        $current->[1]++;
    }

    return $buckets->[$current->[0]][$current->[1]][0];
}

The do loop at FIND_NEXT_BUCKET finds the next occupied bucket if the iterator is in its initial undefined state, or if the current entry is at the end of a chain. When there are no more keys in the hash, it resets the iterator and returns undef. We now have all the pieces required to use our Hash class exactly as we would a real Perl hash.
tie %h, "Hash";

%h = ( foo => "bar", bar => "foo" );
while (($key, $val) = each(%h)) {
    print "$key => $val\n";
}
delete $h{foo};
# ...

Perl Internals

If you want to learn more about the hashes inside Perl, then the FakeHash module by Mark-Jason Dominus and a copy of hash.c from Perl 1.0 are good places to start. The PerlGuts Illustrated Web site by Gisle Aas is also an invaluable resource in exploring the Perl internals. (References to all three are included below.) Although our Hash class is based on Perl’s hash implementation, it is not a faithful reproduction; and while a detailed discussion of the Perl source is beyond the scope of this article, parenthetical notes in the code above may serve as a starting point for further exploration.

History

Donald Knuth credits H. P. Luhn at IBM for the idea of hash tables and chaining in 1953. About the same time, the idea also occurred to another group at IBM, including Gene Amdahl, who suggested open addressing and linear probing to handle collisions. Although the term “hashing” was standard terminology in the 1960s, the term did not actually appear in print until 1967 or so.

Perl 1 and 2 had “two and a half data types”, of which one half was an “associative array.” With some squinting, associative arrays look very much like hashes. The major differences were the lack of the % symbol on hash names, and that one could only assign to them one key at a time. Thus, one would say $foo{'key'} = 1;, but only @keys = keys(foo);. Familiar functions like each, keys, and values worked as they do now (and delete was added in Perl 2). Perl 3 had three whole data types: it had the % symbol on hash names, allowed an entire hash to be assigned to at once, and added dbmopen (now deprecated in favour of tie). Perl 4 used comma-separated hash keys to emulate multidimensional arrays (which are now better handled with array references). Perl 5 took the giant leap of referring to associative arrays as hashes. (As far as I know, it is the first language to have referred to the data structure thus, rather than “hash table” or something similar.) Somewhat ironically, it also moved the relevant code from hash.c into hv.c.

Nomenclature

Dictionaries, as explained earlier, are unordered collections of values indexed by unique keys. They are sometimes called associative arrays or maps. They can be implemented in several ways, one of which is by using a data structure known as a hash table (and this is what Perl refers to as a hash). Perl’s use of the term “hash” is the source of some potential confusion, because the output of a hashing function is also sometimes called a hash (especially in cryptographic contexts), and because hash tables aren’t usually called hashes anywhere else. To be on the safe side, refer to the data structure as a hash table, and use the term “hash” only in obvious, Perl-specific contexts.

Further Resources

Introduction to Algorithms (Cormen, Leiserson and Rivest)
Chapter 12 of this excellent book discusses hash tables in detail.

The Art of Computer Programming (Donald E. Knuth)
Volume 3 (“Sorting and Searching”) devotes a section (§6.4) to an exhaustive description, analysis, and a historical perspective on various hashing techniques.

“When Hashes Go Wrong” by Mark-Jason Dominus demonstrates a pathological case of collisions, by creating a large number of keys that hash to the same value, and effectively turn the hash into a very long linked list.

Current versions of Perl use a hashing function designed by Bob Jenkins.
His web page explains how the function was constructed, and provides an excellent overview of how various hashing functions perform in practice.

This module (FakeHash), by Mark-Jason Dominus, is a more faithful re-implementation of Perl’s hashes in Perl, and is particularly useful because it can draw pictures of the data structures involved.

It might be instructive to read hash.c from the much less cluttered (and much less capable) Perl 1.0 source code, before going through the newer hv.c.

This image, from Gisle Aas’s “PerlGuts Illustrated”, depicts the layout of the various structures that comprise hashes in the core. The entire web site is a treasure trove for people exploring the internals.

The source code for the tied Hash class developed in this article.
https://www.perl.com/pub/2002/10/01/hashes.html/
Comment on Tutorial - Types of EJB By aathishankaran

Comment Added by: shey-poun
Comment Added at: 2011-12-14 06:39:51
Comment on Tutorial: Types of EJB By aathishankaran
i appreciate the article for being straight to the …

1. everyone. Does anyone hir knows … (By: Tomy at 2009-06-24 02:59:57)
2. Thank alot (By: Naganatarajan at 2010-01-07 05:54:17)
3. @Amit Shrivastava : Dear Amit I have tried the sam… (By: Sanjay at 2015-05-14 19:15:51)
4. import java.util.*; public class de… (By: Virudada at 2012-05-05 06:27:22)
5. could somebody please tell a case where memcpy() w… (By: freak at 2011-02-09 20:29:26)
6. some of the errors above e.g the jasper exception … (By: Robert at 2013-09-14 23:24:57)
7. My code runs smoothly. I have taken some steps for… (By: Amit Shrivastava at 2015-01-25 13:48:46)
8. the code works fine. you just need to create a mai… (By: Nabil at 2011-08-09 11:11:03)
9. This tutorial is so simple, often we dont need to … (By: NIrjhar at 2011-12-12 03:29:52)
10. Java is very interesting,I wish to do certificate … (By: Dzunisani Chauke at 2012-11-05 10:19:35)
https://java-samples.com/showcomment.php?commentid=37208
Found Discussions

Just out of interest, how important does everyone feel doing a placement year is, instead of going straight into my third year? My other option is taking a year out and doing my own thing - whatever that might be.

Ben

They keep saying they will arrange an interview but haven't for the past three months, so I'm not holding much hope. Tried Oracle but they're full, and 3com aren't replying to my emails. Running out of ideas... might have to look at small local companies.

jonathanh wrote: Have you tried IBM? I worked there for a year between school and university, as part of a formal "student employee" programme.

So, Microsoft turned me down when I had an interview at their campus in the UK. Oracle, Sun, Apple have already filled their positions. Where else in the UK is there? Anyone know any companies still offering student placements in the UK? If you don't know, I'm currently doing a computer science degree and as part of that need to do a year in industry, so I need to find somewhere, but I think I've now left it a bit late - my fault for thinking I would get into Microsoft.

Ben

I really want an ICOP eBox as a NAS (like in the video) but sadly, being a student, I can't afford it. Sorry to jump on the thread. Just thought I would post that.

Hello, I am currently wanting to increase my programming knowledge. I understand the basics of VB, C++, and Java; however, I have never actually made anything of any use until recently, when I was given a task to develop a website server in C++ (now completed). So now I'm looking for new tasks. I want to be able to learn more about the .NET languages. I can get a lot of code examples, however that's not how I like to learn - I prefer to develop the code myself with examples of short code, just like which namespace to use etc., which I can then play around with and learn from. Learn from my code mistakes.

What I'm looking for is kind of like coursework: you are given a task to do using a particular language, and it must contain certain features. At the end, when you have completed your code, you are given the 'correct' code to compare to your own. Now am I thinking about this the wrong way? I really want to learn these languages, however I do not know where to get started. It's fine saying "Hello World", but what about after that - how do you build up? I really have no idea how you get from "Hello World" to something like, for example, Windows Media Player.

Any help/guidance welcome.

Thanks, Ben
http://channel9.msdn.com/Niners/ben2004uk/Discussions?page=31
NAME
stat, fstat, lstat − get file status

SYNOPSIS
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int stat(const char *path, struct stat *buf);
int fstat(int filedes, struct stat *buf);
int lstat(const char *path, struct stat *buf);

DESCRIPTION
The st_dev field describes the device on which this file resides. The st_size field gives the size of the file in bytes; the number of blocks actually allocated may be smaller than this size implies (for example, when the file has holes). The st_blksize field gives the "preferred" block size for efficient file system I/O.

The ‘sticky’ bit (S_ISVTX) on a directory means that a file in that directory can be renamed or deleted only by the owner of the file, by the owner of the directory, and by a privileged process.

On file systems that do not support sub-second timestamps, the nanosecond timestamp fields are returned with the value 0. For most files under the /proc directory, stat() does not return the file size in the st_size field; instead the field is returned with the value 0.

RETURN VALUE
On success, zero is returned. On error, −1 is returned, and errno is set appropriately.

ERRORS
ENAMETOOLONG  File name too long.
ENOTDIR       A component of the path is not a directory.

SEE ALSO
access(2), chmod(2), chown(2), fstatat(2), readlink(2), utime(2), capabilities(7)
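As an illustration (not part of the manual page), a minimal program using stat() might look like this; the path is supplied on the command line:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    struct stat sb;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    /* stat() follows symbolic links; use lstat() to inspect a link itself. */
    if (stat(argv[1], &sb) == -1) {
        perror("stat");
        exit(EXIT_FAILURE);
    }
    printf("size: %lld bytes\n", (long long) sb.st_size);
    printf("preferred I/O block size: %ld bytes\n", (long) sb.st_blksize);
    return 0;
}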
https://manpag.es/YDL61/2+stat
Common macros and compiler attributes/pragmas configuration.

Definition in file kernel_defines.h.

#include <stddef.h>
#include <stdint.h>

Go to the source code of this file.

ARRAY_SIZE: Calculate the number of elements in a static array. Definition in file kernel_defines.h.

DECLARE_CONSTANT: Declare a constant named identifier as an anonymous enum that has the value const_expr. This turns any expression that is constant and known at compile time into a formal compile time constant. This allows e.g. using non-formally but still constant expressions in static_assert(). Definition at line 224 of file kernel_defines.h.

index_of: Returns the index of a pointer to an array element. Definition at line 74 of file kernel_defines.h.

IS_ACTIVE: Allows verifying a macro definition outside the preprocessor. This macro is based on Linux's clever 'IS_BUILTIN'. It takes a macro value that may be defined to 1 or not even defined (e.g. FEATURE_FOO) and then expands it to an expression that can be used in C code, either 1 or 0. The advantage of using this is that the compiler sees all the code, so checks can be performed, and sections that would not be executed are removed during optimization. Definition at line 155 of file kernel_defines.h.

IS_CT_CONSTANT: Check if a given variable or expression is detected as a compile time constant. This will return 0 if the compiler in use does not support this. It allows providing two different implementations in C, with one being more efficient if constant folding is used. Definition at line 240 of file kernel_defines.h.

IS_USED: Checks whether a module is being used or not. Can be used in C conditionals. Definition in file kernel_defines.h.

RIOT_VERSION_NUM: Generates a 64 bit variable of a release version. Comparisons to this only apply to released branches. To define an extra version, add a file EXTRAVERSION to the RIOT root with the content RIOT_EXTRAVERSION = <extra>, with <extra> being the number of your local version. This can be useful if you are maintaining a downstream release to base further work on. Definition in file kernel_defines.h.

WITHOUT_PEDANTIC: Disable -Wpedantic for the argument, but restore diagnostic settings afterwards. This is particularly useful when declaring non-strictly conforming preprocessor macros, as the diagnostics need to be disabled where the macro is evaluated, not where the macro is declared. Definition at line 208 of file kernel_defines.h.
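The following short sketch illustrates how ARRAY_SIZE and IS_USED are typically combined; MODULE_FOO and do_foo() are made-up names for the illustration:

#include <stddef.h>
#include "kernel_defines.h"

extern void do_foo(int value);   /* hypothetical module function */

static const int thresholds[] = { 10, 20, 42 };

void check(void)
{
    /* ARRAY_SIZE expands to the element count of the static array. */
    for (size_t i = 0; i < ARRAY_SIZE(thresholds); i++) {
        /* IS_USED(MODULE_FOO) becomes the constant 1 or 0, so the dead
         * branch is still compiled (and type-checked) but optimized away. */
        if (IS_USED(MODULE_FOO)) {
            do_foo(thresholds[i]);
        }
    }
}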
https://api.riot-os.org/kernel__defines_8h.html
Typically, a stack that you allocate yourself cannot be freed until the pthread_join(3T) call for that thread has returned, because the thread's stack cannot be freed until the thread has terminated. The only reliable way to know if such a thread has terminated is through pthread_join(3T).

The threads library provides the macro PTHREAD_STACK_MIN, which returns the amount of stack space required for a thread that executes a NULL procedure. Useful threads need more than this, so be very careful when reducing the stack size.

#include <pthread.h>

pthread_attr_t tattr;
pthread_t tid;
int ret;
int size = PTHREAD_STACK_MIN + 0x4000;

/* initialized with default attributes */
ret = pthread_attr_init(&tattr);

/* setting the size of the stack also */
ret = pthread_attr_setstacksize(&tattr, size);

/* only size specified in tattr */
ret = pthread_create(&tid, &tattr, start_routine, arg);

When you allocate your own stack, be sure to append a red zone to its end by calling mprotect(2).
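A sketch of the red-zone idea follows; the sizes and flags are illustrative, and MAP_ANON may be spelled MAP_ANONYMOUS on some systems:

#include <sys/mman.h>
#include <unistd.h>

/* Allocate a stack with mmap and revoke access to its last page, so an
 * overflow faults immediately instead of silently corrupting memory. */
void *make_stack(size_t size)
{
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    void *base = mmap(NULL, size + page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    /* red zone at the low end, since stacks usually grow downward */
    mprotect(base, page, PROT_NONE);
    return (char *) base + page;
}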
http://docs.oracle.com/cd/E19620-01/805-5080/attrib-33670/index.html
In most of the projects I've worked on, there has usually been a need for events to be triggered at various absolute times. For example, every Friday at 5:00 PM, or every hour at 15 after. The .NET native timers only support a relative timing interface. You can specify how long you need to wait from the current time. It's easy to calculate for simple events, but the code can become convoluted once you begin dealing with more complicated schedules. This is an attempt to write a simple set of scheduling primitives to simplify building more complicated schedules. Handling human oriented schedules is just one of the goals for this timer. Automatic recovery, event logging, and resolving concurrency issues are also goals. Scheduling batch operations is a common yet often overlooked programming task. Many applications have the need to send out batches of emails, or generate reports at fixed times. The native timers that come with .NET are designed to operate like a hardware timer, going off at a fixed rate from the time they were started. This is fine for many applications, but can be inconvenient when you have to schedule events at a fixed time each day, or at alternating intervals, let alone trying to manage something which only occurs on weekdays. After having to write custom logic to handle these operations, I figured a more general solution was in order. Creating a timer and scheduling events is just one part of the problem that has to be solved. Every automatic process needs someone to maintain it and restart it when it stops, rerun it when events are skipped, and debug it when it just won't do what it is supposed to. The great thing about these processes is that they remove the human element from making sure that these things are taken care of. The downside is that when they fail they can go unnoticed for untold periods of time. So a good timer not only needs to be able to handle the craziest schedule that you can throw at it, it also needs to be extremely failure resistant, and provide a means to notify its operators when things go wrong. Error handling comes in two forms, first event handlers throwing exceptions shouldn't be able to shut down the process. Second the timer needs a way to recover from system down time like power outages and similar failures. Both of these operations should be managed separately from the events themselves. Some of the .NET native timers have properties and complications that are not clearly documented. For example, the System.Threading timer uses threads from the thread pool to run events on. This means that if an event handler runs too long, then other handlers can start up while the first one is running on a separate thread. If you haven't explicitly made sure your process is thread safe, then you can have some really difficult to track down errors. Our timer should allow this to be controlled easily from the timer, rather than forcing the event handler to deal with this, or creating a new object to deal with synchronization issues. System.Threading When I originally wrote this timer, I used the System.Threading timer as a model. This had the limit of having a single event associated with it. Since then, I have had many requests to add support for scheduling multiple events off the same timer. At first I didn't really like the idea because it made the whole process more complicated, and I really prefer simple operations. However, after writing a few consumers of this timer, I realized how much easier it is to code against. 
You only have one object to start and stop, so I've warmed up to the idea since it makes the clients simpler.

One of my favorite features of the .NET framework is the BeginInvoke/EndInvoke operations on each delegate. I thought it would be extremely cool to just provide a method similar to those on a delegate with a schedule, which just ran a function on a specific schedule. Or in general, something like delegate.EventInvoke(EventSource, function parameters here);. Unfortunately the delegate operation is too closely integrated into the various language compilers to handle things like this. So, one of the things I want to do with this timer is mimic this operation as closely as I possibly can.

So now we have the requirements: handle human-oriented absolute schedules, recover from missed events and outages, report errors without shutting down, control concurrent event execution, support multiple events per timer, and mimic the delegate invocation style as closely as possible.

This section details some of the timer methods I have seen and some of my early attempts, along with the motivations for the current design.

I've seen many processes running under the NT task scheduler. In many cases this has been either a script or a regular executable running as a task. This has the advantage of being integrated into the operating system and easy to set up and configure, with a built-in UI. Management is more difficult; error handling and recovery is only as good as the process being executed. Further, each separate schedule is tied to a specific process. In many cases there were VB and DCOM applications scheduled this way, which required an interactive user logged into the system for anything to run correctly, even though this was due more to a lack of configuration know-how than anything else.

For those of us with access to a SQL Server, the SQL Server agent and DTS provide an excellent platform for scheduling operations. It has both a UI and a programmatic interface for scheduling operations. It can run processes or execute SQL statements, and it includes logging, error recovery workflow, and notification management. If you have access to this, then there are a lot of strong reasons to use these tools. The one real downside is that your process will be competing for resources with the SQL Server. SQL Server runs best when it has an entire system dedicated to itself, so it should really be limited to operations that run much more efficiently on the same system, or those that don't consume many resources.

The third common scheduling method is to create a Windows service. This provides the most flexibility with the fewest built-in features. All the framework provides you with is an installer, and the ability to start and stop your service. All the error handling, reporting, configuration and other details are left up to you. Services which shut down as soon as an exception is thrown, or hang when someone tries to stop the service, occur all too often. Another common problem is that the batch actions are not restartable, and after a failure, manual actions need to be taken to get systems back into the correct state before the service can be restarted. On top of the minimal support from the service infrastructure, the only timers available are the .NET pulse type timers. I have typically seen them used to create a series of events at a fixed rate, say every 5 minutes. Then a switch statement, or a lookup, is done on every event comparing the batch execution time to the current time, and if it is within the execution window, then the batch process is run, as sketched below.
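As an illustration of this polling pattern (a sketch only; the handler and batch method names are invented):

using System;
using System.Timers;

public class NaivePoller
{
    // Naive pattern: a pulse timer fires every five minutes and the handler
    // compares the signal time against each batch's execution window.
    private Timer pulse = new Timer(5 * 60 * 1000);

    public void Start()
    {
        pulse.Elapsed += new ElapsedEventHandler(OnPulse);
        pulse.Start();
    }

    private void OnPulse(object sender, ElapsedEventArgs e)
    {
        // Runs sometime between 12:00 and 12:05, depending on start time.
        if (e.SignalTime.Hour == 12 && e.SignalTime.Minute < 5)
            RunNoonBatch();
    }

    private void RunNoonBatch() { /* hypothetical batch job */ }
}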
One problem with this approach is that it only guarantees that the batch will fire sometime within the required window. So if your pulse rate is every 15 minutes and you schedule something for 12:00, then it will run sometime between 12:00 and 12:15, depending on when the service was started. You can compensate by making the pulse frequency faster. However, this increases the risk that the event can be missed, if a higher priority thread is running for the entire pulse period, for example.

To handle these possibilities, we need a timer that knows when it misses a beat, so even if it fires late it won't drop any scheduled events. The timer can do this by maintaining an event history. It records the last time it fired and finds all the events that occurred between then and now. It fires all of them before it waits until the next event needs to fire. Not only does this prevent missing events while the timer is running, it can help the timer recover from outages if the state is persisted somewhere.

This timer should make the minimal possible assumption about each of the handlers that is hooked up to it. This means that every event should be wrapped in an exception handler. An error event is provided for clients to hook into and handle as they need to.

Preventing concurrent event operations: if you use the System.Timers.Timer class, each of your events will occur on a thread from the thread pool. Many times, I've seen applications use this timer, and everything works fine in development, but the reports and data processed are all screwed up in production. This is because the default settings for this timer allow events to occur concurrently on different threads. What ends up happening is that the batch process takes longer in production and you end up having multiple batch processes running at the same time. The timer allows you to provide a SynchronizingObject to prevent events from occurring concurrently. This keeps them from executing at the same time, but doesn't let us control how these duplicate events are handled. This depends on the event we are dealing with. The correct solution might be to let them run concurrently, skip the overlapping event completely, or queue the overlapping event to run as soon as the current event finishes running.

When I originally wrote this timer, I provided a simple event driven interface to set a single event with a single schedule per timer, since that is how the .NET timers were set up. After several requests, I've added methods to schedule multiple independent events with different schedules on the same timer. This allows the start and stop methods on a single timer to control all the events.

Each schedule needs to provide two similar operations in order to be scheduled. First, return the next time they will fire after a particular time. This is used by the timer to figure out how long to wait before the next event. Second, find all the events that are fired in a particular time interval. This is used to call all the proper events when the timer goes off. This is represented in the IScheduledItem interface.

public interface IScheduledItem
{
    void AddEventsInInterval(DateTime Begin, DateTime End, ArrayList List);
    DateTime NextRunTime(DateTime time);
}

The SimpleInterval class models a simple pulse timer. Its constructor takes two parameters, an absolute start time and a TimeSpan for the interval between events.
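For instance, a schedule that pulses every 20 minutes could be built like this (a sketch using the constructor just described; the dates are arbitrary):

// SimpleInterval fires every 20 minutes, with the phase anchored at the start time.
IScheduledItem pulse = new SimpleInterval(
    new DateTime(2003, 1, 1), TimeSpan.FromMinutes(20));

// When will it next fire after "now"?
DateTime next = pulse.NextRunTime(DateTime.Now);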
It is more general than the ScheduledTime object because any interval can be scheduled.

The ScheduledTime class models a timer that goes off at one of several fixed rates like monthly, daily, weekly or hourly. It makes it easier to schedule things at a more human-oriented rate, like at 6:00 AM every Thursday. The SingleEvent class models a timer which fires once at a fixed time and then is inactive. EventQueue takes several schedules and provides the union of them. So if you need to execute an event every day at 5:00 AM and 7:00 PM, you could create schedules for the two events and add them both to an EventQueue object. BlockWrapper is a scheduler for a very specific operation. It limits another schedule to only fire within a repeating range of time. This is used primarily to manage something that will only run on weekdays, weekends, or only during business hours.

I've tried to keep the interface as close to the native .NET System.Timers.Timer object as possible. However, the native event args are sealed and not publicly creatable, so I had to create a separate delegate and event argument definition. Here is a simple example of using the timer to run once a day at 5:00 PM:

ScheduleTimer TickTimer = new ScheduleTimer();
TickTimer.Events.Add(new Schedule.ScheduledTime("Daily", "5:00 PM"));
TickTimer.Elapsed += new ScheduledEventHandler(TickTimer_Elapsed);
TickTimer.Start();

The AddJob method on the timer is used to add multiple events or jobs to the timer. The first overload is the easiest to use: it takes three parameters, the schedule, a delegate, and an optional array of the parameters to pass to the method. In order to give you a little more flexibility, you don't have to specify all the parameters of your method. If there are unspecified DateTime or object parameters, the object firing the event and the time this event should have run are passed in. This preserves the .NET EventArgs calling convention, while giving you the freedom of passing additional parameters in to your events.

If you need more control, then you can create your own TimerJob specifying the exact type of MethodCall you need for your events. The regular AddJob method synchronizes the jobs so that only one job is executed at once. If your jobs can run concurrently then you can add them with the AddAsyncJob method.

The timer provides an Error event handler. If you don't add a handler to this event you won't be notified of any exceptions thrown by your event handlers.

Recovery or state persistence is the ability to automatically run jobs that were missed because of a service outage. This is disabled by default because it requires storing the last execution time in an application specific manner. To add application specific storage, you just need to implement the following interface:

public interface IEventStorage
{
    void RecordLastTime(DateTime Time);
    DateTime ReadLastTime();
}

I've provided three implementations of this. The default is LocalEventStorage, which stores the last event time in memory, so as long as the timer stays in memory it will make sure every event fires.
If you don't want any recovery then you can assign the NullEventStorage class like so:

timer.EventStorage = new NullEventStorage();

I've also provided a simple XML file based event storage class which can be used for things like services, but if you are really concerned about recovery you should implement your own.

The .NET delegates can be really useful for providing callbacks to objects because they let you store an object and call a specific method on that object as if it was a simple static method. This allows generic operations which only depend on a particular method signature. The downside of this is that if your method doesn't match the signature, you need to write a wrapper method, or write a wrapper class if you don't have the source to the object. C# 2.0 gets around this with anonymous delegates, but in the meantime I've written a few classes to simplify building a delegate by partially passing parameters to a method. Let's say we want to schedule a method that takes a report ID as a parameter.

public delegate void GenerateReport(int reportID);

public class Report
{
    public static void Generate(int reportID) {}
}

public class EventWrapper
{
    public EventWrapper(int reportID, GenerateReport report)
    {
        mReportID = reportID;
        mReport = report;
    }

    public void EventHandler(object src, EventArgs e)
    {
        mReport(mReportID);
    }

    int mReportID;
    GenerateReport mReport;
}

Using the MethodCall objects, we can just write something like this:

IMethodCall call = new DelegateMethodCall(new GenerateReport(Report.Generate), 10);
obj.Event = new EventType(call.EventHandler);

Also, using some of the parameter setter objects, we can bind parameters to the method based on the name instead of order and type.

TickTimer.Events.Add(new Schedule.ScheduledTime("BySecond", "0"));
TickTimer.Events.Add(new Schedule.ScheduledTime("ByMinute", "15,0"));
TickTimer.Events.Add(new Schedule.ScheduledTime("Weekly", "1,6:00AM"));
TickTimer.Events.Add(new Schedule.SingleEvent(DateTime.Parse("6/27/2008 6:00")));
TickTimer.Events.Add(new Schedule.SimpleInterval(DateTime.Parse("1/1/2003"), TimeSpan.FromMinutes(12)));
TickTimer.Events.Add(
    new Schedule.BlockWrapper(
        new Schedule.SimpleInterval(DateTime.Parse("1/1/2003"), TimeSpan.FromMinutes(15)),
        "Daily", "6:00 AM", "5:00 PM"
    )
);

For the sample project I wrote a simple alarm clock. Just to jazz it up a little bit, I made it transparent and always visible. It was remarkably easy to add this functionality to a .NET Forms application. I just had to set the form opacity and hide the normal Windows frame. It was just as easy to override the mouse down and up handlers to make the entire window draggable, as well as add a context menu to close down and set the schedule. Since I mostly code ASP.NET applications, I was surprised that there wasn't an easy way to store dynamic state information in a form application. It's very simple to just read the data out of the app.config file, but there isn't a simple API to update that data. It was just as easy to hack together a simple class for this application to store data in an XML file.
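For example, a minimal file-based implementation of the IEventStorage interface might look like the following (a sketch only; error handling and locking are omitted, and the file name is arbitrary):

public class FileEventStorage : IEventStorage
{
    private string _path = "lastEvent.txt";

    // Persist the last successfully handled event time.
    public void RecordLastTime(DateTime Time)
    {
        System.IO.File.WriteAllText(_path, Time.ToString("o"));
    }

    // Return the stored time, or the current time if nothing was recorded yet.
    public DateTime ReadLastTime()
    {
        if (!System.IO.File.Exists(_path))
            return DateTime.Now;
        return DateTime.Parse(System.IO.File.ReadAllText(_path));
    }
}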
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

The code fragments below are from the article's discussion threads.

Private Delegate Sub dAlarm(tick As DateTime, x1 As String, x2 As String, x3 As String)

Private Sub SetAlarmTimer()
    _AlarmTimer.[Stop]()
    _AlarmTimer.ClearJobs()
    _AlarmTimer.AddJob(AlarmTime, New dAlarm(AddressOf _AlarmTimer_Elapsed), "Test1", "Test2", "Test3")
    _AlarmTimer.Start()
End Sub

Private Sub _AlarmTimer_Elapsed(time As DateTime, x1 As String, x2 As String, x3 As String)
    _Flashing = True
    BackColor = NormalBackColor
    _LastBackColor = AlarmColor
End Sub

ScheduleTimer TickTimer = new ScheduleTimer();
TickTimer.Elapsed += new ScheduledEventHandler(Test);
ScheduledTime adi = new Schedule.ScheduledTime("Daily", "3:37 AM");
TickTimer.AddEvent(adi);
TickTimer.Start();

private void Test(object myObject, ScheduledEventArgs e)

uint msRemaining = (uint)(_Interval.TotalMilliseconds -
    ((uint)Span.TotalMilliseconds % (uint)_Interval.TotalMilliseconds));

double msRemaining = (double)(_Interval.TotalMilliseconds -
    ((double)Span.TotalMilliseconds % (double)_Interval.TotalMilliseconds));

bool A = temp > end, B = temp < begin, C = end < begin;

bool A = (temp - end).TotalMilliseconds > 1,   //temp > end
     B = (begin - temp).TotalMilliseconds > 1, //temp < begin
     C = end < begin;

for (int jj = 1; jj < 8; jj++)
{
    if (actdays[jj] == true)
    {
        string TickHourMin = String.Format("{0},{1}:{2}", jj-1, acthour, actminute);
        TickTimer.Elapsed += new Schedule.ScheduledEventHandler(TimeEvent_Handler);
        TickTimer.AddEvent(new Schedule.ScheduledTime("Weekly", TickHourMin));
        Debug.WriteLine("set up weekly event = " + TickHourMin);
    }
}

_tickTimer = new ScheduleTimer();
_tickTimer.EventStorage = new NullEventStorage();
_tickTimer.Error += new ExceptionEventHandler(_TickTimer_Error);

private void _SetAndStartTimer(double pDelay)
{
    // Flush the timer, just in case
    _StopTimer();
    // Create a schedule from the configured delay
    TimeSpan delay = TimeSpan.FromSeconds(pDelay);
    _logger.Debug("Timer parametre avec " + delay.ToString());
    Schedule.SimpleInterval schedule = new SimpleInterval(DateTime.Now, delay);
    // Schedule the new timer
    _tickTimer.AddJob(schedule, new TickHandler(_TickTimer_Elapsed));
    _tickTimer.Start();
}

private void _TickTimer_Elapsed(DateTime pInstant)
{
    _logger.Debug("Timer Tick: " + pInstant.ToString());
    try
    {
        _agent.Execute();
    }
    catch (Exception ex)
    {
        // Report the error in the event log
        _eventLogger.WriteEntry(ex.Message, EventLogEntryType.Error);
        // ...and the details in the trace file
        _logger.Error(String.Format("Exception lors de l'execution de l'agent: {0}", ex.ToString()));
    }
}

Imports Schedule

Public Class Form1
    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Dim myTimer As New ScheduleTimer
        ' contrary to the "Single event" sample (at least it makes a difference in VB), the event handler
        ' must be added BEFORE you add the event to the timer, otherwise you will get a "null parameter" error
        AddHandler myTimer.Elapsed, AddressOf myScheduledSub
        myTimer.AddEvent(New ScheduledTime("Daily", "17:00"))
    End Sub

    Private Sub myScheduledSub(ByVal source As Object, ByVal e As ScheduledEventArgs)
        ' your code to run on schedule
        MsgBox("Hello")
    End Sub
End Class

public BlockWrapper GetWeekdayWrapper(IScheduledItem time)
{
    const string monday = "1,00:00:00";
    const string friday = "6,00:00:00";
    return new BlockWrapper(time, "Weekly", monday, friday);
}

uint msRemaining = (uint)(_Interval.TotalMilliseconds -
    ((uint)Span.TotalMilliseconds % (uint)_Interval.TotalMilliseconds));

double msRemaining = (double)(_Interval.TotalMilliseconds -
    ((long)span.TotalMilliseconds % (long)_Interval.TotalMilliseconds));
http://www.codeproject.com/Articles/6507/NET-Scheduled-Timer?msg=4101918
What is the difference between int i = 'hell' and int i = "hell"?

The difference is that characters enclosed in double quotes form a STRING, whereas 'hell' is not valid because only ONE character can be enclosed in single quotes; that is called a character constant.

int stands for integer, and an integer is a number and only a number. If you want to make a variable that is able to handle words, then you might be interested in learning about the char data type (stands for character) or string types.

With ' ' you're actually getting the ASCII equivalent of whatever is between the two quotes. ASCII is a standard for representing letters with numbers. That's why you're able to write 'A' and get a number as an output (65).

With " ", in a nutshell, you get a literal way of representing text that may or may not change. It's a bit confusing at first glance, but one can think of it as a version of ' ' that can accept more than one character at a time.

cout << i; wouldn't work because cout hasn't been declared for use in the program, although you can write std::cout to call that function. In order to use cout unqualified, you'll have to put using namespace std; before main.
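A small sketch makes the difference visible (the value printed for the multi-character literal varies by compiler, so treat it as illustrative):

#include <iostream>
#include <string>

int main()
{
    char c = 'A';            // character constant: one char, ASCII value 65
    std::string s = "hell";  // double quotes: a string of four characters
    int m = 'hell';          // multi-character literal: compiles, but its
                             // value is implementation-defined, so avoid it
    std::cout << static_cast<int>(c) << ' ' << s << ' ' << m << '\n';
    return 0;
}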
http://www.cplusplus.com/forum/beginner/81668/
Michael,

Very good observations. I applied the patch immediately. See below: (Dmitri's feedback required)

----- Original Message -----
From: <mratliff@collegenet.com>
To: <cocoon-dev@xml.apache.org>; <ivelin@apache.org>
Sent: Thursday, May 23, 2002 12:51 AM
Subject: Re: [Announcement] XMLForm 0.8.2 released

> Ivelin,
>
> >>> >>Report is supported. Report is simply a negation of assert.
> >>>
> >>> Just double checked this again. Report doesn't work for me. This works:
> >>> <assert test="contains(not(.,'$'))">
> >>> but this doesn't:
> >>> <report test="contains(.,'$')">
>
> >>I've tested it a bit but maybe there is a bug. Feel free to save XMLForm
> >>again :)
> >>The implementation is one line, so I can't immediately see where the problem
> >>might be.
>
> >>> ><assert >cannot contain <myns:name/> </assert>
> >>>
> >>> This would be *great* !! But unfortunately, doesn't work for me. Thought
> >>it might be because my tags didn't have their own namespace so I tried:
> >>> <assert test=".!=''">Please provide a <cnet:some_tag /></assert>
> >>> but the <some_tag> element just gets stripped out (by the validation
> >>> transformer?).
>
> >>Strange. Not intentional. Maybe another bug that waits to be busted.
>
> > I'll try to look into these problems this coming week...
>
> I dug into the code to see what was going on. Here's what I found out...
>
> 1) <report> was definitely supported and definitely not working. The problem
> was in the bindRules method of org.apache.cocoon.validation.SchematronFactory.java,
> where you create the SchematronSchema object that gets processed by the Schematron
> Validator. You call bindAsserts to add asserts to the SchematronSchema object, but
> never call bindReports. So all that nice code you wrote for handling <report> never
> gets called. Once I fixed this little "buglet" (and corrected the spelling of
> "bindRerports" [sic]), everything worked fine. I have sent you a patched file...

Thank you. Valid patch. Applied.
Not high on my priority list: I need #2 > more than #3. Any other takers? Absolutely doable. Any takers ;) Thanks for your support, Ivelin > > Cheers, > --Michael > > > > > --------------------------------------------------------------------- >
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200205.mbox/%3C023a01c2025b$592d0f80$0c91fea9@galina%3E
GroovyWS

There is some killer Groovy support for Web Services in the Groovy WS module. GroovyWS incorporates Apache CXF to help you quickly consume, publish, and test WS-I compliant web services. GroovyWS can even use secured web-services. To invoke a web service using a complex type, simply write the method's signature and its parameters, and GroovyWS will construct the proper SOAP message and invoke the remote web service during runtime. The complex types are automatically generated from the WSDL, compiled, and made available through your classloader. The client API provides a method to easily instantiate a complex object from its class name. GroovyWS makes life easier by logging the names of classes that are generated on the fly. The client-side of GroovyWS integrates seamlessly with Grails and Griffon applications.

Scriptom

This module combines the "syntactical sugar" of Groovy with the Jacob (Java COM Bridge) library to use ActiveX or COM Windows components from Groovy. It takes advantage of Groovy's dynamic (late-bound) principles to map COM objects into Groovy objects at runtime; therefore, you don't need to know a lot about COM in order to use Scriptom. You also don't need to deal with type-libraries and there are no wrappers to maintain. Just code it and run it. Scriptom got its name because it is script-like in the sense that it resembles writing code using VBScript (but with the more advanced Groovy language). Among other things, Scriptom can be used to automate Word or Excel documents, control Internet Explorer, make your PC talk using the Microsoft Speech API, monitor processes with WMI (Windows Management Instrumentation), or browse the Windows Registry using WShell. Talking to custom VB6 or Microsoft .NET libraries is also much easier with Scriptom.

Groovy Monkey

If you are working on automating tasks in Eclipse or doing plugin development in general, this is the tool for you! Groovy Monkey is a branch of the Eclipse Monkey software based on the Eclipse Jobs API. It lets users quickly try parts of the Eclipse API without the overhead of a plugin or a separate runtime instance. It is a dynamic scripting tool for writing quick and reusable functionality (i.e. task automation) to make life easier with Eclipse. Groovy Monkey can even be used to translate the quick and dirty work into a plugin. Because Groovy Monkey is based on the Eclipse API, you can seamlessly monitor the progress in the platform and write scripts that users can cancel midway through. Groovy Monkey is also based on the Apache Bean Scripting Framework (BSF) and OSGi. BSF lets you write scripts in several languages (Beanshell, Ruby, Python), not just Groovy! (But why wouldn't you want to write in Groovy? ;-) ) The OSGi framework lets Groovy Monkey add the classloader of any bundle on the workbench to a script's classloader. It also allows Groovy Monkey to do a white box introspection of running bundles/plugins.

Groosh

Groosh is a Unix-like shell that is written in Groovy and can be used with Grapes. Grape is the infrastructure that enables the grab() calls in Groovy that leverage Apache Ivy, allowing a repository driven module system for Groovy. With Grapes (@Grab), Groosh can be a powerful alternative to regular shell scripts.
Here is a simple example of Groosh in action:

//Read a text file and write it to stdout
@Grapes([
    @Grab(group='org.codehaus.groovy.modules', module='groosh', version='[0.3.6,)'),
    @GrabConfig(systemClassLoader=true)
])
import groosh.Groosh
Groosh.withGroosh(this)

cat('test_scripts/blah.txt') >> stdout

GroovySWT

For GUI building, Groovy has a wrapper for the Eclipse Standard Widget Toolkit (SWT). GroovySWT lets you easily write Eclipse SWT applications right in Groovy's builder mechanism. Using Groovy instead of Java for SWT applications can significantly reduce the amount of code needed. Here is some SWT code using native Groovy:

import org.eclipse.swt.SWT
import org.eclipse.swt.widgets.*
import org.eclipse.swt.layout.RowLayout as Layout

def display = new Display()
def shell = new Shell(display)
shell.layout = new Layout(SWT.VERTICAL)
shell.text = 'Groovy / SWT Test'

def label = new Label(shell, SWT.NONE)
label.text = 'Simple demo of Groovy and SWT'

shell.defaultButton = new Button(shell, SWT.PUSH)
shell.defaultButton.text = ' Push Me '

shell.pack()
shell.open()
while (!shell.disposed) {
    if (!shell.display.readAndDispatch())
        shell.display.sleep()
}

When you run the script, the result is a small window showing the label and the 'Push Me' button.

Griffon will allow the use of GroovySWT in the very near future.

_________

Many thanks to Andres Almiray for helping me with this list. Have you tried any of these tools yet? Think there's another Groovy module that's more deserving of attention? Tell us what you think, Groovy-lover.
http://groovy.dzone.com/articles/best-groovy-projects-you-might
CC-MAIN-2014-49
refinedweb
780
65.22
21 December 2012 19:26 [Source: ICIS news] NEW YORK (ICIS)--The contract price for US paraxylene (PX) rolled over in December at 76.50 cents/lb. The US PX contract follows the PX Asian Contract Price (ACP), which settled lower in December by $16/tonne to $1,550/tonne CFR (cost & freight). US PX spot export prices were notionally down in the week by 1 cent/lb to 70-71 cents/lb FOB (free on board), based on spot PX prices in Asia, which were assessed lower on the back of persistently strong buying resistance and limited liquidity in the Asian spot market. PX is said to be in short supply for 2013, and prices for PX and downstream polyethylene terephthalate (PET) are expected to rise as demand kicks in during the first quarter, sources said. PX is primarily used to make purified terephthalic acid (PTA), an intermediate chemical used in the production of PET. A major outlet for PET is in the production of plastic bottles for beverages. Major US PX producers include BP Chemicals, ExxonMobil Chemical, Chevron Phillips
http://www.icis.com/Articles/2012/12/21/9627014/us-px-rolls-over-in-december-at-76.50-centslb.html
CC-MAIN-2014-49
refinedweb
170
52.73
See also: IRC log <whenry> I'm calling from Skype in Ireland and it's been echoing all day. I've been on vacation. <scribe> scribeNick: GlenD <monica> can get now <monica> couldn't earlier Minutes are approved. Paul chairs next telcon F2F in SF - please register if you haven't Umit: Docs not ready until next week - two actions outstanding. Others committed and closed. ... Need to check in with Asir - better idea after today's meeting. Prasad: Should have docs ready... Chris: Next week ok? Umit: Yes, next Wed should be no problem. 166 - publication primer + guidelines... DONE <scribe> ACTION: Felix to update the public web pages to point to the refreshed documents. [recorded in] <trackbot> Created ACTION-177 - Update the public web pages to point to the refreshed documents. [on Felix Sasaki - due 2007-01-10]. ACTION-170 - DONE ACTION-171 - - PENDING ACTION-172 - - PENDING Chris: Done by next week? Asir: Possible, yes ACTION-173 - - PENDING <abbie> good for u Chris: By EOW ACTION-174 - - PENDING ACTION-175 - - DONE ACTION-176 - - DONE a) C14N 1.1 Last Call review announcement, XML Core WG Chris: status unknown, carry this over to next wk ISSUE 4069: Updating References for Use of xml:id Monica: This only affects the primer. We added changes that affect 2.8 in the primer - 1) add a reference to xml:id as a third reference mechanism, and 2) add an example of doing so. ... In sec 3.2 (now 3.6) add a chance which acknowledges the use of xml:id. MS indicates they're fine with this. <monica> Chris: Any objection to closing 4069 with this proposal? <silence> RESOLUTION: Close issue 4069 with the proposal from Monica () <scribe> NEW ISSUE 4128: Add References to WSDL 1.1 and WSDL 2.0 Component Syntax, Ashok <asir> 4069 related editorial action is Ashok explains the issue referenced in the email. Chris: Can we just turn this over to the editors? Ashok: Fine by me. ... Dan mentioned we also need to explain how effective policy is calculated when using these domain expressions. Chris: Can we split this, just add the refs now and then have another issue for wording on effective policy? <umit> +1 to splitting this issue Ashok: Dan and I have already agreed on some words... Asir: But those were in a different context, should make sure they make sense now. ... Refs appear to be in 3.4.1 - para right b4 second example... Ashok: Still need at least WSDL 1.1 ... Stylistically, would be nice to have the refs at the first use Chris: Adding reference to the note we're working on seems a done deal. Not closed yet on effective policy wording. Ashok / Dan meld minds to recall agreement or not Chris: OK, let's take this offline, and do the editorial changes for the references now. (Asir adds editorial action for the team to add references earlier) <asir> related editorial action is c) NEW ISSUE 4129: Attaching Policies to EPRs, Ashok <scribe> ACTION: Ashok and Dan to work to come to consensus wording describing effective policy calculation with respect to issue 4128. [recorded in] <trackbot> Created ACTION-178 - And Dan to work to come to consensus wording describing effective policy calculation with respect to issue 4128. [on Ashok Malhotra - due 2007-01-10]. <umit> that is my recollection as well, Glen. <cferris> glen: this proposal needs more discussion... 
at best is not a complete solution <cferris> ashok: how should we proceed <cferris> glen: fine with raising the issue though <asir> F2F discussion pointer - Glen: The issue was that EPR equivalence becomes an issue when trying to implement the proposed solution. This needs to be at least addressed, if not completely solved. Asir: (recapitulates F2F minutes) Glen: Also there's no "<wsa:EndpointReference>" GED ... Propose that Ashok / Glen go off and rewrite this in some way before the group accepts the issue. <scribe> ACTION: Glen and Ashok to come up with complete wording/proposal for EPR-related LC issue regarding policy attachment. [recorded in] <trackbot> Created ACTION-179 - And Ashok to come up with complete wording/proposal for EPR-related LC issue regarding policy attachment. [on Glen Daniels - due 2007-01-10]. d) NEW ISSUE 4130: Ignorable assertion must be ignored, Ashok Ashok: Would like ignorable to be stronger - MustIgnore. ... Sergey seemed to want the stronger version. <cferris> glen: the 'able' part is intentional... it means you MAY ignore... lax enables this <cferris> glen: agree not as strong as stating that you MUST ignore, but think that is stupid <cferris> glen: prefer to leave as is Umit: +1 to Glen. Had lots of discussion about partitioning clients, and how some want to use the assertion, and others don't even know about it... so we should leave it as is to support this. <Zakim> cferris, you wanted to take my hat off and make my case Chris: Speaking as Chris, not as chair. ... Compelling use cases for why you want to expose assertions which don't change wire messages but may want to advertise QoS. These don't impose requirements, but they MAY want to be aware of them. ... Confidentiality example, for instance. Delivery assurance another. ... Consumer may want specifically to do intersection that uses those particular assertions. Yet there may be clients which don't care... so it's nice to have strict/lax to support these cases. <umit> +1 to the hatless Chris <FrederickHirsch> +1 to Chris, Glen Asir: We spent lots of time on this - if we want to reopen, we should have serious grounds to do so.... Ashok: If I as a server publish an ignorable assertion, I don't know whether the client will "really" ignore it or not... would like to be able to say MustIgnore <Zakim> GlenD, you wanted to mention it's the clients who don't understand these assertions which matter <cferris> glen: there is a class of assertions that are used for (e.g) configuration... that you don't want the consumer to care or know about... these should be excluded from the publically published policy <cferris> glen: not use Ignorable <SergeyB> Chris, minor clarifications : I actually told Ashok that 'lax' mode can be used now to achive the ignorability and I also said wsp: optional would be the only way to achibe ignorability if wsp:ignorable didnt exist <cferris> glen: if you mark an assertion Ignorable that actually requires some behavior on the part of the policy consumer, then yes, you will have problems <cferris> glen: if you don't mark an assertion that has no wire manifestation as Ignorable and the consumer doesn't understand the assertion, you don't have intersection and cannot interact <whenry> +1 to Glen. Use mynamespace:local or mynamespace:MustIgnore <TRutt_> +1 to Glen Glen: Ashok, can you give a use-case for MustIgnore? Why would you do that? Ashok: Legal policy or ad policy... don't want it used as policy selection process. <monica> let's remember our manners, please <whenry> And ... 
ignore it Glen: But what are they going to do with it? Ashok: Ignory stuff! Maybe choose not to talk to the service Glen: Isn't that selection? Ashok: I can put a pointer to my legal policy in there... Client isn't going to do intersection. Umit: But it might be a precursor to intersection. Ashok: I think these things are purely abstract, and you can use them to decide if you want to work with the guy... Glen: Isn't that the same thing as intersection, at least abstractly? Ashok: I don't think so. (discussion continues) (discussion of whether legal policies will be matched by policy engines or not) <sanka> PING PONG Chris: I should be free to be stupid <MarkTR> +1 to Chris <sanka> +1 to Chris from me too .. Chris: In other words, why should you overly constrain me as to whether I ignore or don't a particular assertion <monica> again: let's remember our manners, please Chris: You can "force" people to look at something by NOT marking it as ignorable Ashok: This is stuff that's "for human eyes only" Umit: If you have a legal policy which you use to decide whether or not you use a given endpoint, then you are always forced to process the legal policy, yes? Ashok: Most people don't read EULAS.... Umit: Either you choose to do something with your metadata or not. If you do, then you understand what the QName is for the legal assertion and understand how to use that assertion in determining whether or not you engage with the endpoint. ... This is like being in strict mode Sergey: Ashok seems to want the user to make the decision on a certain assertion, and not the algorithm to fail before that. We can still acheive this with lax mode. ... Tools can be configured to ask user about unrecognized assertions. Chris: EULA analogy. Forces you to click before using whatever. As the consumer, I can read it or not before I click. ... That's my choice. This is like strict/lax. <TRutt_> "for human eyes only" does not match the semantics of ignorable. Perhaps he is asking for a new attribute type "for information only", However I do not see the requirement for such a concept Glen: EULA isn't right example because you MUST click the button. EULA without a button is a better example of ignorable. ... Can Ashok and I talk about this and come back next week? Leaving this for next week. e) NEW ISSUE 4138: Normalization Algorithm is broken, Umit <Ashok> sounds good, Glen ... i'll send you mail Umit: Normalization algorithm doesn't handle some cases... example in mail. ... Recommendation is to add one line to the algorithm - when result yields conjunction containing a single assertion or a set, then it's equivalent to a single alternative which contains that result assertion/set. Fabian: +1 to Umit's issue. Prefer to have algorithm defined in terms of operators. Asir: Algorithm says to construct "normal form" which is defined elsewhere. Would like more time to consider. Umit: Normal form isn't theoretically complete. Asir: There is a sentence there... maybe this is a clarification? Umit: Shouldn't be part of the recursion... this problem only happens when recursion completes with conjunctive form with no alternatives. Chris: Everyone understand the issue? ... Please review and consider. Umit: Not tied, btw, to this particular proposal. Just want to fix the problem. f) NEW ISSUE 4141: Policy parameter definition is not accurate, Umit Umit: Framework doc is not accurate about what parameterized assertions are... ... 
Proposal is to change text to qualify parameters as "things which are not <wsp:Policy>" Chris: Policy references too, right? Umit: Cool - friendly amendment is to just watch the namespace. Dan: Isn't this only for a pre-normalized policy? Umit: I don't think so Chris: No Post-Policy-Validation Infoset... Asir: Data model is close to normal form, and description of parameter is at the data model level... ... So policy references and inclusions have already been processed <umit> This is very bad, Post schema validation infoset. Asir: So you could say "normal form" == PPVI Umit: WHOA! If *we* missed that detail, what about our readers? ... Someone looking at an assertion (XML) should be able to tell what's a parameter and what's nested/referenced.... Asir: We like the proposal, but I think we digressed here. <cferris> dan: question is whether or not this definition applies to normal form or to the definition of an assertion <cferris> dan: we may need to talk about the infoset of the assertion in its normal form <cferris> umit: that is a second level consideration <cferris> umit: we need to exclude nested policy from the definition of parameters <maryann_> so can someone propose ammendments? Chris: Consensus as to the definition of parameter, but do we need to clarify both in term of normal form AND infoset form? <scribe> ACTION: Umit and Dan to discuss resolution of 4141 and come back with an amended proposal for 2007-01-10. [recorded in] <trackbot> Created ACTION-180 - And Dan to discuss resolution of 4141 and come back with an amended proposal for 2007-01-10. [on Umit Yalcinalp - due 2007-01-10]. g) NEW ISSUE 4142: Contradictory recommendation for nesting and intersection, Umit Umit: Recommending an empty <wsp:Policy> in order to help intersection not fail. What does an empty policy really mean? ... seems to be contradictory with 4.3.2... <cferris> q Umit: Either last sentence of 4.3.2 is incorrect, or our algorithm for compatibility needs to accommodate an empty policy expression. ... We intended the former - last sentence is wrong. Maryann: Must still be an alternative which matches, so the reason for this was to supply a valid alternative for matching. Umit: So you agree with first interpretation? Maryann: Yes. Should be clearer. Umit: Might need an example somewhere in guidelines or primer. Chris: Guidelines/primer issue is secondary ... Intent was clearly not to make things fail based on inclusion/omission of empty policy, we should clean it up. Asir: +1 there's consensus that last sentence is misleading. Can we just drop it? Umit: Maryann and I can work on something and then see what the group thinks <scribe> ACTION: Umit and Maryann to come up with a new version of the wording for the last sentence of sec 4.3.2, as an amended proposal for 4142 [recorded in] <trackbot> Created ACTION-181 - And Maryann to come up with a new version of the wording for the last sentence of sec 4.3.2, as an amended proposal for 4142 [on Umit Yalcinalp - due 2007-01-10]. <scribe> ACTION: Umit to file a new issue against primer/guidelines suggesting the need for examples of intersection with/without empty wsp:Policy (see issue 4142) [recorded in] <trackbot> Created ACTION-182 - File a new issue against primer/guidelines suggesting the need for examples of intersection with/without empty wsp:Policy (see issue 4142) [on Umit Yalcinalp - due 2007-01-10]. a) ACTION-152 Review 4.4.8 with respect to the addition of the ignorable attribute ("treasure island") due December 1, David Orchard <scribe> DONE. 
See: Status: David outlined his proposal at the Dec 20 meeting. The WG has until Jan 3 to decide if it should be adopted. Chris: Can we review this by next week? Defer discussion of ACTION-152 until next week. b) (NEW) ISSUE 4041: Update primer to mention ignorable as needed, Frederick <scribe> PENDING c) [NEW ISSUE] 4103 Questionable use of Contoso Ltd in Primer, Chris Ferris See thread ending at: Chris: Contoso is registered TM of Microsoft? Asir: Can check on this. Chris: Had suggested example.com, using what a lot of other people use. ... Example.com as a company name is not registered, though. <scribe> ACTION: Asir to research ownership of "Contoso" name/trademark. [recorded in] <trackbot> Created ACTION-183 - Research ownership of "Contoso" name/trademark. [on Asir Vedamuthu - due 2007-01-10]. Chris: Should have explicit permission, at least, if we're going to use a non-W3C-registered name. <umit> why can't we use Company A as we do in the Guidelines? <cferris> ACTION: Chris to follow-up on arch of www doc's use of a company name [recorded in] <trackbot> Created ACTION-184 - Follow-up on arch of www doc's use of a company name [on Christopher Ferris - due 2007-01-10]. <Nadalin> can't use that ! <Nadalin> I have it that registerd <maryann_> aw come on <maryann_> :-) <Nadalin> NoNameCompany Chris: Next week we'll do guidelines issues and proposals for new issues. ADJOURN <asir> is taken
http://www.w3.org/2007/01/03-ws-policy-minutes.html
crawl-002
refinedweb
2,617
65.42
Important: Please read the Qt Code of Conduct - [Solved] Compiler Problems (Qt 5.1, GCC compiler x86 64 bits)

Hi there. I've spent some time on a problem with my compiler. I have code such as this:

@#ifndef CEDGE_H
#define CEDGE_H
#include "cgraphmanager.h"
#include <QtCore>
#include <QtGui>
class CEdge
{
    friend class CGraphManager;
public:
    CGraphManager* m_pGraphManager;
    virtual void DrawEdge(QPaintEvent *event);
    unsigned long long GetKey()const;
    unsigned long GetKeyFirst()const;
    unsigned long GetKeySecond()const;
protected:
    union
    {
        struct
        {
            unsigned long m_ulIDFirst;
            unsigned long m_ulIDSecond;
        };
        unsigned long long m_ullID;
    };
    CEdge();
    virtual ~CEdge();
};
#endif // CEDGE_H
@

and the .cpp:

@#include "cedge.h"
#include "cvertex.h"

CEdge::CEdge()
{
}

CEdge::~CEdge(){
}

void CEdge::DrawEdge(QPaintEvent *event){
    QPainter EdgePainter=new QPainter();
    CVertex* pVFirst,*pVSecond;
    pVFirst=m_pGraphManager->CreateVertex(m_ulIDFirst);
    pVSecond=m_pGraphManager->CreateVertex(m_ulIDSecond);
    EdgePainter.drawLine(pVFirst->m_fx,pVFirst->m_fy,pVSecond->m_fx,pVSecond->m_fy);
    //Utility method still to be implemented for drawing the arrows. DrawArrows(bool) will enable or disable the display of arrows
}

unsigned long long CEdge::GetKey()const
{
    return m_ullID;
}
unsigned long CEdge::GetKeyFirst()const
{
    return m_ulIDFirst;
}
unsigned long CEdge::GetKeySecond()const
{
    return m_ulIDSecond;
}
@

It is one of many classes and the simplest, and the .pro of the static library is something like this:

@#-------------------------------------------------
# Project created by QtCreator 2014-03-19T23:25:37
#-------------------------------------------------
QT += core gui
TARGET = GraphLib
TEMPLATE = lib
CONFIG += staticlib
CONFIG += c++11
QMAKE_CXXFLAGS += -std=c++11
SOURCES += graphlib.cpp cvertex.cpp cedge.cpp cgraph.cpp cgraphmanager.cpp
HEADERS += graphlib.h cvertex.h cedge.h cgraph.h cgraphmanager.h
unix:!symbian {
    maemo5 {
        target.path = /opt/usr/lib
    } else {
        target.path = /usr/lib
    }
    INSTALLS += target
}
win32:CONFIG(release, debug|release): LIBS += -L$$OUT_PWD/release/ -lGraphLib
else:win32:CONFIG(debug, debug|release): LIBS += -L$$OUT_PWD/debug/ -lGraphLib
else:unix: LIBS += -L$$OUT_PWD/ -lGraphLib
INCLUDEPATH += $$PWD/
DEPENDPATH += $$PWD/
win32:CONFIG(release, debug|release): PRE_TARGETDEPS += $$OUT_PWD/release/GraphLib.lib
else:win32:CONFIG(debug, debug|release): PRE_TARGETDEPS += $$OUT_PWD/debug/GraphLib.lib
else:unix: PRE_TARGETDEPS += $$OUT_PWD/libGraphLib.a
@

Sometimes my project gives me an error at build time, like undefined reference to vtable for CVertex, or says that QtGui doesn't exist. But the strange thing is that it is intermittent. I restart my computer, open Qt again and build the project, and the errors don't show. Only when I rebuild all do the issues appear again. Any idea about this problem? Thank You

bq. undefined reference to vtable for ...

I've seen this error in two cases:
- If you forget to add widgets to QT (QT += widgets) in Qt5
- If you declare and implement a class in the same file and forget to include the .moc file at the end of the file.

Also you can take a look "on these questions and answers": If you use Qt5 then the second error can be related to the fact that you forgot to add widgets to the QT variable.

- JKSH Moderators last edited by
[quote author="andreyc" date="1395975720"]I've seen this error in two cases:
If you forget to add widgets to QT (QT += widgets) in Qt5; if you declare and implement a class in the same file and forget to include the .moc file at the end of the file.[/quote]

Please do not manually include .moc files! You should let the Qt build system take care of it. There are 2 common reasons for "undefined reference to vtable" in classes that inherit QObject:

1. You declared your class in the .cpp file instead of the .h file.
2. You added the Q_OBJECT macro to your class, but didn't update your Makefile.

The solution to both scenarios is: make sure your class is declared in a .h file (not a .cpp file), and then run qmake. Is CVertex a QObject?

- JKSH Moderators last edited by
[quote author="andreyc" date="1395976682"][/quote]

OK, test cases are a bit different. :) That's the one and only place where you are supposed to manually include a .moc file (as "documented": ). Charlie_Hdz should not do this though, since he's not writing a test.

Thank you very much JKSH and andreyc. In effect, CVertex is not a QObject. My problems right now are the following points:

With that code the compiler gives an intermittent error (I mean sometimes it does, sometimes not). Since I started working with Qt on Linux with the GCC x86 64-bit compiler I've had many problems, but I've always found the answer. For this reason I think it is something based on the settings of the compiler, or the compiler itself. (If you have a tip about how the compiler should be set up, please share.)

At the moment I'm learning how to draw primitive shapes, and I have this method:

@void CEdge::Draw(QPaintEvent *event){
    QPainter painter;
    painter.setPen(Qt::black);
    CVertex *pVFirst, *pVSecond;
    pVFirst = m_pGraphManager->CreateVertex(m_ulIDFirst);
    pVSecond = m_pGraphManager->CreateVertex(m_ulIDSecond);
    painter.drawLine(pVFirst->m_fx, pVFirst->m_fy, pVSecond->m_fx, pVSecond->m_fy);
}
@

where CVertex is another class of the static library. This method belongs to a class in a static library whose STL containers store the points at which the lines and circles will be drawn in a window (widget); of course this library does not inherit from QObject and does not draw anything itself. I'm confused about whether it is possible to use the QPainter in this method, or whether I have to create a method on the widget and then call this CEdge method to access the data in the container. In short, I'm drawing a graph in Qt. Any idea is welcome.

Hi, perhaps it's because you're not calling begin() and end() on your QPainter class.

hskoglund: into the constructor and destructor of the class?

No, right after you create your QPainter instance, like this:

QPainter EdgePainter;
EdgePainter.begin(...);
// lots of drawLine calls
EdgePainter.end();

All right hskoglund, thank you.
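For reference, here is a hedged sketch of how this drawing usually ends up being structured in Qt: painting happens inside a QWidget's paintEvent(), and QPainter is constructed on the widget (which calls begin()/end() implicitly). The widget name and coordinates below are illustrative, not from the thread; only QPainter, QPaintEvent, and drawLine are standard Qt API.

@#include <QWidget>
#include <QPainter>
#include <QPaintEvent>

// Hypothetical widget that owns the graph data and does the drawing itself
class GraphWidget : public QWidget {
protected:
    void paintEvent(QPaintEvent *event) override {
        Q_UNUSED(event);
        QPainter painter(this);   // begin() is called implicitly on this widget
        painter.setPen(Qt::black);
        // In the thread's design, the coordinates would come from the
        // static library's containers (e.g. CEdge/CVertex lookups).
        painter.drawLine(10, 10, 100, 100);
    }   // end() is called implicitly when painter goes out of scope
};
@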
https://forum.qt.io/topic/39613/solved-compiler-problems-qt-5-1-gcc-compiler-x86-64-bits
CC-MAIN-2021-10
refinedweb
955
57.06
On Thu, Mar 01, 2012 at 10:46:27PM +0000, AntC wrote:

> > Also this would be ambiguous:
> > object.SubObject.Field.subField

Well, we'd have to either define what it means, or use something other than '.'.

> In terms of scope control, I think (I'm guessing rather) you do get similar
> behaviour to DORF, with the added inconvenience of:
> * an extra arg to Has (how does the constraint sugar cope?)

You can infer ft from the f.

> r{ field :: Int } => ...
> r{ Field :: Int } => ...    -- ? does that look odd

Well, it's new syntax.

> -- suppose I have two private namespaces
> r{ Field :: Int ::: Field1 } => ...   -- ??
> r{ (Field ::: Field2) :: Int } => ... -- ???

You've lost me again.

> > [...]

I don't follow. You agreed above that "you do get similar behaviour to DORF", and if you just use lowercase field names then the behaviour is the same as SORF. Therefore both are supported.

Thanks
Ian
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-March/022058.html
CC-MAIN-2014-41
refinedweb
148
86.1
Is there any way to kill a Thread?

It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:

- the thread is holding a critical resource that must be closed properly
- the thread has created several other threads that must be killed as well.

The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit. For example:

import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()

There are cases, however, when you really need to kill a thread. The following code allows (with some restrictions) raising an exception in another Python thread:

import ctypes
import inspect
import threading

def _async_raise(tid, exctype):
    '''Raises an exception in the threads with id tid'''
    if not inspect.isclass(exctype):
        raise TypeError("Only types can be raised (not instances)")
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
                                                     ctypes.py_object(exctype))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res != 1:
        # if it returns a number greater than one, you're in trouble, and
        # you should call it again with exc=NULL to revert the effect
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

class ThreadWithExc(threading.Thread):
    '''A thread class that supports raising an exception in the thread from
    another thread.
    '''

    def _get_my_tid(self):
        """determines this (self's) thread id

        CAREFUL: this function is executed in the context of the caller
        thread, to get the identity of the thread represented by this
        instance.
        """
        if not self.isAlive():
            raise threading.ThreadError("the thread is not active")

        # do we have it cached?
        if hasattr(self, "_thread_id"):
            return self._thread_id

        # no, look for it in the _active dict
        # (TODO: in python 2.6, there's a simpler way to do this: self.ident)
        for tid, tobj in threading._active.items():
            if tobj is self:
                self._thread_id = tid
                return tid

        raise AssertionError("could not determine the thread's id")

    def raiseExc(self, exctype):
        """Raises the given exception type in the context of this thread.

        If the thread is busy in a system call (time.sleep(),
        socket.accept(), ...), the exception is simply ignored.

        If you are sure that your exception should terminate the thread,
        one way to ensure that it works is:

            t = ThreadWithExc( ... )
            ...
            t.raiseExc( SomeException )
            while t.isAlive():
                time.sleep( 0.1 )
                t.raiseExc( SomeException )

        If the exception is to be caught by the thread, you need a way to
        check that your thread has caught it.

        CAREFUL: this function is executed in the context of the caller
        thread, to raise an exception in the context of the thread
        represented by this instance.
        """
        _async_raise( self._get_my_tid(), exctype )

(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)

As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption. A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.

There is no official API to do that, no. You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes. Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.

If you are willing to use processes instead of threads, you can replace every threading.Thread with multiprocessing.Process and every queue.Queue with multiprocessing.Queue, and add the required calls of p.terminate() to your parent process which wants to kill its child p. See the Python documentation for multiprocessing.
Example:

import multiprocessing

proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()

# Terminate the process
proc.terminate()  # sends a SIGTERM
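To make the cooperative stop-flag approach from the first answer concrete, here is a minimal usage sketch. It is not part of the original answers; the work loop and the sleep interval are placeholders:

import time

class Worker(StoppableThread):
    def run(self):
        # do bounded chunks of work, re-checking the stop flag in between
        while not self.stopped():
            time.sleep(0.1)  # stands in for one chunk of real work

w = Worker()
w.start()
# ... later, from the controlling thread:
w.stop()   # ask the worker to exit
w.join()   # wait until it actually has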
https://codehunter.cc/a/python/is-there-any-way-to-kill-a-thread
CC-MAIN-2022-21
refinedweb
593
67.55
IPython has a %%script cell magic, which lets you run a cell in a subprocess of any interpreter on your system, such as: bash, ruby, perl, zsh, R, etc. It can even be a script of your own, which expects input on stdin.

import sys
print('hello from Python: %s' % sys.version)

hello from Python: 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]

%%script python3
import sys
print('hello from Python: %s' % sys.version)

hello from Python: 3.2.3 (v3.2.3:3d0686d90f55, Apr 10 2012, 11:25:50) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]

IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc. These are all equivalent to %%script <name>

%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"

Hello from Ruby 1.8.7

%%bash
echo "hello from $BASH"

hello from /usr/local/bin/bash

[a background script cell is missing here; its captured stdout pipe, ruby_lines, is read below]

print(ruby_lines.read())

line 1
line 2
line 3
line 4
line 5
line 6
line 7
line 8
line 9

You can pass arguments to the subcommand as well, such as this example instructing Python to use the new (Python 3-style) true division:

%%script python -Qnew
print 1/3

0.333333333333

You can really specify any program for %%script, for instance here is a 'program' that echoes the lines of stdin, with delays between each line.

%%script --bg --out bashout bash -c "while read line; do echo $line; sleep 1; done"
line 1
line 2
line 3
line 4
line 5

Starting job # 2 in a separate thread.

Remember, since the output of a background script is just the stdout pipe, you can read it as lines become available:

import time
tic = time.time()
line = True
while True:
    line = bashout.readline()
    if not line:
        break
    sys.stdout.write("%.1fs: %s" % (time.time()-tic, line))
    sys.stdout.flush()

0.0s: line 1
1.0s: line 2
2.0s: line 3
3.0s: line 4
4.0s: line 5

The list of aliased script magics is configurable. The default is to pick from a few common interpreters, and use them if found, but you can specify your own in ipython_config.py:

c.ScriptMagics.scripts = ['R', 'pypy', 'myprogram']

And if any of these programs do not appear on your default PATH, then you would also need to specify their location with:

c.ScriptMagics.script_paths = {'myprogram': '/opt/path/to/myprogram'}
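The captured-output mechanism that the ruby_lines example relies on is the --out/--err flags of the script magics; here is a minimal foreground sketch of the same idea (the variable names are illustrative):

%%bash --out output --err error
echo "hi, stdout"
echo "hello, stderr" >&2

# back in Python, the captured text is available as ordinary variables
print(error)
print(output)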
http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Script%20Magics.ipynb
CC-MAIN-2014-35
refinedweb
396
84.27
- INTRODUCTION
- ATTRIBUTE OPTIONS
- Read-write vs. read-only
- Accessor methods
- Predicate and clearer methods
- Required or not?
- Default and builder methods
- Laziness
- Constructor parameters (init_arg)
- Weak references
- Triggers
- Attribute types
- Delegation
- Attribute traits and metaclasses
- Native Delegations
- ATTRIBUTE INHERITANCE
- MULTIPLE ATTRIBUTE SHORTCUTS
- MORE ON ATTRIBUTES
- A FEW MORE OPTIONS
- AUTHORS

NAME

Moose::Manual::Attributes - Object attributes with Moose

VERSION

version 2.2201

INTRODUCTION

When you define a class with Moose, you can define attributes. An attribute is a property that every member of a class has. For example, we might say that "every Person object has a first name and last name". Attributes can be optional, so that we can say "some Person objects have a social security number (and some don't)".

ATTRIBUTE OPTIONS

Use the has function to declare an attribute:

package Person;
use Moose;

has 'first_name' => ( is => 'rw' );

This says that all Person objects have an optional read-write "first_name" attribute.

Read-write vs. read-only

The options passed to has define the properties of the attribute. There are many options, but in the simplest form you just need to set is, which can be either ro (read-only) or rw (read-write). When an attribute is rw, you can change it by passing a value to its accessor. When an attribute is ro, you may only read the current value of the attribute through its accessor. You can, however, set the attribute when creating the object by passing it to the constructor. (If you do not want any accessor method at all, you can set is to bare.)

Accessor methods

Each attribute has one or more accessor methods. An accessor lets you read and write the value of that attribute for an object. By default, the accessor method has the same name as the attribute. If you declared your attribute as ro then your accessor will be read-only. If you declared it as rw, you get a read-write accessor. Simple. Given our Person example above, we now have a single first_name accessor that can read or write a Person object's first_name attribute. If you want, you can also specify the accessor names explicitly with the reader and writer options.

Predicate and clearer methods

Moose also lets you define predicate and clearer methods for an attribute, via the predicate and clearer options: a predicate tells you whether the attribute currently holds a value, and a clearer unsets it.

Required or not?

By default, all attributes are optional, and do not need to be provided at object construction time. If you want to make an attribute required, simply set the required option to true:

has 'name' => (
    is       => 'ro',
    required => 1,
);

There are a couple caveats worth mentioning in regards to what "required" actually means. Basically, all it says is that this attribute ( name) must be provided to the constructor or have a default or builder; it says nothing about the value itself, which could still be undef. Note that you can still define a clearer and predicate for a required attribute.

Default and builder methods

Attributes can have default values, and Moose provides two ways to specify that default. In the simplest form, you simply provide a non-reference scalar value for the default option. Alternatively, you can provide a subroutine reference that generates the default; a trivial subroutine would do, but it illustrates the point that the subroutine will be called for every new object created. When you provide a default subroutine reference, it is called as a method on the object, with no additional parameters:

has 'size' => (
    is      => 'ro',
    default => sub {
        my $self = shift;
        return $self->height > 200 ? 'large' : 'average';
    },
);

When the default is called during object construction, it may be called before other attributes have been set; if your default depends on other parts of the object's state, you can make the attribute lazy.

has 'mapping' => (
    is      => 'ro',
    default => sub { {} },
);

If you want to use a reference of any sort as the default value, you must return it from a subroutine, as we saw above. This is a bit awkward, but it's just the way Perl works.

As an alternative to using a subroutine reference, you can supply a builder method for your attribute:

has 'size' => (
    is        => 'ro',
    builder   => '_build_size',
    predicate => 'has_size',
);

sub _build_size {
    return ( 'small', 'medium', 'large' )[ int( rand 3 ) ];
}

This has several advantages.
First, it moves a chunk of code to its own named method, which improves readability and code organization. Second, because this is a named method, it can be subclassed or provided by a role. We strongly recommend that you use a builder instead of a default for anything beyond the most trivial default. A builder, just like a default, is called as a method on the object with no additional parameters.

Builders allow subclassing

Because the builder is called by name, it goes through Perl's method resolution. This means that builder methods are both inheritable and overridable. If we subclass our Person class, we can override _build_size:

package Lilliputian;
use Moose;
extends 'Person';

sub _build_size { return 'small' }

Builders work well with roles.

Laziness

Moose lets you defer attribute population by making an attribute lazy:

has 'size' => (
    is      => 'ro',
    lazy    => 1,
    builder => '_build_size',
);

When an attribute is lazy, its default or builder is not run until the attribute is first read, rather than at object construction time. Laziness lets you defer the cost until the attribute is needed. If the attribute is never needed, you save some CPU time. We recommend that you make any attribute with a builder or non-trivial default lazy as a matter of course.

Lazy defaults and $_

Please note that a lazy default or builder can be called anywhere, even inside a map or grep. This means that if your default sub or builder changes $_, something weird could happen. You can prevent this by adding local $_ inside your default or builder.

Constructor parameters ( init_arg )

By default, each attribute can be passed by name to the class's constructor. On occasion, you may want to use a different name for the constructor parameter. You may also want to make an attribute unsettable via the constructor. You can do either of these things with the init_arg option:

has 'bigness' => (
    is       => 'ro',
    init_arg => 'size',
);

Now we have an attribute named "bigness", but we pass size to the constructor.

Even more useful is the ability to disable setting an attribute via the constructor. This is particularly handy for private attributes:

has '_genetic_code' => (
    is       => 'ro',
    lazy     => 1,
    builder  => '_build_genetic_code',
    init_arg => undef,
);

By setting the init_arg to undef, we make it impossible to set this attribute when creating a new object.

Weak references

Moose has built-in support for weak references. If you set the weak_ref option to a true value, then it will call Scalar::Util::weaken whenever the attribute is set:

has 'parent' => (
    is       => 'rw',
    weak_ref => 1,
);

$node->parent($parent_node);

This is very useful when you're building objects that may contain circular references. When the object in a weak reference goes out of scope, the attribute's value will become undef "behind the scenes". This is done by the Perl interpreter directly, so Moose does not see this change. This means that triggers don't fire, coercions aren't applied, etc. The attribute is not cleared, so a predicate method for that attribute will still return true. Similarly, when the attribute is next accessed, a default value will not be generated.

Triggers

A trigger is a subroutine that is called whenever the attribute's value is set. It is called as a method on the object, with the new and old values as arguments; if the attribute had never been set before, only the new value is passed, which is not the same as the old value having been undef. A trigger also differs from an after method modifier: it fires only when the attribute is set (including via the constructor), not on every accessor call.

Attribute types

Attributes can be restricted to only accept certain types:

has 'first_name' => (
    is  => 'ro',
    isa => 'Str',
);

This says that the first_name attribute must be a string. Moose also provides a shortcut for specifying that an attribute only accepts objects that do a certain role:

has 'weapon' => (
    is   => 'rw',
    does => 'MyApp::Weapon',
);

See the Moose::Manual::Types documentation for a complete discussion of Moose's type system.

Delegation

Attributes can define methods that simply delegate to their values; a sketch follows below, and Moose::Manual::Delegation has the full discussion.
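The delegation section is truncated in this copy, so here is a hedged sketch of the handles option it refers to. The class and method names are invented for illustration; handles itself also appears in the Native Delegations example below:

package Website;
use Moose;

has 'uri' => (
    is      => 'ro',
    isa     => 'URI',
    # map new local methods onto methods of the attribute's value
    handles => {
        hostname => 'host',   # $site->hostname delegates to $site->uri->host
        path     => 'path',   # $site->path     delegates to $site->uri->path
    },
);

1;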
Attribute traits and metaclasses

One of Moose's best features is that it can be extended in all sorts of ways through the use of metaclass traits and custom metaclasses. You can apply these to individual attributes through the traits option, as the Native Delegations example below shows.

Native Delegations

Native delegations allow you to delegate to standard Perl data structures as if they were objects. For example, we can pretend that an array reference has methods like push(), shift(), map(), count(), and more.

has 'options' => (
    traits  => ['Array'],
    is      => 'ro',
    isa     => 'ArrayRef[Str]',
    default => sub { [] },
    handles => {
        all_options    => 'elements',
        add_option     => 'push',
        map_options    => 'map',
        option_count   => 'count',
        sorted_options => 'sort',
    },
);

See Moose::Manual::Delegation for more details.

ATTRIBUTE INHERITANCE

By default, a child inherits all of its parent class(es)' attributes as-is. However, you can change most aspects of the inherited attribute in the child class. You cannot change any of its associated method names (reader, writer, predicate, etc). To change some aspects of an attribute, you simply prepend a plus sign ( + ) to its name:

package LazyPerson;
use Moose;
extends 'Person';

has '+first_name' => (
    lazy    => 1,
    default => 'Bill',
);

Now the first_name attribute in LazyPerson is lazy, and defaults to 'Bill'. We recommend that you exercise caution when changing the type ( isa ) of an inherited attribute.

Attribute Inheritance and Method Modifiers

When an inherited attribute is defined, that creates an entirely new set of accessors for the attribute (reader, writer, predicate, etc.). This is necessary because these may be what was changed when inheriting the attribute. As a consequence, any method modifiers defined on the attribute's accessors in an ancestor class will effectively be ignored, because the new accessors live in the child class and do not see the modifiers from the parent class.

MULTIPLE ATTRIBUTE SHORTCUTS

If you have a number of attributes that differ only by name, you can declare them all at once:

package Point;
use Moose;

has [ 'x', 'y' ] => ( is => 'ro', isa => 'Int' );

Also, because has is just a function call, you can call it in a loop:

for my $name ( qw( x y ) ) {
    my $builder = '_build_' . $name;
    has $name => ( is => 'ro', isa => 'Int', builder => $builder );
}

MORE ON ATTRIBUTES

Moose attributes are a big topic, and this document glosses over a few aspects. We recommend that you read the Moose::Manual::Delegation and Moose::Manual::Types documents to get a more complete understanding of attribute features.

A FEW MORE OPTIONS

Moose has lots of attribute options. The ones listed below are superseded by some more modern features, but are covered for the sake of completeness.

The documentation option

You can provide a piece of documentation as a string for an attribute:

has 'first_name' => (
    is            => 'rw',
    documentation => q{The person's first (personal) name},
);

Moose does absolutely nothing with this information other than store it.

The auto_deref option

If your attribute is an array reference or hash reference, the auto_deref option will make Moose dereference the value when it is returned from the reader method in list context:

my %map = $object->mapping;

This option only works if your attribute is explicitly typed as an ArrayRef or HashRef. When the reader is called in scalar context, the reference itself is returned. However, we recommend that you use Moose::Meta::Attribute::Native traits for these types of attributes, which gives you much more control over how they are accessed and manipulated.
See also Moose::Manual::BestPractices#Use_Moose::Meta::Attribute::Native_traits_instead_of_auto_deref.

Initializer

Moose provides an attribute option called initializer. This is called when the attribute's value is being set in the constructor, and lets you change the value before it is set.
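The manual stops short here; purely as a hedged illustration, an initializer is typically written along these lines. The argument list follows my reading of the Moose::Meta::Attribute documentation, and the normalization logic is invented:

package Person;
use Moose;

has 'name' => (
    is          => 'ro',
    initializer => sub {
        my ( $self, $value, $writer, $attr ) = @_;
        # massage the incoming constructor value, then commit it via the writer
        $writer->( ucfirst lc $value );
    },
);

1;

# Person->new( name => 'ALICE' )->name would then yield 'Alice'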
https://web-stage.metacpan.org/dist/Moose/view/lib/Moose/Manual/Attributes.pod
CC-MAIN-2022-27
refinedweb
1,674
60.24
Console class and Line Editing Facility - OOP344 20141 - Wednesday Feb 12th, 23:59

Please blog about your problems and notes and add the link with a proper title below. Reply to others' blogs to help, and update your blog link to indicate you replied to them.

the Linux GNU C++ and Windows Visual C++ platforms which accept console input, and provide console output through the set of facilities available in your Console module. The name of the library object (i.e. the global instance of Console) ">>" istream and ostream operators.

external links - instantiate Console in an object called "console" in the cio namespace and create an external linkage to it (a minimal sketch appears at the end of this page). is a reference to a variable that stores the current editing mode (insert or over-strike). By default, this parameter holds the reference of a static bool attribute of the Console class (i.e. _insertMode) which globalizes the insert/over-strike mode between all instances of Console.

- Do the following initial corrections before you engage in editing the string at the very beginning of the method.
-, ESCAPE, TAB,:. TBA
-.senecacollege.ca (only use putty with the setting stated at Notes) run:

$ ~fardad.soleimanloo/cio_test

How to Compile

Compile and test your code with the test-main in the following command-line environments and Visual Studio.

matrix: GNU (use -lncurses to link the ncurses library)
g++ bconsole.cpp console.cpp cio_test.cpp -lncurses -Wno-write-strings
Local PC: Visual Studio .NET

Tester Program

The tester program is at: git@github.com:Seneca-OOP344/20133notes.git, in /cio/cio_test.cpp
- repo path: cio
- cio_test.cpp

How to submit

For submission, first your solution must compile, link, and run without errors/warnings in all environments. Test your program with cio_test.cpp from the cio directory in the 20133Notes repository on GitHub (V0.95.2). When your program has passed all the tests, log on to matrix.senecacollege.ca (using putty) and create a directory, then copy all the source files (console.cpp, console.h, bconsole.cpp, bconsole.h) into it. Then change the working directory to the newly created directory (cd to the new directory). Then issue the following command by copying and pasting it into the putty terminal screen: This will copy the tester object file to the current directory, compile your code, and dump any possible error/warning messages into the file result.txt. This should not generate any warnings. To run the test (with automatic submission), execute:

$ a.out X <ENTER>

where X is your section, so if you are in section B it would be:

$ a.out B <ENTER>

Run the test to the end and if everything is ok, an email will be sent to your professor with your assignment files and compile results attached.
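A minimal sketch of the global instance and external linkage the spec asks for. File names follow the assignment's own listing; the class body is elided and the initial mode value is an assumption:

// console.h (excerpt)
namespace cio {
    class Console {
        static bool _insertMode;   // shared insert/over-strike mode flag
        // ... the rest of the Console interface ...
    };
    extern Console console;        // declaration: external linkage
}

// console.cpp (excerpt)
bool cio::Console::_insertMode = true;  // assumed initial editing mode
cio::Console cio::console;              // the one global Console instance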
https://wiki.cdot.senecacollege.ca/w/index.php?title=Console_class_and_Line_Editing_Facility_-_OOP344_20141&oldid=103507&printable=yes
CC-MAIN-2019-35
refinedweb
450
56.66
Difference between revisions of "Introduction to Python" Revision as of 07:10, 2 February 2013

This is a short tutorial made for whoever is totally new to Python. The easiest place to try Python in FreeCAD is the interpreter console in the bottom part of the window: (If you don't have it, click on View → Views → Python console.) The interpreter shows the Python version, then a >>> symbol, which is the command prompt, that is, where you enter Python code. Writing code in the interpreter is simple: one line is one instruction. When you press Enter, your line of code will be executed (after being instantly and invisibly compiled). For example, try writing this:

print "hello"

The interpreter prints the word hello. Most of the time, though, you will work with variables: named containers for values, called variables precisely because their contents can vary. For example:

myVariable = "hello"
print myVariable
myVariable = "good bye"
print myVariable

We changed the value of myVariable. We can also copy variables:

var1 = "hello"
var2 = var1
print var2

Python can work with all kinds of data, and especially numbers, not only text strings. One thing is important: Python must know what kind of data it is dealing with. We saw in our print hello example that the print command recognized our "hello" string. That is because by using the " characters, we told the print command specifically that it was dealing with a string of text. Try this:

firstNumber = 10
secondNumber = 20
print firstNumber + secondNumber

It worked, because Python knows that 10 and 20 are integer numbers. So they are stored as "int", and Python can do with them everything it can do with integers. Now look at the results of this:

firstNumber = "10"
secondNumber = "20"
print firstNumber + secondNumber

See? We forced Python to consider that our two variables are not numbers but mere pieces of text. Python can add two pieces of text together, but it won't try to find out any sum. But we were talking about integer numbers. There are also float numbers. The difference is that integer numbers don't have a decimal part, while float numbers can have a decimal part:

var1 = 13
var2 = 15.65
print "var1 is of type ", type(var1)
print "var2 is of type ", type(var2)

Ints and floats can be mixed together without problem:

total = var1 + var2
print total
print type(total)

Of course the total has decimals, right? Then Python automatically decided that the result is a float. In several cases such as this one, Python automatically decides what type to use. But we can also force Python to convert between types:

varA = "hello"
varB = 123
print varA + str(varB)

Now varA holds text, but we can convert a numeric string back to both int and float if we want:

varA = "123"
print int(varA)
print float(varA)

Note on Python commands

You must have noticed that in this section we used the print command in several ways. We printed variables, sums, several things separated by commas, and even the result of other Python commands such as type(). Maybe you also saw that doing those two commands:

type(varA)
print type(varA)

have exactly the same result. That is because we are in the interpreter, and everything is automatically printed on screen. Another very useful data type is the list, a collection of other data delimited by [ ]:

myList = [1,2,3]
type(myList)
myOtherList = ["Bart", "Frank", "Bob"]
myMixedList = ["hello", 345, 34.567]

You see that it can contain any type of data. Lists are very useful because you can group variables together. You can then do all kinds of things with those groups, for example counting them:

len(myOtherList)

or retrieving one item of a list:

myName = myOtherList[0]
myFriendsName = myOtherList[1]

You see that while the len() command returns the total number of items in a list, their "position" in the list begins with 0. The first item in a list is always at position 0, so in our myOtherList, "Bob" will be at position 2. We can do much more with lists. One big cool use of lists is browsing through them and doing something with each item.
For example look at this:

alldaltons = ["Joe", "William", "Jack", "Averell"]
for dalton in alldaltons:
    print dalton + " Dalton"

We iterated (programming jargon!) through our list with the for ... in command and did something with each of the items. How will Python know how many of the next lines are to be executed inside the for...in operation? For that, Python uses indentation. That is, your next lines won't begin immediately. You will begin them with a blank space, or several blank spaces, or a tab, or several tabs. Other programming languages use other methods, like putting everything inside brackets. As long as your indentation is consistent (and it helps to make big, visible indents, for example tabs instead of single spaces), when you write a big program you'll have a clear view of what is executed inside what. We'll see that many other commands than for-in can have indented blocks of code too.

For-in commands can be used for many things that must be done more than once. It can for example be combined with the range() command:

serie = range(1,11)
total = 0
print "sum"
for number in serie:
    print number
    total = total + number
print "----"
print total

Or more complex things like this:

alldaltons = ["Joe", "William", "Jack", "Averell"]
for n in range(4):
    print alldaltons[n], " is Dalton number ", n

You see how useful that is? Another important construct is if, which executes a code block only if a certain condition is met, for example:

alldaltons = ["Joe", "William", "Jack", "Averell"]
if "Joe" in alldaltons:
    print "We found that Dalton!!!"

Of course this will always print the first sentence, but try replacing "Joe" with a name that is not in the list.

The standard Python commands are not many. In the current version of Python there are about 30, and we already know several of them. But imagine if we could invent our own commands? Well, we can, and it's extremely easy. Defining a function is extremely simple: the def() command defines a new function. You give it a name, and inside the parentheses you define arguments that we'll use in our function:

def printsqm(myValue):
    print str(myValue) + " square meters"

printsqm(45)

Arguments are data that will be passed to the function. For example, look at the len() command. If you just write len() alone, Python will tell you it needs an argument. That is, you want len() of something, right? Then, for example, you'll write len(myList) and you'll get the length of myList. Well, myList is an argument that you pass to the len() function. The len() function is defined in such a way that it knows what to do with what is passed to it. Same as we did here. The "myValue" name can be anything, and it will be used only inside the function. It is just a name you give to the argument so you can do something with it, but it also serves so the function knows how many arguments to expect. For example, if you do this:

printsqm(45,34)

There will be an error. Our function was programmed to receive just one argument, but it received two, 45 and 34. We could instead do something like this:

def sum(val1,val2):
    total = val1 + val2
    return total

sum(45,34)
myTotal = sum(45,34)

We made a function that receives two arguments, sums them, and returns that value. Returning something is very useful, because we can do something with the result, such as store it in the myTotal variable. Of course, since we are in the interpreter and everything is printed, doing:

sum(45,34)

will print the result on the screen, but outside the interpreter, since there is no more print command inside the function, nothing would appear on the screen. You would need to do:

print sum(45,34)

to have something printed. Read more about functions here.

Modules

Now that we have a good idea of how Python works, we'll need one last thing: How to work with files and modules.
Until now, we wrote Python instructions line by line in the interpreter, right? What if we could write several lines together, and have them executed all at once? It would certainly be handier for doing more complex things. And we could save our work too. Well, that too, is extremely easy. Simply open a text editor (such as the Windows notepad), write your Python lines in it, and save it with the .py extension (for example test.py). Then, in the interpreter, import the file by its name without the .py extension. This will simply execute the contents of the file, line by line, just as if we had written it in the interpreter. The sum function will be created, and the message will be printed. There is one big difference: the import command is made not only to execute programs written in files, like ours, but also to load the functions inside, so they become available in the interpreter. Files containing functions, like ours, are called modules. Normally when we write a sum() function in the interpreter, we execute it simply like that:

sum(14,45)

Like we did earlier. When we import a module containing our sum() function, the syntax is a bit different. We do: test.sum(14,45), that is, the module name, then a dot, then the function name. (A consolidated sketch appears at the end of this page.)

One last extremely useful thing. How do we know what modules we have, what functions are inside and how to use them (that is, what kind of arguments they need)? We saw already that Python has a help() function. Doing:

help()
modules

Will give us a list of all available modules. We can now type q to get out of the interactive help, and import any of them. We can even browse their content with the dir() command:

import math
dir(math)
https://www.freecadweb.org/wiki/index.php?title=Introduction_to_Python&diff=16730&oldid=16235
CC-MAIN-2019-51
refinedweb
1,570
70.33
Corejava Interview, Corejava questions, Corejava Interview Questions

- Corejava Interview Questions: Q. When should I use an abstract class rather than an interface?
- Corejava Interview Questions: Q. How should I create an immutable class? An immutable class should not contain modifier methods or references to mutable fields.
- corejava: Creating an object using "new" versus XML configuration - which is more useful, and why?
- corejava - Java Beginners: Which design patterns in core Java are useful with threads?
- Corejava Interview Questions: Using a data field to determine sort order; writing a custom filter to limit file selection.
- JSP Tutorials Resource: Useful JSP tutorial links and resources; assumes you can put together HTML pages and program in Java.
- CoreJava Project: Request for a simple project using core Java, Swing and JDBC.
- Professional Web Design Services For Your Web Site: Always check a designer's references and previous work before hiring.
- SEO Tips: Collect complete information on the website and its target audience; the description meta tag should be no more than 255 characters long.
- Java try, catch, and finally: The try, catch, and finally keywords in Java.
- Try it Editor: How to add an HTML, CSS and JS editor like the W3Schools "Try it" editor.
- Nested try: Example using Integer.parseInt with a nested try/catch for ArithmeticException.
- Submitting a web site to search engines: Registering your site with search engines and directories to attract visitors.
- Try Ruby: Try out Ruby code on this site by typing Ruby code and seeing the result.
- Web Hosting Tips: Tips to help you choose a deserving web host.
- Software Maintenance Services: Why you should invest in maintenance; it frees time to focus on other aspects of business.
- corejava - Java Beginners: Pass-by-value semantics in core Java; you cannot mutate the original caller's reference to an object.
- Useful Negotiation Tips on Outsourcing: Be clear about what you want and keep it in focus while negotiating terms.
- WEB SITE: Request for suggestions of features to implement, such as Orkut-style theme selection.
- How to Market Your Ecommerce Site: Marketing steps after setting up an e-commerce site, including free ad placement on other sites.
- corejava - Java Interview Questions: Source code for merge sort.
- corejava - Java Interview Questions: Date validation ("The date format should be: mm/dd/yyyy").
- Have you tried PHP resource site PHPKode.com?: A free open-source PHP resource site.
- CoreJava / How to Upload a Site Online: Create the graphics and HTML pages, add JavaScript validation, then upload the site to your server.
- Offshore Outsourcing Tips: Identify the technologies you might need and prepare a list before outsourcing.
- try catch method in java: When and how to use try and catch in Java.
- How about this site?: Java web services question (with off-topic home-security advertising).
- Know About Outsourcing: Companies commonly outsource services like data storage or disaster recovery.
- HTML FAQ site: A school project for an FAQ site that uses natural language processing to find answers.
- Site Map: The site map is organized for easy access to tutorials and information pages.
- Nested try catch: Code which can throw an exception should be written in the try block.
- Which language should you learn, Java or .NET?: Comparing the jobs you can get after learning each.
- JSP comments: Prefer JSP-style comments unless you want the comments to appear in the HTML.
- Multiple try catch: Using multiple catch blocks with one try block.
- SEO Copywriting Tips: Understand your keywords before writing; search engines cannot read images, so avoid relying on them for text.
- Interview question: "I find that you have changed jobs many times so far. Why is it so?"
- Maximum number of catches for a try block: Any number of catch statements is allowed for a try statement.
- Top 10 Web Development Concepts Designers Should Know.
- Why Web Development with PHP Is Useful: A site that handles search engine optimization well is easier for search engines to index.
- Interview question: "I think you should be earning more money at this point of your career. Why isn't it happening?"
- Ruby Frameworks: Including Try Ruby and Nitro/Og.
- The try-with-resource Statement: The try-with-resources statement, added in Java SE 7, contains a declaration of one or more resources.
- Outsourcing Guidelines: It is natural to have apprehensions; shortlist vendors that meet your parameters.
- Interview question: "If you came on board with us, what changes would you make in the system?"
- Social media marketing: Why you should call in the professionals; you can advertise directly to your target market.
- Hibernate Tools Update Site: Using the Hibernate Tools Update Manager from Eclipse.
- Which database and programming language should I choose: For a web site handling a huge amount of data.
- Generate a documentation site for a Maven project: Auto-generate a documentation site with default design and pages.
- Top 10 Java People You Should Know: Including Marc Fleury, creator of JBoss.
- can u plz try this program - Java Beginners: A small record-management program storing data in one or two files, no database, listing user names in alphabetical order.
- Closing the browser with user confirmation in JavaScript.
- delete retailer jsp file: JSP logic with confirm() dialogs and a MySQL JDBC connection.
- How to write a select box whose id is stored in the database: Swing with MySQL JDBC.
- Social Media Marketing: Do you really need it?
- Resetting a table to a new value upon combo-box selection: A Swing purchase form with key events.
- Web Site promotion services at roseindia.net: Services to get your site listed in major search engines and directories.
- Interview question: "What would you rate as your greatest weaknesses?"
- How to Save Your Site from the Google Penguin Update.
- Black Fashion Accessories: Why do you need them?
- Tips for Successful Outsourcing: The relationship should be mutually beneficial; take care over scope, time frame and price.
- What is Web Graphics: Graphics on a site should match in color, typeface and special effects; place navigation buttons at the top.
- Useful SEO Articles: 5 killer tips for writing SEO articles for home-based online businesses.
- Free Web Site Hosting Services: Free space and tools to build your own web site, including a message board and form mail.
- Java Error: An Error describes serious problems that a reasonable application should not try to catch.
- Web Site Goals - Goal of Web Designing: Prime features of custom web design, delivered within deadline and budget.
- How to Upload a Website?: Aspects to know for better performance once the site's design and content are ready.
- What You Really Need to Know About Fashion.
- Getting Around Agra: If you love sweets there are things to try; a credit card can be most useful; keep your priorities in place if time is limited.
- Exception Handling with and without using try catch block: Declare throws Exception if you do not want an explicit try/catch.
- try catch: Why the given Thread.sleep loop code gives a compile-time error.
- Nested try: Explanation of a Demo class with nested try/catch.
- Data should not repeat in a drop-down list loaded dynamically from the database (asked twice).
- Where is the top site to buy swtor credits in 2014? (spam posting).
- You Tube: Site content includes movie and TV clips and music videos.
- Submit your site to the top 100 search engines: A list including Yahoo, Google, MSN and AOL.
- Interview question: "When you look back on the position you held last, do you think you have done your best in it?"
- Welcome to Free Search Engine Secrets: Avoid repeating keywords, or search engines may penalize your site after indexing.
- corejava - Java Beginners
http://www.roseindia.net/tutorialhelp/comment/7861
CC-MAIN-2015-14
refinedweb
3,409
65.83
Welcome to Woden

Your Help Wanted

Looking to get involved in an Open Source project? Interested in Web services or WSDL? Woden is looking for your help. There are a number of different areas in which help is needed, including:

- Updating the Woden Axiom based implementation to the WSDL 2.0 recommendation
- Updating the WSDL 1.1 -> 2.0 converter to the WSDL 2.0 recommendation
- Adding validation logic for more WSDL 2.0 assertions
- Documenting design decisions and use patterns
- Creating more automated tests
- Reviewing the Woden API

Getting involved is easy. Just send an e-mail to woden-dev@ws.apache.org stating that you'd like to help and what you'd like to help with (if you know).

News

- Apr 24, 2008 - Woden 1.0 Milestone 8 released! 1.0M8 is the first Woden release as part of the Apache Web services project. Download Woden 1.0M8 and view the release notes here.
- December 8, 2007 - Woden Graduated to the Web Services project! The Web Services and Incubator PMCs voted to accept Woden's graduation proposal. Woden is now a member of the Web Services project.
- August 3, 2007 - Woden milestone 7b declared! This is a further incremental release of M7 which includes some minor corrections to accommodate late changes to the WSDL 2.0 spec and a number of fixes, including Woden's use of WS-Commons XmlSchema. Download Woden M7b and view the release notes here.
- April 23, 2007 - Woden milestone 7a declared! This is an incremental release of M7 which delivers an essential fix to support the new shortened WSDL 2.0 namespace format. Download Woden M7a and view the release notes here.
- February 19, 2007 - Woden milestone 7 declared! The focus of this release has been to fully support the parsing of 'valid' WSDL 2.0 documents, so that all of the WSDL 2.0 Infoset and Component Model can be represented in Woden. Woden M7 provides this level of compliance with the WSDL 2.0 spec as at 10th February. Download Woden M7 and view the release notes here.
- October 6, 2006 - Woden milestone 6 declared! Woden milestone 6 introduces a StAX/AXIOM-based implementation of the Woden API in addition to the existing Xerces/DOM-based implementation. It also improves compliance with the WSDL 2.0 scoping rules by changing the way nested components are created and added to their parent components. Download the milestone and view the release notes here.
- October 6, 2006 - Second W3C WSDL 2.0 Working Group Interoperability Event: The W3C WSDL 2.0 working group will host a second interoperability event November 14-18, 2006, in Rennes, France. Woden will be represented at this event by members of the development team. The goals are to ensure full coverage of the WSDL 2.0 specification by the W3C WSDL 2.0 test suite, to verify that participating WSDL 2.0 implementations such as Woden pass the test suite, and to perform further interoperability testing similar to the SOAP message testing performed at the first Interop event in July 2006.
- June 21, 2006 - Woden milestone 5 declared! Woden milestone 5 integrates the remaining extensions into the WSDL component model (HTTP, operation safety and rpc style), adds support for Interface Extension and introduces new ways to specify the WSDL source to the WSDL reader. Download the milestone and view the release notes here.
- June 5, 2006 - W3C WSDL 2.0 Working Group Interoperability Event: The W3C WSDL 2.0 working group will host an interoperability event July 5-7, 2006, in Toronto, Ontario. Woden will be represented at this event by members of the development team. For more information see the call for participation page at the W3C.
- May 30, 2006 - Woden Presentation at ApacheCon Europe 2006: Woden will be featured for the first time at ApacheCon during ApacheCon Europe 2006, when John Kaputin and Jeremy Hughes present "Apache Woden WSDL 2.0 Processor" on Wed. June 28. Come find out what WSDL 2.0 is all about. Register now! UPDATED: PDF of presentation now available: WodenWSDL2Processor_WE12.pdf
- Mar. 13, 2006 - Woden milestone 4 declared! Woden milestone 4 integrates extension SOAP components into the WSDL component model, includes validation of Service components and introduces a user guide to the Woden site. Download the milestone and view the release notes here.
- Jan. 26, 2006 - Woden milestone 3 declared! Woden milestone 3 includes parsing logic for WSDL 2.0 service, import, and include elements into both element and component models, and validation of binding elements and components. Download the milestone and view the release notes here.
- Jan. 18, 2006 - Woden call for participation: Like Web services? Want to keep up with the latest WSDL specification? Help answer the WSDL 2.0 working group's call for implementations by participating in Woden. A task list is located on the Woden Wiki. Get involved! Tell the Woden team how you want to help by posting to the Woden mailing list.
- Dec. 9, 2005 - Woden milestone 2 declared! Woden milestone 2 includes parsing logic for WSDL 2.0 interface and binding elements into both element and component models, and validation of type and interface elements and components. Download the milestone and view the release notes here.
- Oct. 3, 2005 - Woden milestone 1 declared! Woden milestone 1 includes parsing logic for WSDL 2.0 types and interface sections. Download the milestone and view the release notes here.
http://ws.apache.org/woden/
CC-MAIN-2014-10
refinedweb
905
60.72
The gods

Let us begin by clearing up a point that causes confusion. It was only in the fourth century AD, late in the story of Hadrian's Wall, that Christianity, a monotheistic religion (belief in one god), became a significant feature of frontier life. Up until that point the wall communities were polytheistic (worshippers of many gods), and many remained so. We will talk further about Christianity on the Wall later. For the moment, let us consider polytheism.

At one level, many of the classic deities of Roman polytheism seem familiar. Mars, the god of war, who was frequently honoured by soldiers along Hadrian's Wall, is a well-known example. Less well known is the fact that he was also believed to have protected crops, so that farmers venerated him too. Even familiar deities could actually be viewed in different ways by different communities.

The relationship between the Romans and their gods can seem straightforward at first. It is often described as contractual. An individual called upon a god to help him or her meet a certain goal (to survive battle, become pregnant, gain promotion etc.), and vowed to give something to the god in return (for example, to sacrifice an animal, erect an altar or statue, or make a pilgrimage). This relationship is encapsulated in the letters VSLM (votum solvit libens merito, or 'willingly and deservedly fulfilled his/her vow') often seen on altars dedicated to the gods. This sense of contracts (vows or pledges made by mortals to gods in return for services rendered) is vitally important. We should not forget, however, that behind it lay a spectrum of conviction, belief and interpretation.

Students beginning their study of the 'pantheon', the collective family of Roman gods, will often start with Jupiter, Juno and Minerva (the Capitoline Triad). Jupiter, the king of the gods, was often given the epithet Optimus Maximus ('best and greatest'). Juno was his consort, and a protector of the state. Minerva was the goddess of wisdom and the arts. This Capitoline Triad enjoyed a special place in the rituals of the Roman state. Other principal Roman gods include Mars (discussed above), Venus (goddess of love and fertility), Diana (goddess of the hunt and the moon), Neptune (god of the sea) and Mercury (god of commerce and trade).

These principal gods were accompanied by many others, much less well known. Minor gods had specific functions: there was a deity for everything in the Roman world, from shepherds to infant mortality, from beekeeping to woodlands. Many of these lesser gods were worshipped principally in Rome itself, but some of the principal gods, including Mars and Mercury, were worshipped across the Empire. In the provinces, they were often twinned with indigenous (local) deities. On Hadrian's Wall, for example, Jupiter and Mars were sometimes paired with other gods. As we shall see, as the Empire expanded, many 'local' gods were incorporated into the belief systems, and some (particularly those from the Near East) came to be worshipped across a wide area. Deities in this group include Mithras and Sol Invictus. This was a great strength of polytheism: new gods could readily be incorporated into existing beliefs.

Finally, semi-divine beings were also widely worshipped. These were the offspring of gods and their human lovers. Hercules (son of Jupiter, or his Greek equivalent Zeus, and a mortal mother) is a well-known example. Partly linked to this belief that gods and mortals could unite is another significant development in the rituals and beliefs of empire.
Of great importance was the cult or cults of the Emperor, whose authority came from the gods. This divine sanction was recognised via the Imperial Cult, first seen in the reign of Augustus. The numen (spirit) of the Emperor was often worshipped too. As we will see, whilst many emperors were divinized, some had their memory obliterated (damnatio memoriae) when they died. In the quiz that follows, we look at the features and attributes associated with deities and test your ability to recognise them. If you are particularly interested in reading more about the gods and religion in Roman Britain, we recommend Henig, M. 1995 Religion in Roman Britain, Routledge: London © Newcastle University
https://www.futurelearn.com/courses/hadrians-wall/3/steps/48573
CC-MAIN-2018-43
refinedweb
700
62.78
- Draw a triangle - Very basic programming help please - making my program sleep / wait / delay... - Need some advice - translate - Sorting help for newbie - Pre Processor output - comparing return values from functions - Parameter passing with pointer to pointer - Problems with vectors of objects! - variable size of namespace? - IF statement - name of a type - Pointing to struture members..... - stl multimap results - directX program - Take a look at my program - program help - Why just use prototypes with classes? - Debug error, help needed - extern twist - Purpose of Operator Overloading - How do I title my Program? - User mode and Kernel mode - Bloody recursive thing - I read about memory, stack, etc but I didn't understand something! - reference doesn't work? - Need help on program that alters the led combination with different input - Problems with Bank account class - Problem with class header please help - LINK : fatal error LNK1561: entry point must be defined - Problems with string declarations in C++ - Simple fp_in question. - Destructors in STL? - pass extra data to a function - scope in inheritance - boost, xtime_get: undefined reference - conditional breakpoints and inline functions - Graphical Interfaces - hi peeps! - Files - linker error - Where do Unused function returns go? - Code Compile - Weird String error (LNK2019) - Help Please - large binary number division - maxIndex in an array - Remote thread problem - Recommendation on Books for Designing Classes/etc.. in C++
http://cboard.cprogramming.com/sitemap/f-3-p-474.html?s=136b1bc6244fe2ea1b6f28f969476241
CC-MAIN-2016-18
refinedweb
217
54.73
A textual captcha for Django using simple decorator syntax.

simplecaptcha provides an easy decorator syntax to add a textual captcha to your Django forms. The captcha is a simple arithmetic question: either add, subtract, or multiply two numbers between 1 and 10. No server-side context is needed, as the captcha uses cryptographic signatures to securely pass the context to the client and then validate the supplied answer on the back end. In order to mitigate replay attacks, the signatures expire after a configurable amount of time (default 5 minutes): enough time to fill out and submit the form, but short enough to reduce the ability to reuse signatures with known answers.

There are lots of Django captchas out there, including more than one that uses arithmetic questions just like this one. So why do we need another? Simply put, the others all lack flexibility. When I set out to find one for my form, I needed one that would allow me to manually render my fields; the first few I found, however, hardcoded the question (as a label) into the format_output() method, or even directly in the render() method itself. This meant I couldn't separately render the label where I need it for my design. I kept digging, and found another that offered the flexibility I needed in the layout, but put the captcha generation logic in the field's __init__() method. While this sounds great, Django's method of using class objects - rather than instance objects - means that you get only a single captcha question per server thread, period. So I sat down to write a captcha that would give me the flexibility I needed to fit into my front-end design, but that also would reliably generate a fresh captcha question each time the page was loaded. This is that captcha.

Installation:

- (Recommended) Install from PyPI with a simple pip install django-simplecaptcha.
- Or download the source from GitHub and simply make the simplecaptcha module available to Python in some way; on *nix systems, a simple symlink in the root of your Django project to the simplecaptcha directory is probably the most straightforward solution.

Using simplecaptcha is simple:

    from simplecaptcha import captcha

    @captcha
    class MyForm(Form):
        pass

This will add a field named "captcha" to MyForm. However, nothing else need be done: the decorator takes care of adding the field and ensuring it is always updated when a new form instance is created, as well as validating bound forms and providing useful error messages for users.

simplecaptcha, as its name implies, is simple. It works straight out of the box without any need to add any configuration to your Django project. However, if you do want to modify its behavior, you can do that as well, by adding its settings to your Django project's settings module (for example, SIMPLECAPTCHA_DEFAULT_FIELD_NAME, described below).

The decorator will always add the captcha field to the end of your form. If this is undesirable for any reason, you can of course always manually render your form fields as described in the Django docs. Another option is to simply add a "dummy" field to your form with the same name as that used by the decorator. The decorator would then effectively replace the field in your form:

    from simplecaptcha import captcha
    from simplecaptcha.fields import CaptchaField

    @captcha
    class MyForm(Form):
        field1 = CharField()
        field2 = CharField()
        captcha = CaptchaField()
        field3 = CharField()

(NOTE: Since the decorator will replace the field of the same name, it does not matter what type of field you specify when using this approach. Because of the way Django processes Form classes, however, you must specify a Django field, or else Django will ignore it and you won't get the desired effect.)

Now when you render MyForm in your template, fields will be ordered precisely as they are in your source: field1, then field2, followed by captcha, and finally field3.

If for any reason you don't want your captcha field to be named "captcha", and you don't want to set SIMPLECAPTCHA_DEFAULT_FIELD_NAME in your Django settings module, you can use the @captchaform decorator and supply the desired field name as an argument, like so:

    from simplecaptcha import captchaform

    @captchaform('securitycheck')
    class MyForm(Form):
        pass

This will add a field named "securitycheck" to MyForm that will contain the form's captcha. If you wish to do this and use the method in the previous section to specify the field order, note that the "dummy" field you add must match the name you passed into the decorator.

It is possible to add multiple captcha fields to your form simply by decorating your form multiple times. However, note that the field order in your form will be the reverse of the order in which you write your decorators:

    from simplecaptcha import captchaform

    @captchaform('captcha')
    @captchaform('captcha2')
    class MyForm(Form):
        pass

In this example, when MyForm is rendered in your template, "captcha2" will appear first, and then "captcha". This is a consequence of how decorators in Python are processed; you simply have to remember that the last captcha decorated into your form is the first one that will appear in your form.
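As a rough end-to-end sketch (not taken from the package's own docs), here is how the decorated MyForm might be used in a Django view; the app module, view, and template names are hypothetical, while the form behavior follows the decorator semantics described above:

    from django.shortcuts import render
    from myapp.forms import MyForm  # hypothetical module holding the decorated form

    def contact(request):
        if request.method == "POST":
            form = MyForm(request.POST)
            # is_valid() also checks the captcha answer against the signed
            # context rendered into the form; a wrong or expired answer is
            # reported as an ordinary field error on the captcha field.
            if form.is_valid():
                return render(request, "thanks.html")
        else:
            # Each unbound instance gets a freshly generated arithmetic question.
            form = MyForm()
        return render(request, "contact.html", {"form": form})

Because the signed question expires (five minutes by default), a user who submits a stale form simply sees a normal validation error and can retry with the newly generated question.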
https://pypi.org/project/django-simplecaptcha/
CC-MAIN-2017-04
refinedweb
853
53.55
on 2005-12-13 20:18:

One way to get around this is to do the following. Keep your category controller as it is, but make the 'admin' controller into something like 'admin/main' or the like. Then, in your routes add something like:

    map.connect "/admin/", :controller => "admin/main", :action => "index"

which will allow you to just type in /admin/ and get to the index. I have my site set up this way, and it works well because I really only use the index action on this controller anyway. Hope this helps!

-Nick

on 2005-12-13 20:21:

On 13-dec-2005, at 19:37, gros gros wrote:
> I encounter an error because rails tries to use the default route
> ?

I guess there is. If you put controllers in subdirectories you have to namespace them: '/admin' with :controller => 'admin/main' maps to Admin::MainController, and further on it will pick them up automatically, Admin::CategoriesController and so on and so forth.

on 2005-12-14 09:23:

Ok thank you all, so I'll create something like 'admin/main' :/
https://www.ruby-forum.com/topic/48817
CC-MAIN-2018-09
refinedweb
183
72.26
Contributing to Elasticsearch

Elasticsearch is an open source project and we love to receive contributions from our community — you! There are many ways to contribute, from writing tutorials or blog posts, improving the documentation, and submitting bug reports and feature requests, to writing code which can be incorporated into Elasticsearch itself.

Bug reports

If you think you have found a bug in Elasticsearch, first make sure that you are testing against the latest version of Elasticsearch - your issue may already have been fixed. If not, search our issues list on GitHub in case a similar issue has already been opened.

It is very helpful if you can prepare a reproduction of the bug. In other words, provide a small test case which we can run to confirm your bug. It makes it easier to find the problem and to fix it. Test cases should be provided as curl commands which we can copy and paste into a terminal to run locally, for example:

    # delete the index
    curl -XDELETE localhost:9200/test

    # insert a document
    curl -XPUT localhost:9200/test/test/1 -d '{ "title": "test document" }'

    # this should return XXXX but instead returns YYY
    curl ....

Provide as much information as you can. You may think that the problem lies with your query, when actually it depends on how your data is indexed. The easier it is for us to recreate your problem, the faster it is likely to be fixed.

Feature requests

If you find yourself wishing for a feature that doesn't exist in Elasticsearch, you are probably not alone. There are bound to be others out there with similar needs. Many of the features that Elasticsearch has today have been added because our users saw the need. Open an issue on our issues list on GitHub which describes the feature you would like to see, why you need it, and how it should work.

Contributing code and documentation changes

If you have a bugfix or new feature that you would like to contribute to Elasticsearch, please find or open an issue about it first. Talk about what you would like to do. It may be that somebody is already working on it, or that there are particular issues that you should know about before implementing the change.

Note that it is unlikely the project will merge refactors for the sake of refactoring. These types of pull requests have a high cost to maintainers in reviewing and testing with little to no tangible benefit. This especially includes changes generated by tools, for example, converting all generic interface instances to use the diamond operator.

The process for contributing to any of the Elastic repositories is similar. Details for individual projects can be found below.

Fork and clone the repository

You will need to fork the main Elasticsearch code or documentation repository and clone it to your local machine. See the GitHub help page for help. Further instructions for specific projects are given below.

Submitting your changes

Once your changes and tests are ready to submit for review:

1. Test your changes. Run the test suite to make sure that nothing is broken. See the TESTING file for help running tests.

2. Sign the Contributor License Agreement.

3. Rebase your changes. Update your local repository with the most recent code from the main Elasticsearch repository, and rebase your branch on top of the latest master branch. We prefer your initial changes to be squashed into a single commit. Later, if we ask you to make changes, add them as separate commits. This makes them easier to review. As a final step before merging we will either ask you to squash all commits yourself or we'll do it for you.

4. Submit a pull request. Push your local changes to your forked copy of the repository and submit a pull request. Then sit back and wait: there will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch.

Please adhere to the general guideline that you should never force push to a publicly shared branch. Once you have opened your pull request, you should consider your branch publicly shared. Instead of force pushing you can just add incremental commits; this is generally easier on your reviewers. If you need to pick up changes from master, you can merge master into your branch. A reviewer might ask you to rebase a long-running pull request, in which case force pushing is okay for that request. Note that squashing at the end of the review process should also not be done; that can be done when the pull request is integrated via GitHub.

Contributing to the Elasticsearch codebase

Repository:

JDK 12 is required to build Elasticsearch. You must have a JDK 12 installation with the environment variable JAVA_HOME referencing the path to Java home for your JDK 12 installation. By default, tests use the same runtime as JAVA_HOME. However, since Elasticsearch supports JDK 8, the build supports compiling with JDK 12 and testing on a JDK 8 runtime; to do this, set RUNTIME_JAVA_HOME pointing to the Java home of a JDK 8 installation. Note that this mechanism can be used to test against other JDKs as well; it is not only limited to JDK 8.

Note: It is also required to have JAVA8_HOME, JAVA9_HOME, JAVA10_HOME and JAVA11_HOME available so that the tests can pass.

Warning: do not use sdkman for Java installations which do not have a proper jrunscript for jdk distributions.

Elasticsearch uses the Gradle wrapper for its build. You can execute Gradle using the wrapper via the gradlew script in the root of the repository.

We support development in IntelliJ versions 2019.2 and onwards. We would like to support Eclipse, but few of us use it and support for it has fallen into disrepair.

Elasticsearch builds using Java 12. Before importing into IntelliJ you will need to define an appropriate SDK. The convention is that this SDK should be named "12" so that the project import will detect it automatically. For more details on defining an SDK in IntelliJ please refer to their documentation.

You can import the Elasticsearch project into IntelliJ IDEA via the build.gradle file.

To run an instance of elasticsearch from the source code run:

    ./gradlew run

Java Language Formatting Guidelines

Please follow these formatting guidelines:

- Lines between // tag and // end comments are included in the documentation and should only be 76 characters wide, not counting leading indentation.
- Wildcard imports (import foo.bar.baz.*) are forbidden and will cause the build to fail. Avoiding them can be done automatically by your IDE:
  - Eclipse: Preferences->Java->Code Style->Organize Imports. There are two boxes labeled "Number of (static )? imports needed for .*". Set their values to 99999 or some other absurdly high value.
  - IntelliJ: Preferences->Editor->Code Style->Java->Imports. There are two configuration options: Class count to use import with '*' and Names count to use static import with '*'. Set their values to 99999 or some other absurdly high value.

License Headers

We require license headers on all Java files. With the exception of the top-level x-pack directory, all contributed code should have the following license header unless instructed otherwise:

    /*
     * Licensed to Elasticsearch under one or more contributor
     * license agreements. See the NOTICE file distributed with
     * this work for additional information regarding copyright
     * ownership. Elasticsearch licenses this file to you under
     * the Apache License, Version 2.0 (the "License"); you may
     * not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *    http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing,
     * software distributed under the License is distributed on an
     * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
     * KIND, either express or implied. See the License for the
     * specific language governing permissions and limitations
     * under the License.
     */

The top-level x-pack directory contains code covered by the Elastic license. Community contributions to this code are welcome, and should have the following license header unless instructed otherwise:

    /*
     * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
     * or more contributor license agreements. Licensed under the Elastic License;
     * you may not use this file except in compliance with the Elastic License.
     */

It is important that the only code covered by the Elastic licence is contained within the top-level x-pack directory. The build will fail its pre-commit checks if contributed code does not have the appropriate license headers.

NOTE: If you have imported the project into IntelliJ IDEA, the project will be automatically configured to add the correct license header to new source files based on the source location.

Building with Gradle

Run all build commands from within the root directory:

    cd elasticsearch/

To build a tar distribution, run this command:

    ./gradlew -p distribution/archives/tar assemble --parallel

You will find the distribution under:

    ./distribution/archives/tar/build/distributions/

To create all build artifacts (e.g., plugins and Javadocs) as well as distributions in all formats, run this command:

    ./gradlew assemble --parallel

The package distributions (Debian and RPM) can be found under:

    ./distribution/packages/(deb|rpm)/build/distributions/

The archive distributions (tar and zip) can be found under:

    ./distribution/archives/(tar|zip)/build/distributions/

Running the test suite

Before submitting your changes, run the test suite to make sure that nothing is broken, with:

    ./gradlew check

If your changes affect only the documentation, run:

    ./gradlew -p docs check

For more information about testing code examples in the documentation, see the docs README.

Project layout

This repository is split into many top level directories. The most important ones are:

docs: Documentation for the project.

distribution: Builds our tar and zip archives and our rpm and deb packages.

libs: Libraries used to build other parts of the project. These are meant to be internal rather than general purpose. We have no plans to semver their APIs or accept feature requests for them. We publish them to Maven Central because they are dependencies of our plugin test framework, high level rest client, and jdbc driver, but they really aren't general purpose enough to belong in Maven Central. We're still working out what to do here.

modules: Features that are shipped with Elasticsearch by default but are not built in to the server. We typically separate features from the server because they require permissions that we don't believe all of Elasticsearch should have, or because they depend on libraries that we don't believe all of Elasticsearch should depend on. For example, reindex requires the connect permission so it can perform reindex-from-remote, but we don't believe that all of Elasticsearch should have the "connect" permission. For another example, Painless is implemented using antlr4 and asm, and we don't believe that all of Elasticsearch should have access to them.

plugins: Officially supported plugins to Elasticsearch. We decide that a feature should be a plugin rather than shipped as a module because we feel that it is only important to a subset of users, especially if it requires extra dependencies. The canonical example of this is the ICU analysis plugin. It is important for folks who want the fairly language-neutral ICU analyzer, but the library to implement the analyzer is 11MB, so we don't ship it with Elasticsearch by default. Another example is the discovery-gce plugin. It is vital to folks running in GCP but useless otherwise, and it depends on a dozen extra jars.

qa: Honestly this is kind of in flux and we're not 100% sure where we'll end up. Right now the directory contains, among other things, the wildfly project. But we're not convinced that all of these things belong in the qa directory. We're fairly sure that tests that require multiple modules or plugins to work should just pick a "home" plugin. We're fairly sure that the multi-version tests do belong in qa. Beyond that, we're not sure. If you want to add a new qa project, open a PR and be ready to discuss options.

server: The server component of Elasticsearch that contains all of the modules and plugins. Right now things like the high level rest client depend on the server, but we'd like to fix that in the future.

test: Our test framework and test fixtures. We use the test framework for testing the server, the plugins, and modules, and pretty much everything else. We publish the test framework so folks who develop Elasticsearch plugins can use it to test the plugins. The test fixtures are external processes that we start before running specific tests that rely on them. For example, we have an hdfs test that uses mini-hdfs to test our repository-hdfs plugin.

x-pack: Commercially licensed code that integrates with the rest of Elasticsearch. The docs subdirectory functions just like the top level docs subdirectory and the qa subdirectory functions just like the top level qa subdirectory. The plugin subdirectory contains the x-pack module which runs inside the Elasticsearch process. The transport-client subdirectory contains extensions to Elasticsearch's standard transport client to work properly with x-pack.

Gradle configurations

We use Gradle to build Elasticsearch because it is flexible enough to not only build and package Elasticsearch, but also orchestrate all of the ways that we have to test Elasticsearch. Gradle organizes dependencies and build artifacts into "configurations" and allows you to use these configurations arbitrarily. Here are some of the most common configurations in our build and how we use them:

Contributing as part of a class

In general Elasticsearch is happy to accept contributions that were created as part of a class, but strongly advises against making the contribution as part of the class. So if you have code you wrote for a class, feel free to submit it. Please, please, please do not assign contributing to Elasticsearch as part of a class. If you really want to assign writing code for Elasticsearch as an assignment, then the code contributions should be made to your private clone, and opening PRs against the primary Elasticsearch clone must be optional, fully voluntary, not for a grade, and without any deadlines. Because:

Finally, we require that you run ./gradlew check before submitting a non-documentation contribution. This is mentioned above, but it is worth repeating in this section because it has come up in this context.
https://fossies.org/linux/elasticsearch/CONTRIBUTING.md
CC-MAIN-2022-27
refinedweb
2,239
63.29
    from PSPApp import *

    def ScriptProperties():
        return {
            'Author': u'Laura Seabrook',
            'Copyright': u'2009',
            'Description': u'Adds a layer for Shadows',
            'Host': u'Paint Shop Pro X',
            'Host Version': u'10.00'
            }

    def Do(Environment):
        # EnableOptimizedScriptUndo
        App.Do( Environment, 'EnableOptimizedScriptUndo', {
                'GeneralSettings': {
                    'ExecutionMode': App.Constants.ExecutionMode.Silent,
                    'AutoActionMode': App.Constants.AutoActionMode.Match,
                    'Version': ((10,0,1),1)
                    }
                })

        # New Raster Layer
        App.Do( Environment, 'NewRasterLayer', {
                'General': {
                    'Opacity': 50,
                    'Name': u'Shadow',
                    'IsVisible': True,
                    'IsTransparencyLocked': False,
                    'LinkSet': 0,
                    'UseHighlight': False,
                    'PaletteHighlightColor': (255,255,64),
                    'GroupLink': True,
                    'BlendMode': App.Constants.BlendMode.Multiply
                    },
                'BlendRanges': {
                    'BlendRangeGreen': (0,0,255,255,0,0,255,255),
                    'BlendRangeRed': (0,0,255,255,0,0,255,255),
                    'BlendRangeBlue': (0,0,255,255,0,0,255,255),
                    'BlendRangeGrey': (0,0,255,255,0,0,255,255)
                    },
                'GeneralSettings': {
                    'ExecutionMode': App.Constants.ExecutionMode.Silent,
                    'AutoActionMode': App.Constants.AutoActionMode.Match,
                    'Version': ((10,0,1),1)
                    }
                })

I just found a link to some software that creates maps, called AUTOREALM. Apparently the main use is in creating maps for role playing, and it looks like it also generates hex and square map overlays (cool). However, with at least one web comic that has the occasional map, I can see other uses too. What a neat idea!

Just discovered Sloog.org. It's a bookmarking and tagging system for Second Life. You go to the office in Mosi-Mosi and grab a Sloog HUD. Once you have the HUD attached you can tag places and avatars for later reference. You can also search for those tagged items, and the results are given in local chat. Interesting idea which reminds me of some Internet tagging and bookmarking sites I've looked at. I guess the advantage to using this over just dumping landmarks and calling cards in your inventory is that the tagging tends to put them in context. Also, it means that commonly tagged places and avatars will become more popular and easier to find. Has anyone else used this? What was your experience?

I just discovered that En Garde! - a role-playing game set in 17th century Paris, on which I used to run postal games with about 40 players in the late 70s - is still around! It's been revived by a new company and even has its own web site. Cool stuff. I just sent the people running this an e-mail to see if they'd like to see copies of the "expanded rules" I wrote.

I was in the Star Trek Museum in Second Life and am highly impressed with the Science floor of the main deck. Anyway, one of the rooms there has a version of a PowerPoint presentation (one of several, it turns out): Watch Star Trek with a Physicist (II). The physicist is Don Lincoln, who works at Fermilab. Most interesting!

Instead of looking at web comics like I was going to, I spent a fair bit of time last night going through the SourceForge listings. And I found some great stuff! I found a Windows version of a solitaire game called Shisen-Sho, where you remove pairs of Mahjong tiles until you clear the board. Sounds exactly like any of the hundred versions of solitaire Mahjong, doesn't it, but there's just one difference - all the tiles are flat in a rectangle, and you can only remove pairs that can have an unblocked route of no more than three lines to each other (see pic at right). The only thing missing from the download was the rules - but I found these elsewhere!

This game is extremely absorbing for me, just like Links was - I love elegant puzzle games. Apparently it's a port of a version of the game that ran under KDE (a Linux GUI) and can use the tile sets for KMahjongg. I went out and found the download page and converted them (as simple as changing the extension from .tileset to .bmp). There are heaps of other versions out there too (and Ishido looks just as interesting). Cool!

The other big find at SourceForge was the number of train and railway games and simulations. I've been interested in trains and railways ever since I was a child. My father was a guard on freight and passenger trains for over 30 years, and even took me with him on a couple of runs up to the Avon Valley marshalling yards.

In Simutrans you "build the transport networks, with platforms, quays, level crossings, signals and much more. Transport passengers between nearby cities with a commuter train or use a high speed train to earn big money by connecting cities further apart". I haven't tried it yet but it looks a lot like A-Train and Sim City (though there's also FreeTrain).

Rails is a Java implementation of the 18xx series of board games. What's 18xx? I have an original copy (with Northern expansion) of 1829 by Hartland Trefoil. This was an elegant board game based on the first railways in Britain in the 19th century. Each player bought shares in one or more companies and built track (by placing tiles), bought engines and ran trains for profit or loss. It was deceptively simple, requiring a mixture of strategy and shrewd management. Like Diplomacy, the game seems to have created an entire following and variants.

And then there's the Crayon Rails game (not open-source; I found it while looking for Cyber Rails, which doesn't seem to have anything to download yet), which is clearly inspired by Empire Builder.

Years ago when I was in Fandom, they used to have Rail Baron tournaments at Swancon. I used to own a set of that, but I really found it difficult to play the game because the board would freak out my vision and (like Monopoly) I'd always end a game with a migraine headache! An alternative to RB was Empire Builder. I own two sets - America (the original) and Britain. The thing about these games was that you built rail networks by drawing in crayon on a laminated map. Much more interesting than Monopoly-styled RB. Actually, there seems to be a whole site devoted to these old board games, called Rail Game Fans. I must investigate this more thoroughly, just as I should look at Rail Serve as well.

But, without a doubt, the big "gob smacker" of a discovery in my browsing would have to be Rail World and Yard Duty. Both are railway simulations that use satellite photos of real railway complexes to simulate railway management. There's no "winning" as such, but by golly, the most realism I've seen yet! I must see about adding the Kewdale Freight Terminal and other locations sometime. Yes, I know this all sounds obsessive, but trains (and train games) have been a passion for a long while.

More serendipity. I was reading a friend's post when I saw a reference to Too Good to Last (they were talking about Firefly). Turns out it's all part of the TV TROPES WIKI, which talks about techniques and clichés writers use in Anime, Comics, Video Games, Literature, Film, and New Media. The funniest reference I found was Northern Exposure providing examples of Dropped a Bridge on Him. As in "When a character is permanently written out of a show, especially killed off, in a way that is particularly awkward, anti-climactic, mean-spirited or dictated by producer's fiat, they Dropped a Bridge on Him." In the first season of Northern Exposure, someone dies when a satellite crashes into him! And of course it's called this because:

"Named for the death of Captain Kirk in Star Trek: Generations, which should have been a key, climactic event putting an exclamation point to 30 years of adventuring. Instead, they, literally, Dropped A Bridge On Him. (And that's the improved version, mind you. Originally, and in the novelization, the plan was for the Big Bad to shoot him in the back, but for some reason test audiences didn't like it -- hence the improvement.)"

Ha ha ha - well that was about the only way they could do it! Other neat terms include Better On DVD, Lamp Shade Hanging, Spikeification, Clue From Ed, Freud Was Right, Too Dumb To Live, Nothing Exciting Ever Happens Here and Anti Poop Socking. There's obviously much to explore at this site.

Gasp, just discovered BoardGame Geek! In particular, I was looking for information about a game called Conquest that I played once with my cousins in the early 70s. I found it, and also all sorts of interesting links like DIY, Customized or Homemade Games (golly, I used to have copies of Subbuteo) and what looks like a cool games company in the Netherlands called Cwali. I remember a lot of GDW games I used to own as well, including Belter, Battle for Moscow (now "public domain/freeware", so you can download PDF and other files and make a copy!) and Citadel: the Battle of Dien Bien Phu (the only game I can think of where units had an infinite movement allowance along roads). Think I'll have a "hunt and peck" session at this site.

Click to show your support for a new publication of Dune... The Spice must flow!

Golly, those were the days. I remember playing play-by-mail games, both Diplomacy and Flying Buffalo stuff; getting Strategy & Tactics by S.P.I. and Conflict Magazine by Simulations Design Corporation; and all those Avalon Hill games I had. I still have a complete collection of Games & Puzzles (anyone else remember Hexagonal Chess?). Of course I sold most of them when I left Perth. Just didn't have the room to store them, and just didn't mix in wargaming circles any more.
http://las-blog.livejournal.com/
CC-MAIN-2015-32
refinedweb
1,628
71.14
The char type is designed to store characters, such as letters and numeric digits. The most common symbol set is the ASCII character set. For example, 65 is the code for the character A, and 77 is the code for the character M.

Try the char type in the following code.

    #include <iostream>

    int main()
    {
        using namespace std;
        char ch;                // declare a char variable

        cout << "Enter a character: " << endl;
        cin >> ch;

        cout << "hi! ";
        cout << "Thank you for the " << ch << " character." << endl;
        return 0;
    }

The following code illustrates the char type and int type contrasted.

    #include <iostream>

    int main()
    {
        using namespace std;
        char ch = 'M';          // assign ASCII code for M to ch
        int i = ch;             // store same code in an int
        cout << "The ASCII code for " << ch << " is " << i << endl;

        cout << "Add one to the character code:" << endl;
        ch = ch + 1;            // change character code in ch
        i = ch;                 // save new character code in i
        cout << "The ASCII code for " << ch << " is " << i << endl;

        cout.put(ch);           // using cout.put() to display a char constant
        cout.put('!');

        cout << endl << "Done" << endl;
        return 0;
    }

The code above generates the following result.

You have several options for writing character literals in C++. We can write an ordinary character enclosed in single quotes, such as 'M'. C++ also has special notations, called escape sequences, as shown in the following table. For example, \a represents the alert character, which beeps your terminal's speaker or rings its bell. The escape sequence \n represents a newline. And \" represents the double quotation mark as an ordinary character instead of a string delimiter.

    char alarm = '\a';
    cout << alarm << "this is a test!\a\n";
    cout << "Java \"hi \" C++\n was here!\n";

The newline character provides an alternative to endl for inserting new lines. All three of the following move the screen cursor to the beginning of the next line:

    cout << endl;   // using the endl manipulator
    cout << '\n';   // using a character constant
    cout << "\n";   // using a string

The following code demonstrates a few escape sequences.

    #include <iostream>

    int main()
    {
        using namespace std;
        cout << "\ahi \"hey\" is now activated!\n";
        cout << "Enter your agent code:________\b\b\b\b\b\b\b\b";
        long code;
        cin >> code;
        cout << "\aYou entered " << code << "...\n";
        cout << "\ahi! !\n";
        return 0;
    }

The code above generates the following result.
http://www.java2s.com/Tutorials/C/Cpp_Tutorial/0050__Cpp_char_Type.htm
CC-MAIN-2017-09
refinedweb
389
75.91
Deriving A Window From Another Custom Window - Hey guys, I am having a heck of a time with this one. Basically I am trying to derive a window off of another window, which is derived from the plain Window shipped with WPF. I know how to derive from it in code, but the problem is using it in the XAML. I have done what I have so far found on the net. This is really the only thing I have seen referring to it in my searches:

<src:myForm
    xmlns:src="..." >
</src:myForm>

The problem with that is I get this error upon compiling: "'myForm' cannot be the root of a XAML file because it was defined using XAML. Line 2 Position 2." As well as some other messages regarding schema (not errors). What am I missing? Because this is driving me nuts.

Answers
- Monday, June 18, 2007 3:41 PM Carole Snyder - MSFT
It's not that you're defining and using the namespace in the same element. The following is perfectly legal:

<Label Content="{x:Static sys:DateTime.Now}"
       xmlns:sys="clr-namespace:System;assembly=mscorlib" />

But it's not possible to inherit from a custom window in XAML. I forget the full reason why, but if I recall it has to do with how XAML is loaded. What I've done is create a class that inherits from a panel such as Grid or DockPanel and then use that class as the root element for all of my windows. I don't know if that will work for you though. You could also try creating a style for Window in App.xaml.
- Wednesday, June 20, 2007 6:19 AM Yi-Lun Luo - MSFT, Moderator
Hello, Carole is right. You can't inherit a Window if the parent Window is defined in XAML. But in your case, it seems that it's not difficult to define your parent Window in pure code. So just use pure code to define your parent Window, and then you can inherit it in XAML.

All Replies
- Saturday, June 16, 2007 4:11 AM Matt Eland
Looks like basically you're trying to say to XAML:
- The root element is myForm within the namespace I defined as src.
- I want to define a namespace called src that is located at ...
I'm not aware of any way to define a namespace before the root node, but it might be possible using metadata. Otherwise, I think you're pretty much messed as far as declaring a CustomWindow in XAML. Code should be well enough, but that defeats the point.
- So basically there is no inheritance for windows in WPF (XAML), because that makes no sense and is quite lame. Anyone else have a suggestion on how to go about this? There has to be some sort of a workaround or something for the time being.
- Saturday, June 16, 2007 11:16 PM Matt Eland
Well, what do you need a custom Window object for? Is it for styling, added methods, or what? If you really can't extend Windows and reference them as root nodes in XAML (which does seem pretty lousy to me), there might be another way to get the functionality you're looking for.
- I basically need a window that will do some special control processing that I want all the windows in my application to inherit (formatting, etc.), so that I don't need some sort of external class within each window of the application. I mean, it is possible to do it that way, but it just seems more easily done with some simple inheritance.
- Wednesday, January 30, 2008 9:57 PM the_magic_juan
I think MSFT really missed the boat on this one and I hope they fix it... to not be able to inherit XAML is a big miss. I know somebody is going to say, "just use styles". What we want to do is declaratively define a base XAML page with WPF controls, including the excellent Xceed datagrid, with events and whatnot... that's not styles. WPF is right on for the designer, but for shops that are trying out WPF, not being able to inherit a base XAML page, defined declaratively, defeats the purpose of OOP. I understand that you can create the page all in code and then have your WPF page inherit that. That's great. But then you lose the ease and power of the XAML. I can dynamically generate a XAML page at run time, but that does nothing for me at design time. Anyway, I hope MSFT addresses this. For the time being we're going to have to go back to WinForms and use the host object for presenting WPF - sort of a hybrid application. Thanks.
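To make the accepted answer concrete, here is a minimal sketch of the pure-code base window plus a XAML-defined derived window (the class, namespace, and shared setup are illustrative, not from the thread):

// BaseWindow.cs -- the parent window is defined entirely in code, no XAML.
using System.Windows;

namespace MyApp
{
    public class BaseWindow : Window
    {
        public BaseWindow()
        {
            // Shared processing/formatting for every window in the app.
            FontSize = 13;
            SnapsToDevicePixels = true;
        }
    }
}

<!-- MainWindow.xaml -- the derived window can now use BaseWindow as its root. -->
<local:BaseWindow x:Class="MyApp.MainWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:MyApp">
    <Grid />
</local:BaseWindow>

The code-behind class must then also derive from the base: public partial class MainWindow : BaseWindow.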
http://social.msdn.microsoft.com/forums/en/wpf/thread/a7ec68d3-11b1-4f80-a1fe-4bfbc9923812/
crawl-002
refinedweb
800
69.82
On 02/10/11 12:03, Linus Torvalds wrote:
> On Thu, Feb 10, 2011 at 11:34 AM, Randy Dunlap <randy.dunlap@oracle.com> wrote:
>>
>> Loading ipmi_si module a second time causes an Oops:
>>
>> [ 68.120143] RIP: 0010:[<ffffffff813fc579>] [<ffffffff813fc579>] put_driver+0x10/0x22
>
> The disassembly is
>
>   55                     push   %rbp
>   48 89 e5               mov    %rsp,%rbp
>   0f 1f 44 00 00         nopl   0x0(%rax,%rax,1)
>   48 ff 05 c7 af 80 01   incq   0x180afc7(%rip)   # 0x180aff2
> * 48 8b 7f 60            mov    0x60(%rdi),%rdi   <-- trapping instruction
>   e8 38 27 ec ff         callq  0xffffffffffec276c
>   48 ff 05 bf af 80 01   incq   0x180afbf(%rip)   # 0x180affa
>   c9                     leaveq
>   c3                     retq
>
> which is the access of "drv->p" in that function:
>
>   kobject_put(&drv->p->kobj);
>
> so "drv" that was passed in was just bogus. (it's "0xffffffffa06a8430", looks like it's the DEBUG_PAGEALLOC that has caused the page to be free'd).
>
>> [ 68.340115] Call Trace:
>> [ 68.340115] [<ffffffff813fc64b>] driver_register+0xc0/0x1b2
>> [ 68.340115] [<ffffffff8137f5de>] pnp_register_driver+0x28/0x31
>> [ 68.340115] [<ffffffffa06b888d>] init_ipmi_si+0x1a4/0x4cd [ipmi_si]
>> [ 68.340115] [<ffffffff810020a6>] do_one_initcall+0x6c/0x1e3
>> [ 68.340115] [<ffffffff810d4998>] sys_init_module+0x12b/0x307
>
> And I think that - as usual - the problem is that the damn driver cleanup is very ugly, and has this duplicate set of code to unregister all the random crap. Except one of the duplicates is missing one case. I think the bug was introduced by Bjorn Helgaas in commit 9e368fa011d4 ("ipmi: add PNP discovery (ACPI namespace via PNPACPI)") which added the acpi pnp case, but only unregistered it on the regular module exit path, not on the "module loaded with no pnp devices" path.
>
> Does this patch fix it? And Corey - this is a good example of why the code shouldn't duplicate the "unregister stuff" in the module load error case vs the module exit path, and there should be a shared "cleanup()" function that is called by both. Can this be cleaned up, please?
>
> PATCH IS UNTESTED!

That works.

Acked-and-tested-by: Randy Dunlap <randy.dunlap@oracle.com>

thanks,
--
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***
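The patch itself is not quoted in this reply. The shared-cleanup shape Linus is describing is the usual pattern below (a rough sketch of the idea only, not the actual commit; the state flag, helper names, and the no_devices_found condition are illustrative):

/* Sketch only -- not the real ipmi_si patch. */
static bool pnp_registered;	/* illustrative: remember what we registered */

static void cleanup_ipmi_si(void)
{
	if (pnp_registered)
		pnp_unregister_driver(&ipmi_pnp_driver);
	/* ... unregister every other interface the same way ... */
}

static int __init init_ipmi_si(void)
{
	int rv = pnp_register_driver(&ipmi_pnp_driver);
	if (rv == 0)
		pnp_registered = true;
	/* ... probe for devices; if none were found ... */
	if (no_devices_found) {
		cleanup_ipmi_si();	/* same teardown as the exit path */
		return -ENODEV;
	}
	return 0;
}

static void __exit exit_ipmi_si(void)
{
	cleanup_ipmi_si();
}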
http://lkml.org/lkml/2011/2/10/391
CC-MAIN-2013-48
refinedweb
347
71.85
Starting python debugger automatically on error

python -m pdb -c continue myscript.py

If you don't provide the -c continue flag then you'll need to enter 'c' (for Continue) when execution begins. Then it will run to the error point and give you control there. As mentioned by eqzx, this flag is a new addition in Python 3.2, so entering 'c' is required for earlier Python versions.

You can use traceback.print_exc to print the exception's traceback. Then use sys.exc_info to extract the traceback and finally call pdb.post_mortem with that traceback:

import pdb, traceback, sys

def bombs():
    a = []
    print a[0]

if __name__ == '__main__':
    try:
        bombs()
    except:
        extype, value, tb = sys.exc_info()
        traceback.print_exc()
        pdb.post_mortem(tb)

If you want to start an interactive command line with code.interact using the locals of the frame where the exception originated you can do:

import traceback, sys, code

def bombs():
    a = []
    print a[0]

if __name__ == '__main__':
    try:
        bombs()
    except:
        type, value, tb = sys.exc_info()
        traceback.print_exc()
        last_frame = lambda tb=tb: last_frame(tb.tb_next) if tb.tb_next else tb
        frame = last_frame().tb_frame
        ns = dict(frame.f_globals)
        ns.update(frame.f_locals)
        code.interact(local=ns)

Use the following module:

import sys

def info(type, value, tb):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        # we are in interactive mode or we don't have a tty-like
        # device, so we call the default hook
        sys.__excepthook__(type, value, tb)
    else:
        import traceback, pdb
        # we are NOT in interactive mode, print the exception...
        traceback.print_exception(type, value, tb)
        print
        # ...then start the debugger in post-mortem mode.
        # pdb.pm() # deprecated
        pdb.post_mortem(tb) # more "modern"

sys.excepthook = info

Name it debug (or whatever you like) and put it somewhere in your python path. Now, at the start of your script, just add an import debug.
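For orientation, a session using the first approach looks roughly like this (the path, line number, and exception are illustrative, not captured output):

$ python -m pdb -c continue myscript.py
Traceback (most recent call last):
  ...
IndexError: list index out of range
Uncaught exception. Entering post mortem debugging
> /path/to/myscript.py(3)bombs()
-> print a[0]
(Pdb)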
https://codehunter.cc/a/python/starting-python-debugger-automatically-on-error
CC-MAIN-2022-21
refinedweb
249
61.22
21 May 2010 05:37 [Source: ICIS news]

SINGAPORE (ICIS news)--Asia's benzene values fell by $45-50/tonne (€36-40/tonne) on Friday to hit a seven-month low on the back of falling crude futures and an overnight drop in the US benzene market, traders said.

"The market really fell [as] ..."

Spot values were assessed at $790-805/tonne FOB (free on board) Korea, $45-50/tonne lower than Thursday's close and a whopping $145-150/tonne lower than last Friday's closing level, according to ICIS pricing. The last time prices went below $800/tonne FOB ...

Offers for any July loading cargoes touched $805/tonne FOB ...

The steady fall in crude futures seen this week was the key driving factor for the dip in global benzene values, market participants said. US benzene prices hit $2.85-2.95/gal FOB Barges late Thursday night, 5-10 cents lower than the previous close. Asia is a major exporter of benzene to ...
http://www.icis.com/Articles/2010/05/21/9361417/asia-benzene-plunges-on-crude-follows-overnight-us-dip.html
CC-MAIN-2014-42
refinedweb
167
67.69
Instant PhpStorm Starter

Who This Book Is For

If you are a developer who knows the basics of PHP and wants to learn the PhpStorm IDE and the Symfony2 framework, this book is for you. The book concentrates on using the IDE, not on the PHP language.

PhpStorm is a modern integrated development environment for the PHP language. PhpStorm provides an intelligent editor for PHP code, HTML, and JavaScript, with on-the-fly code analysis and automated refactoring for PHP and JavaScript code. PhpStorm's code completion supports PHP 5.4, including namespaces, closures, traits, and short array syntax. It includes a full-fledged SQL editor with editable query results.

"Instant PhpStorm Starter" is a plain and simple introduction to the world of advanced and professional PHP development with PhpStorm. It concentrates on the various tools and operations that will help you produce better code in a more efficient way. Learning professional PHP development starts with the basic use, analysis, and extension of existing PHP code. The book will guide you through the process of setting up and running your first application in Symfony2, a PHP hot topic that enforces all the best practices in PHP programming. Once you complete the task, you will have acquired all the necessary knowledge to work efficiently on your code.

The book covers PhpStorm's interface as well as the most useful tools to generate, modify, and inspect code. We start with the basic configuration of tool windows and the general appearance of the IDE. Then we proceed with the first application; here, you will learn to manipulate the project's files. Next, we describe the most important operations concerning the code, one by one. This part of the book is divided into three main sections: editing, high-level programming operations, and VCS.

This book covers all the killer features of the PhpStorm IDE.
https://www.packtpub.com/web-development/instant-phpstorm-starter-instant
CC-MAIN-2015-18
refinedweb
330
55.24
Java Objects and Classes in ColdFusion

In this and following chapters, we examine the nature of objects and types of objects and how they interact with one another. Classes are factories for objects. Once a class defines the kind of data it can hold and the operations it is capable of performing, a particular object can be made. For instance, "Ludwig" is an instance of the "person" class. Once instantiated (once a particular instance of a class has been brought into existence), the object often needs to relate to other objects similarly constructed in order to have a meaningful existence. Once the object can no longer fulfill the obligations of meaningful service to the organization of objects to which it belongs, it is taken out of service.

Understanding the concepts presented in this chapter is crucial to excelling at Java development, as the object is the foundation of Java. In Java, as in life, the rules regarding the creation of objects follow clearly defined and relatively simple patterns. In this chapter, we engage the complexities of the hierarchical relations involved in objects performing their functions. These relations or relational descriptors include packages and documentation, constructors, abstractions, inner classes, exceptions, and finality, few of which have meaningful corresponding terms or concepts in ColdFusion. For that reason I have tried where possible to approach the subject cautiously and from different angles up to this point.

7.1 Packages

We will begin with packages for three reasons. First, they should be somewhat familiar at this point because they have been mentioned previously. Second, working with packages is very straightforward. Finally, we will use packages to organize much of the remainder of our work, so understanding how they are used is necessary.

Applications are organized into packages, the fundamental organizational unit in Java. A package is simply a directory, itself composed of Java classes, interfaces, and other packages. Use packages in a manner similar to how you use directories for storing files on your computer. Package names are hierarchical, corresponding to physical directories on your hard drive. You may create as few or as many packages as you like for your applications. Use your best judgment to decide when it makes sense to create a new package. Think about portability, code reuse, and the audience and purpose of your application.

You have encountered packages many times already. For this book, I have a root package called JavaForCF. Inside this package is one package for each chapter in which we write code examples, such as chp6. Classes in the standard API created by Sun are generally in the java package, and they have a subgroup when it makes sense. Packages are referenced using dot notation. So, for instance, the java.sql package contains classes relevant for creating connections to databases. The java.nio package contains new classes for input and output. That means that there is a folder called java that has a folder called nio inside it that contains the source files.

NOTE: Where is this folder? Installing the JDK on your system includes an archive called src.jar. This archive contains all of the source files used to create the JDK. You can view the source file for any given Java class by unpacking the src.jar archive. To unpack the archive, navigate to the JDK directory (C:/jdk1.4, for instance) via a command prompt. Then type this command:

jar xvf src.jar src/java/sql/Statement.java
This command will create the new directories src, java, and sql. You can then open and view the Statement.java file. This is the object used for executing static SQL statements and returning the results. Depending on your file associations, you might need to replace the file name src.jar with src.zip in the above command.

You will readily see the importance of using packages if you have worked with XML at all. Because XML allows you to define your own tags, just as Java allows you to define your own classes, you must have some way of indicating the uniqueness of your work to distinguish it from the work of others. That is, you have to avoid name conflicts. In XML, you do this with namespaces. In Java, you use packages.

Packages can be nested. For instance, the String class is in the lang package, which is nested in the java package: any reference to String is really an implicit reference to java.lang.String.

Sun recommends using your Internet domain name for your packages, because these are already known to be unique. Generally, packages are created with the domain name as a subpackage of the name extension. For instance, packages created by the Apache Software Foundation can be found inside the org.apache package. Here's another example: I have registered the domain CoreColdFusion.com. I might create packages for this book and this chapter in com.corecoldfusion.javaforcf.chp7.

7.1.1 Designing Packages

Creating unique names is the real reason for package nesting. The compiler honors no relationship whatsoever between com.corecoldfusion.javaforcf and com.corecoldfusion.javaforcf.chp7. They are organized in your mind, perhaps, but otherwise they are totally unrelated.

However, packages should be designed with care. Think of the access you want to allow, and plan for it. Classes in a package have total access to each other's non-private members. Anything not explicitly marked private opens your class members up for unexpected reference by unrelated classes. Group your packages logically. This will help other programmers find your code. There is no added benefit to obscurity when placing classes in packages.

7.1.2 The Package Object

There is a Package object in the java.lang package. It is not used in creating or working with packages, and you don't need to reference it when defining packages for your classes. It is useful for discovering metadata about a package, such as version information about a package's implementation and specification. This can be useful to you as a programmer. For instance, you may need your program to inspect the Package object for version information and then implement certain functionality depending on the result. You could also need this information in order to work around bugs that may exist in a certain package.

You gain access to this information by calling the main methods of the Package object, which are shown here:

public String getName() returns the name of this package.
public String getSpecificationTitle() returns the title of the specification implemented by this package. If unknown, returns null.
public String getSpecificationVersion() returns a string describing the version of the specification implemented by this package. If unknown, returns null.
public String getSpecificationVendor() returns a string naming the owner and maintainer of this specification implemented by this package. If unknown, returns null.
public boolean isCompatibleWith(String desiredVersion) returns a boolean indicating whether the package is compatible with the version indicated.
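A short sketch of how these methods might be used (the class name and the version string checked here are illustrative):

public class PackageInfo {
    public static void main(String[] args) {
        // Every loaded class can hand back its Package object.
        Package p = String.class.getPackage();
        System.out.println(p.getName());                  // java.lang
        System.out.println(p.getSpecificationVersion());  // e.g. "1.4"
        if (p.isCompatibleWith("1.2")) {
            System.out.println("1.2 features are safe to use.");
        }
    }
}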
7.1.3 Working with Packages

There are two kinds of classes that a class can use: classes in its own package, and public classes in other packages. If you want to use a public class in another package, you have two options:

Add the full package name to every reference you make to the class. For instance:

package chp7;

public class Test {
    public static void main(String [] a) {
        java.io.File myFile = new java.io.File("Dude.txt");
    }
}

Import the package and reference the class name directly:

package chp7;

import java.io.File;

public class Test {
    public static void main(String [] a) {
        File myFile = new File("Dude.txt");
    }
}

Importing the packageName.className as shown above allows the shortcut reference to only that class, not other classes in the same package. You can use multiple import statements for the same package or different packages, like this:

import java.io.BufferedReader;
import java.io.BufferedWriter;
...

If you are going to import more than one or two classes in the same package, use the wildcard character to import all of the classes in a package, like this:

import java.io.*;

When importing a package, you can import the package name with a trailing * to indicate that you want to import all of the classes in that package.

NOTE: Using the * to indicate the import of all classes in a package does NOT import nested packages. For instance, importing java.util.* will import all classes located directly in that package, but not the java.util.jar or java.util.zip subpackages.

You can only import classes. You cannot import objects.

The only time you need to worry about your imports is when you have two classes that have the same name in different packages. A common example of this kind of conflict is the two different Date classes provided by the JDK. There is one in java.sql and another in java.util. So while the following will compile,

import java.util.*;
import java.sql.*;

you won't be able to reference the Date class in this program without the package name, like this:

Date hireDate; // Error!

because the compiler won't know if you mean java.util.Date or java.sql.Date. If you need to use both Date classes, you don't have a choice but to explicitly reference each one:

java.sql.Date hireDate = new java.sql.Date(System.currentTimeMillis()); // java.sql.Date has no no-arg constructor
java.util.Date fireDate = new java.util.Date();

7.1.4 Packaging a Class

It is easy to add classes to a package. First, create a directory. Then use the package keyword at the top of your class definition:

package chp7;

public class Test {
    //... code here
}

This command must be the first non-comment line of code in your class definition. It is not necessary to put your class into a package. If you do not include a package command in your source file, the classes therein are added to the default package. The default package has no name.

If the directories don't exist, some IDEs will create the packages for you and place the resultant class in there. The compiler won't check directories, however, when source files are compiled, and the virtual machine may not be able to find the resulting class file. For that reason, put your source files and class files in the same directory structure.
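To see how the package name maps onto the directory structure, you can compile and run the chp7.Test class from the directory that contains the chp7 folder (paths here are illustrative):

javac chp7/Test.java
java chp7.Test

Note the fully qualified name chp7.Test in the java command; running java Test from inside the chp7 directory would fail, because the virtual machine resolves the package name against the classpath.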
http://www.informit.com/articles/article.aspx?p=31755&amp;seqNum=6
CC-MAIN-2017-17
refinedweb
1,706
57.67
gnutls_x509_crl_verify — API function

#include <gnutls/x509.h>

int gnutls_x509_crl_verify(gnutls_x509_crl_t crl, const gnutls_x509_crt_t *CA_list, int CA_list_length, unsigned int flags, unsigned int *verify);

crl: is the crl to be verified
CA_list: is a certificate list that is considered to be trusted
CA_list_length: holds the number of CA certificates in CA_list
flags: flags that may be used to change the verification algorithm. Use OR of the gnutls_certificate_verify_flags enumerations.
verify: will hold the crl verification output.

This function will try to verify the given crl and return its verification status. See gnutls_x509_crt_list_verify() for a detailed description of return values. Note that since GnuTLS 3.1.4 this function includes the time checks.

Note that the value in verify is set only when the return value of this function is success (i.e., failure to trust a CRL does not imply a negative return value).
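A minimal calling sketch (error handling elided; the variable names, and the assumption that crl, ca_list, and n_cas have already been imported/initialized, are mine):

#include <gnutls/x509.h>

unsigned int status;
int ret = gnutls_x509_crl_verify(crl, ca_list, n_cas, 0, &status);

if (ret < 0) {
        /* verification could not be performed at all */
} else if (status != 0) {
        /* the CRL is not trusted; inspect the bits in status */
} else {
        /* the CRL verified successfully */
}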
https://man.linuxexplore.com/htmlman3/gnutls_x509_crl_verify.3.html
CC-MAIN-2021-31
refinedweb
122
59.6
$: is a way-too-Perl-y special variable alias for $LOAD_PATH. You could change it like this:

$LOAD_PATH.unshift(File.dirname(__FILE__))

The upside: you're now doing something more comprehensible. The downside: that "something" is editing an all-caps global. That doesn't feel wrong to you?

This code pops the local directory name (relative, not absolute) onto the load path. Then you can require filenames as relative paths. This is not such a good idea. Say you've got a gem which provides an interface to the Twitter API, and this gem uses this convention. This is just an example; I'm not thinking of any real code out there on the githubs. Say you're enabling your web app with some Twitter-related functionality. Maybe you've got files like lib/twitter.rb or app/models/twitter.rb. The problem comes not with this idiom, but from what it makes possible. It enables you to do this:

require 'twitter'

In this particular example, that shit breaks all over the fucking place. Yay! I saved myself the effort of typing an absolute path and/or code which would generate an absolute path! And in the process, I guaranteed myself unexpected load bugs. The code attempts to load lib/twitter or app/models/twitter and you get MethodNotFoundErrors and similar problems. This idiom is only a good idea if your filenames are like the Highlander and there can be only one.

You can make it a little nicer:

$LOAD_PATH.unshift(File.expand_path(File.dirname(__FILE__)))

But you're still looking at a mess, and if you're going to go to all that trouble, you're better off doing this:

require File.expand_path(File.dirname(__FILE__)) + "twitter"

Of course the nicer thing is to do this:

class File
  def self.here(string)
    expand_path(dirname(__FILE__)) + string
  end
end

require File.here "twitter"

File is actually a class rather than a module, but you get the idea. You see File.expand_path(File.dirname(__FILE__)) all over the place, and it's ugly as sin. You see $:.unshift(File.dirname(__FILE__)) now and then, and it's not just ugly, but also vulnerable to annoying breakage. Any time you're faced with a choice between ugly shit and ugly shit that doesn't even work, the only sane response is to say "fuck no" and rewrite the rules so you're not faced with that kind of dilemma again.

Comments:

Holy. Freaking. Crap. Dude, get out of my head! Today at work I thought "I'm really sick of this File.expand_path(File.join(File.dirname(__FILE__), ...)) crap. What I really want is something like 'require from_here "file_name"'." Spooky.

There are some upsides. If you are dealing with the semi-canonical 'twitter' gem, it is nice to be able to preload your own require 'twitter' before a gem that requires it as a dependency. So maybe it is just a little hubris; you are saying this is "the twitter.rb". That gets at what I was also saying about being relative from the name of the gem onward, such as the idiom for requiring from the common library: require 'cgi/session'. This works if you are dealing with a case where there is a de facto or real standard, such as for uniquely named gems (say hpricot or something). All I know is I have seen cases where it is cleaner and avoids loading issues better if some things are purely relative.

There are two bugs in your code, alas. The first one is that you'll want to insert a "/" between the result of expand_path() and string.

The second is more significant: the value of __FILE__ will always be the path to the file that *defines* File.here, rather than the file that calls it. If, say, the defining file lives under /tmp/lib and the calling file does:

require "lib/file_here.rb"
puts File.here("my_lib")    # ignoring the slash problem
require File.here("my_lib")

then the output is "/tmp/lib/my_lib", not "/tmp/my_lib", along with a LoadError. So, basically, this will only work if the File.here method is defined by a file in the same directory as the other files you want to require using it.

.... that is, unless you plan on defining File.here at the top of every file that plans on using it, of course. And you have to anyway, if you are going to use it before any require... That is the real problem with the require mess: you need to fix it before you could require a file that would fix it.... ;) Of course, we could just all switch to 1.9. ;)

@lazyatom: the specifics of my code are irrelevant. The first sentence after the code example acknowledges that it's pseudocode.

God, do not get me started. Every time I see someone chewing up ARGV the red mist descends...

David Brady posted a solution on Twitter. But I don't necessarily want a require of my vendored lib to be counted as a different file than some other version, and thus both loaded, due to having different absolute paths. require 'canonical_lib/subset' is correct often. That's why I am thinking some form of auto_require :ConstantName, 'whatever fucking path you...

Ah! I always had a feeling that something was going wrong with the sentence $LOAD_PATH.unshift(File.dirname(__FILE__)) but I couldn't specify what exactly. Now things are crystal clear. Thanks Giles.
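For what it's worth, a variant that addresses both bugs raised in the comments (the missing slash, and __FILE__ pointing at the defining file) could parse the caller's location instead. This is a sketch, and the naive caller-string parsing is mine:

class File
  def self.here(relative)
    # caller(1) describes the frame of the file that called us, not the
    # file that defined this method; it looks like "path/to/file.rb:12:in ...".
    calling_file = caller(1).first.split(/:\d+/).first
    # Two-argument expand_path joins with a slash for us.
    expand_path(relative, dirname(calling_file))
  end
end

require File.here "twitter"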
http://gilesbowkett.blogspot.com/2009/04/unshiftfiledirnamefile.html?showComment=1240476120000
CC-MAIN-2018-22
refinedweb
890
74.79
How the heck does async/await work in Python 3.5?

Or, generators let you do neat stuff

Being a core developer of Python has made me want to understand how the language generally works. I realize there will always be obscure corners where I don't know every intricate detail, but to be able to help with issues and the general design of Python I feel like I should try and understand its core semantics and how things work under the hood.

But until recently I didn't understand how async/await worked in Python 3.5. I knew that yield from in Python 3.3 combined with asyncio in Python 3.4 had led to this new syntax. But having not done a lot of networking stuff -- which asyncio is not limited to but does focus on -- had led to me not really paying much attention to all of this async/await stuff. I mean I knew that:

yield from iterator

was (essentially) equivalent to:

for x in iterator:
    yield x

And I knew that asyncio was an event loop framework which allowed for asynchronous programming, and I knew what those words (basically) meant on their own. But having never dived into the async/await syntax to understand how all of this came together, I felt I didn't understand asynchronous programming in Python, which bothered me. So I decided to take the time and try and figure out how the heck all of it worked. And since I have heard from various people that they too didn't understand how this new world of asynchronous programming worked, I decided to write this essay (yes, this post has taken so long in time and is so long in words that my wife has labeled it an essay).

Now because I wanted a proper understanding of how the syntax worked, this essay has some low-level technical detail about how CPython does things. It's totally okay if it's more detail than you want or that you don't fully understand it as I don't explain every nuance of CPython internals in order to keep this from turning into a book (e.g., if you don't know that code objects have flags, let alone what a code object is, it's okay and you don't need to care to get something from this essay). I have tried to provide a more accessible summary at the end of every section so that you can skim the details if they turn out to be more than you want to deal with.

A history lesson about coroutines in Python

According to Wikipedia, "Coroutines are computer program components that generalize subroutines for nonpreemptive multitasking, by allowing multiple entry points for suspending and resuming execution at certain locations". That's a rather technical way of saying, "coroutines are functions whose execution you can pause". And if you are saying to yourself, "that sounds like generators", you would be right.

Back in Python 2.2, generators were first introduced by PEP 255 (they are also called generator iterators since generators implement the iterator protocol). Primarily inspired by the Icon programming language, generators allowed for a way to create an iterator that didn't waste memory when calculating the next value in the iteration. For instance, if you wanted to create your own version of range(), you could do it in an eager fashion by creating a list of integers:

def eager_range(up_to):
    """Create a list of integers, from 0 to up_to, exclusive."""
    sequence = []
    index = 0
    while index < up_to:
        sequence.append(index)
        index += 1
    return sequence

The problem with this, though, is that if you want a large sequence like the integers from 0 to 1,000,000, you have to create a list long enough to hold 1,000,000 integers.
But when generators were added to the language, you could suddenly create an iterator that didn't need to create the whole sequence upfront. Instead, all you had to do is have enough memory for one integer at a time.

def lazy_range(up_to):
    """Generator to return the sequence of integers from 0 to up_to, exclusive."""
    index = 0
    while index < up_to:
        yield index
        index += 1

Having a function pause what it is doing whenever it hit a yield expression -- although it was a statement until Python 2.5 -- and then be able to resume later is very useful in terms of using less memory, allowing for the idea of infinite sequences, etc.

But as you may have noticed, generators are all about iterators. Now having a better way to create iterators is obviously great (and this is shown when you define an __iter__() method on an object as a generator), but people knew that if we took the "pausing" part of generators and added in a "send stuff back in" aspect to them, Python would suddenly have the concept of coroutines in Python (but until I say otherwise, consider this all just a concept in Python; concrete coroutines in Python are discussed later on). And that exact feature of sending stuff into a paused generator was added in Python 2.5 thanks to PEP 342. Among other things, PEP 342 introduced the send() method on generators. This allowed one to not only pause generators, but to send a value back into a generator where it paused. Taking our range() example further, you could make it so the sequence jumped forward or backward by some amount:

def jumping_range(up_to):
    """Generator for the sequence of integers from 0 to up_to, exclusive.

    Sending a value into the generator will shift the sequence by that amount.
    """
    index = 0
    while index < up_to:
        jump = yield index
        if jump is None:
            jump = 1
        index += jump

if __name__ == '__main__':
    iterator = jumping_range(5)
    print(next(iterator))     # 0
    print(iterator.send(2))   # 2
    print(next(iterator))     # 3
    print(iterator.send(-1))  # 2
    for x in iterator:
        print(x)              # 3, 4

Generators were not mucked with again until Python 3.3 when PEP 380 added yield from. Strictly speaking, the feature empowers you to refactor generators in a clean way by making it easy to yield every value from an iterator (which a generator conveniently happens to be).

def lazy_range(up_to):
    """Generator to return the sequence of integers from 0 to up_to, exclusive."""
    index = 0
    def gratuitous_refactor():
        nonlocal index  # needed so the inner generator can rebind index
        while index < up_to:
            yield index
            index += 1
    yield from gratuitous_refactor()

By virtue of making refactoring easier, yield from also lets you chain generators together so that values bubble up and down the call stack without code having to do anything special.

def bottom():
    # Returning the yield lets the value that goes up the call stack to come right back down.
    return (yield 42)

def middle():
    return (yield from bottom())

def top():
    return (yield from middle())

# Get the generator.
gen = top()
value = next(gen)
print(value)  # Prints '42'.
try:
    value = gen.send(value * 2)
except StopIteration as exc:
    value = exc.value
print(value)  # Prints '84'.

Summary

Generators in Python 2.2 let the execution of code be paused. Once the ability to send values back into the paused generators was introduced in Python 2.5, the concept of coroutines in Python became possible. And the addition of yield from in Python 3.3 made it easier to refactor generators as well as chain them together.

What is an event loop?
It's important to understand what an event loop is and how event loops make asynchronous programming possible if you're going to care about async/await. If you have done GUI programming before -- including web front-end work -- then you have worked with an event loop. But since having the concept of asynchronous programming as a language construct is new in Python, it's okay if you don't happen to know what an event loop is.

Going back to Wikipedia, an event loop "is a programming construct that waits for and dispatches events or messages in a program". Basically an event loop lets you go, "when A happens, do B". Probably the easiest example to explain this is that of the JavaScript event loop that's in every browser. Whenever you click something ("when A happens"), the click is given to the JavaScript event loop, which checks if any onclick callback was registered to handle that click ("do B"). If any callbacks were registered then the callback is called with the details of the click. The event loop is considered a loop because it is constantly collecting events and loops over them to find what to do with the event.

In Python's case, asyncio was added to the standard library to provide an event loop. There's a focus on networking in asyncio, which in the case of the event loop is to make the "when A happens" be when I/O from a socket is ready for reading and/or writing (via the selectors module). Other than GUIs and I/O, event loops are also often used for executing code in another thread or subprocess and have the event loop act as the scheduler (i.e., cooperative multitasking). If you happen to understand Python's GIL, event loops are useful in cases where releasing the GIL is possible and useful.

Summary

Event loops provide a loop which lets you say, "when A happens then do B". Basically an event loop watches out for when something occurs, and when something that the event loop cares about happens it then calls any code that cares about what happened. Python gained an event loop in the standard library in the form of asyncio in Python 3.4.

How async and await work

The way it was in Python 3.4

Between the generators found in Python 3.3 and an event loop in the form of asyncio, Python 3.4 had enough to support asynchronous programming in the form of concurrent programming. Asynchronous programming is basically programming where execution order is not known ahead of time (hence asynchronous instead of synchronous). Concurrent programming is writing code to execute independently of other parts, even if it all executes in a single thread (concurrency is not parallelism). For example, the following is Python 3.4 code to count down every second in two asynchronous, concurrent function calls.

import asyncio

# Borrowed from.
@asyncio.coroutine
def countdown(number, n):
    while n > 0:
        print('T-minus', n, '({})'.format(number))
        yield from asyncio.sleep(1)
        n -= 1

loop = asyncio.get_event_loop()
tasks = [
    asyncio.ensure_future(countdown("A", 2)),
    asyncio.ensure_future(countdown("B", 3))]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()

In Python 3.4, the asyncio.coroutine decorator was used to label a function as acting as a coroutine that was meant for use with asyncio and its event loop. This gave Python its first concrete definition of a coroutine: an object which implemented the methods added to generators in PEP 342 and was represented by the collections.abc.Coroutine abstract base class.
This meant that suddenly all generators implemented the coroutine interface even if they weren't meant to be used in that fashion. To fix this, asyncio required that all generators meant to be used as a coroutine had to be decorated with asyncio.coroutine.

With this concrete definition of a coroutine (which matched an API that generators provided), you then used yield from on any asyncio.Future object to pass it down to the event loop, pausing execution of the coroutine while you waited for something to happen (being a future object is an implementation detail of asyncio and not important). Once the future object reached the event loop it was monitored there until the future object was done doing whatever it needed to do. Once the future was done doing its thing, the event loop noticed and the coroutine that was paused waiting for the future's result started again with its result sent back into the coroutine using its send() method.

Take our example above. The event loop starts each of the countdown() coroutine calls, executing until it hits yield from and the asyncio.sleep() function in one of them. That returns an asyncio.Future object which gets passed down to the event loop and pauses execution of the coroutine. There the event loop watches the future object until the one second is over (as well as checking on other stuff it's watching, like the other coroutine). Once the one second is up, the event loop takes the paused countdown() coroutine that gave the event loop the future object, sends the result of the future object back into the coroutine that gave it the future object in the first place, and the coroutine starts running again. This keeps going until all of the countdown() coroutines are finished running and the event loop has nothing to watch. I'll actually show you a complete example of how exactly all of this coroutine/event loop stuff works later, but first I want to explain how async and await work.

Going from yield from to await in Python 3.5

In Python 3.4, a function that was flagged as a coroutine for the purposes of asynchronous programming looked like:

# This also works in Python 3.5.
@asyncio.coroutine
def py34_coro():
    yield from stuff()

In Python 3.5, the types.coroutine decorator has been added to also flag a generator as a coroutine like asyncio.coroutine does. You can also use async def to syntactically define a function as being a coroutine, although it cannot contain any form of yield expression; only return and await are allowed for returning a value from the coroutine.

async def py35_coro():
    await stuff()

A key thing async and types.coroutine do, though, is tighten the definition of what a coroutine is. It takes coroutines from simply being an interface to an actual type, making the distinction between any generator and a generator that is meant to be a coroutine much more stringent (and the inspect.iscoroutine() function is even stricter by saying async has to be used).

You will also notice that beyond just async, the Python 3.5 example introduces await expressions (which are only valid within an async def). While await operates much like yield from, the objects that are acceptable to an await expression are different. Coroutines are definitely allowed in an await expression since the concept of coroutines is fundamental in all of this. But when you call await on an object, it technically needs to be an awaitable object: an object that defines an __await__() method which returns an iterator which is not a coroutine itself.
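To make that definition concrete, here is a minimal awaitable written by hand (the class name and values are mine, not from the PEP):

class Nothing:
    """A hand-rolled awaitable: __await__() must return an iterator."""
    def __await__(self):
        yield          # hand control back to whatever is driving us
        return 42      # the value the await expression evaluates to

async def wait_on_nothing():
    result = await Nothing()  # works because Nothing defines __await__()
    return result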
Coroutines themselves are also considered awaitable objects (hence why collections.abc.Coroutine inherits from collections.abc.Awaitable). This definition follows a Python tradition of making most syntax constructs translate into a method call underneath the hood, much like a + b is a.__add__(b) or b.__radd__(a) underneath it all.

How does the difference between yield from and await play out at a low level (i.e., a generator with types.coroutine vs. one with async def)? Let's look at the bytecode of the two examples above in Python 3.5 to get at the nitty-gritty details. The bytecode for py34_coro() is:

>>> dis.dis(py34_coro)
  2           0 LOAD_GLOBAL              0 (stuff)
              3 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
              6 GET_YIELD_FROM_ITER
              7 LOAD_CONST               0 (None)
             10 YIELD_FROM
             11 POP_TOP
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE

The bytecode for py35_coro() is:

>>> dis.dis(py35_coro)
  1           0 LOAD_GLOBAL              0 (stuff)
              3 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
              6 GET_AWAITABLE
              7 LOAD_CONST               0 (None)
             10 YIELD_FROM
             11 POP_TOP
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE

Ignoring the difference in line number due to py34_coro() having the asyncio.coroutine decorator, the only visible difference between them is the GET_YIELD_FROM_ITER opcode versus the GET_AWAITABLE opcode. Both functions are properly flagged as being coroutines, so there's no difference there.

In the case of GET_YIELD_FROM_ITER, it simply checks if its argument is a generator or coroutine, otherwise it calls iter() on its argument (the acceptance of a coroutine object by the opcode for yield from is only allowed when the opcode is used from within a coroutine itself, which is true in this case thanks to the types.coroutine decorator flagging the generator as such at the C level with the CO_ITERABLE_COROUTINE flag on the code object).

But GET_AWAITABLE does something different. While the bytecode will accept a coroutine just like GET_YIELD_FROM_ITER, it will not accept a generator that has not been flagged as a coroutine. Beyond just coroutines, though, the bytecode will accept an awaitable object as discussed earlier. This makes yield from expressions and await expressions both accept coroutines while differing on whether they accept plain generators or awaitable objects, respectively.

You may be wondering why the difference between what an async-based coroutine and a generator-based coroutine will accept in their respective pausing expressions? The key reason for this is to make sure you don't mess up and accidentally mix and match objects that just happen to have the same API, to the best of Python's abilities. Since generators inherently implement the API for coroutines, it would be easy to accidentally use a generator when you actually expected to be using a coroutine. And since not all generators are written to be used in a coroutine-based control flow, you need to avoid accidentally using a generator incorrectly. But since Python is not statically compiled, the best the language can offer is runtime checks when using a generator-defined coroutine. This means that when types.coroutine is used, Python's compiler can't tell if a generator is going to be used as a coroutine or just a plain generator (remember, just because the syntax says types.coroutine that doesn't mean someone hasn't earlier done types = spam), and thus different opcodes that have different restrictions are emitted by the compiler based on the knowledge it has at the time.
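You can see those runtime checks for yourself: awaiting an unflagged generator is rejected. A quick sketch (the function names are mine):

import types

def plain_gen():
    yield

@types.coroutine
def flagged_gen():
    yield

async def demo():
    await flagged_gen()  # fine: the generator carries CO_ITERABLE_COROUTINE
    await plain_gen()    # TypeError: can't be used in 'await' expression

# Driving demo() by hand: the first send() pauses at the flagged generator's
# yield; resuming past it reaches plain_gen() and raises the TypeError.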
One very key point I want to make about the difference between a generator-based coroutine and an async one is that only generator-based coroutines can actually pause execution and force something to be sent down to the event loop. You typically don't see this very important detail because you usually call event loop-specific functions like the asyncio.sleep() function, since event loops implement their own APIs and these are the kind of functions that have to worry about this little detail. For the vast majority of us, we will work with event loops rather than write them, and thus only be writing async coroutines and never need to really care about this. But if you're like me and were wondering why you couldn't write something like asyncio.sleep() using only async coroutines, this can be quite the "aha!" moment.

Summary

Let's summarize all of this into simpler terms. Defining a method with async def makes it a coroutine. The other way to make a coroutine is to flag a generator with types.coroutine -- technically the flag is the CO_ITERABLE_COROUTINE flag on a code object -- or to subclass collections.abc.Coroutine. You can only make a coroutine call chain pause with a generator-based coroutine.

An awaitable object is either a coroutine or an object that defines __await__() -- technically collections.abc.Awaitable -- which returns an iterator that is not a coroutine. An await expression is basically yield from but with restrictions of only working with awaitable objects (plain generators will not work with an await expression). An async function is a coroutine that either has return statements -- including the implicit return None at the end of every function in Python -- and/or await expressions (yield expressions are not allowed). The restrictions for async functions are to make sure you don't accidentally mix and match generator-based coroutines with other generators, since the expected use of the two types of generators is rather different.

Think of async/await as an API for asynchronous programming

A key thing that I want to point out is actually something I didn't really think deeply about until I watched David Beazley's Python Brasil 2015 keynote. In that talk, David pointed out that async/await is really an API for asynchronous programming (which he reiterated to me on Twitter). What David means by this is that people shouldn't think of async/await as synonymous with asyncio, but instead think of asyncio as a framework that can utilize the async/await API for asynchronous programming.

David actually believes this idea of async/await being an asynchronous programming API so much that he has created the curio project to implement his own event loop. This has helped make it clear to me that async/await allows Python to provide the building blocks for asynchronous programming, but without tying you to a specific event loop or other low-level details (unlike other programming languages which integrate the event loop into the language directly). This allows for projects like curio to not only operate differently at a lower level (e.g., asyncio uses future objects as the API for talking to its event loop while curio uses tuples), but to also have different focuses and performance characteristics (e.g., asyncio has an entire framework for implementing transport and protocol layers which makes it extensible, while curio is simpler and expects the user to worry about that kind of thing but also allows it to run faster).
Based on the (short) history of asynchronous programming in Python, it's understandable that people might think that async/await == asyncio. I mean asyncio was what helped make asynchronous programming possible in Python 3.4 and was a motivating factor for adding async/await in Python 3.5. But the design of async/await is purposefully flexible enough to not require asyncio or contort any critical design decision just for that framework. In other words, async/await continues Python's tradition of designing things to be as flexible as possible while still being pragmatic to use (and implement).

An example

At this point your head might be awash with new terms and concepts, making it a little hard to fully grasp how all of this is supposed to work to provide you asynchronous programming. To help make it all much more concrete, here is a complete (if contrived) asynchronous programming example, end-to-end from event loop and associated functions to user code. The example has coroutines which represent individual rocket launch countdowns but that appear to be counting down simultaneously. This is asynchronous programming through concurrency: three separate coroutines will be running independently, and yet it will all be done in a single thread.

import datetime
import heapq
import types
import time


class Task:

    """Represent how long a coroutine should wait before starting again.

    Comparison operators are implemented for use by heapq. Two-item
    tuples unfortunately don't work because when the datetime.datetime
    instances are equal, comparison falls to the coroutine and they don't
    implement comparison methods, triggering an exception.

    Think of this as being like asyncio.Task/curio.Task.
    """

    def __init__(self, wait_until, coro):
        self.coro = coro
        self.waiting_until = wait_until

    def __eq__(self, other):
        return self.waiting_until == other.waiting_until

    def __lt__(self, other):
        return self.waiting_until < other.waiting_until


class SleepingLoop:

    """An event loop focused on delaying execution of coroutines.

    Think of this as being like asyncio.BaseEventLoop/curio.Kernel.
    """

    def __init__(self, *coros):
        self._new = coros
        self._waiting = []

    def run_until_complete(self):
        # Start all the coroutines.
        for coro in self._new:
            wait_for = coro.send(None)
            heapq.heappush(self._waiting, Task(wait_for, coro))
        # Keep running until there is no more work to do.
        while self._waiting:
            now = datetime.datetime.now()
            # Get the coroutine with the soonest resumption time.
            task = heapq.heappop(self._waiting)
            if now < task.waiting_until:
                # We're ahead of schedule; wait until it's time to resume.
                delta = task.waiting_until - now
                time.sleep(delta.total_seconds())
                now = datetime.datetime.now()
            try:
                # It's time to resume the coroutine.
                wait_until = task.coro.send(now)
                heapq.heappush(self._waiting, Task(wait_until, task.coro))
            except StopIteration:
                # The coroutine is done.
                pass


@types.coroutine
def sleep(seconds):
    """Pause a coroutine for the specified number of seconds.

    Think of this as being like asyncio.sleep()/curio.sleep().
    """
    now = datetime.datetime.now()
    wait_until = now + datetime.timedelta(seconds=seconds)
    # Make all coroutines on the call stack pause; the need to use `yield`
    # necessitates this be generator-based and not an async-based coroutine.
    actual = yield wait_until
    # Resume the execution stack, sending back how long we actually waited.
    return actual - now


async def countdown(label, length, *, delay=0):
    """Countdown a launch for `length` seconds, waiting `delay` seconds.

    This is what a user would typically write.
    """
    print(label, 'waiting', delay, 'seconds before starting countdown')
    delta = await sleep(delay)
    print(label, 'starting after waiting', delta)
    while length:
        print(label, 'T-minus', length)
        waited = await sleep(1)
        length -= 1
    print(label, 'lift-off!')


def main():
    """Start the event loop, counting down 3 separate launches.

    This is what a user would typically write.
    """
    loop = SleepingLoop(countdown('A', 5),
                        countdown('B', 3, delay=2),
                        countdown('C', 4, delay=1))
    start = datetime.datetime.now()
    loop.run_until_complete()
    print('Total elapsed time is', datetime.datetime.now() - start)


if __name__ == '__main__':
    main()
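Run under Python 3.5, the output is interleaved roughly like this (this is an illustration rather than captured output; the microsecond values and the ordering of lines that fire in the same second will vary from run to run):

A waiting 0 seconds before starting countdown
B waiting 2 seconds before starting countdown
C waiting 1 seconds before starting countdown
A starting after waiting 0:00:00.000128
A T-minus 5
C starting after waiting 0:00:01.000342
C T-minus 4
A T-minus 4
B starting after waiting 0:00:02.000267
B T-minus 3
C T-minus 3
A T-minus 3
...
B lift-off!
C lift-off!
A lift-off!
Total elapsed time is 0:00:05.003215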
""" print(label, 'waiting', delay, 'seconds before starting countdown') delta = await sleep(delay) print(label, 'starting after waiting', delta) while length: print(label, 'T-minus', length) waited = await sleep(1) length -= 1 print(label, 'lift-off!') def main(): """Start the event loop, counting down 3 separate launches. This is what a user would typically write. """ loop = SleepingLoop(countdown('A', 5), countdown('B', 3, delay=2), countdown('C', 4, delay=1)) start = datetime.datetime.now() loop.run_until_complete() print('Total elapsed time is', datetime.datetime.now() - start) if __name__ == '__main__': main() As I said, it's contrived, but if you run this in Python 3.5 you will notice that all three coroutines run independently in a single thread and yet the total amount of time taken to run is about 5 seconds. You can consider Task, SleepingLoop, and sleep() as what an event loop provider like asyncio and curio would give you. For a normal user, only the code in countdown() and main() are of importance. As you can see, there is no magic to async, await, or this whole asynchronous programming deal; it's just an API that Python provides you to help make this sort of thing easier. My hopes and dreams for the future Now that I understand how this asynchronous programming works in Python, I want to use it all the time! It's such an awesome concept that's so much better than something you would have used threads for previously. The problem is that Python 3.5 is so new that async/ await is also very new. That means there are not a lot of libraries out there supporting asynchronous programming like this. For instance, to do HTTP requests you either have to construct the HTTP request yourself by hand (yuck), use a project like the aiohttp framework which adds HTTP on top of another event loop (in this case, asyncio), or hope more projects like the hyper library continue to spring up to provide an abstraction for things like HTTP which allow you to use whatever I/O library you want (although unfortunately hyper only supports HTTP/2 at the moment). Personally, I hope projects like hyper take off so that we have a clear separation between getting binary data from I/O and how we interpret that binary data. This kind of abstraction is important because most I/O libraries in Python are rather tightly coupled to how they do I/O and how they handle data coming from I/O. This is a problem with the http package in Python's standard library as it doesn't have an HTTP parser but a connection object which does all the I/O for you. And if you were hoping requests would support asynchronous programming, your hopes have already been dashed because the synchronous I/O that requests uses is baked into its design. This shift in ability to do asynchronous programming gives the Python community a chance to fix a problem it has with not having abstractions at the various layers of the network stack. And we have the perk of it not being hard to make asynchronous code run as if its synchronous, so tools filling the void for asynchronous programming can work in both worlds. I also hope that Python gains some form of support in async coroutines for yield. Maybe this will require yet another keyword (maybe something like anticipate?), but the fact that you actually can't implement an event loop system with just async coroutines bothers me. Luckily, it turns out I'm not the only one who thinks this, and since the author of PEP 492 agrees with me, I think there's a chance of getting this quirk removed. 
Conclusion

Basically async and await are fancy generators that we call coroutines, and there is some extra support for things called awaitable objects and turning plain generators into coroutines. All of this comes together to support concurrency so that we have better support for asynchronous programming in Python. It's awesome and much easier to use than comparable approaches like threads -- I wrote an end-to-end example of asynchronous programming in under 100 lines of commented Python code -- while still being quite flexible and fast (the curio FAQ says that it runs faster than twisted by 30-40% but slower than gevent by 10-15%, and all while being implemented in pure Python; remember that Python 2 + Twisted can use less memory and is easier to debug than Go, so just imagine what you could do with this!). I'm very happy that this landed in Python 3 and I look forward to the community embracing it and helping to flesh out its support in libraries and frameworks so we can all benefit from asynchronous programming in Python.
http://www.snarky.ca/how-the-heck-does-async-await-work-in-python-3-5
CC-MAIN-2016-22
refinedweb
4,919
61.16
Some people seem to be confused why Angular seems to favor the Observable abstraction over the Promise abstraction when it comes to dealing with async behavior. There are pretty good resources about the difference between Observables and Promises already out there. I especially like to highlight this free 7-minute video by Ben Lesh on egghead.io. Technically there are a couple of obvious differences like the disposability and laziness of Observables. In this article we like to focus on some practical advantages that Observables introduce for server communication.

The scenario

Consider you are building a search input mask that should instantly show you results as you type. If you've ever built such a thing before you are probably aware of the challenges that come with that task.

1. Don't hit the search endpoint on every key stroke

Treat the search endpoint as if you pay for it on a per-request basis. No matter if it's your own hardware or not. We shouldn't be hammering the search endpoint more often than needed. Basically we only want to hit it once the user has stopped typing instead of with every keystroke.

2. Don't hit the search endpoint with the same query params for subsequent requests

Consider you type foo, stop, type another o, followed by an immediate backspace and rest back at foo. That should be just one request with the term foo and not two, even if we technically stopped twice after we had foo in the search box.

3. Deal with out-of-order responses

When we have multiple requests in-flight at the same time we must account for cases where they come back in unexpected order. Consider we first typed computer, stop, a request goes out, we type car, stop, a request goes out. Now we have two requests in-flight. Unfortunately the request that carries the results for computer comes back after the request that carries the results for car. This may happen because they are served by different servers. If we don't deal with such cases properly we may end up showing results for computer whereas the search box reads car.

Challenge accepted

We will use the free and open wikipedia API to write a little demo. For simplicity our demo will simply consist of two files: app.ts and wikipedia-service.ts. In a real world scenario we would most likely split things further up though.

Let's start with a Promise-based implementation that doesn't handle any of the described edge cases. This is what our WikipediaService looks like. Despite the fact that the Http/Jsonp API still has some little unergonomic parts, there shouldn't be much of a surprise here.

import { Injectable } from '@angular/core';
import { URLSearchParams, Jsonp } from '@angular/http';

@Injectable()
export class WikipediaService {
  constructor(private jsonp: Jsonp) {}

  search (term: string) {
    var search = new URLSearchParams()
    search.set('action', 'opensearch');
    search.set('search', term);
    search.set('format', 'json');
    return this.jsonp
                .get('', { search })
                .toPromise()
                .then((response) => response.json()[1]);
  }
}

Basically we are injecting the Jsonp service to make a GET request against the wikipedia API with a given search term. Notice that we call toPromise in order to get from an Observable<Response> to a Promise<Response>. With a little bit of then-chaining we eventually end up with a Promise<Array<string>> as the return type of our search method.
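One practical note: depending on your RxJS setup, the operators used throughout this article may need to be patched in explicitly. With RxJS 5's add-operator style (an assumption about your build, not something the article mandates), that would look like:

import 'rxjs/add/operator/toPromise';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/debounceTime';
import 'rxjs/add/operator/distinctUntilChanged';
import 'rxjs/add/operator/switchMap';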
So far so good, let's take a look at the app.ts file that holds our App Component.

// check the plnkr for the full list of imports
import {...} from '...';

@Component({
  selector: 'my-app',
  template: `
    <div>
      <h2>Wikipedia Search</h2>
      <input #term (keyup)="search(term.value)"/>
      <ul>
        <li *ngFor="let item of items">{{item}}</li>
      </ul>
    </div>
  `
})
export class AppComponent {
  items: Array<string>;

  constructor(private wikipediaService: WikipediaService) {}

  search(term) {
    this.wikipediaService.search(term)
                         .then(items => this.items = items);
  }
}

Not much of a surprise here either. We inject our WikipediaService and expose its functionality via a search method to the template. The template simply binds to keyup and calls search(term.value), leveraging Angular's awesome template ref feature. We unwrap the result of the Promise that the search method of the WikipediaService returns and expose it as a simple Array of strings to the template so that we can have *ngFor loop through it and build up a list for us.

You can play with the demo and fiddle with the code through this plnkr.

Unfortunately this implementation doesn't address any of the described edge cases that we would like to deal with. Let's refactor our code to make it match the expected behavior.

Taming the user input

Let's change our code to not hammer the endpoint with every keystroke but instead only send a request when the user stopped typing for 400 ms. This is where Observables really shine. The Reactive Extensions (Rx) offer a broad range of operators that let us alter the behavior of Observables and create new Observables with the desired semantics.

To unveil such super powers we first need to get an Observable<string> that carries the search term that the user types in. Instead of manually binding to the keyup event, we can take advantage of Angular's formControl directive. To use this directive, we first need to import the ReactiveFormsModule into our application module.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { JsonpModule } from '@angular/http';
import { ReactiveFormsModule } from '@angular/forms';

@NgModule({
  imports: [BrowserModule, JsonpModule, ReactiveFormsModule],
  declarations: [AppComponent],
  bootstrap: [AppComponent]
})
export class AppModule {}

Once imported, we can use formControl from within our template and set it to the name "term".

<input type="text" [formControl]="term"/>

In our component we create an instance of FormControl from @angular/forms and expose it as a field under the name term on our component. Behind the scenes, term automatically exposes an Observable<string> as property valueChanges that we can subscribe to. Now that we have an Observable<string>, taming the user input is as easy as calling debounceTime(400) on our Observable. This will return a new Observable<string> that will only emit a new value when there haven't been new values coming in for 400 ms.

export class App {
  items: Array<string>;
  term = new FormControl();

  constructor(private wikipediaService: WikipediaService) {
    this.term.valueChanges
             .debounceTime(400)
             .subscribe(term => this.wikipediaService.search(term).then(items => this.items = items));
  }
}

Don't hit me twice

As we said, it would be a waste of resources to send out another request for a search term that our app already shows the results for. Fortunately Rx simplifies many operations so much that it nearly feels unnecessary to mention them. All we have to do to achieve the desired behavior is to call the distinctUntilChanged operator right after we called debounceTime(400).
Again, we will get back an Observable<string>, but one that ignores values that are the same as the previous one.

Dealing with out-of-order responses

Dealing with out-of-order responses can be a tricky task. Basically we need a way to express that we aren't interested in results from previous in-flight requests anymore as soon as we send out a new one. In other words: cancel all previous requests as soon as we start a new one. As I briefly mentioned in the beginning, Observables are disposable, which means we can unsubscribe from them.

This is where we want to change our WikipediaService to return an Observable<Array<string>> instead of a Promise<Array<string>>. That's as easy as dropping toPromise and using map instead of then.

search (term: string) {
  var search = new URLSearchParams()
  search.set('action', 'opensearch');
  search.set('search', term);
  search.set('format', 'json');
  return this.jsonp
             .get('', { search })
             .map((response) => response.json()[1]);
}

Now that our WikipediaService returns an Observable instead of a Promise, we simply need to replace then with subscribe in our App component.

this.term.valueChanges
         .debounceTime(400)
         .distinctUntilChanged()
         .subscribe(term => this.wikipediaService.search(term).subscribe(items => this.items = items));

But now we have two subscribe calls. This is needlessly verbose and often a sign of unidiomatic usage. The good news is, now that search returns an Observable<Array<string>>, we can simply use flatMap to project our Observable<string> into the desired Observable<Array<string>> by composing the Observables.

this.term.valueChanges
         .debounceTime(400)
         .distinctUntilChanged()
         .flatMap(term => this.wikipediaService.search(term))
         .subscribe(items => this.items = items);

You may be wondering what flatMap does and why we can't use map here. The answer is quite simple. The map operator expects a function that takes a value T and returns a value U, for instance a function that takes in a string and returns a Number. Hence when you use map you get from an Observable<T> to an Observable<U>. However, our search method produces an Observable<Array<string>> itself. So, coming from the Observable<string> that we have right after distinctUntilChanged, map would take us to an Observable<Observable<Array<string>>>. That's not quite what we want.

The flatMap operator on the other hand expects a function that takes a T and returns an Observable<U>, and produces an Observable<U> for us. (NOTE: that's not entirely true, but it helps as a simplification.) That perfectly matches our case. We have an Observable<string>, then call flatMap with a function that takes a string and returns an Observable<Array<string>>.

So does this solve our out-of-order response issues? Unfortunately not. So why am I bothering you with all this in the first place? Well, now that you understand flatMap, just replace it with switchMap and you are done. What?! You may be wondering if I'm kidding you, but no, I am not. That's the beauty of Rx with all its useful operators. The switchMap operator is comparable to flatMap in a way. Both operators automatically subscribe to the Observable that the function produces and flatten the result for us. The difference is that the switchMap operator automatically unsubscribes from previous subscriptions as soon as the outer Observable emits new values. (A tiny standalone contrast of the two operators appears at the end of this article.)

Putting some sugar on top

Now that we got the semantics right, there's one more little trick that we can use to save us some typing.
Instead of manually subscribing to the Observable, we can let Angular do the unwrapping for us right from within the template. All we have to do to accomplish that is to use the AsyncPipe in our template and expose the Observable<Array<string>> instead of the Array<string>.

@Component({
  selector: 'my-app',
  template: `
    <div>
      <h2>Wikipedia Search</h2>
      <input type="text" [formControl]="term"/>
      <ul>
        <li *ngFor="let item of items | async">{{item}}</li>
      </ul>
    </div>
  `
})
export class App {
  items: Observable<Array<string>>;
  term = new FormControl();

  constructor(private wikipediaService: WikipediaService) {
    this.items = this.term.valueChanges
                     .debounceTime(400)
                     .distinctUntilChanged()
                     .switchMap(term => this.wikipediaService.search(term));
  }
}

And voilà, we're done. Check out the demos below!
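As promised, here is a tiny standalone contrast of the two operators (hand-written for illustration; fakeSearch and its delays are made up to reproduce the out-of-order scenario from above, and mergeMap is the newer name RxJS uses for flatMap):

import { Observable } from 'rxjs/Rx';

// 'computer' is "typed" first, but its fake request is slower than 'car'
function fakeSearch(term: string): Observable<string> {
  const responseDelay = term === 'computer' ? 500 : 100;
  return Observable.of(`results for ${term}`).delay(responseDelay);
}

// emit 'computer' after 100 ms, 'car' after 200 ms
const terms = Observable.interval(100).take(2).map(i => ['computer', 'car'][i]);

// flatMap/mergeMap: both responses arrive, so the slow 'computer'
// result comes last and would overwrite the newer 'car' result
terms.mergeMap(term => fakeSearch(term)).subscribe(x => console.log('mergeMap:', x));

// switchMap: the in-flight 'computer' request is unsubscribed the
// moment 'car' comes in, so only the 'car' result is ever emitted
terms.switchMap(term => fakeSearch(term)).subscribe(x => console.log('switchMap:', x));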
https://blog.thoughtram.io/angular/2016/01/06/taking-advantage-of-observables-in-angular2.html
CC-MAIN-2019-35
refinedweb
1,885
63.7
26 March 2007 18:17 [Source: ICIS news]

SAN ANTONIO, Texas (ICIS news)--The EU's chemical legislation Reach is biased against the industry, a controversial academic said on Monday.

Bjorn Lomborg said there had been no discussion to weigh up the cost to the industry against the benefits of the new regulations, which take effect later this year.

"It is about cost and benefit and that discussion has not taken place," Lomborg told ICIS news at the NPRA's 32nd International Petrochemicals Conference.

"There are always trade-offs – you cannot make these judgements if you only have one side of the story," said the author of the best-seller The Skeptical Environmentalist.

Lomborg, a professor, published the book in 2001; it suggested that predictions made by environmentalists were widely exaggerated. He made similar claims during a presentation to the NPRA, arguing that the consequences of global warming were vastly over
http://www.icis.com/Articles/2007/03/26/9016270/npra-07-academic-claims-reach-one-sided.html
CC-MAIN-2013-48
refinedweb
155
50.16
Quickstart
==========

Let's assume we have created a project directory and already have a Haskell module or two. Every project needs a name; we'll call this example "proglet".

Using "cabal init"
------------------

Editing the .cabal file
-----------------------

Modules included in the package
-------------------------------

For a library, ``cabal init`` looks in the project directory for files that look like Haskell modules and adds all the modules to the :pkg-field:`library:exposed-modules` field. For modules that do not form part of your package's public interface, you can move those modules to the :pkg-field:`other-modules` field. Either way, all modules in the library need to be listed.

For an executable, ``cabal init`` does not try to guess which file contains your program's ``Main`` module. You will need to fill in the :pkg-field:`executable:main-is` field with the file name of your program's ``Main`` module (including ``.hs`` or ``.lhs`` extension). Other modules included in the executable should be listed in the :pkg-field:`other-modules` field.

Modules imported from other packages
------------------------------------

Version constraints on a dependency can take forms such as ``pkgname >= n && < m`` and ``pkgname == n.*``. The last is just shorthand; for example ``base == 4.*`` means exactly the same thing as ``base >= 4 && < 5``.

Building the package
--------------------

For simple packages that's it! We can now try configuring and building the package::

    $ cabal configure
    $ cabal build

Next steps
----------

Package concepts
================

Before diving into the details of writing packages it helps to understand a bit about packages in the Haskell world and the particular approach that Cabal takes.

The point of packages
---------------------

Package names and versions
--------------------------

Section [TODO] has some tips on package versioning.

Kinds of package: Cabal vs GHC vs system
----------------------------------------

Unit of distribution.

Explicit dependencies and automatic package management
------------------------------------------------------

Portability
-----------

Developing packages
===================

Creating a package
------------------

Suppose you have a directory hierarchy containing the source files that make up your package. You will need to add two more files to the root directory of the package:

:file:`{package}.cabal`
    a Unicode UTF-8 text file containing a package description. For details of the syntax of this file, see the section on `package descriptions`_.

:file:`Setup.hs`
    a single-module Haskell program to perform various setup tasks (with the interface described in the section on :ref:`installing-packages`). This module should import only modules that will be present in all Haskell implementations, including modules of the Cabal library. The content of this file is determined by the :pkg-field:`build-type` setting in the ``.cabal`` file. In most cases it will be trivial, calling on the Cabal library to do most of the work.

Once you have these, you can create a source bundle of this directory for distribution. Building of the package is discussed in the section on :ref:`building-packages`. Package descriptions can also contain conditionals, for example::

    if os(windows)
      build-depends: Win32

Example: A package containing a simple library
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The HUnit package contains a file ``HUnit.cabal`` containing::

    name:     HUnit
    version:  1.1.1
    synopsis: A unit testing framework for Haskell

and the following ``Setup.hs``:
.. code-block:: haskell

    import Distribution.Simple
    main = defaultMain

Example: A package containing executable programs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Example: A package containing a library and executable programs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

(See also the API documentation for `Distribution.Simple <../release/cabal-latest/doc/API/Cabal/Distribution-Simple.html>`__ and the section on `more complex packages`_.)

Package descriptions
--------------------

Tests for the chosen compiler take the form of a compiler name (``GHC``, ``JHC``, ``UHC`` or ``LHC``) followed by a version range. For example, ``GHC ==6.10.3``, or ``LHC >=0.6 && <0.8``.

Modules and preprocessors
^^^^^^^^^^^^^^^^^^^^^^^^^

Source files with the following extensions are run through the corresponding preprocessors:

- ``.gc`` (:hackage-pkg:`greencard`)
- ``.chs`` (:hackage-pkg:`c2hs`)
- ``.hsc`` (:hackage-pkg:`hsc2hs`)
- ``.y`` and ``.ly`` (happy_)
- ``.x`` (alex_)
- ``.cpphs`` (cpphs_)

Some fields take lists of values, which are optionally separated by commas, except for the ``build-depends`` field, where the commas are mandatory. Some fields are marked as required. All others are optional, and unless otherwise specified have empty default values.

Package properties
^^^^^^^^^^^^^^^^^^

.. pkg-section:: global

These fields may occur in the first top-level properties section and describe the package as a whole:

.. pkg-field:: name: package-name (required)

The unique name of the package, without the version number.

.. pkg-field:: version: numbers (required)

The package version number, usually consisting of a sequence of natural numbers separated by dots.

.. pkg-field:: cabal-version: >= x.y

The version of the Cabal specification that this package description uses. The Cabal specification does slowly evolve, occasionally changing compatibility and behaviour. Most tools (including the Cabal library) support a range of specification versions, which is why the version in use is declared in the :pkg-field:`cabal-version` field.

.. pkg-field:: build-type: identifier

:default: ``Custom``

The type of build used by this package. Build types are the constructors of the `BuildType <../release/cabal-latest/doc/API/Cabal/Distribution-PackageDescription.html#t:BuildType>`__ type, defaulting to ``Custom``.

If the build type is anything other than ``Custom``, then the ``Setup.hs`` file *must* be exactly the standardized content discussed below. This is because in these cases, ``cabal`` will ignore the ``Setup.hs`` file completely, whereas other methods of package management, such as ``runhaskell Setup.hs [CMD]``, still rely on the ``Setup.hs`` file.

For build type ``Simple``, the contents of ``Setup.hs`` must be:

.. code-block:: haskell

    import Distribution.Simple
    main = defaultMain

For build type ``Configure`` (see the section on `system-dependent parameters`_ below), the contents of ``Setup.hs`` must be::

    import Distribution.Simple
    main = defaultMainWithHooks autoconfUserHooks

For build type ``Make`` (see the section on `more complex packages`_ below), the contents of ``Setup.hs`` must be::

    import Distribution.Make
    main = defaultMain

For build type ``Custom``, the file ``Setup.hs`` can be customized, and will be used both by ``cabal`` and other tools.

For most packages, the build type ``Simple`` is sufficient.

.. pkg-field:: license: identifier

The type of license under which this package is distributed. License names are the constants of the `License <../release/cabal-latest/doc/API/Cabal/Distribution-License.html#t:License>`__ type.

.. pkg-field:: license-file: filename
.. pkg-field:: license-files: filename list

The name of a file(s) containing the precise copyright license for this package. The license file(s) will be installed with the package.
If you have multiple license files then use the :pkg-field:`license-files` field instead of (or in addition to) the :pkg-field:`license-file` field.

.. pkg-field:: copyright: freeform

The content of a copyright notice, typically the name of the holder of the copyright on the package and the year(s) from which copyright is claimed.

.. pkg-field:: author: freeform

The original author of the package. Remember that ``.cabal`` files are Unicode, using the UTF-8 encoding.

.. pkg-field:: maintainer: address

The current maintainer or maintainers of the package. This is an e-mail address to which users should send bug reports, feature requests and patches.

.. pkg-field:: stability: freeform

The stability level of the package, e.g. ``alpha``, ``experimental``, ``provisional``, ``stable``.

.. pkg-field:: homepage: URL

The package homepage.

.. pkg-field:: bug-reports: URL

The URL where users should direct bug reports.

.. pkg-field:: package-url: URL

The location of a source bundle for the package. The distribution should be a Cabal package.

.. pkg-field:: synopsis: freeform

A very short description of the package, for use in a table of packages. This is your headline, so keep it short (one line) but as informative as possible. Save space by not including the package name or saying it's written in Haskell.

.. pkg-field:: description: freeform

Description of the package. This may be several paragraphs, and should be aimed at a Haskell programmer who has never heard of your package before. For library packages, this field is used as prologue text by :ref:`setup-haddock` and thus may contain the same markup as Haddock_ documentation comments.

.. pkg-field:: category: freeform

A classification category for future use by the package catalogue Hackage_. These categories have not yet been specified, but the upper levels of the module hierarchy make a good start.

.. pkg-field:: tested-with: compiler list

A list of compilers and versions against which the package has been tested (or at least built).

.. pkg-field:: data-files: filename list

A list of files to be installed for run-time use by the package (see the section on `accessing data files from package code`_).

.. pkg-field:: data-dir: directory

The directory where Cabal looks for data files to install, relative to the source directory. By default, Cabal will look in the source directory itself.

.. pkg-field:: extra-source-files: filename list

A list of additional files to be included in source distributions built with :ref:`setup-sdist`. As with :pkg-field:`data-files` it can use a limited form of ``*`` wildcards in file names.

.. pkg-field:: extra-doc-files: filename list

A list of additional files to be included in source distributions, and also copied to the html directory when Haddock documentation is generated. As with :pkg-field:`data-files` it can use a limited form of ``*`` wildcards in file names.

.. pkg-field:: extra-tmp-files: filename list

A list of additional files or directories to be removed by :ref:`setup-clean`. These would typically be additional files created by additional hooks, such as the scheme described in the section on `system-dependent parameters`_.

Library
^^^^^^^

.. pkg-section:: library

The library section should contain the following fields:

.. pkg-field:: exposed-modules: identifier list

:required: if this package contains a library

A list of modules added by this package.

.. pkg-field:: reexported-modules: exportlist

Supported only in GHC 7.10 and later. A list of modules to *reexport* from this package. The syntax of this field is ``orig-pkg:Name as NewName`` to reexport module ``Name`` from ``orig-pkg`` with the new name ``NewName``.
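For orientation, a minimal library section pulling these fields together with a couple of the build information fields described below might look like this (the module and package names are illustrative)::

    library
      exposed-modules: Proglet
      other-modules: Proglet.Internal
      hs-source-dirs: src
      build-depends: base >= 4 && < 5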
The library section may also contain build information fields (see the section on `build information`_).

Starting with Cabal 1.25, a package can also define *internal libraries*: named libraries that other components of the same package can :pkg-field:`build-depends` upon. Then your Cabal file might look something like this::

    name: foo
    version: 1.0
    license: BSD3
    cabal-version: >= 1.23
    build-type: Simple

    library foo-internal
      exposed-modules: Foo.Internal
      build-depends: base

    library
      exposed-modules: Foo.Public
      build-depends: foo-internal, base

    test-suite test-foo
      type: exitcode-stdio-1.0
      main-is: test-foo.hs
      build-depends: foo-internal, base

Internal libraries are also useful for packages that define multiple executables, but do not define a publicly accessible library. Internal libraries are only visible internally in the package (so they can only be added to the :pkg-field:`build-depends` of same-package libraries, executables, test suites, etc.). Internal libraries locally shadow any packages which have the same name (so don't name an internal library with the same name as an external dependency).

Opening an interpreter session
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Freezing dependency versions
""""""""""""""""""""""""""""

Generating dependency version bounds
""""""""""""""""""""""""""""""""""""

Cabal also has the ability to suggest dependency version bounds that conform to the `Package Versioning Policy`_, which is a recommended versioning system for publicly released Cabal packages. This is done by running the ``gen-bounds`` command::

    $ cabal gen-bounds

For example, given the following dependencies specified in :pkg-field:`build-depends`::

    build-depends:
      foo == 0.5.2
      bar == 1.1

``gen-bounds`` will suggest changing them to the following::

    build-depends:
      foo >= 0.5.2 && < 0.6
      bar >= 1.1 && < 1.2

Executables
^^^^^^^^^^^

.. pkg-section:: executable

The executable section may contain build information fields (see the section on `build information`_).

.. pkg-field:: main-is: filename (required)

The name of the ``.hs`` or ``.lhs`` file containing the ``Main`` module. Note that it is the ``.hs`` filename that must be listed, even if that file is generated using a preprocessor. The source file must be relative to one of the directories listed in :pkg-field:`hs-source-dirs`.

Running executables
"""""""""""""""""""

Test suites
^^^^^^^^^^^

.. pkg-section:: test-suite

The test suite section may contain build information fields (see the section on `build information`_).

.. pkg-field:: type: interface (required)

The interface type and version of the test suite. Cabal supports two test suite interfaces, called ``exitcode-stdio-1.0`` and ``detailed-0.9``.

.. pkg-field:: main-is: filename

:name: test-main-is
:required: ``exitcode-stdio-1.0``
:disallowed: ``detailed-0.9``

The name of the ``.hs`` or ``.lhs`` file containing the ``Main`` module, relative to one of the directories listed in :pkg-field:`hs-source-dirs`. This field is analogous to the ``main-is`` field of an executable section; test suites using the ``detailed-0.9`` interface use the :pkg-field:`test-module` field instead.

.. pkg-field:: test-module: identifier

:required: ``detailed-0.9``
:disallowed: ``exitcode-stdio-1.0``

The module exporting the ``tests`` symbol.

Example: Package using ``exitcode-stdio-1.0`` interface
"""""""""""""""""""""""""""""""""""""""""""""""""""""""

The example package description and executable source file below demonstrate the use of the ``exitcode-stdio-1.0`` interface.

.. code-block:: cabal
   :caption: foo.cabal

    Name: foo
    Version: 1.0
    License: BSD3
    Cabal-Version: >= 1.9.2
    Build-Type: Simple

    Test-Suite test-foo
        type: exitcode-stdio-1.0
        main-is: test-foo.hs
        build-depends: base

.. code-block:: haskell
   :caption: test-foo.hs

    module Main where

    import System.Exit (exitFailure)

    main = do
        putStrLn "This test always fails!"
        exitFailure

Example: Package using ``detailed-0.9`` interface
"""""""""""""""""""""""""""""""""""""""""""""""""
.. code-block:: cabal
   :caption: bar.cabal

    Name: bar
    Version: 1.0
    License: BSD3
    Cabal-Version: >= 1.9.2
    Build-Type: Simple

    Test-Suite test-bar
        type: detailed-0.9
        test-module: Bar
        build-depends: base, Cabal >= 1.9.2

.. code-block:: haskell
   :caption: Bar.hs

    module Bar ( tests ) where

    import Distribution.TestSuite

    tests :: IO [Test]
    tests = return [ Test succeeds, Test fails ]
      where
        succeeds = TestInstance
            { run = return $ Finished Pass
            , name = "succeeds"
            , tags = []
            , options = []
            , setOption = \_ _ -> Right succeeds
            }
        fails = TestInstance
            { run = return $ Finished $ Fail "Always fails!"
            , name = "fails"
            , tags = []
            , options = []
            , setOption = \_ _ -> Right fails
            }

Running test suites
"""""""""""""""""""

You can have Cabal run your test suites using its built-in test runner::

    $ cabal configure --enable-tests
    $ cabal build
    $ cabal test

See the output of ``cabal help test`` for a list of options you can pass to ``cabal test``.

Benchmarks
^^^^^^^^^^

.. pkg-section:: benchmark

.. pkg-field:: type: interface (required)

The interface type and version of the benchmark.

.. pkg-field:: main-is: filename

:required: ``exitcode-stdio-1.0``

The name of the ``.hs`` or ``.lhs`` file containing the ``Main`` module, relative to one of the directories listed in :pkg-field:`hs-source-dirs`. This field is analogous to the ``main-is`` field of an executable section.

Example: Package using ``exitcode-stdio-1.0`` interface
"""""""""""""""""""""""""""""""""""""""""""""""""""""""

.. code-block:: cabal
   :caption: foo-bench.cabal

    Name: foo
    Version: 1.0
    License: BSD3
    Cabal-Version: >= 1.9.2
    Build-Type: Simple

    Benchmark bench-foo
        type: exitcode-stdio-1.0
        main-is: bench-foo.hs
        build-depends: base, time

Running benchmarks
""""""""""""""""""

You can have Cabal run your benchmark using its built-in benchmark runner::

    $ cabal configure --enable-benchmarks
    $ cabal build
    $ cabal bench

See the output of ``cabal help bench`` for a list of options you can pass to ``cabal bench``.

Build information
^^^^^^^^^^^^^^^^^

.. pkg-section:: build

The following fields may be optionally present in a library, executable, test suite or benchmark section, and give information for the building of the corresponding library or executable. See also the sections on `system-dependent parameters`_ and `configurations`_ for a way to supply system-dependent values for these fields.

.. pkg-field:: build-depends: package list

A list of packages, possibly annotated with version ranges, needed to build this component.

Starting with Cabal 2.0, there's new syntactic sugar to support PVP_-style major upper bounds conveniently; it is inspired by similar syntactic sugar found in other language ecosystems, where it's often called the "Caret" operator::

    build-depends:
      foo ^>= 1.2.3.4,
      bar ^>= 1

The declaration above is exactly equivalent to::

    build-depends:
      foo >= 1.2.3.4 && < 1.3,
      bar >= 1 && < 1.1

.. Note::

   Prior to Cabal 1.8, ``build-depends`` specified in each section were global to all sections. This was unintentional, but some packages were written to depend on it, so if you need your :pkg-field:`build-depends` to be local to each section, you must specify at least ``Cabal-Version: >= 1.8`` in your ``.cabal`` file.

.. Note::

   Cabal 1.20 experimentally supported module thinning and renaming in ``build-depends``; however, this support has since been removed and should not be used.

.. pkg-field:: other-modules: identifier list

A list of modules used by the component but not exposed to users. Every module in the package must be listed in one of the :pkg-field:`other-modules`, :pkg-field:`library:exposed-modules` or :pkg-field:`executable:main-is` fields.

.. pkg-field:: hs-source-dirs: directory list

:default: ``.``

Root directories for the module hierarchy. For backwards compatibility, the old variant ``hs-source-dir`` is also recognized.

.. pkg-field:: default-extensions: identifier list

A list of Haskell extensions used by every module. These determine corresponding compiler options enabled for all files.
Extension names are the constructors of the `Extension <../release/cabal-latest/doc/API/Cabal/Language-Haskell-Extension.html#t:Extension>`__ type. For example, ``CPP`` specifies that Haskell source files are to be preprocessed with a C preprocessor.

.. pkg-field:: other-extensions: identifier list

A list of Haskell extensions used by some (but not necessarily all) modules. From GHC version 6.6 onward, these may be specified by placing a ``LANGUAGE`` pragma in the source files affected, e.g.::

    {-# LANGUAGE CPP, MultiParamTypeClasses #-}

In Cabal-1.24 the dependency solver will use this and :pkg-field:`default-extensions` information. Cabal prior to 1.24 will abort compilation if the current compiler doesn't provide the extensions.

If you use some extensions conditionally, using CPP or conditional module lists, it is good to replicate the condition in :pkg-field:`other-extensions` declarations::

    other-extensions: CPP
    if impl(ghc >= 7.5)
      other-extensions: PolyKinds

You could also omit the conditionally used extensions, as they are for information only, but it is recommended to replicate them in :pkg-field:`other-extensions` declarations.

.. pkg-field:: extensions: identifier list

:deprecated: Deprecated in favor of :pkg-field:`default-extensions`.

.. pkg-field:: build-tools: program list

A list of programs, possibly annotated with versions, needed to build this package, e.g. ``c2hs >= 0.15, cpphs``. If no version constraint is specified, any version is assumed to be acceptable. ``build-tools`` can refer to locally defined executables, in which case Cabal will make sure that executable is built first and add it to the PATH upon invocations to the compiler.

.. pkg-field:: buildable: boolean

:default: ``True``

Is the component buildable? Like some of the other fields below, this field is more useful with the slightly more elaborate form of the simple build infrastructure described in the section on `system-dependent parameters`_.

.. pkg-field:: ghc-options: token list

Additional options for GHC. You can often achieve the same effect using the :pkg-field:`extensions` field, which is preferred. Options required only by one module may be specified by placing an ``OPTIONS_GHC`` pragma in the source file affected. As with many other fields, whitespace can be escaped by using Haskell string syntax. Example: ``ghc-options: -Wcompat "-with-rtsopts=-T -I1" -Wall``.

.. pkg-field:: ghc-prof-options: token list

Additional options for GHC when the package is built with profiling enabled.

Note that as of Cabal-1.24, the default profiling detail level defaults to ``exported-functions`` for libraries and ``toplevel-functions`` for executables. For GHC these correspond to the flags ``-fprof-auto-exported`` and ``-fprof-auto-top``. Prior to Cabal-1.24 the level defaulted to ``none``. These levels can be adjusted by the person building the package with the ``--profiling-detail`` and ``--library-profiling-detail`` flags. To override the detail level from within the :pkg-field:`ghc-prof-options` field, use ``-fno-prof-auto`` or one of the other ``-fprof-auto*`` flags.

.. pkg-field:: ghc-shared-options: token list

Additional options for GHC when the package is built as a shared library. The options specified via this field are combined with the ones specified via :pkg-field:`ghc-options`, and are passed to GHC during both the compile and link phases.

.. pkg-field:: includes: filename list

A list of header files to be included in any compilations via C; these files are searched for in the directories listed in :pkg-field:`include-dirs`. These files typically contain function prototypes for foreign imports used by the package.
This is in contrast to :pkg-field:`install-includes`, which lists header files that are intended to be exposed to other packages that transitively depend on this library.

.. pkg-field:: install-includes: filename list

A list of header files from this package to be installed into ``$libdir/includes`` when the package is installed. Files listed in :pkg-field:`install-includes` should be found relative to the top of the source tree or relative to one of the directories listed in :pkg-field:`include-dirs`.

:pkg-field:`install-includes` is typically used to name header files that contain prototypes for foreign imports used in Haskell code in this package, for which the C implementations are also provided with the package. For example, here is a ``.cabal`` fragment for a hypothetical ``bindings-clib`` package that bundles the C source code for ``clib``::

    include-dirs: cbits
    c-sources: clib.c
    install-includes: clib.h

Now any package that depends (directly or transitively) on the ``bindings-clib`` library can use ``clib.h``.

Note that in order for files listed in :pkg-field:`install-includes` to be usable when compiling the package itself, they need to be listed in the :pkg-field:`includes` field as well.

.. pkg-field:: include-dirs: directory list

A list of directories to search for header files, when preprocessing with ``c2hs``, ``hsc2hs``, ``cpphs`` or the C preprocessor, and also
https://gitlab.haskell.org/ghc/packages/Cabal/-/blame/55b54c51f1ab63fa15bea0b51a741926c2203180/Cabal/doc/developing-packages.rst
CC-MAIN-2022-27
refinedweb
3,330
50.73
In the last few years, it has been very common to work on an n-layer web application where the front end is an ASP.NET website that talks with the layer below, where, in an SOA context, one layer is usually represented by either web services or WCF exposing all the functionality with a standard completely decoupled from the consumer. In the last few years, the need to query services asynchronously has become key to a well-done web application, to avoid refreshing the entire page on HTTP requests, especially when we only need to refresh a portion of the web page (partial rendering). jQuery is becoming the standard for this job, allowing the front end to post and get data using the JSON format.

Unfortunately, in my experience, I have seen many bad things done to achieve this behavior: first of all, web services that return HTML in response to a jQuery call to do partial rendering. From my point of view, a web service/WCF must always return either POCO objects or JSON (in the case we are querying it using jQuery), but never HTML. Some other solutions are based on some kind of client-side "template" that is filled up by a client script using jQuery or JavaScript. With this solution, on any change to the front end we have to change the template and probably the logic that binds it to the JSON response.

In this article, I'm going to show a good way to do partial rendering using a custom control that can be used in the web page without dealing with jQuery at all!

First of all, we have to create a web site project. After that, we have to add the JqueryController to the toolbox by clicking on "Choose Items" and selecting JqueryController.dll.

Basically, this control inherits from the Panel web control and works as a container for the portion of the page we want to refresh (more or less the same behaviour as the AJAX UpdatePanel). Each JqueryController must contain at most one web user control (.ascx) that represents the portion of the page that is going to be refreshed. If you need to refresh more than one section, you just add more than one JqueryController, each one with its own web user control.

The key concept of this solution is that instead of calling a web service with jQuery and binding the JSON response to the interface, it does a post to a web page, passing any parameters we need on the server side to satisfy the request. On the page (which must inherit from PageBase, explained later), you find an already-populated collection with all the parameters you posted. Thus you have everything you need on the server side to satisfy the request. Then you just need to call the Refresh method on the JqueryController to refresh the interface on the client side, without dealing with jQuery or JavaScript! If you need to, you can also provide a JavaScript callback function that is called after the refresh, bringing back any parameters you may need to initialize the web form again, or for any other reason.

Below, you can see the sequence diagram of that process. As the diagram shows, the process is very similar to a classic ASP.NET postback, but instead of rendering back the whole page, it calls a method on JqueryControl that returns just the portion of the HTML that has to be refreshed.

In this example, we have a web page that contains a GridView bound to a data source.
The user wants to add one record to this data source (an employee in this case) and then we have to refresh the GridView to update the result. The web form should look like the one below (screenshot not reproduced here).

All the processing starts by calling a special JavaScript function. That function is shown below:

function JqueryPost() {
    var argv = JqueryPost.arguments;
    var argc = argv.length;
    var strParms = new String()
    page = argv[0];
    for (var i = 1; i < argc; i++) {
        strParms += argv[i] + '&'
    }
    strParms = strParms.substr(0, strParms.length - 1);
    $.post(page, { __parameters: strParms },
        function(data) {
            jQuery.globalEval(data);
        });
}

This is the JavaScript call we have to add to the control we want to use as a trigger. In this case, it is the button click of an asp:Button. By requirement, if we use a server control, like in this example, we have to add "return false" after the call to avoid a postback, for example (the IDs and parameter values here are illustrative):

<asp:Button ID="btnAdd" runat="server" Text="Add employee"
            OnClientClick="JqueryPost('Default.aspx', 'action=add'); return false;" />

By requirement, the JqueryPost function can accept any parameters we could need on the server side to satisfy the request (for example, the selected item of a drop down list). The first parameter must be the page we want to post the data to. As you can see, this function uses the jQuery post method, which posts the web form to the page with all the parameters we have included in the call. Then the function executes the jQuery code that comes back from the request, which contains the jQuery code to inject the HTML we want to refresh.

On the server side, the PageBase is going to do all the work. There are two main points I would like to focus on: the function that creates the HTML, and the one that returns the jQuery to inject that HTML. Below you can see the function that creates the HTML to be rendered:

public static string RenderUserControl(Control crtl)
{
    Control control = null;
    // marker strings delimiting the control's output; their original values were lost here
    const string STR_BeginRenderControlBlock = "";
    const string STR_EndRenderControlBlock = "";
    StringWriter tw = new StringWriter();
    Page page = new Page();
    page.EnableViewState = false;
    HtmlForm form = new HtmlForm();
    form.ID = "__temporanyForm";
    page.Controls.Add(form);
    form.Controls.Add(new LiteralControl(STR_BeginRenderControlBlock));
    form.Controls.Add(crtl);
    form.Controls.Add(new LiteralControl(STR_EndRenderControlBlock));
    HttpContext.Current.Server.Execute(page, tw, true);
    string Html = tw.ToString();
    //TO DO: clean the response!!!!!
    int start = Html.IndexOf(STR_BeginRenderControlBlock);
    int end = Html.Length - start;
    Html = Html.Substring(start, end);
    return Html;
}

This function takes as input the user control that we want to render back (in this case, the user control that contains the GridView) and generates the HTML at run-time. First, it dynamically creates a new web page instance, then adds the control passed as input to that page. After that, using Server.Execute, we execute the page and get its HTML back; since the page contains just the control we want to refresh, it returns just that HTML.

At this point, we could think that everything is done, but we still miss the jQuery code that injects that HTML into the page, and where (in which container) it has to be injected. The function to do that is shown below:

public void AddToRender(string panelId, string htmlToAdd)
{
    this.ResponseToRender.Append("$('#" + panelId + "').html('" + htmlToAdd + "');");
}

After this call, the page returns just the jQuery code that injects the HTML on the page.
On the client side, that code is executed by the function we called at the beginning of the process (JqueryPost), and the page is refreshed.

All this stuff is encapsulated into the JqueryController and the PageBase. All we need is to call the JqueryController.Refresh() method to have partial rendering! That method also accepts a callback: a list of parameters and the name of a JavaScript function we may need after the render to do something more on the client side (for example, select an item on a drop down list).

Now let's go through the page's code to understand how we can use that control. As we said at the beginning, everything starts by handling a client event (in this case, a button click) that calls the JqueryPost() function. This call does a post to the server, so we can move our focus to the page.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using JqueryController;

public partial class _Default : PageBase
{
    Employee emp = new Employee();

    protected void Page_Load(object sender, EventArgs e)
    {
        if (this.IsPartialRendering)
        {
            if (this.PostParameter.ContainsKey("action"))
            {
                string action = this.PostParameter["action"];
                switch (action)
                {
                    case "add":
                        emp.AddOneEmployee();
                        break;
                    case "remove":
                        emp.RemoveTheLastEmployee();
                        break;
                }
                BindDataGrid(emp.GetAllEmployee());
            }
            if (this.PostParameter.ContainsKey("selectedId"))
            {
                int selectedId = int.Parse(this.PostParameter["selectedId"]);
                SelectEmployees(selectedId);
            }
        }
    }

    private void BindDataGrid(IEnumerable<Employee> datasource)
    {
        GridUserControl1.GridDetails.DataSource = datasource;
        GridUserControl1.GridDetails.DataBind();
        JqueryControllerGrid.RefreshPanel("parameterBack");
    }

    private void SelectEmployees(int selectedId)
    {
        switch (selectedId)
        {
            case 0:
                BindDataGrid(emp.GetAllEmployee());
                break;
            case 1:
                BindDataGrid(emp.GetAllEmployee().Where(x => x.ID > 30));
                break;
            case 2:
                BindDataGrid(emp.GetAllEmployee().Where(x => x.ID < 30));
                break;
        }
    }
}

As you can see, in the page load we have a new property called IsPartialRendering that lets the developer know whether the post was raised from a JqueryControl. After checking that this is a partial-rendering post, we can find out what kind of operation to perform by reading the parameters we sent along with the HTTP post. To do that, we access the PostParameter collection. In this example, we post a parameter named "action" that contains the action we want to perform (in this case, add or remove an employee). After that, we are able to call the right function to perform the required action.

In this example, we just call a method on the page that inserts or deletes the employee, but in a real context we could call WCF or whatever there is below the front end in our architecture. The good point is that if our front end is based on MVC or MVP, we can carry on using those patterns, because we are just posting data to the page as in a classic postback; we don't bypass any layer, as we could if we called, for example, a service layer directly without passing through all the layers between the front end and the service layer itself.

Anyway, let's focus on the example. After knowing which action we have to perform, we call the function that adds or removes an employee. At this point, all we need to do is bind the datagrid again to the data source and call JqueryControl.Refresh() to update the interface.
In this example, we also want to call a callback function on the client side, passing the string "parameterBack". We can see that after that call, the datagrid has been refreshed and the callback function has been called, showing the parameter we sent back. Below, you can see the callback function that is contained on the page:

function showParams() {
    alert(showParams.arguments[0]);
}

I think this is a good way to do partial rendering without changing anything in our architecture, especially when we use patterns such as MVP. The scope of this article is just to show an easy way to do it without dealing with client page templates or complex jQuery calls. It's also easier to maintain, because you can use a user-control approach to design your interface instead of building it dynamically on the client side. Probably this is not the best way to do it, but for sure it's quite elegant and easy.
http://www.codeproject.com/Articles/83040/Partial-Rendering-Control-using-JQuery
CC-MAIN-2015-35
refinedweb
1,926
61.97
Marlon Pierce, Bryan Carpenter, Geoffrey Fox
Community Grids Lab, Indiana University
mpierce@cs.indiana.edu

See primarily the Primer and Messaging Framework links. The actual SOAP schema is also available online; it is pretty small, as these things go.

Keep It Simple

Messages may require advanced features like security, reliability, conversational state, etc. KISS: don't design these, but do design a place where this sort of advanced information can go. Tell the message originator if something goes wrong. Define data encodings: that is, you need to tell the message recipient the types of each piece of data. SOAP Encoding: rules for encoding data. Focus on SOAP for RPC and SOAP Messaging.

SOAP Basics

SOAP is often thought of as a protocol extension for doing Remote Procedure Calls (RPC) over HTTP. This is how we will use it. This is not completely accurate: SOAP is an XML message format for exchanging structured, typed data. It may be used for RPC in client-server applications. It may be used to send XML documents. It is also suitable for messaging systems (like JMS) that follow one-to-many (or publish-subscribe) models. SOAP is not a transport protocol. You must attach your message to a transport mechanism like HTTP.

The first slide is an example message that might be sent from a client to the echo service. The second slide is an example response. I have highlighted the actual message payload.

SOAP Request (the namespace attributes were garbled here; the values below are the standard SOAP 1.1 ones):

<?xml version="1.0" ?>
<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soapenv:Body>
    <!-- xmlns:ns1 (the service namespace) was elided -->
    <ns1:echo soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <in0 xsi:type="xsd:string">Hello World</in0>
    </ns1:echo>
  </soapenv:Body>
</soapenv:Envelope>

SOAP Response

<?xml version="1.0" ?>
<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soapenv:Body>
    <ns1:echoResponse soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <echoReturn xsi:type="xsd:string">Hello World</echoReturn>
    </ns1:echoResponse>
  </soapenv:Body>
</soapenv:Envelope>

SOAP Structure

SOAP structure is very simple: 0 or more header elements, 1 body element, and an Envelope that wraps it all. The body contains the XML payload. Headers are structured the same way. They can contain additional payloads of metadata: security information, quality of service, etc.

Message Payload (the Envelope's schema type; the attribute details follow the SOAP schema):

<xs:complexType name="Envelope">
  <xs:sequence>
    <xs:element ref="tns:Header" minOccurs="0"/>
    <xs:element ref="tns:Body" minOccurs="1"/>
  </xs:sequence>
  <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>

Lax: if the item, or any items among its children if it's an element information item, has a uniquely determined declaration available, it must be valid with respect to that definition. That is, validate where you can, don't worry when you can't.

SOAP Envelope

The Envelope is the root container of the SOAP message. Things to put in the Envelope: the namespaces you will need. The SOAP envelope namespace is required, so that the recipient knows it has gotten a SOAP message. Others as necessary.

SOAP Headers

SOAP Body elements contain the primary message contents. Headers are really just extension points where you can include elements from other namespaces, i.e., headers can contain arbitrary XML. Headers may be processed independently of the body. Headers may optionally define encodingStyle. Headers may optionally have a role attribute. Header entries may optionally have a mustUnderstand attribute. mustUnderstand="1" means the message recipient must process the header element. If mustUnderstand="0" or is missing, the header element is optional.

Session State Support: many services require several steps and so will require maintenance of session state, equivalent to cookies in HTTP. Put the session identifier in the header.

Header Processing

SOAP messages are allowed to pass through many intermediaries before reaching their destination.
An intermediary is some unspecified routing application. The final destination processes the body of the message. This allows an intermediary application to determine if it can process the body, provide the required security, session, or reliability requirements, etc.

Header Roles

SOAP nodes may be assigned role designations. SOAP headers then specify which role or roles should process them. Standard SOAP roles:

None: SOAP nodes MUST NOT act in this role.
Next: each SOAP intermediary and the ultimate SOAP receiver MUST act in this role.
UltimateReceiver: the ultimate receiver MUST act in this role.

SOAP Body

Body entries are really just placeholders for XML from some other namespace. The body contains the XML message that you are transmitting. It may also define encodingStyle, just as the Envelope does. The message format is not specified by SOAP. The <Body></Body> tag pairs are just a way to notify the recipient that the actual XML message is contained therein. The recipient decides what to do with the message.

xsi:type is used to specify that the <in0> element takes a string value. This is data encoding. Data encoding rules will be examined in the next lectures.

SOAP Faults

SOAP provides its own fault communication mechanism. These may be in addition to HTTP errors when we use SOAP over HTTP: for example, a message has a mustUnderstand header that can't be understood, or you failed to meet some required quality of service specified by a header.

The Fault element's schema type (attribute details follow the SOAP schema):

<xs:complexType name="Fault">
  <xs:sequence>
    <xs:element name="Code" type="tns:faultcode"/>
    <xs:element name="Reason" type="tns:faultreason"/>
  </xs:sequence>
</xs:complexType>

Enumerating Faults

Fault codes must contain one of the standard fault messages.

DataEncodingUnknown: you sent data encoded in some format that I don't understand.
MustUnderstand: I don't support this header.
Receiver: the message was correct, but the receiver could not process it for some reason.
Sender: the message was incorrectly formatted, or lacked required additional information (for example, the receiver couldn't authenticate you).

<xs:simpleType name="faultcodeEnum">
  <xs:restriction base="xs:QName">
    <xs:enumeration value="tns:DataEncodingUnknown"/>
    <xs:enumeration value="tns:MustUnderstand"/>
    <xs:enumeration value="tns:Receiver"/>
    <xs:enumeration value="tns:Sender"/>
    <xs:enumeration value="tns:VersionMismatch"/>
  </xs:restriction>
</xs:simpleType>

Fault Subcodes

Fault codes may contain subcodes that refine the message. Unlike codes, subcodes don't have standard values. Instead, they can take any QName value. This is an extensibility mechanism.

Fault Reasons

This is intended to provide human-readable reasons for the fault. The reason is just a simple string determined by the implementer. For Axis, this is the Java exception name. At least, for my version of Axis.

<xs:complexType name="faultreason">
  <xs:sequence>
    <xs:element name="Text" type="tns:reasontext" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>

<xs:complexType name="reasontext">
  <xs:simpleContent>
    <xs:extension base="xs:string">
      <xs:attribute ref="xml:lang" use="required"/>
    </xs:extension>
  </xs:simpleContent>
</xs:complexType>

(Diagram: Source → Node 1, which checks AuthZ → Node 2 → Destination.)

Fault Detail

A fault detail is just an extension element. It carries application-specific information. It can contain any number of elements of any type. This is intended for the SOAP implementer to put in specific information. You can define your own SOAP fault detail schemas specific to your application. (A complete fault message is sketched at the end of this lecture.)

<xs:complexType name="detail">
  <xs:sequence>
    <xs:any namespace="##any" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>

Next Time

This lecture has examined the basic SOAP message format. We have not described the following: the rules for encoding transmitted data (specifically, how do I encode XML for RPC?), and how this connects to WSDL. I also want to give a specific example of extending SOAP to support reliable messaging.
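As promised, here is a complete fault message putting the pieces above together (a hand-written illustration using the standard SOAP 1.2 envelope namespace; the subcode, detail element, and their example namespace are made up):

<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
              xmlns:ex="http://example.org/faults">
  <env:Body>
    <env:Fault>
      <env:Code>
        <env:Value>env:Sender</env:Value>
        <env:Subcode>
          <env:Value>ex:BadArgument</env:Value>  <!-- application-defined QName -->
        </env:Subcode>
      </env:Code>
      <env:Reason>
        <env:Text xml:lang="en">Message was incorrectly formatted</env:Text>
      </env:Reason>
      <env:Detail>
        <ex:echoFault>
          <ex:message>in0 must be a string</ex:message>
        </ex:echoFault>
      </env:Detail>
    </env:Fault>
  </env:Body>
</env:Envelope>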
https://www.scribd.com/presentation/158477324/SOAP1-ppt
CC-MAIN-2019-43
refinedweb
1,087
50.43
OMG...your weblog really helped me out. I was having big problems as I was using DotNetNuke in my root web directory, but the settings in web.config were being inherited by the child directories that I created other custom applications for underneath the root. Using the remove tags worked very well in ASP.NET 2.0. Thanks.

Likewise, your blog really helped out. Was having the same problem, now I know why! Thanks for the article. I was setting up a new site and had no idea how to get rid of the inheritance - your article worked wonders. Awesome! A real life saver!

Kicked it: if a sub-application requires its own forms authentication:

root/ = 2.0 application requiring authentication
root/subapplication/ = 2.0 application requiring separate authentication

The subapplication will not seem to read the <authentication> section of its own web.config. In fact it won't redirect to the root's login page either, as authorization is required. I have several upcoming apps which would require this, but cannot understand how to get a web.config in a sub-application to work.

Thanks for this great post. It helped me to successfully integrate a sub-application into Community Server! One useful addition may be this. If the root web.config has a pages setting like this:

<pages pageBaseType="SomeType, SomeAssembly" />

and that assembly is in the root's bin directory, the best way to 'undo' this in the subapplication is to add the pages tag but replace the pageBaseType with the default ASP.NET page, like so:

<pages pageBaseType="System.Web.UI.Page" />

Thanks again!

Thanks for the clear explanation, this can be very confusing the first time you plonk a website in a subdirectory.

You can also use <clear /> inside <httpModules />. And to clear <add verb="" path="" /> settings within <httpHandlers>, use <remove verb="" path="" />.

The following works for me: put the <add> in the root application. The sub app does not require any modifications or <remove> tags this way...

<add verb="GET" path="myhandler.axd" type="Module.Classname, Module" validate="false"/>

I have been reading many blogs on the issue of web.config and inheritance, and yours seems to clear up a few things for me, but I still have a problem. I have a .NET 2 app in the root web, and in virtual folders/web a .NET 1.1 app (Reporting Services). Below is my error:

Parser Error Message: Child nodes are not allowed.

<system.web>
  <pages>
    <controls> <- THIS IS REPORTED AS THE ERROR
      <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>

If you can help clear this up, that would be GREAT!

Thanks - very good post. It also helped me to remove the assemblies loaded at the parent application, like:

<compilation debug="true" defaultLanguage="c#">
  <assemblies>
    <remove assembly="S

Pingback from MS Dynamics CRM activity preview, queue view customizing by http module « stefan-scheller.com
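Pulling the advice from these comments together, a child application's web.config that undoes inherited registrations might look like this (a sketch; the handler path and the pageBaseType reset are illustrative and must match whatever the parent actually registers):

<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpModules>
      <clear />  <!-- drop all modules inherited from the parent -->
    </httpModules>
    <httpHandlers>
      <remove verb="GET" path="myhandler.axd" />  <!-- undo one inherited handler -->
    </httpHandlers>
    <pages pageBaseType="System.Web.UI.Page" />  <!-- reset an inherited pageBaseType -->
  </system.web>
</configuration>

Note that <clear /> removes everything, including modules the child application may still need (session, authentication), so a targeted <remove> of specific entries is often the safer choice.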
http://weblogs.asp.net/owscott/archive/2005/12/29/ASP.NET-files-and-inheritance.aspx
crawl-002
refinedweb
524
58.79
Hey All,

I am developing an interface connecting EC with OnBoarding. We are trying to generate multiple EmpIds using the OData API's generateNextPersonId function, store them in an ArrayList, and update multiple employees on the OnBoarding side. I have generated the Ids and am able to store them in an ArrayList using Groovy scripting. Now I want to store that list in the Properties section so that I can access it later on. How will I be able to do that? I tried creating a global variable in the Properties section and storing it there; that didn't work. I can create Global variables, Local variables, Constants, Expressions, Properties, Headers, XPath. It's weird that I cannot set the type for anything except XPath: the Type field in the Properties section gets greyed out when I select global variable. I thought I could set the type as java.util.ArrayList. Any input will be highly appreciated.

Hello Kriba,

You can set the property containing the ArrayList in the script itself.

Script:

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    //Body
    def body = message.getBody();

    List<String> list = new ArrayList<String>();
    list.add("shri");
    list.add("Prasad");
    list.add("Praveen");

    def map = message.getHeaders();
    map = message.getProperties();
    message.setProperty("P_ArrayData", list);

    message.setBody(body);
    return message;
}

Write Variable:

Content Modifier to retrieve the value stored (just for testing purposes):

Unfortunately it's not possible to retrieve individual elements of an ArrayList written as a Property (in the earlier steps) inside a Content Modifier; you have to write a script again to get the elements one by one. But you can retrieve the complete list of elements stored in one shot (as shown in the Content Modifier, by setting the value of the variable to the Property and accessing it later).

Hope it helps.

Regards,
Sriprasad Shivaram Bhat

Worked like a charm! Thank you. I have written another script to loop the array and get the elements one by one.
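The follow-up script mentioned in the last comment isn't shown; a minimal version that reads the stored list back and walks its elements might look like this (the property name P_ArrayData matches the answer above; writing the elements into the body is just for illustration):

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    // read the ArrayList stored earlier as an exchange property
    def list = message.getProperty("P_ArrayData");

    // walk the elements one by one and build a newline-separated body
    def sb = new StringBuilder();
    for (item in list) {
        sb.append(item).append('\n');
    }

    message.setBody(sb.toString());
    return message;
}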
https://answers.sap.com/questions/367170/how-to-store-an-arraylist-in-scp-properties.html
CC-MAIN-2018-13
refinedweb
340
50.53
Pass multiple email addresses to a hidden field in a form

Discussion in 'Javascript' started by james00_c@yahoo.com, Jun 21, 2005.

Similar Threads:
- Multiple Email Addresses (TCB, Mar 21, 2006, ASP .Net; 1 reply, 586 views; last: sloan, Mar 21, 2006)
- Physical Addresses VS. Logical Addresses (namespace1, Nov 29, 2006, C++; 3 replies, 1,072 views)
- Populate Hidden field on post back and retrieve value from Hidden Field (Rick, Mar 24, 2010, ASP .Net; 3 replies, 9,638 views; last: Alexey Smirnov, Apr 13, 2010)
- ADSI/ASP: How to pass distinguishedName as hidden form field? (Bharat Suneja, Sep 10, 2004, ASP General; 0 replies, 309 views; last: Bharat Suneja, Sep 10, 2004)
- Pass hidden form field value to another form field to insert in db (GavMc, Sep 22, 2005, ASP General; 4 replies, 532 views; last: Evertjan., Sep 22, 2005)
http://www.thecodingforums.com/threads/pass-multiple-email-addresses-to-a-hidden-field-in-a-form.918759/
CC-MAIN-2016-07
refinedweb
187
71.85
Previously in my projects I always used the well-known DHT22 (AM2302) temperature/humidity sensors. But I found that this sensor is not very stable and is prone to hangs. In my case the device worked for about two weeks and then stopped responding until the power was rebooted. This is absolutely unacceptable in distant and autonomous devices. After some googling I found that I'm not alone and some people have also experienced this problem. I decided to replace these sensors with something more reliable and more accurate. My choice fell on the HTU21D from Measurement Specialties.

The HTU21D is a quite reliable and precise sensor, much newer than the DHT, and it uses the standard i2c bus instead of its own 1-wire protocol. The I2C interface was decisive, and in this article I want to describe connecting this device to the Raspberry PI in detail.

In one of my previous articles I already described interfacing with i2c/smbus devices. In the current case everything is much simpler, but some concepts are the same.

Let's check out the device datasheet first. The document says that we can send commands to trigger actions and then get the result; from the datasheet's command table: 0xE3 triggers a temperature measurement and 0xE5 a humidity measurement in hold master mode, 0xF3 and 0xF5 do the same in no hold master mode, and 0xFE performs a soft reset.

Since all the low-level I2C work is covered by the driver, we don't need to worry about clocks and start/stop sequences. Please read the MLX90614 article if you want to know the details of this communication protocol.

In the first case (hold master), the SCK line is blocked (controlled by the HTU21D sensor) during the measurement process, while in the second case the SCK line remains open for other communication while the sensor is processing the measurement. The first variant is faster: you get the result as soon as the measurement is made. In the case of the second variant, you have to poll the device with some timeout, waiting for the "done" status. Of course, very frequent requests are not a good idea because you can flood the i2c line. In my device I have multiple devices on the I2C bus, so I chose the second variant with 50 ms polling.

Connecting to the Raspberry PI

The HTU21D is supplied in a small DFN package. This is good, but may cause trouble with soldering. Fortunately it's easy to buy a breakout board with an already mounted HTU21D device and all the required extra components. Connection of this board is also very simple, and the sensor is then ready to use.

On the Raspberry PI, the i2c-1 bus is exposed on the GPIO header (GPIO2 and GPIO3); i2c-0 is available for manual soldering. In later Raspberry models both buses are available on the GPIO header.

Programming

There are many options for working with this sensor, using different libraries and programming languages. But in the current example we will use the pure Linux i2c interface in C. This is the clearest and fastest way to use our device.

To check that the HTU21D device is properly connected and working, run this command: i2cdetect -y 1 (1 means the /dev/i2c-1 device). This utility is available in the i2c-tools package. The HTU21D address is 0x40 and cannot be changed. If everything is OK and the HTU device is alone on the bus, you will see it at address 0x40 in the i2cdetect grid.

Note: if you want to connect two sensors simultaneously, the only way to do this is to connect them to the separate i2c buses available on the Raspberry board.

Now we are ready to write code. Here is the initialization of the I2C interface for the HTU21D, nothing extra:

#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <errno.h>
#include <string.h>
#include <linux/i2c-dev.h>

// open the i2c bus device
int fdev = open("/dev/i2c-1", O_RDWR);

if (fdev < 0) {
    fprintf(stderr, "Failed to open I2C interface! Error: %s\n", strerror(errno));
    return -1;
}

uint8_t i2c_addr = 0x40;

// set slave device address 0x40
if (ioctl(fdev, I2C_SLAVE, i2c_addr) < 0) {
    fprintf(stderr, "Failed to select I2C slave device!
Error: %s\n", strerror(errno));
    return -1;
}

We got a regular file descriptor in fdev, and we can send requests and read responses. In the previous article the communication protocol was a bit more complex and used smbus transactions with structures and ioctl() calls. Now all we need is just write() and read(). According to the datasheet, to trigger the temperature measurement (in blocking mode) we need to send 0xE3, so let's do this:

uint8_t buf[1];
buf[0] = 0xE3;
write(fdev, buf, 1);

That's all. Now we are ready to get the device response with read(). Because we used the 0xE3 command, we can just read on the descriptor and the call will block while the measurement is in progress. But how many bytes should we read, and what are we actually reading? Again, let's check out the device datasheet, page 11.

Measured data are transferred in two byte packages, i.e. in frames of 8-bit length where the most significant bit (MSB) is transferred first (left aligned). Each byte is followed by an acknowledge bit. Since the maximum resolution of the measurement is 14 bits, the two last least significant bits (LSBs, bits 43 and 44) are used for transmitting status information. Bit 1 of the two LSBs indicates the measurement type ('0': temperature, '1': humidity). Bit 0 is currently not assigned.

Sensor data is split into three 8-bit parts, so we need to read three bytes: data1, data2 and checksum.

// device response, 14-bit ADC value:
// first 8 bit part ACK second 8 bit part CRC
// [0 1 2 3 4 5 6 7] [8] [9 10 11 12 13 14 15 16] [17 18 19 20 21 22 23 24]
// bit 15 - measurement type ('0': temperature, '1': humidity)
// bit 16 - currently not assigned
uint8_t buf[3] = { 0 };
read(fdev, buf, 3);

Now we can combine the first and second bytes into one 16-bit value, skipping the two last least significant bits:

uint16_t sensor_data = (buf[0] << 8 | buf[1]) & 0xFFFC;

What next? Using this value we can calculate the actual temperature or humidity with the formulas from page 15 of the device datasheet:

// temperature
double sensor_tmp = sensor_data / 65536.0;
double result = -46.85 + (175.72 * sensor_tmp);
printf("Temperature: %.2f C\n", result);

// humidity
result = -6.0 + (125.0 * sensor_tmp);
printf("Humidity: %.2f %%\n", result);

A little note about the checksum. In some simple applications you can skip verification of the checksum and use the data as is. But you should always remember that sometimes you can get an error. This error may be caused by a malfunction of the sensor or by some interference on the i2c line. So it's better to use a simple algorithm to calculate and verify the crc8. You can find a lot of examples and ready-to-use functions (a small sketch also appears at the end of this article). In my "production" application below you can find such verification.

No hold master mode

This mode is preferred when you have multiple devices on your bus, since blocking the bus with one device may be a bad idea. Temperature and humidity measurements can be triggered with the 0xF3 and 0xF5 commands, but you can't just call read() and wait. That call will return immediately with invalid data in the buffer. The correct behaviour here is polling with some timeout. Typically this timeout is 50 ms. So you need to do a read() every 50 ms and check how many bytes were actually read. If this count is less than 3, try again after the timeout. The retry count can be limited to some moderate value, but typically one 50 ms timeout is enough:
No hold master mode

This mode is preferred when you have multiple devices on your bus, since blocking the bus with one device may be a bad idea. Temperature and humidity measurements can be triggered with the 0xF3 and 0xF5 commands, but here you can't just call read() and wait – the call returns immediately, with invalid data in the buffer. The correct behaviour is polling with some timeout; typically this timeout is 50 ms. So you need to call read() every 50 ms and check how many bytes were actually read. If the count is less than 3 – try again after the timeout. The retry count can be limited to some moderate value, but typically a single 50 ms timeout is enough.

    uint8_t buf[3] = { 0 };
    int counter = 0;

    while (1) {
        usleep(50000); // 50 ms
        counter++;

        if (read(fdev, buf, 3) != 3) {
            if (counter >= 5) { // give up after five attempts
                break;
            }
            continue;
        }
        break;
    }

If you are interested in the difference in reading time between the two modes – it is quite noticeable, but not critical.

No hold master:

    $ time ./read_htu21d
    temp=18.99
    humidity=33.06
    real 0m0.130s
    user 0m0.000s
    sys 0m0.000s

Hold master:

    $ time ./read_htu21d -l
    temp=19.00
    humidity=33.05
    real 0m0.087s
    user 0m0.000s
    sys 0m0.010s

Of course, this might be critical for some hard real-time applications.

Soft reset

It's recommended to perform a software reset of the sensor before any measurements. A soft reset can be done by sending the 0xFE command. After this you should wait at least 15 ms; this time is required for a correct and full startup of the device.

You can find the full source code of the HTU21D utility on my github here. Compilation and usage are pretty simple and described in the README.

P.S. There is another variant of this sensor – the SHT21 from Sensirion. This sensor has the same pinout as the HTU21D and uses the same protocol, even the same I2C address. So this code can be used with both types of sensors without any modifications.

Thanks for reading!
http://olegkutkov.me/2018/02/21/htu21d-raspberry-pi/
CC-MAIN-2019-13
refinedweb
1,431
66.94
Generating thumbnail urls is a good example. If you have a url which can generate a thumbnail of an image at a specified size, that is pretty cool. Except then you can have someone generating images at all sorts of other weird sizes by exploiting the width and height parameters (eg making 100,000 pixel wide thumbnails). Or perhaps you want to limit it to a certain set of images. You could always code into your thumbnail-generating routine which variables are valid... but this has problems. First, it makes your routine less flexible. Separating authorisation out is always a nice idea.

Another way (with HMAC, like using pywebsite.signed_url) is to generate the urls and add a hash to them made with a secret salt. This way you can be fairly sure that the url was generated by someone with access to the secret salt. This system is not as secure as using PKI, but it is a lot quicker to implement and is faster running.

One problem with using hashes is when you have to change the salt or hash scheme... then all your old urls are invalidated. Very annoying (sad panda).

Below is a little example of how you would hash-protect some object methods, so only urls which were generated by authorised people are allowed. Note that it only uses a hash with a length of 6 characters; this is to keep urls short.

    import os
    import cherrypy

    from pywebsite import signed_url, imageops

    class SignedImages(object):
        salt = 'somesecret'
        root_dir = '/tmp/'

        def thumb(self, hash, width, height, rotate, gallery, image):
            """ serves a thumbnail """
            salt = self.salt
            keys = None
            length_used = 6
            values = [width, height, rotate, gallery, image]
            if not signed_url.verify(values, salt, hash, keys,
                                     length_used=length_used):
                raise ValueError('not a valid url')
            cache_dir = os.path.join(self.root_dir, "cache/")
            path_to_galleries = os.path.join(self.root_dir, "galleries/")
            iops = imageops.ImageOps(cache_dir, path_to_galleries)
            path = iops.resize(gallery, image, width, height, rotate)
            return cherrypy.lib.static.serve_file(path)

        def gen_thumb_url(self, width, height, gallery, image):
            values = [width, height, '0', gallery, image]
            salt = self.salt
            keys = None
            length_used = 6
            hash = signed_url.sign(values, salt, keys=keys,
                                   length_used=length_used)
            # make sure everything is a string before joining
            url = "/".join(["thumb", hash] + [str(v) for v in values])
            return url

        thumb.exposed = True
        gen_thumb_url.exposed = True

See the signed_url.py on the launchpad file browser, or bzr co lp:pywebsite. Also see pywebsite.signed_url.signed_url_test for the unit tests. It is quite a simple module, and you could roll your own quite easily (with the python hmac module). Well, maybe not that easily... eg, consider the flickr exploits. This code is also probably vulnerable (eg, SHA-1 is now vulnerable to collisions), but at least it will stop basic fiddling with unauthorised urls and shouldn't have the same vulnerabilities as the flickr API used to (and the facebook API might still have).

updates: been through a number of changes... the name has changed from hash_url to signed_url.sign, signed_url.verify. It has also moved into its own sub-package (like all pywebsite modules now live in their own self-contained sub-package). Fixes for timing attack (unit tested). Uses hmac.
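If you do want to roll your own with the stdlib, the core idea fits in a few lines. This is just a sketch of the approach, not the actual pywebsite implementation – the function names mirror the ones above, and the constant-time comparison (to avoid the timing attack just mentioned) is mine:

    import hmac
    import hashlib

    def sign(values, salt, length_used=6):
        # join the values into one message and sign it with the secret salt
        msg = "/".join(str(v) for v in values)
        return hmac.new(salt, msg, hashlib.sha1).hexdigest()[:length_used]

    def verify(values, salt, signature, length_used=6):
        expected = sign(values, salt, length_used)
        if len(expected) != len(signature):
            return False
        # compare in constant time so an attacker can't use response
        # timing to guess the signature one character at a time
        result = 0
        for a, b in zip(expected, signature):
            result |= ord(a) ^ ord(b)
        return result == 0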
http://renesd.blogspot.com/2009/12/hashing-urls-for-authorisation-and.html
CC-MAIN-2015-22
refinedweb
509
68.06
Trac may send you an email when changes are made to a ticket, depending on how your notification preferences are configured. Permission groups can also be entered in the CC field, to notify all members of the group.

How to use your username to receive notification mails

To receive notification mails, you can either enter a full email address or your Trac username. To get notified with a simple username or login, you need to specify a valid email address in your preferences.

The [trac] base_url option must be configured for links in the notification message to be correctly generated.

email subject

The email subject can be customized with the ticket_subject_template option, which contains a Genshi text template snippet.

email content

The notification email content is generated based on ticket_notify_email.txt in trac/ticket/templates. You can add your own version of this template by adding a ticket_notify_email.txt to the templates directory of your environment. The default is:

    ${ticket_body_hdr}
    ${ticket_props}
    # if ticket.new:

    ${ticket.description}
    # else:
    #   if changes_body:

    ${_('Changes (by %(author)s):', author=change.author)}

    ${changes_body}
    #   endif
    #   if changes_descr:
    #     if not changes_body and not change.comment and change.author:

    ${_('Description changed by %(author)s:', author=change.author)}
    #     endif

    ${changes_descr}
    --
    #   endif
    #   if change.comment:

    ${_('Comment:') if changes_body else _('Comment (by %(author)s):', author=change.author)}

    ${change.comment}
    #   endif
    # endif

    ${'-- '}
    ${_('Ticket URL: <%(link)s>', link=ticket.link)}
    ${project.name} <${project.url or abs_href()}>
    ${project.descr}

See the cookbook for additional template customization recipes.
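For instance, the options mentioned on this page live in trac.ini and might be configured like this sketch (the host name and the subject template value are illustrative):

    [trac]
    base_url = https://example.org/projects/myproject

    [notification]
    ticket_subject_template = $prefix #$ticket.id: $summary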
https://trac.osgeo.org/csmap/wiki/TracNotification
CC-MAIN-2022-33
refinedweb
243
51.65
Let's create a recursive solution.

- If both trees are empty then we return empty.
- Otherwise, we will return a tree. The root value will be t1.val + t2.val, except these values are 0 if the tree is empty.
- The left child will be the merge of t1.left and t2.left, except these trees are empty if the parent is empty.
- The right child is similar.

    def mergeTrees(self, t1, t2):
        if not t1 and not t2: return None
        ans = TreeNode((t1.val if t1 else 0) + (t2.val if t2 else 0))
        ans.left = self.mergeTrees(t1 and t1.left, t2 and t2.left)
        ans.right = self.mergeTrees(t1 and t1.right, t2 and t2.right)
        return ans

> ans.left = self.mergeTrees(t1 and t1.left, t2 and t2.left)

I'm new to this game, how does "and" work? Could you briefly explain?

@qxlin I am using t1 and t1.left as a shortcut for t1.left if t1 is not None else None. Here, "x and y" evaluates as follows: if x is truthy, then the expression evaluates as y. Otherwise, it evaluates as x. When t1 is None, then None is falsy, and t1 and t1.left evaluates as t1, which is None. When t1 is not None, then t1 is truthy, and t1 and t1.left evaluates as t1.left as desired. This is a standard type of idiom similar to the "?" operator in other languages. I want t1.left if t1 exists, otherwise nothing. Alternatively, I could use a more formal getattr operator: getattr(t1, 'left', None)

@awice Thanks so much! I am familiar with the '?' operator (or the if-else statement), but python always surprises me! Thanks for sharing a nice looking solution!

I tried it and got 132 ms. I was surprised to see that it underperformed my initial naive implementation (125 ms). I guess not creating a new TreeNode helps a bit if one of the sides is None:

    def mergeTrees(self, t1, t2):
        """
        :type t1: TreeNode
        :type t2: TreeNode
        :rtype: TreeNode
        """
        if not t1 and not t2:
            return
        if not t1:
            res = t2
        elif not t2:
            res = t1
        else:
            res = TreeNode(t1.val + t2.val)
            res.left = self.mergeTrees(t1.left, t2.left)
            res.right = self.mergeTrees(t1.right, t2.right)
        return res

Here is a complete analysis of this algorithm:

Thanks for sharing. I have a revised version of this and I think it is faster and easier to understand:

    def mergeTrees(self, t1, t2):
        if t1 and t2:
            root = TreeNode(t1.val + t2.val)
            root.left = self.mergeTrees(t1.left, t2.left)
            root.right = self.mergeTrees(t1.right, t2.right)
            return root
        else:
            return t1 or t2
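If you want to run these snippets locally rather than on the judge, a tiny harness does it. The TreeNode class below mirrors the definition LeetCode provides, Solution wraps the revised method from the post above, and the sample trees are made up for illustration:

    class TreeNode(object):
        def __init__(self, x):
            self.val = x
            self.left = None
            self.right = None

    class Solution(object):
        def mergeTrees(self, t1, t2):
            if t1 and t2:
                root = TreeNode(t1.val + t2.val)
                root.left = self.mergeTrees(t1.left, t2.left)
                root.right = self.mergeTrees(t1.right, t2.right)
                return root
            else:
                return t1 or t2

    # build two small trees and merge them
    t1 = TreeNode(1); t1.left = TreeNode(3); t1.right = TreeNode(2)
    t2 = TreeNode(2); t2.left = TreeNode(1); t2.right = TreeNode(3)
    t2.left.right = TreeNode(4)

    merged = Solution().mergeTrees(t1, t2)
    print(merged.val, merged.left.val, merged.left.right.val)  # -> 3 4 4 (on Python 3)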
https://discuss.leetcode.com/topic/92214/python-straightforward-with-explanation
CC-MAIN-2017-43
refinedweb
465
79.36
Minimalist build system for CoffeeScript.

Version 0.6.13

Do not hesitate to open a feature request or a bug report. A place to talk about it, ask anything, get in touch. Luckily you'll be answered sooner rather than later.

NOTE: The list is active and maintained despite the low activity, so don't be shy.

Minimalist build system for CoffeeScript, made for those who dare to use class definitions in CoffeeScript while being able to easily inherit from external files. The system is powered by import directives with wildcard facilities, exposed scopes, excluded-files filter options, and a packaging system that can inject your folders-as-namespaces into all your classes based on where they sit under your src folder. CoffeeToaster was created initially as a base for creating the Theoricus Framework.

Covered below: the -c/-w, -cd/-wd and -a/-ad options, plus the folders, exclude, vendors, bare, packaging, expose, minify, httpfolder, release and debug settings.

    npm install -g coffee-toaster

There are two simple scaffolding routines bundled with CoffeeToaster, for creating a new project structure from scratch and for creating the config toaster.coffee file for existing projects. CoffeeToaster suggests a very simple structure for initial projects; you can customize it as you like.

    toaster -n mynewapp

You will be asked for some things:

source folder — Relative folder path to your source folder, default is src.

release file — Relative file path to your release file, default is www/js/app.js.

http folder — The folder path to reach your debug file through http, default is an empty string. Imagine that www is your root web folder, and inside of it you have a js dir where you put your debug file. In this case you'd just need to inform 'js' as the http folder. It tells toaster how to reach your debug js file starting from the / on your server. This property is only for debug; your release file will not be affected.

Considering all the default values, you'll end up with a structure such as:

    myawsomeapp/
    ├── src
    ├── vendors
    ├── www
    │   └── js
    └── toaster.coffee

    4 directories, 1 file

You can also initialize an existing project with a config toaster.coffee file, like this:

    cd existing-project
    toaster -i

Some of the same information (src, release and httpfolder) will be required. Answer everything according to your project's structure, and a config toaster.coffee file will be created inside of it.

Toaster help screen:

    CoffeeToaster
    Minimalist build system for CoffeeScript

    Usage:
      toaster [options] [path]

    Examples:
      toaster -n myawsomeapp (required)
      toaster -i [myawsomeapp] (optional)
      toaster -w [myawsomeapp] (optional)
      toaster -wd [myawsomeapp] (optional)

    Options:
      -n, --new          Scaffold a very basic new App
      -i, --init         Create a config (toaster.coffee) file
      -w, --watch        Start watching/compiling your project
      -c, --compile      Compile the entire project, without watching it.
      -d, --debug        Debug mode (compile js files individually)
      -a, --autorun      Execute the script in node.js after compilation
      -j, --config       Config file formatted as a json-string. [string]
      -f, --config-file  Path to a different config file. [string]
      -v, --version
      -h, --help
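For reference, with the default answers above, the resulting toaster.coffee boils down to something like this sketch (see the config file section further down for the full format; this is not the generator's literal output):

    toast 'src',
      httpfolder: 'js'
      release: 'www/js/app.js'
      debug: 'www/js/app-debug.js'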
The import directive comes in two forms:

    #<< app/views/user_view
    #<< app/utils/*

By putting #<< app/views/user_view in your CoffeeScript file, you're telling CoffeeToaster that there's a dependency. It's like a require, except that you can't save a reference to the imported file in a variable. Instead, these directives should be put in the first lines of your files. This is how you organically tell Toaster about the specific ordering to be considered when all of your files get merged.

Files imported this way will be gracefully sorted in your final output javascript, so every file is always defined before it's needed. Wildcards (#<< app/utils/*) are also accepted as a handy option.

If you're writing a class B that extends a class A, you should first import class A so it is available to be extended by class B:

    # src/app/a.coffee
    class A
      constructor:->
        console.log 'Will be used as base class.'

    # src/app/b.coffee
    #<< app/a
    class B extends A
      constructor:->
        console.log 'Using class A as base class'

Think of it as a glue that you use to chain all of your files appropriately.
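For instance, an application entry point might lean on both directive forms at once (the paths and file names here are illustrative):

    # src/app/app.coffee
    #<< app/utils/*
    #<< app/views/user_view

    class App
      constructor:->
        # every file under app/utils, plus the user view, is guaranteed
        # to be defined above this point in the merged output
        console.log 'all dependencies are in place'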
To do it, just put these three lines in your .vimrc: " for coffee-toasterset nobackup " no backup filesset nowritebackup " only in case you don't want a backup file while editingset noswapfile " no swap files This will guarantee the expected behavior of Toaster and make it play nicely with VIM without any conflicts. For more info about why it's really needed, please check this thread. The toaster.coffee is the config file from where Toaster seek all information about your app, vendores, build options and so on. There are two main usages you can make of this file: When all your code is bellow one single source folder your set up the main toast call passing the folder path directly. # src foldertoast 'src'# excluded items (will be used as a regex)exclude: 'folder/to/exclude''another/folder''.DS_Store'# packaging vendors among the codevendors: 'vendors/x.js''vendors/y.js'# gereral options (all is optional, default values listed)bare: falsepackaging: trueexpose: '' # can be 'window', 'exports' etcminify: false# httpfolder (optional), release and debug (both required)httpfolder: 'js'release: 'www/js/app.js'debug: 'www/js/app-debug.js' When your code is splitted between two or more source folders you can set the main toast call without any path, and inform your folders right bellow it. toastfolders:'src/my/app/folder': 'app''src/my/lib/folder': 'lib'# ... Let's take a closer look at all properties you can have in your toaster.coffee file and what each one of these is responsible of. Mandatory: no Type: Object Default: null In case you have more than one src folder, you can set an object of objects containing setup information about all your source folders, in the format 'folderpath':'folderalias'. The hash-key is the path of your folder, and the hash-value is the alias you want to prepend to all files under that. Pay attention to this specially when using Toaster with the '-j' option. To give an example, the equivalent use of this config: toast 'src'# ... Would be: toastfolders:'src': '' NOTE: Aliases take effect only if the packaging is set to true. Aliases lets you set a virtual top namespace to your source folder, if you have src/app/app.coffee which is a class App, you'll usually access it using new app.App. Now if you set an alias like 'src':'awesome' the whole structure under your source folder will be addressed under that awesome namespace and you need to prepend it when accessing your classes, i.e. new awesome.app.App. Mandatory: no Type: Array Default: [] Let's you excplicity exclude some folder, file or file type from Toaster search/process mechanism. The string you use here will effectively turn into a RegExp like that: '.DS_store''.swp''my/folder/to/be/excluded' Mandatory: no Type: Array Default: [] You can define vendors such as: vendors: 'vendors/x.js''vendors/y.js'... It's an ordered array of all your vendor's paths. These files must be purely javascript, preferably minified ones -- Toaster will not compile or minify them, only concatenate everything.### `bare` Mandatory: no Type: Boolean Default: false If true, compile your CoffeeScript files without the top-level function safety wrapper: {console;}; So you will end up with just your peace of code: console; Mandatory: no Type: Boolean Default: false When packaging is true, Toaster will rewrite all your class declarations. 
If you have a file in src/app/models/user.coffee with this contents: Toaster will rewrite your declaration prepending a namespace to it, based on the folder the class is located, resulting -- in this example -- into this: This rewriting process is saved directly into your file. In case you move this class to another folder, the prepended namespace will be rewrited again, always following your folder structure. In other words, your don't need to worry about hardcoded namespaces in your files, because Toaster will handle all the dirty for you.### `expose` Mandatory: no Type: String Default: null If informed, list all you packages of classes in the given scope. If you use window as your expose scope, your classes will be available also in the window object -- or whatever scope you inform, suck as exports if you're building for NodeJS. In the end you'll be able to access your files throught this scope where your classes was exposed. Mandatory: no Type: Boolean Default: true If true, minify your release file using UglifyJS. Debug files are never minified.### `httpfolder` Mandatory: no Type: String Default: '' The folder path to reach your debug file through http, in case it is not inside your root directory. Imagine that the www is your root folder and when you access your webiste the / referes to this folder. Inside this www folder you have another folder called js where you put all your compiled js, resulting from a config like this: toast 'src'# ...release: 'www/js/app.js'debug: 'www/js/app-debug.js' Following this case you'd just need to inform js as your http folder. Toaster will use it to reach your debug files. For that, it will writes the declarations inside the debug boot loader following this location in order to import your scripts properly when in debug mode, prepending your httpfolder to all file paths: // app-debug.jsdocument Without knowing that your JS files is under the js folder this path would be broken. NOTE: Your release file will not be affected by this property.### `release` Mandatory: yes Type: String Default: null The file path to your release file.### `debug` Mandatory: yes Type: String Default: null The file path to your debug file. You'll certainly find some useful resources in the examples provided. Examine it and you'll understand how things works more instinctively. Install coffee-toaster, clone the usage example and try different config options, always looking for the differences in your javascript release file. Single folder example Multi folder example API example You can use Toaster through API as well, in case you want to power up your compiling tasks or even build some framework/lib on top of it. See the API example for further information. Toaster = require 'coffee-toaster'Toastertoasting = basediroptionsskip_initial_buildtoastingbuild header_code_injectionfooter_code_injection Environment setup is simple achieved by: git clone git://github.com/serpentem/coffee-toaster.gitcd coffee-toaster && git submodule update --initnpm link Builds the release file inside the lib folder. make build Starts watching/compiling using a previuos version of the CoffeeToaster itself. make watch Run all tests. make test
https://www.npmjs.com/package/coffee-toaster
CC-MAIN-2017-26
refinedweb
2,232
54.42
Reduction operations are those that reduce a collection of values to a single value. In this post, I will share how to implement parallel reduction operations using CUDA.

Sequential Sum

Computing the sum of all elements of an array is an excellent example of a reduction operation. The sum of an array whose values are 13, 27, 15, 14, 33, 2, 24, and 6 is 134. The interesting question is: how would you compute it? Probably your first answer would be doing something like this:

    (((((((13+27)+15)+14)+33)+2)+24)+6)

Am I right? The problem with this approach is that it is impossible to parallelize. Why? Each step depends on the result of the previous one.

Parallel Sum

Adding values is an associative operation. So, we can try something like this:

    ((13+27)+(15+14))+((33+2)+(24+6))

This way is much better because now we can execute it in parallel!

CUDA

Let's figure out how to do it using CUDA. Here is the main idea:

- Assuming N is the number of elements in the array, we start N/2 threads, one thread for each two elements.
- Each thread computes the sum of the corresponding two elements, storing the result at the position of the first one.
- Iteratively, each step:
  - halves the number of threads (for example, starting with 4, then 2, then 1)
  - doubles the step size between the corresponding two elements (starting with 1, then 2, then 4)
- After some iterations, the reduction result will be stored in the first element of the array.

Let's code

    #include "cuda_runtime.h"
    #include "device_launch_parameters.h"
    #include <iostream>
    #include <numeric>

    using namespace std;

    __global__ void sum(int* input)
    {
        const int tid = threadIdx.x;
        auto step_size = 1;
        int number_of_threads = blockDim.x;

        while (number_of_threads > 0)
        {
            if (tid < number_of_threads) // still alive?
            {
                const auto fst = tid * step_size * 2;
                const auto snd = fst + step_size;
                input[fst] += input[snd];
            }

            __syncthreads(); // wait for all partial sums before the next step

            step_size <<= 1;
            number_of_threads >>= 1;
        }
    }

    int main()
    {
        const auto count = 8;
        const int size = count * sizeof(int);
        int h[] = {13, 27, 15, 14, 33, 2, 24, 6};

        int* d;
        cudaMalloc(&d, size);
        cudaMemcpy(d, h, size, cudaMemcpyHostToDevice);

        sum<<<1, count / 2>>>(d);

        int result;
        cudaMemcpy(&result, d, sizeof(int), cudaMemcpyDeviceToHost);

        cout << "Sum is " << result << endl;

        getchar();

        cudaFree(d);

        return 0;
    }

Time to action

In this post, we implemented a basic example of a parallel reduction operation in CUDA. But the pattern we adopted can be used in more sophisticated scenarios. I strongly recommend that you try to implement other reduction operations (like finding the max/min values of an array). Now it would be easy, right? Share your thoughts in the comments.

This Post Has 6 Comments

Hi Elemar, what do you think about projects like Hybridizer and ManagedCuda? I've been testing ALEA.. It is "nice"

What do the following lines do?

    step_size <<= 1;
    number_of_threads >>= 1;

Or particularly, what is the function of "<<="?

The line is:

    step_size <<= 1

It doubles the step_size.

How to sum using multiple blocks?

hi, what happens when the size of the array is not a power of two? Im trying to do it with an array of 784 elements but it does not sum well, do you have any ideas? thanks in advance
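As a follow-up to the exercise suggested in the post, here is a sketch of the max-value variant using the exact same pattern — only the combine step changes. For array sizes that are not a power of two (as asked in the comments), one simple approach is to pad the input up to the next power of two with the identity element of the operation (INT_MIN for max, 0 for sum), so the padding can never win a comparison. This is a sketch, not code from the original post:

    #include <climits>

    __global__ void max_reduce(int* input)
    {
        const int tid = threadIdx.x;
        auto step_size = 1;
        int number_of_threads = blockDim.x;

        while (number_of_threads > 0)
        {
            if (tid < number_of_threads)
            {
                const auto fst = tid * step_size * 2;
                const auto snd = fst + step_size;

                // combine step: keep the larger of the two elements
                if (input[snd] > input[fst])
                    input[fst] = input[snd];
            }

            __syncthreads(); // all comparisons done before the next round

            step_size <<= 1;
            number_of_threads >>= 1;
        }
    }

    // padding idea: for n = 784, round the buffer up to 1024 and fill
    // the extra 240 slots with INT_MIN before launching the kernel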
https://www.elemarjr.com/en/archive/parallel-reduction-in-cuda/
CC-MAIN-2020-29
refinedweb
532
64.61
The timer delegate is specified when the timer is constructed, and cannot be changed. The method does not execute on the thread that created the timer; it executes on a ThreadPool thread supplied by the system.

When you create a timer, you can specify an amount of time to wait before the first execution of the method (due time), and an amount of time to wait between subsequent executions (period). You can change the due time and period later using the Change method.

When a timer is no longer needed, use the Dispose method to free the resources held by the timer. Note that callbacks can occur after the Dispose() method overload has been called, because the timer queues callbacks for execution by thread pool threads. You can use the Dispose(WaitHandle) method overload to wait until all callbacks have completed.

The following code example demonstrates the features of the Timer class.

    using System;
    using System.Threading;

    class TimerExample
    {
        static void Main()
        {
            // Create an event to signal the timeout count threshold in the
            // timer callback.
            AutoResetEvent autoEvent = new AutoResetEvent(false);

            StatusChecker statusChecker = new StatusChecker(10);

            // Create an inferred delegate that invokes methods for the timer.
            TimerCallback tcb = statusChecker.CheckStatus;

            // Create a timer that signals the delegate to invoke
            // CheckStatus after one second, and every 1/4 second
            // thereafter.
            Console.WriteLine("{0} Creating timer.\n",
                DateTime.Now.ToString("h:mm:ss.fff"));
            Timer stateTimer = new Timer(tcb, autoEvent, 1000, 250);

            // When autoEvent signals, change the period to every
            // 1/2 second.
            autoEvent.WaitOne(5000, false);
            stateTimer.Change(0, 500);
            Console.WriteLine("\nChanging period.\n");

            // When autoEvent signals the second time, dispose of
            // the timer.
            autoEvent.WaitOne(5000, false);
            stateTimer.Dispose();
            Console.WriteLine("\nDestroying timer.");
        }
    }

    class StatusChecker
    {
        private int invokeCount;
        private int maxCount;

        public StatusChecker(int count)
        {
            invokeCount = 0;
            maxCount = count;
        }

        // This method is called by the timer delegate.
        public void CheckStatus(Object stateInfo)
        {
            AutoResetEvent autoEvent = (AutoResetEvent)stateInfo;
            Console.WriteLine("{0} Checking status {1,2}.",
                DateTime.Now.ToString("h:mm:ss.fff"),
                (++invokeCount).ToString());

            if (invokeCount == maxCount)
            {
                // Reset the counter and signal Main.
                invokeCount = 0;
                autoEvent.Set();
            }
        }
    }

Version information:

Universal Windows Platform: Available since 8
.NET Framework: Available since 1.1
Portable Class Library: Supported in portable .NET platforms
Silverlight: Available since 2.0
Windows Phone Silverlight: Available since 7.0
Windows Phone: Available since 8.1

Thread Safety: This type is thread safe.
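To illustrate the Dispose(WaitHandle) remark above, a shutdown sequence for the example's stateTimer might look like the following sketch (the event name is arbitrary):

    // Wait for any queued callbacks to finish before moving on.
    using (ManualResetEvent callbacksDone = new ManualResetEvent(false))
    {
        stateTimer.Dispose(callbacksDone); // signals the event when callbacks complete
        callbacksDone.WaitOne();
    }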
https://msdn.microsoft.com/en-us/library/system.threading.timer.aspx
CC-MAIN-2015-40
refinedweb
319
60.31