text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
User talk:Splaka/Archive2 From Uncyclopedia, the content-free encyclopedia Hi Splaka Hilf mir bitte, weil ich hier neu bin. Bébizokni ist auf Ungarisch/Hungarian/Magyar. Aber dieses Sablon finde ich nicht... --Dr. Steller 11:27, 13 Nov 2005 (UTC) Image ist aus diese Seite: --Dr. Steller 11:29, 13 Nov 2005 (UTC) - Sorry, I don't comprehend. The {{Qua?}} template was simply to notify you that Bébizokni was in the wrong language area. Could you move it to Hu:Bébizokni (if it is Hungarian) please? I notice we also don't have any main Hungarian page at Hu:, start one if you like ^_^. --Splaka 11:43, 13 Nov 2005 (UTC) - Actually, having realized that was German and not Hungarian: Um äußere Bilder zu verwenden sollten Sie sie hochladen hier verwendend Special:Upload. Ungarische Sprachenseiten sollten mit vorgesetzt werden: "Hu:" (eg Hu:Bébizokni). Es gibt ein volles deutsches Uncyclopedia an uncyclopedia.de. (Babelfish translation, sorry)--Splaka 11:49, 13 Nov 2005 (UTC) Dawg to Splaka Hey, life gave me a lemon and I wanted a lime. If anyone wonders where I am (I figure you might), I'll be mostly offline until futher notice (not sure how long, need to order some new hardware for a certain recently-deceased machine that was freaking out earlier). Apparently I was right about the database... Well, drop me a message here or on my talk page. I'll try to drop by from time to time. » Brig Sir Dawg | t | v | c » 11:44, 13 Nov 2005 (UTC) - Bummer! Damn computers. Will do. Stay sane. Make lemonade. --Splaka 11:49, 13 Nov 2005 (UTC) PS: <chron> jason, you know our thing is doubly serialzed right? <JasonR> Yes. - Yes, ERTW does hold the record. Sadly, the dead hardware is both expensive and hard to come by, even on the internet. I'm now a modern digital gypsy, almost like when I was a student and I carried floppies with me before the original zip drives became ubiquitous and when CD-Rs still cost a fortune. Now I live on a 1GB USB flash drive; the kids now-a-days have it so easy. » Brig Sir Dawg | t | v | c » 10:40, 15 Nov 2005 (UTC) About my block Ummm... is there any chance that one of the admins you talked to about my 12-hr block was Famine? (reply on my talk, plz) --Clorox MUN ONS (diskussion) ☃ 20:32, 13 Nov 2005 (UTC) Why blocked Can you please tell me why I am blocked? Something about American/English, or Poop_Cuisine, of which I know nothing about. My user name is Chocology - It is because you share an AOL proxy IP with a vandal. Your username doesn't appear to be blocked, just be wary of aol (it sucks, every edit can have a different IP, sometimes we have to ban the whole thing). ^_^ --Splaka 23:11, 13 Nov 2005 (UTC) Hey, I'm Sorry I found Zach on CNN and a friend of mine had sent me the pictures (I'm the other guy on the Bad Boy'd? case) and I thought it would be funny. I was editing, loading, editing, loading, to make sure it looked okay, and you deleted it the first time while I was editing, so the reload was a mistake. I sincerely apologize and will not attempt to repost it. I just thought it was kind of funny. --192.195.225.6 - Fair enough. What annoyed us most was the use of bugmenot to recreate it and upload pics. There is no need to use bugmenot. Logins do not require email addresses or any personal information. I actually like and use bugmenot, but it is not needed for wiki sites. It can also be easily disabled by logging in and changing the password. If you create a login you can put the article back up (although mention it is obscure and not a friend of yours). Sorry for jumping the gun ^_^. --Splaka 07:52, 14 Nov 2005 (UTC) IRC spoofing Just a heads-up. Our old friend (I won't be Specific about his name, I'll just keep it General) has learned how to use /nick on the IRC channel. He's already practiced spoofing you, and I'm sure he'll play with mine next. I think it will be painfully obvious when it's really one of us, and when it's him. FWIW, I didn't hang around the channel to watch -- he's getting kind of boring. -- Sir BobBobBob ! S ? [rox!|sux!] 21:38, 14 Nov 2005 (UTC) Postman Pat Hello, Splaka. I have no idea what are your plans with this article you have nothing to do (yet?) with except the MTU tagging. You've recently deleted in a hurry the stuff I write all this morning (funny or not that text). I've noticed you have been during the last week the one who ask writers to move/expand/delete just-made article stubs in next 7 days which in many cases take the fun off in stories and so far in writing, such a hurry. I think it needs lots of time and second thoughts to find the right mixes between the articles and not too much one person's quick decisions what to do with all this crap written. VERY BAD articles is a different thing, of course. Don't we all do this site together, mixing different ideas from different writers coming from all around the world? I'm not a native English speaker myself and I wellcome everyone (you included) to make this crappy article a lot larger and better. - 81.197.63.160 (IP at the moment) 15 Nov 2005 - I MTU tag artcles as they are created, if they look short but worth expanding. Note that they are usually in no danger for 7 days or so. If you expand the article you can remove the tag. The tag is just our way of finding the artcles later if they never get expanded (you would not believe how many people come in, create a two sentence article, and disappear thereafter). If I deleted some stuff of yours it was either TOO short (or plain bad) for even MTU, or was about non-notable people. Can you give me names of articles of yours you feel were deleted unfairly? --Splaka 12:11, 15 Nov 2005 (UTC) P.S. As you were the original author of Postman Pat I'll undo my revert, it looked at first glance like vandalism ^_^. - OK, you are forgiven for this matter. ;-) But I must say that MTU tag can be quite frightening for us foreigners (at least) who write here at the first time and not be too sure of what the main rules are here. Stub tag is easier one to understand, a well-known thing from Semprini. There should be probably a simplier text written in the MTU tag to explain what it really means? --original writer 81.197.63.160 (IP at the moment) 15 Nov 2005 - Hey, don't blame Splaka too quickly. I may be the culprit. I just got finished annihilating several articles that Splaka marked MTU, because in *my* opinion they were a waste of electrons. Sorry, Splaka... -- SirBobBobBob ! S? [rox!|sux!] 14:58, 15 Nov 2005 (UTC) - I'm sorry as well if I said things too harshly and like attacked Splaka personally. It's the practice I'm really talking about. I just think Uncyclopedia may lose good fresh writers here if you administrators react at once on every new stub by threatening them to be moved/deleted instead of marking them just to be a stub. I myself prefer a hilarious stub to a long story with loose humour (if these are the choices). But of course, it is easier to talk about the problems than being the one who put things into practice. You administrators also do loads of great job here, I can't deny that at all! --81.197.63.160 (IP still at the moment? or not) 15 Nov 2005 Peter Williams - Splaka, - Just thought you'd want to know Peter Williams has been recreated again. MadMax 04:57, 18 Nov 2005 (UTC) Can I, can I? Hi, Splaka. Ghostface said you're the man to talk to. So I am. I'm a new user - Tryggvasson, however, before becoming one (starting to wear suits, ties and all and get some recognition (or not)) I have managed to chip in a couple of articles and a few contributions to others, as a mercenary (ip - 82.76.101.94), before I joined the regiment. (I've created Nightwish and Don't talk with your mouth full; I've completely rewritten Mintrubbing and Muie (yeah, you've guessed, I'm Romanian :p) - they were very small and really no fun; I've added a chapter to Mother Fucker (how do I choose my topics, I wonder?), plus three others to Gerbonia (language (except three quoutes), women and national sports) and a bit of minor contributions to various other articles. I'd like to ask you if there's a way you can connect the IP to one user name or attribute all the "writings" of that ip (which is really mine, by the way) to my user name, so that I won't feel deprived of the past in my new capacity. That's it. Thanks a lot (in advance). Take care! --Tryggvasson 22:59, 19 Nov 2005 (UTC) - Unfortunately that is rather complex and requires hacking the database. See Wikipedia:Wikipedia:Changing_attribution_for_an_edit#General_Notes for the policy on wikipedia. AFAIK it has never been done on uncyclopedia. Best thing to do is make a list of IPs you've used and say "I edited under these IPs". (The wikicities developer is also currently very busy trying to upgrade us to the next version of MediaWiki.) --Splaka 23:10, 19 Nov 2005 (UTC) Jesus Now listen. Why won't you let me say on the Jesus talk page that the page is unfunny63.19.207.215 22:19, 21 Nov 2005 (UTC) - The talk page is a redirect, you shouldn't destroy a redirect to make a comment (instead, comment on the page it directs too). The page itself is a disambiguation page and doesn't need to be funny. --Splaka 22:26, 21 Nov 2005 (UTC) - Parts of the article such as "Jeez-its" were obviously trying (and failing) to be funny. 63.19.207.215 22:29, 21 Nov 2005 (UTC) Rap music - Splaka, - Just wanted to let you know I just merged the Rap music article under "Old School" in Rap. MadMax 02:34, 22 Nov 2005 (UTC) - Cheers. I changed it to a redirect. --Splaka 02:39, 22 Nov 2005 (UTC) - Ditto on Danish-Canadian War. And I was really waiting for a good article on that to... MadMax 05:25, 22 Nov 2005 (UTC) PS: Splaka, I hate to keep bothering you with these. Should I save you the trouble and add a redirect myself ? I would have already done so except for the MTU template. MadMax 05:30, 22 Nov 2005 (UTC) User talk:Famine - Splaka, thanks for the kind words regarding my computer issues. Although it's really no trouble if I get blocked as I usally just log on again. Admittedly it's a hassle (and certainly after more then a year of editing Wikipedia I'm used to it ;-] ), but it's certainly justified given the massive amounts of vandalism that comes from AOL alone. Anyway I certainly apprecite the fact someone notices my contributions. :) MadMax 03:35, 22 Nov 2005 (UTC) User:SamuraiClinton I saw that you banned the aforementioned suspect for a bit. I was wondering if you'd agree that his contributions are somewhat suspect as well: Wheely Willy is a redirect to Willy on Wheels, which looks to me like QVFD vanity. I think we can safely rollback his addition to Wikipedia, and the contribution to Jimbo Wales Jr. makes that page look even more huffable. Update: The only link to Wheely Willy is Page move vandalism created by 69.209.170.251. Also from this fine auteur, Penguin shit island and edits to Penguin and (gasp!) Wheely Willy. I took the liberty of banninating this IP for a week as well with the excuse reason "Appears to be User:SamuraiClinton making anon edits. Stop it." I haven't deleted any of the pages, in case you need them for your doctoral thesis research, so please feel free to huff 'em at your leisure. -- Sir BobBobBob ! S ? [rox!|sux!] 15:51, 23 Nov 2005 (UTC) Image:PushOnce.jpg Oiy, What did you do that for? now when you press the button in the Des Moines article (the only one it's on) it goes to the image page for it rather than the article about the button!!--MysteryShopper 11:04, 24 Nov 2005 (UTC) - Image redirects don't work so hot, and well, are screwy. Use this code: <span class="plainlinks">[]</span> (the links have to be external for this to work). --Splaka 11:09, 24 Nov 2005 (UTC) - But I like image redirect's their like "old skool" (old == 2/3 months) + you get the image in the resultant article!--MysteryShopper 11:13, 24 Nov 2005 (UTC) - Maybe, but it isn't your image, and making an image description page into a redirect is just a bad habit (for those wanting information on the image source and licensing). --Splaka 11:21, 24 Nov 2005 (UTC) - ............--MysteryShopper 11:26, 24 Nov 2005 (UTC) - Everything that was in the article is in the edit summary!--MysteryShopper 11:36, 24 Nov 2005 (UTC) - Everything that was in the original upload summary maybe (and unparsed), however as there is no preview this is not always the final version of the description page. I'd argue that a simple link to the destination page on an image description page is hardly more inconvenient than a redirect, but why? Image description pages serve a purpose that is inconvenienced by redirects (which serve little purpose), so no. Also, why are you still arguing? You already used my code, I see. --Splaka 11:42, 24 Nov 2005 (UTC) - For the principal! (also I'm 90% certain it I had replaced the redirect you would have blocked me) --MysteryShopper 11:49, 24 Nov 2005 (UTC) - Blocked? Possibly. Lot easier than using a talk page ^_^. But until wikia disables external images, my alt method is fine (except the image appears unused, use Template:Notorphan so it doesn't get huffed). --Splaka 11:52, 24 Nov 2005 (UTC) - I now would have been a pain to get myself unblocked :-)--MysteryShopper 11:54, 24 Nov 2005 (UTC) - You are almost being a pain to get yourself blocked. ^_^ If you wanna continue this come to IRC (before I sleep). --Splaka 11:57, 24 Nov 2005 (UTC) - Who says I aint already? --MysteryShopper 11:59, 24 Nov 2005 (UTC) - FWIW, I actually wanted to have the image redirect to that article (or maybe a future article to be named later), but being a total n00b, I couldn't figure out how to do it properly. So TYVM to both of you! --Schnuggle Bear! 23:06, 27 Nov 2005 (UTC) Bill Fitchburg It is an attempt to mainstream an inside joke. If there is anything I can do to make it funny on more of a mass basis, tell me. I don't want my page deleted if I can save it. --Kosmakos 22:46, 24 Nov 2005 (UTC) - The problem with inside jokes is, nobody else will get it. Allowed 'inside joke' vanity falls under the 'community' guideline, be it about a school, town, online forum/chatroom, or other group. Writing about a non-notable friend/classmate/sibling though just has no redeeming quality as it won't reach a wide audience (exceptiosn are made for user pages (which don't show on article namespace) and subsections of above mentioned allowed vanity). Uncyclopedia strives to be a satire of Wikipedia, but we can't do that without following some basic patterns of them (we allow school/university 'vanity' because of their policy of allowing a wiki article about every school). So the short answer is, if it is about a real but non-notable person (or reads like it is, even if you made the person up), it will be deleted by someone here. --Splaka 22:53, 24 Nov 2005 (UTC) It's supposed to be an example of a generalization on the prototypical chavs/wiggers. If I can get that point across, is it saveable?I worked a good amount of time on it and so would like to find a way to save it. --Kosmakos 23:12, 24 Nov 2005 (UTC) - I asked a brit: <Codeine> Well, it *could* be funny <Codeine> but it needs work <Codeine> If he's prepared to put the hours in, I say give it a stay of execution <Codeine> but I'd ask him if it's a made-up name <Codeine> if not, he can change it to one - So, if you can make it more obviously a generalization and not about an NNP (photoshop the image, more made-up-ified name) then it could work. --Splaka 23:24, 24 Nov 2005 (UTC) Diarrhea Since you were the last admin to touch this article I'll ask you if it's OK to revert 72.1.206.7's changes. Is it? Thanks for your time. --Naughtyned 05:32, 25 Nov 2005 (UTC) - Yah, undone. Anonymous users who blank sections without an edit summary can ususally be undone (unless it is spam or vandalism or such). --Splaka 06:14, 25 Nov 2005 (UTC) Too many templates Still not pretty, eh? --Logixoul 21:31, 25 Nov 2005 (UTC) Sk33lz0n3 Codeine marked this one as allowable vanity, but I think he missed the little "Splaka is gay" at the bottom. I'll leave it to you to decide how to respond (and to razz Codeine). -- Sir BobBobBob ! S ? [rox!|sux!] 03:42, 26 Nov 2005 (UTC) - Cheers. Feel free to CVP or delete it if they offend again. --Splaka 04:13, 26 Nov 2005 (UTC) - False alarm, I guess -- they didn't go back to the Dark Side after your edit, so hopefully they'll follow the Straight and Narrow Path. They should feel lucky, though, that my bloodlust had already been sated by the enormous amount of crap being generated by k1dd13z home for the holidays. -- SirBobBobBob ! S? [rox!|sux!] 05:47, 27 Nov 2005 (UTC) Dacia Logan Hi, I looked at the list of cars being written about, for example on page Automobile (at the bottom), and I'm not sure how to add the Dacia Logan. If you haven't read this article before I recommend you do it now, it's one of the most hilarious I've read about a car model. Anyway, I'd appreciate it if you'd add this link to the list of car model, thanks. ~User:Csaba UK Highway Code We liked this, some of us, but somoene blanked it and it got deleted. Is resurrection an option? Also Hazard Lights was, IIRC, related. - Guy - Both were blanked by the original author (which is usually an indication that they wanted it deleted, we discourage blanking but for something that is MTU tagged and then subsequently blanked by the same person, we assume they just agree it should be gone). The last versions before blanking were: - You can recreate them, if you feel they won't get deleted too quickly, but use proper capitalization this time, eg Hazard lights (although UK Highway Code seems to be a proper noun in that context). --Splaka 10:44, 26 Nov 2005 (UTC) Here it is... You're probably the only person here that was helpful and influential to me during this first month that I haven't taken the time to write a decent letter to. I don't know why that is, but it may be because I see you in channel so often anyway. :) Much the way that you singled out Elvis as a good role model to learn from, I've been trying to learn from you. While my education here has been more observation than interaction (like you and E), it's still been very helpful. Everything that I've read about how you've changed this site for the better has been useful, and I've tried to do good work here though I don't have your range of skills. I enjoy our sane/ridiculous conversations in the channel, and have appreciated you showing me the ropes in finding vandalism in the diff logs, asking my opinion on pages, editing my monobook, nominating me for awards, not being the least bit bothered that I shamelessly steal code from your userpage, and the other bucketload of stuff that has made getting up to speed here easier. Anyway, lemme know where to mail the tuition money. It was well spent, I think. :) -- T. (talk) 21:56, 26 Nov 2005 (UTC) Homosexual Splaka, hate to be a snitch but Medieval restored the previous version of the article. I didn't want to start an edit war by reverting to the last version but I thought you'd want to know. MadMax 10:32, 27 Nov 2005 (UTC) - Snitches rule, I've reverted him twice. I'll see if he does it again. Thing is, User:Spooner came in and asked us to watch the article, as someone had been borderline-vandalizing it with that version for a while. --Splaka 10:35, 27 Nov 2005 (UTC) MediaWiki:Monobook.js My most recent attempt to break uncyclopedia got me thinking, random logos, a la User:Algorithm's <choose><option> code! do you think I should :-)!--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 17:23, 29 Nov 2005 (UTC) - Personally, I'm against the idea; I like seeing the potato in the corner all the time. --Sir gwax (talk) 19:07, 29 Nov 2005 (UTC) France Splaka, sorry I was cut off on IRC before I could get back to you, I think I was really pushing my luck with less then two minutes before closing. ;-) It's hard to tell from the exact moment when the article began going down hill but the was a series of reversion between the 20th up until a few days ago. I think there were just so many edits in between reversions, Admins could check out every edit before it. Here is a comparison though between my last (loged in) edit on Nov. 9 and the version before my edits. [1] MadMax 04:51, 30 Nov 2005 (UTC) - Okiedoke, will take a peek when I can. Looks like a mess though. Buh. --Splaka 04:55, 30 Nov 2005 (UTC) User talk:EvilZak Do not put back abusive comments against me, which I deleted. The person who put that here was the biggest quinn on earth, anyway. - That person was an admin, and you have no right to remove abusive comments about yourself while putting more in, like this one. Anywho, you've earned a complimentary range block. Enjoy. Douchebag. --Splaka 05:03, 30 Nov 2005 (UTC) Idiotic Table Fantastic work. It looks beautiful! Strong RadX 14:39, 1 Dec 2005 (UTC) IRC link I was just trying to find a link to the Uncyc IRC when I forgot it. And its source. Gimme a link, plz. (reply here) ----Clorox MUN ONS (diskussion) ☃ 01:56, 3 Dec 2005 (UTC) - Try the bottom of the front page. Also, don't go back and 'edit' a mistake when you use [+], that adds a second copy of the post. --Splaka 01:59, 3 Dec 2005 (UTC) Hola I'd like to submit a request for changing the title of the article "British Colombia" to "Coca-Colombia". Please instruct me in the pertinent procedure or don't. - Well, it usually isn't funny to have pages at incorrect locations. You can refer to it as such on the page, or you can create a new page about BC called CC (as long as it is big enough to warrant non-movement). Hmm. Consult Ogopogo about it, he's been cleaning up the Canadian articles. --Splaka 07:08, 3 Dec 2005 (UTC) {{skeith}} on Bunny Do you think a {{skeith}} on the top of the Bunny article would be funny? --Logixoul 12:21, 3 Dec 2005 (UTC) - Possibly at the bottom, at the top it would detract from the overall 'joke' too fast, I think. The drummer from Def Leppard I do not think that the article should be labeled NRV. If it was expanded it wouldn't be as funny; the whole joke is "This person's arm-related article is a stub." --PseudoChron 19:06, 3 Dec 2005 (UTC) - A quote and 3 sentences doesn't make it very redeeming or even much of a 'stub' more like a sub-stub. Try expanding it a tiny bit more? --Splaka 22:47, 3 Dec 2005 (UTC) Notebook page - Splaka, I just got your message on Report a Problem. Just wanted to let you know the categorization button works great. Apparently some computers may or may not use Javascript. I'd of told you in IRC but I guess we keep missing each other. MadMax 07:23, 4 Dec 2005 (UTC) How do I revert an article to an older version? I just spent a couple of hours rewriting Armenia to get it funny and off the vote for deletion list and the original author (69.242.105.27) came back and put the previous (very lame and unfunny) History section back into that article (while deleting my new section). The original history was the whole article before and the main reason it was on VFD in the first place. I've never reverted an article before, do I just type it back in, or what? Thanks man Wiki Tiki God 04:48, 5 Dec 2005 (UTC) - Click [history], click the date of the good version, click [edit] (you'll get a warning about it being an out of date version). Put in a summary like: 'reverting'. Click [save]. --Splaka 04:51, 5 Dec 2005 (UTC) U.S. Presidential election This entry is one of the most wanted on uncyc... I figured I was doing my part to make the red links bluer. Is it just not funny enough, or what? -- Myronic da Chronic MUN NScx (h0lla) @19:06, 5 Dec 2005 (UTC) - A red link is sometimes preferrable to a bad blue link. Don't try to blue all the red links with short articles of little content. 3 lines is not even a stub. Quality over Quantity. And such. --Splaka 22:28, 5 Dec 2005 (UTC) E_e_cummings Um, before you huff this article, you might want to see some of his actualpoetry, whereupon you will realize it is brilliant satire and sing this article's praises. Seriously. --Hobelhouse 14:57, 6 Dec 2005 (UTC) - FWIW, I agree with Hobelhouse. He's a real person, and the article is written in his style. It needs work, but even as wide as I swing my sword, I'd spare this waif. -- SirBobBobBob ! S? [rox!|sux!] 15:14, 6 Dec 2005 (UTC) (1) The correct page title (yes, we mostly want that here, Manitoba over Manisnowba) is E. E. Cummings , (location typoes are not teh funny, even if it is in their style. You don't see us moving pages to stupid titles all the time (well, you might, but then we move them back and ban the person that did it)), making the capitalization and punctuation of this one totally wrong. (2) even if it is funny, there is still no content (4 templates and 1 picture, vs 33 words to the article), making it definitely an NRV. --Splaka 21:46, 6 Dec 2005 (UTC) - I figured the alternative spelling would be appropriate, given that most folks think that's the way he preferred it (in fact, I didn't know otherwise until the so-called experts told me). Even they have a redirect at the lowercase spelling. Also, don't blame Hobelhouse for the templates... those are my doing. Perhaps the article would be funnier if someone could confuse e e cummings with t s eliot, and have ee as the creator of Cats. In any case, there are plenty of articles with less redeeming value than this one. Or maybe I just have low standards today. -- SirBobBobBob ! S? [rox!|sux!] 22:02, 6 Dec 2005 (UTC) - They have a redirect from the lowercase, but it goes to the above-mentioned capitalization. Most people use the correct linkage when linking to it too. Unless it is exceptional (eg No Orleans) then the wrong title is just not really funny. --Splaka 22:08, 6 Dec 2005 (UTC) Katakyuk Joe You deleted this article yesterday, but it's back, created and edited several times by User:AirGunRunner (who also deleted an NRV tag added by EvilZak). I think it's still pretty much crap, probably vanity, but I'll leave it to you to swing the executioner's axe. Warning: the giant image on AirGunRunner's user page is a huge picture of a moose -- suitable for work, but not for dialup. -- Sir BobBobBob ! S ? [rox!|sux!] 14:55, 6 Dec 2005 (UTC) Ruining my Fun How am I supposed to carry out my sacred duties when you constantly undermine my authority? I mean, seriously. I ban people, and two minutes later you come along and negate 12/13ths of their ban. Talk about irresponsible and malicious. Worse? Your banatorial was lame and uninspired. You are a disgrace to this website. Go write an article or something. Sir Famine, Gun ♣ Petition » 01:12, 7 Dec 2005 (UTC) - Hehe, sorry. I didn't see your ban. But anyhow, your ban takes precidence as it was first. I banned someone for 24hrs, and then for 1 year (after seeing more what they'd done), and they came back vandalizing 24hrs later. --Splaka 01:30, 7 Dec 2005 (UTC) I did not know that about bans. Good to know. Currently, I spend my time here watching this trainwreck. I swear, Pavlov would be rolling over in his grave right now. Sir Famine, Gun ♣ Petition » 03:32, 9 Dec 2005 (UTC) Recent Oscar Wilde additions - Splaka, - I was wondering if you've noticed the recent changes to Oscar Wilde in the past few weeks ? I've revised the latestest additions to Oscar Wilde's life [2] , however even after reformating it seems suited more to the OSCAR WILDE Livejournal. MadMax 02:50, 8 Dec 2005 (UTC) - Hmm, that page gives me a headache, I just revert obvious abuse of it. Do what you want to him, I can't judge funny on it anymore! --Splaka 03:51, 9 Dec 2005 (UTC)) Something petty about Template:Moodring Shouldn't there be a link to the template under your templates section? ---:12, 9 Dec 2005 (UTC) He: Hi. Could the definitions for the He: pages be changed to RTL (Right To Left)? It would make my life a lot easier. Thanx anyway, Kakun 03:41, 9 Dec 2005 (UTC) - I am not sure I understand? You mean the content of these pages, or the links to them? Or something else? --Splaka 03:45, 9 Dec 2005 (UTC) - Splaka, - The Hebrew Uncyclopedia's text is displayed backwards and Elvis suggested asking at the Hebrew Uncyclopedia how their page is displayed and then to check with you if the display problem could be resolved by a javascript/css code. MadMax 00:42, 10 Dec 2005 (UTC) Huffination SPLAKA. Please can I finish editting my article before you delete it? I haven't used my mod powers for evil before, only for good, please don't force me to fall to the dark side. MY dad wouldn't stand a chance in a lightsabre fight. - Ok, what did I do now? --Splaka 09:09, 9 Dec 2005 (UTC) - Ahh k Victim. It looked like an empty piece of fluff, we've been highly trained by chron to huff such stuff. Since you've been so inactive I didn't recognize you as a fellow sys. Apologizes. But if an anon IP or new user had made such an article, it probably would have been deletable by a dozen different admin ^_^. --Splaka 09:14, 9 Dec 2005 (UTC) - Man, it's so tough watching all the yong padawans fall to the dark side. Easier, More Seductive. Yeah - I keep an eye on Uncyclopedia, but don't make many articles - the weight of rubbish that gets written depresses me. I remember the good old days. I would have deleted what I wrote too, if it were random IP stuff. But I've got pedigree. Although my browser has malfunctioned and I can't find the sign key :-p - Surrender to the dark side. Just type out 4 tildes: ~~~~ and you're sigged. --Splaka 09:32, 9 Dec 2005 (UTC) Jesii solution ? - Splaka, - I was curious what you thought of a 100 Worst Jesii list regarding the massive amounts of Unwanted Jesii the site seems to get regualarly ? It might prove to be a short term solution at the moment, but I thought the idea might be interesting as Worst 100 list's go. MadMax 00:36, 10 Dec 2005 (UTC) - If we can't just delete them all, and perma-ban all the editors, I guess that's a good second choice. Sir Famine, Gun ♣ Petition » 00:46, 10 Dec 2005 (UTC) Glad you implemented WOTD I'm glad you liked the word of the day idea and I am completely boggled by how well you implemented it; have another ninjastar: {{Ninjastar|Ninjastar.png|Original Ninjastar|I award Splaka a Ninjastar for the mind-boggling ludicrosity<br>that is his implementation of the word of the day.|gwax}} --Sir gwax (talk) 19:51, 10 Dec 2005 (UTC) End of the Road I basically added that article to lend some completely pointless credibility to my much longer Orkut article. End of the Road, Kentuckistan also happens to be a place in the short story "Blackberry Winter" by Todd Keisling, but I didn't expect anyone to get THAT ;P Basically I just wanted a really silly redneck place to link to in the Orkut article. Is that too silly and pointless? -- Bringa 03:06, 11 Dec 2005 (UTC) - Short worthless articles are pointless yes. "If it took you 10 seconds to write, it probably isn't funny." That wasn't even undictonary worthy. A red link is preferrable to such in most cases. A wiki full of stubs is not funny :/ Why not just put the info on the longer page until it is long enough to be its own article? --Splaka 03:08, 11 Dec 2005 (UTC) - Actually, you would be wrong in that regard - plenty of us will fill in the details of a fictional town. It's not like we have to get the details right now, is it? And besides, getting something horribly wrong and smashing someone else's dreams is something we do well here. But yeah, what Slpaka said - cram it all into one big article. If it grows so large that parts need to be spun off, they can be. Plan for one big article, as it's more likely to get done well than many disparate short ones. Sir Famine, Gun ♣ Petition » 03:32, 11 Dec 2005 (UTC) - FWIW: The so-called experts have never heard of this End of the Road, but they do have something about what I assume is the other end. "'End of the Road' is a 1992 #1 hit recorded by Boyz II Men for the Motown label." The article is part of the wikipedia:Category:Boyz II Men songs. Geez. They have a category for Boys, sorry, Boyz II Men songs, and they call us a parody? -- SirBobBobBob ! S? [rox!|sux!] 16:40, 14 Dec 2005 (UTC) Another Recreated Article - Splaka, The article Bush Family Tree was just recreated a few minutes ago. MadMax 06:27, 11 Dec 2005 (UTC) - Yah, been keeping an eye on it. I thought at first it was vandalism, but it seems someone is just trying to make the code work, I'll give it a while. Btw, you don't have to tell me each time something is recreated, putting it on the QVFD is usually enough (we do have 40+ admin) ^_^. But thanks for the vigil, it is appreciated. --Splaka 06:29, 11 Dec 2005 (UTC) - Sorry about that. I only let you know as I noticed it was deleted by you a minute before. I did put it on qvfd although it was removed by one of the users who created the family tree image. I wasn't sure what the status was on it, so I thought I'd mentio it in any case, before reposting the burn tag. Also if an editor removes an nrv tag before the seven day limit, should it be replaced even it there has been changes ? MadMax 06:46, 11 Dec 2005 (UTC) - Ahh k. The page is surely crap now, I'll have to huff it. And nrv/mtu tags should be readded if the page is not changed (except for tag removal). If the page is expanded in any way, it can be re-tagged with nrv, but shouldn't usually be reverted. If that makes sense. --Splaka 06:55, 11 Dec 2005 (UTC) - Alright then. So basicly any changes to the page get an extended deadline ? MadMax 07:03, 11 Dec 2005 (UTC) Pie rats are small peg-legged rodents known to have taken over the SS Sara Lee factory ship. Was Pie rats are small peg-legged rodents known to have taken over the SS Sara Lee factory ship. too subtle a joke for Uncyclopedia? Probably ought to go ahead and delete Category:Pre-stub too, since it can never have pages with real titles and it won't be very funny without something illustrating it. Sorry I have such low-brow humor. I'll try better next time. It did have content however—the category tag. Qwerty Uiop 06:27, 13 Dec 2005 (UTC) - So...why don't you write a real Pie rat article? Or if it's not enough material for a full page, maybe add a section on pie rats to the Rat article? I suppose the joke has potential, but an empty page, not so much... --—rc (t) 06:41, 13 Dec 2005 (UTC) Too Fast damn that was quick - I'd just hit delete on Christiano Ronaldo, and get an error because you were there faster than the speed of awesome TheTris 10:43, 14 Dec 2005 (UTC) Thanks - THank you for re direct of my page. I just started with uncyclopedia and just immigrated and not fuly under standing things yet--Maddie's life 02:18, 15 Dec 2005 (UTC) Template:Wargame Just when I think I've spent all night on IRC, y'all go and decide something without me. :( I added the Fall of Masada because it looks like it would have a particularly interesting ending. TK, big-time. On the other hand, it could have turned out to be total crap, which would of course have qualified it for VFH. Whining aside, though, I'd have to agree with removing it from the Bleeding Red Links unless/until I actually write the game... I'm just peeved that I didn't get to bitch about it on IRC first. Looks like I've taken care of that need, okthxbye. -- Sir BobBobBob ! S ? [rox!|sux!] 16:18, 15 Dec 2005 (UTC) - Man, that was like days ago. I consulted with a handful of other users, they agreed that there were way too many red links on the template (more than blue), very few were worth keeping. --Splaka 22:14, 15 Dec 2005 (UTC) Blesses Bless you for cleaning the AAAAAAAA! - Page - Danke o_O Zombieminion I belive I promised you and Dawg Zombieminions on IRC the other day for not blocking me...here it is. --Brigadier General Sir Zombiebaron 15:21, 8 Jan 2006 (UTC) User:Bloo Splaka I assure you that I am not Masterbloo. Masterbloo came up with his profile name and I just decided to name mine Bloo. So it is just mere co-incidence. And can you please unblock me from the IRC, and when I changed names I was only testing that name change thing that Keitei told me how to do and I was just to seeing if I did it properly. What's driving you? Are you or anyone of the admins paid or is this sheer hobby?--Suresh 10:53, 16 Dec 2005 (UTC) Error at line 11867 in Admin.php, question not understood, please clarify.--Splaka 10:58, 16 Dec 2005 (UTC) - Get paid? Hell, I've actually shelled out good money to play this insane game. Asking if we get paid to admin is like asking an alcoholic if he gets paid to drink. -- SirBobBobBob ! S? [rox!|sux!] 18:47, 16 Dec 2005 (UTC) I am not an admin and would refuse to serve if asked to carry a tray, but isn't Sir Splaka being driven by Dale Earnhardt Jr., BobBobBob (with newly overbored cylinders) by Jeff Gordon, and the Redoubtable Sir Famine by Matt Groenig in a stunning pink silk evening dress? I mean, I don't know what's driving them, but we can just imagine... ----OEJ 18:59, 16 Dec 2005 (UTC) I now suspect that admins motivs could add up to a nice conspiration theory.--Suresh 20:47, 16 Dec 2005 (UTC) - Hey, there is no cabal. --Ogopogo 22:34, 16 Dec 2005 (UTC) Deletion of Nynorsk Deletion too hasty? Consider NRV-tagging. Nynorsk is one of two official languages in Norway.--Suresh 11:35, 16 Dec 2005 (UTC) - No. Consider that we delete articles not only for being non-notable (as this wasn't) but for being total crap too: Nynorsk is an old and habbitated languages. The language is mainly spoken by cocksuckers and thaiwhores. The goverment of norway has added this language to fuck up the system and be more like Russia, it ain't always easy being like a communist all the time (the bullshit philosofie tha only cocksuckers uses) So stand together against Racisem, Nynorsk and Neo-Nazisem Fuck Ny Norsk Banned? Uh, hello? I just tried to create a User page and have already been banned before doing anything. Hope it's over soon.--Mike Nobody =/\= 06:26, 17 Dec 2005 (UTC) - I see no ban for that username. You'll have to be more specific as to what IP is blocked, which is affecting you. See: Special:Ipblocklist. Find the message that is displayed on the block text. Relay it to us. --Splaka 06:31, 17 Dec 2005 (UTC) - OrgID: AOL 22000 AOL Way Dulles VA 20166, IP range 152.163.0.0 - 152.163.255.255; unblocked it for now, but likely the same problems will recur the moment anyone on AOL does something annoying. Get an ISP, darnit! --carlb 18:42, 17 Dec 2005 (UTC) lol @ Wiki Administrators & AOL noobs --Nerd42 22:37, 17 Dec 2005 (UTC) Relativity now seem to have some redeeming value. I guess that removing the banner won't stop the ticking in the Worthless article file. Is it writer or admins job to remove NRV- block in article after changing content? --Suresh 13:08, 18 Dec 2005 (UTC) - The banner is what tags the article to the Worthless Articles category. If you've improved the article, remove the banner. If someone else reads the new version and still thinks it needs additional work, it might get tagged again (as NRV, or something else). -- T. (talk) 13:18, 18 Dec 2005 (UTC) Nazi cocksucker Redirects to Splaka. --Suresh 20:29, 19 Dec 2005 (UTC) - Yah, it is kinda cute, pathetic vandals ^_^. I'll keep it for a while. --Splaka 03:28, 20 Dec 2005 (UTC) New templates I got fed up with the norwegian male vanity flu, so I edited an own banner against them for VFD. Can I use it and save it as a template? --Suresh 20:29, 19 Dec 2005 (UTC) - Sure, call it Template:Vanity.no maybe ^_^. --Splaka 22:17, 19 Dec 2005 (UTC) Please undelete Your Penis Hi. Please undelete Your Penis. Its content was more than just "Your Penis is probably smaller than My Penis". I also specially-created the Penis Stub Template so that it would read in this particular file, "Your Penis is a stub. You can help Uncyclopedia by buying Viagra." The stub part was the joke, not the content. That was just placeholder text. Please undelete. Thanks. --RealGrouchy 05:46, 20 Dec 2005 (UTC) Tx: added a mock up of proposed lynx-style homepage at User:Isra1337/testpage. If it seems promsing I will finish it. --Isra1337 13:18, 20 Dec 2005 (UTC) A Little Revert War Help Hey Splarks, I'm wondering if I could get a little advice/help with a problem on Optimus Prime. I'm in a bit of a revert war (thrice reverted so far) with JumpinTuck who has done relatively little other than keep putting his crappy image up. Since it's not obvious vandalism, so much as just low quality material, I don't feel right going to ban patrol and since I don't have the complaining power of a SysOp, I thought that I'd turn to someone who does for help. Alternatively, if I'm wrong about this one,)}" > 15:23, 20 Dec 2005 (UTC) - It's not so bad. Actually, the other pictures look very dark on my screen . Would he/you be alright with adding it later in the article as an Optimus unmasked picture? Have you talkpaged with him about it? -- T. (talk) 16:50, 20 Dec 2005 (UTC) - Actually, never mind. Your feelings about the picture are abundantly clear. :) # (cur) (last) 22:13, 12 Dec 2005 Gwax (the foreman prime image sucks; if you want a revert war, you've got one) -- T. (talk) 16:57, 20 Dec 2005 (UTC) - Yeah, I guess I was a little too aggressive in my defense of Optimus Prime, I'll use the talk page and be a little more civil. --Sir gwax (talk) 17:34, 20 Dec 2005 (UTC) Thanks again, again Thanks again for reverting my user page after a vandal attack. I've finally gotten my very own stalker, with User:Bob is a bum. joining User:Uncyclopedia is stupid. and User:Splaka is dumb!! in the pantheon of really, really stupid usernames that have been permabanned. And the no-underline style looks tastes great, *and* is less filling. -- Sir BobBobBob ! S ? [rox!|sux!] 16:48, 20 Dec 2005 (UTC) - Yeah, it's kinda weird, but I've got one too. User:EMP doesn't seem to be on here for any reason other than to vandalise one of my pages. It's kind of sad that people with such cool user names could turn out to be such jerks. --Nerd42 17:16, 20 Dec 2005 (UTC) - Nerd-o, I realize your teeny-weeny anti-humour brain isn't capable of understanding this, but none of the no-doubt multitudes of people who are disgusted and probably made physically ill by your favorite "article" are going to put it on VFD, because that would just get more people to read it, which of course is exactly what you want. In fact, if you put it there, and I haven't been banned for life at that point, I'll delete the entry myself. Try to get a clue, hate-boy. I hear they're 20% off this week, just in time for Christmas! (Oh, but I guess you people don't celebrate that, do you.) --Johnny C. Raven 18:03, 20 Dec 2005 (UTC) - And while you're at it, maybe you should get rid of that sick comment you made on the Slavery article, too. And don't bother telling me that was a "joke," either. --Johnny C. Raven 18:06, 20 Dec 2005 (UTC) - Nerd42, I'll let Splaka deal with Mr. Raven, above. Meanwhile, to address the issue you raised, I've posted a warning to User Talk:EMP letting him know that admins look poorly upon those who would declare "war" upon each other. -- SirBobBobBob ! S? [rox!|sux!] 21:09, 20 Dec 2005 (UTC) Xenotransplantation I see you just made a change to the link put in my Undictionary entry for xenotransplantation. Big help, thank you. Now I can see how I should do it properly and copy it for subsequent links. Cheers. Ketlan Huffination Holy Crap with a capital K. Splaka is on a deletion roll! ] - (Deletion log); 23:33 . . Splaka (Talk) (deleted "User:Mobvok": content was: 'What a fag.') - (Deletion log); 23:33 . . Splaka (Talk) (deleted "Talk:2005": content was: 'blow me freaks') - (Deletion log); 23:33 . . Splaka (Talk) (deleted "Talk:Talk:Euroipods": content was: 'Euroipods r t3h 5h1zz.') - (Deletion log); 23:33 . . Splaka (Talk) (deleted "Talk:Talk:Talk:Euroipods": content was: 'It's good to talk') all in under a minute, averaging out at 1 page deleted every 15 seconds. wow. --ONX 23:36, 28 Dec 2005 (UTC) - Woot! Actually you'll see that a lot, when we huff things by category. I went through all the new non-article pages created since yesterday, opened each in a tab, went through either closing the tab (if acceptable) or clicking 'delete' (if crap), then cycling back through clicking [yes really delete this] on the delete page for the 4 that didn't make par. You'll see hundreds of deletions per hour when we clean out NRV and MTU and such. --Splaka 23:39, 28 Dec 2005 (UTC) - Thumb clicking is even faster: - 16:51, 26 Dec 2005 Gwax deleted "Root of all evil" (NRV:Dec16th content was: 'The Root of all evil is, in most opinions, considered to be seagulls. In some western parts of Russia, though, it's believed to be the headache yo...') - 16:51, 26 Dec 2005 Gwax deleted "Road rage" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Rendoosia" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Religiouseductionteachers" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Reggie Fils-Aime" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Ravenous Bugblatter Beast of Traal" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Ratko Mladic" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Ragnvald" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Rachmaninov" (NRV:Dec19th) - 16:51, 26 Dec 2005 Gwax deleted "Poohman" (NRV:Dec19th) - --Sir gwax (talk) 16:21, 29 Dec 2005 (UTC) - My delete sprees are even crueler! - 09:45, 29 Dec 2005 Dawg deleted "Ässpoo" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "William H. Gates" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Vapourware releases" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Vagoo" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Super bowl" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Tuba" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Toxic avenger" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Snus" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Sian n rizzo" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Short Circuit" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Shithead" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Shield" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Ring of Fire" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Robert Kennedy" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "RMDB" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "RTC" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Pepino di Caprio" (NRV Expired) - 09:45, 29 Dec 2005 Dawg deleted "Poser goths" (NRV Expired) » Brig Sir Dawg | t | v | c » 16:52, 29 Dec 2005 (UTC) - If it is just about size, observe: - 02:09, 13 Dec 2005 Splaka deleted "Poptarts" (content was: '#REDIRECT Pop-tarts') - 02:09, 13 Dec 2005 Splaka deleted "Santa Clara" (content was: '#REDIRECT Saint Claire') - 02:09, 13 Dec 2005 Splaka deleted "Cotton Gin" (content was: '#REDIRECT Cotton gin') - 02:09, 13 Dec 2005 Splaka deleted ""Accidentally" exposing your breast at the Super Bowl as a publicity stunt" (content was: '#REDIRECT wardrobe malfunction') - 02:09, 13 Dec 2005 Splaka deleted "Luke skywalkers heterosexuality" (content was: '#REDIRECT Luke Skywalker's heterosexuality') - 02:09, 13 Dec 2005 Splaka deleted "Numismatist" (content was: '#REDIRECT Numismatists') - 02:09, 13 Dec 2005 Splaka deleted "Dammann" (content was: '#Redirect:Da Man') - 02:09, 13 Dec 2005 Splaka deleted "100 Years War" (content was: '#REDIRECT 100 Years war') - 02:09, 13 Dec 2005 Splaka deleted "Priest Cthulhu" (content was: '#REDIRECT Priest of Cthulhu') - 02:09, 13 Dec 2005 Splaka deleted "Quotes" (content was: '#redirect Quote') - 02:09, 13 Dec 2005 Splaka deleted "John durrant" (content was: '#REDIRECT John Durrant') - 02:09, 13 Dec 2005 Splaka deleted "Clipart" (content was: '#redirectClip Art') - 02:09, 13 Dec 2005 Splaka deleted "Complex Number" (content was: '#REDIRECT Complex number') - 02:09, 13 Dec 2005 Splaka deleted "Peter Mckinlay" (content was: '#REDIRECT Peter McKinlay') - 02:09, 13 Dec 2005 Splaka deleted "Billy Connoly" (content was: '#REDIRECT Billy Connelly') - 02:09, 13 Dec 2005 Splaka deleted "Margaret tatcher" (content was: '#REDIRECT Margaret Tatcher') - 02:09, 13 Dec 2005 Splaka deleted "Dame edna" (content was: '#REDIRECT Dame Edna') - 02:09, 13 Dec 2005 Splaka deleted "Varun choda" (content was: '#REDIRECT Varun Choda') - 02:09, 13 Dec 2005 Splaka deleted "Spy vs spy" (content was: '#REDIRECT Spy vs Spy') - 02:09, 13 Dec 2005 Splaka deleted "Talius Del'Agriel" (content was: '#REDIRECT Priest of Cthulhu') - 02:09, 13 Dec 2005 Splaka deleted "Rivers cuomo" (content was: '#REDIRECT Rivers Cuomo') - 02:08, 13 Dec 2005 Splaka deleted "Carlos santana" (content was: '#REDIRECT Carlos Santana') - 02:08, 13 Dec 2005 Splaka deleted "Judd nelson" (content was: '#REDIRECT Judd Nelson') - 02:08, 13 Dec 2005 Splaka deleted "Grandizer" (content was: '#redirect Goldorak') - 02:08, 13 Dec 2005 Splaka deleted "Northfield mount hermon" (content was: '#REDIRECT Northfield Mount Hermon') - 02:08, 13 Dec 2005 Splaka deleted "User:Chomu Sclavus/clipboard" (some unused redirects -- first up: content was: '#REDIRECT Heartlessness') - And I didn't even artificially have to make that just for the purpose of this discussion, *cough*dawg*cough* ^_^ ... see all (huge link) --Splaka 22:10, 29 Dec 2005 (UTC) Centering the table on User:Nonymous/eurotest Know of any way to center the table and use Image:Therest.gif as the background? - Nonymous 23:17, 29 Dec 2005 (UTC) - Looks like the auto-margins were your problem on the centering. You can't do background images in wiki tables, the code is ignored. --Splaka 23:22, 29 Dec 2005 (UTC) Polish BobBobBob ! S ? [rox!|sux!] 21:33, 30 Dec 2005 (UTC) - Better ask Codeine about it, he was the polish huffinator. --Splaka 21:35, 30 Dec 2005 (UTC) adslgr.com I've seen a notice made by you on my article adslgr.com that it will be deleted in 5 days and i decide to contact you in order to find out why this is so I read about the typical characteristics of articles that are deleted from uncyclopedia and apart from the obvius non candidates i excluded i found only some possible reasons and would like to list em Advertising: [] is a greek site writen in greek and frequent by Greeks, uncyclopedia on the other hand is an english site frequent mostly by non-Greeks (aka Barbarians) therefore advertising adslgr, which is not even my site and by the way has in deed over 15000 members, would be futile. In the contrary i found out about Uncyclopedia in adslgr.com Unrelevance: the article first of all deals with a site that is Greece's bigest forum about broadband connections in Greece and the internet in general , besides that it is a subtle critisism of the way the major ISP's in Greece deal with broadband connections, Greece at the moment is the home of over 12 million citizen (i know this is small but we re working on it, round the clock, u can come do your bit next summer). Also on this point i would like to say that i have seen many articles whose subject i failed to recognise , still i found em funny. Unfunnyness: This is the only reason i would accept if u said so to me to be true, since Humour is a matter of relevance. Please consider this and give me a reply ps: If u go ahead and delete my article i will kill that cat [[3]] ps 2: heres my only other article The great 1927 riot of Poughkeepsy - There are other reasons to delete articles, other than 'not being allowed' or 'not funny'. Sometimes they are just bad. Note that the NRV tag is actually 'giving the article a chance'. First and foremost, do not sign your articles. You sign talk pages and vote pages, but not articles. Second, it has no formatting and no links. It just looks like a bad copy/paste (eg, it is plain text, not an article). Thirdly, it has the potention to incite vanity (which happens anytime you write about a forum site, we get that a lot actually). Fourthly, 15,000 is just barely notable. Note for example that the english Wikipedia has no article with this title. Given all these reasons, I feel justified in NRV tagging it (you have more like 11 days by my calculations). - Things that can be done: For the first two, it just needs to be expanded/cleaned up. For the third, just make sure that if it does start mentioning individual members and in-jokes, that the page gets tagged with {{vanity}} (showing that it is allowable vanity). For the fourth, I don't think anything can be done to make it more notable in just 7 days. The best route to go might be {{vanity}}. See ZFGC for an example of allowable community forum vanity. --Splaka 23:10, 4 Jan 2006 (UTC) wikipedia actually has a link to adslgr.com under the DSL around the world article the link is titled "all you need to know about adsl in greece" - But, we don't care about what wikipedia has. The article itself is still bad, and unless someone makes it better, I'm deleting it at the end of today.-- 17:43, 5 Jan 2006 (UTC) well you mentioned wikipedia, User 66.117.168.123 eats Hay Just wanted to thank you for dealing with the defilements on Hay. As much as I'm honored that my page is worthy of being defaced, I like what I had better that his "Justin and Trina" rubbish. Thanks again!! --Simulacrum Caputosis
http://uncyclopedia.wikia.com/wiki/User_talk:Splaka/Archive2
CC-MAIN-2015-14
en
refinedweb
07 November 2007 17:11 [Source: ICIS news] LONDON (ICIS news)--Belarusian Potash Company (BPC) has resumed potash sales and increased prices in response to strong demand in major markets, a supply shortage and rising freight rates, a company official said on Wednesday. ?xml:namespace> BPC temporarily suspended entry into new potash sale contracts on 29 October following concerns about supply disruptions at Silvinit and in response to a similar decision by PotashCorp in ?xml:namespace> A source at International Potash Company, marketer of Silvinit potash, said on 6 November that there had been no major changes in the sinkhole that could affect its rail links and that product continued to be shipped as usual. Other market sources have said that there had been something of an overreaction to the situation. Given market fundamentals, the BPC official said it had raised standard MOP (muriate of potash) prices to Asian spot markets by $40/tonne (€27.60/tonne) to $400/tonne CFR (cost and freight) effective immediately and until further notice. In addition, BPC announced that spot granular MOP prices in BPC markets potash
http://www.icis.com/Articles/2007/11/07/9076908/bpc-resumes-potash-sales-raises-prices.html
CC-MAIN-2015-14
en
refinedweb
Welcome to Haskell IDE plugin for amazing Atom editor! This plugin is intended to help you in Haskell developing. Haskell IDE is currently in active development state. Haskell IDE works only with cabal projects. You can simply start Atom from cabal project root or drag and drop cabal project folder on editor and plugin will be started automatically. After saving the current file the check and linter processes will be executed. After processes are finished the results can be seen in plugin output panel. You can see different kind of results by switching Errors, Warnings and Lints tab buttons. By pressing with mouse button on any result inside output panel the Atom editor will open the appropriate file with cursor already at the position of this result. Also all the results can be seen near the line numbers if you position the mouse cursor over the handsome icon. And of course the results are highlighted inside editor view so you can easily locate where the problem is. Just position your mouse cursor above expression you want to know the type of and wait for some time. Tooltip will appear with everything you want to know. Remember that you need autocomplete-plus package to be installed to use Haskell IDE autocompletion feature. Autocompletion feature works for pragmas like LANGUAGE and OPTIONS_GHC. Also autocompletion works for import keyword. And of course autocompletion feature works inside functions to make your Haskelling happier. Not all the things I wanted from this feature was implemented. That is why autocompletion is subject to change the way you want! So you are welcome with suggestions how this feature can be changed to make your work with Haskell code more comfortable. Pelease, write issues with enhancement of autocompletion here. Now you can use stylish-haskell utility to indent pragmas, imports and data type definitions. Simply select Prettify from Haskel IDE menu or press magic combination of buttons to apply stylish-haskell to current file. $ apm install ide-haskell Open ~/.atom/config.cson by clicking Open Your Config in Atom menu. Manually add ide-haskell plugin section as in example below. 'ide-haskell': 'ghcModPath': '/path/to/ghc-mod' 'stylishHaskellPath': '/path/to/stylish-haskell' Following entries are also customizable in ide-haskell section ghcModPath- path to ghc-modutility stylishHaskellPath- path to stylish-haskellutility checkOnFileSave- check file after save (defaut is true) lintOnFileSave- lint file after save (defaut is true) switchTabOnCheck- switch to error tab after file check finished (defaut is true) expressionTypeInterval- after this period of time the process of getting the expression type will be started (milliseconds, default is 300) Changelog is available here. See the LICENSE.md for details. Good catch. Let us know what about this package looks wrong to you, and we'll investigate right away.
https://atom.io/packages/ide-haskell
CC-MAIN-2015-14
en
refinedweb
Point2D.Float cast to Floatjimbo8 Oct 15, 2010 6:35 AM I noticed in the swingx BusyPainter code (fragments below) that Point2D.Float is being assigned/cast to Float, how does this work? How can they do this Float center = new Float(sp.x, sp.y) ? In paintRotatedCenteredShapeAtPoint p which was defined as Point2D.Float is being passed as a Float. Can someone please explain? How can they do this Float center = new Float(sp.x, sp.y) ? In paintRotatedCenteredShapeAtPoint p which was defined as Point2D.Float is being passed as a Float. Can someone please explain? ... Point2D.Float sp = new Point2D.Float(); Float center = new Float(sp.x, sp.y); center.x = ((float) width) / 2; center.y = ((float) height) / 2; // draw the stuff int i = 0; g.translate(center.x, center.y); for (Point2D.Float p : pList) { drawAt(g, i++, p, center); } } private void drawAt(Graphics2D g, int i, Point2D.Float p, Float c) { g.setColor(calcFrameColor(i)); paintRotatedCenteredShapeAtPoint(p, c, g); } private void paintRotatedCenteredShapeAtPoint(Float p, Float c, Graphics2D g) { Shape s = getPointShape(); double hh = s.getBounds().getHeight() / 2; double wh = s.getBounds().getWidth() / 2; double t, x, y; double a = c.y - p.y; double b = p.x - c.x; This content has been marked as final. Show 3 replies 1. Re: Point2D.Float cast to Floatjduprez Oct 15, 2010 7:26 AM (in response to jimbo8)<nonsense deleted /> Edited by: jduprez on Oct 15, 2010 2:23 PM Look at the imports: So they can use Float and Point2D.Float interchangeably, to mean java.awt.geom.Point2D.Float import java.awt.geom.Point2D; import java.awt.geom.Point2D.Float; What I don't get is, how does either of that not conflict with java.lang.Float? Edited by: jduprez on Oct 15, 2010 2:25 PM 2. Re: Point2D.Float cast to FloatDarrylBurke Oct 15, 2010 8:20 AM (in response to jduprez) jduprez wrote:From the JLS What I don't get is, how does either of that not conflict with java.lang.Float? [url]Single-Type-Import Declaration A single-type-import declaration d in a compilation unit c of package p that imports a type named n shadows the declarations of: <li> any type named n imported by a type-import-on-demand declaration in c. [url]Automatic Imports Each compilation unit automatically imports all of the public type names declared in the predefined package java.lang, as if the declaration: <tt>import java.lang.*;</tt> appeared at the beginning of each compilation unit, immediately following any package statement. The java.lang automatic import constitutes a type-import-on-demand declaration. db 3. Re: Point2D.Float cast to Floatjduprez Oct 15, 2010 2:25 PM (in response to DarrylBurke)Accurate. Concise. Neat. Thanks Darryl.
https://community.oracle.com/message/7008980
CC-MAIN-2015-14
en
refinedweb
. The validate_unique parameter was added to allow skipping Model.validate_unique(). Previously, Model.validate_unique() was always called by full_clean.atically..] fields an UPDATE. - If the object’s primary key attribute is not set or if the UPDATE didn. Previously., including how to delete objects in bulk, see Deleting objects. If you want customized deletion behavior, you can override the delete() method. See Overriding predefined model methods for more details. Other model instance methods¶ A few object methods have special purposes. Note On Python 3, as all strings are natively considered Unicode, only use the __str__() method (the __unicode__() method is obsolete). If you’d like compatibility with Python 2, you can decorate your model class with python_2_unicode_compatible(). _: from django.db import models: from django.db import models from django.utils.encoding import force_bytes specfication,..
https://docs.djangoproject.com/en/1.6/ref/models/instances/
CC-MAIN-2015-14
en
refinedweb
How do I localize strings? How do I localize strings? Hi all; When designing with Architect, how do I put all strings in a separate resource file that we can change based on the language of the user? Or is another approach recommended? thanks - dave SA does not have any turn-key solution to this. I have done the following: - Put your strings in a separate namespace in a separate JS-file. - Load that file before you include your Ext JS file in the HTML-page - In you SA project, add processconfigs to all things you need to localize and set the text/html/title/fieldLabel properties /Mattias OW! I was afraid it was something like that. thanks - dave At SenchaCon I discussed this with a couple of people and Sencha engineers and came up with an alternative approach that I can share with you. This may or may not fit your project, especially as it involves overriding components on a high level that may cause performance issues. The basic idea is to override Ext.Component along the lines below. The namespace VPCalcDesktop is the app's namespace and VPCalcLang is a file containing the following code and that is required in the SA project: Code: Ext.define('VPCalcDesktop.Locale',{ override: 'Ext.Component', constructor: function(cfg) { if (cfg) { this.parser(cfg, 'title'); this.parser(cfg, 'text'); this.parser(cfg, 'labelFieldSet'); this.parser(cfg, 'boxLabel'); this.parser(cfg, 'fieldLabel'); this.parser(cfg, 'label'); } this.callParent(arguments); }, parser: function(config, property) { if (config[property]) { var text = config[property]; if (text.substr(0, 1) == '#') { var result = VPCalcLang[text.substr(1)]; } config[property] = result||text; } } }); Code: var VPCalcLang=VPCalcLang || {}; VPCalcLang=eval('({"indata_tab1":"Projektinformation","indata_tab3":"Installation","indata_tab4":"Driftparametrar"})'); (I am developing a Desktop and Touch version of the same application, with different names but the strings are the same hence the different namespace for the strings). I think this would kill us as we have around 4,000 strings. I'm hoping they can add something where there is a resource.js that is just a ton of variables that have strings assigned to them. And then also a resource_en.js, resource_de.js, etc. and it loads the appropiate one. thanks - dave
http://www.sencha.com/forum/showthread.php?270522-How-do-I-localize-strings
CC-MAIN-2015-14
en
refinedweb
PrintSystemObject Class Defines basic properties and methods that are common to the objects of the printing system. Classes that derive from this class represent such objects as print queues, print servers, and print jobs. System.Printing.PrintSystemObject System.Printing.PrintFilter System.Printing.PrintPort System.Printing.PrintQueue System.Printing.PrintServer System.Printing.PrintSystemJobInfo Namespace: System.PrintingNamespace: System.Printing Assembly: System.Printing (in System.Printing.dll) The PrintSystemObject type exposes the following members. In addition to being the base class for print system objects, this class can be useful for calling methods when your application does not know or does not care what particular type of print system object it is using. For example, you could enumerate through a PrintSystemObjects collection of different object types, calling the Commit method on each of them in turn. If you want to print from a Windows Forms application, see the System.Drawing.Printing namespace.Notes to Inheritors If you derive a class from PrintSystemObject, you may want to derive a collection of objects of that class from PrintSystemObjects.
https://msdn.microsoft.com/en-us/library/system.printing.printsystemobject.aspx
CC-MAIN-2015-14
en
refinedweb
15 January 2013 18:06 [Source: ICIS news] Correction: In the ICIS news story headlined "US propylene contracts for January settle 15 cents/lb higher" dated 15 January 2013, please read in the sixth paragraph …up 36% from 53.25 cents/lb in mid-November… instead of … up 36% from 53.25 cents/lb in mid-December…. A corrected story follows. HOUSTON (ICIS)--US propylene contracts rose by 15 cents/lb ($331/tonne, €248/tonne) for January, lifted by a surge in spot prices in the past few weeks and strong demand, market sources said on Tuesday. The 26% increase puts polymer-grade propylene (PGP) contracts at 73.00 cents/lb and chemical-grade propylene (CGP) contracts at 71.50 cents/lb. A large increase for January had been expected after spot prices surged in the past few weeks on the back of plant outages, including an unexpected shutdown at PetroLogistics’ propane dehydrogenation (PDH) plant in ?xml:namespace> Several US producers originally had nominated increases of 11.50 and 13.00 cents/lb for January, but one of the those suppliers later bumped the initiative to 15.00 cents/lb as spot prices continued to rise while negotiations got under way. PGP for January traded on Monday at 72.25 cents/lb, up 36% from 53.25 cents/lb in mid-November, while refinery-grade propylene (RGP) traded at 69.00 cents/lb on Tuesday, rising by nearly 40% from deals done at 49.50-50.00 cents/lb four weeks earlier. The surge in the price of RGP, which accounts for about 60% of the Market sources also cited stronger demand, saying PGP supply restrictions resulting from unplanned outages created additional demand for RGP as a feedstock for the higher-grade monomer. PGP demand also strengthened on its own, sources said, adding that buyers flocked to the market in the second half of December once it became clear that a massive increase for propylene was in the pipeline for January. With the January settlement completed, market attention quickly will shift to February, but sources said the outlook for next month is unclear, noting that it is too soon to tell how the January increase will affect demand. Another source said trends in the Major The main buyers include Dow Chemical, INEOS, Ascend Performance Materials and Total. (
http://www.icis.com/Articles/2013/01/15/9631988/corrected-us-propylene-contracts-for-january-settle-15-centslb-higher.html
CC-MAIN-2015-14
en
refinedweb
» Mobile » Android Author Service being destroyed? Sean Michael Hayes Ranch Hand Joined: Feb 08, 2012 Posts: 54 I like... posted Mar 13, 2013 11:41:22 0 I have an App that is listening on a certain port for messages to come through. I need this service to run as long as the Application is running on the phone. However, the problem I seem to have is that the Service seems to die seconds after it is created in the code and then started again according to the logs and then seems to die again later. What is weird is that the destruction of the Service is not logged despite me putting a log message in the OnDestroy() method to inform me if it dies. here is the Service being launched in my main code. try { Log.e("address", trapReceiver.getLocalIpAddress()); startService(new Intent(context, snmpService.class)); Log.e("", "Succeeded"); } catch (Exception e) { Log.e("", "failed"); e.printStackTrace(); } And here is the service, I am using the setForeground method as that seems to be the solution according to Google public class snmpService extends Service { @Override public IBinder onBind(Intent arg0) { // TODO Auto-generated method stub return null; } @Override public void onCreate() { super.onCreate(); setForeground(true); Log.i("snmpService", "started"); Log.i("", trapReceiver.getLocalIpAddress()); trapReceiver snmp4jTrapReceiver = new trapReceiver(this); try { snmp4jTrapReceiver.listen(new UdpAddress(trapReceiver.getLocalIpAddress()+"/5228")); } catch (IOException e) { e.printStackTrace(); Log.e("", "service error"); } catch (IllegalArgumentException e) { Toast t = Toast.makeText(this, "Unable to start listener service, please check your internet connection", 3000); t.show(); Log.e("", "service error"); } } @Override public void onDestroy() { super.onDestroy(); Toast.makeText(this, "Service destroyed ...", Toast.LENGTH_LONG).show(); Log.e("here", "end"); } } Steve Luke Bartender Joined: Jan 28, 2003 Posts: 4181 21 I like... posted Mar 13, 2013 18:15:06 1 You should temper the use of setForeground(): it really is used for cases when the user should be aware of the service, and actively cares about it. The setForeground() method itself is badly depricated: it isn't even in newer API levels so if you use it then the app might not work on newer devices. See this for the alternative and a painful method of supporting bot the old and new APIs. I think the real fix for you is to bind to the server in the application. A service that is bound to an application is not likely to be restarted, and since you seem to indicate the life of the service should be dependent on the life of the application, I think the idea of binding to the service matches the requirement. The IBinder you create will also gives you a channel to communicate with the application or activities (as I am sure is necessary, since you have incoming messages...). Since you want the service to live as long as the application, and not an activity, you should do the binding in your application object. Binding in the application object has its own difficulties, since it is never clear exactly when the application ends, so you have to find signals that come close enough to indicating the application is closed. One that I have used is the onTrimMemory() method. For example, check if the parameter is >= TRIM_MEMORY_BACKGROUND and it would be a good signal that your app is no longer in use, and it is a good time to stop the service (by unbinding from it). It is a little more difficult to get a good prediction of when the application comes back to the foreground. Say for example the user stops using the app for an hour, the OS sends the TimMemory() method, but does not kill the application. Then the user re-launches the application. You may be without the service. A method I use to get around this is to have activities register with the application in their onResume() method and deregister in their onPause() method. This way, I can detect when the application comes to the foreground (there are 1+ registered activities) and when it goes to the background (0 registered activities). When the first activity registers, I bind the services, when the last one deregisters I unbind the service - letting it die (this usually allows the service to end sooner than waiting for the onTrimMemory() callback, which I see as a positive in terms of consuming less of a user's battery and bandwidth). I also have the application provide the IBinder to any interested activities so they can communicate with the service when required. Steve I agree. Here's the link: subject: Service being destroyed? Similar Threads Problem with Notifications when calling them from a Non-Activity class Application nor run in the background Beginning Android, activity / service question send scheduling sms through android Android CustomListView getView() getting nullpointerexception All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/607207/Android/Mobile/Service-destroyed
CC-MAIN-2015-14
en
refinedweb
The Making of the Interactive Treehouse Ad Published by. The Idea - Have a "canvas" (not <canvas>, just an area where things happen) - Have a series of five bits of simple jQuery code - Click a button, code runs, something happens in canvas - You can see the code you are about to execute - It should be visually clear the code you see is what is running and making the changes - The whole time, the canvas area is a clickable ad for Treehouse (users aren't required to interact with the jQuery stuff, it's still just an ad) - You can start over with the interactive steps when completed Here's what we end up with. Demo on CodePen: The HTML Relevant bits commented: <div class="ad"> <!-- Our jQuery code will run on elements in here --> <div class="canvas" id="canvas"> <a id="treehouse-ad" href="" target="_blank">Team Treehouse!</a> <img src=""> The CSS. The JavaScript Now we need to make this thing work. Which means... - When button is clicked... - Execute the code - Increment the step - If it's the last step, put stuff in place to start over The Structure). The "Steps" We'll have five steps. As in, the different bits of jQuery that execute and do different things to our ad canvas. - Add a "button" class to the link - Move it up a little bit (demonstrating animation) - Change the text of the button - Add a new element - Add class to the entire ad, giving it a finished feel We'll also need to do two things with each bit of code: - Display it - Execute it(). The Animation To make it visually clear what is going on when you press the "Run Code" button, I thought having the code kind of slide up and fade into the "canvas" would be a cool idea. The steps then become: - Make a clone/duplicate code area directly on top of existing one - Animate position upwards while fading out - Remove clone when done - Wait to change code text until animation has completed Guts(); The End. That’s brilliant! I can see people copying same idea for their websites for interaction with visitors and to keep them longer on the website. And, yes I am going to try it on my site too. Why not store the steps in the array as functions, then call .toString()on them to extract the code? Here’s a quick sample: Looks like a great way of handling it to me! Coming to think of it, there’s just one problem with this solution: you can’t minify the JS. You’ll have to maintain the function and the display text separately. I’ve been using that sort of structure on all my JS lately, and I find it’s so much nicer. One thing I like to do with my code, which I picked up from others, is use a “settings” object to hold any cached items and values. So, instead of just declaring a bunch of naked variables at the top, it looks something like this: So, as I’ve done in the init()function, I can access any of those settings using “s.whatever”. The only flaw is that you have to define the settings at the top of each function using “this.settings”. I don’t know if this is the best way to do this sort of thing, but I find it saves a bunch of characters, so you don’t have to do “ModuleName.variableName” every time you want a cached object. Just be careful with that global. You want to properly declare that: Yes, that’s right. My example was more or less theoretical. Not meant to be copy/pasted. I usually do this something like this at the top: And along those same lines, the other code should be “var ModuleWhatever…” because that too was global as I had written it. Well in that case, there’s no reason for s = this.settingsat the beginning of each function. Once you did that in your initfunction, it’ll be available throughout your module, since they all have access to the parent closure. Yes, you’re right… Hmm, that actually makes it much easier. I think some of my old code has a few redundancies! :) Thanks for pointing that out. Pretty cool… but then you have that global “s” which scares me. I think I’d prefer the repetitive s = this.settings;in general. @Chris I don’t think we’re talking about polluting the global namespace here. We’ll have all this in its own closure: True. I really love this ad. A lot of my job involves trying to make our advertisements more interesting and evaluating statistics about our advertisements. Your ad is interactive, it’s catchy, it’s very well targeted at the proper demographic… It’s pretty much perfect. Good job, as always. :) I too have been structuring a lot of my javascript like this lately. Some things I hadn’t considered in there as well though. One thing that I have been fighting with lately is putting jquery in my functions/modules. Obviously it could have no merit if I just know jquery will be there but I still for some reason have been fighting with relying on jQuery to be there or not. I’d be interested to see the statistics for this ad. What percent go through all five steps and click the button, which go through all five but don’t click, do some stop at step 3, etc. Would be an interesting study since this form of advertisement is very unique. Chris: Why do you need to declare e.preventDefault() in this block of code? Just curious… Thanks! I probably have a href=”#” on the link so it has a proper cursor and hover state and stuff. But that href will jump to the top of the page if that line isn’t there. That line prevents the browser from following the link in the href attribute of the a tag. Awesome! I should have been using this for a while now ;). Thanks! Not to be a downer on this “interactive” ad but it took me forever to even figure out what part of the ad was “Interactive” AFTER being told it was. This is due to the fact that I saw the green “treehouse” box and thought that was the ad and everything else was unrelated content. It took me a long time to read the small paragraph below the code that said it had something to do with Treehouse. I think this is because of the dotted border around the main Treehouse ad area. It separates it, and since I’m not used to seeing interactive ads it requires a big jump for me to associate the content inside the box with the content below. While it may be cool interaction it fails on a basic level because it takes so long to figure out that it’s somehow related to Treehouse. I’d be curious to see if the number of people interacting with the content is worth making it an interactive ad? Thanks for the thoughts. All good stuff to consider in the future. I feel like “Run Code” is a pretty strong verb-y button and curious folks might just click to see what it does. It’s also kind of good that you did see the Treehouse area and just glossed over the stuff below it. It needs to be an ad first and anything else second. Awesome as always Chris!! Thanks! @Chris – “Run Code” can / will also scare some, already overly-cautious, users out there…. lol @Louis – THANK YOU for bringing up the “settings” object…. I now have realized that I have some reduncancies to clean up as well…. hahaha…
https://css-tricks.com/treehouse-ad/
CC-MAIN-2015-14
en
refinedweb
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 5 results of 5 This time for src/compiler/generic/vm-tran.lisp. I was a little unsure how to separate the macroexpanded code out into a separate function, so I left that part until later. -- Nathan | | veritas aeterna Yes, God had a deadline. So He wrote it all in Lisp. Lisp. Everything else is just Turing complete. Both bug 18 and bug 29 are concerned with compilation bugs, that seem to have gone in current CVS-SBCL (sbcl-07pre117): I can't reproduce bug 29. If I compile the code given for bug 18, there is no problem. Then, if I set SB-C::*CHECK-CONSISTENCY* to T (as the text talks about), I get a different error-message then in the bug-description: "ILISP: The target for #<SB-C:TN-REF :TN #<SB-C:TN t1[Const7]> :WRITE-P NIL :VOP SB-C:CALL-NAMED> isn't complementary WRITE-P." which doesn't help me a lot either ... but perhaps someone more knowledgeable can elaborate a bit on this? Cheers, Martin -- Martin Atzmueller <martin@...> Attached is a patch and some tests for bug 38: "DEFMETHOD doesn't check the syntax of &REST argument lists properly, accepting &REST even when it's not followed by an argument name: (DEFMETHOD FOO ((X T) &REST) NIL)" The buggy behavior also happened for DEFGENERIC, which has been fixed, too. I've also worked on a FIXME, because I think SBCL should signal an error, in case there is a non-standard lambda-list-keyword in the argument-list, e.g. (DEFMETHOD FOO ((X T) &REST Y &WHOLE Z) NIL) should probably signal an error. (similar to DEFGENERIC). Cheers, Martin -- Martin Atzmueller <martin@...> As I'm typing this off-line my comments might arrive pretty late. Sorry for that. On Sat, Jan 05, 2002 at 02:36:53PM -0600, William Harold Newman wrote: > However, ANSI CL pathnames layered over POSIX filesystems don't seem > to be so bad for working in their own little closed world in a > subdirectory somewhere, manipulated only by cooperative programs which > don't introduce weird things like symlinks and permission problems. I have tried this method with success in two commercial products. But the sacrifices people have to make seem excessive sometimes. Notice that the hard-core free-clim/McCLim people already introduced problems in their source base: $ find ~/fakeroot/work/sourceforge/McClim/McCLIM -type d -not -name CVS /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Doc /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Spec /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Spec/html /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Spec/src /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Backends /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Backends/CLX /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Backends/OpenGL /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Backends/OpenGL/archives /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Examples /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Lisp-Dep /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Tools /home/pvaneynd/fakeroot/work/sourceforge/McClim/McCLIM/Tools/gilbert Notice the mixed-case directories. This makes using a LPN for the defsystem already a world of pain. > (I do question the > priorities involved. A lot of work and pages in the standard andi > symbols in the CL namespace went into this which might've been better > spent in standardizing something more fundamental like multithreading > or a minimal standard C FFI. But I think the portable filesystem layer > has some value.) /me nods > There's only one real problem I'm aware of for using the portable > layer over POSIX: the Unix idea that each directory is a file. This > idea appears, in a non-fundamental way, in the original bug report > above. In that bug report, my answer is "fix the application". But > unfortunately it's not always that simple. For someone who's only read > the ANSI CL spec and never heard of Unix, the idea that e.g. after > (ENSURE-DIRECTORIES-EXIST "stubs/z/zone.lisp") > you can't do > (OPEN "stubs" :DIRECTION :OUTPUT) You can on some unix versions. I think on solaris at least this works. > At least I don't think they mix cleanly. I certainly don't see a > good fix for the problem. The expected portable Common Lisp behavior > *could* be recovered by messing with the mapping of Lisp filenames > onto Unix filenames. E.g. ordinary filenames, but not directory > filenames, could be modified by e.g. prepending an ASCII character > which is not STANDARD-CHAR-P. But then you'd lose the ability to work > with most files not written by SBCL (!) and I'd be surprised if many > people thought that was a price worth paying for removing this > particular surprise. Notice that this is a documented behaviour in the spec see section 19.2.2.1.1 . Not that I would advice to use this of course... Groetjes, Peter -- It's logic Jim, but not as we know it. | pvaneynd@... "God, root, what is difference?" - Pitr| "God is more forgiving." - Dave Aronson| Christophe Rhodes writes: > On Fri, Jan 04, 2002 at 04:43:38PM +0100, Raymond Wiker wrote: > > Martin Atzmueller writes: > > > I suppose this happened while building sbcl-07pre112 with sbcl-07pre112 > > > ... > > > Well, I just built sbcl-07pre112 with itself (on X86/Linux), so I guess > > > this is really specific to FreeBSD. > > > > No, I'm using 0.pre7.34. I had the same problem building > > 0.pre7.110 a couple of days back, but I got managed to get a running > > lisp image by by telling the compiler to accept the generated (and > > possibly faulty) deb-int.lisp-obj. I guess I could try using > > 0.pre7.110 to build 0.pre7.112... > > My advice in this instance is to build with cmucl (which works on > FreeBSD, right?); some of the pre7s build themselves but not other > versions, so starting from something that is "known OK" is probably > sensible... I was able to use the built-with-possibly-faulty-deb-int.lisp-obj sbcl to build 0.pre7.117, with no problems. I was also able to build directly from an even earlier version of 0.pre7.X (X = 10, I think). The CVS log for deb-int.lisp shows that this was a known problem at some point, with a fix that involved stubbing out FIND-ESCAPED-FRAME. //Raymond.
http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200201&viewday=7
CC-MAIN-2015-14
en
refinedweb
The Data Access Visual Designer is a visual tool integrated in Visual Studio. It enables point-and-click modification of your domain model, such as create and modify entities, associations, mapping and inheritance relationships. It also helps you validate the domain model. The Visual Designer consists of a design surface and several editors and tools for viewing and editing your mappings, creating objects, associations and inheritance relationships. To make it easier to get an application up and running in Visual Studio, Telerik Data Access offer several projects templates (.NET3.5+) to help you get started. The project templates will create the following Data Access-enabled projects and files for you: ASP.NET Web Site, ASP.NET Ajax Web Application, Class Library, and Fluent Library (for code-first developers). You can begin your ASP.NET Dynamic Data project with generating all your custom pages with the Telerik Data Access Dynamic Data Application project template. The Data Access Create Model Wizard auto-generates your domain model together with a source code file (C# or VB.NET) from an existing database within minutes. During the process, you choose which tables, views and stored procedures to include in the domain model. You control various aspects of the model creation, such as naming rules and default namespace. The Model Wizard enables you to do the conceptual modeling first and then create a database supportiing the domain model. It generates the schema definition script for creating (or the schema migration script for migrating) a database from an existing conceptual model. Use the Database Wizard to update your domain model after changes have been made to the underlying database. It shows the difference between the database content and the model in terms of what tables/columns/views/objects have been added, changed or deleted in the database. It then gives you the ability to choose what actions to take for each in order to update the model. The Telerik Service Wizard is tremendous time-saver, because it automatically generates the C# or VB.NET code and all necessary project files for using Telerik Data Access entities with many web services, saving the developer time and eliminating syntax errors and possible issues if done by hand. The Service Wizard can also create a Silverlight application to consume the CRUD operations for selected entities in the project. The supported services are ASP.NET Web API, WCF RIA Services, WCF Data Services, Open Data (OData) Protocol, WCF Endpoint (Plain) Services, REST Collection WCF Services, and ATOM Publishing Protocol WCF Services. The Domain Method Editor simplifies the set-up and use of stored procedures and database functions in Data Access applications. Use this editor when a stored procedure is included in the domain model to create a function (method) that returns a single scalar value, a collection of the Persistent Types, Complex Types or a no value. Adding a function enables you to call the corresponding stored procedure from your code. The information created from Domain Method Editor is passed directly to the code generation engine, where the corresponding complex types and methods are generated. The Editor utilizes the ADO.NET-like low level API of Data Access that gives you fast access to your data through the stored procedure. Data Access supports Model Operations, which enable you to update your model in a few clicks. This is done through a simple dialog, which enables you to: The Entity Framework and LINQ to SQL Conversion Wizards enable you to convert the corresponding data model to a Telerik Data Access Domain Model. The wizard is quick and easy to use. This tool upgrades projects that use previous versions of Telerik Data Access to the newest version. Data Access provides many more Visual Studio tools to help and guide you when working with your models: Get your fully functional free copy of Data Access. Including demos and articles to get you started. Help us shape our roadmap.
http://www.telerik.com/data-access/visual-studio-integration
CC-MAIN-2015-14
en
refinedweb
Help Getting Started w/ClassesvoxL Jun 1, 2012 9:18 AM Hi: I'd like to setup a class file to store functions that I use frequently, not sure how to do it, but I think I'm close. Here's a simple example that doesn't quite work, I get error: import PFarmFunctions; PFarmFunctions.fadeMe (mc,"off",1,1); Call to a possibly undefined method fadeMe through a reference with static type Class. My code is: package { import com.greensock.*; import com.greensock.easing.*; import com.greensock.plugins.*; import fl.events.*; import flash.display.DisplayObject; import flash.display.MovieClip; import flash.events.Event; import flash.display.Sprite; public class PFarmFunctions { private var who:Object; private var onOff:String; private var delayTime:Number; public function fadeMe(who:Object, onOff:String, howLong:Number, delayTime:Number):void { if (onOff == 'off') TweenMax.to(who, howLong,{autoAlpha:0,delay:delayTime}); }; //End fadeMe } } If someone could help me get going on this, I'd appreciate it very much. Thanks. 1. Re: Help Getting Started w/Classeskglad Jun 1, 2012 10:41 AM (in response to voxL) use public static functions (and use a shorter class name). for example: package{ // import statements public class R{ public static function fadeMe(dobj:DisplayObject,duration:Number,delayNum:Number,inOut:Number):void{ TweenMax.to(dobj,duration,{autoAlpha:inOut,delay:delayNum}); } // etc } } then to use, in any of your classes: R.fadeMe(mc,1,2,0); // where mc is a display object you want to fade out over 1 second with a 2 second delay 2. Re: Help Getting Started w/Classesesdebon Jun 1, 2012 11:33 AM (in response to voxL) import PFarmFunctions; var obj:PFarmFunctions=new PFarmFunctions(); obj.fadeMe (mc,"off",1,1); 3. Re: Help Getting Started w/Classeskglad Jun 1, 2012 2:55 PM (in response to voxL) don't use the code suggested by esdebon unless you change PFarmFunctions into a singleton class. 4. Re: Help Getting Started w/ClassesvoxL Jun 1, 2012 3:14 PM (in response to voxL) Thanks much, I think on my way. kglad: Extra thanks for being thorough and cleaning up my code - really appreciate it 5. Re: Help Getting Started w/Classeskglad Jun 1, 2012 9:34 PM (in response to voxL) you're welcome.
https://forums.adobe.com/message/4457935
CC-MAIN-2015-14
en
refinedweb
Initialization of Objects A local automatic object or variable is initialized every time the flow of control reaches its definition. A local static object or variable is initialized the first time the flow of control reaches its definition. Consider the following example, which defines a class that logs initialization and destruction of objects and then defines three objects, I1, I2, and I3: // initialization_of_objects.cpp // compile with: /EHsc #include <iostream> #include <string.h> using namespace std; // Define a class that logs initializations and destructions. class InitDemo { public: InitDemo( const char *szWhat ); ~InitDemo(); private: char *szObjName; size_t sizeofObjName; }; // Constructor for class InitDemo InitDemo::InitDemo( const char *szWhat ) : szObjName(NULL), sizeofObjName(0) { if( szWhat != 0 && strlen( szWhat ) > 0 ) { // Allocate storage for szObjName, then copy // initializer szWhat into szObjName, using // secured CRT functions. sizeofObjName = strlen( szWhat ) + 1; szObjName = new char[ sizeofObjName ]; strcpy_s( szObjName, sizeofObjName, szWhat ); cout << "Initializing: " << szObjName << "\n"; } else szObjName = 0; } // Destructor for InitDemo InitDemo::~InitDemo() { if( szObjName != 0 ) { cout << "Destroying: " << szObjName << "\n"; delete szObjName; } } // Enter main function int main() { InitDemo I1( "Auto I1" ); { cout << "In block.\n"; InitDemo I2( "Auto I2" ); static InitDemo I3( "Static I3" ); } cout << "Exited block.\n"; } The preceding code demonstrates how and when the objects I1, I2, and I3 are initialized and when they are destroyed. There are several points to note about the program. First, I1 and I2 are automatically destroyed when the flow of control exits the block in which they are defined. Second, in C++, it is not necessary to declare objects or variables at the beginning of a block. Furthermore, these objects are initialized only when the flow of control reaches their definitions. (I2 and I3 are examples of such definitions.) The output shows exactly when they are initialized. Finally, static local variables such as I3 retain their values for the duration of the program, but are destroyed as the program terminates.
https://msdn.microsoft.com/en-us/library/4t84t432(v=vs.90).aspx
CC-MAIN-2015-14
en
refinedweb
#include <algorithm> //for for_each() std::for_each (vt.begin(), vt.end(), &Task::show_pid); Step 3: Using a Function Adapter Fortunately, you don't really need a fourth argument because the member function show_pid() should be called for every object in the range vt.begin() vt.end(). But how do you tell for_each() to do this? The Standard Library also defines a set of function adapters that bind a member function to an object and return a matching function object. For example, std::mem_fun_ref() takes a member function's address and binds it to an object's reference, which is exactly what you need: std::for_each (vt.begin(), vt.end(), std::mem_fun_ref(&Task::show_pid)); Notice that the results of this example and the previous for-loop are identical. The benefit of using for_each() is maintenance ease and improved readability. Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/getHelpOn/10MinuteSolution/19911/0/page/3
CC-MAIN-2015-14
en
refinedweb
For back compatability with fltk1.1 Callback is a function pointer to a fltk callback mechanism. It points to any function void foo(Widget*, void*) Function pointer to a callback with only one argument, the widget Function pointer to a callback with a long argument instead of a void argument A pointer to a Callback. Needed for BORLAND fltk::Color is a typedef for a 32-bit integer containing r,g,b bytes and an "index" in the lowest byte (the first byte on a little-endian machine such as an x86). For instance 0xFF008000 is 255 red, zero green, and 128 blue. If rgb are not zero then the low byte is ignored, or may be treated as "alpha" by some code. If the rgb is zero, the N is the color "index". This index is used to look up an fltk::Color in an internal table of 255 colors shown here. All the indexed colors may be changed by using set_color_index(). However fltk uses the ones between 32 and 255 and assummes they are not changed from their default values. (this is not the X colormap used by fltk) A Color of zero (fltk::NO_COLOR) will draw black but is ambiguous. It is returned as an error value or to indicate portions of a Style that should be inherited, and it is also used as the default label color for everything so that changing color zero can be used by the -fg switch. You should use fltk::BLACK (56) to get black. Type of function passed to drawimage(). It must return a pointer to a horizontal row of w pixels, starting with the pixel at x and y (relative to the top-left corner of the image, not to the coordinate space drawimage() is called in). These pixels must be in the format described by type passed to drawimage() and must be the delta apart passed to drawimage(). userdata is the same as the argument passed to drawimage(). This can be used to point at a structure of information about the image. Due to cropping, less than the whole image may be requested. So the callback may get an x greater than zero, the first y passed to it may be greater than zero, and x+w may be less than the width of the image. The passed buffer contains room for at least the number of pixels specified by the width passed to drawimage(). You can use this as temporary storage to construct a row of the image, and return a pointer offset by x into it. Type returned by fltk::Widget::flags() and passed to fltk::Box and many other drawing functions. HelpFunc type - link callback function for files... A Theme is a function called by fltk just before it shows the first window, and also whenever it receives a signal from the operating system that the user's preferences have changed. The Theme's job is to set all the NamedStyle structures to the correct values for the appearance selected by the user and operating system. The return value is ignored but you should return true for future compatability. This pointer is declared as a "C" function to make it easier to load the correct function by name from a plugin, if you would like to write a scheme where the appearance is controlled by plugins. Fltk provides a convienence function to portably load plugins called fltk::load_plugin() that you may want to use if you are writing such a system. Hides whatever the system uses to identify a thread. Used so the "toy" interface is portable. Type of function passed to add_timeout(), add_check(), and add_idle() Numbers passed to Widget::handle() and returned by event(). Values returned by event_key(), passed to event_key_state() and get_key_state(), and used for the low 16 bits of add_shortcut(). The actual values returned are based on X11 keysym values, though fltk always returns "unshifted" values much like Windows does. A given key always returns the same value no matter what shift keys are held down. Use event_text() to see the results of any shift keys. The lowercase letters 'a' through 'z' and the ascii symbols '`', '-', '=', '[', ']', '\', ',', '.', '/', ';', '\'' and space are used to identify the keys in the main keyboard. On X systems unrecognized keys are returned unchanged as their X keysym value. If they have no keysym it uses the scan code or'd with 0x8000, this is what all those blue buttons on a Microsoft keyboard will do. I don't know how to get those buttons on Windows. Flags returned by event_state(), and used as the high 16 bits of Widget::add_shortcut() values (the low 16 bits are all zero, so these may be or'd with key values). The inline function BUTTON(n) will turn n (1-8) into the flag for a mouse button. Device identifier returned by event_device(). This enumeration is useful to get the device type that caused a PUSH, RELEASE, DRAG or MOVE event Values of the bits stored in Widget::layout_damage(). When a widget resized or moved (or when it is initially created), flags are set in Widget::layout_damage() to indicate the layout is damaged. This will cause the virtual function Widget::layout() to be called just before fltk attempts to draw the windows on the screen. This is useful because often calculating the new layout is quite expensive, this expense is now deferred until the user will actually see the new size. Some Group widgets such as fltk::PackedGroup will also use the virtual Widget::layout() function to find out how big a widget should be. A Widget is allowed to change it's own dimensions in layout() (except it is not allowed to change it if called a second time with no changes other than it's x/y position). This allows widgets to resize to fit their contents. The layout bits are turned on by calling Widget::relayout(). Widget::when() values Symbolic names for some of the indexed colors. The 24-entry "gray ramp" is modified by fltk::set_background() so that the color fltk::GRAY75 is the background color, and the others are a nice range from black to a lighter version of the gray. These are used to draw box edges. The gray levels are chosen to be evenly spaced, listed here is the actual 8-bit and decimal gray level assigned by default. Also listed here is the letter used for fltk::FrameBox and the old fltk1.1 names used for these levels. The remiander of the colormap is a 5x8x5 color cube. This cube is used to dither images on 8-bit screens X colormaps to reduce the number of colors used. Values of the bits stored in Widget::damage(). When redrawing your widgets you should look at the damage bits to see what parts of your widget need redrawing. The Widget::handle() method can then set individual damage bits to limit the amount of drawing that needs to be done, and the Widget::draw() method can test these bits to decide what to draw: MyClass::handle(int event) { ... if (change_to_part1) damage(1); if (change_to_part2) damage(2); if (change_to_part3) damage(4); } MyClass::draw() { if (damage() & fltk::DAMAGE_ALL) { ... draw frame/box and other static stuff ... } if (damage() & (fltk::DAMAGE_ALL | 1)) draw_part1(); if (damage() & (fltk::DAMAGE_ALL | 2)) draw_part2(); if (damage() & (fltk::DAMAGE_ALL | 4)) draw_part3(); } Except for DAMAGE_ALL, each widget is allowed to assign any meaning to any of the bits it wants. The enumerations are just to provide suggested meanings. Enumeration describing how colors are stored in an array of bytes that is a pixel. This is used as an argument for fltk::drawimage(), fltk::readimage(), and fltk::Image. Notice that the order of the bytes in memory of ARGB32 or RGB32 is a,r,g,b on a little-endian machine and b,g,r,a on a big-endian machine. Due to the use of these types by Windows, this is often the fastest form of data, if you have a choice. To convert an fltk::Color to RGB32, shift it right by 8 (for ARGB32 shift the alpha left 24 and or it in). More types may be added in the future. The set is as minimal as possible while still covering the types I have actually encountered..: bool state_changed; // anything that changes the display turns this on void check(void*) { if (!state_changed) return; state_changed = false; do_expensive_calculation(); widget->redraw(); } main() { fltk::add_check(1.0,check); return fltk::run(); } Install a function to parse unrecognized events. If FLTK cannot figure out what to do with an event, it calls each of these functions (most recent first) until one of them returns non-zero. If none of them returns non zero then the event is ignored. Currently this is called for these reasons: Add file descriptor fd to listen to. When the fd becomes ready for reading fltk::wait() will call the callback function and then return. The callback is passed the fd and the arbitrary void* argument. The second argument is a bitfield to indicate when the callback should be done. You can or these together to make the callback be called for multiple conditions: Under UNIX any file descriptor can be monitored (files, devices, pipes, sockets, etc.) Due to limitations in Microsoft Windows, WIN32 applications can only monitor sockets (? and is the when value ignored?) Same as add_fd(fd, READ, cb, v);. Add a one-shot timeout callback. The function will be called by fltk::wait() at t seconds after this function is called. The optional void* argument is passed to the callback. Add a series of points to the current path on the arc of an ellipse. The ellipse in inscribed in the l,t,w,h rectangle, and the start and end angles are measured in degrees counter-clockwise from 3 o'clock, 45 points at the upper-right corner of the rectangle. If end is less than start then it draws the arc in a clockwise direction. Add an isolated circular arc to the path. It is inscribed in the rectangle so if it is stroked with the default line width it exactly fills the rectangle (this is slightly smaller than addarc() will draw). If the angles are 0 and 360 a closed circle is added. This tries to take advantage of the primitive calls provided by Xlib and GDI32. Limitations are that you can only draw one, a rotated current transform does not work. Add a series of points on a Bezier spline to the path. The curve ends (and two of the points) are at x,y and x3,y3. The "handles" are at x1,y1 and x2,y2. Add a pie-shaped closed piece to the path, inscribed in the rectangle so if it is stroked with the default line width it exactly fills the rectangle (this is slightly smaller than addarc() will draw). If you want a full circle use addchord(). This tries to take advantage of the primitive calls provided by Xlib and GDI32. Limitations are that you can only draw one per path, that rotated coordinates don't work, and doing anything other than fillpath() will produce unpredictable results. Add a single vertex to the current path. (if the path is empty or a closepath() was done, this is equivalent to a "moveto" in PostScript, otherwise it is equivalent to a "lineto").. Add a whole set of integer vertices to the current path. Add a whole set of vertices to the current path. This is much faster than calling fltk::addvertex once for each point. Adds a whole set of vertcies that have been produced from values returned by fltk::transform(). This is how curve() and arc() are implemented. Same as fltk::message() except for the "!" symbol.. The arguments recognized are listed under args(). The args() argument parser is entirely optional. It was written to make demo programs easy to write, although some minor work was done to make it usable by more complex programs. But there is no requirement that you call it or even acknoledge it's existence, if you prefer to use your own code to parse switches. The second form of fltk::args() is useful if your program does not have command line switches of its own. It parses all the switches, and if any are not recognized it calls fltk::fatal(fltk::help). Consume all switches from argv. To use the switch parser, call fltk::args(argc,argv) may be abbreviated one letter and case is ignored: -iconicWindow::iconize() will be done to the window. -geometryWxH Window is resized to this width & height -geometry+X+Y Initial window position -geometryWxH+X+Y Window is resized and positioned. -displayhost or -displayhost:n.n The X display to use (ignored under WIN32). -namestring will set the Window::label() -bgcolor Call set_background() with the named color. Use "#rrggbb" to set it in hex. -backgroundcolor is the same as -bg color Displays a printf-style message in a pop-up box with an "Yes" and "No" button and waits for the user to hit a button. The return value is 1 if the user hits Yes, 0 if they pick No. The enter key is a shortcut for Yes and ESC is a shortcut for No. If message_window_timeout is used, then -1 will be returned if the timeout expires. A child thread can call this to cause the main thread's call to wait() to return (with the lock locked) even if there are no events ready. The main purpose of this is to get the main thread to redraw the screen, but it will also cause fltk::wait() to return so the program's code can do something. You should call this immediately before fltk::unlock() for best performance. The message argument can be retrieved by the other thread using fltk::thread_message(). Generates a simple beep message You can enable beep on default message dialog (like ask, choice, input, ...) by using this function with true (default is false) You get the state enable beep on default message dialog (like ask, choice, input, ...) by using this function with true (default is false) Get the widget that is below the mouse. This is the last widget to respond to an fltk::ENTER event as long as the mouse is still pointing at it. This is for highlighting buttons and bringing up tooltips. It is not used to send fltk::PUSH or fltk::MOVE directly, for several obscure reasons, but those events typically go to this widget. Change the fltk::belowmouse() widget, the previous one and all parents (that don't contain the new widget) are sent fltk::LEAVE events. Changing this does not send fltk::ENTER to this or any widget, because sending fltk::ENTER is supposed to test if the widget wants the mouse (by it returning non-zero from handle()). Same as fltk::wait(0). Calling this during a big calculation will keep the screen up to date and the interface responsive: while (!calculation_done()) { calculate(); fltk::check(); if (user_hit_abort_button()) break; } Shows the message with three buttons below it marked with the strings b0, b1, and b2. Returns 0, 1, or 2 depending on which button is hit. If one of the strings begins with the special character '*' then the associated button will be the default which is selected when the enter key is pressed. ESC is a shortcut for b2. If message_window_timeout is used, then -1 will be returned if the timeout expires. Replace the top of the clip stack. Remove. Remove the. Similar to drawing another vertex back at the starting point, but fltk knows the path is closed. The next addvertex() will start a new disconnected part of the shape. It is harmless to call fltk::closepath() several times in a row, or to call it before the first point. Sections with less than 3 points in them will not draw anything when filled. Turn a string into a color. If name is null this returns NO_COLOR. Otherwise it returns fltk::parsecolor(name, strlen(name)). fltk:tk::ValueInput. This version takes and returns numbers in the 0-1 range. There is also a class fltk::ColorChooser which you can use to imbed a color chooser into another control panel. Same but user can also select an alpha value. Currently the color chips do not remember or set the alpha! Same but it takes and returns 8-bit numbers for the rgb arguments. Same but with 8-bit alpha chosen by the user. Same but it takes and returns an fltk::Color number. No alpha. Use of this function is very simple. Any text editing widget should call this for each fltk::KEY event. If true is returned, then it has modified the fltk::event_text() and fltk::event_length() to a set of bytes to insert (it may be of zero length!). It will also set the del parameter to the number of bytes to the left of the cursor to delete, this is used to delete the results of the previous call to fltk::compose(). Compose may consume the key, which is indicated by returning true, but both the length and del are set to zero. Compose returns false if it thinks the key is a function key that the widget should handle itself, and not an attempt by the user to insert text. Though the current implementation returns immediately, future versions may take quite awhile, as they may pop up a window or do other user-interface things to allow international characters to be selected. If the user moves the cursor, be sure to call fltk::compose_reset(). The next call to fltk::compose() will start out in an initial state. In particular it will not set "del" to non-zero. This call is very fast so it is ok to call it many times and in many places. Multiply the current transformation by a b 0 c d 0 x y 1 Returns fg if fltk decides it can be seen well when drawn against bg. Otherwise it returns either fltk::BLACK or fltk::WHITE. Change the current selection. The block of text is copied to an internal buffer by FLTK (be careful if doing this in response to an fltk::PASTE as this may be the same buffer returned by event_text()). The block of text may be retrieved (from this program or whatever program last set it) with fltk::paste(). There are actually two buffers. If clipboard is true then the text goes into the user-visible selection that is moved around with cut/copy/paste commands (on X this is the CLIPBOARD selection). If clipboard is false then the text goes into a less-visible buffer used for temporarily selecting text with the mouse and for drag & drop (on X this is the XA_PRIMARY selection). Fork a new thread and make it run f(p). Returns negative number on error, otherwise t is set to the new thread. True if any. Turn a PixelType into the number of bytes needed to hold a pixel.. Drag and drop the data set by the most recent fltk::copy() (with the clipboard argument false). Returns true if the data was dropped on something that accepted it. By default only blocks of text are dragged. You can use system-specific variables to change the type of data. Fltk can draw into any X window or pixmap that uses the fltk::xvisual. This will reset the transformation and clip region and leave the font, color, etc at unpredictable values. The w and h arguments must be the size of the window and are used by fltk::not_clipped(). Before you destroy the window or pixmap you must call fltk::stop_drawing() so that it can destroy any temporary structures that were created by this. Return the last flags passed to setdrawflags(). Same as (drawflags() & f), returns true if any of the flags in f are set. Same except line_delta is set to r.w() times depth(type), indicating the rows are packed together one after another with no gap. Draw a image (a rectangle of pixels) stored in your program's memory. The current transformation (scale, rotate) is applied. The X version of FLTK will abort() if the default visual is one it cannot use for images. To avoid this call fltk::visual(fltk::RGB) at the start of your program. Call the passed function to provide each scan line of the image. This lets you generate the image as it is being drawn, or do arbitrary decompression of stored data (provided it can be decompressed to individual scan lines easily). callback is called with the void* data argument (this can be used to point at a structure of information about the image), the x, y, and number of pixels desired from the image, measured from the upper-left corner of the image. It is also given a buffer of at least w pixels that can be used as temporary storage, for instance to decompress a line read from a file. You can then return a pointer to this buffer, or to somewhere inside it. The callback must return n pixels of the format described by type. The xywh rectangle describes the area to draw. The callback is called with y values between 0 and h-1. Due to cropping not all pixels may be asked for. You can assumme y will be asked for in increasing order. Draw a straight line between the two points. If line_width() is zero, this tries to draw as though a 1x1 square pen is moved between the first centers of pixels to the lower-right of the start and end points. Thus if y==y1 this will fill a rectangle with the corners x,y and x1+1,y+1. This may be 1 wider than you expect, but is necessary for compatability with previous fltk versions (and is due to the original X11 behavior). If line_width() is not zero then the results depend on the back end. It also may not produce consistent results if the ctm is not an integer translation or if the line is not horizontal or vertical. Draw a straight line between the two points. Draw a dot at the given point. If line_width() is zero this is a single pixel to the lower-right of x,y. If line_width() is non-zero this is a dot drawn with the current pen and line caps. Draw a dot at the given point. If line_width() is zero this is the single pixel containing X,Y, or the one to the lower-right if X and Y transform to integers. If line_width() is non-zero this is a dot drawn with the current pen and line caps (currently draws nothing in some api's unless the line_style has CAP_ROUND). Draw using this style. Set drawstyle() to this, drawflags() to flags, calls setcolor() and setbgcolor() with appropriate colors for this style and the given flags, and calls setfont(). This is called by the draw() methods on most fltk widgets. The calling Widget picks what flags to pass to the Symbols so that when they call this they get the correct colors for each part of the widget. Flags that are understood: It then further modifies fg so that it contrasts with the bg. Return the last style sent to drawstyle(s,f). Some drawing functions (such as glyphs) look in this for box types. If this has not been called it is Widget::default_style. Draw a nul-terminated string. Draw the first n bytes (not characters if utf8 is used) starting at the given position. This is the fancy string-drawing function that is used to draw all labels in fltk. The string is formatted and aligned inside the passed rectangle. This also: Provides access to some of the @-string formatting for another graphics API. Most symbols will not work, but you will be able to access the line break and justifications, and commands that change the font, size, and color. I use this to format labels for that are drawn in OpenGL. textfunction is called to draw all text. It's arguments are a pointer to a UTF-8 string, the length in bytes of that string, and two float numbers for the x and y to draw the text at. textfunction may use getfont(), getsize(), and getcolor() to find out the settings and recreate them on the output device. Draw text starting at a point returned by fltk::transform(). This is needed for complex text layout when the current transform may not match the transform being used by the font.(). Returns the most recent event handled, such as fltk::PUSH or fltk::KEY. This is useful so callbacks can find out why they were called. Returns which mouse button was pressed or released by a PUSH or RELEASE event. Returns garbage for other events, so be careful (this actually is the same as event_key(), the buttons have been assigned the key values 1,2,3, etc. Returns the number of extra times the mouse was clicked. For a normal fltk::PUSH this is zero, if the user "double-clicks" this is 1, and it is N-1 for each subsequent click. Setting this value can be used to make callbacks think things were (or were not) double-clicked, and thus act differently. For fltk::MOUSEWHEEL events this is how many clicks the user moved in the x and y directions (currently dx is always zero). Reserved for future use if horizontal mouse wheels (or some kind of joystick on the mouse) becomes popular. Returns true if the current fltk::event_x() and fltk::event_y() put it inside the Rectangle. You should always call this rather than doing your own comparison so you are consistent about edge effects. This is true for a short time after the mouse key is pressed. You test this on the RELEASE events to decide if the user "clicked" or "held" the mouse. It is very useful to do different actions depending on this. This turns off after a timeout (implemented on X only, currently), when the mouse is moved more than 5 pixels, and when the user presses other mouse or keyboard buttons while the mouse is down. On X this is used to decide if a click is a double-click, it is if this is still on during the next mouse press. On Windows and OS/X the system's indication is used for double-click. You can set this to zero with fltk::event_is_click(0), this can be used to prevent the next mouse click from being considered a double click. Only false works, attempts to set this true are ignored. Returns which key on the keyboard was last pushed. Most non-keyboard events set this to values that do not correspond to any keys, so you can test this in callbacks without having to test first if the event really was a keystroke. The values returned are described under fltk::SpaceKey. True if the most recent KEY event was caused by a repeating held-down key on the keyboard. The value increments for each repeat. Note: older versions of fltk reused event_clicks() for this. This made it impossible to design a GUI where the user holds down keyboard keys while clicking the mouse, as well as being pretty hard to understand. So we had to change it for fltk 2. Returns true if the given key was held down (or pressed) during the last event. This is constant until the next event is read from the server. The possible values for the key are listed under fltk::SpaceKey. On Win32 event_key_state(KeypadEnter) does not work. Returns the length of the text in fltk::event_text(). There will always be a nul at this position in the text. However there may be a nul before that if the keystroke translates to a nul character or you paste a nul character. This is a bitfield of what shift states were on and what mouse buttons were held down during the most recent event. The flags to pass are described under fltk::SHIFT. Because Emacs screws up if any key returns the predefined META flag, lots of X servers have really botched up the key assignments trying to make Emacs work. Fltk tries to work around this but you may find that Alt or Meta don't work, since I have seen at least 3 mutually incompatible arrangements. Non-XFree86 machines may also have selected different values so that NUMLOCK, META, and SCROLLLOCK are mixed up. In addition X reports the state before the last key was pressed so the state looks backwards for any shift keys, currently fltk only fixes this bug for the mouse buttons. Same as event_state()&mask, returns true if any of the passed bits were turned on during the last event. So doing event_state(SHIFT) will return true if the shift keys are held down. This is provided to make the calling code easier to read. The flags to pass are described under fltk::SHIFT. Returns the ASCII text (in the future this may be UTF-8) produced by the last fltk::KEY or fltk::PASTE or possibly other event. A zero-length string is returned for any keyboard function keys that do not produce text. This pointer points at a static buffer and is only valid until the next event is processed. Under X this is the result of calling XLookupString(). Returns the distance the mouse is to the right of the left edge of the widget. Widget::send() modifies this as needed before calling the Widget::handle() method. Return the absolute horizontal position of the mouse. Usually this is relative to the left edge of the screen, but multiple Monitor setup may change that. To find the absolute position of the current widget, subtract event_x_root()-event_x(). Returns the distance the mouse is below the top edge of the widget. Widget::send() modifies this as needed before calling the Widget::handle() method. Return the absolute vertical position of the mouse. Zero is at the top. Turns on exit_modal_flag(). This may be used by user callbacks to cancel modal state. See also fltk::Window::make_exec_return(). True if exit_modal() has been called. The flag is also set by the destruction or hiding of the modal widget, and on Windows by other applications taking the focus when grab is on. pops up the file chooser, waits for the user to pick a file or Cancel, and then returns a pointer to that filename or NULL if Cancel is chosen. If use_system_file_chooser() is set to true, a system FileChooser is opened. If the user picks multiple files, these will be separated by a new line. If fname is NULL then the last filename that was choosen is used, unless the pattern changes, in which case only the last directory is used. The first time the file chooser is called this defaults to a blank string. This function is called every time the user navigates to a new file or directory in the file chooser. It can be used to preview the result in the main window. Return the filename from expanded to a full "absolute" path name by prepending the current directory: The return value is the number of bytes this wants to write to output. If this value is greater or equal to length, then you know the output has been truncated, and you can call this again with a buffer of n+1 bytes. Leading "./" sequences in input are removed, and "../" sequences are removed as well as the matching trailing part of the prefixed directory. Technically this is incorrect if symbolic links are used but this matches the behavior of most programs. If the pwd argument is null, this also expands names starting with '~' to the user or another user's HOME directory. To expand a filename starting with ~ in the current directory you must start it with "./~". If input is a zero-length string then the pwd is returned with a slash added to the end. Returns true if the file exists . Returns a pointer to the last period in filename_name(f), or a pointer to the trailing nul if none. Notice that this points at the period, not after it! Returns true if the file exists and is a directory. Returns true if the file exists and is a regular file. Returns true if filename s matches pattern p. The following glob syntax is used by pattern: Returns the modification time of the file as a Unix timestamp (number of seconds since the start of 1970 in GMT). Returns 0 if the file does not exist. Returns a pointer to after the last slash in name. If the name ends with a slash then this returns a pointer to the NUL. If there is no slash this returns a pointer to the start of name. Does the opposite of filename_absolute(). Produces the shortest possible name in output that is relative to the current directory. If the filename does not start with any text that matches the current directory then it is returned unchanged. Return value is the number of characters it wants to write to output. Returns the size of the file in bytes. Returns zero if it does not exist. Does fltk::closepath() and then fill with the current color, and then clear the path. For portability, you should only draw polygons that appear the same whether "even/odd" or "non-zero" winding rules are used to fill them. This mostly means that holes should be drawn in the opposite direction of the outside. Warning: result is somewhat different on X and Win32! Use fillstrokepath() to make matching shapes. In my opinion X is correct, we may change the Win32 version to match in the future, perhaps by making the current pen invisible? Fill the rectangle with the current color. Does fltk::fill(), then sets the current color to linecolor and does fltk::stroke with the same closed path, and then clears the path. This seems to produce very similar results on X and Win32. Also it takes advantage of a single GDI32 call that does this and should be faster. Portably calls the fopen() function for different systems. Note that ALL calls to this function MUST make sure the filename is UTF8 encoded. Portably calls the system's stat() function, to deal with native Unicode filenames. Has the same return values and use as the system's stat. The string passed to fltk_stat must be UTF8 (note that ASCII is a subset of UTF8) write all preferences to disk Get the display up to date. This is done by calling layout() on all Window objects with layout_damage() and then calling draw() on all Window objects with damage(). (actually it calls Window::flush() and that calls draw(), but normally you can ignore this). This will also flush the X i/o buffer, update the cursor shape, update Windows window sizes, and other operations to get the display up to date. wait() calls this before it waits for events. Change fltk::focus() to the given widget, the previous widget and all parents (that don't contain the new widget) are sent fltk::UNFOCUS events, the new widget is sent an fltk::FOCUS event, and all parents of it get fltk::FOCUS_CHANGE events. fltk::focus() is set whether or not the applicaton has the focus or if the widgets accept the focus. You may want to use fltk::Widget::take_focus() instead, it will test first. For back-compatabilty with FLTK1, this turns an integer into one of the built-in fonts. 0 = HELVETICA. Call the handle() method from the passed ShortcutFunctor object for every Widget::shortcut() assignment known. If any return true then this immediately returns that shortcut value, else this returns zero after calling it for the last one. This is most useful for making a display of shortcuts for the user, or implementing a shortcut editor. class ListShortcuts : public ShortcutFunctor { public: bool handle(const Widget* widget, unsigned key) { printf("Widget=%s shortcut=%s\n", widget->label() ? widget->label() : "NULL", key_name(key)); return false; } }; f() { ListShortcuts listShortcuts; fltk::foreachShortcut(listShortcuts); } If widget is not null, only do assignments for that widget, this is much faster than searching the entire list. This is useful for drawing the shortcuts on a widget (though most fltk widgets only draw the first one). Return the rgb form of color. If it is an indexed color that entry is returned. If it is an rgb color it is returned unchanged. Returns the string sent to the most recent set_encoding(). Returns true if the given key is held down now. This is different than event_key_state() as that returns how the key was during the last event. This can also be slower as it requires a round-trip query to the window server. The values to pass are described under fltk::SpaceKey. On Win32 fltk::get_key_state(fltk::KeypadEnter) does not work. Return where the mouse is on the screen by doing a round-trip query to the server. You should use fltk::event_x_root() and fltk::event_y_root() if possible, but this is necessary if you are not sure if a mouse event has been processed recently (such as to position your first window). If the display is not open, this will open it.. Return the distance from the baseline to the top of letters in the current font. Returns the last Color passed to setbgcolor(). To actually draw in the bg color, do this: Color saved = getcolor(); setcolor(getbgcolor()); draw_stuff(); setcolor(saved) Returns the last Color passed to setcolor(). Return an arbitrary HDC which you can use for Win32 functions that need one as an argument. The returned value is short-lived and may be destroyed the next time anything is drawn into a window! Return the distance from the baseline to the bottom of letters in the current font. Uses glDrawPixels to draw an image using the same arguments as drawimage(). If you are in the normal OpenGL coordinate system with 0,0 in the lower-left, the first pixel is memory is the lower-left corner. Draw text at the given point in 3D space transformed to the screen. Draw text at the current glRasterPos in the current font selected with fltk::glsetfont(). You can use glRasterPos2f() or similar calls to set the position before calling this. The string is in UTF-8, although only characters in ISO-8859-1 are drawn correctly, others draw as question marks. Draw the first n bytes of text at the current glRasterPos. Draw the first n bytes of text at the given point in 3D space transformed to the screen. Inline wrapper for glRecti(x,y,x+w,y+h). Set the current OpenGL color to a FLTK color, or as close as possible. Make the current OpenGL font (as used by gldrawtext()) be as similar as possible to an FLTK Font. Currently the font is aliased except on X. Set up an OpenGL context to draw into the current window being drawn by fltk. This will allow you to use OpenGL to update a normal window. The current transformation is reproduced, and the current clipping is simulated with glScissor() calls (which can only do a rectangle). You must use glfinish() to exit this mode before any normal fltk drawing calls are done. You should call glvisual() at program startup if you intend to use this. This may be used to change how windows are created so this call works. I would like this to work reliably, but it is not real good now on any platforms. In particular it does not cooperate with the double-buffering schemes. It does appear to work on X when you turn off double buffering, it also works if OpenGL is entirely software, such as MESA. Do not call glstart()/glfinish() when drawing into a GlWindow! Draw a 1-thick line just inside the given rectangle. Same as fltk::visual(int) except choose a visual that is also capable of drawing OpenGL. On modern X servers this is true by default, but on older ones OpenGL will crash if the visual is not selected with this. mode is the same bitflags accepted by GlWindow::mode(). This causes all windows (and thus glstart()) to have these capabilities. Try to guess the filetype Beware that calling this force you to link in all image types ! Make FLTK act as though it just got the event stored in xevent. You can use this to feed artifical X events to it, or to use your own code to get events from X. Besides feeding events your code should call fltk::flush() periodically so that FLTK redraws its windows. This function will call any widget callbacks from the widget code. It will not return until they complete, for instance if it pops up a modal window with fltk::ask() it will not return until the user clicks yes or no. This is the function called from the system-specific code for all events that can be passed to Widget::handle(). You can call it directly to fake events happening to your widgets. Currently data other than the event number can only be faked by writing to the undocumented fltk::e_* variables, for instance to make event_x() return 5, you should do fltk::e_x = 5. This may change in future versions. This will redirect events to the modal(), pushed(), belowmouse(), or focus() widget depending on those settings and the event type. It will turn MOVE into DRAG if any buttons are down. If the resulting widget returns 0 (or the window or widget is null) then the functions pointed to by add_event_handler() are called. Return true if add_check() has been done with this cb and arg, and remove_check() has not been done. Returns true if the specified idle callback is currently installed. Returns true if the timeout exists and has not been called yet. Returns true if the current thread is the main thread, i.e. the one that called wait() first. Many fltk calls such as wait() will not work correctly if this is not true. This function must be surrounded by lock() and unlock() just like all other fltk functions, the return value is wrong if you don't hold the fltk lock! Warning: in_main_thread() is wrong if the main thread calls fltk::unlock() and another thread calls fltk::lock() (the assumption is that the main thread only calls wait()). Current fix is to do the following unsupported code: fltk::in_main_thread_ = false; fltk::unlock(); wait_for_something_without_calling_fltk_wait(); fltk::lock(); fltk::in_main_thread_ = true; Same as lerp(fg, getbgcolor(), .5). This is for back-compatability only? Same as lerp(fg, bg, .5), it grays out the color. Pops up a window displaying a string, lets the user edit it, and return the new value. The cancel button returns NULL. The returned pointer is only valid until the next time fltk::input() is called. Due to back-compatability, the arguments to any printf commands in the label are after the default value. If message_window_timeout is used, then 0 will be returned if the timeout expires. Intersect a transform()'d rectangle with the current clip region and change it to the smaller rectangle that surrounds (and probably equals) this intersection area. This can be used by device-specific drawing code to limit complex pixel operations (like drawing images) to the smallest rectangle needed to update the visible area. Return values: Turn a string into a fltk::event_key() value or'd with fltk::event_shift() flags. The returned value can be used by by fltk::Widget::add_shortcut(). Any error, or a null or zero-length string, returns 0. Currently this understands prefixes of "Alt+", "Shift+", and "Ctrl+" to turn on fltk::ALT, fltk::SHIFT, and fltk::CTRL. Case is ignored and the '+' can be a '-' instead and the prefixes can be in any order. You can also use '#' instead of "Alt+", '+' instead of "Shift+", and '^' instead of Ctrl+. After the shift prefixes there can either be a single ASCII letter, "Fn" where n is a number to indicate a function key, or "0xnnnn" to get an arbitrary fltk::event_key() enumeration value. The inverse function to turn a number into a string is fltk::key_name(). Currently this function does not parse some strings fltk::key_name() can return, such as the names of arrow keys! Unparse a fltk::Widget::shortcut(), an fltk::event_key(), or an fltk::event_key() or'd with fltk::event_state(). Returns a pointer to a human-readable string like "Alt+N". If hotkey is zero an empty string is returned. The return value points at a static buffer that is overwritten with each call. The opposite function is fltk::key(). Return (1-weight)*color0 + weight*color1. weight is clamped to the 0-1 range before use. Return the last value for dashes sent to line_style(int,width,dashes). Note that the actual pointer is returned, which may not point at legal data if a local array was passed, so this is only useful for checking if it is NULL or not. Return the last value sent to line_style(int,width,dashes), indicating the cap and join types and the built-in dash patterns. Set how to draw lines (the "pen"). If you change this it is your responsibility to set it back to the default with fltk::line_style(0). style is a bitmask in which you , if set then the dash pattern in style is ignored.. The dashes array is ignored on Windows 95/98. Return the last value for width sent to line_style(int,width,dashes). Replace the current transform with the identity transform, which puts 0,0 in the top-left corner of the window and each unit is 1 pixel in size. Call the theme() function if it has not already been called. Normally FLTK calls this just before the first Window::show() is done. You need to call this earlier to execute code such as measuring labels that may depend on the theme. A multi-threaded fltk program must surround all calls to any fltk functions with lock() and unlock() pairs. This is a "recursive lock", a thread can call lock() n times, and it must call unlock() n times before it really is unlocked. If another thread calls lock() while it is locked, it will block (not return from lock()) until the first thread unlocks. The main thread must call lock() once before any call to fltk to initialize the thread system. The X11 version of fltk uses XInitThreads(), XLockDisplay(), and XUnlockDisplay(). This should allow an fltk program to cooperate with other packages updating the display using Xlib calls. This lets you pass your own measurement function to measure the widths of printed text. Also returns floating point sizes. Measure the size of box necessary for drawtext() to draw the given string inside of it. The flags are used to set the alignment, though this should not make a difference except for fltk::ALIGN_WRAP. To correctly measure wrap w must be preset to the width you want to wrap at if fltk::ALIGN_WRAP is on in the flags! w and h are changed to the size of the resulting box.. Restricts events to a certain widget. First thing: much of the time fltk::Window::exec() will do what you want, so try using that. This function sets the passed widget as the "modal widget". All user events are directed to it or a child of it, preventing the user from messing with other widgets. The modal widget does not have to be visible or even a child of an fltk::Window for this to work (but if it not visible, fltk::event_x() and fltk::event_y() are meaningless, use fltk::event_x_root() and fltk::event_y_root()). The calling code is responsible for saving the current value of modal() and grab() and restoring them by calling this after it is done. The code calling this should then loop calling fltk::wait() until fltk::exit_modal_flag() is set or you otherwise decide to get out of the modal state. It is the calling code's responsibility to monitor this flag and restore the modal widget to it's previous value when it turns on. grab indicates that the modal widget should get events from anywhere on the screen. This is done by messing with the window system. If fltk::exit_modal() is called in response to an fltk::PUSH event (rather than waiting for the drag or release event) fltk will "repost" the event so that it is handled after modal state is exited. This may also be done for keystrokes in the future. On both X and WIN32 grab will not work unless you have some visible window because the system interface needs a visible window id. On X be careful that your program does not enter an infinite loop while grab() is on, it will lock up your screen! Returns the current modal widget, or null if there isn't one. It is useful to test these in timeouts and file descriptor callbacks in order to block actions that should not happen while the modal window is up. You also need these in order to save and restore the modal state. Sorts two files based on their modification date. Find an indexed color in the range 56-127 that is closest to this color. If this is an indexed color it is returned unchanged. Clear the current "path". This is normally done by fltk::fillpath() or any other drawing command. You can make fltk "open" a display that has already been opened, perhaps by another GUI library. Calling this will set xdisplay to the passed display and also read information FLTK needs from it. Don't call this if the display is already open! Opens the display. Does nothing if it is already open. You should call this if you wish to do X calls and there is a chance that your code will be called before the first show() of a window. This is called automatically Window::show(). This may call fltk::abort() if there is an error opening the display. Makes FLTK use its own X colormap. This may make FLTK display better and will reduce conflicts with other programs that want lots of colors. However the colors may flash as you move the cursor between windows. This function is pretty much legacy nowadays as all modern systems are full color, on such systems this does nothing. You must call this before you show() any windows. If you call visual(int) you must call this after that. Turn the first n bytes of name into an fltk color. This allows you to parse a color out of the middle of a string. Recognized values are: Same as fltk::input() except an fltk::SecretInput field is used. This is what a widget does when a "paste" command (like Ctrl+V or the middle mouse click) is done to it. Cause an fltk::PASTE event to be sent to the receiver with the contents of the current selection in the fltk::event_text(). The selection can be set by fltk::copy(). There are actually two buffers. If clipboard is true then the text is from the user-visible selection that is moved around with cut/copy/paste commands (on X this is the CLIPBOARD selection). If clipboard is false then the text is from a less-visible buffer used for temporarily selecting text with the mouse and for drag & drop (on X this is the XA_PRIMARY selection). most toolkits require. Restore the previous clip region. You must call fltk::pop_clip() exactly once for every time you call fltk::push_clip(). If you return to FLTK with the clip stack not empty unpredictable results occur. Put the transformation back to the way it was before the last push_matrix(). Calling this without a matching push_matrix will crash! Pushes the intersection of the current region and r onto the clip stack. Same as push_clip(Rectangle(x,y,w,h)) but faster: Same as push_clip(Rectangle(x,y,r,h)) except faster as it avoids the construction of an intermediate rectangle object. Pushes the intersection of the current region and this rectangle onto the clip stack. Save the current transformation on a stack, so you can restore it with pop_matrix(). FLTK provides an arbitrary 2-D affine transformation (rotation, scale, skew, reflections, and translation). This is very similar to PostScript, PDF, SVG, and Cairo. Due to limited graphics capabilities of some systems, not all drawing functions will be correctly transformed, except by the integer portion of the translation. Don't rely on this as we may be fixing this without notice. Pushes an empty clip region on the stack so nothing will be clipped. This lets you draw outside the current clip region. This should only be used to temporarily ignore the clip region to draw into an offscreen area. Get the widget that is being pushed. fltk::DRAG or fltk::RELEASE (and any more fltk::PUSH) events will be sent to this widget. This is null if no mouse button is being held down, or if no widget responded to the fltk::PUSH event. Change the fltk::pushed() widget. This sends no events. Reads a 2-D image off the current drawing destination. The resulting data can be passed to fltk::drawimage() or the 8-bit pixels examined or stored by your program. The return value is either p or NULL if there is some problem (such as an inability to read from the current output surface, or if the rectangle is empty). p points to the location to store the first byte of the upper-left pixel of the image. The caller must allocate this buffer. type can be fltk::RGB or fltk::RGBA (possibly other types will be supported in the future). rectangle indicates the position on the surface in the current transformation to read from and the width and height of the resulting image. What happens when the current transformation is rotated or scaled is undefined. If the rectangle extends outside the current drawing surface, or into areas obscured by overlapping windows, the result in those areas is undefined. linedelta is how much to add to a pointer to advance from one pixel to the one below it. Any bytes skipped over are left with undefined values in them. Negative values can be used to store the image upside-down, however p should point to 1 line before the end of the buffer, as it still points to the top-left pixel. Same except linedelta is set to r.w()*depth(type).(); if (user_hit_abort_button()) break; } } Redraws all widgets. This is a good idea if you have made global changes to the styles. Makes it so SharedImage can identify image files of the types compiled into fltk. These are XPM, PNG, and Jpeg images. Does nothing if load_theme() has not been called yet. If load_theme() has been called, this calls the theme() function again and then call redraw(). If the theme function is written correctly, this should change the display to the new theme. You should call this if you change the theme() or if external information changes such that the result of your theme() function changes. FLTK will call this automatically when it gets a message from the system indicating the user's preferences have changed. Remove all matching check callback, if any exists. You can call this from inside the check callback if you want. Remove all the callbacks (ie for all different when values) for the given file descriptor. It is harmless to call this if there are no callbacks for the file descriptor. If when is given then those bits are removed from each callback for the file descriptor, and the callback removed only if all of the bits turn off. Removes the specified idle callback, if it is installed. Removes all pending timeout callbacks that match the function and arg. Does nothing if there are no matching ones that have not been called yet.: void callback(void*) { printf("TICK\n"); fltk::repeat_timeout(1.0,callback); } main() { fltk::add_timeout(1.0,callback); for (;;) fltk::wait(); } Change the theme to the compiled-in default by calling the revert function of all NamedStyle structures. A theme() function may want to call this to clear the previous settings. Rotate the current transformation counter-clockwise by d degrees (not radians!!). This is done by multiplying the matrix by: cos -sin 0 sin cos 0 0 0 1 Calls fltk::wait() as long as any windows are not closed. When all the windows are hidden or destroyed (checked by seeing if Window::first() is null) this will return with zero. A program can also exit by having a callback call exit() or abort(). Most fltk programs will end main() with return fltk::run();. Scale the current transformation by multiplying it by x 0 0 0 y 0 0 0 1 Scale the current transformation by multiplying it by x 0 0 0 x 0 0 0 1 Move the contents of a rectangle by dx and dy. The area that was previously outside the rectangle or obscured by other windows is then redrawn by calling draw_area for each rectangle. This is a drawing function and can only be called inside the draw() method of a widget. If dx and dy are zero this returns without doing anything. If dx or dy are larger than the rectangle then this just calls draw_area for the entire rectangle. This is also done on systems (Quartz) that do not support copying screen regions. fltk::GRAY75 is replaced with the passed color, and all the other fltk::GRAY* colors are replaced with a color ramp (or sometimes a straight line) so that using them for highlighted edges of raised buttons looks correct. Set one of the indexed colors to the given rgb color. i must be in the range 0-255, and c must be a non-indexed rgb color. Obsolete function to encourage FLTK to choose a 256-glyph font with the given encoding. You must call setfont() after changing this for it to have any effect. Notice that this is obsolete! Only the non-Xft X version actually uses it and that may be eliminated as well. In addition FLTK uses UTF-8 internally, and assummes that any font it prints with is using Unicode encoding (or ISO-8859-1 if there are only 256 characters). The default is "iso10646-1" Set the "background" color. This is not used by the drawing functions, but many box and image types will refer to it by calling getbgcolor(). Set the current "brush" in the DC to match the most recent setcolor() and line_style() calls. This is stupid-expensive on Windows so we defer it until the brush is needed. Set the color for all subsequent drawing operations. Sets the current rgb and alpha to draw in, on rendering systems that allow it. If alpha is not supported this is the same as setcolor(). The color you pass should not premultiplied by the alpha value, that would be a different, nyi, call. Store a set of bit flags that may influence the drawing of some fltk::Symbol subclasses, such as boxes. Generally you must also use setcolor() and setbgcolor() to set the color you expect as not all symbols draw differently depending on the flags. The flags are usually copied from the flags() on a Widget. Some commonly-used flags: Set the current font and font scaling so the size is size pixels. The size is unaffected by the current transformation matrix (you may be able to use fltk::transform() to get the size to get a properly scaled font). The size is given in pixels. Many pieces of software express sizes in "points" (for mysterious reasons, since everything else is measured in pixels!). To convert these point sizes to pixel sizes use the following code: const fltk::Monitor& monitor = fltk::Monitor::all(); float pixels_per_point = monitor.dpi_y()/72.0; float font_pixel_size = font_point_size*pixels_per_point; See the fltk::Font class for a description of what can be passed as a font. For most uses one of the built-in constant fonts like fltk::HELVETICA can be used. Set the current "pen" in the DC to match the most recent setcolor() and line_style() calls. This is stupid-expensive on Windows so we defer it until the pen is needed. Older style of color chooser that only chooses the "indexed" fltk colors. This pops up a panel of the 256 colors you can access with "indexed" fltk::Color values and lets the user pick one of them. If the user clicks on one of them, the new index is returned. If they type Esc or click outside the window, the old index is returned. Set r,g,b to the 8-bit components of this color. If it is an indexed color they are looked up in the table, otherwise they are simply copied out of the color number. Destroy any dc or other objects used to draw into this window. Destroy any "graphics context" structures that point at this window or Pixmap. They will be recreated if you call draw_into() again. Unfortunately some graphics libraries will crash if you don't do this. Even if the graphics context is not used, destroying it after destroying it's target will cause a crash. Sigh. Draw a line between all the points in the path (see fltk::line_style() for ways to set the thicknesss and dot pattern of the line), then clear the path. Draw a line inside this bounding box (currently correct only for 0-thickness lines). Returns the current Theme function. By default this points at fltk_theme(). Change what function fltk should call to set the appearance. If you change this after any windows may have been shown, you should call reload_theme(). Replace x and y transformed into device coordinates. Device-specific code can use this to draw things using the fltk transformation matrix. If the backend is Cairo or another API that does transformations, this may return xy unchagned. Transform the rectangle from into device coordinates and put it into to. This only works correctly for 90 degree rotations, for other transforms this will produce an axis-aligned rectangle with the same area (this is useful for inscribing circles, and is about the best that can be done for device functions that don't handle rotation. Same as transform(Rectangle(X,Y,W,H),to) but replaces XYWH with the transformed rectangle. This may be faster as it avoids the rectangle construction. Replace x and y with the transformed coordinates, rounded to the nearest integer. Replace x and y with the tranformed coordinates, ignoring translation. This transforms a vector which is measuring a distance between two positions, rather than a position.. Translate the current transformation by multiplying it by 1 0 0 0 1 0 x y 1 Try sending the current KEY event as a SHORTCUT event. Normally the focus() gets all keystrokes, and shortcuts are only tested if that widget indicates it is uninterested by returning zero from Widget::handle(). However in some cases the focus wants to use the keystroke only if it is not a shortcut. The most common example is Emacs-style editing keystrokes in text editing widgets, which conflict with Microsoft-compatable menu key bindings, but we want the editing keys to work if there is no conflict. This will send a SHORTCUT event just like the focus returned zero, to every widget in the focus window, and to the add_handler() calls, if any. It will return true if any widgets were found that were interested in it. A handle() method can call this in a KEY event. If it returns true, return 1 immediatly, as the shortcut will have executed and may very well have destroyed your widget. If this returns false, then do what you want the key to do. Releases the lock that was set using the fltk::lock() method. Child threads should call this method as soon as they are finished accessing FLTK. If some other thread is waiting for fltk::lock() to return, it will get control. On Windows this makes file_chooser() call the Win32 file chooser API instead of using the one constructed in fltk. Ignored on other systems. Returns the value of FL_VERSION that FLTK was compiled with. This can be compared to the FL_VERSION macro to see if the shared library of fltk your program linked with is up to date. X-specific crap to allow you to force the "visual" used by fltk to one you like, rather than the "default visual" which in many cases has less capabilities than your machine really has! For instance fltk::visual(fltk::RGB_COLOR) will get you a full color display instead of an 8-bit colormap, if possible. You must call this before you show() any windows. The integer argument is an 'or' of the following: This returns true if the system has the capabilities by default or FLTK suceeded in turing them on. Your program will still work even if this returns false (it just won't look as good). On non-X systems this just returns true or false indicating if the system supports the passed values. Same as fltk::wait(infinity). Call this repeatedly to "run" your program. You can also check what happened each time after this returns, which is quite useful for managing program state.. Change where the mouse is on the screen. Returns true if successful, false on failure (exactly what success and failure means depends on the os). Returns the XID for a window, or zero if show() has not been called on it. Returns the X pixel number used to draw the given FLTK color. If a colormapped visual is being used, this may allocate it, or find the nearest match. 1-pixel thick gray line around rectangle. Obsolete. Draws colored edge and draws nothing inside rectangle. You can change this string to convert fltk to a foreign language. The color picked by the most recent setcolor(Color). The device context that is currently being drawn into. Draws a standard box based on the current theme Diamond shape used to draw Motif-style checkboxes. Raised diamond shape used to draw Motif-style checkboxes. The dnd_* variables allow your fltk program to use the Xdnd protocol to manipulate files and interact with file managers. You can ignore these if you just want to drag & drop blocks of text. I have little information on how to use these, I just tried to clean up the Xlib interface and present the variables nicely. The program can set this variable before returning non-zero for a DND_DRAG event to indicate what it will do to the object. Fltk presets this to XdndActionCopy so that is what is returned if you don't set it. The action the source program wants to perform. Due to oddities in the Xdnd design this variable is not set on the fltk::DND_ENTER event, instead it is set on each DND_DRAG event, and it may change each time. To print the string value of the Atom use this code: char* x = XGetAtomName(xdisplay, dnd_source_action); puts(x); XFree(x); You can set this before calling fltk::dnd() to communicate a different action. See dnd_source_types, which you must also set. Zero-terminated list of atoms describing the formats of the source data. This is set on the DND_ENTER event. The following code will print them all as text, a typical value is "text/plain;charset=UTF-8" (gag). for (int i = 0; dnd_source_types[i]; i++) { char* x = XGetAtomName(xdisplay, dnd_source_types[i]); puts(x); XFree(x); } You can set this and dnd_source_action before calling dnd() to change information about the source. You must set both of these, if you don't fltk will default to "text/plain" as the type and XdndActionCopy as the action. To set this change it to point at your own array. Only the first 3 types are sent. Also, FLTK has no support for reporting back what type the target requested, so all your types must use the same block of data. The X id of the window being dragged from. The program can set this when returning non-zero for a DND_RELEASE event to indicate the translation wanted. FLTK presets this to "text/plain" so that is returned if you don't set it (supposedly it should be limited to one of the values in dnd_source_types, but "text/plain" appears to always work). Inset box in fltk's standard theme 2-pixel thick raised line around edge. 2-pixel thick engraved line around edge. fltk will call this when it wants to report a recoverable problem. but in this case the display is so messed up it is unlikely the user can continue. Very little calls this now. The default version on Unix prints a message to stderr, on Windows it pops up a MessageBox, and then both versions call exit(1). You may be able to use longjmp or an exception to get back to your own code. The last timestamp from an X event that reported it (not all do). Many X calls (like cut and paste) need this value. Draws a flat rectangle of getbgcolor(). The single X GC used for all drawing. This is initialized by the first call to Window::make_current(). This may be removed if we use Cairo or XRender only. Most Xlib drawing calls look like this: This is a portion of the string printed by fltk::args() detects an invalid argument on the command-line. You can add this to your own error or help message to show the fltk switches. It's value is (no newline at start or the end): -d[isplay] host:n.n -g[eometry] WxH+X+Y -n[ame] windowname -i[conic] -bg color Draws nothing normally, and as THIN_DOWN_BOX when the mouse pointer points at it or the value of the widget is turned on. Draws nothing normally, and as THIN_UP_BOX when the mouse pointer points at it or the value of the widget is turned on. This dummy 1x1 window is created by fltk::open_display() and is never destroyed. You can use it to communicate with the window manager or other programs. When this is set to true, then (all) message windows will use scrollbars if the given message is too long. The most recent message read by GetMessage() (which is called by fltk::wait(). This may not be the most recent message sent to an FLTK window (because our fun-loving friends at MicroSoft decided that calling the handle procedures directly would be a good idea sometimes...) You can change this string to convert fltk to a foreign language. Draws nothing. Can be used as a box to make the background of a widget invisible. Also some widgets check specifically for this and change their behavior or drawing methods. Ellipse with no border. You can change this string to convert fltk to a foreign language. Ellipse with a black border and gray shadow. Ellipse with a black border. Pushed in version of PLASTIC_UP_BOX Box designed to vaguely resemble a certain fruit-themed operating system. Round-cornered rectangle with no border. Inset oval or circle. Raised oval or circle. Round-cornered rectangle with a black border. Round-cornered rectangle with a black border and gray shadow. 1-pixel-thick inset box. 1-pixel-thick raised box. A up button in fltk's standard theme. fltk will call this when it wants to report a recoverable problem. The display may be messed up but the user can probably keep working. (all X protocol errors call this). The default version on Unix prints a message to stderr, on Windows it pops up a MessageBox. The colormap being used by FLTK. This is needed as an argument for many Xlib calls. You can also set this immediately after open_display() is called to your own colormap. The function own_colormap() can be used to make FLTK create a private one. FLTK uses the same colormap for all windows and there is no way to change that, sorry. The open X display. This is needed as an argument to most Xlib calls. Don't attempt to change it! This is NULL before fltk::open_display() is called. The most recent X event. If non-zero this is the palette alloced by fltk on an 8-bit screen. Hopefully you can ignore this, I'm not even sure it works anymore. Which screen number to use. This is set by fltk::open_display() to the default screen. You can change it by setting this to a different value immediately afterwards. The X visual that FLTK will use for all windows. These are set by fltk::open_display() to the default visual. You can change them before calling Window::show() the first time. Typical code for changing the default visual is: fltk::args(argc, argv); // do this first so $DISPLAY is set fltk::open_display(); fltk::xvisual = find_a_good_visual(fltk::xdisplay, fltk::xscreen); if (!fltk::xvisual) fltk::abort("No good visual"); fltk::xcolormap = make_a_colormap(fltk::xdisplay, fltk::xvisual->visual, fltk::xvisual->depth); // it is now ok to show() windows: window->show(argc, argv); A portable interface to get a TrueColor visual (which is probably the only reason to do this) is to call fltk::visual(int). Set by Window::make_current() and/or draw_into() to the window being drawn into. This may be different than the xid() of the window, as it may be the back buffer which has a different id. You can change this string to convert fltk to a foreign language.
http://www.fltk.org/doc-2.0/html/namespacefltk.html
CC-MAIN-2015-14
en
refinedweb
This is a report from the W3C Technical Architecture Group to the W3C membership on TAG activities from August, 2009 through February, 2010. In our previous status report we announced that the TAG had decided to focus primarily on three major areas of current interest: We have continued to use them as the organizing framework for the majority of our work: sections below discuss each of these, as well as TAG work in some other areas. We continue to put particular emphasis on working with the HTML WG to resolve issues relating to HTML5; we consider HTML5 to be a top priority for the success of the W3C and for the Web community in general, and we wish to support the HTML WG in their efforts to resolve issues promptly. As the TAG has shifted its focus toward working closely with groups like the HTML WG, we have somewhat de-emphasized work on Recommendation-track documents and findings. In part for that reason, the TAG did not produce any formal working drafts or findings during the period covered by this report. We did, during the previous period, produce First Public Working Draft Usage Patterns For Client-Side URI parameters . The TAG may or may not decide to take this forward on the Recommendation track. During the period covered by this report, several less formal drafts and planning documents were prepared and discussed, and these include: In general, these:: While at the November Technical Plenary, the TAG met in a joint session with the HTML working group. HTML5 remains the single most important area of focus for the TAG. During the summer of 2009, the TAG read and reviewed the entire HTML5 draft specification. Based on that reading, we identified and prioritized issues at our September face-to-face meeting. The following subsections briefly summarize some of the more important issues we've worked on during this period: As early as the 2008 Technical Plenary in Mandelieu, the TAG had identified to the HTML working group concerns about the modularity of the HTML5 specification, and also related concerns regarding its consistency with the the specifications for other standards (e.g. then-current use of the term "URL" in HTML5"). As a result of cooperative efforts among the HTML WG, the TAG, and other concerned parties, significant progress has been made on some of these issues. For example: The TAG remains concerned that there may be other parts of the HTML5 specification that might better be published separately and/or that might benefit from tighter alignment with existing specifications. Also raised at Mandelieu in 2008, and discussed in our previous status report was a concern that the then-current HTML5 drafts focused were more focused on user agent conformance than on clearly identifying the syntax, semantics, and other conformance requirements for HTML5 documents. The TAG therefore asked that the HTML5 WG consider the production of an HTML5 Language Reference document. Various forms for such a document were proposed by members of the HTML5 WG and by others. The HTML Working Group has proposed to address the TAG's concern, in part, by publishing a so-called "author's view" of the HTML5 specification. This is built from the same sources as the other versions of HTML5, but it is organized specifically to meet the needs of those who author HTML5 documents. The TAG has indicated general satisfaction with this direction, but there are some outstanding concerns regarding the degree of commitment by the HTML WG to setting clear goals for, ensuring the quality of, and maintaining over the long term this other version of the HTML5 specification. The HTML WG has also published a First Public Working Draft of: HTML: The Markup Language. The TAG has had a long-standing concern with the fact that HTML5, particularly in its text/html serialization, lacks robust facilities for decentralized extensibility of the language. This issue (which is tracked by the HTML WG as their ISSUE-41) remains unresolved. During this period, the TAG took several steps to help address this issue, including: Although not formally speaking on behalf of the TAG, TAG chair Noah Mendelsohn gave an invited plenary talk titled "Decentralized Extensibility in HTML5" (ppt, odp, pdf) at the November 2009 W3C TPAC. Note that the HTML WG chairs have also put out a Call for proposals on Decentralized Extensibility, with a deadline of March 23rd. The TAG is currently preparing a response to that call. The TAG explored at length with the HTML WG group a number of concerns relating to data- and metadata-related facilities for use with HTML5. These included: did the inclusion of microdata facilities directly in HTML5 represent an inappropriate lack of modularity, and also an inappropriate lack of emphasis on RDFa, which is an existing W3C Recommendation? We also considered whether the namespace capabilities (or lack thereof) in HTML5's text/html serialization would unnecessarily complicate the use of RDFa. As already noted, the removal of microdata to a separate specification represents some progress toward addressing the TAG's concerns. The HTML WG has also recently published HTML+RDFa: A mechanism for embedding RDF in HTML. At our joint meeting with the HTML5 WG at the 2009 TPAC, the TAG expressed some concern regarding the then-current treatment in the HTML working drafts of so-called "polyglot" documents, I.e. those that are HTML that is also well-formed XML, but which are served as Content-type text/html. As a result of these discussions, key text in the HTML5 draft was revised to eliminate ambiguities, and to make clearer that at least some such polyglot documents are indeed conformant. The TAG viewed this as a very positive result, and perhaps sufficient to completely resolve our concerns; as of now, the TAG is still discussing whether to advocate for yet more liberal support for polyglot documents, perhaps to include those that have explicit DOCTYPEs. The TAG has also been debating the degree to which current HTML5 drafts would be suitable as a media type registration document, to replace the existing HTML 4-based registration. The most significant concern is the lack of explicit references to early specifications, such as to HTML 2 or to HTML 4. The TAG also participated with others in discussions of other HTML-related topics, including:? As discussed in our previous status report, the TAG decided in June 2009 to begin a more comprehensive exploration of the ways that Web architecture should evolve to support such Web Applications. An initial list of topics for exploration was assembled and discussed. During this period we continued our work in the following specific areas: The TAG is working to understand and to assist in resolving concerns about the implementation of privacy policy for APIs such as geolocation. This topic has become controversial, in part because of differences of opinion between the W3C Geolocation working group and the IETF (GEOPRIV) effort. The W3C group issued a working draft that does not include explicit privacy controls; the IETF responded with a comment on that specification, requesting that GEOPRIV-like mechanisms be adopted, and offering assistance with the work. The TAG held a number of meetings to discuss these concerns, which we understand will arise not just for geolocation, but for other APIs as well. In December of 2009, the TAG issued adopted a position that. Subsequently, Device Access and Policy Working Group chair Frederick Hirsch sent a note to the TAG citing a Geolocation WG resolution against including explicit policy rules, and inviting the TAG's assistance in further exploring the issue. Several notes and discussions have followed from that correspondence. Privacy policy is just one API-related area of concern for the TAG; we expect to explore others in coming months. Several years ago, the TAG published its finding: The use of Metadata in URIs, which includes the Good Practice Note: URI assignment authorities SHOULD NOT put into URIs metadata that is to be kept confidential. Recently, Web-based systems have begun to emerge which do indeed rely on limiting distribution of URIs that contain either secret information in clear text, or tokens that must be kept confidential because they can grant access to sensitive information. That is, these systems distribute (often through email) URIs that give the recipient the capability of retrieving or modifying information that is intended to be kept secret from others. The TAG is actively considering the pros and cons of modifying its advice on putting confidential information into URIs. As noted above, the TAG is considering options for further work on our First Public Working Draft Usage Patterns For Client-Side URI parameters . The TAG continues to explore several issues relating to publishing and accessing metadata on the Web. The TAG continues to review and discuss progress on the following IETF drafts: The work described below on HTTP Semantics is in part supportive of the TAG's work on access to metadata. The TAG is also interested in the formats and vocabularies that are used to encode and transmit metadata. As previously reported, at its June, 2009 face-to-face meeting, the TAG agreed to start work on a TAG Finding on this topic. We have had several discussions since, but actual drafting of a finding has not received much attention in recent months. In addition the primary focus areas discussed above, the TAG did work in several other areas:. TAG member participates regularly, and the TAG discusses the progress of this work from time to time. At the December TAG F2F, Jonathan Rees presented a formal approach that many TAG members felt was a very useful step toward developing a more rigorous specification of HTTP semantics. The TAG continued its explorations of URIs, URNs, URI schemes, and in particular the suitability of the http URI scheme in situations where flexible registration and naming semantics are required. We have also started to consider issues relating to the long term persistence of URI assignments. Henry Thompson and Jonathan Rees prepared revised draft Guidelines for Web-based naming. The TAG has several times considered the Widgets 1.0: Widget URIs. In general, the TAG believes that http-scheme URIs should be used to identify resources in most cases where such use is practical; the TAG is considering, in that context, the merits of having a separate widget scheme for use with widget packaging. The TAG has expressed its gratitude to the EXI working group for their registration of content-coding tag EXI. report was published in July, 2009. The following changes in the membership have occurred since the last TAG report: All new terms began on 1 February, 2010. Other continuing members of the TAG are: No members have left the TAG during the period covered by this report. Dan Connolly is the W3C Staff Contact for the TAG. Notes: $Revision: 1.22 $ of $Date: 2010/03/05 23:18:45 $
http://www.w3.org/2001/tag/2010/sum03.html
CC-MAIN-2015-14
en
refinedweb
J-AXE file splitter is a Windows application developed using C# .NET to split file into time interval based on size and total files after splits. I would not ignore the fact that this is some kind of new application. Already there are so many open source projects available to achieve the same purpose. Then the question is why I have spent lots of time to develop this app: allow me to explain the question: whatever application is available as open and/or paid, you will not get the options to split. You might wonder why this application is named J-AXE? Actually the letter J came from Jawed and AXE is to cut some wooden block. So, I have chosen this application name as J-AXE. Of course, there was something which triggered me to come up with this ideas/application. Last month, I recorded family videos using my camcorder. After that, I added few songs and effects into the recorded video to make it into a movie so that I could distribute this among family members. Now I wanted to split this file into durations instead of size so that I could make it into different parts, something like Part 1 as 30 minutes and Part 2 as 45 Minutes. I searched here and there for open source but everything I got had functionality to splits file only by size not by the time interval. Then this requirement triggered some chemical imagination into my mind to come up with some application to fulfill my requirement and here it is an application “J-AXE: A file Splitter”. J-AXE: A file Splitter Windows application is very handy in use. The whole solution is divided into 2 projects. One project would be User interface and): First, I would like to explain the implementation of UI part. Then, I will explain the Logic part. The block diagram would be something like this: On UI User will get the below option to split the file: Based on the option selected, we will call the corresponding function to splits the file: Here is the code to call the corresponding method: On click of Split Button, we will call the button click event function to get the inputs values like Location of file to splits, Output directory and selected Option. //get the tab selected int tabSelected = tabControlDuration.SelectedIndex; //make sure that inputs file and target folder name is provided if (!string.IsNullOrEmpty(textBoxFileSplit.Text)) { if (!string.IsNullOrEmpty(textBoxOpFileLocation.Text)) {//get the tab selected for the operation switch (tabSelected) { case 0: DurationSplit();//call the method to split the file based on duration. break; case 1: SizeSplit();//call the method to split the file based on Size in KB/M/Bytes. break; case 2: NoOfFileSplit();//Call the method to split the the file based on Numbers. break; default: break; }//end of switch Let’s assume that user has clicked on Duration Tab: Under this option, user needs to provide the Time interval (like 30 minutes of interval) to split the file along with whether they wants the time interval in Minutes or seconds (By default, it would be in seconds). After providing the desired inputs, user needs to click on Split button. On click of this split button, user can see animation of AXE on UI, like something is under AXE. To show the Animation on UI and splitting of files, I have used the concept of BACKGROUND Worker. So that while splitting of the file going on the UI will not get frozen or hang. Allow me to explain something about background worker (Source:). Background worker is part of System.ComponentModel namespace for managing a worker thread. BackgroundWorker is a component that allows you to delegate a long running task to a different thread. It doesn’t stop with that. You can place the component on a Windows Form (it is a non-UI control, so it goes into the component tray). You can register event handlers with it. It takes care of running the long running task in a separate thread while running the task to update control (to report result or progress) in the main event handling thread. BackgroundWorker BackgroundWorker has a RunWorkerCompleted event that fires after the DoWork event handler has done its job. On DoWork, the BackgroundWorker will call our JAXE DLL to split the file and when it’s done, it will call RunWorkerCompleted to show notification message to the user through UI and reset all the controls to initial states. RunWorkerCompleted DoWork DoWork Come to the code part: If you look at my code, I have used RunWorkerAsync by passing parameter as object. It means I’m telling BackgroundWorker to do work asynchronously. Here is the code to perform our task: RunWorkerAsync //Method to split the file based on Duration. private void DurationSplit() { var durationIntreval = comboBoxDuration.SelectedIndex; //Verify the Duration value provided by user if (!string.IsNullOrEmpty(textBoxDuration.Text)) { //Convert the duration in seconds var durationInSeconds = 0.0; if (durationIntreval == 1) durationInSeconds=Convert.ToDouble(textBoxDuration.Text)*60; else durationInSeconds=Convert.ToDouble(textBoxDuration.Text); //set the values var setInPuts = new SetInPuts { SourceFileLocation = textBoxFileSplit.Text, TargetFolderLocation = textBoxOpFileLocation.Text, DurationToSplit =durationInSeconds, TabSelected=tabControlDuration.SelectedIndex }; //Enabled the animation pictureBox1.Visible = true; //disable the button buttonSplit.Enabled = false; //initialize the BackGround worker backgroundWorkerForSplitFile = new BackgroundWorker(); //run back ground worker in sync by passing parameter as object. backgroundWorkerForSplitFile.RunWorkerAsync(setInPuts); //force the BGW to do some work backgroundWorkerForSplitFile.DoWork+=new DoWorkEventHandler (backgroundWorkerForSplitFile_DoWork); //Force the GBW to perform some of the task when previous job done backgroundWorkerForSplitFile.RunWorkerCompleted+= new RunWorkerCompletedEventHandler(backgroundWorkerForSplitFile_RunWorkerCompleted); }//end of if condition }//end of the method Our Dowork will go here, which will ultimately get called by background worker. Dowork private static void backgroundWorkerForSplitFile_DoWork (object sender, DoWorkEventArgs e) { // var split = new JAXE.Jaxe(); var inputToPass = e.Argument as SetInPuts; if (inputToPass != null) { switch (inputToPass.TabSelected) { case 0: split.SplitFileBasedOnDuration(inputToPass.SourceFileLocation, inputToPass.DurationToSplit,inputToPass.TargetFolderLocation); break; case 1: split.SplitFileBasedOnSize(inputToPass.SourceFileLocation, inputToPass.SizeToSplit, inputToPass.TargetFolderLocation); break; case 2: split.SplitFileBasedOnNumberOfFiles(inputToPass.SourceFileLocation, inputToPass.NumOfFileToSplits, inputToPass.TargetFolderLocation); break; default: break; } } }//end of method On completion, the background worker will call RunWorkerCompleted to update the user with Notification message and alongwith reset all the controls. Using background worker, we will call our main logic (JAXE DLL) which will content our splits file logic. If you see the above piece of code, we have called the method like this: split.SplitFileBasedOnDuration(inputToPass.SourceFileLocation, inputToPass.DurationToSplit,inputToPass.TargetFolderLocation); I have called SplitFileBasedOnDuration method which is part of our JAXE class that resides in our external DLL, to split the file based on time interval. SplitFileBasedOnDuration To get the duration of input file, I have used open source DLL that is nothing but “DirectShowLib”. To know more about this, visit this link. After getting the whole duration, I have started splitting the file into time intervals. DirectShowLib To split the file based on duration, I have taken the reference with respect to size of input size. Just have a look at the below code. /); Duration of each byte. //get the duration of each in interval to split var durationOfEachFile = Convert.ToDouble(timeDuration); //get the duration of 1 bytes file var fractionOfTimeDuration = totalDuration / durationOfEachFile; Size of file corresponding to each byte w.r.t. time duration provided by User: //get the size corresponding to duration to split the file var sizeOfEachFile = (int)Math.Ceiling((double)fsDur.Length / fractionOfTimeDuration); Now we will get the Total Numbers of files after splitting the source file. /); Finally, we will start reading the source file byte by byte and will place into output file: for (var i = 1; i <= numberOfFiles; i++) { //file name after splitting var outputFile = new FileStream(outPutLocation + "\\" + baseFileName + "_" + i.ToString().PadLeft(5, Convert.ToChar("0")) + extension, FileMode.Create, FileAccess.Write); var bytesRead = 0; //array of bytes to hold the piece of file data after splitting var buffer = new byte[sizeOfEachFile]; //split the file byte by byte if ((bytesRead = fsDur.Read(buffer, 0, sizeOfEachFile)) > 0) outputFile.Write(buffer, 0, bytesRead); outputFile.Close(); } The output files name would in this format SourceFilename_Numericvalue.extension. SourceFilename_Numericvalue.extension The whole logic would be as below: //Split the file based on duration //Note: There might be a chances where exact duration of the file //would not be same as user provided. deviation of few seconds public void SplitFileBasedOnDuration(string inputFile, double timeDuration,string outPutLocation) { /); //get the duration of each in interval to split var durationOfEachFile = Convert.ToDouble(timeDuration); //get the duration of 1 bytes file var fractionOfTimeDuration = totalDuration / durationOfEachFile; //get the size corresponding to duration to split the file var sizeOfEachFile = (int)Math.Ceiling((double)fsDur.Length / fractionOfTimeDuration); /); for (var i = 1; i <= numberOfFiles; i++) { //file name after splitting var outputFile = new FileStream(outPutLocation + "\\" + baseFileName + "_" + i.ToString().PadLeft(5, Convert.ToChar("0")) + extension, FileMode.Create, FileAccess.Write); var bytesRead = 0; //array of bytes to holed the piece of file data after splitting var buffer = new byte[sizeOfEachFile]; //split the file byte by byte if ((bytesRead = fsDur.Read(buffer, 0, sizeOfEachFile)) > 0) outputFile.Write(buffer, 0, bytesRead); outputFile.Close(); } //close the File stream fsDur.Close(); //return "Done!"; } Finally our files would be ready to use after splitting the source file. I have tested this on few extension source files, it works perfectly. But I have not tested this on all types of source files. Might be few files would not work. Please let me know if you find some bugs. I would be happy to hear from all of you. The first time I used the concept of background worker and I found it very interesting and useful in Windows application when you don’t want your UI to get frozen while doing some complex/lengthy process. Splitting the file based on duration was also very challenging, somehow I overcome those issues, but still there is some deviation (fraction of seconds) in the time duration of output files. BackgroundWorker uses the thread pool, which means you should never call Abort on a BackgroundWorker thread. Background.
http://www.codeproject.com/Articles/223791/JAXE-A-File-Splitter-in-Csharp
CC-MAIN-2015-14
en
refinedweb
15 July 2010 10:26 [Source: ICIS news] SINGAPORE (ICIS news)--South Korea’s LG Chem has bought 25,000 tonnes of open spec spot naphtha at a steep discount of $12.00/tonne (€9.48/tonne) reflecting poor market conditions, a company source said on Thursday. The cargo - bought on ?xml:namespace> “Naphtha’s in the dumps,” said a trader in LG Chem last bought a bigger volume of 100,000 tonnes of spot open-spec naphtha for first half of August delivery at discount levels of between $4.00-5.00,
http://www.icis.com/Articles/2010/07/15/9376705/lg-chem-buys-25000t-naphtha-at-steep-12tonne-discount.html
CC-MAIN-2015-14
en
refinedweb
Difference between revisions of "EBC Exercise 11b gpio via mmap" Latest revision as of 07:28, 18 October 2021. (See if memtool will work instead of devmem2.) An easy way to read the contents of a memory location is with devmem2. First install it with: bone$ wget bone$ gcc devmem2.c –o devmem2 bone$ sudo mv devmem2 /usr/bin Now use it bone$ sudo devmem2 0x4804c13c /dev/mem opened. Memory mapped at address 0xb6f99000. Read at address 0x4804C13C (0xb6f9913c): 0x01. bone$ sudo devmem2 0x4804c190 w 0x01000000 /dev/mem opened. Memory mapped at address 0xb6f53000. Read at address 0x4804C190 (0xb6f53190): 0x01800000 Write at address 0x4804C190 (0xb6f53190): 0x01000000, readback 0x01000000 The LED should be off now. Turn it on using the GPIO_SETDATAOUT (0x194) register. bone$ sudo via c devmem; mmap via python Here's a python version of mmap(). From: from mmap import mmap import time, struct 24 we need bit 24, or 1 shifted left 24 places.3 = .
https://www.elinux.org/index.php?title=EBC_Exercise_11b_gpio_via_mmap&curid=76508&diff=556636&oldid=527071
CC-MAIN-2021-49
en
refinedweb
Author: Keats in post puberty introduction It is said that StringBuilder is more efficient than String in handling String splicing, but sometimes there may be some deviation in our understanding. Recently, when I tested the efficiency of data import, I found that my previous understanding of StringBuilder was wrong. Later, I found out the logic of this piece by means of practical test + finding principle. Now let's share the process test case There are generally two cases when our code splices strings in a loop - The first is to splice several fields in the object into a new field each time, and then assign a value to the object - The second operation is to create a string object outside the loop and splice new content to the string each time. After the loop ends, the spliced string is obtained For both cases, I created two control groups Group 1: Concatenate strings in each For loop, that is, use them and destroy them when they are used up. Use String and StringBuilder to splice respectively /** * String concatenates strings in a loop and is destroyed after one loop */ public static void useString(){ for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { String str = str1 + i + str2 + i + str3 + i + str4 ; } } /** * String builder is used to splice strings in the loop and destroy them after one loop */ public static void useStringBuilder(){ for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { StringBuilder sb = new StringBuilder(); String s = sb.append(str1).append(i).append(str2).append(i).append(str3).append(i).append(str4).toString(); } } Group 2: Multiple For loops splice a String. The String is used at the end of the loop and recycled by the garbage collector. It is also spliced using String and StringBuilder respectively /** * Concatenate multiple loops into a String using String */ public static void useStringSpliceOneStr (){ String str = ""; for (int i = 0; i < CYCLE_NUM_LOWER; i++) { str += str1 + str2 + str3 + str4 + i; } } /** * Concatenate multiple loops into a string with StringBuilder */ public static void useStringBuilderSpliceOneStr(){ StringBuilder sb = new StringBuilder(); for (int i = 0; i < CYCLE_NUM_LOWER; i++) { sb.append(str1).append(str2).append(str3).append(str4).append(i); } } In order to ensure the test quality, before each test item is carried out. The thread rested for 2s, and then ran for 5 times to warm up. Finally, calculate the time by averaging the time for 5 times public static int executeSometime(int kind, int num) throws InterruptedException { Thread.sleep(2000); int sum = 0; for (int i = 0; i < num + 5; i++) { long begin = System.currentTimeMillis(); switch (kind){ case 1: useString(); break; case 2: useStringBuilder(); break; case 3: useStringSpliceOneStr(); break; case 4: useStringBuilderSpliceOneStr(); break; default: return 0; } long end = System.currentTimeMillis(); if(i > 5){ sum += (end - begin); } } return sum / num; } Main method public class StringTest { public static final int CYCLE_NUM_BIGGER = 10_000_000; public static final int CYCLE_NUM_LOWER = 10_000; public static final String str1 = "Zhang San"; public static final String str2 = "Li Si"; public static final String str3 = "Wang Wu"; public static final String str4 = "Zhao Liu"; public static void main(String[] args) throws InterruptedException { int time = 0; int num = 5; time = executeSometime(1, num); System.out.println("String Splicing "+ CYCLE_NUM_BIGGER +" Times," + num + "Time average:" + time + " ms"); time = executeSometime(2, num); System.out.println("StringBuilder Splicing "+ CYCLE_NUM_BIGGER +" Times," + num + "Time average:" + time + " ms"); time = executeSometime(3, num); System.out.println("String Splice a single string "+ CYCLE_NUM_LOWER +" Times," + num + "Time average:" + time + " ms"); time = executeSometime(4, num); System.out.println("StringBuilder Splice a single string "+ CYCLE_NUM_LOWER +" Times," + num + "Time average:" + time + " ms"); } } test result The test results are as follows Result analysis first group 10_ 000_ 000 times of loop splicing, the efficiency of using String and StringBuilder in the loop is the same! Why? Use javap -c StringTest.class to decompile and view the files compiled by the two methods: It can be found that StringBuilder is used after String method splicing and String compiler optimization, so the efficiency of use case 1 and use case 2 is the same. Group 2 The result of the second group is loved by everyone, because 10_ 000_ 000 loop String splicing is too slow, so I used 10_ 000 splices to analyze. Analysis case 3: Although the compiler will optimize String splicing, it creates StringBuilder objects in the loop and destroys them in the loop every time. The next cycle he has created. In comparison, use case 4 creates new objects outside the loop n times, destroys objects, and converts StringBuilder into String n - 1 times. Low efficiency is also a matter of course. extend There is another way to write the test of the first group: /** * String builder is used to splice strings in the loop and destroy them after one loop */ public static void useStringBuilderOut(){ StringBuilder sb = new StringBuilder(); for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { // sb.setLength(0); sb.delete(0, sb.length()); String s = sb.append(str1).append(i).append(str2).append(i).append(str3).append(i).append(str4).toString(); } } Create a StringBuilder outside the loop, empty the contents of the StringBuilder at the beginning of each loop, and then splice. No matter sb.setLength(0) is used in this way; Or sb.delete(0, sb.length()); Efficiency is slower than using String / StringBuilder directly within a loop. However, I don't understand why he is slow. I guess the speed of the new object is slower than the length, so I tested the following: public static void createStringBuider() { for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { StringBuilder sb = new StringBuilder(); } } public static void cleanStringBuider() { StringBuilder sb = new StringBuilder(); for (int i = 0; i < CYCLE_NUM_BIGGER; i++) { sb.delete(0, sb.length()); } } But the result is that cleanStringBuider is faster. I can't figure it out If a great God sees hope, he can help analyze it conclusion - The compiler will optimize String splicing to use StringBuilder, but there are still some defects. It is mainly reflected in the use of String splicing in the loop. The compiler will not create a single StringBuilder for reuse - For the need to splice a String in multiple loops: StringBuilder is fast because it avoids n operations of new object and object destruction, and n - 1 operations of converting StringBuilder into String - StringBuilder splicing is not applicable to the operation mode of each splicing in the loop. Because the compiler optimized String splicing also uses StringBuilder, the efficiency of both is the same. The latter is easy to write -END-
https://programmer.help/blogs/do-you-still-use-concatenation-strings-in-the-for-loop-tomorrow-is-not-for-work.html
CC-MAIN-2021-49
en
refinedweb
- 2.1. Introduction - 2.2. A Simple C Program: Printing a Line of Text - 2.3. Another Simple C Program: Adding Two Integers - 2.4. Arithmetic in C - 2.5. Decision Making: Equality and Relational Operators - 2.6. Secure C Programming 2.2. A Simple C Program: Printing a Line of Text We begin by considering a simple C program. Our first example prints a line of text. The program and its screen output are shown in Fig. 2.1. Fig. 2.1. A first program in C. 1 // Fig. 2.1: fig02_01.c 2// A first program in C. 3#include <stdio.h> 4 5 // function main begins program execution 6int main( void ) 7{ 8printf( "Welcome to C!\n"); 9} // end function main This program illustrates several important C features. Lines 1 and 2 // Fig. 2.1: fig02_01.c // A first program in C begin with //, indicating that these two lines are comments. Comments do not cause the computer to perform any action when the program is run. Comments are ignored by the C compiler and do not cause any machine-language object code to be generated. The preceding comment simply describes the figure number, file name and purpose of the program. You can also use /*...*/ multiline comments in which everything from /* on the first line to */ at the end of the last line is a comment. We prefer // comments because they’re shorter and they eliminate common programming errors that occur with /*...*/ comments, especially when the closing */ is omitted. #include Preprocessor Directive Line 3 #include <stdio.h> is a directive to the C preprocessor. Lines beginning with # are processed by the preprocessor before compilation. Line 3 tells the preprocessor to include the contents of the standard input/output header (<stdio.h>) in the program. This header contains information used by the compiler when compiling calls to standard input/output library functions such as printf (line 8). We explain the contents of headers in more detail in Chapter 5. Blank Lines and White Space Line 4 is simply a blank line. You use blank lines, space characters and tab characters (i.e., “tabs”) to make programs easier to read. Together, these characters are known as white space. White-space characters are normally ignored by the compiler. The main Function Line 6 int main( void ) is a part of every C program. The parentheses after main indicate that main is a function. C programs contain one or more functions, one of which must be main. Every program in C begins executing at the function main. Functions can return information. The keyword int to the left of main indicates that main “returns” an integer (whole-number) value. We’ll explain what this means when we demonstrate how to create your own functions in Chapter 5. For now, simply include the keyword int to the left of main in each of your programs. Functions also can receive information when they’re called upon to execute. The void in parentheses here means that main does not receive any information. In Chapter 14, we’ll show an example of main receiving information. A left brace, {, begins the body of every function (line 7). A corresponding right brace ends each function (line 9). This pair of braces and the portion of the program between the braces is called a block. The block is an important program unit in C. An Output Statement Line 8 printf( "Welcome to C!\n" ); instructs the computer to perform an action, namely to print on the screen the string of characters marked by the quotation marks. A string is sometimes called a character string, a message or a literal. The entire line, including the printf function (the “f” stands for “formatted”), its argument within the parentheses and the semicolon (;), is called a statement. Every statement must end with a semicolon (also known as the statement terminator). When the preceding printf statement is executed, it prints the message Welcome to C! on the screen. The characters normally print exactly as they appear between the double quotes in the printf statement. Escape Sequences Notice that the characters \n were not printed on the screen. The backslash (\) is called an escape character. It indicates that printf is supposed to do something out of the ordinary. When encountering a backslash in a string, the compiler looks ahead at the next character and combines it with the backslash to form an escape sequence. The escape sequence \n means newline. When a newline appears in the string output by a printf, the newline causes the cursor to position to the beginning of the next line on the screen. Some common escape sequences are listed in Fig. 2.2. Fig. 2.2. Some common escape sequences. Because the backslash has special meaning in a string, i.e., the compiler recognizes it as an escape character, we use a double backslash (\\) to place a single backslash in a string. Printing a double quote also presents a problem because double quotes mark the boundaries of a string—such quotes are not printed. By using the escape sequence \" in a string to be output by printf, we indicate that printf should display a double quote. The right brace, }, (line 9) indicates that the end of main has been reached. We said that printf causes the computer to perform an action. As any program executes, it performs a variety of actions and makes decisions. Section 2.5 discusses decision making. Chapter 3 discusses this action/decision model of programming in depth. The Linker and Executables Standard library functions like printf and scanf are not part of the C programming language. For example, the compiler cannot find a spelling error in printf or scanf. When the compiler compiles a printf statement, it merely provides space in the object program for a “call” to the library function. But the compiler does not know where the library functions are—the linker does. When the linker runs, it locates the library functions and inserts the proper calls to these library functions in the object program. Now the object program is complete and ready to be executed. For this reason, the linked program is called an executable. If the function name is misspelled, the linker will spot the error, because it will not be able to match the name in the C program with the name of any known function in the libraries. Using Multiple printfs The printf function can print Welcome to C! several different ways. For example, the program of Fig. 2.3 produces the same output as the program of Fig. 2.1. This works because each printf resumes printing where the previous printf stopped printing. The first printf (line 8) prints Welcome followed by a space, and the second printf (line 9) begins printing on the same line immediately following the space. Fig. 2.3. Printing on one line with two printf statements. 1 // Fig. 2.3: fig02_03.c 2// Printing on one line with two printf statements. 3#include <stdio.h> 4 5 // function main begins program execution 6int main( void ) 7{ 8 printf( 9 printf( "to C!\n" ); 10} // end function main One printf can print several lines by using additional newline characters as in Fig. 2.4. Each time the \n (newline) escape sequence is encountered, output continues at the beginning of the next line. Fig. 2.4. Printing multiple lines with a single printf. 1 // Fig. 2.4: fig02_04.c 2// Printing multiple lines with a single printf. 3#include <stdio.h> 4 5 // function main begins program execution 6int main( void ) 7{ 8printf( toto \n C!\n" );C!\n" ); \n 9} // end function main
https://www.informit.com/articles/article.aspx?p=2062174&seqNum=2
CC-MAIN-2021-49
en
refinedweb
Problem Statement In this problem, we are given a non-negative integer. We have to convert the integer in a such a format, in which there will be some dots which separates all thousands, i.e. there are dots after each 3 places from right. Example #1 n = 987 "987" #2 n = 123456789 "123.456.789" Explanation: The number is 123456789. The rightmost dot will be 3 places from right. So, seeing from right, we will leave 9,8 and 7 behind and put a dot between 6 and 7. Then after leaving 6,5 and 4 behind, we will put a dot between 3 and 4. Now leaving 3,2 and 1 behind, we would place a dot only if there would be more number on left because . should be between two numbers according to the question. Thus we will not place any dot. Approach First of all, we are converting the number into string (let str). Then we are traversing the string str from right. We are using a for loop for this. In each loop we are inserting its three digits followed by a dot. But after each digit insertion, we will check if we have reached out of the left bound of the str. If yes, then we will break the loop. Otherwise, we will keep inserting the 3 digits and then 1 dot. Note that the 3rd checking to insert a dot is the crucial one, which will be used in scenarios like 123 or 123.456 or 123.456.789 where we don’t need to insert a dot before leftmost digit. Because we are inserting the characters from right to left thus our created string needs to be reversed to get the final answer. Thus, after reversing the string, return it. Implementation C++ Program for Thousand Separator Leetcode Solution #include <bits/stdc++.h> using namespace std; string thousandSeparator(int n) { string str=to_string(n); stringstream ss; for(int i=str.length()-1;i>=0;){ ss<<str[i];//inserting 1st digit i--; if(i==-1)break;//checking if we are out of left bound ss<<str[i];//inserting 2nd digit i--; if(i==-1)break;//checking if we are out of left bound ss<<str[i];//inserting 3rd digit i--; if(i==-1)break;//checking if we are out of left bound ss<<".";//after 3 digits insertion, finally inserting a dot "." } str= ss.str(); reverse(str.begin(),str.end());//reversing the final string return str; } int main() { cout << thousandSeparator(123456789); } 123.456.789 Java Program for Thousand Separator Leetcode Solution import java.util.*; import java.lang.*; class Solution { public static String thousandSeparator(int n) { String str=n+""; StringBuilder sb=new StringBuilder(); for(int i=str.length()-1;i>=0;){ sb.append(str.charAt(i));//inserting 1st digit i--; if(i==-1)break;//checking if we are out of left bound sb.append(str.charAt(i));//inserting 2nd digit i--; if(i==-1)break;//checking if we are out of left bound sb.append(str.charAt(i));//inserting 3rd digit i--; if(i==-1)break;//checking if we are out of left bound sb.append(".");//after 3 digits insertion, finally inserting a dot "." } return sb.reverse().toString();//reverse and return the final string } public static void main(String args[]) { System.out.println(thousandSeparator(123456789)); } } 123.456.789 Complexity Analysis for Thousand Separator Leetcode Solution Time Complexity O(len) : we are traversing the given number from right digit to left thus the time Complexity will be O(len) where len is the number of digits in the given number. Space Complexity O(len) : we have used a stringbuilder in java and stringstream in c++ thus using a extra space makes the space complexity O(len).
https://www.tutorialcup.com/leetcode-solutions/thousand-separator-leetcode-solution.htm
CC-MAIN-2021-49
en
refinedweb
#include <cstdint> #include <vector> Go 69 of file tx_verify.cpp. Context dependent validity checks for non coinbase transactions. This doesn't check the validity of the transaction against the UTXO set, but simply characteristic that are suceptible to change over time such as feature activation/deactivation and CLTV. Definition at line 40 of file tx_verify.cpp. Definition at line 140 of file tx_verify.cpp. Check if transaction is final per BIP 68 sequence numbers and can be included in a block. Consensus critical. Takes as input a list of heights at which tx's inputs (in order) confirmed. Definition at line 151 of file tx_verify.cpp.
https://bitcoindoxygen.art/ABC/tx__verify_8h.html
CC-MAIN-2021-49
en
refinedweb
Chapter 1. Introducing the Service Mesh What Is a Service Mesh? Service meshes provide policy-based, network services for network-connected workloads by enforcing desired behavior of the network in the face of constantly changing conditions and topology. Conditions that change can be load, configuration, resources (including those affecting infrastructure and application topology of intracluster and intercluster resources coming and going), and workloads being deployed. Fundamentals Service meshes are an addressable infrastructure layer that allow you to manage both modernizing existing monolithic (or other) workloads as well as wrangling the sprawl of microservices. Service meshes are an addressable infrastructure layer brought to bear in full force. They’re beneficial in monolithic environments, but we’ll blame the microservices and containers movement—the cloud native approach to designing scalable, independently delivered services—for their brisk emergence. Microservices have exploded what were internal application communications into a mesh of service-to-service remote procedure calls (RPCs) transported over networks. Among their many benefits, microservices provide democratization of language and technology choice across independent service teams—teams that create new features quickly as they iteratively and continuously deliver software (typically as a service). The field of networking being so vast, it’s no surprise that there are many subtle, near-imperceptible differences between similar concepts. At their core, service meshes provide a developer-driven, services-first network: one primarily concerned with obviating the need for application developers to build network concerns (e.g., resiliency) into their code; and one that empowers operators with the ability to declaratively define network behavior, node identity, and traffic flow through policy. This might seem like software-defined networking (SDN) reincarnate, but service meshes differ here most notably by their emphasis on a developer-centric approach, not a network administrator–centric one. For the most part, today’s service meshes are entirely software based (although hardware-based implementations might be coming). Though the term intent-based networking is used mostly in physical networking, given the declarative policy-based control service meshes provide, it’s fair to liken a service mesh to a cloud native SDN. Figure 1-1 shows an overview of the service mesh architecture. (We outline what it means to be cloud native in Chapter 2.) Figure 1-1. If it doesn’t have a control plane, it ain’t a service mesh. Service meshes are built using service proxies. Service proxies of the data plane carry traffic. Traffic is transparently intercepted using iptable rules in the pod namespace. This uniform layer of infrastructure combined with service deployments is commonly referred to as a service mesh. Istio turns disparate microservices into an integrated service mesh by systemically injecting a proxy among all network paths, making each proxy cognizant of one another, and bringing these under centralized control; thus forming a service mesh. Sailing into a Service Mesh Whether the challenge you face is managing a fleet of microservices or modernizing your existing noncontainerized services, you can find yourself sailing into a service mesh. The more microservices that are deployed, the greater these challenges become. Client Libraries: The First Service Meshes? To deal with the complicated task of managing microservices, some organizations have started using client libraries as frameworks to standardize implementations. These libraries are considered by some to be the first service meshes. Figure 1-2 illustrates how the use of a library requires that your architecture has application code either extending or using primitives of the chosen library(ies). Additionally, your architecture needs to accommodate the potential use of multiple language-specific frameworks and/or application servers to run them. Figure 1-2. Services architecture using client libraries coupled with application logic The two benefits of creating client libraries are that resources consumed are locally accounted for each and every service, and that developers are empowered to self-service their choice of an existing library or building a new language-specific library. Over time, however, the disadvantages of using client libraries brought service meshes into existence. Their most significant drawback is the tight coupling of infrastructure concerns with application code. Client libraries’ nonuniform, language-specific design makes their functionality and behavior inconsistent, which leads to poor observability characteristics, bespoke practices to augment services that are more or less controllable by one another, and possibly compromised security. These language-specific resilience libraries can be costly for organizations to adopt wholesale, and they can be either difficult to wedge into brownfield applications or entirely impractical to incorporate into existing architectures. Networking is hard. Creating a client library that eliminates client contention by introducing jitter and an exponential back-off algorithm in the calculation of timing the next retry attempt isn’t necessarily easy, and neither is attempting to ensure the same behavior across different client libraries (with the varying languages and versions of those libraries). Coordinating upgrades of client libraries is difficult in large environments as upgrades require code changes, rolling a new release of the application and, potentially, application downtime. Figure 1-3 shows how with a service proxy next to each application instance, applications no longer need to have language-specific resilience libraries for circuit breaking, timeouts, retries, service discovery, load balancing, and so on. Service meshes seem to deliver on the promise that organizations implementing microservices could finally realize the dream of using the best frameworks and language for their individual jobs without worrying about the availability of libraries and patterns for every single platform. Figure 1-3. Services architecture using service proxies decoupled from application logic Why Do You Need One? At this point, you might be thinking, “I have a container orchestrator. Why do I need another infrastructure layer?” With microservices and containers mainstreaming, container orchestrators provide much of what the cluster (nodes and containers) needs. They focus largely on scheduling, discovery, and health, primarily at an infrastructure level (necessarily so), leaving microservices with unmet, service-level needs. A service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast, and reliable, at times relying on a container orchestrator or integration with another service discovery system. Service meshes might deploy as a separate layer atop container orchestrators, but don’t require them, as control and data-plane components might be deployed independent of containerized infrastructure. In Chapter 3, we look at how a node agent (including a service proxy) as the data-plane component is often deployed in noncontainer environments. The Istio service mesh is commonly adopted à la carte. Organization staff we’ve spoken to are adopting service meshes primarily for the observability that they bring through instrumentation of network traffic. Many financial institutions in particular are adopting service meshes primarily as a system for managing the encryption of service-to-service traffic. Whatever the catalyst, organizations are adopting posthaste. And service meshes are not only valuable in cloud native environments, to help with the considerable task of runing microservices. Many organizations that run monolithic services (those running on metal or virtual machines, on- or off-premises) keenly anticipate using service meshes because of the modernizing boost their existing architectures will receive from this deployment. Figure 1-4 describes the capabilities of container orchestrators (asterisks denotes an essential capability). Service meshes generally rely on these underlying layers. The lower-layer focus is provided by container orchestrators. Figure 1-4. Container orchestration capabilities and focus versus service-level needs Don’t We Already Have This in Our Container Platforms? Containers simplify and provide generic, non-language-specific, application packaging and essential life cycle management. As a generic, non-language-specific platform, container orchestrators take responsibility for forming clusters, efficiently scheduling their resources, and managing higher-level application constructs (deployments, services, service-affinity, anti-affinity, health checking, scaling, etc.). Table 1-1 shows how container orchestrators generally have service discovery mechanisms—load balancing with virtual IP addresses built in. The supported load-balancing algorithms are typically simplistic in nature (round robin, random) and act as a single virtual IP to communicate with backend pods. Kubernetes handles the registration/eviction of instances in the group based on their health status and whether they match a grouping predicate (labels and selectors). Then, services can use DNS for service discovery and load balancing regardless of their implementation. There’s no need for special language-specific libraries or registration clients. Container orchestrators have allowed us to move simple networking concerns out of applications and into the infrastructure, freeing the collective infrastructure technology ecosystem to advance our focus to higher layers. Now you understand how service meshes complement underlying layers: what about other layers? Landscape and Ecosystem The service mesh landscape is a burgeoning ecosystem of tooling that’s not relegated to cloud native applications; indeed, it also provides much value to noncontainerized, nonmicroservice workloads. As you come to understand the role a service mesh plays in deployments and the value it provides, you can begin selecting a service mesh and integrating it with your incumbent tooling. Landscape How should you select a service mesh? Of the many service meshes currently available, their significant differences don’t make it easy for people to discern what actually is a service mesh and what isn’t. Over time, more of their capabilities are converging, making it easier to characterize and compare them. Interestingly, but not surprisingly, many service meshes have been based on some of the same proxies, such as Envoy and NGINX. Ecosystem As far as how a service mesh fits in with other ecosystem technologies, we’ve already looked at client libraries and container orchestrators. API gateways address some similar needs and are commonly deployed on a container orchestrator as an edge proxy. Edge proxies provide services with Layer 4 (L4) to Layer 7 (L7) management while using the container orchestrator for reliability, availability, and scalability of container infrastructure. API gateways interact with service meshes in a way that puzzles many, given that API gateways (and the proxies they’re built upon) range from traditional to cloud-hosted to microservices API gateways. The latter can be represented by a collection of microservices-oriented, open source API gateways, which use the approach of wrapping existing L7 proxies that incorporate container orchestrator native integration and developer self-service features (e.g., HAProxy, Traefik, NGINX, or Envoy). With respect to service meshes, API gateways are designed to accept traffic from outside your organization or network and distribute it internally. API gateways expose your services as managed APIs, focused on transiting north-south traffic (in and out of the service mesh). They aren’t as well suited for traffic management within the service mesh (east-west) necessarily, because they require traffic to travel through a central proxy, and add a network hop. Service meshes are designed primarily to manage east-west traffic internal to the service mesh. Given their complementary nature, you’ll often find API gateways and service meshes deployed in combination. API gateways wotk with other API management functions to handle analytics, business data, adjunct provider services, and implementation of versioning control. Today, there is overlap as well as gaps between service mesh capabilities, API gateways, and API management systems. As service meshes gain new capabilities, use cases overlap more. The Critical, Fallible Network As noted, in microservices deployments, the network is directly and critically involved in every transaction, every invocation of business logic, and every request made to the application. Network reliability and latency are among the chief concerns for modern, cloud native applications. One cloud native application might comprise hundreds of microservices, each with many instances that might be constantly rescheduled by a container orchestrator. Understanding the network’s centrality, you want your network to be as intelligent and resilient as possible. It should: Route traffic away from failures to increase the aggregate reliability of a cluster. Avoid unwanted overhead like high-latency routes or servers with cold caches. Ensure that the traffic flowing between services is secure against trivial attack. Provide insight by highlighting unexpected dependencies and root causes of service communication failure. Allow you to impose policies at the granularity of service behaviors, not just at the connection level. Also, you don’t want to write all of this logic into your application. You want Layer 5 management, a services-first network; you want a service mesh. The Value of a Service Mesh Currently, service meshes provide a uniform way to connect, secure, manage, and monitor microservices. Observability Service meshes give you visibility, resiliency, and traffic control, as well as security control over distributed application services. Much value is promised here. Service meshes are transparently deployed and give visibility into and control over traffic without requiring any changes to application code (for more details, see Chapter 2). In this, their first generation, service meshes have great potential to provide value; Istio, in particular. We’ll have to wait and see what second-generation capabilities spawn when service meshes are as ubiquitously adopted as containers and container orchestrators have been. Traffic control Service meshes provide granular, declarative control over network traffic to determine, for example, where a request is routed to perform a canary release. Resiliency features typically include circuit-breaking, latency-aware load balancing, eventually consistent service discovery, retries, timeouts, and deadlines (for more details, see Chapter 8). Security When organizations use a service mesh, they gain a powerful tool for enforcing security, policy, and compliance requirements across their enterprise. Most service meshes provide a certificate authority (CA) to manage keys and certificates for securing service-to-service communication. Assignment of verifiable identity to each service in the mesh is key in determining which clients are allowed to make requests of different services as well as in encrypting that request traffic. Certificates are generated per service and provide a unique identity for that service. Commonly, service proxies (see Chapter 5) are used to take on the identity of the service and perform life cycle management of certificates (generation, distribution, refresh, and revocation) on behalf of the service (for more on this, see Chapter 6). Modernizing your existing infrastructure (retrofitting a deployment) Many people consider that if they’re not running many services, they don’t need to add a service mesh to their deployment architecture. This isn’t true. Service meshes offer much value irrespective of how many services you’re running. The value they provide then only increases with the number of services you run and with the number of locations from which your services deploy. While some greenfield projects have the luxury of incorporating a service mesh from the start, most organizations will have existing services (monoliths or otherwise) that they’ll need to onboard to the mesh. Rather than a container, these services could be running in VMs or bare-metal hosts. Service meshes help with modernization, allowing organizations to upgrade their services inventory without rewriting applications, adopting microservices or new languages, or moving to the cloud. You can use facade services to break down monoliths. You could also adopt a strangler pattern of building services around the legacy monolith to expose a more developer-friendly set of APIs. Organizations can get observability support (e.g., metrics, logs, and traces) as well as dependency or service graphs for each of their services (microservice or not), as they adopt a service mesh. In regard to tracing, the only change required within the service is to forward certain HTTP headers. Service meshes are useful for retrofitting uniform and ubiquitous observability tracing into existing infrastructures with the least amount of code change. Decoupling at Layer 5 An important consideration when digesting the value of a service mesh is the phenomenon of decoupling service teams and the delivery speed this enables, as demonstrated in Figure 1-5. Figure 1-5. Layer 5 (L5), where Dev and Ops meet Just as microservices help decouple feature teams, creating a service mesh helps decouple operators from application feature development and release processes, in turn giving operators declarative control over how their service layer is running. Creating a service mesh doesn’t just decouple teams, it eliminates the diffusion of responsibility among them and enables uniformity of practice standards across organizations within our industry. Consider this list of tasks: Identify when to break a circuit and facilitate it. Establish end-to-end service deadlines. Ensure distributed traces are generated and propagated to backend monitoring systems. Deny users of “Acme” account access to beta versions of your services. Whose responsibility is this—the developer or the operator? Answers likely would differ from organization to organization; as an industry, we don’t have commonly accepted practices. Service meshes help keep these responsibilities from falling through the cracks or from one team blaming the other for lack of accountability. The Istio Service Mesh Let’s now embark on our journey into the Istio service mesh. The Origin of Istio Istio is an open source implementation of a service mesh first created by Google, IBM, and Lyft. What began as a collaborative effort among these organizations has rapidly expanded to incorporating contributions from many other organizations and individuals. Istio is a vast project; in the cloud native ecosystem, it’s second in scope of objectives to Kubernetes. It ingests a number of Cloud Native Computing Foundation (CNCF)–governed projects like Prometheus, OpenTelemetry, Fluentd, Envoy, Jaeger, Kiali, and many contributor-written adapters. Akin to other service meshes, Istio helps you add resiliency and observability to your services architecture in a transparent way. Service meshes don’t require applications to be cognizant of running on the mesh, and Istio’s design doesn’t depart from other service meshes in this regard. Between ingress, interservice, and egress traffic, Istio transparently intercepts and handles network traffic on behalf of the application. Using Envoy as the data-plane component, Istio helps you to configure your applications to have an instance of the service proxy deployed alongside it. Istio’s control plane is composed of a few components that provide configuration management of the data-plane proxies, APIs for operators, security settings, policy checks, and more. We cover these control-plane components in later chapters of this book. Although it was originally built to run on Kubernetes, Istio’s design is deployment-platform agnostic. So, an Istio-based service mesh can also be deployed across platforms like OpenShift, Mesos, and Cloud Foundry, as well as traditional deployment environments like VMs and bare-metal servers. Consul’s interoperability with Istio can be helpful in VM and bare-metal deployments. Whether running monoliths or microservices, Istio is applicable—the more services you run, the greater the benefit. The Current State of Istio As an evolving project, Istio has a healthy release cadence, this being one way in which open source project velocity and health are measured. Figure 6-1 presents the community statistics from May 2017, when Istio was publicly announced as a project, to February 2019. During this period, there were roughly 2,400 forks (GitHub users who have made a copy of the project—either in the process of contributing to the project or using its code as a base for their own projects) and around 15,000 stars (users who have favorited the project and see project updates in their activity feed). Figure 1-6. Istio contribution statistics A simple number of stars, forks, and commits is moderately indicative of project health in terms of velocity, interest, and support. Each of these raw metrics can be improved upon. Reporting the rates of commits, reviews, and merges over time perhaps better indicates project velocity, which is most accurately measured relative to itself, and relative to its own timeline. When determining a project’s health, you should look at whether the rates of these activities are increasing or decreasing, whether the the release cadence is consistent; and how frequently and how many patches are released to improve a low-quality major or minor feature release? Cadence Like many software projects, Istio’s versioning semantics are laid out in a familiar (to Semantic Versioning) style (e.g., version 1.1.1), and, like other projects, Istio defines its own nuances to release frequency, setting expectations of longevity of support (see Table 1-1). Though daily and weekly releases are available, these aren’t supported and might not be reliable. However, as Table 1-1 shows, the monthly snapshots are relatively safe and are usually packed with new features. But, if you are looking to use Istio in production, look for releases tagged as “LTS” (Long-Term Support). As of this writing, 1.2.x is the latest LTS release. As a frame of reference, Kubernetes minor releases occur approximately every three months, so each minor release branch is maintained for approximately nine months. By comparison, because it is an operating system, Ubuntu quite necessarily needs to prioritize stability over speed of feature release, and thus publishes its LTS releases every two years in April. It’s worth noting that the LTS releases are much more heavily used (something like 95% of all Ubuntu installations are LTS releases). Docker uses a time-based release schedule, with time frames generally as follows: Docker CE Edge releases happen monthly. Docker CE Stable releases happen quarterly, with patch releases as needed. Docker EE releases happen twice per year, with patch releases as needed. Updates and patches release as follows: Docker EE releases receive patches and updates for at least one year after they are released. Docker CE Stable releases receive patches and updates for one month after the next Docker CE Stable release. Docker CE Edge releases do not receive any patches or updates after a subsequent Docker CE Edge or Stable release. Releases The original plan was that Istio would have one point release every quarter, followed by n patch releases. Snapshots were intended as monthly releases that would mostly meet the same quality bar as a point release, except that it’s not a supported release and can have breaking changes. A history of all releases is available on Istio’s Releases page on GitHub. Table 1-2 presents Istio’s release cadence over a 10-month period. Future Working groups are iterating on designs toward a v2 architecture, incorporating learnings from running Istio at scale and usability feedback from users. With more and more people learning about service meshes in the future, ease of adoption will be key to helping the masses successfully reach the third phase of their cloud native journey → containers → orchestrators → meshes. What Istio Isn’t Istio doesn’t account for specific capabilities that you might find in other service meshes, or offered by management plane software. This is because it’s subject to change or to be commonly augmented with third-party software. With the prominent exception of facilitating distributed tracing, Istio is not a white-box application performance monitoring (APM) solution. The generation of additional telemetry surrounding and introspecting network traffic and service requests that is available with Istio does provide additional black-box visibility. Of the metrics and logs available with Istio, these provide insight into network traffic flows, including source, destination, latency, and errors; top-level service metrics, not custom application metrics exposed by individual workloads or cluster-level logging. Istio plug-ins integrate service-level logs with the same backend monitoring system you might be using for cluster-level logging (e.g., Fluentd, Elasticsearch, Kibana). Also, Istio uses the same metrics collection and alarming, which might well be the same utility (e.g., Prometheus) that you’re using already. It’s Not Just About Microservices Kubernetes doesn’t do it all. Will the infrastructure of the future be entirely Kubernetes-based? Not likely. Not all applications, notably those designed to run outside of containers, are a good fit for Kubernetes (currently, anyway). The tail of information technology is quite long considering that mainframes from decades ago are still in use today. No technology is a panacea. Monoliths are easier to comprehend, because much of the application is in one place. You can trace the interactions of its different parts within one system (or a limited more or less stagnant set of systems). However, monoliths don’t scale, in terms of development teams and lines of code. Nondistributed monoliths will be around for a long time. Service meshes help in their modernization and can provide facades to facilitate evolutionary architecture. Deployment of a service mesh gateway as an intelligent facade in front of the monolith will be an approach many take to strangle their monolith by siphoning path-based (or otherwise) requests away. This approach is gradual, leading to migrating parts of the monolith into modern microservices, or simply acting as a stopgap measure pending a fully cloud native redesign. Terminology Here are some important Istio-related terms to know and keep in mind: - Cloud A specific cloud service provider. - Cluster A set of Kubernetes nodes with common API masters. - Config store A system that stores configuration outside of the control plane itself for example, etcd in a Kubernetes deployment of Istio or even a simple filesystem. - Container management Loosely defined as OS virtualization provided by software stacks like Kubernetes, OpenShift, Cloud Foundry, Apache Mesos, and so on. - Environment The computing environment presented by various vendors of infrastructure as a service (IaaS), like Azure Cloud Services, AWS, Google Cloud Platform, IBM Cloud, Red Hat Cloud computing, or a group of VMs or physical machines running on-premises or in hosted datacenters. - Mesh A set of workloads with common administrative control; under the same governing entity (e.g., a control plane). - Multienvironment (aka hybrid) Describes heterogeneous environments where each might differ in the implementation and deployment of the following infrastructure components: - Network boundaries Example: one component uses on-premises ingress, and the other uses ingress operating in the cloud. - Identity systems Example: one component has LDAP, the other has service accounts. - Naming systems like DNS Example: local DNS, Consul-based DNS. - VM/container/process orchestration frameworks Example: one component has on-premises locally managed VMs, and the other has Kubernetes-managed containers running services. - Multitenancy Logically isolated, but physically integrated services running under the same Istio service mesh control plane. - Network A set of directly interconnected endpoints (can include a virtual private network [VPN]). - Secure naming Provides mapping between a service name and the workload principals authorized to run the workloads implementing a service. - Service A delineated group of related behaviors within a service mesh. Services are named using a service name, and Istio policies such as load balancing and routing are applied to service names. A service is typically materialized by one or more service endpoints. - Service endpoint The network-reachable manifestation of a service. Endpoints are exposed by workloads. Not all services have service endpoints. - Service mesh A shared set of names and identities that allows for common policy enforcement and telemetry collection. Service names and workload principals are unique within a mesh. - Service name A unique name for a service that identifies it within the service mesh. A service may not be renamed and maintain its identity: each service name is unique. A service can have multiple versions, but a service name is version-independent. Service names are accessible in Istio configuration as the source.serviceand destination.serviceattributes. - Service proxy The data-plane component that handles traffic management on behalf of application services. - Sidecar A methodology of coscheduling utility containers with application containers grouped in the same logical unit of scheduling. In Kubernetes’s case, a pod. - Workload Process/binary deployed by operators in Istio, typically represented by entities such as containers, pods, or VMs. A workload can expose zero or more service endpoints; a workload can consume zero or more services. Each workload has a single canonical service name associated with it, but can also represent additional service names. - Workload name Unique name for a workload, identifying it within the service mesh. Unlike service name and workload principal, workload name is not a strongly verified property and should not be used when enforcing access control lists (ACLs). Workload names are accessible in Istio configuration as the source.nameand the destination.nameattributes. - Workload principal Identifies the verifiable authority under which a workload runs. Istio service-to-service authentication is used to produce the workload principal. By default, workload principals are compliant with the SPIFFE ID format. Multiple workloads may share a workload principal, but each workload has a single canonical workload principal. These are accessible in Istio configuration as the source.userand the destination.userattributes. - Zone (Istio control plane) Running set of components required by Istio. This includes Galley, Mixer, Pilot, and Citadel. A single zone is represented by a single logical Galley store. All Mixers and Pilots connected to the same Galley are considered part of the same zone, regardless of where they run. A single zone can operate independently, even if all other zones are offline or unreachable. A single zone may contain only a single environment. Zones are not used to identify services or workloads in the service mesh. Each service name and workload principal belongs to the service mesh as a whole, not an individual zone. Each zone belongs to a single service mesh. A service mesh spans one or more zones. In relation to clusters (e.g., Kubernetes clusters) and support for multienvironments, a zone can have multiple instances of these. But Istio users should prefer simpler configurations. It should be relatively trivial to run control-plane components in each cluster or environment and limit the configuration to one cluster per zone. Operators need independent control and a flexible toolkit to ensure they’re running secure, compliant, observable, and resilient microservices. Developers require freedom from infrastructure concerns and the ability to experiment with different production features, and deploy canary releases without affecting the entire system. Istio adds traffic management to microservices and creates a basis for value-add capabilities like security, monitoring, routing, connectivity management, and policy. Get Istio: Up and Running now with O’Reilly online learning. O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers.
https://www.oreilly.com/library/view/istio-up-and/9781492043775/ch01.html
CC-MAIN-2021-49
en
refinedweb
#include <interaction.h> Representation of an interaction via an end-effector Displays one marker for manipulating the EEF position. Definition at line 157 of file interaction.h. The name of the group that defines the group joints. Definition at line 166 of file interaction.h. Which degrees of freedom to enable for the end-effector. Definition at line 169 of file interaction.h. The name of the group that sustains the end-effector (usually an arm) Definition at line 160 of file interaction.h. The name of the link in the parent group that connects to the end-effector. Definition at line 163 of file interaction.h. The size of the end effector group (diameter of enclosing sphere) Definition at line 172 of file interaction.h.
http://docs.ros.org/en/hydro/api/moveit_ros_robot_interaction/html/structrobot__interaction_1_1EndEffectorInteraction.html
CC-MAIN-2021-49
en
refinedweb
Whenever we write up a feature on a microcontroller or microcontroller project here on Hackaday, we inevitably get two diametrically opposed opinions in the comments. If the article featured an 8-bit microcontroller, an army of ARMies post that they would do it better, faster, stronger, and using less power on a 32-bit platform. They’re usually right. On the other hand, if the article involved a 32-bit processor or a single-board computer, the 8-bitters come out of the woodwork telling you that they could get the job done with an overclocked ATtiny85 running cycle-counted assembly. And some of you probably can. (We love you all!) When beginners walk into this briar-patch by asking where to get started, it can be a little bewildering. The Arduino recommendation is pretty easy to make, because there’s a tremendous amount of newbie-friendly material available. And Arduino doesn’t necessarily mean AVR, but when it does, that’s not a bad choice due to the relatively flexible current sourcing and sinking of the part. You’re not going to lose your job by recommending Arduino, and it’s pretty hard to get the smoke out of one. But these days when someone new to microcontrollers asks what path they should take, I’ve started to answer back with a question: how interested are you in learning about microcontrollers themselves versus learning about making projects that happen to use them? It’s like “blue pill or red pill”: the answer to this question sets a path, and I wouldn’t recommend the same thing to people who answered differently. For people who just want to get stuff done, a library of easy-to-use firmware and a bunch of examples to crib learn from are paramount. My guess is that people who answer “get stuff done” are the 90%. And for these folks, I wouldn’t hesitate at all to recommend an Arduino variant — because the community support is excellent, and someone has written an add-on library for nearly every gizmo you’d want to attach. This is well-trodden ground, and it’s very often plug-and-play. Know Thyself But the other 10% are in a tough position. If you really want to understand what’s going on with the chip, how the hardware works or at least what it does, and get beyond simply using other people’s libraries, I would claim that the Arduino environment is a speed bump. Or as an old friend of mine — an embedded assembly programmer — would say, “writing in Arduino is like knitting with boxing gloves on.” His point being that he knows what the chip can do, and just wants to make it do its thing without any unnecessary layers of abstraction getting in the way. After all, the engineers who design these microcontrollers, and the firms that sell them, live and die from building in the hardware functionality that the end-user (engineer) wants, so it’s all there waiting for you. Want to turn the USART peripheral on? You don’t need to instantiate an object of the right class and read up on an API, you just flip the right bits. And they’re specified in the datasheet chapter that you’re going to have to read anyway to make sure you’re not missing anything important. For the “get it done” crowd, I’m totally happy recommending a simple-to-use environment that hides a lot of the details about how the chip works. That’s what abstractions are for, after all — getting productive without having to understand the internals. All that abstraction is probably going to come with a performance cost, but that’s what more powerful chips are for anyway. For these folks, an easy-to-use environment and a powerful chip is a great fit. But my advice to the relative newbie who actually wants to learn the chip is the exact opposite. Pick an environment that maximally exposes the way the chip works, and pick a chip that’s not so complicated that it completely overwhelms you. It’s the exact opposite case: I’d recommend a “difficult” but powerful environment and a simple chip. If you want to become a race-car driver, you don’t start you out on a Formula 1 car because that’s like learning to pilot a rocket ship. But you do need to understand how a car handles and performs, so you wouldn’t start out in an automatic luxury sedan either. You’d be driving comfortably much sooner, but you lose a lot of empathy for the drivetrain and intuition about traction control. Rather, I’d start you out on something perhaps underpowered, but with good feel for the road and a manual transmission. And that, my friends, is straight C on an 8-bitter. Start Small… I hear the cries of the ARM fanboys and fangirls already! “8-bit chips are yesterday, and you’re just wasting your time on old silicon.” “The programming APIs for the 8-bit chips are outdated and awkward.” “They don’t even have DMA or my other favorite peripherals.” All true! And that’s the point. It’s what the 8-bitters don’t have that makes them ideal learning platforms. How many of you out there have actually looked at the code that runs when you type HAL_Init() or systinit() or whatever? Even if you have, can you remember everything it does? I’m not saying it’s impossible, but it’s a lot to throw at a beginner just to get the chip up and running. Complex chips are complex, and confronting a beginner with tons of startup code can overwhelm. But glossing over the startup code will frustrate the dig-into-the-IC types. Or take simple GPIO pin usage. Configuring the GPIO pin direction on an 8-bitter by flipping bits is going to be new to a person just starting out, but learning the way the hardware works under the hood is an important step. I wouldn’t substitute that experience for a more abstract command — partly because it will help you configure many other, less user-friendly, chips in the future. But think about configuring a GPIO pin on an ARM chip with decent peripherals. How many bits do you have to set to fully specify the GPIO? What speed do you want the edges to transition at? With pullup or pulldown? Strong or weak? Did you remember to turn on the GPIO port’s clock? Don’t get me started on the ARM chips’ timers. They’re plentiful and awesome, but you’d never say that they’re easy to configure. The advantage of learning to code an AVR or PIC in C is that the chip is “simple” and the abstraction layers are thin. You’re expected to read and write directly to the chip, but at least you’ll know what you’re doing when you do it. The system is small enough that you can keep it all in your mind at once, and coding in C is not streamlined to the point that it won’t teach you something. And the libraries are there for you too — there’s probably as much library code out there for the AVR in C as there is in Arduino. (This is tautologically true if you count the Arduino libraries themselves, many of which are written in C or C-flavored C++.) Learning how to work with other people’s libraries certainly isn’t as easy as it is in Arduino, where you just pull down a menu. But it’s a transferable skill that’s worth learning, IMO, if you’re the type who wants to get to really know the chips. …But Get Big So start off with the 8-bit chip of your choice — they’re cheap enough that you can buy a stick of 25 for only a tiny bit more than the price of a single Arduino. Or if you’d like to target the 8-bit AVR platform, there’s a ton of cheap bare-bones boards from overseas that will save you the soldering, and only double the (minimal) cost. Build 25 projects with the same chip until you know something about its limits. You don’t have to achieve lft levels, or even memorize the ghastly “mnemonic” macro names for every bit in every register, but at least get to know how a timer/counter is good for. Along the way, you’ll develop a library of code that you use. Then move on. Because it is true that the 8-bitters are limited. Along the way, you’ll have picked up some odd habits — I still instinctively cringe at 32-bit floating point multiplications even on a chip with a 32-bit FPU — so you don’t want to stagnate in the 8-bit world too long. Maybe try an RTOS, maybe play around with all the tools you’ve been missing on the smaller chips. Luxuriate in the extra bit depth and speed! Use the higher-level, black-boxy libraries if you feel like it, because you’ll know enough about the common ways that all these chips work inside to read through the libraries if you want to. If you started out the other way, it would all be gibberish — there’s not a digitalWrite() in sight! After a few projects with the big chips, you’ll have a good feel for where the real boundaries between the 8- and 32-bit chips lie. And despite what our comments would have you believe, that’s not something that anyone can tell you, because it depends as much on you as on the project at hand. Most of us are both blue-pill and red-pill microcontroller users from time to time, and everyone has different thresholds for what projects need what microcontroller. But once you’ve gained even a bit of mastery over the tools, you’ll know how to choose for yourself. 152 thoughts on “When Are 8 Bits More Than 32?” My advice to the 10%ers is to look up an old kit with a pic16f84a. Get one that teaches assembly and do one decent project in asm/risc etc. Once you have beat yourself up with it and know how , start learning to do the same projects in C. There are tons of old time kits teaching this for about 100$ . Once you know this stuff, microchips thousands of chips become a blessing not a curse. IF you go this route duino code is a bit obnoxious to work with. Silicon Chip () and Elektor Publications are a must in addition to 16F84 resources (as per addidas’ comment) in order to really understand the details. I’m surprised Hackaday doesn’t collaborate with these publications but it would be of joint benefit if Hackaday did… There’s a company Altronics (altronics.com.au) that does an insanely good jobs with technical projects! Jaycar too (jaycar.com.au) and Freetronics (freetronics.com.au). On another note, I find there are two types of programmers – those that come from an electronics background (and prefer lower level programming with cost effective 8-bit built for purpose chips – where selecting the best chip is a skill) and those that come from a computer science background which favor the richness of higher level languages, which almost always require 32 bit boards. Agree on elektor , I think my first kit was “talking electronics” kit. I was mighty disapointed at first because I had to fix my programmer to do icsp but then I realized to be unhappy with the programmer , I had to firstunderstand what i was doing and now love them for what they taught me. Also to be fair, many of us started with 8-bit given that 32-bit dev boards with a screen used to cost $5000 only 10 years ago. You get a lot more bang-for-buck these days, given smartphone’s have driven down device/part costs. In an ideal world we’d develop in 32/64 bit high-level first and then optimize for the application the closer to production we get (that includes using FPGAs/ASICs or 8/16 bit micro-controllers) or with subsequent product releases. I also started on pic16f84a! It’s hard to imagine a simpler microcontroler. Great for beginners, especially if they want to experience a variety of architectures. I wouldn’t recommend the 16f84 to _anyone_ regardless of what they want to achieve. Yes I have driven 16f84 chips within an inch of their lives and literally only a few bytes of code space to spare left – no there is no conceivable circumstance where I could possibly be persuaded to use one for anything today. There is such a thing as overly simple and clunky, and the 16f84 certainly is so by any standard relevant today: there simply aren’t any of the peripherals in it that make a 8-bit chip usable (and useful!) today and as soon as you want to do more then blink an LED you’ll start being limited by the 1K flash. It’s like teaching a future F1 driver how to double-declutch – it’s obsolete and useless knowledge, irrelevant to anything they’ll ever do as an F1 driver. Or, putting it differently, the 16f84 is the equivalent of obfuscation contests and/or the 2048 byte compo category of the demoscene – it’s utterly irrelevant to learning to program in any context unless your ambition is to compete within that specific, artificial challenge. Yes, if you do it well, it does showcase your mad skillz. No, it’s absolutely not required in any sense to _acquire_ those mad skillz. You can learn the same just by bare-metal programming any 8-bit _OR_ 32-bit chip in a properly efficient way (as opposed to Arduinos which throw away 95% of your MCU’s performance and hardware features right off the bat). Learning how to add a software USB peripheral to an MCU that doesn’t have a hardware one may teach you about USB but it’s just wasting your time unless you cared to learn in-depth about USB: you really should just use one that has that if you wanted to use USB. Learning how to generate jitter-free stepper control pulses using all the built-in peripherals to help you do it, on the other hand, is actually a useful achievement. if you want to learn to write risc its perfect . Put a few buttons few led , bit bang out some pwm and software debounce the buttons. Between that and chip setup you have a fine project. Its perfect for learning how to write inline asm later for timing critical things. I’m with the 16f84a + assembly route all the way. Screw that ugly abomination, the sluggish speed of it, the butt ugly page flipping, its registers, all of it. Just nail it to the wall, and better play with a simulator of an Z80 or i8080 for a classic von Neumann or take a venerable 2313 or any other AVR for modified Harvard and enjoy a much more logical and streamlined architecture. I still hate that chip from the time it was taught in university. Very well written. I followed AVR then ARM path, and currently enjoying FPGAs. Completely agree with the author. AVR butterfly is a good start, and later for ARM, I’d certainly suggest ST, mainly because of the support from the company. Enjoying FPGA’s? Talk about an acessibility barrier! Every time I’ve tried to crack open a Verilog or VHDL tutorial, or even a Xilinx ISE tutorial, I’ll end up saying “eff this”, due to frustration from the ridiculous complexity of doing anything on FPGAs :\ I’ve had two polar experiences with FPGAs: all within the context of the same class. 1) It’s really cool to do the low-level gates and flip-flops stuff. 2) Pretty shortly thereafter, you toss in a soft CPU and link in tons of IP cores and it becomes even plug-and-player than using an Arduino. FPGA pros do all the work that’s between these two extremes, I assume. (But yeah, the FPGA flows are a pain to get through. It’s a one time cost. And see Al Williams’ FPGA tutorials here on HaD.) I found the cheapest FPGA route to take is to buy barebone Altera breakout board (~20$) from Waveshare and get a USB blaster from ebay (~4$). Install slightly older version of Quartus II (14.1), then you’re good to go. They provide you with schematics and lots of demo projects. For more: @[Andrew Pullin] I tried to start with a CPLD kit from Vantis for the MACH2 chips decades ago. I couldn’t register the software because I had no internet connection so I gave up and through it in the bin. If Vantis still existed the daggers would be out. It was such a negative experience that I will never forget and I will never forgive. And I am not surprised that they’re dead now because you can’t fuc( with end users forever – alternatives evolve. Unfortunately the assholes that run these companies have never heard of user experience and spend so much time worrying about their IP that it’s still a pain to deal with them. Not to worry – new open source tool chains are evolving so one day very soon we can say to them – fuc( you and your registration process. If you have ever had a PC crash and need to be reloaded then you will totally understand me totally! But for now we still need to put up with third world software and the sooner you get over this hurdle the better. Go though the process – register with the web site for the download – download – install – register somewhere else to get a registration key – try to register again – register to open a support ticket to work out which of the 6 gazillion far(en registration detail that you have to obtain over the process is actually the registration key for the the far(en software. Install the software – find out it the wrong version for the chips your using – start over. Spend a great deal of time wishing there was some competition and how fast your going to through this all in the bin when there is. After installing the correct software for the chip – find out that your programmer won’t work with the software – look for a version that is compatible for both – find that you have to split the software and manipulate it so that you can run one version of the IDE and another version of the programming software. This step takes about 3 days on youtube. Alternatively just get the Altera package – as it’s the least worse of the IDEs / ISEs. Go to ebay an buy a $15 – $30 CPLD develoment board and a $6 programmer (USB Blaster). OK right now I can hear all those screening about how bad a choice CPLD is as far as bang for buck goes so here’s the question … Do you want a CPLD that has a 20 page datasheet or a FPGA with a 600 page datasheet. I something isn’t working and you want to work out why … well good luck with that 600 page datasheet. Now when you have learnt the basics and know how to choose what you want go buy something more expensive. Once you get over the software setup and find some good tutorials you are on your way. Spend plenty of time looking for tutorials: web / text and youtube so that you find something that is at your level. One other thing – if your doing VHDL (perhaps Verilog is supported to) then download a trial version of Sigasi. I can’t afford a full version but it it an excellent learning tool. It’s like a code editor and has those sorts of features like syntax highligh and cod completion BUT it also alerts you to errors IMMEDIATELY and I can’t understate how useful that is as the error messages that come from the IDEs (compilers) are totally cryptic. Don’t get me started on the error messages and warnings – oh god please don’t …. Have you ever tried to play with Papilio? I found it was quite accessible to go through the long tutorial provided on their website. I’ve also been able to program by myself a custom clock generator after I bricked an ATMega32 playing with fuses, and I found it was not so difficult. I really have to throw in the MSP430s into the ring here as well. I mean, after all, they’re 16-bits, so it’s a good midpoint, right? Right?? I really just find the basic MSP430 instruction set a heckuva lot cleaner than PICs or AVRs, and it’s nowhere near as complex as things need to get for 32-bit guys like ARMs. To me it doesn’t even feel like a preference thing – it feels pretty fundamental. 256 bytes is pretty unusable for all but the easiest microcontroller projects, but 64k is a good amount of code, and 4 GB is obviously stupid-overkill. And although 8-bit processors obviously have ways of addressing much more than 256 bytes, they all have to jump through *some* hoops to do it – either through the X/Y/Z register pairs in AVR, or bank selects on a PIC. On an MSP430 there’s no trick: 16-bit register, 16-bit address space. No tricks. (This is the same reason why I’m not fond of the extended memory bits in an MSPX, and typically work with small code/data models unless I really need to). And I have to say the MSP430 FRAM-based microcontrollers are just really, really nice. Being able to use *huge* amounts of FRAM as a buffer just like you would RAM is fantastic. 2 kB transmit buffer? Sure, why not? No big deal. Add to that the fact that LaunchPads are dirt cheap… they’re really, really nice. I totally wish I had more experience with the MSP430s because the claim that you’re not quite making — “16 bits should be enough for anybody” — is actually pretty close to correct. 8 bits is often too little resolution for many sensors, for instance, and 32 is surely overkill. 16 is a sweet spot. But consider the learning experience of overflowing (on purpose or otherwise) your 8-bit counter variables. Consider the value of having to think about what data types you’re actually going to use. It’s a pain in the ass, sure, but it’s educational. I’m not sure I’d have learned the same lessons on a 16-bit machine. The compiler hides a lot of the details that bug you about the 8-bit memory addressing, though, so that’s a lesson that won’t get learned until one digs into assembler. But that’s for much further down the road? You do realize those LaunchPads are like, $10, right? :) “But consider the learning experience of overflowing (on purpose or otherwise) your 8-bit counter variables. ” It’s not a limitation! It’s a learning experience! :) But I’m not sure I’d agree here. 16-bit counters still overflow quickly: a 30 Hz system tick overflows in about an hour, which made me whack myself in the face when the uC started going crazy after I left it run for a bit. 32-bits are where you can get in trouble pretending they last forever. “The compiler hides a lot of the details that bug you about the 8-bit memory addressing, though, so that’s a lesson that won’t get learned until one digs into assembler. ” Exactly, which is why I like them. Programming in C is easy, and debugging the assembly is pretty straightforward too. And in fact, switching to assembly-written stuff is pretty easy, too, which can get you a huge performance boost, especially in ISRs. Looking at the Arduino ISRs in assembly always makes me cringe. And unfortunately, debugging is sadly important, because microcontroller compilers, in general, suck. As evidenced by the inlined assembly in Arduino’s SoftwareSerial library due to a buggy GCC version in OSX. And that’s not even for performance reasons! “You do realize those LaunchPads are like, $10, right? :)” It’s worse than that — they’re free to me b/c I have one sitting in the closet right now. Sadly, it’s the time. I’ll use an MSP on my next non-time-constrained project, ok? “a 30 Hz system tick overflows in about an hour, which made me whack myself in the face when the uC started going crazy after I left it run for a bit.” An 8-bit counter at 30Hz overflows before you’ve finished your coffee. If you started out on 32 bits, you wouldn’t notice the bug until it was in the customers’ hands. :) Seriously, though, many of the embedded gotchas are the same no matter how many bits you’ve got. You just run into them all the time with an 8-bitter. (And I’ll claim that it’s good to hit them first in a simpler context.) Re: debugging. If I could pick a super power: I wish I were a better debugger. hey you know you can hook up gdb to an ARM, right? ;) Don’t forget that starting out with the MSP430 will allow you to easily take different paths, depending on your needs. You could use Code Composer Studio and learn bare metal C and/or assembly if you want, or you can use Energia and take advantage of the Arduino community. Need more CPU power? Step up to the MSP432 (They are on sale at the TI store for $4.32 right now in celebration of Engineers Week). You get the easy to understand MSP peripherals and a full 32-bit ARM CPU under the hood. The best of both worlds. With that you can experiment with TI-RTOS if you like. The Launchpad ecosystem in combination with Code Composer Studio is probably the best debugging environment you can find out there, and their documentation and support is really quite good. You can’t go wrong. I already mentioned the FRAM-based microcontrollers, but honestly, they’re so ridiculously awesome I have to mention them again, because *no one else* has anything close to them. I mean, you can get an MSP430FR6989 LaunchPad for $15. Want to have a double-image buffer for a 32x32x32 LED cube, so you can run things at their max speed and not even worry about image tearing or anything? No problem! That’s only 64k. Still have 64k for the program, and a full 2K of RAM free. On a microcontroller level, it’s just unheard of. I was recommended the MSP432 as a beginner a few months ago but only now realising what a good choice it is because of the things you’ve listed. There’s bare C and assembly, there’s a Driver Lib, there’s Energia to prototype and finally the RTOS. That’s a lot of bases covered for one little cheap launchpad. 8/16/32 bits is the width of the data bus, not the address bus. The data bus typically determines the size of the registers. If the data bus is smaller than the address bus, you’ll need to use *some* sort of hoop-jumping to indirectly access data (via pointers), which is what I said. On AVR this is through X/Y/Z register pairs, on PIC this is through bank switching, etc. All of that costs performance (not that big a deal) but also makes it a fair amount harder to learn to debug what’s going in the assembly. That’s why I said 16-bits is kindof a sweet spot. You can have the address bus be equal to the data bus and still be useful, and the assembly is more approachable. There aren’t special register pairs, there isn’t a bank register you need to make sure didn’t get messed with, etc. The bitness of a computer is it’s processing width at assembly level. Which is a bit of a mouthful. For example, an 8088 is a 16-bit processor, but it has an 8-bit address bus. That’s because it fully supports 16-bit operations on its registers. The Z80, 6809 and 6502 are all 8-bit CPUs, because they fully support ALU operations only on 8-bit values. This is true even though the Z80 and 6809 have some support (add/sub/inc/dec/load/store) for 16-bit data values. The PowerPC G4 and Pentium CPUs were a 32-bit processor. This was true despite them having 64-bit data busses, because they only fully supported up to 32-bit operations in their instruction sets. I hope this helps. I wanted a dirt cheap micro controller for a simple project that required some clock timing (seconds/mins/days timing). Ended up with the bottom of the range MSP430G2201 as a perfect candidate. Very simple, no extra unnecessary hardware. Read through the documentation and knew the entire chip inside and out within a day. It’s also a battery powered project so I wanted the lowest power draw possible. Not having to write 30 lines of code to disable all the extra crap was great. When you’re making a $10 product, even in small volumes, paying $2 for a 32bit ARM cpu when you don’t need it instead of a 30c 8/16bit micro is silly. Spending twice as long fitting your code onto a $2 PIC instead of a $5 ARM for a one-off project on the other hand, is equally silly. They all have their place. about the MSP430s, I have to agree with you. Never actually tried but the texas instrument chip but it’s true that memory trick are just annoying on 8bit, and writing address assembly is annoying on 32bits (0x103A5634 is just one number) When I first heard about FRAM I thought it would take the embedded world by storm. It looks and sound awesome. Will try it ASAP but still sitting on a stack of AVR so i’m still using that for now. I’ve always wanted to learn how to program microcontrollers, but I procrastinated for the longest time. But I finally started teaching myself C because I’m the type that likes to know what’s going on under the hood. It’s fun. I picked up a Fram based Ti development board because it was pretty cheap. I stressed for a bit about which boards, AVR, TI, ARM etc I should start with but I ended up getting the TI because of the price. I’m super excited to start using it once I get a better understanding of C. Your comment got me even more pumped! I’ve learned a ton about microcontrollers by focusing on the ATtiny platform over the last few years. 200 page datasheets are great! Seems like the STM32F0 is a good next step. STM32F0 is an excellent next step! It comes with a slight learning curve, but the DMA alone is worth the effort. STM32F0 are great, mainly because of the bang for the buck. It’s a full-fledged 32-bit controller at 50 cents in bulk. When I design products with some simple function, I’ll definitely use these (as a matter of fact: I already have). However, I would say that the learning curve for the F0 is about the same as for the F1-series. The upside of F0 is, as mentioned, the price. Get an stm32-nucleo. They’re about 10 bucks for the cheaper ones. Another plus is the “snippets” library for the F0/L0 processors. They have some nice bare metal examples utilizing just about every peripheral. Also, you get a free, unlimited Keil license. F0 and L0 series are great. I’ve got an L0 nucleo board with couple of free add-ons from ST for free in one of their hans-on workshop. I do appreciate their effort especially for enthusiasts like us. Here is the link, see if you can find a location near to you: STM32 are awesome, you can use the free IDE (atollic truestudio) I think you will love the ST’s plus the Nucleo comes with the programmer on board :) STM32’s are easily programmed using MBED and also the Arduino IDE (for some STM32 devices). The bang / buck is hard to beat, when the development boards cost less than $5 for a 72Mhz device with 64k flash and 20k RAM (F103), (Note GD32’s are even cheaper and also run faster) Asm used to scare me but have been learning on a chip with nice register structure (and even still some of the instructions feel like calling Arduino functions, like moving a byte by just telling it what and where). I want to know how the instruction executes but thats where I start reaching my limits…Porting asm is where it sucks, or fairly large projects, say you have to maintain an older project and some dirty hacks aren’t documented. Embedded world is pretty sweet though, best not to think too much about where or what to do and just get into some chip(s). The toolchains are also the real magic too. My current microprocessor course is based off of a freescale 68hc12 (dragon12 dev kit), I think it’s a pretty okay platform for teaching about computational architecture. It’s a hard von neumann architecture and the processor itself only has a handful of registers (d register, two index registers SP PC ect). In my opinion it’s a good platform for assembly learning if you can get past the shit documentation, though to be fair I haven’t dug that deep into any other platform like I have the hc12. I’d argue both, when it’s right. Getting something done, is indeed, different than learning the nitty-gritty. (Disclaimer: I cut my teeth in the days of the 8008…so YMMV.) I’d also argue, that if you want to learn the assembly level stuff, you can learn it, in small bites, with the lowly Arduino platform (and have a lot of help available, because it’s really just a dev board for the CPU that’s on the board). It’s not that difficult to write inline assembly in C/C++ (let’s remember that the Arduino IDE is really just a dev environment, for a particular board, for C/C++ that has access to a huge number of libraries). If I need to do synchronous PWM scr light dimming, assembly; if I want to converse with an I2C temperature sensor, call the library. But, don’t attempt the former on day one, or two, or maybe even three. ;) You can also program an Arduino board with plain C (I’ve done it with PIC32 and AVR based boards). Yeah, and probably inline assembly. And laying out your own board and making a custom programmer b/c only lazy noobs use a board safely designed with the necessary parts. You can make a project as hard or easy as you want, the value in Arduino/Raspi is making it easy for others to build and extend your project (or critique it, this is hackaday afterall ;p ) and building others real quick, it’s great. That looks to me like a best way to learn stuff. First start with simple projects in Arduino, use somebody else’s libs and built-in ones. Then you realize you need something more and start writing your own libs using datasheets and logic anaylzer. Later you need to solve some time-critical part and you can use inline-assembly. IMHO that path is the best because it starts with simpler things, and those who are willing to learn and explore will … learn and explore. But if someone without any experience takes bare microcontroller, datasheet and assembly language I believe he will give up very quickly. Whatever you pick, make sure it’s supported by gcc, and do everything from either the commandline or a generic IDE (Eclipse?). Whether you start in assembler or C, a free toolchain that you can learn once for many chips is worth its weight in gold. I don’t agree. Like programming languages, the more compilers and IDEs you get exposed to, the more you are able to make informed decisions about what tools are best for a particular job. I do agree. When you’re out of the experimentation/learning stage and want to put your abilities to work, the best compiler and IDE is the one you know by heart, no matter what it is. So pick one that will translate to as many platforms as possible. I enjoy programming on 8-bitters. Why? You can pretty much memorise how to setup all the peripherals and how they work. Of course they are limited, but most microcontroller projects are limited as well. For hobbyists, there are fewer and fewer reasons to pick a beefy microcontroller for your task: the market is becoming flooded with really inexpensive hardware for interfacing to the real world in various levels of power, realtime-ness and abstraction. Five years ago, you might have used a very large ATMega and an ethernet networking chip for your home automation project, and written a ton of software for this. Webserver, display, control, etc. — Today, you might just use a whole technology stack consisting of cheap microcontrollers for the realtime parts, an ESP8266 for communication and the web server and a raspberry pi for display, logging, and control. It’s easier to program, since the abstractions of each of the platforms are closer to what’s easy to achieve on the specific platform. It’s also more flexible. Cost-wise, if you’re not doing a commercial product, it makes almost no difference. So, what’s the deal then? I’d recommend beginners to learn the most important stuff of microcontrollers first: Interacting with the world outside the machine. That means bit-banging, UART, I2C, SPI, ADC, (where applicable, DAC + Filtering), PWM, Timers. Storing data into EEPROM. Reading Sensors. All this stuff. So for beginners, I’d recommend a microcontroller that’s easy to get and has a sensible infrastructure for handling these peripherals. I like the MSP430, but AVRs or PICs are also fine. The MSP430 has a really nice IDE if you can bear the limitations, the AVR has an established open-source toolchain with very inexpensive programmers and the advantage of having ONE datasheet for your chip of choice that explains everything. The PIC offers choice, and their IDE is nice as well, but many things are very awkward with PICs. Silicon Labs has some extremely nice and very inexpensive 8051s, but I don’t have any experience with them yet. When you’ve learnt how to interface with external components, the choice of microcontroller for your project should mainly depend on what you want to accomplish, what you have around and what’s the best tradeoff between price and hassle. I’m not especially fond of Atmel’s ARMs, as I find their documentation and overall support quite lacking. Other manufacturers have done a much better job of supporting people who start out with their MCUs. Totally different from all the rest, Cypress offers the PSoC, which is extremely flexible, very well-documented and a lot of fun to play around with. They have perfected abstraction–it almost doesn’t matter whether you use their 8-bit 8051s or their 32-bit ARMs for your project. This. I like AVRs because they are so simple. I do complicated stuff at work all day; if I am going to be programming at home, I want it to be relaxing. Experience with bare metal programming is awesome! Pretty much the only thing that my 8 bit AVRs don’t do that I would like them to do is DMA. Of course, it depends on what projects you are working on… and I have used beefier 32 bit MCUs when needed, but I just like the simplicity of 8 bit AVRs. The cost may not be better (and in some cases is actually worse), but the personal enjoyment makes up for it. YMMV, of course. > Pretty much the only thing that my 8 bit AVRs don’t do that I would like them to do is DMA. I would like to introduce you to the XMega :) Sad that this microcontroller is overlooked, it would have been in everything if introduced 2 years earlier. Nice article, makes me want to learn ARM after years on 8bit PIC. Thanks ! I started with ATmega168 (before Arduino even existed) and used that for a couple of years, also going through some ATtiny. I’ve always wanted to know what goes on behind the scenes, so I never really liked Arduino (and yes, I have used it). Then I moved on to STM32 as preparation for a summer job, and I haven’t looked back since. I love their cheap dev boards (about 15 bucks for an stm32f4-discovery? Including a full-fledged ARM-programmer? Hell yeah!). The only thing I don’t like now is that their new HAL-library is too messy. I prefer their older std-periph. I also tried out some other things in between, like some PIC and LPC1313 (which must be one of the worst chips I ever worked with). Start small and get big? :^) … That’s a nice advice. That can be made into one childish joke, but a very nice advice for a lot of things. Alternatively: Learn to walk before you run . Two frogs left their pond, because it was drying out. Wandering across meadows, they came upon a deep well with steep walls. First frog said: “That’s it! This is the solution for our problems – this pond is so deep and shadowy, it will never dry out. Let’s jump in!” “Wait!” – the other replied, “How will we get out of it if it DOES dry out?” When you commit yourself to a certain architecture, it is hard to port your work to another one if you didn’t anticipated it in advance. But if you do think about it at start, you’ll write suboptimal code and you will still have nasty surprises when you try to migrate, regardless. So, in the end, it means you’ll have to compensate for that losses of performance by reserving more resources, using costlier chips. And usually, if your product succeeds, you will have to migrate, either because your initial chips will get too expensive after a while, or because your product will fell a victim of creeping featurism, or because someone else decided that we are going to use another platform for whatever reason. So, essentially, even though you can write terse programs for your 8-bitters, you should always try to isolate and abstract your hardware and all your platform dependencies as much as you can. Use simple solutions only if they are one-off, or if you don’t mind starting anew. Still, I must admit that using something tiny for something clever is so much fun! But if you are not doing it for fun, then it’s a cruel and unusual punishment … You don’t need to touch assembly at all when programming 8-bit micros. Modern microcontrollers are made more C-friendly, and C compilers are smart enough to make efficient use of limited hardware. Use assembly only when you need to write a time-critical function. In recent months I used only four lines of assembly while programming in C, because PIC I used has locking mechanism that protects program memory from being overwritten in case you make a mistake in your code. And I only had to copy code from datasheet anyway. Those 8-bit micros come in so many flavors that you can find a part for just anything. Today morning I wrote a code for monophonic synth that will emulate Trautonium in its operation, with chiptune-ish sound, on PIC10F, using only 168 words of program memory. In XC8, free version. Possible with proper selection of parts and careful reading of datasheet. I will add more features to it just to see, what is possible with limited hardware… No. Isolating and abstracting hardware is setting yourself up to fail. It’s designing to the lowest common denominator, it’s throwing away 90% of what the hardware has to offer, just because it’s specific. It’s what the Arduino does, and the main reason why it lacks any meaningful performance – sure, it does work with timers and such but did you want a bit flipped in hardware automatically when a compare match occurs…? Oooops, sorry, no can do. It’s not a universal enough feature to show up in the Arduino API. So you’ll end up doing it in software by polling or in an uncertainly delayed interrupt handler, or you’ll get a much faster chip just to hide the inadequacy of the solution. It’s the Windows way where endless bloat, perpetually growing hardware requirements just to achieve the same thing (badly), arbitrary processes freezing the entire UI for arbitrary amounts of time and any number of crashes and blue-screens (Yes! in 2016!) are perfectly acceptable and the normal way of doing things. I for one prefer the Space Shuttle way of programming where you NEVER, EVER crash, precision is never less than absolute, and any jitter is within narrowly specified limits. Contrast that with the Arduino way that can’t even tell you exactly how much free RAM (if any) you have left – the _official_ advice is “don’t go over 80% and prey”. Say WHAT?!? See, that’s why we can’t have nice things… To weigh in a little. Without wanting to continue with the AVR/PIC flame wars too much, I wouldn’t advocate 8-bit PIC for the simple reason that the architecture teaches you what a processor shouldn’t be. I’m sorry, but PIC is poor beyond words from an architectural viewpoint and they don’t support proper versions of ‘C’ (no proper stack frames, non-contiguous RAM). AVRs have a mostly decent instruction set, but there’s a lot of mnemonics to learn, though this isn’t an issue if you’re programming in C. AVRs have peripheral sets that are simpler in general than MSP430, but better designed than 8-bit PIC. For example, 8-bit PICs have timers that automatically reset on comparator match, when they should offer a free-running timer mode with a comparator that merely triggers an interrupt (as AVR and MSP430 do). MSP430 is definitely easier to learn assembly for than AVRs and is well supported in ‘C’. Here 12 ish regular registers are better for learning and the instruction set is simpler. MSP430 low-end devices are not so deterministic and therefore not quite as good at bit-banging. But for learning I don’t think that’s a great problem. ARM Cortex-M0 has a nice instruction set, it’s very easy to learn. There is nothing really that prevents a chip designer such as NXP from designing a Cortex-M0 with a super-simple hobby-level peripheral set. It wouldn’t be rocket science. Let’s say we call it the LPC10xx series with narrow DIP; 4Kb to 32Kb of Flash with no DMA and: 1. Four simple 32-bit timers that have: a free-running counter, a comparator, a capture mode, a PWM mode and a pulse density mode* (TMR+=comparator with OC=Carry output). PDM, is amazing by the way ;-) Single control register for setup and Prescalar. (12 I/O regs). 2. At least 4 SPI/USI ports with a 4-byte buffer that can be used for SPI/simple USART/I2C with a bit of help. At least two SPI ports should be able to operate in 4-bit wide SPI mode so that off-chip code can be run from SPI and off-chip RAM can be accessed as RAM. (8 I/O regs). 3. A/D, but not analog comp. (9 or 10 I/O regs). 4. Interrupt sources for multiples of 16-pins per reg: 00=no int, 01=rise, 10=falling, 11=any edge. ( 2 I/O regs). 5. I would also support a pattern-match mode to make it easier to support I/O decoding on the MCU. Here we have a 32-bit mask; a 32-bit match register and also a 32-bit mis-match mask ( the mismatch mask is ANDed with the DDR to generate an output set of signals, which output the pin contents for the corresponding mismatching pins. This allows, for example, a default value to be set up in an output latch, or for example to output a toggle value from a pin, by picking an Output Comp match pin). What’s the point of all this? Well, if you have an MCU that responds like a memory mapped parallel hardware device, then the MCU will generally either be able to respond to an input signal or will be background processing. It’s an alternative to the simple Configurable Logic Block implementations on the LPC810 and some recent PICs. ( 3 I/O regs). 6. Simple bit-banging support: DDR+OUT latch + IN bits. OUT latch functions as pull-up just like AVR. However, for advanced users we’d support bit banding too. (3 or 6 I/O regs). 7. EEProm and self-flashing support using just 4 regs: Control, Status, Addr, Data. ( 4 I/O regs). 8. Rudimentary clock domains and watchdog, e.g.some kind of automatic PLL support as well as RC oscillators of course and 32KHz ultra-low power support. ( 2 I/O regs?). Anyway, that sums it up. Because we have 32-bit registers, the number of control ports is reduced. Here, I count: 43 to 47 I/O regs. -cheers from Julian Skidmore (MPhil Computer Architecture, Manchester University England, FIGnition designer). * They definitely could, but the fact is that things go hand in hand: those who want high processing power and memory would probably make use of more complex and feature rich peripherals. Another reason is that all these complexities kind of come for a very low cost. ARMs are made in smaller and newer technologies. All that complex driving and configuration schemes they have for an I/O actually reside under the PAD where the wire bonds. Whether you make it simple or complex it’s going to take that space anyway. Look at this die from STM32: most of the stuff that looks uniform is memories, the core is in the center and the peripherals are the tiny weird thing on bottom left. Take them out and most of the chip is still there in terms of cost, but not functionality. So, not many reasons to make really simple arm chips. something wrong with the image…. Well, the point of the article is that the simplicity of the Peripherals still makes 8-bit devices appealing for learning. But really, it’s because of the peripherals themselves rather than the CPU architecture. Simple peripheral sets on ARM are possible, but from a manufacturer, it’s whether there’s a market for hobby-based ARM MCUs with hobby peripherals. The LPC1114FN28 and LPC810FN8 shows that NXP think there’s some potential for the DIP format, though why they gave the LPC1114 a wide DIP I’ll never know (especially as it’s possible to shave off the ends of the package and turn it into a narrow dip version!). The hobby market, really is an investment market: you’re trying to appeal to newbies and give them a low-barrier to entry so that (a) you train up a future engineering base and (b) you get them familiar with your chips. ARM Cortex M0s are an example of reducing the barrier to entry, because they do have simpler peripherals than most ARMs and because e.g. the interrupt controller is designed to be ‘C’ friendly. Also most ARM MCUs have a serial-based bootloader. In conclusion I’d say that we’ve been expecting 32-bit MCUs to eliminate the 8-bit market for, well, decades now, but it hasn’t and 8-bit MCUs still sell about as well. This isn’t due simply to price, because Cortex M0s aren’t that much bigger than 8-bit MCUs, so it must be due to barrier to entry issues. Therefore although there aren’t a lot of reasons to build a simpler peripheral set for them, the investment in a new generation of engineers and lowering the barrier to entry would help and that might be worthwhile. And don’t forget that in most cases you don’t really need all that power for a given task. [Dave Jones] @ EEVBlog has an episode about electric toothbrush that uses 4-bit micro with less than 1kwords of program memory and few half-bytes of RAM, IIRC. It was designed that way, because you don’t need 8 bits to switch a motor or count time, and that makes device cheaper to manufacture. Simpler micros are easier to program, than ARMs, and that saves time in development. That’s why there are so many 8-bitters on the market and new ones are developed all the time, with fancier peripherals, lower power consumption and faster clocks. 8-bits are here to stay, together with 4-bits, at least for another 10-30 years. it’s all about total cost in the end. something you know will reduce development time and risks of mistakes…. and this matters in the scale of how many of one thing you are going to make. In the hobby level we tend to forget the development time, which is a cost well accounted for when something becomes commercial. “EEVBlog has an episode about electric toothbrush that uses 4-bit micro with less than 1kwords of program memory and few half-bytes of RAM,” Cool. I’ve never really tried to program a proper 4-bit MCU, though I’ve read the TMS1000 series datasheets for fun ;-) For small programs where limited performance is needed, they can be the ideal solution. The cortex M0 was mostly designed for low power, and one way to achieve this is to cut down on things, including the complexity of the core. I don’t think 8 bit micros will go, we still need a lot of simple things to be done with micros. If both the 8 and 32 bit are made using the same tech node**, there is no reason for the simpler 8 bit not to be cheaper, and in a lot of cases this is all that matters. **this is one of the reasons 32 bits appeared very great at some point: they were being compared with 8 bits done in older nodes that had similar price, but much lower features. Once the 8 bit ones were upgraded this advantage started to fade away. Xmega isn’t *priced* cheaper than Atmega part. ARM parts also have much more competition between vendors to drive down prices as there are little incentive to stick to a more expensive vendor if parts have similar specs. The old 8-bit in new process node is not going to retain the same electrical properties e.g. 5V or 5V tolerant would take a lot of chip area. There is also the increase in leakage (that you have pointed out) going down the process node. @fpgacomputer, agreed, some things like 5V operation are lost when scaling down 8 bit micros. I wasn’t saying the new 8 bits are cheaper than the old 8 bits** (xmega vs mega is not a fair comparison since the xmega has a lot more things, it is more like a low end ARM) … but I was saying that using the same technology, an 8 bit micro should be cheaper than an 32 bit one, everything else being constant. Ans power consumption, no talk about that. Small 8 bit micros like AVR/PIC/MSP430 are pretty good at burning very little on standby. It is not that ARMs cannot, it is just that most low cost ones are going to burn an order of magnitude more and you are going to spend a lot more money to make that coin cell battery powered thermometer Actually you don’t need that blow power to have a decent battery life. A CR2032 is rated for 1200+ hour at 0.19mA drain. That’s 50 days for 190uA. If you drop the current down by about a factor of 6, you can get to the magical 1 year mark. i.e. average of 30uA of power. The el cheapo STM32F030F4 in standby is about 5uA. Let say you use a LCD with another 5uA current drain. The run current for the F0 part is listed as 5mA using 8MHz internal RC as clock. (I have measured it closer to the 2.4mA figure listed for the F050. You can get much lower current by running a divider.) It is a matter of running at very low duty cycle and spending lots of time in standby. For a duty cycle of 0.005, that works out to be 25uA. So you are allowed to wake up and run for 5ms for every 1 second which is way more then enough CPU cycle for such a simple device. Now running 10 years on a smaller battery like my Casio watch on the other hand is a lot tougher without low power parts. PSsst. You’re not going to get lower current that this for a wake up, take a reading, report it and go back to sleep project. Idle current becomes zilch. Pssst, it costs as much as PIC12F1501, which has a watchdog timer that takes as much as 260nA according to datasheet, but on the other hand you don’t need any additional components in your design. And with 30uA/MHz @ 1.8V it’s hard to beat. Besides, battery probably will have higher self-discharge current over time that that watchdog anyway… When designing a product it is more important to have lower costs than lower power consumption. “When designing a product it is more important to have lower costs than lower power consumption.” I wonder what the Smart Watch makers think about that statement. May be you missed the whole point – namely you don’t to have ultra low power parts to get a usable battery life. One year or so is a good enough number to get away with the cheapest parts which OP claims is not impossible. My numbers also include a LCD because without one misses the whole idea of a thermometer. Nanowatt is meaningless if the display don’t stay on. Getting that to run at 5uA is also not going to be cheap nor without work. Sharp memory LCD built for low power (cost a lot more than the uC BTW) is spec for 15uA without update and 50uA with 1 update per sec. In general, temperature don’t change that much between samples, so you can get away being smart in the lcd driver code to do the minimal work. So all that low current is only achievable if all the components in the system are indeed designed for low current. Someone without the experience could mess up the firmware design and drive the consumption up easily. Things have to be balanced too. For such a device an e-ink display would be better. You only need to change it when temperature changes. and you can get away with checking temperature every 5-30 seconds. And there is even better technology that draws so little power, it will be powered by the heat it measures. It works by exploiting the relationship between temperature, pressure and volume of various materials. It’s reliable, works over wide range of temperatures and it’s precise. And it’s really, really cheap… I’m actually repling to Moryc. thermoelectricity is very inefficient and bad when the temperature delta is low. If there is a display I suggest a solar cell: if there is a display there is propably light as well. (backlight obviously exist but the power is then in the mA range.) That’s an awesome device. Depends what you care about. I like the “stop” mode where you still keep your RAM, which if I add the numbers comes to a typical of 10u, with a max of about 10X that. Remember to look at the max too, or you will wonder why the battery unexpectedly dies after a month instead of a year. Of course, we are looking at 1 chip, but the things hold for a lot more. It is the tech behind it as well: it takes more area and complicated things to do them low power in sleep. Don’t forget the elephant in the room – embedded Linux platforms like R.Pi and BB. Adding Linux makes IoT and other connectivity downright trivial, and the costs have downright plummeted of late. It’s not always the most appropriate choice, but then, neither is an ATTiny. Embedded Linux is definitely A Thing, but IMHO it is currently way too overused. I don’t want a full blown OS (with all the security holes and upgrade requirements that it brings) to notify my phone that the toast is done. Sure, they have their place, but the overlap of what is suitable for a Linux box vs. a microcontroller should be very small indeed. Stop thinking so small. Put a zigbee radio on a BBB and call it a bridge or concentrator. Or a WIFI USB dongle, or a 3G dongle… The elephant in the room is the god awful development pace for the BBB regarding USB Babble errors, the Device Tree Overlay changes, etc. Periodically booting halts when the LAN IC can’t be found. If your board can’t boot due to even a test point and PCB trace running to an IO pin then why the hell isn’t each sensitive I/O pin behind a buffer IC on that damn board? It doesn’t get as much love as AVRs, but I really like the Parallax Propeller for having an easy high level language that seems purpose built to make you give up and learn the assembly already. There are plentiful IOs, timers that are surprisingly easy to use, and you get to learn about the joys of concurrency with no safety net. Came here to say this. The beautiful thing about the Propeller is you don’t have to learn or memorize a bunch of special I/O combinations because all the I/O is done in software, which gives you great control and a chance to learn about those interfaces if you want. Need eight serial ports or two VGA outputs from one chip? No problem. Yet it’s a very basic environment, easy to learn well enough to get stuff done. I was so excited the first time I was able to change the blink speed of the default blink program in the Arduino. And I have learned countless things since then. Including all the hate people seam to have toward Arduino projects. Nowadays I am embarrassed to share projects in which I have used one. I would love to learn the nitty gritty details of programming pics and AVRs without using arduino, but I always seam to get discouraged and demotivated as soon as I hear about setting fuses. I have yet to find a good write up that speaks to me in a way that makes me comfortable. I really enjoyed this article, and would take the red pill in a heartbeat if someone could show me the way. Thank you. in avr studio you can set fuse with a GUI… usually you only change them only one time in a project (for example changing from internal oscillator to external). it’s easy to brick your avr tough. Previous programming experience should be considered as well. I learned 6502 assembly 30 years ago, and found it only took a couple days to pick up AVR assembler. I find it even simlper than 6502 without so many addressing modes and instruction timing that is much easier to remember. I went from Z80 to 68k to ARM. Learning ARM enough to crack game protection with no toolchains took an afternoon. Lovely architecture! 68k is now just a distant nightmare… 68K is a very easy architecture to code assembly for. Not as orthogonal as a VAX’s but nice enough. And easy enough to grok for mere mortals as compared to a MIPS or ARM. Old ARM32 is easy. It takes an afternoon, like Alphatek says. The new Cortex series are more complicated, especially if you also need the built in system peripherals and/or special instructions. What about for those people with the standard Computer Science education who all learned basic assembly on MIPS? Is there anything similar with a load-store architecture and a luxurious overabundance of registers? IIRC Pic32 is MIPS And if you prefer Propellers, you get the fast-easy example code solution and the nitty-gritty tight packed assembly option, all in one chip! :-) There’s very little justification to continue to use 8-bit microcontrollers even AVRs (my favourite 8-bit micros). Use STM32F0, or even STM32F4s. The STM32F0’s are cheaper and more performant than AVRs, if configured correctly you can get very similar power dissipation levels as AVRs, they have free software dev tools and the hardware is cheap. Nucleo boards contain, a programmer, debugger, and USB2Serial as well as a target device all for a measly $12USD They also come in all shapes and sizes Nucleo-32, Nucleo-64 and Nucleo-144.. Blinking an LED is as simple as: #include "mbed.h" DigitalOut myled(LED1); int main() { while(1) { printf("Hello World\n"); myled = 1; wait(1); myled = 0; wait(1); } } It really doesn’t get much simpler that that. mbed can be easily used in the cloud as well as offline. If anyone needs to get it going offline ask me and I will write an blog entry about it. .” This is precisely the mainstream approach — the get-stuff-working approach. And it’s right for a lot of people. And for them, I totally agree with you. If instead you’re the type who wants to know what code does and how it works, this abstracted approach will frustrate. If you’ve ever dug into the STMCube libs, for instance, you’ll know what I mean. You can get to the bottom, but it’s like peeling onions, tears and all. Agreed, the STMCube libs are not for beginners nor for the faint of heart. In addition to looking up the peripherals in the Micro’s reference manual you’ll also have to look at the STM32Cube manual as well and examples….lots of examples. In the end however it will be worth it…,because once you get used to it , it will make managing these complicated peripherals more manageable. Also the STM32Cube is intended to be somewhat portable between the STM32F0, F1, F2, F3 and F4 families so code base migration between these parts is fairly easy. I actually built two little breakout boards based on the STM32f072 and the stm32f042 In addition I developed my own small C library for setting the clock, gpio, external interrupts, raw and buffered serial and adc (SPI and I2C are incoming). I have similar libraries developed for the STM32F411 as well. It was and continues to be a very neat way to learn the innards of the microcontroller and its peripherals. It also got me thinking about the intricacies of API design. After a while, I basically decided to simply rely mostly on STM32Cube. Especially that I now have a relatively solid understanding of most of the peripherals at the register level. The reason is that it helps me manage the complexity of the core and peripherals with little effort and it gives 90-95% of the granularity that I had with my own library or pure register-based code (on which my libs were based). Mbed is great for fast prototyping and those wanting to learn the basics. While being a high-level API, the mbed libs expose enough complexity that it makes it an excellent tool for newcomers to the embedded world. It abstracts which bit in which register does what, but still exposes enough details of say the SPI bus, I2C bus or external interrupts allowing students to initially focus on the bigger picture and then possibly ‘zoom in’ to a lower level of abstraction at a later date. The biggest problem with mbed is that the binaries for a simple UART loopback program can be about 20K. So i wouldn’t recommend its use with microcontrollers having less than 64KB of flash. In my custom library, the same program (for the same part) uses about 4K. The same program written in STM32Cube uses about 8-9K. I have a copy of your AVR programming book by the way. Its a true gem and I highly recommend it to all those interested in the AVR micros. The abstracted STMCube libs aren’t very efficient either. It sometimes takes 10 lines of filling a struct, calling a function that’s going to validate all the bits, and then combines all the values and writes them to a single register. Instead, you could just write the single register yourself in one line of code. STM32CUBE is not efficient, but mbed libs for the stm32s is built on top of it so it is even more wasteful of flash. But when you have 512KB of flash this is not an issue. Of course if you have 32KB or even worse 16KB of flash only it is most definitely an issue. CodeVisionAVR has one of my favorite ways to do the GUI to code configuration: you select in the GUI what you want, but in the end all the code it generates is simply writing the proper values in the registers. It cannot get more efficient than this. “You can get to the bottom, but it’s like peeling onions, tears and all.” very well said! This is primarily due to two things: – STM32Cube libs are still fairly new (2-3 years old) and are perhaps not as stable as they should be (though they’re getting there in my opinion) – Because its new, there’s very few tutorials out there. BTW in terms of intuitive API design the TI’s TIVA C Tivaware libraries are probably the most intuitive and easiest to use. Freescale’s Kinetis SDK is also pretty good. I find that the STM32Cube libs trails behind these two. But I like the STM32’s hardware, low cost, and availability so much that I’m willing to overlook the STM32Cube Libs imperfections. I still think that with proper instruction, newcomers to the embedded world are better off using a higher abstraction API (say mbed) and move to a lower Abstraction API (say STM32Cube) and then register based stuff as they get more comfortable with the hardware. This would be a better approach to first learning the register based stuff on an 8-bitter and then moving on to 32-bit in my mind. Especially in this day and age where rapid development is the latest rage and where more microcontroller families keep getting released every year. Ofcourse I could be completely wrong about this. Where can I find a STM32F0 cheaper than 30c? I bought some f030f4p6 to play with, but the cheapest I can find them on Aliexpress is $4.40/10. I can get 10 tiny13a for $3. pssst. STM8S003 Now compare it with a PIC10F and laugh… STM8 is 8-bit. I was going to get it, but the price difference from STM32F030 is minimal. You can get a hardware debugger for either with a $3 STLink clone. You get more I/O and memory than ATTiny13. Five Volt. Pic24 16 bit , Dip package, around a dollar from digikey. Has hardware multiplier, register based instructions, simple assembly, no memory paging. Just curious as to why no body recommends these. Few years ago I read an opinion of fellow PIC programmer, that PIC 16-bit family is poorly designed, and some PICs have errata as big as datasheet. I checked it now with random PIC24F and errata had only 10 pages, less than similar documents for some 8-bit PICs I used. So I’ll try one out… +1 I found the Pic24F series very straightforward to use. I was able to fit FAT filesystem for an SD card and a basic TCP stack on a chip costing about $3. The libraries are provided by Microchip and easy to integrate. I’m a teacher at a local polytechnic. And I teach embedded systems. Within the curriculum we start with 8051 :D using SDCC. Keil boards, a treat to use. Then we jump to Arduino (AVR 8-bit, mostly nano clones, local versions and barebone ATMega 328’s) and mbed (original LPC1768 and some Freescale Kinetics). ARM M4 and M7 boards (LPC based, from KEIL), PIC boards (MikroElektronika, open for almost any classic PIC) and Launchpad MSP430 are available for students who want to know more. We even hack the chinese STM32 to work with mbed, for their projects. To be honest, it’s really simple to explain all the major concepts on a 8051. And SDCC. Hands down, in projects it beats even the digitalWrite(). Working with timers and interrupts, explaining why 32bit multiplication is fu*ked up in assembler and so on.. The only problem is a lack of nicely written libraries (for everything) like Arduino has. That sounds great. How many semesters was that? Basic embedded systems (with 8051) are in the 5th (of 6) on Bachelors Degree, with Designing embedded systems (Arduino + mbed + everything else in their obligatory project) is in the last semester. Nice thing is that other teachers use those boards in their classes in the last semester, even if students are not taking my class about designing embedded systems – students learn PID on Arduino, or they manipulate with sound using mbed on DSP classes. If students understand how to get the timings, inputs and outputs right, then the platform is irrelevant. 8051 is so much easier then others to teach, and sometimes it’s easier to program. We bought licences for 8051 for Labcenter Proteus, and now we consider jumping to Multisim (already have the NI Labview licence) as it supports 8051 (and PIC 16F84) so the risk of unnecessary white smoke is minimal. That’s really cool that you start them out on 8051 first so that they have a better grounding to understand the fancier/simpler-to-use libraries second. Folks who start out with Arduino, it seems to me, are getting that backwards. I am architecturally agnostic, and make decisions based on manufacturing metrics. We like 8/16 bit PICs because they are very reliable, inexpensive ($0.24), and have practical real-time features you will need eventually (counters, comparators, and hardware uarts etc.). However when you start getting into 32 bit $5+ PIC24 series, an ATmega2560 (please ignore the arduino fan-boy rhetoric) is better value for the designer in many ways. Note that if your C is cross platform centric, the choice is almost arbitrary. Note most people who use the ATmega2560 are usually trying to poorly implement what the ARM6/7 SoCs do with ease. A proof is trivial, go outside and count how many MP3 players you see compared to smart phone users. I’ve done projects with the MSP series (this 1990’s style line will not endure in my opinion), Cypress’s crap (only brand I actively discourage for safety reasons), Motorola’s solid stuff, TI’s many orphans (bad for long term business), and countless MCS-51 based clones (like China CPU Clones a consistent source can be erratic given the number of variations). Intel and 68k hold outs are usually a joke about cults willing to give $80 to their masters name, but at least the 68k you chose will still be around if your product takes years to get to market. When I recommend an ARM7 SoC, it is because it has and will be more appropriate for the intended application in the long term (they are getting cheaper, more powerful, and most sane manufactures make a compatible chip). The mountains of errata people like myself have had to go though over the years due to vendor arrogance has become trivial to the users, and now one can get stuff to work reliably with much less money/effort/time. These SoCs are usually meant to run an OS, and simply can’t do some types of tasks an 8bit PIC does with ease. Change for the sake of change is like building your house on shifting sands. You will fail, hopefully learn from your mistakes, and come to an evidence based reasoning like many before you. I can measure your counter-arguments by the pound as e-waste, and dead spools of buggy vanity chips rotting in stock. A few people are too inexperienced to know why senior people won’t authorize using new toy silicon (or some silly software), but maybe they should stay in marketing rather than engineering. Me too, That’s why I use ST parts. For the money, it doesn’t get better. STM32 low ends vs priced the same as a PIC16F? Regarding the cypress parts, what do you mean by safety issues? The STM32 would have been awesome in the 90’s, but Broadcom’s $5 700Mhz/512M ARM6 SoC is functionally superior for this class of computing. The STM line just showed up about 15 years too late, but is still good for educational purposes. Cypress chips are a fire hazard as the chip & SoC design can fail closed even from offline static discharge. Cypress itself is like corporate cancer, and is directly responsible for the fall of Fujitsu (cored Spansion too). I assume the only reason they are still around is due to their relationship with the feds making “American chips”. I can’t allow their crap on a production product, as they don’t seem to care that they have already indirectly strangled a few companies in “anomalous operation” lawsuits. If you need finite impulse response based DSP filters, do yourself a favour and try a kit based Xilinx FPGA with Analog Devices front end. There are even MatLab filter design code-gen suites around for low-skill software people, but I’m not smart enough to get it to work faster than manually typing VHDL. This is not the first time I’ve heard of Cypress ESD failures in MCUs. Its actually the only time ive ever seen this type of failure in an MCU. Are you referring to the Raspberrypi vs stm32? The STM32F0 is under $0.50 in high volume. The broadcom part is $5 on a dev board that must be soldered by a human being (no castle connectors?) Is the broadcom part even available loose? I guess it depends on the applications you’re doing and how much of a budget you have to spend. The STM32 would have definitely dominated on price-for-value prior to the rise of SoCs, but they just showed up too late in the game. TI and Microchip also still believe that a “new” mid-range mcu is still relevant to general consumer products, and yet the China A10 has probably shipped more units then all of them combined. You are probably too young to remember that Fujitsu used to make hard-drives until something started going horribly wrong… Cypress wrong… cause it was their popular silicon… still in use… ;-) Re Cypress and Fujitsu drives: Do you mean Cirrus Logic? Not really, I was alluding to the internal Crossbar switching i/o bus layout of the analogue sections of Cypress hybrid SoCs. Most manufactures of mcus and fpgas do switch pin mapping contexts with ease, but no other manufacture I know of still offers a Halt and Catch Fire enabled architecture. I’m new to all this and I like to jump from low level to high level. What I mean is i hear about say SPI and I start interfacing something simple in Arduino get a feel for it in a high level. I then transfer over to my bare AVR and ARM microcontroller and page through more technical documents. With Arduino, and many of the other libraries, that’s a great plan. Because the “language” is really just C/C++, you can freely mix and match most of the time. So get a working demo in “pure Arduino” and then tweak at the edges. If it still works: success! :) Some of the libs are less loosely-coupled, though, and this invites subtle bugs and much head-scratching, so it’s not perfect, but it’s a great way to go. … can anyone explain to me the importance of DMA in a microcontroller, maybe with a practical example? I see everyone using ARM is bragging about it, but I’m not sure why. I know it’s useful for offloading storage operations on a PC but I have yet to understand what it could be used for in a microcontroller. Thanks. Blinks LEDs faster. An example may be were you have a very high speed Analog to Digital converter and you want to get the samples into memory (RAM) as fast as possible. The CPU may be too slow to do this and even if it were fast enough to do it then it would be using up most of it’s time with that one task and have little time left for other tasks. With DMA (Direct Memory Access) the DMA controller can load the samples into RAM in the background without the CPU having to do anything more than set up the DMA parameters initially. The above is a fairly *traditional’ use of DMA. In some of the modern chips the DMA controller can link a far greater range of resources and NOT just memory. So – for example – people have used DMA features to generate VGA and things like that – things that the CPU can’t really do well if at all. Oh and one other thing – It can make the LED blink faster like [Benchoff] said lol. Interesting, thanks! Guess it’s time to start playing with that STM32 Discovery board sitting in the drawer. If you want to have some fun, get one with an LCD and start working with STemWin. I’ve used DMA to implement a high speed RS485 bus on a PIC32, using the hardware CRC feature. I’ve also used DMA on a LPC17xx chip to read values from a SPI based optical scanner. And of course, most Ethernet and USB peripherals use some kind of DMA as well. Think of DMA as a peripheral that is used to transfer data without requiring CPU cycles once you have set it up. It has much less latency a few clock cycles vs 30-40 easily in an interrupt. So it is suited for high bandwidth and/or low jitter. The CPU can be running other code or even asleep during the transfer. Once you have used it, you ‘ll want it. I have used DMA for sampling ultrasonic at 160ksps while the CPU is down clocked to 4MHz to save power. (The ADC can run up to 700ksps) I have also used chaining DMA to merge/interleave 2nd set of data that is sampled at a later point. I have used DMA for bitbanging PS/2 port. A port change triggers the DMA instead of the usual interrupt. This reduces the # of IRQ + context switching overhead in RTOS. Or an LCD….say you have an LCD and some memory with a picture somewhere. You set the DMA up, tell ti how much to read, from where and where to write and it does it by itself. The most obvious use of DMA for me is for receive/transmit buffers, if the architecture can support them. On my most recent project, using an MSP430, data being transmitted via the UART or I2C uses 1 cycle/byte, so the uC is asleep for the entire thing, and the only interrupt generated is only for “transmit complete.” Nice write-up! Comments are now longer than the article by a large margin And not to jinx it, but the comments are fantastic so far! ENOTAHACK! For a while Stellaris (aka TI) made a chip called the LM3S9B96. One thing I liked about it was a full blown I/O driver library *baked into the chip’s ROM*. It also had a small RTOS in there if you wanted it. The I/O library was very efficient and easy to use. Code calling it was much simpler than tweaking the registers yourself. Anybody seen other chips taking this approach? The MSP432 has Driver Library in ROM, scroll down to DriverLib in ROM part Stellaris had one of the most buggy board support packages I’ve ever worked with in 30 years. I’ll admit I bought the TI $20 robot a few years back to test it out, and found the scariest redefinition of what people think RTOS means. gcc+FreeRTOS is a dream compared to the Stellaris bait-and-switch compiler base. When taking the Arduino route, be cautious of what I call “everything is a nail”. You get caught up in the particulars of the language or platform that you fail to realize there are other, more suitable, tools for a job. I fell into this trap years ago when I was first introduced to VB4/5. I began to apply the language to everything. Need a UI? Done. Need a serial loader for the 68HC11? Done. A little database access? EZPeazy Need a map creation tool for a game? Got that here. Need to parse a script and execute said script as fast as possible? Let me read Dan Appleman’s Visual Basic Programmer’s Guide to the Win32 API and join Xtreme VB forums so I can pull this off… done. Want me to create an improved compression algorithm that can do a better job at recovering corrupted files…. woah… hold on. Now we’re getting crazy… It dawned on me how crazy my life got when I realized I was writing routines in pure “assembly”, packing them into VB string and executing the string directly and bypassing nearly all of the ActiveX crap. And none of it was earning me real money. At one point, I genuinely thought I could write a better Quake engine than the guys at Id in pure VB. This was around the same time I bought an OOPic, because of the VB-like language and ran down that path too. Still not working as a programmer (too old now I guess) but my toolbox is a little better balanced. C, C++, Java, ASM of various architectures and Perl. I dabbled in a few others (most suck, Powershell anyone?) and working on adding Python to the list. I like the little GHI STM32F4 dip40 boards running netmf with C#. Easy to do almost anything in C# and if I need to get low level, the runtime is open-source. So I can sprinkle some C/C++ wherever its needed. For projects with graphics and touch I’m using the Dragonboard 410C and Windows IoT. Basically for the same reason. When I’m working on a project that requires ubber low power, I go with something like the PICAXE. The thinking behind my work is to use high level languages when possible, but also to ensure I can add low level code as needed. So far this is working for me. You should really look into TI’s MSP product line, which IMHO sports much of the best of both worlds. Used to be 16-bit only, but it has developed into a very promising 16/32 bit proposition which certainly flattens the learning curve into the somewhat intimidating ARM MCU domain (and directly into its very powerful M4F family). It was first released in the 1990s and is still being actively developed, with major exclusive innovations to this day (like FRAM, mentioned in a few comments earlier). Here’s a few reasons why I love the platform and why I recommend it to newbies or even experienced 8-bit fans: 1) From a historical perspective, they really got everything right from the beginning, obviously within the constraints of the 16-bit space. By this I mean it was designed from day one with extreme low power in mind, which it achieves with a clever combination of automatic core features and a very flexible, yet not intimidating, clock architecture; 2) The core itself and its instruction set is very clean, elegant and modern and yet traditional at the same time. It is based on a Von Neumann (single bus) architecture which makes it very simple to adjust to and to work with, at the expense of performance of course. This is, by the way, the same choice that ARM made with its very recent low-end, ultra-low power M0 / M0+ cores, so it’s by no means an obsolete approach in itself (the M3 / M4 / M7 are Harvard architectures with multiple buses); 3) Speaking of the instruction set, it’s natively a very reduced one, but with a brilliant twist that many instructions are not hard wired in the core, but instead emulated through a constant generator in one of the registers, with no performance penalty whatsoever. Therefore it is powerful and complete, this sounds mind-boggling but it’s actually true and fun to investigate and understand; 4) Having mentioned the registers, there are many multi-purpose ones (all obviously 16-bit wide) and together with an efficient hardware implementation of the stack, it is very very C compiler-friendly. Add to this the excellent code density that is made available by the 16-bit instruction set, resulting in very very compact code. The IS is orthogonal, meaning that most instructions can be used with all the available addressing modes (and this results in easier and cleaner coding in Assembly). Even ARM, in its 32-bit cores, has implemented a 16-bit instruction set called Thumb which is useful whenever the few 32-bit instructions aren’t needed, to take advantage of this; 5) I personally find 16 bits the ideal compromise in terms of being powerful and yet still trackable in my brain in terms of flags, for example. 16 bits are the minimum requirement for ints in C, while chars are conveniently packed when using strings (arrays) so there’s no penalty there. Timers also work well with 16-bit registers. 64Kb of address space are an abundant space for a large number of MCU application classes and an extended 20-bit mode is available on larger devices if you really need more memory. Single-bit access, a feature made possible by the 32-bit ARM giant address-space Matrix, is of course something you miss, in terms of performance; 6) The 16-bit constraint has also another pleasant effect on the peripherals – by the way, a number of useful features are implemented as such and not integrated in the core, a different approach from the mid/high-end ARMS but one that still gets things done – it keeps them simple to master and easy to remember, which I find priceless when you’re efficiently coding on the bare metal rather than using drivers and APIs. High-end MSP430s sport very powerful peripherals including Sigma-Delta converters, fast ADCs / DACs, crypto-engines etc. DMA is also available, in a simplified form compared to the 32-bit ARMs. Interrupt control, however, is admittedly less versatile (can’t be prioritized like with ARM’s NVIC) although this isn’t a big problem for many types of applications; 7) The resulting documentation is therefore succint and complete at the same time and is also presented very pleasantly, in a dual form for the device family (i.e. general architecture, instruction set and peripherals) and the individual part (i.e. specifications and packaging/pins) data sheets; 8) FRAM MCUs are an industry-exclusive opportunity that TI only makes available in the 16-bit MSP line-up, as it’s apparently not (yet?) compatible with their 32-bit processes. For those who are unaware, FRAM is a memory technology that blurs the lines between SRAM and Flash, basically incorporating the benefits of both (non-volatility, granular access, speed and power requirements), making it a candidate for a unified memory solution; 9) Last but not least, there is a very nice upgrade path within the family. Last year TI introduced the MSP432 line, which basically replaces the classic proprietary 16-bit core with an ARM M4F one with full floating point and DSP support, but keeps the peripherals and general configuration (wherever possible). It uses a process which they claim will give better power performance than the competition’s M0+ cores (something that ST claims as well). It also preserves the very simple and effective MSP way of doing low-power, so basically what you get is an ultra-powerful, yet ultra-low power core with easy to use, simple peripherals that you can become familiar and practice with on a more learner-friendly 16-bit platform (and even unusually keep good code compatibility if applications are written in plain C); 10) The LaunchPad development kits (all with programmers / debuggers) are very cheap (4.32 USD currently for the 32-bit one as promoted on their online store) and a family of Arduino shield-like add-on boards called BoosterPacks. TI’s software tools are also very well designed. Their Code Composer Studio IDE is based on Eclipse and it includes both its proprietary compilers (code-size limited) and open-source MSPGCC (officially and with no limitations). A very good Wiring / Arduino port called Energia is also available which supports most of their kits. A cloud-based version (debugging-enabled) is also available and best of all, everything is truly cross-platform (i.e. Windows, Linux and even Mac OS, with their most basic G2 board being supported by Windows only for the time being). “Timers also work well with 16-bit registers” When you set a timer to count microseconds, it only takes 65 ms before it wraps 16 bits. That’s too short for many applications. It is painful, but you can use overflow IRQ to extend the timer. The amount of loading isn’t much as you are only dealing with 15 or so interrupt per second. You can do arbitrary resolution this way. I have implemented a software assist code to extend the 16-bit timer compare to generate a cycle accurate timing pulse for a frequency counter on ATMega8. Pretty cool to get to 10 seconds of delay at full CPU clock resolution. (i.e. not using prescaler) The code uses hardware figures out the amount of timer overflows and then uses the hardware output compare for the remainder cycles. Wow! I’m super excited to get into the world of Microcontrollers even more so after this post! I’ve decided to start with Launchpads and after reading your post I’m glad I did. Hello Elliot, Your last book (AVR Programming: Learning to Write Software for Hardware) was great when can we expect a new book on ARM STM32 or Cypress ARM PSoC 5? The world needs more books from you :-) Best regards, João Nuno Carvalho I thought about this a little, and my conclusion was: “8-bit chips are yesterday, and you’re just wasting your time on old silicon.” Having gotten that off my chest, I just have to say that PIC chips have always let me down, mostly due to horrible buggy C compilers, so why waste time on those. So why not just use ARM for everything? That is my general answer, except that I tend to use the ESP8266 for everything. Just turn off the radio and you have a nice cheap 32 bit controller. As long as you don’t need lots of pins you are set. If you do, jump to the Beaglebone Black. As for Arduino, there is the evil path. Seductively easy for the beginner, but absolutely stunts anyones growth who wants to really learn anything. Just say no to Arduino. 8-bit chips is basically what we need to preserve any semblance of digital security in the future (since lower than that is much more than “yesterday”). IoT nodes of no less than 32bit monsters will run more malware than 8bit nodes. Regarding the Arduino “evil path”, do you say the same thing about something like Matlab? So damn easy to crunch numbers, solve matrix equations, and even simulate transfer functions instead of doing it on pencil/paper. No other boards have really done the “stacking layers of computer add-ons” that Arduino has, “shields” and the like. It’s like a better version of PCI concept except for things like stoplight controllers and other industrial stuff that needs the rugged pull in/out. Good luck fitting decent crypto on those “secure” 8 bit chips. People wanting to build IoT devices using hardware not fast enough to do asymmetric crypto has done awful things to the security of the Internet of Things already – hardcoded symmetric keys everywhere, broken pairing schemes like the one in Bluetooth 4.0, novel crypto that can be broken trivially on a desktop PC, etc. True. Lots of ciphers have been ported to 8bit AVR (it’s amazing how much, I believe the most they can support is 128 bit keys but then they have an example for AES256? Confusing.) I’m sure there’s others for other vendors. Main thing for me is, say you buy a computer, how many preprogrammed chips could have malware pre-installed? Those chips are all so huge there’s a lot of places to hide in the open. There’s a long storied history of malware in a new computer. Now same thing w/ 8bit chip? It would have to be an efficient preprogrammed malware. Plus future malware wouldn’t have as much room. And by IoT I mean things that aren’t necessary connected to the internet, small crypto on some random chip in the wild will probably be too much of a challenge for almost no reward. I’ve cut my teeth on 8 bit pic’s in assembly. I’ve found that being able to read the disassembly listing is a good skill to have when your trying to debug. I’ve gotten to where I kinda like working in assembly. But bank switching on the pic’s is a bit of a pain at times. I did pick up a ti fr6989 board to play with. I’m really digging all that fram. Thanks for the article. This particular topic is of interest to me, as I’ve been looking at how to go about expanding my skill set to include programming the micros. (I do HW design for a living.) A friend of mine and I chose the MSP430 as an embedded education platform for this kind of thing for a few reasons: Launchpads are inexpensive The MSP family allows you to start simple and move on to more capable parts One can go from Energia (abstracted) to assembly (gotta know the chip) Free debugger that you can use on your future MSP430 projects Has many common micro peripherials (I2C, SPI, ADC, Timers for PWM, etc.) Our goal is to offer a path for those who want to learn embedded system design from the bottom up, emphasis on efficient use of HW and SW design considered together, In that effort it’s critical to teach the fundamentals of how the uC works. That comparison between “32-bit” and “8-bit” code is a bit misleading. That’s just the difference between poorly abstracted, device-specific C code (which is common in 8-bit microcontrollers) and well-abstracted, device-agnostic C code (which is common in 32-bit microcontrollers). The reason for this correlation, I’ve found, is because there are many 8-bit microcontroller families, and many of them have not had enough focus to have well-written C compilers or libraries – many C abstraction methods will either compile poorly (unacceptable when you have so few resources to begin with already) or not compile at all due to poor standards-compliant compiler support. With 32-bit, you basically have ARM, which basically means GCC, and easy abstraction. And if you have a crap compiler, you usually have enough cycles/ memory that it doesn’t matter and you don’t have to worry about the overhead of your code abstraction. You rarely see example code in ARMs that looks like the intro code for a standard AVR simply because they prefer to introduce you to abstracted libraries. If you’ve tried to write your own device-specific ARM startup code, you’d find that in many chips it can look as simple as the AVR code. Often, the only actual initialization which isn’t software libraries is going to be oscillator configuration, which usually isn’t present in AVRs or PICs because they’re burned into the configuration word instead of inside program memory. Honestly, I’ve found peripherals to be far more of a pain on 8-bit chips (especially lower end ones, like PICs), because the device constraints often lead to nuances in register configuration. The classic problem is flashing the LED – in many lower end chips, there’s a priority order to each pin, and GPIO is almost always lowest – so flashing an LED on a particular pin often involves reading several datasheet chapters, ensuring that the ADC, comparator, timer, and every other darn peripheral isn’t hogging a pin so that you can just toggle on and off an LED. In many ARM chipsets, simply because of how much additional space is available, you can flip it to GPIO operation with a single register that maps the pin to GPIO or some other function. This makes the challenge of learning bit by bit far more manageable. Sometimes there are also read-modify-write issues and other subtle problems which more modern chips do away with entirely. Anywho, I actually agree with the point when I first heard it – there are in fact chips that are simpler and easier to learn than the highest end, newest 32-bit ARMs. However, I think the ideal chip could very well be a low end Cortex-M0 – no cache or other high-end funkiness in the core, and maybe with a reasonable set of easy-to-configure peripherals (I’ve seen more difficult to use, complex peripherals in 8-bit uCs than 32-bit, honestly – in part because many of them are designed to offload so more of the processing from an 8-bit uC than you’d expect is necessary in a 32-bit uC). Or a very, very low-end PIC10/12 or ATTiny, where the datasheet is actually less than, I don’t know, 200 pages and there really are only a handful of peripherals and instructions you could possibly be concerned with. A graduating novice could probably start learning things from the ground up from there as well. As someone who started with PICs, moved on to multiple ARM platforms, and eventually to every other platform including AVRs, I think you may be badly biased in thinking the AVR’s register set is somehow substantially simpler or more obvious in any absolute sense than most uCs. OK let me see If I can explain this correctly. For those serious about learning about embedded systems and not just interested in making things; instead of focusing on breadth i.e. learning the fundamentals of embedded programming 8-bitters and then move onto 32-bitters, I believe that they can be better served with a more ‘depth’ based approach. This can be managed easily by starting with a High level API such as mbed. Once you learn the high level concepts such as blocking polling w/ & w/o timeouts, non-blocking polling, interrupts as well as the high-level workings of GPIOs, ADCs, the SPI and I2C buses e.t.c, you can then move to a lower level API such as vendor-based libraries like Tivaware, Kinetis SDK & STM32Cube. Here you learn about how this lower level API gives you a higher degree of control, granularity, speed all with less flash usage. You start understanding interrupts in more detail by looking at the NVIC interrupt controller, You start poking around the reference manual and cross referencing with the API’s manual and are introduced to the joy of using DMA’s, and perhaps setting up the clock tree and looking at different low power modes Once you’re comfortable using this lower level api you can then move onto bit fiddling, understanding how to set, clear, toggle and test bits in register, understand the meaning of RMW instructions, and focus more on the hardware reference manual.. With this approach, you gain even more control, granualrity, speed and memory savings at a cost of increased complexity and brain pain. Finally you develop enough skills and knowledge to know when to use the High-level APIs, Low level APIs and the extra low level register based approach and heck even learn how to combine them in ways that optimize your programs where it counts. In short: the top-down approach rather than the bottom-up. I can see that working as well in principle, and I don’t have any real backup for my gut feeling that bottom-up is somehow more rigorous — it’s how I learned in school, and also how I learn best out of school, but maybe that’s just because it was the way I was taught. It could just be my bias. There must be educational theories backing both of these approaches up. :) When you learn linear algebra, for instance, you start by figuring out what a matrix is and what the operations do in terms of simple math that you already know. By the end you’re left with a system that gets rid of the need for knowing anything about how matrix multiplication _actually_ works — you just follow the higher-level rules and all’s well: that’s the linear algebra. But it’s also the case that CS people learn Python long before they learn (if ever) about how an ALU works at the circuit level. So it’s gotta go both ways in practice. The point of the article is that there’s people who always ask “why”. And if you start with a very abstract API, and you tell me that this-does-that, I’m the type who’ll ask why, and we’ll waste hours burrowing through the code that implements the API. Instead, you could cut me off at the pass by starting at the machine. The dominant mode (see Arduino, mbed, Energia… first) is the one that frustrates me, and I was suggesting an alternative for the misfits. :) Elliot, Agreed! both top-down and bottom-up approaches can work well. But we are all different and we learn in different ways. For some the top down approach works better while for others the bottom-up approach is better. BTW I’m one of these misfits who prefer the bottom-up approach. I learned about the 68HC11 and PIC16 in college back in the 90s. I also started playing with AVRs in the late 90s and have never looked back. (I still have a couple of at90s2313s). When the Arduino stuff started popping up in the early 2000’s I felt that it was a toy and purposefully avoided it for a long time because I always liked to talk directly to the hardware. I currently teach a microprocessors and Final project courses at College and for the past four years, I’ve been teaching AVRs with a bottom-up approach i.e. starting with register bit fiddling and working up. I’ve discovered that depending on the cohort only 20-40% of the class tend to prefer that approach (perhaps they’re just lazy). In addition, the projects that students want to build in their final projects are sometimes a bit more ambitious for the bottom-up approach. Let’s say they want to include an ILI9341 LCD in their project, well with the bottom-up approach they’ll have to write their own driver which is no easy task, especially for a college student who only completed 2-3 programming courses. To better manage this I’ve allowed students to use the Arduinos for their projects, while still teaching microcontrollers with AVRs in a bottom-up approach. Next year I’m planning to move to the Nucleo_F411RE board. I’m planning to deliver the course in a more top-down approach, starting with mbed, then moving briefly down to the middleware STMCube32 level and then further down to the register based stuff. I think that students might find that a bit more palatable at least for an Introductory course on Microcontrollers/Microprocessors. I’ll see how that goes. Thanks for sharing this amazing article. It has really helped me organize my thoughts about this topic! Wow! Thank you for that reply! I’m very glad to hear the experience of folks like you who are actively teaching this stuff — because theories about how people learn are kinda empty until you actually try ’em out in front of a few classrooms. I think that’s a neat approach to teach the bottom-up, and simultaneously have them working projects at a more-abstracted level. That may be the best of both worlds. Another commenter suggested that he was doing essentially the same thing on his own — building in Arduino and then dipping down when he can/wants. I’m interested to hear how it goes with Nucleo version of the class. Keep us updated? (I love the boards, at the very least for the cheap hardware. I have yet to do more than a “hello world” in mbed, though.) Thanks for your thoughts! I loved working on the 12f683. Only something like 96 bytes of ram, you have to turn off every extra peripheral, fight to save ram, etc. Obviously I could have gotten projects done sooner with something more powerful, but I guarantee you I would not have learned as much. I’ve always loved computers. And I’ve always wanted to know how they work on a deep level. So, I’m teaching myself C so that I can program microcontrollers to build cool projects as a hobby. I’ve decided to start with 8-bit AVRs as they seem easy to grasp. I’m going to be using the 328p. I’ve started to read the datasheet and I’ve learned a lot of useful information already and I haven’t even started anything yet. I want to learn a good portion of C before I really dive in. Please be kind and respectful to help make the comments section excellent. (Comment Policy)
https://hackaday.com/2016/02/24/when-are-8-bits-more-than-32/
CC-MAIN-2021-49
en
refinedweb
perfolationperfolation Originally designed for interpolation performance improvements, but as Scala's built-in interpolation has improved, this has become redundant. However, formatted interpolation is still both slow and painful to work with. To that end, this library has changed to providing convenient features for formatting dates and numbers. SetupSetup Perfolation supports Scala on JVM, JS, and Native with support for 2.11, 2.12, 2.13, and 3 Load the core dependency with SBT: libraryDependencies += "com.outr" %% "perfolation" % "1.2.5" Or the unit dependency for size conversions with SBT: libraryDependencies += "com.outr" %% "perfolation-unit" % "1.2.5" Main FeaturesMain Features Numeric FormattingNumeric Formatting Numeric implicits are supported for Int, Long, Double, and BigDecimal exposing a simple method f: def f(i: Int, // Minimum integer digits. Defaults to 1 f: Int = 0, // Minimum fraction digits. Defaults to 0 for Int/Long and 2 for Double/BigDecimal maxI: Int, // Maximum integer digits. Defaults to -1 for no maximum maxF: Int, // Maximum fraction digits. Defaults to -1 to use the same as `f` g: Grouping, // Grouping mode (Defaults to Grouping.None) rm: RoundingMode // Rounding mode (Defaults to RoundingMode.HalfUp) ): String Some simple examples: import perfolation._ 4.f(i = 2) // 04 40.f(f = 2) // 40.00 400.0.f() // 400.00 4000.0.f(f = 3, g = Grouping.US) // 4,000.000 Most of this follows a similar concept to f interpolation, but with a type-safe mechanism. Most commonly, you are likely to find this useful in interpolations like: println(s"The value is: ${value.f(g = Grouping.US)}") Date FormattingDate Formatting Perfolation provides a convenient CrossDate to make working with dates much easier between the JVM, JS, and Native. Similar to numeric formatting, date implicits are supported for Int, Long, Double, and BigDecimal although this is most commonly just used on Long for timestamps. Perfolation exposes a simple method t: def t: CrossDate Some simple examples: import perfolation._ val date = System.currentTimeMillis() date.t.milliseconds // Milliseconds on date date.t.hour24 // Hour in 24-hour time date.t.dayOfWeek // Numeric day of the week date.t.A // Full named day of the week (ex. "Tuesday") See the docs for CrossDate for a complete reference. Unit FormattingUnit Formatting Unit formatting requires the perfolation-unit module, but provides conversion between byte-based sizes. For example: import perfolation.unit._ Information.useBinary() // Configure Information to use binary conversions 5.kb.bytes // 5120L 5.gb.bytes // 5368709120L Information.useDecimal() // Configure Information to use decimal conversions 5.kb.bytes // 5000L 5.gb.bytes // 5000000000L (5 * 1000 * 1000).b.format() // "5.00 mb" The format method has a similar signature to that of f in numeric formatting: def format(i: Int = 1, f: Int = 2, maxI: Int = -1, maxF: Int = -1, g: Grouping = Grouping.None, rm: RoundingMode = RoundingMode.HalfUp, showUnit: ShowUnit = ShowUnit.Abbreviation): String
https://index.scala-lang.org/outr/perfolation/perfolation-unit/1.2.6?target=_sjs1.x_3.0.0-RC2
CC-MAIN-2021-49
en
refinedweb
Over time, Swift has developed a distinct dialect, a set of core idioms that distinguish it from other languages. Many new developers now arrive not just from Objective-C but also from Ruby, Java, Python, and more. The other day, I was helping Nicholas T Chambers find his groove with the new language. He was porting some Ruby code to build up his basic language skills. The code he was working with was this: def find_common(collection) sorted = {} most = [0,0] for item in collection do if not sorted.key? item then sorted[item] = 0 end sorted[item] += 1 if most[1] < sorted[item] then most[0] = item most[1] = sorted[item] end end return most end And his most recent Swift attempt was this: func find_common(items: [Int]) -> [Int] { var sorted = [Int: Int]() var most = [0, 0] for item in items { if sorted[item] == nil { sorted[item] = 0 } sorted[item]! += 1 if most[1] < sorted[item]! { most[0] = item most[1] = sorted[item]! } } return most } Other than a couple of forced unwraps, there’s almost no difference between the two. I don’t know much Ruby but this code in both versions feels very C-like and non-functional (in the fp sense, not the “won’t work” sense). I know Ruby supports some kind of reduce functionality, which you don’t see here. One of the first things I did when trying to learn Swift was to implement pages and pages of Ruby functional calls. I still have endless reject, delete_if, keep_if, etc playgrounds around. They’re really great for focusing in on learning the Swift language. Here’s the rewrite I suggested: import Foundation extension Array where Element: Hashable { /// Returns most popular member of the array /// /// - SeeAlso: /// func mode() -> (item: Element?, count: Int) { let countedSet = NSCountedSet(array: self) let counts = countedSet.objectEnumerator() .map({ (item: $0 as? Element, count: countedSet.count(for: $0)) }) return counts.reduce((item: nil, count: 0), { return ($0.count > $1.count) ? $0 : $1 }) } } In a way, this is an unfair refactor because I “went there” with NSCountedSet but writing in Swift doesn’t mean you reject Foundation. It seems to me that a counted set is exactly the kind of thing this code was trying to do: “Say you have a list of a random type (but its the same type throughout), in an arbitrary order. how do you find the most common item in the list?“. Here are some thoughts about the refactor Leverage Libraries. When migrating Swift, consider whether Foundation and Swift Foundation types will get you there faster. Counted sets provide a good match here because they do all the work of grouping and counting members. I wish there were a native version as I’m not crazy about either the object enumerator or that the code will compile even if you don’t specify hashable elements. Embrace Generics. The challenge list uses a random type. Hardcoding Int isn’t the way to do that. Bring generics into the solution early once you recognize the functionality is applicable to many types. Consider Protocols. A native version of counted set would be Hashable at a minimum, just like Swift sets. I include the restriction here but it does compile and run without that conformance. Live Functionally. Any kind of “find this within a list” screams functional programming to me. If your variables exist to store intermediate state while iterating a list, look to Swift’s core map/filter/reduce fp calls and eliminate explicit state. Avoid Global Functions. I felt my implementation was better expressed as a collection extension than a freestanding function. A mode always describes and operates on an array. Its implementation belongs as part of Array. I even considered making it a property rather than a function because a list’s mode is an intrinsic quality of an array. I’m still wavering back and forth on that point. Think Tests and Documentation. Even before you write a single line of code, considering test cases and documentation has become a core part of Swift development. I added a little doc markup here, I didn’t add any tests. Prefer Good Swiftsmanship. At first, I was drawn to syntactic specifics, like “use conditional binding” and “type the variable/prefer a literal” before I stepped back and considered the larger picture. Once I took a few moments to think about it, I retargeted my advice towards fp, but that doesn’t mean core Swift best practices should be overlooked. A lot of this falls into the big picture little picture dichotomy. When learning Swift, you probably want to work from the details up: learning how optionals work and how to use them right and how to use fp, all the way to creating tests, documentation, and leveraging protocols and generics. It’s hard to get hit in the face with so many concerns at once. Adding core API knowledge on top of *that* makes things even more difficult. Navigating both utility types and Cocoa/Cocoa Touch APIs represent a significant challenge to those new to Apple platforms, even for people with strong backgrounds in modern language fundamentals. At this point, writing “Swiftily” doesn’t just mean using conventional coding idioms but also remembering and leveraging the platform the language is arriving from. I hope counted set (and many other Cocoa Foundation outliers) make it over the bridge to native inclusion. 5 Comments Forgive me that I have not read your Swift style book. You may talk about this in there. When I read through the Swift standard library, I find it harder to read than most swift app code. Your mode extension reads as standard library code as well. I did a little refactor: Gist I added a typealias for the tuple, used trailing closures and formatted it for what I feel is more readable. I’d be interested in your thoughts (or, if easier, citations to look up in your book) Dang I wanted to make a point about the tuple, and adding labels, but I forgot Hi, More than one element of a sequence (which may be an array, of course) may occur the maximum number of times, so here is my version which returns a tuple containing an array of all of the mode elements and their shared count. The array will be empty and the count will be zero when the sequence is empty. extension Sequence where Iterator.Element: Hashable { func modes() -> (elements: [Iterator.Element], count: Int) { var maxCount = 0 var modes: [Iterator.Element] = [] var countedElements: [Iterator.Element: Int] = [:] forEach { element in var count = countedElements[element] ?? 0 count += 1 countedElements[element] = count if count > maxCount { maxCount = count modes = [element] } else if count == maxCount { modes.append(element) } } return (modes, maxCount) } } // Examples. let empty = [Int]() print(empty.modes()) // output: ([], 0) let modes = "CGATTATGCGGCC".characters.modes() print(modes) // output: (["G", "C"], 4) let samples = ["a3", "b5", "c4", "a3", "b9", "a8", "b3"] print(samples.modes()) // output: (["a3"], 2) Hi, I have a question related to objectEnumerator used in code above. Everything works as expected without calling it in this line: let counts = countedSet.map { (item: $0 as? Element, count: countedSet.count(for: $0)) } Can you please explain what is a purpose of using objectEnumerator here? Is my assumption correct that we don’t have to call it here? The objectEnumerator() call is superfluous in Swift because NSCountedSet conforms to Swift’s Sequence protocol. By the way, here is an alternative implementation using NSCountedSet, and the return type is deliberately slightly different: extension Array where Element: Hashable { /// Returns tuple of most popular member of the array and the member's count, or nil when array is empty. /// In case of multiple members being equally most popular, the particular member that is returned is undefined. func altEricaMode() -> (item: Element, count: Int)? { let countedSet = NSCountedSet(array: self) return countedSet .max { countedSet.count(for: $0) < countedSet.count(for: $1) } .map { ($0 as! Element, countedSet.count(for: $0)) } } } </code
https://ericasadun.com/2017/01/24/swift-idioms/
CC-MAIN-2021-49
en
refinedweb
Symptoms Consider the following scenario: You create a Microsoft Visual C++ project in Microsoft Visual Studio 2010. You change the Additional Manifest Files option to support Windows 7. Note To do this in Visual Studio 2010, follow these steps: Open the project's Property Pages dialog box, expand the Manifest Tools node in the Configuration Properties pane, and then click Input and Output. Add the following manifest file to the Additional Manifest Files option in the right-side pane: <> You try to rebuild the project to merge the manifest file. In this scenario, you receive the following warning message: 81010002: Unrecognized Element "compatibility" in namespace "urn:schemas-microsoft-com:compatibility.v1" Cause This issue occurs because the Mt.exe tool that is available in the Windows software development kit (SDK) does not recognize the compatibility element in the manifest file. do not have to restart the computer after you install the hotfix. We recommend that you close all Visual Studio 2010-related components before For more information about the Mt.exe tool, go to the following MSDN website: General information about the Mt.exe toolFor more information about the application manifest, go to the following MSDN website: General information about the application manifest Status Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
https://support.microsoft.com/en-gb/topic/fix-81010002-warning-message-when-you-rebuild-a-visual-c-project-in-visual-studio-2010-f9a5c2cd-5e43-c76f-b1cc-e04d3f221224
CC-MAIN-2021-49
en
refinedweb
In this assignment, you will explore three classic puzzles from the perspective of uninformed search. A skeleton file homework2 cases where you find yourself reinventing the wheel. If you are unsure where to start, consider taking a look at the data structures and functions defined in the collections, itertools, math, this section, you will develop a solver for the $n$-queens problem, wherein $n$ queens are to be placed on an $n \times. [5 points] Rather than performing a search over all possible placements of queens on the board, it is sufficient to consider only those configurations for which each row contains exactly one queen. Therefore, without taking any of the chess-specific constraints between queens into account, we want to first consider the number of possible placements of $n$ queens on an $n \times n$ board without or with the restriction that each row contains exactly one queen. Implement the function num_placements_all(n), which returns the number of all possible placements of $n$ queens on an $n \times n$ board, and the function num_placements_one_per_row(n) that calculates the number of possible placements of $n$ queens on an $n \times n$ board such that each row contains exactly one queen. Think carefully about why this restriction is valid, and note the extent to which it reduces the size of the search space. You should assume that all queens are indistinguishable for the purposes of your calculations. [5 points] With the answer to the previous question in mind, a sensible representation for a board configuration is a list of numbers between $0$ and $n - 1$, where the $i$th number designates the column of the queen in row $i$ for $0 \le i \lt, 2]) True >>> n_queens_valid([0, 1]) False >>> n_queens_valid([0, 3, 1]) True [15 points] Write a function n_queens_solutions(n) that returns a list of all valid placements of $n$ queens on an $n \times n$ board, using the representation discussed above. The output may be in any order you see fit. Your solution should be implemented as a depth-first search, where queens are successively placed in empty rows until all rows have been filled. You may find it helpful to define a helper function n_queens_helper(n, board) that yields all valid placements which extend the partial solution denoted by board. >>> n_queens_solutions(6) [[1, 3, 5, 0, 2, 4], [2, 5, 1, 4, 0, 3], [3, 0, 4, 1, 5, 2], [4, 2, 0, 5, 3, 1]] >>> len(n_queens_solutions(8)) 92 The Lights Out puzzle consists of an $m \times last section for more details. . >>> b = [[True, False], [False, True]] >>> p = LightsOutPuzzle(b) >>> p.get_board() [[True, False], [False, True]] >>> b = [[True, True], [True, True]] >>> p = LightsOutPuzzle(b) >>> p.get_board() [[True, True], [True, True]] . After importing the random module with the statement import random, the expression random.random() < 0.5 generates the values True and False with equal probability. ]] >>> for i in range(2, 6): ... p = create_puzzle(i, i + 1) ... print(len(list(p.successors()))) ... 6 12 20 30 GUI provided. In this section, you will investigate the movement of disks on a linear grid. if another disk is located on the intervening square. cell from to the cell to. $\ell$, then a desired solution moves the first disk from cell $0$ to cell $\ell - 1$, the second disk from cell $1$ to cell $\ell - 2$, $\cdots$, and the last disk from cell $n - 1$ to cell $\ell -)] >>> solve_distinct_disks(4, 3) [(1, 3), (0, 1), (2, 0), (3, 2), (1, 3), (0, 1)] >>> solve_distinct_disks(5, 3) [(1, 3), (2, 1), (0, 2), (2, 4), (1, 2)] [1 point] Approximately how long did you spend on this assignment? [2 point] Which aspects of this assignment did you find most challenging? Were there any significant stumbling blocks? [2 point] Which aspects of this assignment did you like? Is there anything you would have changed? Once you’ve filled in the functions for any of the problems ( N-Queens, Lights Out & Linear Disk Movement), you can test out your implementation using the homework2_gui.py file provided. Simply run the script using python homework2_gui.py. Once you run this command, Mac users should see a window pop up that is completely white. On the menu bar, go to New and select the problem you are trying to test out. If your implementation is correct, you should be able to obtain solutions for the problem.
http://artificial-intelligence-class.org/homeworks/hw2/homework2.html
CC-MAIN-2021-49
en
refinedweb
Mime type rootXML equality improvement -------------------------------------- Key: TIKA-367 URL: Project: Tika Issue Type: Improvement Components: mime Affects Versions: 0.5 Environment: My local MacBook pro Reporter: Chris A. Mattmann Assignee: Chris A. Mattmann Fix For: 0.6 While working on TIKA-357 and TIKA-366, I noticed (and Ken did too) that XHTML detection was no longer working in his regression test within o.a.tika.parser.html.HtmlParserTest#testXhtmlParsing. The cause of this has to do with the fix for TIKA-327. Because I used namespaceless html and link tags as valid root XML for the text/html mime type, text/html was now matching for the application/html+xml example that Ken had previously included in o.a.tika.parser.html.HtmlParserTest#testXhtmlParsing. Phew. You still with me? OK, so if you are, it turns out that the reason it failed was due to the rootXML matches rules that were being employed. The code boiled down to: {code} boolean matches(String namespaceURI, String localName) { //Compare namespaces if (!isEmpty(this.namespaceURI)) { if (!this.namespaceURI.equals(namespaceURI)) { return false; } } //Compare root element's local name if (!isEmpty(this.localName)) { if (!this.localName.equals(localName)) { return false; } } return true } {code} The issue with this block is that this version of the #matches function is too lenient. So lenient, to the point of declaring one root-XML match for a localName "html" with no namespace superseded another root-XML with a localName "html", and that included a namespace. This isn't the behavior we want. To alleviate this we should check if this.namespaceURI and this.localName are empty (e.g., put in an else block above) and make sure that if they are, the provided namespaceURI and localName are empty as well in order to return true. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.us.apache.org/mod_mbox/tika-dev/201001.mbox/%3C1209077487.361171263965636293.JavaMail.jira@brutus.apache.org%3E
CC-MAIN-2021-49
en
refinedweb
Setup To be able to begin to retrieve Metadata and other information from the Salesforce database, you will have to first establish a connection. Note: If you are using a sandbox environment, the login url will be changed from login.salesforce.com to test.salesforce.com. To establish the connection you will need to do the following things first. 1. Create a connected app in the Salesforce platform. 2. Get the consumerkey and consumersecret from the created app 3. Create a security token for your Salesforce account. Establishing a connection The python code below uses the SOAP API in Salesforce to grab data. It is kept as its own function, for simplicity, and to easily allow threading. import requests import json # Pass the api endpoint that you want to retrieve data from # Example: api = "/services/data/v37.0/sobjects/" def connect(api): sandbox=True payload = { 'grant_type': 'password', 'client_id': consumer_key, 'client_secret': consumer_secret, 'username': username, 'password': password } #there are different login urls in sandbox and production if sandbox: loginUrl = "" else: loginUrl = "" headerPayload = {"Content-Type":"application/x-www-form-urlencoded"} try: r = requests.post(loginUrl, headers=headerPayload,data=payload) except Exception as e: msg = "Error >> couldn't get login token >>>>" + str(e) print msg try: body = json.loads(r.content) except Exception as e: print "couldn't get body" print str(e) try: token = body["access_token"] except Exception as e: print "ERROR couldn't get access token > " + str(e) try: #get the url to send url = body['instance_url'] + api logging.info( "url: " + url) except Exception as e: print "ERROR Couldnt get url > " + str(e) try: #get request r = requests.get(url, headers = {"Authorization":"Bearer " + token}) #this will print out the output in an easy to read format parsedJson = json.loads(r.content) return parsedJson except Exception as e: print "ERROR couldn't get r.content > " + str(e) Other Notes - There are a limited amount of rest calls allowed daily so it would be a good idea to check how many rest calls you have remaining and store them somewhere. - The rest calls get returned and processed by Salesforce extremely slowly. It is highly encouraged to use threads to send Rest calls much faster and Queue up as many as you need.
https://paxson.io/connecting-to-salesforce-rest-api-in-python/
CC-MAIN-2021-49
en
refinedweb
In the previous post, methods clone_node and indent_node were described. Next method we are going to describe is dedent_node or moving node left. Method dedent_node Method dedent_node(pos) moves node at the given position to left, i.e. to its grand-parent just after its current parent. This operation can change size of model by deleting some positions. Here is the image showing dedent_node in action. We are using outline from our last experiment. ![ Here is the definition of this method: def dedent_node(self, pos): '''Moves node left''' (positions, nodes, attrs, levels, expanded, marked) = self.data # this node i = positions.index(pos) if levels[i] == 1: # can't move left return gnx = nodes[i] # parent node pi = up_level_index(levels, i) pp = positions[pi] pgnx = nodes[pi] psz = attrs[pgnx].size h, b, mps, chn, sz0 = attrs[gnx] # grandparent node gpi = up_level_index(levels, pi) gpp = positions[pi] gpgnx = nodes[gpi] di0 = i - gpi di1 = di0 + sz0 di2 = pi - gpi di3 = di2 + psz def movedata(j, ar): ar[j+di0: j+di3] = ar[j+di1:j+di3] + ar[j+di0:j+di1] def decrease_levels(j): a = j + di0 b = j + di1 levels[a:b] = [x-1 for x in levels[a:b]] donepos = [] for gxi in gnx_iter(nodes, gpgnx, attrs[gpgnx].size): donepos.append(positions[gxi + di2]) decrease_levels(gxi) if di1 != di3: # this node is not the last child of its parent # we need to move data to the end of parent block movedata(gxi, positions) movedata(gxi, nodes) movedata(gxi, levels) for pxi in gnx_iter(nodes, pgnx, psz-sz0): if positions[pxi] not in donepos: a = pxi + di0 - di2 b = a + sz0 del positions[a:b] del nodes[a:b] del levels[a:b] update_size(attrs, pgnx, -sz0) update_size(attrs, gpgnx, sz0) # replace parent with grandparent mps[mps.index(pgnx)] = gpgnx self._update_children(pgnx) self._update_children(gpgnx) First lines 7-9 prevents dedenting top level nodes (at level 1). Then we collect necessary data about this node, its parent and grand parent node (lines 11-24). Then we calculate the distances of data blocks in lines (26-29). The data block [di0:di1] relative to grand-parent index represents the node being moved. The data block [di2:di3] represents parent node. Then we define two helper functions: movedata and decrease_levels. The movedata method moves block of this node at the end of its current parent node. For example if we look at the left side of the above picture and let’s consider moving node at position P9 left. We would have the following situation (see lines 26-29): i, pi, gpi = 9, 8, 0 sz0, psz = 4, 11 di0 = i - gpi = 9 di1 = di0 + sz0 = 13 di2 = pi - gpi = 8 di3 = di2 + psz = 19 In this case there is only one grand parent at index 0 (root node). We would then be moving blocks of data as follows (see line 33): Look at the above image, left side to see which nodes are moved where. The other helper function decrease_levels decreses levels of this node and its subtree by one. If di1 == di3 (end of this node is equal to the end of its parent, i.e. this node is the last child of its parent), we don’t need to move data blocks because they are already where they need to be (see lines 44-46). Then we visit every occurrence of grand parent node (for loop lines 40-49) and each time we decrease levels, then if necessary move this node at the end of its parent. Each position of parent node is marked as done, so that we don’t process it again. In the loop (lines 51-57) we are visiting every occurrence of parent node and if it is not already processed we delete child node that was dedented. Then we update sizes of parent and grand parent ancestors nodes (lines 62-63). Then we change link to parent node to point to the grand parent node in line 63. Finally in lines 65-66 we update lists of children for the parent node and grand parent node.
https://computingart.net/leo-tree-model-7.html
CC-MAIN-2021-49
en
refinedweb
Get your application up and running Adding the route to the main view We previously replaced the original MainView with our own. The new one does not have an @Route annotation which we need set our view as the the root route. Switch back to IntelliJ IDEA. Expand the src/main/java/com.vaadin.tutorial.crm.uipackage and open MainView.java. Add the @Route("")annotation at the beginning of the MainViewclass. Your MainView class should now look like this: @Tag("main-view") @JsModule("./src/views/main-view.js") @Route("") (1) public class MainView extends PolymerTemplate<MainView.MainViewModel> { // The rest of the file is omitted from the code snippet } The @Routeannotation maps to MainView. Running the project Next, we run the project to see how the new layout will look like. The easiest way to run the project for the first time is to: Open the ApplicationJava class in src/main/java/com/vaadin/tutorial/crm/Application.java Click the green play button next to the line which starts with "public class Application". This starts the application and automatically adds a run configuration for it in IntelliJ IDEA. Later, when you want to run or restart the application, you can build, run/restart, stop and debug the application from the toolbar: When the build is finished and the application is running. Open in your browser to see the result. Proceed to the next chapter to connect your views to Java: Connecting your Main View to Java
https://vaadin.com/docs/latest/tools/designer/getting-started/get-your-application-up-and-running
CC-MAIN-2021-49
en
refinedweb
Customizing ASP.NET Core Part 12: Hosting Jürgen Gutsch - 29 April, 2019 12th part of this series, I'm going to write about how to customize hosting in ASP.NET Core. We will look into the hosting options, different kind of hosting and a quick look into hosting on the IIS. And while writing this post this again seems to get a long one. This will change in ASP.NET Core 3.0. I anyway decided to do this post about ASP.NET Core 2.2 because it still needs some time until ASP.NET Core 3.0 is released. This post is just an overview bout the different kind of application hosting. It is surely possible to go a lot more into the details for each topic, but this would increase the size of this post a lot and I need some more topics for future blog posts ;-) - This article Quick setup For this series we just need to setup a small empty web application. dotnet new web -n ExploreHosting -o ExploreHosting That's it. Open it with Visual Studio Code: cd ExploreHosting code . And voila, we get a simple project open in VS Code: WebHostBuilder Like in the last post, we will focus on the Program.cs. The WebHostBuilder is our friend. This is where we configure and create the web host. The next snippet is the default configuration of every new ASP.NET Core web we create using File => New => Project in Visual Studio or dotnet new with the .NET CLI: public class Program { public static void Main(string[] args) { CreateWebHostBuilder(args).Build().Run(); } public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>(); } As we already know from the previous posts the default build has all the needed stuff pre-configured. All you need to run an application successfully on Azure or on an on-premise IIS is configured for you. But you are able to override almost all of this default configurations. Also the hosting configuration. Kestrel After the WebHostBuilder is created we can use various functions to configure the builder. Here we already see one of them, which specifies the Startup class that should be used. In the last post we saw the UseKestrel method to configure the Kestrel options: .UseKestrel((host, options) => { // ... }) Reminder: Kestrel is one possibility to host your application. Kestrel is a web server built in .NET and based on .NET socket implementations. Previously it was built on top of libuv, which is the same web server that is used by NodeJS. Microsoft removes the dependency to libuv and created an own web server implementation based on .NET sockets. Docs: This first argument is a WebHostBuilderContext to access already configured hosting settings or the configuration itself. The second argument is an object to configure Kestrel. This snippet shows what we did in the last post to configure the socket endpoints where the host needs to listen to: .UseKestrel((host, options) => { var filename = host.Configuration.GetValue("AppSettings:certfilename", ""); var password = host.Configuration.GetValue("AppSettings:certpassword", ""); options.Listen(IPAddress.Loopback, 5000); options.Listen(IPAddress.Loopback, 5001, listenOptions => { listenOptions.UseHttps(filename, password); }); }) This will override the default configuration where you are able to pass in URLs, eg. using the applicationUrl property of the launchSettings.json or an environment variable. HTTP.sys Do you know that there is another hosting option? A different web server implementation? It is HTTP.sys. This is a pretty mature library deep within Windows that can be used to host your ASP.NET Core application. .UseHttpSys(options => { // ... }) The HTTP.sys is different to Kestrel. It cannot be used in IIS because it is not compatible with the ASP.NET Core Module for IIS. The main reason to use HTTP.sys instead of Kestrel is Windows Authentication which cannot be used in Kestrel only. Another reason is, if you need to expose it to the internet without the IIS. Also the IIS is running on top of HTTP.sys for years. Which means UseHttpSys() and IIS are using the same web server implementation. To learn more about HTTP.sys please read the docs. Hosting on IIS An ASP.NET Core Application shouldn't be directly exposed to the internet, even if it's supported for even Kestrel or the HTTP.sys. It would be the best to have something like a reverse proxy in between or at least a service that watches the hosting process. For ASP.NET Core the IIS isn't only a reverse proxy. It also takes care of the hosting process in case it brakes because of an error or whatever. It'll restart the process in that case. Also Nginx may be used as an reverse proxy on Linux that also takes care of the hosting process. To host an ASP.NET Core web on an IIS or on Azure you need to publish it first. Publishing doesn't only compiles the project. It also prepares the project to host it on IIS, on Azure or on an webserver on Linux like Nginx. dotnet publish -o ..\published -r win32-x64 This produces an output that can be mapped in the IIS. It also creates a web.config to add settings for the IIS or Azure. It contains the compiled web application as a DLL. If you publish a self-contained application it also contains the runtime itself. A self-contained application brings it's own .NET Core runtime, but the size of the delivery increases a lot. And on the IIS? Just create a new web and map it to the folder where you placed the published output: It get's a little more complicated if you need to change the security, if you have some database connections and so on. This would be a topic for a separate blog post. But in this small sample it simply works: This is the output of the small Middleware in the startup.cs of the demo project: app.Run(async (context) => { await context.Response.WriteAsync("Hello World!"); }); Nginx Unfortunately I cannot write about Nginx, because I don't have a running Linux currently to play around with it. This is one of the many future projects I have. I just got ASP.NET Core running on Linux using the Kestrel webserver. Conclusion ASP.NET Core and the .NET CLI already contain all the tools to get it running on various platforms and to set it up to get it ready for Azure and the IIS, as well as Nginx. This is super easy and well described in the docs. BTW: What do you think about the new docs experience compared to the old MSDN documentation? I'll definitely go deeper into some of the topics and in ASP.NET Core there are some pretty cool hosting features that make it a lot more flexible to host your application: Currently we have the WebHostBuilder that creates the hosting environment of the applications. In 3.0 we get the HostBuilder that is able to create a hosting environment that is completely independent from any web context. I'm going to write about the HostBuilder in one of the next blog posts.
https://asp.net-hacker.rocks/2019/04/29/customizing-aspnetcore-12-hosting.html
CC-MAIN-2021-49
en
refinedweb
Question 22) A. John & Son Canada Corp has the following capital structure. Security 1)6.0% Bond---30million 2)5.5% Straight bond ----10 million 3)7.5% Preferred stock ----10 million 4)Common stock -----50 million Total $100 million The 6% bond is a callable bond and yield to call (YTC) of this bond is 6.75%. The straight bond which has a coupon rate of 5.5% has a yield of 6.45%. The preference share of John & Son is currently trading at $95. The common stock of John & Son Canada Corp has a beta of 1.25. The T-bill rate is 2% and the return on TSX composite index is 9%. The corporate tax rate is 35%. i) Compute the cost of capital after tax for each sources of financing (capital)? [6 marks] ii) What is its WACC? iii) If John & Son Canada Corp is evaluating an investment proposal that provide internal rate of return (IRR) of 12%, should the company go with the proposal assuming that the WACC ramains constant. Top Answer The cost of debt is the effective interest rate a company pays on its debts. It's the cost of debt, such as bonds and loans,... View the full answer
https://www.coursehero.com/tutors-problems/Finance/19990068-22-A-John-Son-Canada-Corp-has-the-following-capital-structure-Sec/
CC-MAIN-2020-05
en
refinedweb
You know how you can you type python script.py and magic happens? That’s the Python virtual machine, or language interpreter, reading the source code you wrote, translating it down to bytecode the Python VM can understand, and then executing it. I use the terms language interpreter and language VM interchangeably. I’ll try to be consistent, but then I try to resist unattended jelly doughnuts too. Please make sure you have a C compiler installed! GCC or clang are good choices. Like Frieza, a program has multiple forms. When you start coding one, you write text that looks like: #include <stdio.h> int main (void) { printf("Hello World!"); return 0; } Your CPU has no idea what to do with that. We have to transform this text into something the CPU can understand and act upon: binary. This process (or series of processes) is often called compilation and requires more steps. All processors have a language of their own they can understand. This is often called assembly code and is highly specific to the processor. Assembly that your iPhone or Galaxy C4 Boom Edition can understand is not comprehensible to that cheaper AMD proc you bought on NewEgg over the Intel one, and you totally don’t regret that decision at all. You can write assembly code directly, though this is rare in modern times. Its tedious and annoying, much like an episode of Friends. Your friend the compiler can take your source code, and spit out assembly code for you. Let’s take our earlier C code example and put it in a file called 01_c_hello_world.c: #include <stdio.h> int main (void) { printf("Hello World!"); return 0; } Save that somewhere on your disk. Now, from a terminal, run: $ gcc -S 01_c_hello_world.c $ cat /path/to/01_c_hello_world.s You should see some version of the following: .section __TEXT,__text,regular,pure_instructions .macosx_version_min 10, 13 .globl _main ## -- Begin function main .p2align 4, 0x90 _main: ## @main subq $16, %rsp leaq L_.str(%rip), %rdi movl $0, -4(%rbp) movb $0, %al callq _printf xorl %ecx, %ecx movl %eax, -8(%rbp) ## 4-byte Spill movl %ecx, %eax addq $16, %rsp popq %rbp retq .cfi_endproc ## -- End function .section __TEXT,__cstring,cstring_literals L_.str: ## @.str .asciz "Hello World!" Don’t panic! You don’t need to know what all that means, nor will we be writing this. Its to show what assembly looks like. Once we have the assembly code, there’s another program (often baked in to the compiler) that takes the assembly and transforms it into the 0s and 1s that our CPU can understand. To see the assembler in action, you can run: $ gcc -c 01_c_hello_world.s -o 01_c_hello_world.o $ ls 01_c_hello_world.c 01_c_hello_world.o 01_c_hello_world.s What the .o file contains is close to the actual 0s and 1s that the CPU can execute. It would be platform specific, but would execute quickly. One of the benefits used to market Java way back when it first lumbered onto the scene was the "write once, run anywhere" promise. That is, the Java code you wrote could run, unmodified, on any hardware platform that could run the JVM. This meant that people needed to care about one program, the JVM, running on their hardware, and Sun Microsystems (later Oracle) would take care of that part. Other languages follow this model: the .NET CLR, Python, Ruby, Perl, and more. While these VMs provide services (hardware abstraction, garbage collection, and more), it all comes with a price: slower execution speed and higher resource consumption. As a general rule, languages that run on a VM execute more slowly than ones compiled to run on specific hardware. The last thing to cover in this post is the concept of registers. On a CPU, a register is a special area to store data. For a more detailed explanation, I’ll steal from Wikipedia:. — Wikipedia When your CPU executes code to set a variable to the number 5, that 5 is probably going to be loaded into a register somewhere. Our application that is pretending to be a CPU will also have registers it can use. We’re going to write an application that pretends to be a CPU, and executes programs we write for it. Which, of course, means we’ll have to invent a language too. But we’ll get to all that later. You should now have enough basic knowledge to go on to the next section.
https://www.tefter.io/bookmarks/38099/readable
CC-MAIN-2020-05
en
refinedweb
Subject: Re: [boost] Is there any interest in a library which solves C++ class reflection feature just like Java does? From: jinhua luo (ljh.home.king_at_[hidden]) Date: 2011-12-05 23:46:09 Hi, I just have a quick look at the boost::fusion. The declaration syntax is a bit similar, but they are completely different things, I think. And the my library do not use boost::fusion. Also have a quick look at the boost::reflect, and I found it's a different style to fulfill the reflection: a) my way is non-intrusive, which means it makes no assumption about your class, e.g. inheritance, or implements any interfaces. Any public class members (member functions) of any type (even the C++ reference) can be reflected. But the boost::reflect seems to be focus on the interface-oriented reflection. b) in my way, you just need to declare your class via dedicated macros, and do nothing else, then you can use Java-like reflection API to reflect and use your classes at run time. And the API do not need any initialization before you begin to use. All the declared classes will be registered automatically. c) The declaration and reflection is separated. In my example, you can see that the declaration resides in the standalone shared library, while the reflection API usage resides in any other program, in which program you do not need to include any concrete class header file, because that's where the reflection magic happens. But the boost::reflect needs to know about the interface definition obviously. d) when the compiler compiles the declaration, it will raises errors if the declaration is invalid, e.g. the prototype doesn't match the class definition. So it's type-safe. Moreover, the reflection API will raise exceptions if you try to launch some invalid reflection, e.g. non-exist method, wrong method prototype. e) the declaration is flexible enough. you can place the declaration anywhere (and within any namespace), which means you don't have to place it into one single source file. in fact, you don't need to care about the ODR (One-Definition-Rule) violation, and it will eliminate the duplicated class registration, and that's why I said you can place the declaration within the header file, which may be included within any compile unit. You can even place the same declaration both in the shared library and the executable. Regards, JinHua 2011/12/6 Julien Nitard <julien.nitard_at_[hidden]> > Hi, > > >In brief, I'd designed and implemented somehow C++ class reflection based > on boost libraries. > > I am interested. Could you detail how it differs from > BOOST_*FUSION*_ADAPT_* > STRUCT* in boost::fusion ? It seems very similar to me. > > > >It has some unique advantages: > >a) it doesn't require code generator > > Well ... I'd prefer that a code generater be provided: Your syntax is not > that lightweight, you can compare with boost::reflect that is a non > proposed library (link). > If you want to reflect a large number of classes, then your solution is not > very maintenable (as all solutions without a code generator). > > One more thing: your solution is "runtime" reflection, I think in C++ a > compile time API would make more sense, though, I must say there are > advantages for both. > > Regards, > > Julien > > _______________________________________________ > Unsubscribe & other changes: > > Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2011/12/188675.php
CC-MAIN-2020-05
en
refinedweb
משיב מוביל Hyper-V 2016 VMs stuck 'Creating checkpoint 9%' while starting backups שאלה We have a two clustered W2016 Hyper-V hosts, every couple of days one of the hosts gets stuck when the backup kicks off. In Hyper-V manager the VMs all say 'Creating checkpoint 9%' It's always the same percentage 9%. You can't cancel the operation and the VMMS service refuses to stop, the only way to get out of the mess is to shutdown the VMs, and hard reset the effected node. The backup works for a few days then it all starts again. The only events I can see on the effected node is: Event ID: 19060 source: Hyper-V-VMMS 'VMName' failed to perform the 'Creating Checkpoint' operation. The virtual machine is currently performing the following operation: 'Creating Checkpoint'. Can anybody help please? Cluster validation is clean. Hosts and guests are patched up. תשובות כל התגובות Hi Sir, Are you using windows server backup to backup these VMs ? How long is the backup interval ? Every time the backup will stuck at "9%" ? Best Regards, Elton Please remember to mark the replies as answers if they help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com. Thanks for responding, The backup is Veeam 9.5 Backup interval is configured to run each night at 23:00, it normally takes 20 minutes to finish. When the backup job starts it must call to the Hyper-V host to take a checkpoint, it this that stalls at 9% each time, not the backup it's self. VMMS becomes unresponsive on the host and I can't even LiveMigration VMs, hard resetting the host seems to be the only way to get out of the situation. @AustinT have you tries restarting VMMS ? restart-service VMMS What does the job information tell you about this job ? gwmi -namespace root\virtualization\v2 -class msvm_concretejob gwmi -namespace root\virtualization\v2 -class msvm_migrationjob gwmi -namespace root\virtualization\v2 -class msvm_storagejob Hi, The VMMS service is unresponsive and won't restart. Here is the output from the msvm_concretejob....the other two returned nothing.PS C:\> gwmi -namespace root\virtualization\v2 -class msvm_concretejob __GENUS : 2 __CLASS : Msvm_ConcreteJob __SUPERCLASS : CIM_ConcreteJob __DYNASTY : CIM_ManagedElement __RELPATH : Msvm_ConcreteJob.InstanceID="8C874D34-8F00-412F-8F92-056A62C954CB" __PROPERTY_COUNT : 41 __DERIVATION : {CIM_ConcreteJob, CIM_Job, CIM_LogicalElement, CIM_ManagedSystemElement...} __SERVER : HV2 __NAMESPACE : root\virtualization\v2 __PATH : \\HV2\root\virtualization\v2:Msvm_ConcreteJob.InstanceID="8C874D34-8F00-412F-8F92-056A62C954CB" Cancellable : True Caption : Creating Checkpoint CommunicationStatus : DeleteOnCompletion : False Description : Creating Virtual Machine Checkpoint DetailedStatus : ElapsedTime : 00000001183119.965512:000 ElementName : Creating Checkpoint ErrorCode : 0 ErrorDescription : ErrorSummaryDescription : HealthState : 5 InstallDate : 16010101000000.000000-000 InstanceID : 8C874D34-8F00-412F-8F92-056A62C954CB231150.000000-000 StartTime : 20170327231150.000000-000 Status : OK StatusDescriptions : {Job is running} TimeBeforeRemoval : 00000000000500.000000:000 TimeOfLastStateChange : 20170327231150.000000-000 TimeSubmitted : 20170327231150.000000-000 UntilTime : PSComputerName : HV2 __GENUS : 2 __CLASS : Msvm_ConcreteJob __SUPERCLASS : CIM_ConcreteJob __DYNASTY : CIM_ManagedElement __RELPATH : Msvm_ConcreteJob.InstanceID="C4CA521A-3309-48D8-B659-F7EF6B95615B" __PROPERTY_COUNT : 41 __DERIVATION : {CIM_ConcreteJob, CIM_Job, CIM_LogicalElement, CIM_ManagedSystemElement...} __SERVER : HV2 __NAMESPACE : root\virtualization\v2 __PATH : \\HV2\root\virtualization\v2:Msvm_ConcreteJob.InstanceID="C4CA521A-3309-48D8-B659-F7EF6B95615B" Cancellable : True Caption : Creating Checkpoint CommunicationStatus : DeleteOnCompletion : False Description : Creating Virtual Machine Checkpoint DetailedStatus : ElapsedTime : 00000001184028.776685:000 ElementName : Creating Checkpoint ErrorCode : 0 ErrorDescription : ErrorSummaryDescription : HealthState : 5 InstallDate : 16010101000000.000000-000 InstanceID : C4CA521A-3309-48D8-B659-F7EF6B95615B230241.000000-000 StartTime : 20170327230241.000000-000 Status : OK StatusDescriptions : {Job is running} TimeBeforeRemoval : 00000000000500.000000:000 TimeOfLastStateChange : 20170327230241.000000-000 TimeSubmitted : 20170327230241.000000-000 UntilTime : The issue came back even with CSV cache disabled. See this thread for people having similar issues. All poster have Dell based Intel 10GbE cards with Windows 2016 installed. Hi, The solution for me was to update the network adapter firmware and use the latest Intel drivers from the Dell support website. IMHO it was the inbox drivers from Microsoft causing the issues, the Intel drivers seem to have fixed the issue. All the best Hey guys, the solution for me was to rename a broken Hyper-V Server so it was not loaded any more. The Hyper-V server was in one folder including disks and machine. So I stopped and killed the hyper-v services and simply renamed the folder. All services started up and all other virtual machines worked fine after this action. Best greets, Daniel Hebel I've got that same issue... stuck at 9% creating checkpoint on one node of a 2-node hyper-v cluster. I"m also using veeam 9.5u4. I"ve got the latest NIC drivers installed and latest updates on windows 2019. Daniel, what do you mean "I renamed the folder". I can't restart the node right now or do anything drastic to try and recover during production hours. I tried restarting one VM and now its stuck unable to fully turn off. I assume from above, I can't simply taskkill vmms or the vmwp processes. I notice two events in the vmms admin event viewer which seem to be at the start of the issue... error 32587 and 32510 . Both say "the description of event ID #### from source cannot be found..." It referenced a VM that was in the off state. I was using hyper-v replication too for a few key VMs and have turned that off per - נערך על-ידי trump26901 יום רביעי 13 מרץ 2019 14:35 production with the check box option to create standard if the guest does not support creation of production checkpoints. there are about 30 VMs in the backup job all of which are in the same cluster. most use the veeam app-aware processing function, but a few are copy only. The VMs that are stuck are all on the same host and are a mixture of app-aware and copy only in veeam. I had three VMs that were set to replicate to an off-cluster host and one of those was on the problem host as well. I go the same outputs for as AustinT back in his old post with the wmi calls and only gwmi -namespace root\virtualization\v2 -class msvm_concretejob gives a response and my response looks almost identical. I didn't try killing vmms since he said that didn't help and at least for the moment I don't want to upset the cluster until I can allow for a more controlled potential outage. Is there a way to force those Msvm_ConcreteJob jobs to stop? 100% it was drivers for me. Is this a Dell server with Intel NICS? Use this PowerShell to see the driver version and more importantly the driver provider. It should say Intel and not Microsoft.Get-NetAdapter | where {$_.InterfaceDescription -notlike "Hyper-V*"} | where {$_.InterfaceDescription -notlike "Microsoft*"}| FT Name, DriverInformation, DriverProvider -AutoSize I'll keep that in mind. maybe the production one increases the chances of the problem coming up, but for now all I can say is that something on the host broke. I should be able to get in a full cluster shutdown tonight, so that should hopefully give me some more breathing room to trouble-shoot. I am having the exact same issue, Dell Servers XC 630 (essentially R730's), Intel cards X520, and using Veeam, drivers are Intel 4.1.4.0. Checkpoints are set to use Production with standard as failback. Did you manage to find anything in your troubleshooting trump26901 ? At the moment I do not appear to be able to stop the checkpoint creation, VM's on here will not shutdown or migrate off, so looking to have to do the forced host reboot. I could do without this happening again, if anyone found any further resolution ? - nothing yet. I did a "controlled" shutdown of the cluster and all running VMs (any that would not shut off, I got them to stop at the shutting off phase and then I shut down the cluster and did taskkill to allow the nodes to reboot. I haven't run into the problem again yet and so I don't want to do anything too drastic and try and fix something that only happens under a very rare set of conditions which we might never see again. If it happens to me again I'll start looking into it more seriously. We had the same issue today with Windows Server 2016 Datacenter running Hyper-V, and VM running Windows Server 2016 Standard, Exchange 2016 15.1.1415.2. Cancelled checkpoint creation from Hyper-V - no result. Graceful stop of Veeam backup job - no result Force stop Veeam backup job - after one minute the checkpoint creation stopped. VM was on, but terminal would not connect. Force shutdown VM and boot again - VM works as normal. Veeam log after stopped job: 02-04-2019 06:02:51 :: Failed to create VM recovery checkpoint (mode: Veeam application-aware processing) Details: Job failed ('Checkpoint operation for '<server name>' failed. (Virtual machine ID <VM GUID>) Checkpoint operation for '<server name>' was cancelled. (Virtual machine ID <VM GUID>) '<server name>' could not initiate a checkpoint operation: This operation returned because the timeout period expired. (0x800705B4). (Virtual machine ID <VM GUID>) and 02-04-2019 06:23:19 :: Retrying snapshot creation attempt (Failed to create production checkpoint.) I found this article too: Was that host in a cluster or just standalone? It happened again to us the other night. I've changed all VMs to standard checkpoint now as opposed to production per mlavie's suggestion. Nice to know that I don't have to totally reboot the host next time as long as I can turn off the veeam server. I didn't see anything particularly obvious for why its getting stuck in my logs so hopefully others in here are watching this thread and together we can work out a commonality. My environment: 2-node S2D Cluster on Windows Server DC 2019 - latest updates Intel X722-4 NICs for hosts/VMs, Mellanox 4lx crossover cables for RDMA/cluster Veeam 9.5u4 (I just looked and there is now a u4a available that I will update to now). Good Afternoon, Please confirm that the Security Policy "User Rights Assignment > Log on as a batch job" is not defined by Group Policy on any Hyper-V Hosts. You can confirm if it is by performing the following steps; - Start - Run - "Secpol.msc", click OK - Local Policies > User Rights Assignment > Log on as a batch job - Confirm that "Add User or Group..." is not greyed-out. If this has been defined, remove the group policy from the host - update group policy, then reboot the host. Confirm if the issue still exists. Thanks. - User Rights Assignment I checked both my cluster hosts and both are NOT greyed-out. Both contain "Administrators", "Backup Operators", and "Performance Log Users" . I"m not sure if the timing is repeatable or just random, but it appears to have happened roughly 3-weeks apart... I installed and setup the system, roughly three weeks later the issue happened, and then roughly three weeks after that it happened again. If it repeats again at the same pace, that will happen in roughly two weeks. I have a host in the same boat, not clustered, Using Quest Rapid Recovery, 2019 STD, VSS gets wacked and then VM's are stuck in 9% checkpoint. Host is an Intel board S2600WFT with X722 10GBATE-T build in adapters. PS does report they are using Microsoft Drivers per one of the comments in this thread. I will see about updating the drivers to Intel drivers after hours. We also have a customer who is affected by this. Their setup: - 3 Node Hyper-V Cluster running Server 2019 Datacenter, all with the latest updates/firmware/drivers - Replication server at remote site - HPE MSA2052 with the latest firmware - Altaro VM Backup 8.2.1.3 So far this has affected VMs on different hosts, so it is not specific to one host. There appears to be no pattern to this. The backups run as scheduled each night, and one VM will get stuck on 'Creating Checkpoint (9%)'. Following this, the VMs on that node can no longer be managed. You cannot: - Create checkpoints - Live migrate to another host - Quick migrate to another host - Manage replication (if replication is enabled for that VM) The following does continue to work: - You can log in to each VM on the affected host, either via RDP or Hyper-V Console - Replication-enabled VMs continue to replicate as normal, except for the affected VM. The affected VM stops replicating. Essentially, VMs on the affected host respond as though there is no issue with this host. Please allow me to re-iterate that this has affected VMs which are configured for replication, and VMs which are not configured for replication. A different VM has been affected, on a different host, each time this has happened. Previously, the following recovered the host: - Restart VMMS (this took a very long time, and crashed out another VM) - Reboot the affected node Following _Dickins suggestion, we removed our GPO from the hosts which define users in 'Log on as a service'. This restored the default account(s) which require this access, but unfortunately made no difference. This is a brand new Infrastructure Refresh, which was only installed in 2019. Everything is fully up to date in terms of drivers/firmware. Each VM has the following Integration Service enabled: - Operating system shutdown - Data Exchange - Heartbeat - Backup (volume shadow copy) And the following disabled: - Time synchronization - Guest services Our customer is using 'Production checkpoints', but I'm considering changing these to 'Standard checkpoints' as a test, as others have said this has resolved the issue. - Well the standard checkpoint didn't fix it for me. I"ve seen a bunch of "Cluster Shared Volume 'CSV#' has entered a paused state because of '(c0e7000b)'. All I/O will temporarily be queued until a path to the volume is reestablished. event5120 and event 5142. The odd thing is that the error is on the node that owned the CSV resource disk and also happened to be the node that has the effected VMs. Not sure if its a pattern yet, but I the last two and possibly three times this happened, it has been the same host with the same problems. Just to note, having the same issue with Acronis Advanced Backup on Windows Server 2019. Stuck at creating chjeckpoint 9% Cannot stop hyper V - stuck at stopping. The only solution is to reboot the server (which promtly hangs at stopping Hyper V), and then after a few seconds of that, remote process kill vmms.exe. The server will then reboot. My NICs are 40G Intel XL710-QDA2 - the drivers are Microsoft, but their date is later than the one on the intel web site, and both are version 6.80. Log on as Batch Job Local security policy as above is OK This only occurs on 2019 - the 2012R2 servers have no problems Hello, I'm facing the similar issue, while trying to create the checkpoint of multiple VMs. The VMs are become unresponsive..... Single VM check point is working fine without any issue Issue started since 10th April. Earlier it was fine. Applied the following patched in the server KB4091664 KB4480961 KB4489882 KB4487026 KB4489882 - Targets to the following components. Unfortunately, customer is denying to take a simultaneous backup of Hyper-V VMs using Windows Server Backup tool Is anybody have a suggestion on this? Please help Jaril Nambiar - נערך על-ידי Jaril Nambiar שבת 20 אפריל 2019 17:27 Also having the same issue with a newly installed 2 node Failover Cluster running Server 2019 /w Hyper-V role. CSV over iSCSi. Latest Intel drivers/firmware, Windows updates, etc are applied. I left the default checkpoint settings of production then standard if not supported. I'm certain our issues are related to the backups which had run successfully for the past 2 weeks but hung on week 3 which others have mentioned above. We use Veritas Backup Exec 20.3 Rev 1188 which is also backing up a 2012 R2 Hyper-V cluster without problems. After noticing our backup job was hung and a VM status showing Creating Checkpoint (9%), I stopped the backup exec agent on the Hyper-V host which took forever to stop. Now the Volume Shadow Copy (VSS) service is in a stopping state. Can't manage/migrate/shutdown VMs but they are thankfully still running. I will have to do a forceful shutdown tonight. Since this issue is happening with both Server 2016 and 2019 Hyper-V, and with various backup software. I'd say this is a Microsoft problem? The problem continues - seems I can get 1 backup done OK before the problem occurs, and then stuck eternally at Creating Checkpoint 9%. This is with the latest Acronis backup software. If I turn off Volume Shadow Copy Service and Volume Shadow Copy Service for virtual machines then the problem does not occur, but the backups would be unreliable (so not an option). I haven't seen this error come back for me since April 15th and that breaks the rough patter I was seeing of every 2-weeks I had experienced prior to that. Some additional notes: I originally created nested mirror accelerated parity drives AND enabled deduplication. I then added a number of additional physical disks to each node and optimized the storage pool each time. I have since moved everything to just simple two-way mirror drives WITHOUT deduplication. Eventually I'll try adding the nested mirror setup back as I"m guessing it was either the dedup chunks or the mirror accelerated parity overhead that might have been contributing to the problem. I also have the same issue on a brand new Hyper-V host running Windows Server 2019 Standard with 5 VM's. 2 x Windows Server 2019, 2 x Windows Server 2008 R2 and 1 x Windows 7 Pro. Backup software is Veritas Backup Exec v20.3 but i noticed the issue already occurs when creating a manual checkpoint myself from the Hyper-V console of one of the 2008 R2 servers and also stuck on 9%. No way to recover without rebooting the Hyper-V host. As I'm a Microsoft partner and having 10 Action Pack incidents I will contact Microsoft Support for this issue. The specific server is a Dell PowerEdge R740 server with quad port Intel i350 Gigabit NIC's... I am having the same issue. Server 2019 STD with Hyper-V. Veeam 9.5 4a running on separate box. Dell Poweredge R730XD with Intel 10GbE and 1GbE cards is the host. Only fix is to hard reboot the server. The problem started for me with a Veeam backup failing on a guest that suddenly acts like it lost its network connection. Veeam backup fails. I try to shutdown the guest and it hangs at "shutting down." The next scheduled Veeam backup then gets stuck at 9% on a fully functioning VM. The VM that is being backed up runs just fine. Trying to shutdown guests from the host, they get stuck at "stopping". The fix is to hard reboot the box. I try to shutdown the guests via RDP to hopefully get them to shutdown somewhat gracefully. Then I try to shutdown the host and wait a good while, about an hour, then I have to reset the server. From all these postings, it really feels like it is an Intel card/driver issue. Could this be resolved by changing out the Intel cards for Broadcom? On the Poweredge's the 10Gbe/1GbE part code is Y36FR, I believe. I am checking with my reseller now to find the availability of that part to ship me one out. I will post with my findings. I do not believe the reboot itself is slow as is the hung up vm that won't shut down. If you kill those tasks first, the reboot would have been normal. That being said, I was (fingers crossed its in the past) having the issue and my production NICs are Intel x722 series cards. I switched to standard checkpoints, but I definitely had one additional recurrence since that happened. I then moved the VMs to a new CSV (I"m using an S2D cluster) and haven't had a problem since. Not sure which was the fix, but so far its been running for over a month without issue now and it used to happen roughly every 2 weeks for me prior to that. I managed to get 21 hours from a reboot before another problem with backups. After the reboot last night, Veeam was able to get several backups out of the host with no issues. I have migrated other guests off the host and all I have left is Exchange 2013. Veeam gets stuck at 9% and won't go any further. Killed the VMMS task and it woudn't register as stopped. Logged into RDP and shutdown Exchange. Wait. I was able to get a successful reboot out of it, which is a good thing. Then I have to retry my Veeam backups for Exchange. I also have checkpoints disabled. Looking at this, seeing that Intel networking is a fairly common thread, I made a change. I stopped/disabled the Intel management service to see if it has any impact. I have the Broadcom 10GbE/1GbE daughtercard coming tomorrow. I will drop it in and we will see how it does.. out of curiosity, did your s2d volumes include either a nested resiliency volume and/or did you add drives to the system after creating any volumes? I haven't had the problem in two months now and it was happening on a pretty regular 2-week schedule for me in the past and the last thing I did was to move all my data from the original CSVs I created to new ones that were not nested resiliency. Oh yea... I also had dedup turned on for some of those volumes and that is now off. No, it is (or should be) a fairly simple setup in that regard. Just a couple of regular CSV's, no dedup. It's been 13 days since last issues, but I really dont trust it. We are going to add extra 10GB NIC's to the hosts soon to seperate data streams (production data vs migration and S2D sync data). This is something a MS tech told us might help. I have the same problem Here the environnement : 2 Server Windows Server 2019 Datacenter (1809) with Hyper-V Role Veeam Backup & Replication 9.5.4.2753 Veeam replica job HyperV01 to HyperV02 Servers are directly connected with Broadcom NetXtreme E-Series Advanced Dual-port 10GBASE-T for the replication. I first met the problem which led me to this technet post : "Creating snapshot at 9%" Backup or replication job failed Cannot launch new backup job or cancel snapshot creation Event ID 19060 Hyper-V VMMS I did a hard reboot of the server one night. And i disabled replication job every 15 minutes and changed production snapshot to standard snapshot. I didn't have any problem with backup during one week until today. Tonight, i will hard reboot the server, update veeam, update 10Gb NIC card... If someone found the solution ? Thanks for help Ok i spoke too fast, same problem this night during backup. - We have now encountered the problem on a third system. On the other two systems, however, everything has been fine for several weeks now. All the systems involved have one thing in common: They were converted from VMWare to HyperV. What does that look like for you? Is it possible to reduce the problem to that? We also have this problem from the start (4 months) of our new HP DL380-G10 HyperV 2019 cluster, MSA2052, 10Gb Intels and Altaro software. It happens every 5-10 day’s. We have all 2012R2 VM’s, and one 2008R2. I removed the 2008R2 from backup. Maybe it helps. All other VM’s were migrated from an older 2012R2 HyperV host (not converted from VMWARE) Tried the option to set production to standard. No luck. The API Altaro uses forces a production snapshot. The VSS integration is checked (default) Ok i think veeam try also to do production checkpoint instead standard checkpoint while i changed in the hyper-v settings of the virtual machine. Event id 18016 Hyper-V-VMMS Can not create production control points for VM01. (Virtual Machine ID: E9E041FE-8C34-494B-83AF-4FE43D58D063) And in Veeam log, i have id 150 VEEAM MP VM VM01 task has finished with 'Failed' state. Task details: Failed to create VM recovery checkpoint (mode: Hyper-V child partition snapshot) Details: Failed to call wmi method 'CreateSnapshot'. Wmi error: '32775' Failed to create VM recovery snapshot, VM ID '260fa868-64f9-418f-a90a-d833bc7ec409'. Retrying snapshot creation attempt (Failed to create production checkpoint.) I have more logs since i disable VSS integration yesterday (necessary for production checkpoint)if i'm not mistaken. I've started a call with Microsoft regarding this issue. At the moment the Microsoft Rep has asked we uninstalled Sophos AV from the Host machine; which we've done. We've got one VM set to Standard Checkpoints, one to Production Checkpoints. Backup software is Altaro. Now awaiting for the issue to replicate and get Microsoft back on the case. Will update when I know more.. Hello Problem happened again four days ago. I had to hard reboot Hyper-V again. In your case, is it a fresh install ? When problem is appears ? Did you upgrade veeam to lastest version or install directly last version ? I open a case with microsoft. I'm waiting return. - Our problem seems to have been fixed by either two things: fully patching the Hyper-V host and also making sure that the VM itself is updated. And other than that we have updated our Intel X710 10gb fiber cards to the latest Intel driver in stead of the Microsoft one. This has solved our backups hanging at 9% but now we have new problems that might be unrelated to this topic. When we connect virtual machines to the Hyper-V virtual switch with the 10gb networkcard, the host disconnects from the network every 30 seconds with a 10400 event in the logs. When we switch the VM's to a Hyper-V virtual switch with the regular RJ45 port everything is OK. Also stable network connectivity on the 10gb card. Really weird, one problem after the other Count me in on this... brand new Server 2019 Core Hyper-V Cluster (Microsoft OS up to date), HPE 460 G10 Blades using HPE QLogic NX2 10Gb drivers, and latest rev of Veeam (9.5up4b). Happens sporadically about once every week to week and a half on any one of my 6 nodes. Just got off the phone with HPE support and confirmed all hardware, firmware, and drivers were up to date and came back green on health check. Was going to open a ticket with Veeam until I ran across this thread. Might go ahead and do it anyway just to make sure, but I am not sure what else to do to be honest. Update... After a little more research, I did run across this forum posting about Windows Defender. Looks like this fixed it for some users. I have it implemented now and will see what happens. I THINK WE FOUND THE SMOKING GUN! . Could it be all related to 10Gb NIC’s + teaming caused by the Virtual Machine Queue (VMQ) ??? Our intel 10Gb 562SFP+ are in teaming and it seems that you have to configure each of the NIC’s in the team to not overlap on the same CPU cores. Enter the following command to check your VMQ setting. (“FIBER*”=NIC NAME) The outcome: BaseVmqProcessor was 0 on both. So they overlapped! Get-NetAdapterVmq | Sort Name | ? Name -Like "FIBER*" | FT -A With the following commands we have configured the VMQ for each adapter. The settings are related to the amount of cpu/cores you have in your server. We have 2 x 8 - No Hyperthreading. Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 0 -MaxProcessors 4 Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 4 -MaxProcessors 4 Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 8 -MaxProcessors 4 Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 12 -MaxProcessors 4 Charbel Nemnom vmq-rss (google) has a great article and more info about VMQ After this setting we have rebooted all VM’s and back-up is running smooth for couple of day’s now. We have converted a few VM's (server 2012 and server 2016) from VMWare to Hyper-V. Looks like, only systems that we migrated from VMWare to HyperV (with the 5nine converter) are affected. Or can someone disprove that? Does anyone have the problem with newly installed Windows servers? - Opened a Microsoft Support Call. It is a known issue. but no solution or hotfix available at this time. Microsoft told us to be sure to have no Microsoft drivers with network Cards and switch to Standard Checkpoints until a hotfix is available. i will test this next days. - Do you mind sharing your support case ID? I'd like to open a "me too" case so I can get on the notification list for the fix. And since we're in the process of commissioning the new infrastructure that exhibits this problem, I might be in a better position in terms of being able to get outages scheduled for installation of any hotfix and/or to down-n-up my Hyper-V hosts as sometimes seems to be necessary to resolve the situation after a VM gets into this 'stuck' state. I have the same problem. Brand new DELL Blade servers with W2019 and all the lastest drivers. I'v tried also with the MS basic drivers. I'm are using Netvault backup. I have no migrated VMs, just clean fully patched 2019 virtuals. I created a PS script that takes checkpoints every 10minutes. After a while some virtuals were stuck on 9%. So my conclusion is that there is something fundamentally wrong with Hyper-V 2019. Using production or standard checkpoints the results were same. I also manged to get the Hyper-V stuck when trying to storage migrate (unsuccesfully) VMs from another host. The VMs that were running in the destination (W2019) are in the same state as they were when stuck in the 9% checkpoint. Can't do any operation with them. Hyper-V services restart doesn't help. Only reboot to the host helps. Update from my case with Microsoft; Microsoft have asked for the removal of Altaro backup and Sophos Anti-Virus. They've then asked for the clean-boot of all VM's which are on the Host and the Host it's self; which I've done. I've now got a script creating checkpoints of VM's and removing them - waiting for it to happen again while the host and VM's are in clean-boot. Afraid so, I started the case with a link to this thread to begin with. I followed what they were after, which was the following; - Windows Update the Hypervisor. - Remove Sophos from the Hypervisors & VM's. - Clean boot all VM's and the Hypervisor. After following this, I managed to get the error to reoccur by simply check-pointing the VM's every 10 minutes, then merging the Checkpoints. Got Microsoft back on the phone and he stated the following; The Check-point is stuck at 9% because the internal VSS of the Guest VM is stuck in a "Timed Out" state. Please can you run Windows Update on the VM's along with the Hypervisor. I know that I'm going to Windows Update the VM's and the issue is to reoccur, I'll jump through their hoops the final time to prove once more; this is a problem with Windows Server 2019. FYI; this is a completely standalone host. We have 4 HP ProLiant DL380 Gen10 servers, no Clustering, no 10Gb/s NICs running the VM's on Local - full SSD storage. I have the following script which runs every 10 Minutes to cause the issue; even with the Hypervisor in a clean boot state. $Vms = Get-VM foreach ($Vm in $Vms) { $Snaps = Get-VMSnapshot -VM $Vm if ($Snaps.Length -eq 0) { Checkpoint-VM -VM $Vm } } foreach ($Vm in $Vms) { Remove-VMSnapshot -VM $Vm } $Today = Get-Date Add-Content C:\X\Snapshot-All-Vms.txt "Completed: $Today" @_Dickens - Just to chime in here that we are seeing exactly the same issue on one of our W2019 servers. Some observations below: We managed one, maybe two good backups after the host is rebooted (normally forcibly as the VMMS service hangs on reboot) before we hit the 9% Checkpoint issue again. HP DL380 Gen10. 10Gb NIC (but does the same when using the 1Gb NIC), Sophos in the VM (Not host), Veeam B&R 9.5 Update 4. Hyper-V replica running to an identical host. Several of the VSS writers in the VM go "Timed Out" after the failed Checkpoint. Both VM and host are up to date via Windows Update. Latest HP SSP in the host. Even the though the Checkpoint says 9%, there's only ever a 4Mb AVHDX created that seemingly has no relationship with the parent VHDX as it's never merged and can be safely deleted after shutting the VM down. The VMMS service crashes when creating the Checkpoint. When shutting down the VMs within themselves, the VM always sticks at Shutting Down within Hyper-V manager even though they have fully shutdown. I invariably end up trying to stop the VMMS and Data Sharing Services on the host in a desperate attempt to allow a clean(ish) shutdown in the host. I've performed CHKDSK /f on VM and host. Moved the Checkpoint location to another volume. Specified Standard Checkpoints if Production Checkpoints fail. Disabled RSS and offloading on all NICs. All in all, this has been a nightmare. I initially suspected Veeam was the issue, but having read several accounts of users with Altaro, SolarWinds and various other backup vendors, I've come around to the notion that there's something inherently wrong with the W2019 hyper visor. Tonight I will change the backup to run within the VM rather than at the container level as I can't afford to spend any more time on this issue. Hopefully, MS are monitoring this thread and putting some energy into re-creating and eventually offering a resolution. - We had the problem with three customers in the meantime: Customer 1: Windows Server 2019 HyperV Standalone Veeam 9.5.4.2753 Kaspersky Security 10.1.1 for Windows Servers Proliant DL380 Gen10 - HPE Ethernet 1Gb 4-port 331i (Driver: Hewlett-Packard 214.0.0.0) - HPE Ethernet 10Gb 2-port 530T (Driver: Cavium 7.13.150.0) Customer 2: Windows Server 2019 HyperV Failover Cluster with NetApp E-Series Veeam 9.5.4.2753 No virus protection ProLiant DL360 Gen10 - HPE Ethernet 1Gb 4-port 331i (Driver: Hewlett-Packard 214.0.0.0) - HPE Ethernet 10Gb 2-port 530T (Driver: Cavium 7.13.145.0) Customer 3: Windows Server 2016 HyperV Failover Cluster with NetApp E-Series Veeam 9.5.4.2753 No virus protection ProLiant DL360 Gen9 - HPE Ethernet 1Gb 4-port 331i (Driver: Hewlett-Packard 214.0.0.0) - HPE Ethernet 10Gb 2-port 561T (Driver: Intel 4.1.76.0) All hosts and VMs have the latest Windows updates. - נערך על-ידי Dennis_K5121 יום שלישי 06 אוגוסט 2019 06:45 Worked for me - in Acronis. Was getting same issue with Server 2019 with 40G NICs. Disabling VMQ both on server and VMs made the problem go away. Not sure how much of hit this causes to the network throughput - tried testing, but difficult to get an accurate assessment - I have been just dealing with it for the past couple of weeks, but I too disabled VMQ on servers and VMs just yesterday and was finally able to at least get a good backup of everything again. Hoping this "resolves" the issue until Microsoft can address whatever this ultimately turns out to be. So we had a case open with Microsoft for 3 months now. We have 3 clusters with now 2 having the issue. Initially it was only 1. The 2nd one started having the issue about 2-3 weeks ago. First 2 clusters didn't have the issue, these were configured back in March and April with Server 2019. The third cluster that had the issue since the beginning were installed on May-June wiht Server 2019. I have a feeling one of the newer updates is causing the issue. The 1st cluster not having the problem has not been patched since. To this day nothing was resolved and they have no idea what it might be. Now they are closing the case on us because the issue went from one Host in our Cluster to another host, and our scope was the first Hyper-V host having the issue. Unbelievable. The issue is still there though just happening on another host in the Cluster. The clusters experiencing the issues have the latest generation Dell Servers in them, PE 640s, while the one not having the issue only has older generation PE 520, PE 630, etc. The way we realize the issue is that we have a PRTG Sensor checking our host for responsiveness. At some random point in the day or night, PRTG will report that the sensor is not responding to general Hyper-V Host checks (WMI). After this, no checkpoints, backups, migrations, setting changes can happen because everything is stuck. Can't restart VMMS service or kill it. Here is what we have tested with no solution yet: Remove all 3rd party applications - BitDefender (AV), Backup Software (Backup Exec 20.4), SupportAssist, WinDirStat, etc. - Didn't fix it. Make sure all VMSwitches and Network adapters were identical in the whole cluster, with identical driver versions (Tried Intel, and Microsoft drivers on all hosts) - Didn't fix it. Check each worker process for the VM - When a VM got stuck during a checkpoint or migration. - Didn't fix it. get-vm | ft name, vmid compare vmid to vmworkerprocess.exe seen in details -> Task Manager kill process Hyper-V showed VM running as Running-Critical Restart VMMS service (didn't work) net stop vmms (didn't work) Restart Server -> VMs went unmonitored After restart everything works fine as expected Evict Server experiencing issues in Cluster -> This just causes the issue to go to another host, but the issue is still there. - Didn't fix it. Create two VMS (one from template, one new one) on the evicted host -> No issues here, never gets stuck, but other hosts still experience the issue. Install latest drivers, updates, BIOS, firmware for all hardware in all the hosts of the cluster. - didn't fix it. We migrated our hosts to a new Datacenter, running up to date switches (old Datacenter - HP Switches, new Datacenter - Dell Switches), and the issue still continues. New Cat6 wiring was put in place for all the hosts - Issue still continues. Disable "Allow management operating system to share this network adapter" on all VMSwitches - issue still continues Disable VMQ and IPSec offloading on all Hyper-V VMs and adapters - issue still continues We're currently patched all the way to August 2019 Patches - issue still continues. We asked Microsoft to assign us a higher Tier technician to do a deep dive in to kernel dumps and process dumps, but they would not do it until we exhausted all the basic troubleshooting steps. Now they are not willing to work further because the issue moved from 1 host to another after we have moved from one datacenter to another. So it seems like based on how the Cluster comes up and who's the owner of the disks and network, it might determine which hosts has the issue. Also, our validation testing passes for all the hosts, besides minor warnings due to CPU differences. Any ideas would be appreciated. I had this on a newly built Windows 2019 Dell R430 with Intel 10G X520 Adapters. It was causing an issue with HyperV replication and Production Checkpoints. I had two previous R430s in the cluster and I noticed that the two existing servers were running the Microsoft NIC driver and that the new one was running the Intel NIC driver. So Iremoved the HyperV virtual NIC, broke the team and reverted the new servers NIC drivers to the MS version 3.12.11.1 driver for the X520. Re-established the team, recreated the HyperV virtual switch and the problem has not reoccurred since. Queeg505, thanks for the response. I wish it was that simple for us. I have done the same about a couple of months ago to see if going from the newly installed Intel drivers to the old Microsoft drivers would help. It did not really help for us. For the different NIC types that we have, these are the current driver versions we are running. Intel Gigabit 4P i350-t Adapter - 12.15.22.6 (Previously 12.15.184.0 Intel Driver) Intel Ethernet 10G 2P x540-t Adapter - 3.12.11.1 (Previously 4.1.4.0 Intel Driver) Broadcom NetXtreme Gigabit Ethernet - 214.0.0.0 (Previously 17.2.1.0) Intel Ethernet Converged Network Adapter X710-t - 1.8.103.2 I have verified that all servers are running identical driver versions across these adapters. Just to add our experience until Microsoft actually acknowledge this problem and look at a fix! We have two Server 2019 Hyper-V clusters. Cluster 1 – Brand new 6 node S2D cluster (all flash). Intel 10 GB NICs,100GB Mellanox NICs for storage, S2D storage and ReFS storage. Veeam backups failed nearly every day with checkpoints hanging at 9%. VMMS service will not stop even with process explorer. Only a hard reset would fix the node. We updated the Intel network drivers and disabled receive side coalescing (the driver introduced a new problem with rsc enabled). Checkpoints changed to standard and disabled backup (vss) integration service on each vm. All nodes were rebooted and backups have now succeeded for two weeks. However we are now too worried to live migrate or take checkpoints in case anything breaks again! Cluster 2 – 6 Node Old 2012 R2 cluster with FC SAN storage (which has run for 5 years prior to 2019 upgrade) reinstalled as Server 2019 (fresh install). Emulex 10GB NICs,CSV NTFS Volumes on a Hitachi SAN. Veeam is replicating some VMs to this cluster. Currently hangs at 9% checkpoint nearly every day requiring node hard reboot. Have tried drivers and disabling RSC – no change. Have just disabled RSS and VMQ and awaiting results. I have noticed that when the VMMS service is locked up you cannot use PowerShell to make any changes to the network adapters (hangs). Device Manager also hangs when making any changes to the network adapters. Based on the above it leads me to believe it is related to the network adapters as they become unmanageable at the same time. Just wish we knew the cause or MS would take some interest in fixing it! - נערך על-ידי CEvans2008 יום שני 09 ספטמבר 2019 22:39 Same here. A single Hiper-V server with Windows Server 2019 standard, with only two VM running on it. Backup with ARCServe UDP, without any antivirus nor any other software. Intel10Gb NICs X722 with Microsoft drivers 1.8.103.2. After a few backups, system stuck both VM at 9%...VM machines does not respond, I'm unable to clear reboot the host server, and unique solution is to made a hard reset. I will update drivers and put result here. We had this exact same issue on non-clustered Dell servers with Intel X520 10G NICs, but also had success with the following: - Updated NIC drivers, using Intel's latest X520 drivers (4.1.143.0) - Set Jumbo Packets enabled on the NIC at 9014 bytes (this set previously when we had the Microsoft NIC drivers, so this was not a new change) - Disabled VMQ on all VMs that had it enabled After that, our weekly VM export backups have worked for 2 weeks without issue. Hi. I just want to share my experience that updating Intel NIC driver manually did the trick. Although the newest driver that existed for this NIC was only 3 months newer I really doubt it would work but it did for now (really crossing my fingers that problem went away) SuperMicro motherboard, NIC information: NIC driver model: Intel 82579LM BEFORE THE UPDATE Driver date: 05.04.2016 Driver Version: 12.15.22.6 AFTER Driver date: 25.07.2016 Driver Version: 12.15.31.4 with best regards B BCR - Found a solution on another thread. The issue is related to VMQ, but in order for the changes to work, you most likely have to disable it to all the VMs in the VM Advanced Network Settings in your Cluster, restart the VMs, and also restart the hosts. This is probably why the initial time we disabled VMQ, the fix didn't work. After the host froze and we restarted it didn't happen again. Another solution another person posted was the following: From Microsoft Support we received a powershell command for hyper-v 2019 and the issue is gone ;) Set-VMNetworkAdapter -ManagementOS -VrssQueueSchedulingMode StaticVrss It is a bug from Windows Server 2019 and Hyper-V Hello. Disabling VMQ's seems to have worked for me now. Please can you elaborate on this unsigned file? Is this a driver which you've been asked to install which has been unsigned somehow? Please could you share your Microsoft Case Reference number so I could give this to my Microsoft Support Advisor? Thanks. Oliver. Thank you very much. On our Windows Server 2019 we have set the parameters and are looking forward to see if the problem is solved. From Microsoft Support we received a powershell command for hyper-v 2019 and the issue is gone ;) Set-VMNetworkAdapter -ManagementOS -VrssQueueSchedulingMode StaticVrss It is a bug from Windows Server 2019 and Hyper-V Is there a similar command for Windows Server 2016? The parameter "VrssQueueSchedulingMode" is not available for Windows Server 2016. Hello, the following solution on a Lenovo SR650 Host with Windows Server 2019 OS (only Hyper-V Role) and 4x10GBit Intel X722 LOM has worked: Changes the driver of the Intel X722 4 x 10Gbit LOM card from Microsoft drivers to the new’s Intel drivers (1.10.130.0, 9.5.2019). After this change the network performance of the VMs were absolutely BAD! Then we switched of the driver value of “Recv.-Seqment-Coalescing” (IPv4 and IPv6) from active to inactive. Network-Speed was normal on Host (X722 LOM NIC 10Gbit Port 1) and VMs (X722 LOM NIC 10Gbit Port 2). After testing creating more the 50 checkpoints each day (using the Microsoft script) for two weeks, we assume now, that the problem is been solved. One more interesting note: The checkpoint stuck at 9% was just a symtom for the problem at hand. Even without checkpoints creation there can always be a problem after a few days that the host no longer had control over the VMs. Neither Hyper-V MMC nor Hyper-V Powershell could be used to apply commands such as restart, save, etc. to the VMs. VMs continued to run, Hyper-V replication also worked. However, a VM could not restart if it was initiated from within the VM. The VM simply did not start anymore and in the Hyper-V Manager the status remains at "Shutdown". In this case, the host always had to be switched off using the power switch! Probably this information should help other users to solve such a “miracle” problem. Thank you to all other in this thread, that helped us solving this problem! St. Reppien Hello Someone have good news ? I had the problem again and I tried the followings command in the last 15 days on the hyper-v but i had problem with snapshot 2 days ago. I had no problem for 2 months before.. Set-VMNetworkAdapter -ManagementOS -VrssQueueSchedulingMode StaticVrss Maybe, people think they do not have any problem anymore because it's been 15 days since it's working properly but you have to plan for the long term. I have this problem since July 2019 and no solution was viable. As i said before, sometimes i had no problem during months. I downgrad veeam version from 9.5u4 to 9.5u3 but problem appears again.. Thank you Alexandre Hi Alexandre, Yes, we haven't had the issue since we disabled the advanced features. And this was back around beginning of September. Keep in mind that you may have disabled VMQ and IPSec but any new VMs that may have been created could still have it enabled. Check your VMs and make sure all of them have it disabled. I'm seeing something similar to this too. I have a customer with two hyper-V Failover Clusters and last week alone one of the cluster nodes have stopped responding twice. The behavior is consistent with what you guys are seeing: VMMS looses all control over VMs, can't start, stop os livemigrate VMS and eventually the cluster service goes down. The only way to get things going is to force a reboot of the compromised node. In my case, I have some VMs replicating between the nodes with application consistent checkpoints enabled, so I suspect checkpoints do act like some kind of trigger. I'm also seeing some erros on the logs about moving RSS queues from VMQ during my backup window, so I suspect that VMQ has a role to play in all this, specifically d.VMMQ. That's why I'm guessing that setting VrssQueueSchedulingMode StaticVrss on my managementOS Virtual interfaces should help mitigate the issue. d.VMMQ does some kind of advanced offloading using the NICs driver, and my interfaces being Broadcom we all know what to expect. I'm also setting VrssQueueSchedulingMode StaticVrss at the VMs by running: Get-VM | Get-VMNetworkAdapter | Where-Object VrssQueueSchedulingMode -like Dynamic | Set-VMNetworkAdapter -VrssQueueSchedulingMode StaticVrss Let's see how it goes. Regards, Giovani
https://social.technet.microsoft.com/Forums/windows/he-IL/0d99f310-77cf-43b8-b20b-1f5b1388a787/hyperv-2016-vms-stuck-creating-checkpoint-9-while-starting-backups
CC-MAIN-2020-05
en
refinedweb
I heard Saul Griffith say recently that if you covered all the car parks in the USA with solar panels you would supply way more than the national energy requirements (I can’t find the actual reference, but just go and watch his talks and read everything he’s written). I claimed this might translate to the UK. But does it? The solar part is easy enough. If we electrify everything and want to remove carbon based generation, we need to build 300TWh of renewables. For the sake of argument let’s do it all in solar (yes, i know, but ignore clouds and nights for now. It’s a spherical cow). Now according to CAT to generate 800kWh (per year) we’d need ~1kW of panels, which might be 8m², or 125kWh/m². so 300TWh / 125kWh = 24*10⁸ m², or 2400 km² Right. Does the UK have 2400 km² of parking? Turns out that openstreetmap can give a (probably wrong) answer. Below I present a cleaned up route I hacked out to get there. The following politely elides the many, many detours and dead ends along the way. . e/bin/activate pip install geopandas descartes ipykernel matplotlib python -m ipykernel install --user --name=e # get the great-britain-latest.osm.bpf file from brew install osmosis osmosis --read-pbf great-britain-latest.osm.pbf --tf accept-ways amenity=parking --tf reject-relations --used-node --way-key-value keyValueList="amenity.parking" --write-xml gb-parking.osm npm install -g osmtogeojson osmtogeojson gb-parking.osm > gb-parking.json jupyter notebook And then in the notebook: import matplotlib p = geopandas.read_file("gb-parking.json") p = p[p['id'].str[:4] == 'way/'] # remove stuff we don't need And wait another while for my poor laptop to warm the room. No one ever accused python of being fast. Now, check we have something. This should draw all the parking areas in the UK. So something, but it’s a bit hard to see so let’s try plotting where they are with big blobs p['centroids'] = p.centroid p = p.set_geometry('centroids') p.plot(figsize=(8,16)) p = p.set_geometry('borders') # reset geometry Now the geometry we have doesn’t give us the correct units (we want m²) so change to something else, then add all the areas up and convert to km² sum(cart.area) / 10**6 gives Tada! 🎉. So apparently no! At least from the crowdsourced OSM data. We’d have to use more than just car parks 😢.
https://tech.labs.oliverwyman.com/blog/2019/10/28/uk-parking-areas/
CC-MAIN-2020-05
en
refinedweb
Lesson 4 - E-shop in ASP.NET Core MVC - Relations and Repository In the previous lesson, E-shop in ASP.NET Core MVC - Products and categories, we learned how to add additional entities to the data model, including DataAnnotation attributes. In today's ASP.NET Core tutorial, we'll add relations to these entities and update the database. Next, we'll have a look at one of the possible implementations of the Repository design pattern. Relations There's a M:N relation between the Product and the Category entities, because one category may contain multiple products, and one product may belong to multiple categories. We make this relationship happen by adding an association table with foreign key to both original tables. Unfortunately, EF Core (version 2.1) can't generate association tables for a Many-To-Many relationship yet, so we need to create it by ourselves. We'll add a new CategoryProduct to the Models/ folder of the data project (we've chosen the name accordingly to the entities this table will bind together): public class CategoryProduct { public int CategoryId { get; set; } public virtual Category Category { get; set; } public int ProductId { get; set; } public virtual Product Product { get; set; } } Let's add a CategoryProduct collection to the Product and Category classes: public virtual ICollection<CategoryProduct> CategoryProducts { get; set; } In fact, we've now created two 1:N relationships in our classes. The first between the product and the items in the association table, the second one similarly for categories. Finally, we need to explicitly define ..: Setting up the 1:N and M:N relations in Entity Framework Code First, inserting initialization and test data, the Repository design pattern implementation.:
https://www.ict.social/csharp/asp-net/core/e-shop/e-shop-in-aspnet-core-mvc-relations-and-repository
CC-MAIN-2020-05
en
refinedweb
Grails Transactions using @Transactional annotations and Propagation.REQUIRES_NEW Hi All, Here is how you can implement a new transaction in an already executing transaction in Grails which uses nothing but Spring framework’s Transaction mechanism as an underlying implementation. Spring provides @Transactional annotations to provide declarative transactions. We can use the same in our Grails project to achieve the transactional behavior. Here is the scenario: You have two domain classes named SecuredAccount and AccountCreationAttempt. You try to transactionally save the SecuredAccount object which in turn creates a AccountCreationAttempt object which writes to the database stating: “There is an attempt to create a new SecuredAccount at this time: <current date and time>”. Point to note here is that even if the creation of the new SecuredAccount object fails, the record must still be written to the database so that the Administrator can validate whether the attempt at the specific time was by a legitimate user or an attacker. Here is the code: import org.springframework.transaction.annotation.* Class MyService { static transactional = false def anotherService @Transactional def createSecuredAccount() { def securedAccount = new SecuredAccount(userId:"John") securedAccount.save(flush:true) anotherService.createAccountCreationAttempt() throw new RuntimeException("Error thrown in createSecuredAccount()") } } import org.springframework.transaction.annotation.* class AnotherService { static transactional = false @Transactional(propagation = Propagation.REQUIRES_NEW) def createAccountCreationAttempt() { def accountCreationAttempt = new AccountCreationAttempt(logRemarks: "There is an attempt to create a new SecuredAccount at this time: {new Date()}") accountCreationAttempt.save(flush:true) } } Now in this scenario, AccountCreationAttempt object always gets persisted whether or not the transaction for creating SecuredAccount object fails. Here are few gotchas regarding the above transactions: 1.) First of all, for Propagation.REQUIRES_NEW to work as intended, it has to be inside a new object i.e. a new service in our example. If we had put the createAccountCreationAttempt() method in the MyService there would be no new transaction spawned and even no log entry would be made. This is Spring’s proxy object implementation of transactions and you can read more about it here:. Please pay special attention to the “NOTE” sub-section.This is what it states: “In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional.” 2.) Secondly, all the @Transactional methods should have a public visibility i.e. createSecuredAccount() and createAccountCreationAttempt() methods should be public methods, and not private or protected. This again is Spring’s @Transactional annotations implementation and you can read about it at the same link as provided above. Note the right side-bar titled “Method visibility and @Transactional“. Well, once you keep note of these gotchas I guess you are all set to make good use of @Transactional annotations and its full power. Cheers !!! – Abhishek Tejpaul abhishek@intelligrape.com [IntelliGrape Software Pvt. Ltd.] Unbleilvabee how well-written and informative this was. That first gotcha was so so helpful. Thanks! I am not genuinely sure if greatest practices have emerged around things like that, but I am sure that your wonderful job is clearly identified. I was thinking if you offer any membership to your RSS feeds as I would be very interested but i can’t find any link to register here.Where is it? Best wishes, Doretta. This blog is certainly rather handy because I am currently building an internet flower blog – however I’m in the beginning stages therefore it is really quite small, nothing like this site. It would be great if I could link to a few of the threads because they’re very intriguing. Many thanks. Claire Garcia It was very useful to me, thanks a lot!
https://www.tothenew.com/blog/grails-transactions-using-transactional-annotations-and-propagation-requires_new/?replytocom=39339
CC-MAIN-2020-05
en
refinedweb
Database Change Notifications in ASP.NET using WebSocket Display database change on website in real-time Database table changes are usually not displayed right away in an application, escecially if it is a web application. With HTML5 and Web API that is about to change. New MVC introduces async controllers which can be used to do live data update to HTTP client. This especially involves using of WebSockets and SignalR. In this example I'm about to show how you can display in real-time data changes committed to database tables. I found a few articles on Internet which are describing how to do this, so I combined code from both and made a test app which works and you can download it from this article. First thing we need to to is to ensure that out database has "Broker Enabled" to be able to send notifications to a client. That can be done from options dialog on a database itself or by running the following command in query window in SQL Server Management Studio ALTER DATABASE [Temp] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE The second thing to be done is to initialize Notification in Global.asax.cs file with two lines for Application_Start and Application_End public class Global : HttpApplication { void Application_Start(object sender, EventArgs e) { // Code that runs on application startup AreaRegistration.RegisterAllAreas(); GlobalConfiguration.Configure(WebApiConfig.Register); RouteConfig.RegisterRoutes(RouteTable.Routes); SqlDependency.Start(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString); } void Application_End() { SqlDependency.Stop(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString); } } Now we enabled web application to handle notifications from SQL Server, all we have to do now is to specify what are we monitoring and handle that in our controller. Since controller needs to send messages to a client, we will have to use WebSocket or SignalR. In our case (since I did not work with SignalR yet) we are going to use WebSocket. This will require to fetch Microsoft.WebSockets.dll from NuGet. So here goes the controller code public class DatabaseNotification { public ChatWebSocketHandler() { SetupNotifier(); } protected void SetupNotifier() { using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString)) { connection.Open(); using (SqlCommand command = new SqlCommand(@"SELECT [FullName],[experiance_nYears] FROM [dbo].[t_Doctor]", connection)) { command.Notification = null; SqlDependency dependency = new SqlDependency(command); dependency.OnChange += new OnChangeEventHandler(dependency_OnChange); if (connection.State == ConnectionState.Closed) { connection.Open(); } var reader = command.ExecuteReader(); reader.Close(); } } } private static WebSocketCollection _chatClients = new WebSocketCollection(); public override void OnOpen() { _chatClients.Add(this); } public override void OnMessage(string msg) { } private void dependency_OnChange(object sender, SqlNotificationEventArgs e) { _chatClients.Broadcast(string.Format("Data changed on {0}", DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"))); SetupNotifier(); } } } This controller is pretty much all you need to grab notifications of table updates. Method dependency_OnChange will handle update and broadcast message to all the connected clients. All we are left to to is to connect client to this controller. Since it is HTML5 feature, we will initialize WebSocket client with simple JavaScript on the page. $(document).ready(function () { initSocket(); }); function initSocket(recordType) { var uri = "ws://localhost:8080/api/DatabaseNotification"; websocket = new WebSocket(uri); websocket.onopen = function () { $('#messages').prepend('<div>Connected to server.</div>'); websocket.send(recordType); }; websocket.onerror = function (event) { $('#messages').prepend('<div>ERROR</div>'); }; websocket.onmessage = function (event) { $('#messages').prepend('<div>' + event.data + '</div>'); }; } This solution will work in all modern browsers which support HTML5 but it will fail for other browsers. To handle most of the browser vendors and version you might consider involving SignalR. The following is a list of browser supporting using of WebSocket directly on a page without SignalR. The whole support list of web browsers supporting WebSockets can be found at References - - - - Disclaimer Purpose of the code contained in snippets or available for download in this article is solely for learning and demo purposes. Author will not be held responsible for any failure or damages caused due to any other usage.
https://dejanstojanovic.net/aspnet/2014/june/database-change-notifications-in-aspnet-using-websocket/
CC-MAIN-2020-05
en
refinedweb
#include <sys/types.h> #include <sys/malloc.h> #include <sys/param.h> #include <sys/malloc.h> #include <sys/kernel.h> The mallocarray() function allocates uninitialized memory in kernel address space for an array of nmemb entries: Exactly one of either M_WAITOK or M_NOWAIT must be specified. The type argument is used to perform statistics on memory usage, and for basic sanity checks. It can be used to identify multiple allocations. The statistics can be examined by 'vm <sys/param.h> (instead of <sys/types.h>) and <sys/kernel.h>. malloc(), realloc() and reallocf() may sleep when called with M_WAITOK. free() never sleeps. However, malloc(), realloc(), reallocf() and free() may not be called in a critical section or while holding a spin lock. Any calls to malloc() (even with M_NOWAIT) or free() when holding a vnode(9) interlock, will cause a LOR (Lock Order Reversal) due to the intertwining of VM Objects and Vnodes. Please direct any comments about this manual page service to Ben Bullock. Privacy policy.
https://nxmnpg.lemoda.net/9/reallocf
CC-MAIN-2020-05
en
refinedweb
Sometimes we need to import and save excel data into SQL server database, when we are changing database or migrating a any other platform code into C#, we may need to migrate old database data into new database also, we can do it by importing data from excel into database tables using ASP.NET C#, so in this post, I will explain how easily you can import you excel file data into sql server table using ASP.NET MVC, C# and SqlBulkCopy. If you are looking to read excel file using C# or load it into datatable/ GridView in ASP.NET you might want to look into these other articles Read Excel file and import data into GridView using Datatable in ASP.NET Read excel file in C# (Console application example using OLEDB) Below is the images of the demo Excel file and data which we need to import in our sql server datatable table and our empty SQL server table with design Excel file to be imported Step 1: Let begin by creating a new project of ASP.NET MVC in Visual Studio, Select File->New-> Project -> Select "ASP.NET Web-application" -> Provide a name ("ImportExcelIntoDatabase") and Click "OK" Then Select "MVC" to generate basic ASP.NET MVC template with "No Authentication", Click "OK" Step 2: Now we will create basic Razor syntax HTML to upload file in MVC, so go to your Index.cshtml (Solution->Views->Home->Index.cshtml) (If there isn't one, create one view) and use the below code <br /> @using (Html.BeginForm("Index", "Home", FormMethod.Post, new {<br/> <input type="submit" value="Import to database" class="btn btn-primary" /> } <br/> <div> @ViewBag.Success </div> In the above Razor Code, we are creating form to submit file and post it to "HomeController"->"Index" ActionMethod (with HttpPost Verb), here we need to specify addtional HTML attribute, that is, enctype = "multipart/form-data" which is necessary for uploading files. @ViewBag.Success is used to notify user that file is uploaded and data is imported to sql server database table using C# and sqlBulkCopy Step 3: Now, we need to create C# code in controller to handle file uploading and then extract data from Excel file using C# public class HomeController : Controller { public ActionResult Index() { return View(); } [HttpPost] public ActionResult Index(HttpPostedFileBase file) { string filePath = string.Empty; if (file != null) { string path = Server.MapPath("~/Uploads/"); if (!Directory.Exists(path)) { Directory.CreateDirectory(path); } filePath = path + Path.GetFileName(file.FileName); string extension = Path.GetExtension(file.FileName); file.SaveAs(filePath);()) {. connExcel.Open(); cmdExcel.CommandText = "SELECT * From [" + sheetName + "]"; odaExcel.SelectCommand = cmdExcel; odaExcel.Fill(dt); connExcel.Close(); } } } conString = @"Server=DESKTOP-1PM1CJ9\SQLEXPRESS2;Database=Students;Trusted_Connection(); } } } //if the code reach here means everthing goes fine and excel data is imported into database ViewBag.Success = "File Imported and excel data saved into database"; return View(); } } Here is the quick explanation of the Above Code File we are uploading the Excel file, the uploaded excel file is saved to a folder named "Uploads" and then based on its extension whether XLS (97-2003) or XLSX (2007 and above), appropriate connection string is read for file, placeholder is replaced by the path of the Excel file. One.Data; using System.Data.OleDb; using System.Data.SqlClient; using System.IO; using System.Web; using System.Web.Mvc; Once you have all the code, build it and run it in browser, here is the demo gif image of the sample. You might also like to read Export data to excel (.xlsx & .xls) file using ASP.NET MVC C# You can also download the sample project : ImportExcelIntoDatabase Further explanation for saving excel column into database without using SqlBulkCopy and filtering it Now, suppose you don't want to use SqlBulkCopy, you can also write your own Custom code to match columns of your datatable table and filter it with datatype also. You would have to loop rows saved in datatable, so instead of using SqlBulkCopy code, which is as(); } } we will be saving each row, by looping it usinf foreach, based on conditions, like below foreach (DataRow row in dt.Rows) { bool userStatus; if (row["User Status"].ToString() == "active") { userStatus = true; } else { userStatus = false; } var usernew = new ApplicationUser { UserName = row["Email"].ToString(), Email = row["Email"].ToString(), Name = row["Name"].ToString(), UserStatus = userStatus, PhoneNumber= row["Phone Number"].ToString() }; var result = UserManager.Create(usernew, "Test@123"); if (result.Succeeded) { if (row["User Roles"].ToString() == "Super Admin") { UserManager.AddToRole(usernew.Id, "Super Admin"); } else if (row["User Roles"].ToString() == "Operations") { UserManager.AddToRole(usernew.Id, "Operations"); } else if (row["User Roles"].ToString() == "Sales") { UserManager.AddToRole(usernew.Id, "Sales"); } else if (row["User Roles"].ToString() == "Sales Agent") { UserManager.AddToRole(usernew.Id, "Sales Agent"); } else if (row["User Roles"].ToString() == "Warehouse") { UserManager.AddToRole(usernew.Id, "Warehouse"); } } } In the Above code, as you can see we are going row by row using foreach loop and creating users (active or not), plus creating user roles based one Excel column value "User Roles". Note: In the Above C# code, ApplicationUser is ASP.NET MVC Identity method to create a new user.
https://qawithexperts.com/article/asp-net/import-excel-data-in-sql-server-database-table-using-c-in-as/213
CC-MAIN-2020-05
en
refinedweb
panda3d.core.PStatThread¶ from panda3d.core import PStatThread - class PStatThread¶ A lightweight class that represents a single thread of execution to PStats. It corresponds one-to-one with Panda’s Thread instance. Inheritance diagram __init__(client: PStatClient, index: int) → None¶ Normally, this constructor is called only from PStatClient. Use one of the constructors below to create your own Thread. __init__(copy: PStatThread) → None __init__(thread: Thread, client: PStatClient) → None Creates a new named thread. This will be used to unify tasks that share a common thread, and differentiate tasks that occur in different threads. newFrame() → None¶ This must be called at the start of every “frame”, whatever a frame may be deemed to be, to accumulate all the stats that have collected so far for the thread and ship them off to the server. Calling PStatClient.threadTick()will automatically call this for any threads with the indicated sync name. addFrame(frame_data: PStatFrameData) → None¶ This is a slightly lower-level version of new_frame that also specifies the data to send for this frame. getThread() → Thread¶ Returns the Panda Thread object associated with this particular PStatThread. - Return type - - property thread¶ Returns the Panda Thread object associated with this particular PStatThread. - Return type -
https://docs.panda3d.org/1.10/python/reference/panda3d.core.PStatThread
CC-MAIN-2020-05
en
refinedweb
This topic provides information on methods that allow you to change application and/or individual UI element appearance. Global settings Individual control appearance settings The Look-and-Feel settings are global style settings such as the Default Application Font and skin. You can use the following approaches to specify these settings. Use the Project Settings dialog (the recommended approach). To invoke this dialog, right-click a project in Visual Studio's Solution Explorer and select "DevExpress Project Settings". //apply raster skin DevExpress.LookAndFeel.UserLookAndFeel.Default.SetSkinStyle(SkinStyle.Office2016Colorful); //apply vector skin and choose a palette DevExpress.LookAndFeel.UserLookAndFeel.Default.SetSkinStyle(SkinStyle.Bezier, SkinSvgPalette.Bezier.CherryInk); //disable skinning, apply the "Flat" style //note that several DevExpress controls (for example, the Ribbon) cannot be displayed without a skin DevExpress.LookAndFeel.UserLookAndFeel.Default.SetStyle(LookAndFeelStyle.Flat, false, true); 'apply a raster skin DevExpress.LookAndFeel.UserLookAndFeel.Default.SetSkinStyle(SkinStyle.Office2016Colorful) 'apply a vector skin and choose a palette DevExpress.LookAndFeel.UserLookAndFeel.Default.SetSkinStyle(SkinStyle.Bezier, SkinSvgPalette.Bezier.CherryInk) 'disable skinning, apply the "Flat" style 'note that some DevExpress controls (for example, the Ribbon) cannot be displayed without a skin DevExpress.LookAndFeel.UserLookAndFeel.Default.SetStyle(LookAndFeelStyle.Flat, False, True) Individual controls inherit global Look-and-Feel settings from their parents (forms, user controls, panels, etc.). You can override these inherited settings if you access these controls' LookAndFeel properties in code (hidden in the VS Property Grid). However, only override global Look-and-Feel settings to change a control's background (DevExpress controls v18.1 or older). Individual controls provide Appearance properties that store AppearanceObject or AppearanceObjectEx class objects. These objects specify background, foreground, and font settings that affect their parent controls only. For each supported visual state (normal, disabled, hovered, etc.), a control may provide individual Appearance settings. The StyleController component allows you to specify Appearance settings for multiple editors at once. You can change an individual controls' background and foreground colors to highlight these controls. For raster skins, custom background colors will blend in with element skin bitmaps. For vector skins, custom colors will be displayed “as is”. Versions 18.2 and newer Versions 18.1 and older simpleButton1.Appearance.BackColor = Color.Crimson; As the code samples in the table above demonstrate, beginning with version 18.2, UI elements no longer require that you disable their skins in order to see custom Appearance colors. This change may lead to unexpected results due to junk Appearance settings becoming functional once you upgrade to v18.2. To trace and eliminate such API, use the WindowsFormsSettings.ForcePaintApiDiagnostics method. You can also change the global WindowsFormsSettings.BackgroundSkinningMode setting to restore the behavior seen in versions 18.1 and earlier. Note that you can set a custom color for the Normal element state only; other element states (Pressed, Hovered, Disabled) will automatically receive slightly different hues of this same color. Additionally, starting with version 18.2, when you change an element background color, this element automatically adjusts its foreground to increase contrast and improve readability. Use the WindowsFormsSettings.AutoCorrectForeColor setting to disable this behavior. Not all DevExpress control backgrounds are modified through the Appearance.BackColor settings. For example, to highlight a GroupControl you need to change its Appearance.BorderColor property instead. Refer to control-specific documentation to check whether this control can be highlighted with a custom background color and to find out which Appearance setting you need to modify in order to do so. Starting with version 18.2, you can also use Skin Colors as UI element background and/or foreground colors. At design time, switch to the "DX Skins" tab to choose a Skin Color. The same Skin Color may vary in different skins and vector skins palettes (see the figure below). To assign Skin Colors in code, retrieve them from the DevExpress.LookAndFeel.DXSkinColors.FillColors (for background only) or DevExpress.LookAndFeel.DXSkinColors.ForeColors (for foreground only) classes. simpleButton1.Appearance.BackColor = DXSkinColors.FillColors.Danger; //or simpleButton1.Appearance.ForeColor = DXSkinColors.ForeColors.Warning; simpleButton1.Appearance.BackColor = DXSkinColors.FillColors.Danger 'or simpleButton1.Appearance.ForeColor = DXSkinColors.ForeColors.Warning The Appearance Editor allows you to customize, save, and apply appearance settings to objects at design time. Click the ellipsis button next to a control's Appearance property to invoke the Appearance Editor. Customize the appearance settings and click the "Save As..." button to save these settings as a template. You can access saved templates from the drop-down selector when you invoke the Appearance Editor for a control/visual element. To apply these settings to the current AppearanceObject, select a template and click "Apply". Appearance styles are saved as .xml files to the “C:\Users\Public\Documents\DevExpress\AppearanceTemplates” folder. You can reuse saved appearance settings in other Visual Studio projects. Data Grid, Tree List and Vertical Grid appearance properties return the AppearanceObjectEx class objects. This class provides the AppearanceOptionsEx.HighPriority property that allows you to change the Appearance priority. For instance, the following sample demonstrates how to prioritize cell appearance settings. treeList1.Columns["Employee"].AppearanceCell.Options.HighPriority = true; treeList1.Columns("Employee").AppearanceCell.Options.HighPriority = True You can use this feature, for instance, to show Conditional Formatting rule appearances for selected Data Grid rows. using DevExpress.XtraEditors; using DevExpress.XtraGrid; using DevExpress.XtraGrid.Views.Grid; using System.Drawing; GridFormatRule freightRule; public Form1() { InitializeComponent(); // . . . //selected row appearance gridView1.Appearance.FocusedRow.BackColor = Color.DarkSalmon; gridView1.Appearance.FocusedRow.FontStyleDelta = FontStyle.Bold; //rule condition and condition settings FormatConditionRuleValue freightRuleCondition =() { Column = colFreight, Rule = freightRuleCondition }; gridView1.FormatRules.Add(freightRule); } Imports DevExpress.XtraEditors Imports DevExpress.XtraGrid Imports DevExpress.XtraGrid.Views.Grid Imports System.Drawing Private freightRule As GridFormatRule Public Sub New() InitializeComponent() ' . . . 'selected row appearance gridView1.Appearance.FocusedRow.BackColor = Color.DarkSalmon gridView1.Appearance.FocusedRow.FontStyleDelta = FontStyle.Bold 'rule condition and condition settings Dim freightRuleCondition As() With {.Column = colFreight, .Rule = freightRuleCondition} gridView1.FormatRules.Add(freightRule) End Sub Since custom draw and custom style events (GridView.RowStyle, GridView.RowCellStyle, GridView.CustomDrawCell, etc.) have the highest priority, you can use them more effectively instead of changing the appearance priority. GridFormatRule freightRule; private void GridView1_RowCellStyle(object sender, DevExpress.XtraGrid.Views.Grid.RowCellStyleEventArgs e) { GridView view = sender as GridView; if (view.IsRowSelected(e.RowHandle) && e.Column.FieldName == "Freight" && freightRule.IsFit(e.CellValue, view.GetDataSourceRowIndex(e.RowHandle))) { e.Appearance.Assign((freightRule.Rule as FormatConditionRuleAppearanceBase).Appearance); } } Private freightRule As GridFormatRule Private Sub GridView1_RowCellStyle(ByVal sender As Object, ByVal e As DevExpress.XtraGrid.Views.Grid.RowCellStyleEventArgs) Dim view As GridView = TryCast(sender, GridView) If view.IsRowSelected(e.RowHandle) AndAlso e.Column.FieldName = "Freight" AndAlso freightRule.IsFit(e.CellValue, view.GetDataSourceRowIndex(e.RowHandle)) Then e.Appearance.Assign((TryCast(freightRule.Rule, FormatConditionRuleAppearanceBase)).Appearance) End If End Sub DevExpress controls expose AppearancePrint properties that store appearance settings applied when the control is printed and/or exported. The AppearancePrint settings provide properties that specify whether printed/exported controls should use these unique appearances. If these properties are set to false, printed and/or exported controls are the same as those that appear on-screen. See Printing and Exporting for more information.
https://documentation.devexpress.com/WindowsForms/114444/Common-Features/Application-Appearance-and-Skin-Colors
CC-MAIN-2020-05
en
refinedweb
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab. Microcontroller Programming » compiling a .c file i need to write a program with both a millisecond delay and a microsecond delay. i figured out with a changed value in the provided millisecond delay function of the libnerdkits i could get the desired microsecond behavior. i hoped i could then just save this .c file with a new name ( udelay )and it would create a corresponding udelay.o ( object ) file. i read in another thread that the normal compile of a program automatically runs the make file in the libnerdkits. when i do this i get, however, no created udelay.o file created. can anyone help? Or you could just use delay_us(); instead of delay_ms(); Rick sure. well, it looks to me that the ms delay is built on the us delay function anyway, but that does not solve my problem. i need this to put this seperate function in the libnerdkits folder. this is what i do not know how to do. You are correct, delay_ms is built on delay_us. But you lost me as to what exactly you are trying to achieve. If you want to use delay_us or delay_ms, all you have to do is include ../libnerdkits/delay.h in your programs and both would be accessable. If you are having problems compiling a library from a copy you modified, that would require a change in the makefile. For instatnce. Say you make a change inside one of the funtions in the delay library (without adding or changing names of functions) then save it to a file called mydelay.c You would also need to copy delay.h to mydelay.h. Then in the makefile, change the reference of ../libnerdkits/delay.o in the LINKOBJECTS to ../libnerdkits/mydelay.o Then your file will compile with your program. Keep in mind, you will also need to change the include in your program for your custom delay file. just to explain the reason for two delays. i need a 30 micro second hi output. with the provided millisecond delay, i was unable to achieve such a short time. after changing a variable in the millisecond delay's .c file, i was able to get my 30 microseconds ( i am checking with an oscilliscope). between the high outputs i need it to stay low in the millisecond's range. the altered .c file did not cooperate, however. it appears less than 1 millisecond in the millisecond delay is a problem and more than 1000 microseconds in the microsecond delay is a problem. i read something on .h files and thought it would not be necessary for what i was trying to do. i will digest your info. and let you know the my results... good or bad. thanks -dave A little while back I had need for an improvement over delay_ms and delay_us and I put together this web page to create accurate timing loops. To use it in C, surround the results with "asm volatile". For example, to generate a 30us loop, go to the web page, enter 14.7456 as the CPU speed (for NerdKits crystal), 30 as the delay, press the "us" button, and it generates this: ; Delay 442 cycles ; 29us 975 25/576 ns ; at 14.7456 MHz ldi r18, 147 L1: dec r18 brne L1 nop To turn this into C code, make some minor tweaks to get it to work with the "asm" statement syntax: asm volatile ( " ldi r18, 147" "\n" "1: dec r18" "\n" " brne 1b" "\n" " nop" "\n" ); I'm planning on having the web page do that automatically at some point. It handles long delays quite easily. Here's a one-second delay for example: asm volatile( " ldi r18, 75 " "\n" " ldi r19, 206" "\n" " ldi r20, 238" "\n" "1: dec r20 " "\n" " brne 1b " "\n" " dec r19 " "\n" " brne 1b " "\n" " dec r18 " "\n" " brne 1b " "\n" " rjmp 2f " "\n" "2: " "\n" ); hey bretm, i have been away from my nerdkit for a while, but just plugged in your delay code. works! awesome. thanks man. I have found the delay functions that are included with avr/gcc are quite accurate and as long as you define F_CPU to reflect your clock frequency, they self-adjust. I wonder why the nerdkit folks wrote their own in libnerdkits? They're in the util directory ... #include <util/delay.h> functions are _delay_us(?) and _delay_ms(?). Please log in to post a reply.
http://www.nerdkits.com/forum/thread/1384/
CC-MAIN-2020-05
en
refinedweb
expression_language 0.2.1 expression_language # Dart library for parsing and evaluating expressions. Main goal # Main goal of this library is to be able to parse and evaluate expressions like: 4 * 2.5 + 8.5 + 1.5 / 3.0 3* @control1.numberProperty1 < (length(@control2.stringProperty1 + "test string") - 42) (!@control1.boolProperty1 && length(@control2.stringProperty1) == 21) ? "string1" : "string2" Features # Currently there are multiple supported data types and operations. Data types # String-> maps directly to the Dart String bool-> maps directly to the Dart bool Integer-> wrapper around the Dart int Decimal-> Custom type DateTime-> maps directly to the Dart DateTime Duration-> maps directly to the Dart Duration Note: To be able to easily work with financial data and not to lose precision we decided to use Decimal data type taken from dart-decimal instead of double. To keep our expression definitions strongly typed and to have a common way to work with all number data types we introduced base Number data type class which is simmilar to Dart num class. Since we can't modify definition of the Dart int we have also introduced Integer data type which is a simple wrapper around the int and which also extends Number. There is a conversion expression from Integer to int and from Decimal to double so higher layers can hide those data types as an implementation detail. To learn more about DateTime data type in expressions see this merge request. Operations # There are most of the standard operations working on the data types above. For example you can use most of the arithmetic operators like +, -, *, / , ~/, % or the logical operators like &&, ||, !, <, >, <=, >=, ==. There are also special functions like length which returns length of the string, round which rounds the Decimal number, now which returns current date time, toString which converts numeric value to the string. To be able to reference another expression from the expression itself we use a construct @element.propertyName. The element can map to any type extending ExpressionProviderElement. Usage # //Create expression parser and pass a map of the types extending ExpressionProviderElement which can hold other expressions. var expressionGrammarDefinition = ExpressionGrammarParser({"element": TestFormElement()}); var parser = expressionGrammarDefinition.build(); //Parse the expression. var result = parser .parse("(1 + @element.value < 3*5) && false || (2 + 3*(4 + 21)) >= 15"); //The expression now contains strongly typed expression tree representing the expression above. var expression = result.value as Expression<bool>; //Evaluate the expression. bool value = expression.evaluate(); 0.2.1 # Fixed analyzer issues. 0.2.0 # - Added ExpressionParserclass to abstract underlying parser library. - Increased minimum dart SDK to 2.6.0. 0.1.4 # - Added custom function expression. - Increased minimum dart SDK to 2.4.0. 0.1.3 # Added matches, contains, startsWith and endsWith functions. 0.1.2 # Fixed small health suggestions. 0.1.1 # Fixed formatting issues. 0.1.0 # Initial Version of the library. import 'package:expression_language/expression_language.dart'; void main() { var expressionGrammarDefinition = ExpressionGrammarParser({}); var parser = expressionGrammarDefinition.build(); var result = parser.parse('\"Hello 1 + 1 equals: \" + (1 + 1)'); var expression = result.value as Expression; var value = expression.evaluate(); print(value); } Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: expression_language: :expression_language/expression_language.dart'; We analyzed this package on Jan 16, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.7.0 - pana: 0.13.4 Maintenance issues and suggestions Support latest dependencies. (-10 points) The version constraint in pubspec.yaml does not support the latest published versions for 1 dependency ( petitparser).
https://pub.dev/packages/expression_language
CC-MAIN-2020-05
en
refinedweb
direct.p3d.PatchMaker¶ from direct.p3d.PatchMaker import PatchMaker Deprecated since version 1.10.0: The p3d packaging system has been replaced with the new setuptools-based system. See the Distributing Panda3D Applications manual section. Inheritance diagram - class PatchMaker(installDir)[source]¶ Bases: object This class will operate on an existing package install directory, as generated by the Packager, and create patchfiles between versions as needed. It is also used at runtime, to apply the downloaded patches. - class Package(packageDesc, patchMaker, xpackage=None)[source]¶ Bases: object This is a particular package. This contains all of the information needed to reconstruct the package’s desc file. readDescFile(self, doProcessing=False)[source]¶ Reads the existing package.xml file and stores it in this class for later rewriting. if doProcessing is true, it may massage the file and the directory contents in preparation for building patches. Returns true on success, false on failure. - class PackageVersion(packageName, platform, version, hostUrl, file)[source]¶ Bases: object A specific patch version of a package. This is not just the package’s “version” string; it also corresponds to the particular patch version, which increments independently of the “version”. applyPatch(self, origFile, patchFilename)[source]¶ Applies the named patch to the indicated original file, storing the results in a temporary file, and returns that temporary Filename. Returns None on failure. getFile(self)[source]¶ Returns the Filename of the archive file associated with this version. If the file doesn’t actually exist on disk, a temporary file will be created. Returns None if the file can’t be recreated. getPatchChain(self, startPv, alreadyVisited=[])[source]¶ Returns a list of patches that, when applied in sequence to the indicated PackageVersion object, will produce this PackageVersion object. Returns None if no chain can be found. getRecreateFilePlan(self, alreadyVisited=[])[source]¶ Returns the tuple (startFile, startPv, plan), describing how to recreate the archive file for this version. startFile and startPv is the Filename and packageVersion of the file to start with, and plan is a list of tuples (patchfile, pv), listing the patches to apply in sequence, and the packageVersion object associated with each patch. Returns (None, None, None) if there is no way to recreate this archive file. - class Patchfile(package)[source]¶ Bases: object A single patchfile for a package. fromFile(self, packageDir, patchFilename, sourceFile, targetFile)[source]¶ Creates the data structures from an existing patchfile on disk. getSourceKey(self)[source]¶ Returns the key for locating the package that this patchfile can be applied to. getTargetKey(self)[source]¶ Returns the key for locating the package that this patchfile will generate. buildPatch(self, v1, v2, package, patchFilename)[source]¶ Builds a patch from PackageVersion v1 to PackageVersion v2, and stores it in patchFilename.pz. Returns true on success, false on failure. buildPatchChains(self)[source]¶ Builds up the chains of PackageVersions and the patchfiles that connect them. buildPatchFile(self, origFilename, newFilename, patchFilename, printOrigName, printNewName)[source]¶ Creates a patch file from origFilename to newFilename, storing the result in patchFilename. Returns true on success, false on failure. buildPatches(self, packageNames=None)[source]¶ Makes the patches required in a particular directory structure on disk. If packageNames is None, this makes patches for all packages; otherwise, it should be a list of package name strings, limiting the set of packages that are processed. cleanup(self)[source]¶ Should be called on exit to remove temporary files and such created during processing. getPatchChainToCurrent(self, descFilename, fileSpec)[source]¶ Reads the package defined in the indicated desc file, and constructs a patch chain from the version represented by fileSpec to the current version of this package, if possible. Returns the patch chain if successful, or None otherwise. processAllPackages(self)[source]¶ Walks through the list of packages, and builds missing patches for each one. processSomePackages(self, packageNames)[source]¶ Builds missing patches only for the named packages. readPackageDescFile(self, descFilename)[source]¶ Reads a desc file associated with a particular package, and adds the package to self.packages. Returns the Package object, or None on failure.
https://docs.panda3d.org/1.10/python/reference/direct.p3d.PatchMaker
CC-MAIN-2020-05
en
refinedweb
Available Liquid tags Tags make up the programming logic that tells templates what to do. Tags are wrapped in {% %}. {% if user.fullname == 'Dave Bowman' %} Hello, Dave. {% endif %} White space control Normally, Liquid renders everything outside of variable and tag blocks verbatim, with all the white space as-is. Occasionally you don't want the extra white space, but you still want to format the template cleanly, which requires white space. You can tell the engine to strip all leading or trailing white space by adding a hyphen (-) to the start or end block tag. Code {% for i in (1..5) --%} {{ i }} {%-- endfor %} Output 12345 See also Add dynamic content and create custom templates Liquid types Liquid Objects Liquid Filters
https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/liquid-tags
CC-MAIN-2018-26
en
refinedweb
- = $autoscaling->RegisterScalableTarget( { 'ServiceNamespace' => 'ecs', 'RoleARN' => 'arn:aws:iam::012345678910:role/ApplicationAutoscalingECSRole', 'MaxCapacity' => 10, 'MinCapacity' => 1, 'ResourceId' => 'service/default/web-app', 'ScalableDimension' => 'ecs:service:DesiredCount' } ); # To register an EC2 Spot fleet as a scalable target # This example registers a scalable target from an Amazon EC2 Spot fleet with # a minimum target capacity of 1 and a maximum of 10. my $RegisterScalableTargetResponse = $autoscaling->RegisterScalableTarget( { 'ServiceNamespace' => 'ec2', 'RoleARN' => 'arn:aws:iam::012345678910:role/ApplicationAutoscalingSpotRole', 'MinCapacity' => 1, 'MaxCapacity' => 10, 'ScalableDimension' => 'ec2:spot-fleet-request:TargetCapacity', 'ResourceId' => 'spot-fleet-request/sfr-45e69d8a-be48-4539-bbf3-3464e99c50cCapacity => Int The maximum value to scale to in response to a scale out event. This parameter is required if you are registering a scalable target. MinCapacity => Int The minimum value to scale to in response to a scale in event. This parameter is required if you are registering a scalable target. REQUIRED ResourceId => Str The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.... Amazon SageMaker endpoint variants - The resource type is variantand the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. RoleARN => Str Application Auto Scaling creates a service-linked role that grants it permissions to modify the scalable target on your behalf. For more information, see Service-Linked Roles for Application Auto Scaling (). For resources that are not supported using a service-linked role, this parameter is required and must specify the ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf. REQUIRED ScalableDimension => Str The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.. Valid values are: " REQUIRED ServiceNamespace => Str The namespace of the AWS service. For more information, see AWS Service Namespaces () in the Amazon Web Services General Reference. Valid values are: "ecs", "elasticmapreduce", "ec2", "appstream", "dynamodb", "rds", "sagemaker" SEE ALSO This class forms part of Paws, documenting arguments for method RegisterScalableTarget in Paws::ApplicationAutoScaling BUGS and CONTRIBUTIONS The source code is located here: Please report bugs to:
https://metacpan.org/pod/Paws::ApplicationAutoScaling::RegisterScalableTarget
CC-MAIN-2018-26
en
refinedweb
Table of Contents Here are Standard, simple, and portable ways to perform common transformations on a string instance, such as "convert to all upper case." The word transformations is especially apt, because the standard template function transform<> is used. This code will go through some iterations. Here's a simple version: #include <string> #include <algorithm> #include <cctype> // old <ctype.h> struct ToLower { char operator() (char c) const { return std::tolower(c); } }; struct ToUpper { char operator() (char c) const { return std::toupper(c); } }; int main() { std::string s ("Some Kind Of Initial Input Goes Here"); // Change everything into upper case std::transform (s.begin(), s.end(), s.begin(), ToUpper()); // Change everything into lower case std::transform (s.begin(), s.end(), s.begin(), ToLower()); // Change everything back into upper case, but store the // result in a different string std::string capital_s; capital_s.resize(s.size()); std::transform (s.begin(), s.end(), capital_s.begin(), ToUpper()); } Note that these calls all involve the global C locale through the use of the C functions toupper/tolower. This is absolutely guaranteed to work -- but only if the string contains only characters from the basic source character set, and there are only 96 of those. Which means that not even all English text can be represented (certain British spellings, proper names, and so forth). So, if all your input forevermore consists of only those 96 characters (hahahahahaha), then you're done. Note that the ToUpper and ToLower function objects are needed because toupper and tolower are overloaded names (declared in <cctype> and <locale>) so the template-arguments for transform<> cannot be deduced, as explained in this At minimum, you can write short wrappers like char toLower (char c) { return std::tolower(c); } (Thanks to James Kanze for assistance and suggestions on all of this.) Another common operation is trimming off excess whitespace. Much like transformations, this task is trivial with the use of string's find family. These examples are broken into multiple statements for readability: std::string str (" \t blah blah blah \n "); // trim leading whitespace string::size_type notwhite = str.find_first_not_of(" \t\n"); str.erase(0,notwhite); // trim trailing whitespace notwhite = str.find_last_not_of(" \t\n"); str.erase(notwhite+1); Obviously, the calls to find could be inserted directly into the calls to erase, in case your compiler does not optimize named temporaries out of existence. The well-known-and-if-it-isn't-well-known-it-ought-to-be Guru of the Week discussions held on Usenet covered this topic in January of 1998. Briefly, the challenge was, “write a 'ci_string' class which is identical to the standard 'string' class, but is case-insensitive in the same way as the (common but nonstandard) C function stricmp()”. ci_string s( "AbCdE" ); // case insensitive assert( s == "abcde" ); assert( s == "ABCDE" ); // still case-preserving, of course assert( strcmp( s.c_str(), "AbCdE" ) == 0 ); assert( strcmp( s.c_str(), "abcde" ) != 0 ); The solution is surprisingly easy. The original answer was posted on Usenet, and a revised version appears in Herb Sutter's book Exceptional C++ and on his website as GotW 29. See? Told you it was easy! Added June 2000: The May 2000 issue of C++ Report contains a fascinating article by Matt Austern (yes, the Matt Austern) on why case-insensitive comparisons are not as easy as they seem, and why creating a class is the wrong way to go about it in production code. (The GotW answer mentions one of the principle difficulties; his article mentions more.) Basically, this is "easy" only if you ignore some things, things which may be too important to your program to ignore. (I chose to ignore them when originally writing this entry, and am surprised that nobody ever called me on it...) The GotW question and answer remain useful instructional tools, however. Added September 2000: James Kanze provided a link to a Unicode Technical Report discussing case handling, which provides some very good information. The std::basic_string is tantalizingly general, in that it is parameterized on the type of the characters which it holds. In theory, you could whip up a Unicode character class and instantiate std::basic_string<my_unicode_char>, or assuming that integers are wider than characters on your platform, maybe just declare variables of type std::basic_string<int>. That's the theory. Remember however that basic_string has additional type parameters, which take default arguments based on the character type (called CharT here): template <typename CharT, typename Traits = char_traits<CharT>, typename Alloc = allocator<CharT> > class basic_string { .... }; Now, allocator<CharT> will probably Do The Right Thing by default, unless you need to implement your own allocator for your characters. But char_traits takes more work. The char_traits template is declared but not defined. That means there is only template <typename CharT> struct char_traits { static void foo (type1 x, type2 y); ... }; and functions such as char_traits<CharT>::foo() are not actually defined anywhere for the general case. The C++ standard permits this, because writing such a definition to fit all possible CharT's cannot be done. The C++ standard also requires that char_traits be specialized for instantiations of char and wchar_t, and it is these template specializations that permit entities like basic_string<char,char_traits<char>> to work. If you want to use character types other than char and wchar_t, such as unsigned char and int, you will need suitable specializations for them. For a time, in earlier versions of GCC, there was a mostly-correct implementation that let programmers be lazy but it broke under many situations, so it was removed. GCC 3.4 introduced a new implementation that mostly works and can be specialized even for int and other built-in types. If you want to use your own special character class, then you have a lot of work to do, especially if you with to use i18n features (facets require traits information but don't have a traits argument). Another example of how to specialize char_traits was given on the mailing list and at a later date was put into the file include/ext/pod_char_traits.h. We agree that the way it's used with basic_string (scroll down to main()) doesn't look nice, but that's because the nice-looking first attempt turned out to not be conforming C++, due to the rule that CharT must be a POD. (See how tricky this is?) The Standard C (and C++) function strtok() leaves a lot to be desired in terms of user-friendliness. It's unintuitive, it destroys the character string on which it operates, and it requires you to handle all the memory problems. But it does let the client code decide what to use to break the string into pieces; it allows you to choose the "whitespace," so to speak. A C++ implementation lets us keep the good things and fix those annoyances. The implementation here is more intuitive (you only call it once, not in a loop with varying argument), it does not affect the original string at all, and all the memory allocation is handled for you. It's called stringtok, and it's a template function. Sources are as below, in a less-portable form than it could be, to keep this example simple (for example, see the comments on what kind of string it will accept). #include <string> template <typename Container> void stringtok(Container &container, string const &in, const char * const delimiters = " \t\n") { const string::size_type len = in.length(); string::size_type i = 0; while (i < len) { // Eat leading whitespace i = in.find_first_not_of(delimiters, i); if (i == string::npos) return; // Nothing left but white space // Find the end of the token string::size_type j = in.find_first_of(delimiters, i); // Push token if (j == string::npos) { container.push_back(in.substr(i)); return; } else container.push_back(in.substr(i, j-i)); // Set up for next loop i = j + 1; } } The author uses a more general (but less readable) form of it for parsing command strings and the like. If you compiled and ran this code using it: std::list<string> ls; stringtok (ls, " this \t is\t\n a test "); for (std::list<string>const_iterator i = ls.begin(); i != ls.end(); ++i) { std::cerr << ':' << (*i) << ":\n"; } You would see this as output: :this: :is: :a: :test: with all the whitespace removed. The original s is still available for use, ls will clean up after itself, and ls.size() will return how many tokens there were. As always, there is a price paid here, in that stringtok is not as fast as strtok. The other benefits usually outweigh that, however. Added February 2001: Mark Wilden pointed out that the standard std::getline() function can be used with standard istringstreams to perform tokenizing as well. Build an istringstream from the input text, and then use std::getline with varying delimiters (the three-argument signature) to extract tokens into a string. From GCC 3.4 calling s.reserve(res) on a string s with res < s.capacity() will reduce the string's capacity to std::max(s.size(), res). This behaviour is suggested, but not required by the standard. Prior to GCC 3.4 the following alternative can be used instead std::string(str.data(), str.size()).swap(str); This is similar to the idiom for reducing a vector's memory usage (see this FAQ entry) but the regular copy constructor cannot be used because libstdc++'s string is Copy-On-Write. In C++0x mode you can call s.shrink_to_fit() to achieve the same effect as s.reserve(s.size()). A common lament seen in various newsgroups deals with the Standard string class as opposed to the Microsoft Foundation Class called CString. Often programmers realize that a standard portable answer is better than a proprietary nonportable one, but in porting their application from a Win32 platform, they discover that they are relying on special functions offered by the CString class. Things are not as bad as they seem. In this message, Joe Buck points out a few very important things: The Standard string supports all the operations that CString does, with three exceptions. Two of those exceptions (whitespace trimming and case conversion) are trivial to implement. In fact, we do so on this page. The third is CString::Format, which allows formatting in the style of sprintf. This deserves some mention: The old libg++ library had a function called form(), which did much the same thing. But for a Standard solution, you should use the stringstream classes. These are the bridge between the iostream hierarchy and the string class, and they operate with regular streams seamlessly because they inherit from the iostream hierarchy. An quick example: #include <iostream> #include <string> #include <sstream> string f (string& incoming) // incoming is "foo N" { istringstream incoming_stream(incoming); string the_word; int the_number; incoming_stream >> the_word // extract "foo" >> the_number; // extract N ostringstream output_stream; output_stream << "The word was " << the_word << " and 3*N was " << (3*the_number); return output_stream.str(); } A serious problem with CString is a design bug in its memory allocation. Specifically, quoting from that same message: CString suffers from a common programming error that results in poor performance. Consider the following code: CString n_copies_of (const CString& foo, unsigned n) { CString tmp; for (unsigned i = 0; i < n; i++) tmp += foo; return tmp; } This function is O(n^2), not O(n). The reason is that each += causes a reallocation and copy of the existing string. Microsoft applications are full of this kind of thing (quadratic performance on tasks that can be done in linear time) -- on the other hand, we should be thankful, as it's created such a big market for high-end ix86 hardware. :-) If you replace CString with string in the above function, the performance is O(n). Joe Buck also pointed out some other things to keep in mind when comparing CString and the Standard string class: CString permits access to its internal representation; coders who exploited that may have problems moving to string. Microsoft ships the source to CString (in the files MFC\SRC\Str{core,ex}.cpp), so you could fix the allocation bug and rebuild your MFC libraries. Note: It looks like the CString shipped with VC++6.0 has fixed this, although it may in fact have been one of the VC++ SPs that did it. string operations like this have O(n) complexity if the implementors do it correctly. The libstdc++ implementors did it correctly. Other vendors might not. While chapters of the SGI STL are used in libstdc++, their string class is not. The SGI string is essentially vector<char> and does not do any reference counting like libstdc++'s does. (It is O(n), though.) So if you're thinking about SGI's string or rope classes, you're now looking at four possibilities: CString, the libstdc++ string, the SGI string, and the SGI rope, and this is all before any allocator or traits customizations! (More choices than you can shake a stick at -- want fries with that?)
http://gcc.gnu.org/onlinedocs/gcc-4.6.3/libstdc++/manual/manual/strings.html
CC-MAIN-2018-26
en
refinedweb
Symptom Assume that you create a new MailItem, AppointmentItem, or MeetingItem object by using the Outlook Object Model. You then set the HtmlBody property of the item to some previously created well-formed HTML source that contains Cascading Style Sheet (CSS) styles. After you call the Display method and the Send method to send the item, the formatting that's dictated by the configured CSS styles may disappear, or the paragraph styles may be replaced by the MSONormal class. Cause Microsoft Outlook uses Microsoft Word as its editor. Loss of formatting may occur when the HTML source is validated by the Word HTML engine when the item is sent. Workaround We recommend that you use the underlying WordEditor object of the inspector to edit the HTML and Rich Text Format (RTF) bodies of Outlook items when you use the Outlook Object Model, instead of editing the HtmlBody property. See the following example. Note See Word Object Model for more information. using Outlook = Microsoft.Office.Interop.Outlook;using Word = Microsoft.Office.Interop.Word;namespace CreateAndEditMailItemUsingWord{ class Program { static void Main(string[] args) { Outlook.MailItem mailItem = (new Outlook.Application()).CreateItem(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem); Word.Document wordDocument = mailItem.GetInspector.WordEditor as Word.Document; // Insert the text at the very beginning of the document // You can control fonts and formatting using the ParagraphFormat propety of the Word.Range object Word.Range wordRange = wordDocument.Range(0, 0); wordRange.Text = "Please insert your text here"; mailItem.Display(); } }}
https://support.microsoft.com/pt-pt/help/4020759/text-formatting-may-be-lost-when-editing-the-htmlbody-property-of-an
CC-MAIN-2018-26
en
refinedweb
.exception;26 27 /**28 * Example exception.29 *30 * @author Stephan J. Schmidt31 * @version $Id: ExampleException.java 645 2003-01-09 09:49:12Z stephan $32 **/33 public class ExampleException extends ChainedException {34 public ExampleException(String message, Throwable cause) {35 super(message, cause);36 }37 38 public ExampleException(String message) {39 super(message);40 }41 }42 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/snipsnap/exception/ExampleException.java.htm
CC-MAIN-2018-26
en
refinedweb
May 02, 2015 .. currentmodule:: rdfextras.sparql TODO: merge this first bit from sparql.sparql.py into rest of doc... updating all along the way. SPARQL implementation on top of RDFLib Implementation of the W3C SPARQL language (version April 2005). The basic class here is supposed to be a superclass of rdfextras.sparql.graph; it has been separated only for a better maintainability. There is a separate description for the functionalities. For a general description of the SPARQL API, see the separate, more complete description. The top level (__init__.py) module of the Package imports the important classes. In other words, the user may choose to use the following imports only: from rdflibUtils import myTripleStore from rdflibUtils import retrieveRDFFiles from rdflibUtils import SPARQLError from rdflibUtils import GraphPattern The module imports and/or creates some frequently used Namespaces, and these can then be imported by the user like: from rdflibUtils import ns_rdf Finally, the package also has a set of convenience string defines for XML Schema datatypes (ie, the URI-s of the datatypes); ie, one can use: from rdflibUtils import type_string from rdflibUtils import type_integer from rdflibUtils import type_long from rdflibUtils import type_double from rdflibUtils import type_float from rdflibUtils import type_decimal from rdflibUtils import type_dateTime from rdflibUtils import type_date from rdflibUtils import type_time from rdflibUtils import type_duration These are used, for example, in the sparql-p implementation. The three most important classes in RDFLib for the average user are Namespace, URIRef and Literal; these are also imported, so the user can also use, eg: from rdflib import Namespace, URIRef, Literal - Version 1.0: based on an earlier version of the SPARQL, first released implementation - Version 2.0: version based on the March 2005 SPARQL document, also a major change of the core code (introduction of the separate GraphPattern rdflibUtils.graph.GraphPattern class, etc). - Version 2.01: minor changes only: - switch to epydoc as a documentation tool, it gives a much better overview of the classes - addition of the SELECT * feature to sparql-p - Version 2.02: - added some methods to myTripleStore rdflibUtils.myTripleStore.myTripleStore to handle Alt and Bag the same way as Seq - added also methods to add() collections and containers to the triple store, not only retrieve them - Version 2.1: adapted to the inclusion of the code into rdflib, thanks to Michel Pelletier - Version 2.2: added the sorting possibilities; introduced the Unbound class and have a better interface to patterns using this (in the BasicGraphPattern class) @author: Ivan Herman @license: This software is available for use under the W3C Software License @contact: Ivan Herman, ivan@ivan-herman.net @version: 2.2 Am SPARQL error has been detected
http://rdfextras.readthedocs.io/en/latest/sparql/sparql.html
CC-MAIN-2018-26
en
refinedweb
PCF8574 I/O as input I think there is a problem in I2CIO::read() function. T PCF8574 wont correctly work with input pin. According the datasheet, each input pin have to be kept HIGH at any write. The function writes 0 to each input and it doesn' work. With the below fix it works as expected. I'm using your lib for years so it is my two cents. Thank you! Robert Badar (aka budvar10) vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv // // write int I2CIO::write ( uint8_t value ) { int status = 0; if ( _initialised ) { // Only write HIGH the values of the ports that have been initialised as // outputs updating the output shadow of the device // // 15-FEB-2018 - fix, all I/Os initialized as input must be written as HIGH // _shadow = ( value & ~(_dirMask) ); _shadow = ( value | _dirMask ); Wire.beginTransmission ( _i2cAddr ); if (ARDUINO < 100) Wire.send ( _shadow ); else Wire.write ( _shadow ); endif status = Wire.endTransmission (); } return ( (status == 0) ); } Hi Robert, thanks for pointing this out. One question, you mention that there is a problem with the I2CIO read method, however, you are making an update the write method in the I2CIO? Please clarify. Yes, exactly the write. To define which I/O is input, appropriate I/O have to be set (write 1) and must be kept set the whole time, it means each write must be 1 to each input pin. See e.g. the PCF8574; PCF8574A datasheet (rev.5 -27 MAY2013) from NXP, page 13-14. In your lib the write function always write 0 to inputs. The 0 at inputs does not mean there is written nothing, like imply in the comment. There is also an explanation on page 7: Input HIGH, Input LOW. My real testing is as follows (all inputs are pulled up): 1. If 0 is written to the I/O. The 0 will be read whether is pulled up or connected to GND. Only hard connection to VCC (pull-up resistance less than 50ohm at Vcc = 5V! ) cause that it will read 1. 2. If 1 is written to the i/O. The 1 will be read if pulled-up of even not connected. The 0 will be read in it will be connected to GND or pulled-down with significantly smaller resistance. The I/Os are bidirectional and it means that there is no difference between input and output from outside. All functionality is "hidden" in the circuit. The write operation can write only to all pins together. It is impossible to distinguish which pin is written and which one not. In the same time I have to mention that the documentation is not absolutely clear, for me at least. I read the datasheeet from PHILIPS (very old), Texas. All wrote "The I/Os should be HIGH before being used as inputs." but they must be HIGH. Finally in the NXP I found exact example. It should work also with writing the direction mask just before read inside of read(), but inside of write function it produce more effective code. I tested proposed change and it worked for me just fine. My testing device had LCD with PCF8574 and 3 other PCF's used as general I/Os. Just for sure, here is the cpp file because the code from my first post above is unreadable. Should be pushed into the main repo "default" branch TAP 1.3.6. Please give the latest version a go and see if we can close the defect. I am not able to reproduce or test it. Thanks so much for sharing and requesting the update. I've created the branch. Not sure if it is according your perception since I'm new here on bitbucket. I found same problem in several discussions on the internet. Thank you for excellent library and for your effort to keep it maintained. Hey RB, I can't merge that code in. It is based on a very old version of the library. Try using the integration branch. It should have the change you have done already. Hello Francisco, I'd made the pull request again on the integration branch. Regards Robert Fixed in principle and in the integration branch. Please check. Not correct, The I2CIO::write seems to be without change. Only I2CIO::digitalWrite was changed. I2CIO::write is still incorrect with Is the write method the only one you have changed as follows: The digitalWrite method can't be processed as you have shared it since value is not present in the method. This is very odd as I have been using these routines for a long, long time without a glitch. Hm, I am little bit confused. I can see green/pink highlighted difference in side by side comparison of sources. This is in I2CIO::write() function. Here is complete I2CIO::write() working code (value is a parameter): Lines 154-156. The ::digitalWrite is going throughout the ::write, so no change is needed in digitalWrite. Please try to check it again. Something wrong with my browser? I've check it again with IE instead of Firefox (I'm using latest) and I can see the problem in my source. There is change in ::write but some conflict in ::digitalWrite. Weird... I corrected my source, but did not create a new pull-request. In any case, the int I2CIO::write ( uint8_t value ) need to be changed as in the post above. Written value have to have '1' at all bits for inputs. Not '0'. Sorry for any misunderstanding. Merge defect that fixes #80. → <<cset f7709a0d283a>> Yes, it's ok now. Thank you. Bye.
https://bitbucket.org/fmalpartida/new-liquidcrystal/issues/80/pcf8574-i-o-as-input
CC-MAIN-2018-26
en
refinedweb
Lambda Function Handler (Java) At the time you create a Lambda function you specify a handler that AWS Lambda can invoke when the service executes the Lambda function on your behalf. Lambda supports two approaches for creating a handler: Loading the handler method directly without having to implement an interface. This section describes this approach. Implementing standard interfaces provided as part of aws-lambda-java-corelibrary (interface approach). For more information, see Leveraging Predefined Interfaces for Creating Handler (Java). The general syntax for the handler is as follows: outputType handler-name( inputTypeinput, Context context) { ... } In order for AWS Lambda to successfully invoke a handler it must be invoked with input data that can be serialized into the data type of the input parameter. In the syntax, note the following: inputType– The first handler parameter is the input to the handler, which can be event data (published by an event source) or custom input that you provide such as a string or any custom data object. In order for AWS Lambda to successfully invoke this handler, the function must be invoked with input data that can be serialized into the data type of the inputparameter. outputType– If you plan to invoke the Lambda function synchronously (using the RequestResponseinvocation type), you can return the output of your function using any of the supported data types. For example, if you use a Lambda function as a mobile application backend, you are invoking it synchronously. Your output data type will be serialized into JSON. If you plan to invoke the Lambda function asynchronously (using the Eventinvocation type), the outputTypeshould be void. For example, if you use AWS Lambda with event sources such as Amazon S3 or Amazon SNS, these event sources invoke the Lambda function using the Eventinvocation type. The inputTypeand outputTypecan be one of the following: Primitive Java types (such as String or int). Predefined AWS event types defined in the aws-lambda-java-eventslibrary. For example S3Eventis one of the POJOs predefined in the library that provides methods for you to easily read information from the incoming Amazon S3 event. You can also write your own POJO class. AWS Lambda will automatically serialize and deserialize input and output JSON based on the POJO type. For more information, see Handler Input/Output Types (Java). You can omit the Contextobject from the handler method signature if it isn't needed. For more information, see The Context Object (Java). For example, consider the following Java example code. package example; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; public class Hello implements RequestHandler<Integer, String>{ public String myHandler(int myCount, Context context) { return String.valueOf(myCount); } } In this example input is of type Integer and output is of type String. If you package this code and dependencies, and create your Lambda function, you specify example.Hello::myHandler ( package. class:: method-reference) as the handler. In the example Java code, the first handler parameter is the input to the handler (myHandler), which can be event data (published by an event source such as Amazon S3) or custom input you provide such as an Integer object (as in this example) or any custom data object. For instructions to create a Lambda function using this Java code, see (Optional) Create a Lambda Function Authored in Java. Handler Overload Resolution If your Java code contains multiple methods with same name as the handler name, then AWS Lambda uses the following rules to pick a method to invoke: Select the method with the largest number of parameters. If two or more methods have the same number of parameters, AWS Lambda selects the method that has the Contextas the last parameter. If none or all of these methods have the Contextparameter, then the behavior is undefined. Additional Information The following topics provide more information about the handler. For more information about the handler input and output types, see Handler Input/Output Types (Java). For information about using predefined interfaces to create a handler, see Leveraging Predefined Interfaces for Creating Handler (Java). If you implement these interfaces, you can validate your handler method signature at compile time. If your Lambda function throws an exception, AWS Lambda records metrics in CloudWatch indicating that an error occurred. For more information, see Function Errors (Java).
https://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-handler-types.html?shortFooter=true
CC-MAIN-2018-26
en
refinedweb
Or, How I Learned To Stop Worrying And Raise An Event In part 1 of our .NET Event adventure, we covered how enriching your events with custom arguments and availing an extensibility point will make all of your component’s consuming developer’s wildest dreams come true, and probably some of yours as well. Except for that one with the Unicorn. They’re not for real, dude, let it go. As promised, here in part 2, I will lay out a code snippet to put some icing on the rich event cake. This will tighten up the tedium of declaring the handler delegate, the event-raising method, and the arguments class as well as the event itself. Without further ado, here is the content of the C# snippet: <?xml version="1.0" encoding="utf-8"?> <CodeSnippets xmlns=""> <CodeSnippet Format="1.0.0"> <Header> <Title>Declare An Event</Title> <Description>Snippet for generating a rich event</Description> <Author>Aptera Software</Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> <Shortcut>event</Shortcut> </Header> <Snippet> <Imports> <Import> <Namespace>System</Namespace> </Import> </Imports> <Declarations> <Literal> <ID>EventName</ID> <Type>String</Type> <ToolTip>Replace with the event name.</ToolTip> <Default>SomethingHappened</Default> </Literal> </Declarations> <Code Language="csharp" Kind="method decl"> <![CDATA[/// <summary> /// Raised when $EventName$. /// </summary> public event $EventName$Handler $EventName$; public delegate void $EventName$Handler( object sender, $EventName$EventArgs e); protected void On$EventName$($EventName$EventArgs e) { if ($EventName$ != null) $EventName$(this, e); } public class $EventName$EventArgs : System.EventArgs { // TODO: add argument properties /// <summary> /// Initializes the $EventName$EventArgs. /// </summary> public $EventName$EventArgs() { // TODO: set argument properties } }]]> </Code> </Snippet> </CodeSnippet> </CodeSnippets> And here is a link to download both the VB and C# .snippet files (in a .zip file). Installing the Snippet If you have never installed a code snippet into Visual Studio before (or you have done, and just didn’t see committing the process to your long-term memory as “your thing”), here’s how: - In Visual Studio, select Tools, Code Snippets Manager… - On the Code Snippets Manager dialog, click the Import… button - Select the .snippet file that you wish to install from its location on your file system - On the Import Code Snippet dialog, check the location(s) to install the snippet (usually My Code Snippets will do) - Click Finish Using the Snippet Now, in your component code, you should be able to just type “event”: Hit the Tab key: …and update the name of your shiny new event. Also, I was just kidding about the Unicorn dream. Raising quality events is a very powerful technique, and anything is possible when you’re using .NET.
https://code.jon.fazzaro.com/2009/04/06/broaden-your-event-horizons-part-2/
CC-MAIN-2018-26
en
refinedweb
At base, we have get(key) and put(key, data) as provided by the Sleepycat/BerkeleyDB core API: get(key, default=None, txn=None, flags=0, ...) Returns the data object associated with key. put(key, data, txn=None, flags=0, ...) Stores the key/data pair in the database. From the documentation for Python’s (now deprecated) bsddb module: the user must serialize them somehow, typically using marshal.dumps() or pickle.dumps(). The two main points of interest here are i) the choice of hash, btree or record-based storage techniques typically provided by key-data stores and ii) the requirement for serialization of Python objects - which, for the case in point, are RDFLib objects: BNode, Literal, URIRef, Namespace, Graph, QuotedGraph, etc. To illustrate (sketchily) how this basic principle of serialized key-data pairs is used to model an RDF store, here is a sort-of-pseudocode distillation of RDFLib’s Sleepycat Store implementation (which uses non-nested btrees, specified via a relevant db flag) and shows the creation of the indices and the main key-data tables: context, namespace, prefix, k2i and i2k (the latter being “key-to-index” and “index-to-key” respectively) and then, broadly, how a {subject, predicate, object} triple is serialized into keys and indices which are then put into the underlying key-data store: def open(self, config): # creating and opening the DBs # Create the indices ... self.__indices = [None,] * 3 self.__indices_info = [None,] * 3 for i in xrange(0, 3): index_name = to_key_func(i)(("s", "p", "o"), "c") index = db.DB(db_env) index.open(index_name, dbopenflags) self.__indices[i] = index self.__indices_info[i] = \ (index, to_key_func(i), from_key_func(i)) # [ ... ] # Create the required key-data stores self.__contexts = db.DB(db_env) self.__contexts.open("contexts", dbopenflags) self.__namespace = db.DB(db_env) self.__namespace.open("namespace", dbopenflags) self.__prefix = db.DB(db_env) self.__prefix.open("prefix", dbopenflags) self.__k2i = db.DB(db_env) self.__k2i.open("k2i", dbopenflags) self.__i2k = db.DB(db_env) self.__i2k.open("i2k", dbopenflags) # [ ... ] def add(self, (subject, predicate, object), context=None, txn=None): # Serializing the subject, predicate, object and context s = _to_string(subject, txn=txn) p = _to_string(predicate, txn=txn) o = _to_string(object, txn=txn) c = _to_string(context, txn=txn) # Storing the serialized data (protected by a transaction # object, if provided) cspo, cpos, cosp = self.__indices value = cspo.get("%s^%s^%s^%s^" % (c, s, p, o), txn=txn) if value is None: self.__contexts.put(c, "", txn=txn) contexts_value = cspo.get("%s^%s^%s^%s^" % ("", s, p, o), txn=txn) or "" contexts = set(contexts_value.split("^")) contexts.add(c) contexts_value = "^".join(contexts) assert contexts_value!=None cspo.put("%s^%s^%s^%s^" % (c, s, p, o), "", txn=txn) cpos.put("%s^%s^%s^%s^" % (c, p, o, s), "", txn=txn) cosp.put("%s^%s^%s^%s^" % (c, o, s, p), "", txn=txn) if not quoted: cspo.put("%s^%s^%s^%s^" % ("", s, p, o), contexts_value, txn=txn) cpos.put("%s^%s^%s^%s^" % ("", p, o, s), contexts_value, txn=txn) cosp.put("%s^%s^%s^%s^" % ("", o, s, p), contexts_value, txn=txn) A corresponding get method reconstructs (de-serializes) the triple from the indices and keys. Returning to the issue of the choice of hash, btree or record-based storage, some of the issues that might usefully be taken into consideration are outlined in the Sleepycat DB manual: Choosing between BTree and Hash For small working datasets that fit entirely in memory, there is no difference between BTree and Hash. Both will perform just as well as the other. In this situation, you might just as well use BTree, if for no other reason than the majority of DB applications use BTree. Note that the main concern here is your working dataset, not your entire dataset. Many applications maintain large amounts of information but only need to access some small portion of that data with any frequency. So what you want to consider is the data that you will routinely use, not the sum total of all the data managed by your application. However, as your working dataset grows to the point where you cannot fit it all into memory, then you need to take more care when choosing your access method. Specifically, choose: BTree if your keys have some locality of reference. That is, if they sort well and you can expect that a query for a given key will likely be followed by a query for one of its neighbors. Hash if your dataset is extremely large. For any given access method, DB must maintain a certain amount of internal information. However, the amount of information that DB must maintain for BTree is much greater than for Hash. The result is that as your dataset grows, this internal information can dominate the cache to the point where there is relatively little space left for application data. As a result, BTree can be forced to perform disk I/O much more frequently than would Hash given the same amount of data. Moreover, if your dataset becomes so large that DB will almost certainly have to perform disk I/O to satisfy a random request, then Hash will definitely out perform BTree because it has fewer internal records to search through than does BTree. And, in addition, there is the usual raft of cryptic XXXTHISNTHAT flags for tweaking the inevitable variety of database speed/space/structure knobs. The design of the RDFLib Store facilitates the exploration of the above-mentioned tradeoffs as shown in Drew Pertulla’s experiment with replacing the BerkeleyDB key-data database with the Tokyo Cabinet key-data database, using the pytc Python bindings. Firstly, the Sleepycat Store is adapted by swapping out bsddb’s BDB (btree) API in favour of pytc’s HDB (hash) API ... class BdbApi(pytc.HDB): """ Make HDB's API look more like BerkeleyDB so we can share the Sleepycat code. """ def get(self, key, txn=None): try: return pytc.HDB.get(self, key) except KeyError: return None def put(self, key, data, txn=None): try: return pytc.HDB.set(self, key, data) except KeyError: return None def delete(self, key, txn=None): try: return pytc.HDB.out(self, key) except KeyError: return None The next step is to create a wrapper to substitute for the standard bsddb open() call, returning a BdbApi object instead of a bsddb object ... def dbOpen(name): return BdbApi(name, pytc.HDBOWRITER | pytc.HDBOCREAT) This can be slotted into place with minimal disturbance to the re-use of the (substantial amount of) remaining Sleepycat-based code ... # Create the required key-data stores # These 3 were BTree mode in Sleepycat, but currently I'm using TC hash self.__contexts = dbOpen("contexts") self.__namespace = dbOpen("namespace") self.__prefix = dbOpen("prefix") self.__k2i = dbOpen("k2i") self.__i2k = dbOpen("i2k") The pytc HashDB API unfortunately does not provide a cursor() object, whereas Sleepycat/BerkeleyDB does and key parts of the functionality of the RDFLib Sleepycat Store implementation rely on the availability of that cursor. The consequent necessity of mimicking a cursor in Python rather than being able to use the library’s fast, C-coded version rendered the exploration much less promising. However, Tokyo Cabinet has subsequently given way to its anagrammatic successor Kyoto Cabinet which offers a much richer API, including (crucially) a cursor object for the HashDB and so the exploration recovers its promise and an RDFLib KyotoCabinet key-value Store is now undergoing performance trials.
http://rdfextras.readthedocs.io/en/latest/store/anatomy.html
CC-MAIN-2018-26
en
refinedweb
25 March 2011 20:23 [Source: ICIS news] HOUSTON (ICIS)--?xml:namespace> In its monthly economic forecasters survey, the ACC reported expectations for US GDP growth of 3.1% in 2011 and 3.2% in 2012, each a 0.1 percentage point drop from the prior month’s survey. The ACC said that most of its economic forecasters were from industrial companies or consultancies, and had expertise in the dynamics of the For 2011, consumer spending and business investment are projected to rise 3.1% and 8.7% - down 0.1 and 0.3 percentage point, respectively, from the February survey. However, The unemployment picture also improved from February expectations, with economists forecasting an average rate of 9.0% for 2011, an increase of 0.2 percentage point from last month. For 2012, expectations for gains in consumer spending held flat at 2.9%. Meanwhile, projections for growth in industrial projection slipped to 3.9% from 4.0%. The ACC released the survey in conjunction with its weekly economic report. In the report, the ACC noted rising The largest gains were registered in the US Gulf region, the ACC said, which is dominated by the production of building-block materials such as petrochemicals, inorganics and synthetic
http://www.icis.com/Articles/2011/03/25/9447389/us-economic-growth-outlook-dims-on-rising-gasoline-prices.html
CC-MAIN-2014-52
en
refinedweb
This post assumes some basic C skills. Linux puts you in full control. This is not always seen from everyone’s perspective, but a power user loves to be in control. I’m going to show you a basic trick that lets you heavily influence the behavior of most applications, which is not only fun, but also, at times, useful. Let us begin with a simple example. Fun first, science later. #include <stdio.h> #include <stdlib.h> #include <time.h> int main(){ srand(time(NULL)); int i = 10; while(i--) printf("%d\n",rand()%100); return 0; } Simple enough, I believe. I compiled it with no special flags, just gcc random_num.c -o random_num gcc random_num.c -o random_num I hope the resulting output is obvious – ten randomly selected numbers 0-99, hopefully different each time you run this program. Now let’s pretend we don’t really have the source of this executable. Either delete the source file, or move it somewhere – we won’t need it. We will significantly modify this programs behavior, yet without touching it’s source code nor recompiling it. For this, lets create another simple C file: int rand(){ return 42; //the most random number in the universe } We’ll compile it into a shared library. gcc -shared -fPIC unrandom.c -o unrandom.so.so export LD_PRELOAD=$PWD/unrandom.so and then run the program normally. An unchanged app run in an apparently usual manner seems to be affected by what we did in our tiny library… Yup, you are right, our program failed to generate random numbers, because it did not use the “real” rand(), but the one we provided – which returns 42 every time. This is not entirely true. We did not choose which rand() we want our program to use. We told it just to use rand(). When our program is started, certain libraries (that provide functionality needed by the program) are loaded. We can learn which are these using ldd: $) What you see as the output is the list of libs that are needed by random_nums. This list is built into the executable, and is determined compile time. The exact output might slightly differ on your machine, but a libc.so must be there – this is the file which provides core C functionality. That includes the “real” rand(). We can have a peek at what functions does libc provide. I used the following to get a full list: nm -D /lib/libc.so.6 nm -D /lib/libc.so.6 The nm command lists symbols found in a binary file. The -D flag tells it to look for dynamic symbols, which makes sense, as libc.so.6 is a dynamic library. The output is very long, but it indeed lists rand() among many other standard functions. Now what happens when we set up the environmental variable LD_PRELOAD? This variable forces some libraries to be loaded for a program. In our case, it loads unrandom.so for random_num, even though the program itself does not ask for it. The following command may be interesting: $) Note that it lists our custom library. And indeed this is the reason why it’s code get’s executed: random_num calls rand(), but if unrandom.so is loaded it is our library that provides implementation for rand(). Neat, isn’t it? This is not enough. I’d like to be able to inject some code into an application in a similar manner, but in such way that it will be able to function normally. It’s clear if we implemented open() with a simple “return 0;“, the application we would like to hack should malfunction. The point is to be transparent, and to actually call the original open: int open(const char *pathname, int flags){ /* Some evil injected code goes here. */ return open(pathname,flags); // Here we call the "real" open function, that is provided to us by libc.so } Hm. Not really. This won’t call the “original” open(…). Obviously, this is an endless recursive call. How do we access the “real” open function? It is needed to use the programming interface to the dynamic linker. It’s simpler than it sounds. Have a look at this complete example, and then I’ll explain what happens); } The dlfcn.h is needed for dlsym function we use later. That strange #define directive instructs the compiler to enable some non-standard stuff, we need it to enable RTLD_NEXT in dlfcn.h. That typedef is just creating an alias to a complicated pointer-to-function type, with arguments just as the original open – the alias name is orig_open_f_type, which we’ll use later. The body of our custom open(…) consists of some custom code. The last part of it creates a new function pointer orig_open which will point to the original open(…) function. In order to get the address of that function, we ask dlsym to find for us the next “open” function on dynamic libraries stack. Finally, we call that function (passing the same arguments as were passed to our fake “open”), and return it’s return value as ours. As the “evil injected code” I simply used: printf("The victim used open(...) to access '%s'!!!\n",pathname); //remember to include stdio.h! To compile it, I needed to slightly adjust compiler flags: gcc -shared -fPIC inspect_open.c -o inspect_open.so -ldl gcc -shared -fPIC inspect_open.c -o inspect_open.so -ldl I had to append -ldl, so that this shared library is linked to libdl, which provides the dlsym function. (Nah, I am not going to create a fake version of dlsym, though this might be fun.) So what do I have in result? A shared library, which implements the open(…) function so that it behaves exactly as the real open(…)… except it has a side effect of printfing the file path :-) If you are not convinced this is a powerful trick, it’s the time you tried the following: LD_PRELOAD=$PWD/inspect_open.so gnome-calculator LD_PRELOAD=$PWD/inspect_open.so gnome-calculator I encourage you to see the result yourself, but basically it lists every file this application accesses. In real time. I believe it’s not that hard to imagine why this might be useful for debugging or investigating unknown applications. Please note, however, that this particular trick is not quite complete, because open() is not the only function that opens files… For example, there is also open64() in the standard library, and for full investigation you would need to create a fake one too. If you are still with me and enjoyed the above, let me suggest a bunch of ideas of what can be achieved using this trick. Keep in mind that you can do all the above without to source of the affected app! These are only the ideas I came up with. I bet you can find some too, if you do – share them by commenting!
http://gnome-look.org/stories/Rafa%C5%82+Cie%C5%9Blak%3A+Dynamic+linker+tricks%3A+Using+LD_PRELOAD+to+cheat%2C+inject+features+and+investigate+programs?id=151238
CC-MAIN-2014-52
en
refinedweb
05 October 2009 17:30 [Source: ICIS news] BERLIN (ICIS news)--ExxonMobil Chemical is investigating some breakthrough concepts that could “significantly change the nature of cracking” as it works on step-wise improvements to steam cracking, the company's president said on Monday. Most of the company’s work on steam cracking technology was focused on improving energy efficiency, lowering emissions and expanding feedstock flexibility, said ExxonMobil Chemical president Stephen Pryor. Pryor was speaking on the sidelines of the 43rd annual European Petrochemical Association (EPCA) meeting. He did not give any details of the breakthrough ideas ExxonMobil had but said a great deal of the work has to do with feedstock and was firmly based on where the next wave of advantaged feedstock would come from. The key to steam cracking is feedstock flexibility, he said. The company is introducing significant feedstock flexibility at its new 1m tonne/year cracker in ?xml:namespace> For more on ExxonMobil
http://www.icis.com/Articles/2009/10/05/9252876/epca-09-exxonmobil-working-on-breakthrough-cracker-concepts.html
CC-MAIN-2014-52
en
refinedweb
This chapter describes support for the Parlay X 2.0 Presence Web services interfaces for developing applications. The Web service functions as a Presence Network Agent which can publish, subscribe, and listen for notifications on behalf of the users of the Web service. This chapter contains the following sections: Section 8.1, "Introduction" Section 8.2, "Installing the Web Services" Section 8.3, "Configuring Web Services" Section 8.4, "Presence Web Services Interface Descriptions" Section 8.5, "Using the Presence Web Services Interfaces" Section 8.6, "OWLCS Parlay X Presence Custom Error Codes" Section 8.7, "Buddy List Manager API" OWLCS provides support for Part 14 of the Parlay X Presence Web Service as defined in the Open Service Access, Parlay X Presence Web Services, Part 14, Presence ETSI ES 202 391-14 specification. The OWLCS Parlay X Web service maps the Parlay X Web service to a SIP/IMS network according to the Open Service Access, Mapping of Parlay X Presence Web Services to Parlay/OSA APIs, Part 14, Presence Mapping, Subpart 2, Mapping to SIP/IMS Networks, ETSI TR 102 397-14-2 specification. Note:Due to the synchronous nature of the Web service, to receive a callback from the Web service the client must implement the Web service callback interface. For presence, the required interface is the PresenceNotificationinterface described in Open Service Access, Parlay X Presence Web Services, Part 14, Presence ETSI ES 202 391-14. The HTTP server that hosts the presence Web service is a Presence Network Agent or a Parlay X to SIP gateway. The Web services are packaged as a standard .ear file and can be deployed the same as any other Web services through Admin Console. The .ear file contains two .war files that implement the two interfaces. The Web services use the Oracle SDP Platform, Client and Presence Commons shared libraries. The following four mbean attributes are configurable for the Presence Supplier Web service: SIPOutboundProxy: SipURI of the outbound proxy for SIP message. Empty string means no outbound proxy. For example, sip:127.0.0.1:5060; lr;transport=tcp. PublicXCAPRootUrl: URI where the Presence Server is deployed. This attribute is used to update the presence rules stored on the XDMS. Example: Expires: Set the time in seconds after which the PUBLISH by a presentity expires. Default value is 3600 (that is, 1 hour). SessionTimeout: Set the time in seconds after which HTTP sessions times out. Data for all timed out sessions is discarded. For Presence Consumer, there are three mbean attributes that can be configured. SIPOutboundProxy: SipURI of the outbound proxy for SIP message. Empty string means no outbound proxy. For example, sip:127.0.0.1:5060; lr;transport=tcp. Expires: Set the time in seconds after which the SUBSCRIBE by a watcher expires. Default value is 3600 (ie. 1 hour). SessionTimeout: Set the time in seconds after which HTTP sessions times out. Data for all timed out sessions is discarded. The presence Web services consist of three interfaces: PresenceConsumer: The watchers use these methods to obtain presence data (Table 8-1). PresenceNotification: The presence consumer interface uses the client callback defined in this interface to send notifications (Table 8-2). PresenceSupplier: The presentity uses these methods to publish presence data and manage access to the data by its watchers (Table 8-3). This section describes how to use each of the operations in the interfaces, and includes code examples. This is the first operation the application must call before using another operation in this interface. It serves two purposes: It allows the Web services to associate the current HTTP session with a user. It provides a context for all the other operations in this interface by subscribing to at least one presentity (SUBSCRIBE presence event). // Setting the attribute to activity PresenceAttributeType pa = PresenceAttributeType.ACTIVITY; List<PresenceAttributeType> pat = new ArrayList<PresenceAttributeType>(); pat.add(pa); SimpleReference sr = new SimpleReference(); sr.setCorrelator(""); sr.setInterfaceName(""); sr.setEndpoint(""); consumer.subscribePresence ("sip.presentity@test.example.com" , pat, "unused", sr); Call this operation to retrieve a subscribed presentity presence. If the person is offline, it returns ActivityNone and the hardstate note is written to PresenceAttribute.note. If it returns Activity_Other, the description of the activity is returned in the OtherValue field. If the Name field is equal to "ServiceAndDeviceNote", OtherValue is a combination of the service note and the device note. Note that there can be more than one "ServiceAndDeviceNote" when the presentity is logged into multiple clients. PresenceAttributeType pat = PresenceAttributeType.ACTIVITY; List<PresenceAttribute> result = consumer.getUserPresence(presentity, pat); for (PresenceAttribute pa : result) { // Check to see if it is an activity type. if (pa.getTypeAndValue().getUnionElement() == PresenceAttributeType.ACTIVITY){ // Get the presence status. System.out.println("ACTIVITY: " + pa.getTypeAndValue().getActivity().toString()); // Get the customized presence note. if (pa.getNote().length() > 0){ System.out.println("Note: " + pa.getNote()); } } // If this is of type OTHER, then we need to extract // different type of information. if (pa.getTypeAndValue().getUnionElement() == PresenceAttributeType.OTHER){ // This is "Activity_Other", a custom presence status. if (pa.getTypeAndValue().getOther() .getName().compareToIgnoreCase("ACTIVITY_OTHER") == 0){ System.out.println("Other Activity->" + pa.getTypeAndValue().getOther().getValue() + "\n"); } else { // Currently, the only other value beside ACTIVITY_OTHER is // "ServiceAndDeviceNote" which is the service note + // device note. System.out.println("Combined Note->" + pa.getTypeAndValue().getOther().getValue() + "\n"); } } } This asynchronous operation is called by the Web Service when an attribute for which notifications were requested changes. public void statusChanged(String context, String correlator, String uri, List<PresenceAttribute> presenceAttributes) { System.out.println("statusChanged Called:-"); System.out.println("Context = " + context); System.out.println("Correlator = " + correlator); System.out.println("Presentity = " + uri); } This method is called when the duration for the notifications (identified by the correlator) is over. In case of an error or explicit call to endPresenceNotification, this method is not called. This asynchronous method notifies the watcher that the presentity handled the pending subscription. public void notifySubscription(String context, String uri, List<PresencePermission> presencePermissions) { System.out.println("notifySubscription Called:-"); System.out.println("Context = " + context); System.out.println("Uri = " + uri); if (presencePermissions.size() > 0){ for (PresencePermission p:presencePermissions){ System.out.println("Permission " + p.getPresenceAttribute().value() + "->" + p.isDecision()); } } } This asynchronous operation is called by the Web Service to notify the watcher that the subscription has terminated. This is the first operation the application must call before using another operation in this interface. It serves three purposes: It allows the Web services to associate the current HTTP session with a user. It publishes the user's presence status. It subscribes to watcher-info so that the Web services can keep track of any watcher requests. There are three attributes that are of interest when performing a PUBLISH. These attributes can be set in a PresenceAttribute structure and passed into the PUBLISH method. Presense status with a customized note: this is the customized note configured in the My Presence text box in Oracle Communicator. The <note> element is contained in the <person> element of the Presence Information Data Format (PIDF) XML file. Device note: implicitly inserted by Oracle Communicator, or inserted from a Web service. The <note> element is contained in the <device> element of the Presence Information Data Format (PIDF) XML file. Service note: configured in the Presence tab in the Oracle Communicator preferences. The <note> element is contained in the <tuple> element of the Presence Information Data Format (PIDF) XML file. // A simple way to publish the Presence Status PresenceAttribute pa = new PresenceAttribute(); OtherValue other = new OtherValue(); //Set the name to "DeviceNote" to indicate the value must be used as device note. other.setName("DeviceNote"); other.setValue("Device Name"); //More other values can be defined for ServiceNote etc CommunicationValue comm = new CommunicationValue(); AttributeTypeAndValue typeValue = new AttributeTypeAndValue(); typeValue.setUnionElement(PresenceAttributeType.ACTIVITY); typeValue.setActivity(activity); typeValue.setPlace(PlaceValue.PLACE_NONE); typeValue.setPrivacy(PrivacyValue.PRIVACY_NONE); typeValue.setSphere(SphereValue.SPHERE_NONE); typeValue.setCommunication(comm); typeValue.setOther(other); pa.setTypeAndValue(typeValue); String note = "My Note"; pa.setNote(note); XMLGregorianCalendar dateTime = null; dateTime = DatatypeFactory.newInstance().newXMLGregorianCalendar(new GregorianCalendar()); pa.setLastChange(dateTime); List<PresenceAttribute> pat = new ArrayList<PresenceAttribute>(); pat.add(pa); supplier.publish(pat); //To UNPUBLISH,set the OtherValue to (Expires, 0) OtherValue other = new OtherValue(); other.setName("Expires"); other.setValue(0); This operation retrieves a list of new requests to be on your watcher list. This operation allows you to place a watcher on either the block or allow list. This operation retrieves the list of watchers in your allow list. This operation returns only a single item of PresenceTypeAttribute.Activity. An exception is thrown if there is no existing subscription. Table 8-4 and Table 8-5 describe the error codes and their associated error message. The Contact Management API (CMAPI) is an API for manipulating resource-lists (also known as Buddy Lists) and presence-rules documents. Through this high-level API it is possible to act on behalf of a user to add or remove buddies to the buddy list as well as allowing or blocking other users (watchers) from seeing the user's presence information. The CMAPI is capable of querying and manipulating those resources stored on the XDMS (XML Document Management Server). The CMAPI consists of a web service: XML Document Management Client (XDMC) Service and a Java client stub that is part of the oracle.sdp.client shared library. The CMAPI is part of the oracle.sdp.client shared library. Once this library is available, developers can import the package and use the API: import oracle.sdp.presence.integration.Buddy; import oracle.sdp.presence.integration.BuddyListManager; import oracle.sdp.presence.integration.BuddyListManagerFactory; import oracle.sdp.presence.integrationimpl.BuddyListManagerImpl; The BuddyListManagerFactory itself follows the singleton pattern, and there is only one instance of a factory per XDMS/XDMC combination. That is, when creating a BuddyListManagerFactory, you must supply the XCAP root URL to the XDMS from where documents are downloaded, as well as supplying the URL to the XDM Client Service that is running on the client side; the XDMC Service URL is passed in through the BindingProvider.ENDPOINT_ADDRESS_PROPERTY property. For each such combination of XCAP root URL and XDM Client Service endpoint, there can only exist exactly one BuddyListManagerFactory instance. Therefore it is possible to create different factories pointing to the different XDMS/XDMC Service combinations. Example 8-1 Obtaining an instance of the BuddyListManagerFactory // Create the URI pointing to the XDMS. URI xcapRoot = new URI(""); // Location of where the XDM Client webservice is. String wsUrl = ""; String sWsSecurityPolicy = new String[]{"oracle/wss11_saml_token_with_message_protection_client_policy"}; Map<String, Object> params = new HashMap<String,Object>(); params.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, wsUrl); params.put(BindingProvider.USERNAME_PROPERTY, "alice"); params.put(ParlayXConstants.POLICIES, sWsSecurityPolicy); // Obtain the instance to the factory BuddyListManagerFactory factory = BuddyListManagerFactory.getInstance(xcapRoot, params); Example 8-1 shows how to obtain a reference to a factory pointing to the XCAP root of localhost:8001/ services. Every operation performed on this factory is in the context of this particular XCAP root. Hence, when creating a BuddyListManager for a particular user, that BuddyListManager's XCAP root is the one of the factory through which it was created. It is important to realize that a BuddyListManager (BLM) is acting on behalf of a particular user. Therefore, if a BLM is created for user Alice, all operations performed on that particular BLM are on behalf of Alice and manipulate her documents. Example 8-2 shows how to create a BLM for Alice through the factory created in the previous section. Example 8-2 Obtaining a BuddyListManager for the user Alice URI user = new URI("sip:alice@example.com"); Map<String, Object> params = new HashMap<String,Object>(); params.put(XDMClientFactory.PROP_ASSERTED_IDENTITY, assertedId); BuddyListManager manager = factory.createBuddyListManager(user, params); Example 8-2 shows how to create a BLM for the user Alice with SIP address of sip:alice@example.com. If manipulation of the buddy list and presence rules document of another user is required, then a separate BLM must be created with the appropriate SIP address. Adding a buddy to a buddy list is done by first creating a buddy, setting the information needed on that buddy and then using the BLM to add it to the buddy list. Example 8-3 shows how to use the BLM representing Alice to add Bob as a new buddy of Alice and then getting the updated list back. Example 8-3 Adding a New Buddy to the Buddy List of Alice URI uri = new URI("sip:bob@example.com"); Buddy bob = manager.createBuddy(uri); // Optionally, setting additional information. manager.setDisplayname("Bobby"); VCard vcard = bob.getVCard(); vcard.setCity("San Francisco"); vcard.setCountry("USA"); // very important to set the VCard back on the buddy again bob.setVCard(vcard); // Update the buddy info using the BLM manager.updateBuddy(bob); // Getting the updated buddy list List<Buddy> buddies = manager.getBuddies(); Example 8-3 shows how to create a new Buddy, Bob, and how that buddy is added to Alice's buddy list by using the BLM representing Alice. To add more information about the user Bob, such as the address and other information, access Bob's Vcard information and then set the appropriate properties. Note:Since the method getVCard()is actually returning a clone of the VCard, the method setVCard()must be called on the buddy again in order for the information to be updated. Removing a buddy is very similar to adding a buddy. Use the method removeBuddy and pass in the buddy that is to be removed. If there are many buddies to remove, use the removeBuddies method and pass in the list of buddies to remove. Example 8-4 shows how Bob is removed from Alice's buddy list. To allow a watcher to view the presence status, use the method allowWatcher (String watcher) to add the watcher to the allow list. Use blockWatcher (String watcher) to block someone from viewing your presence status. BuddyListException is the base exception, and if the program is not set to register the specific exception, then it can simply catch it. XDMException is the base exception for all exceptions concerning communication with the remote XDMS. XDMException signals that an error occurred when communicating with the XDMS (for example: a connection problem, wrong path to the XCAP root, or something else). DocumentConflictException is a subclass to the XDMException; it signals that a mid-air conflict was detected that could not be resolved. This can occur when multiple clients access the same document on the XDMS. BuddyListManager attempts to resolve such a clash, but if it cannot, it throws an exception.
http://docs.oracle.com/cd/E15523_01/doc.1111/e13807/parlayxj.htm
CC-MAIN-2014-52
en
refinedweb
) private void tileCanvas_MouseDown(object sender, MouseButtonEventArgs e) { var location = tileCanvas.GetLocation(tileCanvas.Viewport.TopLeft); } private System.ComponentModel.BackgroundWorker m_backgroundWorker; public class LoadTileInfo { public LoadTileInfo(int zoom, int tileX, int tileY) { Zoom = zoom; TileX = tileX; TileY = tileY; } public int Zoom { get; internal set; } public int TileX { get; internal set; } public int TileY { get; internal set; } public System.ComponentModel.BackgroundWorker Worker { get; internal set; } } private void LoadTile() { this.Source = null; LoadTileInfo ti = new LoadTileInfo(_zoom, _tileX, _tileY); if (m_backgroundWorker != null) { m_backgroundWorker.CancelAsync(); } m_backgroundWorker = new System.ComponentModel.BackgroundWorker(); m_backgroundWorker.WorkerSupportsCancellation = true; m_backgroundWorker.DoWork += new System.ComponentModel.DoWorkEventHandler(LoadTileInBackground); ti.Worker = m_backgroundWorker; m_backgroundWorker.RunWorkerAsync(ti); } private void LoadTileInBackground(object sender, System.ComponentModel.DoWorkEventArgs e) { LoadTileInfo ti = (LoadTileInfo)e.Argument; ImageSource image = TileGenerator.GetTileImage(ti.Zoom, ti.TileX, ti.TileY); if (image != null) // We've already set the Source to null before calling this method. { this.Dispatcher.BeginInvoke(new Action(() => { if (!ti.Worker.CancellationPending) this.Source = image; })); } } document.LoadXml(Encoding.UTF8.GetString(Encoding.Convert(Encoding.UTF8, Encoding.Default, Encoding.UTF8.GetBytes(e.Result)))); document.LoadXml(e.Result); private void OnDownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) if (((Math.Abs(changeX) > 1) && (Math.Abs(changeY) > 1)) || (this.Children.Count == 0)) this.ChangeColumns(changeX); if ((Math.Abs(changeX) > 1) || (Math.Abs(changeY) > 1) || (this.Children.Count == 0)) MapCanvas.GetLocation MapCanvas.Center private static bool TryGetSize(string a, string b, out double size) { double location1, location2; if (double.TryParse(a, NumberStyles.Float, CultureInfo.InvariantCulture, out location1) && double.TryParse(b, NumberStyles.Float, CultureInfo.InvariantCulture, out location2)) { size = location2 - location1; return true; } size = 0; return false; } private void LoadTile() { this.Source = null; if (IsInDesignMode) return; System.Threading.ThreadPool.QueueUserWorkItem(this.LoadTileInBackground); } private static bool IsInDesignMode { get { var prop = System.ComponentModel.DesignerProperties.IsInDesignModeProperty; var result = (bool)System.ComponentModel.DependencyPropertyDescriptor.FromProperty(prop, typeof(System.Windows.FrameworkElement)).Metadata.DefaultValue; return result; } } int count = 0; private void Button_Click_1(object sender, RoutedEventArgs e) { this.mapCanvas.Center(50, 14 + ++count / 100f, 18); } if (((Math.Abs(changeX) > 1) || (Math.Abs(changeY) > 1)) || (this.Children.Count == 0)) { // It's changed too much or we don't have any tiles this.RegenerateTiles(); } using (var response = request.GetResponse()) { var stream = response.GetResponseStream(); stream.CopyTo(buffer); stream.Close(); response.Close(); } Dispose WebResponse MapCanvas(MapControl) MainWindow.xaml MapCanvas(MapControl) Hi, OpenStreetMaps tile usage policy requires that client will provide "Valid User-Agent identifying application". "Faking another app's User-Agent WILL get you blocked. " When creating BitmapImage with URI constructor, png file containing tile is downloaded asynchronously by the BitmapImage itself. How can I set User-Agent of the web request issued by the BitmapImage ? TileGenerator TileGenerator.CacheFolder = // As before TileGenerator.UserAgent = "MyDemoApp"; // Now the rest of the code... this.InitializeComponent(); RepositionChildren ArrangeOverride InvalidateArrange Canvase.SetLeft Canvas.SetTop GetLocation Article// Calculates the coordinates of the specified point. // The point should be in pixels, relative to the top left corner of the control. // The returned Point will be filled with the Latitude in the Y property and // the Longitude in the X property. public Point GetLocation(Point point); General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/87944/WPF-Map-Control-using-openstreetmap-org-Data?fid=1575811&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None&select=4141822&fr=1
CC-MAIN-2014-52
en
refinedweb
From Documentation. - For learning step-by-step, please refer to ZK Essentials. Hello World! After ZK is installed on your favorite Web server[1], writing applications is straightforward. Just create a ZUML file[2], and name it as hello.zul[3], under one of the Web application's directories just as you would do for an HTML file. <window title="My First ZK Application" border="normal"> Hello World! </window> Assuming the name of the Web project is myapp, then go to the corresponding URL, which is, and you'll see your first ZK application running. On a ZUML page, an XML element describes what a component[4] can create while the XML attributes are used to assign values to a component's properties. In this example, a window component is created and its title is set to "My First ZK Application" and its border is set to normal. The text enclosed in the XML elements can also be interpreted as a special component called label. Thus, the above example is equivalent to the following code: <window title="My First ZK Application" border="normal"> <label value="Hello World!"/> </window> - ↑ Please refer to ZK Installation Guide. - ↑ ZUML [1] - ↑ The other way to try examples is to use ZK Sandbox to run them. - ↑ Interface : Component(EventListener) to the component such as that it is invoked when an end user clicks the component. The attribute value could be any legal Java code. Notice that it is NOT JavaScript, and you have to use double quotes (") in a string. To escape a double quote in an XML string, you could use single quotes (') to enclose it[1]. Here we invoke Messagebox.show(String) to display a message box shown above. The Java code is interpreted by BeanShell at runtime. In addition to event handling, you could embed the code in a ZUML page by specifying it in a special element called zscript. For example, you could simply define a function in the code as the following: <window title="My First ZK Application" border="normal"> <button label="Say Hello" onClick='alert("Hello World!")'/> <zscript> void alert(String message){ //declare a function Messagebox.show(message); } </zscript> </window> In fact, alert is a built-in function that you can use directly from the embedded Java code. - ↑ If you are not familiar with XML, you might take a look at the XML background section. It is Java that runs on the server The embedded Java code runs on the server so as to gain easy access to any resources available on the server. For example, <window title="Property Retrieval" border="normal"> Enter a property name: <textbox/> <button label="Retrieve" onClick="alert(System.getProperty(self.getPreviousSibling().getValue()))"/> </window> where self is a built-in variable which refers a component receiving the event. If you enter java.version and then click the button, the result will be shown as the following: A component is a POJO A component is a POJO. You could instantiate and manipulate them directly. For example, you could generate the result by instantiating component(s) to represent it, and then append them to another component as shown below. <window title="Property Retrieval" border="normal"> Enter a property name: <textbox id="input"/> <button label="Retrieve" onClick="result.appendChild(new Label(System.getProperty(input.getValue())))"/> <vlayout id="result"/> </window> Once appended, the components can be displayed in the browser automatically. Similarly, if components are detached, they are removed from the browser automatically. In addition, you could change the state of a component directly. All modifications will be synchronized back to the browser automatically. <window title="Property Retrieval" border="normal"> Enter a property name: <textbox id="input"/> <button label="Retrieve" onClick="result.setValue(System.getProperty(input.getValue()))"/> <separator/> <label id="result"/> </window> A component is a LEGO brick Instead of introducing different components for different purposes, our components are designed to build blocks. You are free to compose blocks together to realize sophisticated UI without customizing any components. For example, you could put anything in a grid, including grid itself; anything in any layout, including the layout itself. Please see our demo for more examples. Express data with variable resolver and EL expressions On a ZUML page, you could locate data with a variable resolver (VariableResolver), and then express it with EL expressions. For example, assumes that we have a class called foo.Users, and we can retrieve a list of users by employing its static method called getAll(). Then, we can implement a variable resolver as follows. package foo; public class> There are three methods that we can assume foo.User: getName(), getTitle() and getAge(). forEach is used to instantiate components by iterating through a collection of objects. MVC: Separate code from user interface Embedding Java code in a ZUML page is straightforward and easy for prototyping. However, in a production environment, it is better to separate the code from user interfaces. The code can be compiled at the development time. It is easier to develop and test, and runs much faster than the embedded code which is interpreted at runtime. To separate codes from UI, you can implement a Java class (aka., the controller) that implements Composer, and then handle UI in Composer.doAfterCompose(Component). For example, you can redo the previous example by registering an event listener in Composer.doAfterCompose(Component), and then retrieve the result by instantiating a label to represent it in the event listener as follows. package foo; import org.zkoss.zk.ui.Component; import org.zkoss.zk.ui.util.Composer; import org.zkoss.zk.ui.event.EventListener; import org.zkoss.zul.Label; public class PropertyRetriever implements Composer { public void doAfterCompose(final Component target) { //handle UI here target.addEventListener("onClick", new EventListener() { //add a event listener in Java public void onEvent(Event event) { String prop = System.getProperty(((Textbox)target.query("#input")).getValue()); target.query("#result").appendChild(new Label(prop)); } }); } } As shown, an event listener could be registered with the use of Component.addEventListener(String, EventListener). An event listener must implement EventListener, and then handle the event in EventListener.onEvent(org.zkoss.zk.ui.event.Event. Also notice that a component could be retrieved with the use of Component.query(String), which allows the developer to use a CSS 3 selector to select a component, such as query("#id1 grid textbox"). Then, you could associate the controller (foo.PropertyRetriever) with a component using the apply attribute as shown below. <window title="Property Retrieval" border="normal"> Enter a property name: <textbox id="input"/> <button label="Retrieve" apply="foo.PropertyRetriever"/> <vlayout id="result"/> </window> For more information, please refer to Get ZK Up and Running with MVC MVC: Autowire UI objects to data members Implementing and registering event listeners is a bit tedious. Thus, ZK provides a feature called autowiring. By extending from SelectorComposer, ZK looks for the members annotated with @Wire or @Listen to match the components. For example, you could rewrite foo.PropertyRetriever by utilizing the autowriing as follows. package foo; import org.zkoss.zk.ui.Component; import org.zkoss.zk.ui.event.Event; import org.zkoss.zk.ui.select.SelectorComposer; import org.zkoss.zk.ui.select.annotation.*; import org.zkoss.zul.*; public class PropertyRetriever extends SelectorComposer<Window> { @Wire Textbox input; //wired to a component called input @Wire Vlayout result; //wired to a component called result @Listen("onClick=#retrieve") public void submit(Event event) { //register a listener to a component called retrieve, @Wire will cause input and result to be wired automatically, such that you could access the components directly. Also @Listen("onClick=#retrieve") indicates that the annotated method will be registered as an event listener to the component called retrieve to handle the onClick event. If the component's ID is different from the member's name or the pattern is complicated, you could specify a CSS 3 selector such as @Wire("#id"), <code>@Wire("window > div > button") and @Listen("onClick = button[label='Clear']"). You can use with Spring or CDI managed bean in the composer too. For example, @VariableResolver(org.zkoss.zkplus.spring.DelegatingVariableResolver) public class PasswordSetter extends SelectorComposer<Window> { @WireVariable //wire Spring managed bean private User user; @Wire private Textbox password; //wired automatically if there is a textbox named password @Listen("onClick=#submit") public void submit() { user.setPassword(password.getValue()); } } Notice : MVC pattern is recommended for a production application. On the other hand, to maintain readability, many examples in our documents embed code directly into ZUML pages. MVVM: Automate the access with data binding EL expressions are convenient but they are limited to display read-only data. If you allow end users to modify data (such as CRUD), or change how the data can be displayed based on users' selection, you could use ZK data binding to handle the display and modification automatically for you. All you need to do is to implement a so-called ViewModel (a POJO) that provides the data beans and/or describes relationship between UI and data beans[1]. For example,[2]. package foo; public class UserViewModel { List<User> users = Users.getAll(); public List<User> getUsers() { return users; } } Then, you could put them together by applying a built-in composer called BindComposer in a ZUML document as follows. <grid apply="org.zkoss.bind.BindComposer" viewModel="@id('vm') @init('foo.UserViewModel')" model="@bind(vm.users)"> <columns> <column label="Name" sort="auto" /> <column label="Title" sort="auto" /> <column label="Age" sort="auto" /> </columns> <template name="model" var="user"> <row> <textbox value="@bind(user.name)" /> <textbox value="@bind(user.title)" /> <intbox value="@bind(user.age)" /> </row> </template> </grid> Please notice that you do not need to write any code to handle the display or modification. Rather, you declare the relation of the UI and data beans in annotations, such as @bind(user.name). Any modification made to each input (by the end user) is stored back to the object (foo.User) automatically and vice versa, assuming that the POJO has the required setter methods, such as setName(String). For more information, please refer to Get ZK Up and Running with MVVM - ↑ ZK data binding is based on the MVVM design pattern, which is identical to the Presentation Model introduced by Martin Fowler. For more information, please refer to ZK Developer's Reference: MVVM. - ↑ Here we load the users by assuming there is a utility called Users. However, it is straightforward if you'd like to wire Spring-managed or CDI-managed beans. For more information, please refer to ZK Developer's Reference: MVC Notice : MVVM pattern applies to ZK 6 and later Define UI in pure Java In additions to XML, developers could also define UI in pure Java. For example, you could implement the property-retrieval example as follows. public class PropertyRetrieval extends GenericRichlet { public void service(Page page) throws Exception { final Window main = new Window("Property Retrieval", "normal", false); main.appendChild(new Label("Enter a property name: ")); final Textbox input = new Textbox(); input.setId("input"); main.appendChild(input); final Button button = new Button("Retrieve"); button.addEventListener("onClick", new EventListener() { public void onEvent(Event event) throws Exception { Messagebox.show(System.getProperty(input.getValue())); } }); main.appendChild(button); main.setPage(page); //attach so it and all descendants will be generated to the client } } A richlet (Richlet) is a small Java program that creates all necessary user interfaces for a given page in response to users' request. Here we extend java.lang.Object from a skeleton called GenericRichlet. Then, we create all the required components in Richlet.service(Page). Adding client-side functionality In addition to handling events and components on the server, ZK also provides an option allowing developers to control UI from the client side. We have dubbed this blending of technology, Server+client Fusion. For example, we could re-implement the Hello World example with the code from the client side as follows. <button label="Say Hello" w:onClick='jq.alert("Hello World!")' xmlns: where we declare a XML namespace named client to indicate the event handler which will be evaluated at the client side. In addition, jq.alert(String, Map) is a client-side method equivalent to Messagebox.show(String). All components are available and accessible to the client. For example, here is a number guessing game that manipulates UI from the client side. <window title="Guess a number" border="normal"> <vlayout> Type number between 0 and 99 and then press Enter to guess: <intbox w: </vlayout> <script><![CDATA[ var num = Math.floor(Math.random() * 100); function guess(wgt) { var val = wgt.getValue(), mesg = val > num ? "smaller than " + val: val < num ? "larger than "+val: val + " is correct!"; wgt.parent.appendChild(new zul.wgt.Label({value: mesg})); wgt.setValue(""); } ]]></script> </window> where onOK is an event fired when the user presses Enter, and script is used to embed the client-side code (in contrast to zscript for embedding the server-side code). Architecture overview When a ZK application runs on the server, it gives access to backend resources, assemble UI with components, listen to users' activity, and then manipulate components to update UI. All are done on the server. The synchronization of the states of the components between the browser and the server is done automatically by ZK and transparently to the application. When running on the server, the application can access full Java technology stack. Users' activities, including Ajax and Server Push, are abstracted to event objects. UI are composed of POJO-like components. It is the most productive approach to develop a modern Web application. With ZK's Server+client Fusion architecture, your application will never stop running on the server. You can enhance your application's interactivity by adding optional client-side functionality, such as client-side event handling, visual effect customizing and even UI composing without server-side coding. ZK is the only framework to enable seamless fusion from pure server-centric to pure client-centric. You can have the best of two worlds: productivity and flexibility. Last Update : 2014/9/26
http://books.zkoss.org/wiki/ZK_Getting_Started/Tutorial
CC-MAIN-2014-52
en
refinedweb
I am trying to make a meter with rotating needle (not speedometer exactly). This meter is not controlled by the player, but when a certain speed is hit the player must press a button. I want to move my needle from 90 degrees to -90 degrees one time. The speed of the needles movement will need to depend on a time variable. I am able to move the needle to a specific point, but I am having difficulty getting the needle to smoothly move from 90 to -90 over time. I was messing around with Mathf.Lerp and Quaternion.Euler. Here's what I have so far: using UnityEngine; using System.Collections; public class MoveDial : MonoBehaviour { public Transform needle; private float needleSpeed; private float needleStart; private float needleEnd; // Use this for initialization void Start () { } // Update is called once per frame void Update () { needle.rotation = Quaternion.Euler(0,0,-90); //needle.rotation = Mathf.Lerp (needleStart, needleEnd, needleSpeed); } } Answer by sumitb_mdi · Dec 18, 2016 at 11:03 AM private const float ANGLE_CHANGE_SPEED = 10.0f; //Change this as per your requirement. float currentAngle = 90.0f; void Update () {} if (currentAngle > -90.0f) { needle.rotation = Quaternion.Euler (0, 0, currentAngle); currentAngle -= (ANGLE_CHANGE_SPEED * Time.deltaTime); } } This works exactly how I. Uniform rotation - uniform lerp 1 Answer Cant slow down rotation Speed 1 Answer Rotate multiple objects around a object, maintaining their own trajectory(not rotating their local forward vector) 1 Answer Quaternion Rotation On Specific Axis Issue 0 Answers Problem about Rotating the Character 1 Answer EnterpriseSocial Q&A
https://answers.unity.com/questions/1287202/moving-needle-from-one-angle-to-another-angle-one.html
CC-MAIN-2022-05
en
refinedweb
CU is STM32F767ZI, The major architecture is shown in the following diagram: It is based on the high-performance Arm® Cortex®-M7 32-bit RISC core operating at up to 216 MHz frequency - Datasheet - The Cortex®-M7 core features a floating point unit (FPU) which supports Arm® double-precision - All the devices offer three 12-bit ADCs (3×12-bit, 2.4 MSPS ADC: up to 24 channels ), two DACs (2×12-bit D/A converters), a low-power RTC, twelve general-purpose 16-bit timers including two PWM timers for motor control, two general-purpose 32-bit timers, a true random number generator (RNG). - Chrom-ART Accelerator™ (DMA2D), graphical hardware accelerator enabling enhanced graphical user interface - Hardware JPEG codec - LCD-TFT controller supporting up to XGA resolution - MIPI® DSI host controller supporting up to 720p 30 Hz resolution ( 8- to 14-bit camera interface up to 54 Mbyte/s) - USB 2.0 high-speed/full-speed device/host/OTG controller with dedicated DMA, on-chip full-speed PHY and ULPI - 10/100 Ethernet MAC with dedicated DMA: supports IEEE 1588v2 hardware, MII/RMII Task 1.1 ARM MBED OS Quick Start One of the features of theSTM NUCLEO-F767ZI is the ARM MBED OS support. -. - A broad range of connectivity options are available in Mbed OS, supported with software libraries, development hardware, tutorials and examples. -. - Mbed supports key MCU families including STM32, Kinetis, LPC, PSoC and nRF52: device support link Please go to the ARM MBED device link for the STMF767: [link]. This page shows all the features of the NUCLEO-F767ZI and the pinout definitions. - In the right side of the page, click “add to Mbed Compiler” - You are required to create an ARM Mbed account or login to the existing account. After successful login, you will see ” NUCLEO-F767ZI has been added to your account”. Click the “Open Mbed Compiler” button as shown below - You will see the main page of the ARM Mbed online compiler, as shown below - You can create a new program based on the NUCLEO-F767ZI platform and select one of the Templates (shown below). - Select the first first blinky example, you will see your created project. The main.cpp file looks like this. The ARM Mbed code is very simple and highly abstracted. The following code defines the serial port pc, then simply using pc.printf to output message to the terminal. Define the led1 is also very simple (based on DigitalOut), and led1=!=led1 simply toggles the LED. - Before you compile the project, make sure you selected the right platform in the right side of the toolbar. If your platform is not NUCLEO-F767ZI, you can click the device manager and select NUCLEO-F767ZI as shown below (you also can switch to different platforms). - To compile the code, simply click the “Compile” button. - After compiling done, you will be asked to save one BIN file. In ARM Mbed, downloading the code to the board is very simple. When you connect the device to your PC, it will shown like a flash drive. You just need copy the BIN file to the flash drive, it will automatically download the executable code (BIN) to the board. Here, we simply save the generated BIN file to the NODE_F767ZI flash drive as shown below. - After the code has been downloaded, you will see the LD1 blink. You can further connect the board via the terminal. You can open Tera Term or any other terminal software, and select the right COM port and buadrate “9600” to see the message output. Task 1.2 ARM MBED Examples You can go to this page to check the ADC/DAC example of the ARM Mbed: [link]. Click “Import into Compiler”. You will see the main.cpp file like this #include "mbed.h" AnalogIn in(A0); #if !DEVICE_ANALOGOUT #error You cannot use this example as the AnalogOut is not supported on this device. #else AnalogOut out(PA_4); #endif DigitalOut led(LED1); //------------------------------------ // Hyperterminal configuration // 9600 bauds, 8-bit data, no parity //------------------------------------ int main() { printf("\nAnalog loop example\n"); printf("*** Connect A0 and PA_4 pins together ***\n"); while(1) { for (float out_value = 0.0f; out_value < 1.1f; out_value += 0.1f) { // Output value using DAC out.write(out_value); wait(0.1); // Read ADC input float in_value = in.read(); // Display difference between two values float diff = fabs(out_value - in_value); printf("(out:%.4f) - (in:%.4f) = (%.4f) ", out_value, in_value, diff); if (diff > 0.05f) { printf("FAIL\n"); } else { printf("OK\n"); printf("\033[1A"); // Moves cursor up of 1 line } led = !led; } } } The ADC input pin is “A0”, which is PA_3 pin in CN9 connector; the DAC output pin is PA_4 in CN7. The pin definition is in the this page. Using one jumper cable to connect PA_3 pin and PA_4 together, i.e., DAC output to the ADC. After you connect these two pins, you will see the following terminal output You can go to this link to see the most simple ARM Mbed RTOS example. The main.cpp file looks like this #include "mbed.h" void print_char(char c = '*') { printf("%c", c); fflush(stdout); } Thread thread; DigitalOut led1(LED1); void print_thread() { while (true) { wait(1); print_char(); } } int main() { printf("\n\n*** RTOS basic example ***\n"); thread.start(print_thread); while (true) { led1 = !led1; wait(0.5); } } You can simply use “thread.start(print_thread)” to create a new RTOS thread. You can check other sample codes in the main page of the board: [link] Task 1.3 ARM MbedStudio (optional) ARM also developed one local version of the IDE: ARM MbedStudio. The Mbed Studio supports debug features. However, it is still in beta version. The supported board is very limited. After you install the ARM MbedStudio (Windows, Linux, Mac version), the main page of the ARM MbedStudio is like this. You can click File->New Program, select one of the template (e.g., mbed-os-example-blinky) The main code looks like this In order to make the ARM MbedStudio to recognize our NUCLEO-F767ZI board, we need to upgrade the ST-LINK firmware. You can download the fireware upgrade software from here: [link]. After you installed the software, you will see this page You can click “Device Connect” to identify the device, and click “Yes” to upgrade the firmware. After you upgraded the firmware, the ARM MbedStudio will recognize the NUCLEO-F767ZI board in the Target part automatically. After the board has been recognized, you will have a new “Run” button after the Build button (as shown below). You can click the “Run” to download the code. The ARM MbedStudio provides the Debug features, but it requires the pyOCD support [link]. Currently, the debug feature only support the following boards [link]. Our NUCLEO board is not in the list yet. Task 2 STM32Cube STM32CubeM7 [link] also includes STM32CubeMX, a graphical software configuration tool that allows the generation of C initialization code using graphical wizards. Task 2.1 STM32CubeF7 Examples The STM32CubeM7 components are shown in the following figure. STM32CubeF7 gathers, in a single package, all the generic embedded software components required to develop an application on STM32F7 microcontrollers. In line with the STMCube™ initiative, this set of components is highly portable, not only within the STM32F7 Series but also to other STM32 series.STM32CubeF7GettingStarted The package structure of the STM32CubeF7 is shown in After you downloaded the STM32CubeF7 and extract the package, the projects related to F767 is shown under Projects->STM32F767ZI-Nucleo folder. The firmware architecture (divided into three levels) is shown in the following figure. - The folders inside the STM32CubeF7 called Examples, Examples_LL, and Examples_MIX are in level 0. These examples use respectively HAL drivers, LL drivers and a mix of HAL and LL drivers without any middleware component. - The Applications folder is in level 1, they provide typical use cases of each middleware component. - The Demonstration folder is in level 2, they implement all the HAL, BSP and middleware components. Open the IDE for STM32, for example, System Workbench for STM32 (). It has Mac, Linux and Windows version. The download link of the Windows version is [here]. If you do not have Java installed, you should download Java from [here]. The Oracle Java 11/12 installers do not register Java as the default JRE on the system path, you should setup the Java path in Windows environment. We can import any example projects inside the STM32CubeF7 folder, for example, the GPIO example The main code looks like the following figure. The drivers are defined as the HAL lay. All APIs start with HAL_, for example, HAL_GPIO_TogglePin. You can import other Examples in the STM32CubeF7 to check the sample code of different peripherals. Task 2. - STM32CubeMX main link. Download the STM32CubeMX software - STM32CubeMX is available as standalone software running on Windows®, Linux® and macOS® (macOS® is a trademark of Apple Inc. registered in the U.S. and other countries.) operating systems, or through Eclipse plug-in. Inside the download folder, you will see the Mac, Linux and Windows version - Install the STM32CubeMX software - After the STM32CubeMX has been installed, you can create a new project by either access to MCU selector or Board selector. - Click “Access to Board Selector”, and select the NUCLEO-F767ZI board from the list - Click start project, click YES for the popup window (initialize in default mode) - After you opened the project, you can configure the pinout, clock, and manage the project in the following GUI - You can click any pins in the chip diagram and select the function of each pin - In addition to select different pin functions and modules, you also can select middleware componets in the left side. For example, you can select FREERTOS in the middleware part - We can also configure the timebase source for the FreeRTOS. We can select SYS in System Core, and configure the Timebase Source as TIM2. - After the pin and module configuration finished, we can configure the Project Manager. Name the project and location, select the Toolchain as “SW4STM32”. Click save the project, it will ask to download the required fireware (1.24GB). - In the Code Generator page (left side bar), we can select “Generate peripheral initialization as a pair of .c/.h files” - After all the configuration is done, we can click “Generate the code” button to generate the System Workbench project. Open System Workbench for STM32, we can import our generated project. The source file architecture as shown below - Open the freertos.c, add the following code (create a ToggleLedThread) after the default thread definition /* USER CODE BEGIN RTOS_THREADS */ /* add threads, ... */ osThreadDef(Thread, ToggleLedThread, osPriorityBelowNormal, 0, configMINIMAL_STACK_SIZE); osThreadCreate(osThread(Thread), NULL); /* USER CODE END RTOS_THREADS */ Question: please add the missing ToggleLedThread function, and toggle the LED2 or LED3 in every 1 or 2 second. Task 2.3 STMCubeIDE (optional) STMicroelectronics’ STM32CubeIDE is a free, all-in-one STM32 development tool offered as part of the STM32Cube software ecosystem. - Latest version: 1.0 - STM32CubeMX tool for configuring the microcontroller and managing the project build. - Based on ECLIPSE™/CDT, with support of ECLIPSE™ add-ons, GNU C/C++ for Arm® toolchain and GDB debugger. - Support of ST-LINK (STMicroelectronics) and J-Link (SEGGER) debug probes - Import project from Atollic® TrueSTUDIO® and AC6 System Workbench for STM32 - Multi-OS support: Windows®, Linux®, and macOS® - Additional advanced debug features including: CPU core, IP register, and memory views; Live variable watch view; System analysis and real-time tracing (SWV); CPU fault analysis tool Task 3 STM SensorTile The STEVAL-STLKT01V1 (SensorTile development kit) is a comprehensive development kit designed to support and expand the capabilities of the SensorTile and comes with a set of cradle boards enabling hardware scalability. - STLKT01V1: [Link] - The SensorTile is a tiny, square-shaped IoT module that packs powerful processing capabilities leveraging an 80 MHz STM32L476JGY microcontroller and Bluetooth low energy connectivity based on BlueNRG-MS network processor as well as a wide spectrum of motion and environmental MEMS sensors, including a digital microphone. - To upload new firmware onto the SensorTile, an external SWD debugger (not included in the kit) is needed. It is recommended to use ST-LINK/V2-1 found on any STM32 Nucleo-64 development board. - In this lab, we will use our Nucleo-F767ZI board as the external SWD debugger for the SensorTile. There are three PCB boards inside the development kit - SensorTile module (STEVAL-STLCS01V1) with STM32L476JG MCU and other sensors - LSM6DSM: The LSM6DSM is a system-in-package featuring a 3D digital accelerometer and a 3D digital gyroscope; SPI & I2C serial interface with main processor data synchronization - LSM303AGR: The LSM303AGR is an ultra-low-power high-performance system-in-package featuring a 3D digital linear acceleration sensor and a 3D digital magnetic sensor; SPI / I2C serial interfaces - LPS22HB: The. - MP34DT05-A: The MP34DT05-A is an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and an IC interface; PDM output - BlueNRG-MS: The BlueNRG-MS is a very low power Bluetooth low energy (BLE) single-mode network processor, compliant with Bluetooth specification v4.2; The Bluetooth Low Energy stack runs on the embedded ARM Cortex-M0 core. The stack is stored on the on-chip non-volatile Flash memory and can be easily upgraded via SPI. - BALF-NRG-02D3: This device is an ultra-miniature balun which integrates matching network and harmonics filter - LD39115J18R: 150 mA low quiescent current low noise voltage regulator; Input voltage from 1.5 to 5.5 V - The functional block diagram is shown below The hardware core system is shown below - SensorTile expansion Cradle board (we will use this one in this lab) as shown below - Equipped with 16bit stereo audio DAC (TI PCM1774) - USB port, STM32 Nucleo, Arduino UNO R3 and SWD connector - with SensorTile plug connector - ST2378ETTR – 8-bit dual supply 1.71 to 5.5 V level translator - Sensortile Cradle board with SensorTile footprint (solderable) (not use it in this lab) as shown below There are four major software libraries and tools for the STM SensorTile - STSW-STLKT01: SensorTile firmware package that supports sensors raw data streaming via USB, data logging on SDCard, audio acquisition and audio streaming - FP-SNS-ALLMEMS1 - STBLESensor: iOS and Android demo Apps - BlueST-SDK: BlueST-SDK is a multi-platform library (Android/iOS/Python) that enables easy access to the data exported by a Bluetooth Low Energy (BLE) device implementing the BlueST protocol Task 3.1 STSW-STLKT01 DataLog The STSW-STLKT01 firmware package for SensorTile provides sample projects for the development of custom applications [link] - Built on STM32Cube software technology, it includes all the low level drivers to manage the on-board devices and system-level interfaces. - The package comes with the DataLog_Audio, DataLog, AudioLoop and BLE_SampleApp applications. - The DataLog_Audio application allows the user to save the audio captured by the on-board microphone on SD card as a common .wav file. - The DataLog application features raw sensor data streaming via USB (Virtual COM Port class) and sensor data storage on an SD card exploiting RTOS features - The AudioLoop application sends audio signals acquired by the microphone via I²S and USB interfaces, allowing the user to play the sound on loudspeakers/headphones or record it on an host PC - The BLE_SampleApp provides an example of Bluetooth Low Energy configuration that enables SensorTile to stream environmental sensor data; it is compatible with the STBLESensor app available for Android and iOS Let’s download the software from this [link], you will get the following package. The organization is similar to STM32Cube. To program the SensorTile board, we first plug the SensorTile module to the SensorTile expansion Cradle board as shown below To enable the SWD debug feature, we need to use one external ST-LINK debugger (here we just use our NUCLEO-F767ZI board ) - Remove the ST-LINK jumpers (two jumpers) in the NUCLEO-F767ZI board. This step will disconnect the ST-LINK part to the STM32F767 target MCU. We will use the ST-LINK part to connect the external MCU, i.e., SensorTile. Do not lost the two jumpers. - Connect the ST-LINK port in the NUCLEO-F767ZI board to the SWD connector on the SensorTile cradle extension board. A 5-pin flat cable is provided in the SensorTile Kit package. The pin1 should be aligned together. - The following figure shows the connection result - Plug the two USB ports to your computer (one for ST-Link in NUCLEO-F767ZI, another is for the SensorTile) - Using ST Link utility software (en.stsw-link004) to verify the connected target MCU is the SensorTile (STM32L4) not the STM32F7. (If you have any connection errors, you can lower the SWD frequency from 4MHz to other frequencies). After the hardware setup is ready, we can open System Workbench to import the sample code in STSW-STLKT01. We first import the DataLog sample code first. You can build and run the code. After you run the code, you will see some popup windows to show a new USB device. This is because the Datalog code setup the USB device and transfer the sensordata through the USB port. In Device Manager, you will see two COM ports: 1) COM9 is the ST link port (the USB port that connected to the NUCLEO-F767ZI board); 2) COM19 is the newly created USB serial device in the Datalog code for the SensorTile. To get the sensor data to the computer, we need to use terminal software (e.g., Tera Term) to connect to the COM10 If your code is running, you will see the sensor data is shown in the terminal. (If your terminal connection is stuck or nothing shows, you can plugin the sensortile USB port again) You can check the project code of the Datalog example In side the main.c, you will configure the USB device after the HAL_Init(); Then, two threads are created: GetData_Thread and WriteData_Thread. osKernelStart() starts the FreeRTOS kernel. Please read the code and answer the following questions: - The GetData_Thread created one semaphore in the following code. Which function will release the semaphore and let the GetData_Thread continue? readDataSem_id = osSemaphoreCreate(osSemaphore(readDataSem), 1); osSemaphoreWait(readDataSem_id, osWaitForever); - The GetData_Thread created one pool and one message in the following code. Which code is used to put the sensor data into the Pool? What’s the usage of the Message? How (which code) can the WriteData_Thread get the sensor data? sensorPool_id = osPoolCreate(osPool(sensorPool)); dataQueue_id = osMessageCreate(osMessageQ(dataqueue), NULL); - Which code is used to send the sensor data to the USB interface? - Why the humidity value is always “0”? - MX_X_CUBE_MEMS1_Init function and getSensorsData function in datalog_application.c are used to initialize the sensors and get the sensor value. Task 3.2 STSW-STLKT01 BLE Sample App Import the BLE Sample App from the STSW-STLKT01 In side the main.c, we perform the following code to initialize the BLE stack after the HAL_Init() and SystemClock_Config(). /* Initialize the BlueNRG */ Init_BlueNRG_Stack(); /* Initialize the BlueNRG Custom services */ Init_BlueNRG_Custom_Services(); /* initialize timers */ InitTimers(); StartTime = HAL_GetTick(); The BLE protocol stack is shown in the follow figure [link] The host controller interface (HCI) layer provides a standardized interface to enable communication between the host and controller. - In BlueNRG, this layer is implemented through the SPI hardware interface. - The host can send HCI commands to control the LE controller. - The HCI interface and the HCI commands are standardized by the Bluetooth core specification At the highest level of the core BLE stack, the GAP specifies device roles, modes and procedures for the discovery of devices and services, the management of connection establishment and security. - GAP handles the initiation of security features. - The BLE GAP defines four roles with specific requirements on the underlying controller: Broadcaster, Observer, Peripheral and Central. The GATT defines a framework that uses the ATT for the discovery of services, and the exchange of characteristics from one device to another. GATT specifies the structure of profiles. In BLE, all pieces of data that are being used by a profile or service are called “characteristics”. A characteristic is a set of data which includes a value and properties. - The ATT protocol allows a device to expose certain pieces of data, known as “attributes”, to another device. BLE protocol stack is used by the applications through its GAP and GATT profiles. The GAP profile is used to initialize the stack and setup the connection with other devices. The GATT profile is a way of specifying the transmission – sending and receiving – of short pieces of data known as ‘attributes’ over a Bluetooth smart link. All current Low Energy application profiles are based on GATT. The GATT profile allows the creation of profiles and services within these application profiles Here is a depiction of how the data services are setup in a typical GATT server. Inside the Init_BlueNRG_Stack() - function hci_init(HCI_Event_CB, NULL); is used to initialize the HCI. - ret = aci_gatt_init(); is used to initialize the GATT server on the slave device. Initialize all the pools and active nodes. Until this command is issued the GATT channel will not process any commands even if the connection is opened. This command has to be given before using any of the GAP features. [link] - aci_gap_init_ function is used to register the GAP service with the GATT. The device name characteristic and appearance characteristic are added by default and the handles of these characteristics are returned in the event data. The role parameter can be a bitwise OR of any of the values mentioned below. This API initializes BLE device for a particular role (peripheral, broadcaster, central device etc.). The role is passed as first parameter to this API. Two services are added inside the Init_BlueNRG_Custom_Services() - Add_HWServW2ST_Service - Use aci_gatt_add_serv to add a service on the GATT server device. Here service_uuid is the 128-bit private service UUID allocated for the service (primary service). This API returns the service handle in servHandle. - aci_gatt_add_char()is used to add the characteristics - Add Add_ConfigW2ST_Service - Use aci_gatt_add_serv to add a service on the GATT server device. - aci_gatt_add_char is used to add the characteristics “ConfigCharHandle” After initialization, the main loop of the application will blink Led when there is not a client connected. It will then handle BLE event (hci_user_evt_proc) and update the BLE advertise data and make the board connectable (setConnectable). When SendEnv=1 (periodically setup by the TIM1_CH1 timer), it will call SendEnvironmentalData() function to send environmental data. - By checking TargetBoardFeatures, then utilize BSP_ENV_SENSOR_GetValue function to read sensor value. - If the BLE connection is there, it will use Term_Update function to send data - It will call aci_gatt_update_char_value to update the BLE characteristic value. - If the BLE connection is not there, it will use STLBLE_PRINTF to print the data to the USB terminal Task 3.3 SensorTile FP-SNS-ALLMEMS1 (optional) FP. - The FP-SNS-ALLMEMS1 firmware provides a complete framework to build wearable applications. The STBLESensor application based on the BlueST-SDK protocol allows data streaming and a serial console over BLE controls the configuration parameters for the connected boards. - FP-SNS-ALLMEMS1 is the default firmware installed in the SensorTile for out of box experience. All STEVAL-STLKT01V1 is already programmed with FP-SNS-ALLMEMS1 firmware. - software creates the following first Bluetooth services: - HW characteristics related to MEMS sensor devices - SW characteristics: - quaternions generated by the MotionFX library in short precision - magnetic North direction (e-Compass) - recognized activity using the MotionAR algorithm - recognized carry position using the MotionCP algorithm - recognized gesture using the MotionGR algorithm - audio source localization using the AcousticSL algorithm - audio beam forming using the AcousticBF algorithm - voice over Bluetooth low energy using the BlueVoiceADPCM algorithm - SD data logging (audio and MEMS data) using Generic FAT File System middleware The second service exposes the Console service with: - stdin/stdout for bi-directional communication between client and server - stderr for a mono-directional channel from the STM32 Nucleo board to an Android/iOS device The full software architecture is shown in After you download the software, it contains the following folders (similar to STM32Cube) Open System workbench for STM32, import the ALLMEMS1 sensortile project The full code looks like this You can build and download the code the SensorTile. The SensorTile will be function as the same to the out of box experience. Task 3.4 SensorTile Bootloader (optional) We can use the ST-Link Utility to flash the code to the Sensortile board and make Sensortile board run the program every time power is supplied to the board. Open the ST-Link Utility software, click File->Open Files, navigate to the folder where you have the new bin file compiled from the IDE (System workbench) After you opened the bin file, you will see the following window. Change the value of the Address field to be 0x08004000 and change the value of the Size to be 0x1000. Click Target-Connect Click Target->Program to start the download window as shown below. Modify the Start address to be 0x08004000. Click Start, then close the bin file. Navigate to the Utilities folder->BootLoader->STM32L476RG, open the BootLoaderL4.bin file. Change the address field to be 0x08000000, click Target->Program. Change the start address to be 0x08000000, click start. Then, you can remove the SWD cable and the SensorTile device will start the code automatically after power on. Apart from storing code, FP-SNS-ALLMEMS1 uses the FLASH memory for Firmware-Over-The-Air updates. It is divided into the following regions (see figure below): - 1. the first region contains a custom boot loader - 2. the second region contains the FP-SNS-ALLMEMS1 firmware - 3. The third region is used for storing the FOTA before the update The FP-SNS-ALLMEMS1 cannot not be flashed at the beginning of the flash (address 0x08000000), and is therefore compiled to run from the beginning of the second flash region, at 0x08004000 - The FP-SNS-ALLMEMS1 cannot not be flashed at the beginning of the flash (address 0x08000000), and is therefore compiled to run from the beginning of the second flash region, at 0x08004000 On any board reset: - If there is a FOTA in the third Flash region, the boot loader overwrites the second Flash region (with FPSNS-ALLMEMS1 firmware) and replaces its content with the FOTA and restarts the board. - If there is no FOTA, the boot loader jumps to the FP-SNS-ALLMEMS1 firmware - To flash modified ALLMEMS1 firmware, simply flash the compiled FP-SNS-ALLMEMS1 firmware to the correct address (0x08004000).
https://kaikailiu.cmpe.sjsu.edu/embedded-system/stm-lab-1-stm32f7-and-sensortile/
CC-MAIN-2022-05
en
refinedweb
How to access Firestore subcollections with react-redux-firebase I would like to share with you a code snipped of how to access subcollections in the Cloud Firestore database. Lets say we have this structure: - Users | -- Tasks Our exercise is to access the tasks list of a certain user. import { compose } from "redux"; import { connect } from "react-redux"; import { firestoreConnect } from "react-redux-firebase"; ... const enhance = compose( firestoreConnect(props => { return [{ collection: "users", doc: props.uid, subcollections: [{ collection: "tasks" }], storeAs: `${props.uid}-tasks` }]; }), connect(({ firestore }, props) => { return { tasks: firestore.ordered[`${props.uid}-tasks`] || [] }; }) ); ... const Tasks = ({ firestore, tasks }) => { return <YOUR_TASKS_LIST_UI> } export default enchance(Tasks) It's really that simple! Happy coding! 👨💻
https://seishin.me/how-to-access-firestore-subcollections-with/
CC-MAIN-2022-05
en
refinedweb
example, with the char2Bytes function of the @taquito/utils package: import { char2Bytes } from '@taquito/utils';const bytes = char2Bytes(formattedInput);const payloadBytes = '05' + '01' + char2Bytes(bytes.length) + bytes; The bytes representation of the string must be prefixed with 3 pieces of information: - "05" indicates that this is a Micheline expression - "01" indicates that a string was converted to bytes - the number of characters in the byte string encoded on 4 bytes Once you have your bytes, you can send them to the wallet to have them signed: import { RequestSignPayloadInput, SigningType } from '@airgap/beacon-sdk';const payload: RequestSignPayloadInput = {signingType: SigningType.MICHELINE,payload: payloadBytes = char2Bytes(formattedInput);const payloadBytes = '05' + '01' + char2Bytes(bytes.length) + bytes;// The payload to send to the walletconst payload: RequestSignPayloadInput = {signingType: SigningType.MICHELINE,payload: payloadBytes,sourceAddress: userAddress,};// The signingconst signedPayload = await wallet.client.requestSignPayload(payload);// The signatureconst { signature } = signedPayload; Taquito also offers the possibility to sign Michelson code. This feature can be useful, for example, if you need to send a lambda to a contract to be executed but want to restrict the number of users who can submit a lambda by verifiying the signer's address. The signing of Michelson code requires the use of the michel-codec package: First, you provide the Michelson code to be signed as a string along with its type. Then, you create a new instance of the michel-codec parser and call the parseMichelineExpression on it to get the JSON representation of the Michelson code and type. Once done, you can pack the data using the packDataBytes function available in the @taquito/michel-codec package. To finish, use one of the methods presented above to sign the packed data (with the InMemorySigner like in this example or with the Beacon SDK). caution In the previous example, the data is packed locally in Taquito using the packDataBytes function of the @taquito/michel-codec package instead of the RPC. You should always verify the packed bytes before signing or requesting that they be signed when using the RPC to pack. This precaution helps protect you and your applications users from RPC nodes that have been compromised. A node that is operated by a bad actor, or compromised by a bad actor could return a fully formed operation that does not correspond to the input provided to the RPC endpoint., payloadBytes). October 2021 - Taquito version 10.2.0
https://tezostaquito.io/docs/11.0.1/signing/
CC-MAIN-2022-05
en
refinedweb
Asyncio doesn't work in pythonista 3 - jeremyherbert Hi, I've just purchased pythonista 3 and I'm trying to run the following code straight from the docs: import asyncio async def hello_world(): print("Hello World!") loop = asyncio.get_event_loop() # Blocking call which returns when the hello_world() coroutine is done loop.run_until_complete(hello_world()) loop.close() I get the error: RuntimeError: There is no current event loop in thread 'Dummy-1'. Am I doing something wrong? Thanks, Jeremy Asyncio does work in Pythonista. If you double click the home button and throw Pythonista 3 out of memory and then relaunch Pythonista 3 and rerun your script, it will work. My advise is that you comment out the line loop.close()and then you will be able to run your asyncio script multiple times in a single Pythonista session. Added: - jeremyherbert Great, that workaround does sort it out in the meantime. Thank you for reporting the bug for me :) loop = asyncio.get_event_loop_policy().new_event_loop() I am confused. The issue was closed as ”fixed”, while all examples everywhere use get_event_loopand fail with the error stating There is no current event loop in thread 'Dummy-1'. Why is this not an issue with Pythonista? As I understand things, the code above does work, the first time it is run. But, because the pythonista environment is not fully reset when running a new script, the default event loop is now closed. This would be same as if you ran this code multiple times in IDLE for example, without python exiting. So bot so much a bug, as a subtlety with operating in a interactive environment. Other examples of things like this, include loggingloggers dont get cleared out, which is a good thing sometimes but also mean your code needs to check for existing handlers before adding new ones. What users can do is use asyncio.set_event_loop(asyncio.new_event_loop()) after loop.close, to reset the default event loop. Then get_event_loop works again. In theory, this could go into pythonista_preflight.py, however it is not obvious to me this wouldnt cause other issues, or would really be the intended behavior. @JonB, maybe some of these errors come from Pythonista running UI in separate thread, which does not know about the loop unless I provide it specifically. Need to experiment.
https://forum.omz-software.com/topic/3240/asyncio-doesn-t-work-in-pythonista-3/3
CC-MAIN-2022-05
en
refinedweb
NAME gnutls_openpgp_crt_verify_self - Verify the self signature on the key SYNOPSIS #include <gnutls/openpgp.h> int gnutls_openpgp_crt_verify_self(gnutls_openpgp_crt_t key, unsigned int flags, unsigned int * verify); ARGUMENTS DESCRIPTION Verifies the self signature in the key. The key verification output will be put in verify and will be one or more of the gnutls_certificate_status_t enumerated elements bitwise ord..
https://linux.fm4dd.com/en/man3/gnutls_openpgp_crt_verify_self.htm
CC-MAIN-2022-05
en
refinedweb
Windows using Python and the AduHid DLL module. Basics of opening a USB device handle, writing and reading data, as well as closing the handle is provided as an example. The aduhid module calls functions from AduHid.dll in order to interface with the devices. All source code is provided so that you may review details that are not highlighted here. After importing the aduhid module, we can open an ADU device by product id or serial number via aduhid.open_device_by_product_id() or aduhid.open_device_by_serial_number(). from ontrak import aduhid PRODUCT_ID = 200 # Set the product id to match your ADU device. See list here: # open device by product id device_handle = aduhid.open_device_by_product_id(PRODUCT_ID, 100) if device_handle == None: print('Error opening device. Ensure that the product id is correct and that it is connected') exit(-1) Now that we have successfully opened our device, we can write commands to the ADU device and read the result. First, let's write some commands: # Write a command to the device result = aduhid.write_device(device_handle, 'RK0', 100) print('Write result: %i' % result) # Should be non-zero if successful result = aduhid.write_device(device_handle, 'SK0', 100) print('Write result: %i' % result) # Should be non-zero if successful In order to read from the ADU device, we can send a command that requests a return value (as defined in our product documentation). Such a command for the ADU200 is RPA. This requests the value of PORT A in binary format. We can then use read_from_adu() to read the result. read_from_adu() returns the data read in string format on success, and None on failure. A timeout is supplied for the maximum amount of time that the host (computer) will wait for data from the read request. result = aduhid.write_device(device_handle, 'RPK0', 100) print('Write result: %i' % result) # Should be non-zero if successful # Read from device (result, value) = aduhid.read_device(device_handle, 100) # Result should non-zero if successful, value will contain the returned value from the device in integer form # If read is not successful, result is 0 and value is None if result != 0: print('Read result: %i, value: %i' % (result, value)) else: print('Read failed - was a resulting command issued prior to the read?') When finished communicating with the device it should be closed. This is generally donw when the application is closed. IMPORTANT: In Windows USB devices with no handle open will be forced into suspend mode afer a few seconds. aduhid.close_device(device_handle) If you would like to obtain the vendor id, product id and serial number belonging to each connected ADU, the aduhid.device_list() function may be used: # Get a device list of connected ADUs. List will be empty if no devices are connected. device_list = aduhid.device_list(100) for device in device_list: print('Vendor ID: %i, Product ID: %i, Serial Number: %s' % (device.vendor_id, device.product_id, device.serial_number)) Download Python DLL Module Example File in ZIP Format
https://ontrak.net/pythondll.htm
CC-MAIN-2022-05
en
refinedweb
Track events and user actions when the user starts a new conversation. Attach custom metadata to every conversation started via the SDK. This API is now deprecated with SDK v3.0. It is expected that you will pass the user's name, email or, user identifier using the Login API. Also, please note that this API will not work with the Conversational Experience. However, there will no impact on older SDK versions. They will keep on working as usual. This API is now deprecated with SDK v3.0. It is expected that you will pass the user's name, email or, user identifier using the Login API. Also, please note that there will no impact on older SDK versions, and they will keep on working as usual. Details on user management are available here. Optionally, if you want to send additional debug logs, use HelpshiftApi.HelpshiftLogger to log messages in all files where you have used Log statements. This will send your logs when a new issue is registered. Example: HelpshiftApi.HelpshiftLogger.e("AppLogTag", "Error Log message"); HelpshiftApi.HelpshiftLogger.d("AppLogTag", "Debug log message");. On tag names & compatibility Examples :); You can attach tags with metadata to every reported issue via a reserved constant key HelpshiftApi.HelpshiftSupport.HSTAGSKEY. This is used with the config dictionary as follows:(uiViewController, issue is created. A Conversation is considered inactive when a user cannot respond with new messages on it. As soon as an end user opens the conversation screen, they see a greeting message, and the conversation is considered active. For example: HelpshiftApi.HelpshiftSupport.CheckIfConversationActive(); You can implement the "DidCheckIfConversationActive" delegate method like the following example: public class HSDelegate : InternalHsApiDefinition.HelpshiftSupportDelegate { . . . . . . public void DidCheckIfConversationActive(bool isActive) { // your code here } } If you want to keep track of when your users end an ongoing conversation, you can implement this delegate callback. This delegate is called whenever the ongoing conversation is resolved, rejected from the Dashboard, timed out or archived. public override void ConversationEnded() { // Conversation has ended }
https://developers.helpshift.com/xamarin/tracking-ios/
CC-MAIN-2022-05
en
refinedweb
#include <CGAL/Hyperbolic_octagon_translation.h> The class Hyperbolic_octagon_translation defines an object to represent a hyperbolic translation of the fundamental group of the Bolza surface \(\mathcal M\). It accepts one template parameter: A translation \(g\) in \(\mathcal G\) is a mapping acting on the hyperbolic plane \(\mathbb H^2\). It has the form \[ g(z) = \frac{ \alpha\cdot z + \beta }{ \overline{\beta}\cdot z + \overline{\alpha} }, \qquad \alpha,\beta \in \mathbb C, \qquad z \in \mathbb H^2, \qquad |\alpha|^2 - |\beta|^2 = 1, \] where \(\overline{\alpha}\) ane \(\overline{\beta}\) are the complex conjugates of \(\alpha\) and \(\beta\) respectively. In this implementation, the translation \(g\) is uniquely defined by its coefficients \(\alpha\) and \(\beta\). Considering the set of generators \(\mathcal A = [a, \overline{b}, c, \overline{d}, \overline{a}, b, \overline{c}, d]\) as an alphabet, a translation \(g\) in \(\mathcal G\) can be seen as a word on the alphabet \(\mathcal A\). Each letter of this alphabet is represented as an unsigned integer from from 0 to 7, and each word (i.e., translation) is a sequence of letters. Represents a word on the alphabet \(\mathcal A\). By extension, represents a hyperbolic translation in the group \(\mathcal A\). Represents a single letter of the alphabet \(\mathcal A\). By extension, represents a generator of the group \(\mathcal G\). Enumeration type for the alphabet \(\mathcal A\). This enumeration can be used to recover the generators of the group \(\mathcal G\). generator() Returns the coefficient \(\alpha\) of the translation. The first element of the returned pair contains the real part of \(\alpha\). The second element contains its imaginary part. Returns the coefficient \(\beta\) of the translation. The first element of the returned pair contains the real part of \(\beta\). The second element contains its imaginary part. Return the generator wl of the group \(\mathcal G\). Note that wl can be an element of the enumeration set Generator. The calls generator(0) and generator(A) will both return the translation \(a\). Returns the set of generators of \(\mathcal G\) and their inverses in a std::vector. The generators are given in the order: \( [a, \overline{b}, c, \overline{d}, \overline{a}, b, \overline{c}, d].\) Comparison operator. Each translation \(g\) of \(\mathcal G\), when applied to the octagon \(\mathcal D_O\), produces a copy of \(\mathcal D_O\) labeled by the translation \(g\). The copies of \(\mathcal D_O\) incident to \(\mathcal D_O\) are naturally ordered counter-clockwise around \(\mathcal D_O\). The comparison operator compares two translations based on the ordering of the copies of \(\mathcal D_O\) that they produce. For more details, see Section Representation of the User manual. Returns a string representation of the translation, containing its Word_letters. This function is given as utility for printing/debugging purposes. For example, for the translation \(abcd\), this function returns 0527, and for the identity it returns _.
https://doc.cgal.org/4.14/Periodic_4_hyperbolic_triangulation_2/classCGAL_1_1Hyperbolic__octagon__translation.html
CC-MAIN-2022-05
en
refinedweb
ComlinkComlink Comlink makes WebWorkers enjoyable. Comlink is a tiny library (1.1kB), that removes the mental barrier of thinking about postMessage and hides the fact that you are working with workers. At a more abstract level it is an RPC implementation for postMessage and ES6 Proxies. $ npm install --save comlink Browsers support & bundle sizeBrowsers support & bundle size Browsers without ES6 Proxy support can use the proxy-polyfill. Size: ~2.5k, ~1.2k gzip’d, ~1.1k brotli’d IntroductionIntroduction On mobile phones, and especially on low-end mobile phones, it is important to keep the main thread as idle as possible so it can respond to user interactions quickly and provide a jank-free experience. The UI thread ought to be for UI work only. WebWorkers are a web API that allow you to run code in a separate thread. To communicate with another thread, WebWorkers offer the postMessage API. You can send JavaScript objects as messages using myWorker.postMessage(someObject), triggering a message event inside the worker. Comlink turns this messaged-based API into a something more developer-friendly by providing an RPC implementation: Values from one thread can be used within the other thread (and vice versa) just like local values. ExamplesExamples Running a simple function main.js import * as Comlink from ""; async function init() { const worker = new Worker("worker.js"); // WebWorkers use `postMessage` and therefore work with Comlink. const obj = Comlink.wrap(worker); alert(`Counter: ${await obj.counter}`); await obj.inc(); alert(`Counter: ${await obj.counter}`); } init(); worker.js importScripts(""); // importScripts("../../../dist/umd/comlink.js"); const obj = { counter: 0, inc() { this.counter++; }, }; Comlink.expose(obj); Callbacks main.js import * as Comlink from ""; // import * as Comlink from "../../../dist/esm/comlink.mjs"; function callback(value) { alert(`Result: ${value}`); } async function init() { const remoteFunction = Comlink.wrap(new Worker("worker.js")); await remoteFunction(Comlink.proxy(callback)); } init(); worker.js importScripts(""); // importScripts("../../../dist/umd/comlink.js"); async function remoteFunction(cb) { await cb("A string from a worker"); } Comlink.expose(remoteFunction); SharedWorker When using Comlink with a SharedWorker you have to: - Use the portproperty, of the SharedWorkerinstance, when calling Comlink.wrap. - Call Comlink.exposewithin the onconnectcallback of the shared worker. Pro tip: You can access DevTools for any shared worker currently running in Chrome by going to: chrome://inspect/#workers main.js import * as Comlink from ""; async function init() { const worker = new SharedWorker("worker.js"); /** * SharedWorkers communicate via the `postMessage` function in their `port` property. * Therefore you must use the SharedWorker's `port` property when calling `Comlink.wrap`. */ const obj = Comlink.wrap(worker.port); alert(`Counter: ${await obj.counter}`); await obj.inc(); alert(`Counter: ${await obj.counter}`); } init(); worker.js importScripts(""); // importScripts("../../../dist/umd/comlink.js"); const obj = { counter: 0, inc() { this.counter++; }, }; /** * When a connection is made into this shared worker, expose `obj` * via the connection `port`. */ onconnect = function (event) { const port = event.ports[0]; Comlink.expose(obj, port); }; // Single line alternative: // onconnect = (e) => Comlink.expose(obj, e.ports[0]); For additional examples, please see the docs/examples directory in the project. APIAPI Comlink.wrap(endpoint) and Comlink.expose(value, endpoint?) Comlink’s goal is to make exposed values from one thread available in the other. expose exposes value on endpoint, where endpoint is a postMessage-like interface. wrap wraps the other end of the message channel and returns a proxy. The proxy will have all properties and functions of the exposed value, but access and invocations are inherently asynchronous. This means that a function that returns a number will now return a promise for a number. As a rule of thumb: If you are using the proxy, put await in front of it. Exceptions will be caught and re-thrown on the other side. Comlink.transfer(value, transferables) and Comlink.proxy(value) By default, every function parameter, return value and object property value is copied, in the sense of structured cloning. Structured cloning can be thought of as deep copying, but has some limitations. See this table for details. If you want a value to be transferred rather than copied — provided the value is or contains a Transferable — you can wrap the value in a transfer() call and provide a list of transferable values: const data = new Uint8Array([1, 2, 3, 4, 5]); await myProxy.someFunction(Comlink.transfer(data, [data.buffer])); Lastly, you can use Comlink.proxy(value). When using this Comlink will neither copy nor transfer the value, but instead send a proxy. Both threads now work on the same value. This is useful for callbacks, for example, as functions are neither structured cloneable nor transferable. myProxy.onready = Comlink.proxy((data) => { /* ... */ }); Transfer handlers and event listenersTransfer handlers and event listeners It is common that you want to use Comlink to add an event listener, where the event source is on another thread: button.addEventListener("click", myProxy.onClick.bind(myProxy)); While this won’t throw immediately, onClick will never actually be called. This is because Event is neither structured cloneable nor transferable. As a workaround, Comlink offers transfer handlers. Each function parameter and return value is given to all registered transfer handlers. If one of the event handler signals that it can process the value by returning true from canHandle(), it is now responsible for serializing the value to structured cloneable data and for deserializing the value. A transfer handler has be set up on both sides of the message channel. Here’s an example transfer handler for events: Comlink.transferHandlers.set("EVENT", { canHandle: (obj) => obj instanceof Event, serialize: (ev) => { return [ { target: { id: ev.target.id, classList: [...ev.target.classList], }, }, [], ]; }, deserialize: (obj) => obj, }); Note that this particular transfer handler won’t create an actual Event, but just an object that has the event.target.id and event.target.classList property. Often, this is enough. If not, the transfer handler can be easily augmented to provide all necessary data. Comlink.releaseProxy Every proxy created by Comlink has the [releaseProxy] method. Calling it will detach the proxy and the exposed object from the message channel, allowing both ends to be garbage collected. const proxy = Comlink.wrap(port); // ... use the proxy ... proxy[Comlink.releaseProxy](); Comlink.createEndpoint Every proxy created by Comlink has the [createEndpoint] method. Calling it will return a new MessagePort, that has been hooked up to the same object as the proxy that [createEndpoint] has been called on. const port = myProxy[Comlink.createEndpoint](); const newProxy = Comlink.wrap(port); Comlink.windowEndpoint(window, context = self, targetOrigin = "*") Windows and Web Workers have a slightly different variants of postMessage. If you want to use Comlink to communicate with an iframe or another window, you need to wrap it with windowEndpoint(). window is the window that should be communicate with. context is the EventTarget on which messages from the window can be received (often self). targetOrigin is passed through to postMessage and allows to filter messages by origin. For details, see the documentation for Window.postMessage. For a usage example, take a look at the non-worker examples in the docs folder. TypeScriptTypeScript Comlink does provide TypeScript types. When you expose() something of type T, the corresponding wrap() call will return something of type Comlink.Remote<T>. While this type has been battle-tested over some time now, it is implemented on a best-effort basis. There are some nuances that are incredibly hard if not impossible to encode correctly in TypeScript’s type system. It may sometimes be necessary to force a certain type using as unknown as <type>. NodeNode Comlink works with Node’s worker_threads module. Take a look at the example in the docs folder. Additional ResourcesAdditional Resources License Apache-2.0
https://www.npmjs.com/package/comlink
CC-MAIN-2022-05
en
refinedweb
Talk:Dell XPS 15 7590. Direction of this page Talk status This discussion is done. Bugalo , the last changes to this page moved from a hardware description to a full on personal install and that's not the purpose of these device pages. Everyone should be following the Handbook to install and deviate where they see fit. Please keep this limited to hardware quirks not covered by the handbook. ZFS does not belong here nor does "booting from Ubuntu Live CD" --Grknight (talk) 15:37, 22 January 2020 (UTC) - Brian Evans (Grknight) , I understand your correction. If it is appropriate, I will keep in my namespace the page I was writing, and keep here only the hardware description. If it is not adequate, feel free to let me know. Bugalo (talk) 18:19, 22 January 2020 (UTC)
https://wiki.gentoo.org/wiki/Talk:Dell_XPS_15_7590
CC-MAIN-2022-05
en
refinedweb
C++ strspn() Function You are Here: C++ strspn() The strspn() function finds up to what length two strings are identical. This function is defined in <cstring> header file. Example C++ Compiler #include <iostream> #include <cstring> using namespace std; int main() { char str1[] = "wikimass is awesome"; char str2[] = "wikipedia is great"; long len = strspn(str1, str2); cout << "Length of initial segment matching: " << len; return 0; } Output Length of initial segment matching: 4 Syntax size_t str.
https://wikimass.com/cpp/strspn
CC-MAIN-2022-05
en
refinedweb
Ns-3.33 errata On January 7, 2021, ns-3.33 was published. This page lists some minor issues that have been fixed in the mainline since that time, but we considered to be minor enough to just list here rather than make a maintenance release to update ns-3.33. ns-3 openflow module fails to compile During the ns-3.33 release cycle, some conditional (optional) code related to Boost C++ libraries was introduced to the file src/core/model/length.h. Boost has also been recommended to be installed for openflow support. This caused an unintended interaction with a token in the wscript file for the ns-3 openflow module. As a result, when the openflow OFSID switch library is included in the Waf configuration, and ns-3 tries to build the openflow module, a compilation error occurs; a subset of the error output is shown below: /usr/include/boost/units/dimension.hpp:63:30: error: parse error in template argument list typedef typename detail::static_power_impl<DL::size::value>::template apply< In addition, the wscript requests to include an obsolete boost library (signals). These issues will be fixed in the ns-3.34 release, and are fixed if you migrate to ns-3-dev. If you want to use the ns-3 openflow module with ns-3.33 release, you must install Boost development libraries (typically 'boost-devel' on Fedora-based systems, or 'libboost-all-dev' on Debian/Ubuntu-based systems), and we suggest that you modify both of the following two files as follows: 1) Remove the obsolete 'signals' library dependency from the file src/openflow/wscript: diff --git a/src/openflow/wscript b/src/openflow/wscript index 98b7ee65e..d361a8bb8 100644 --- a/src/openflow/wscript +++ b/src/openflow/wscript @@ -9,7 +9,7 @@ def options(opt): help=('Path to OFSID source for NS-3 OpenFlow Integration support'), default='', dest='with_openflow') -REQUIRED_BOOST_LIBS = ['system', 'signals', 'filesystem'] +REQUIRED_BOOST_LIBS = ['system', 'filesystem'] def required_boost_libs(conf): conf.env['REQUIRED_BOOST_LIBS'] += REQUIRED_BOOST_LIBS and 2) Delete or disable the conditional boost code in the file src/core/model/length.h in three places: diff --git a/src/core/model/length.h b/src/core/model/length.h index 171c28202..715620977 100644 --- a/src/core/model/length.h +++ b/src/core/model/length.h @@ -24,11 +24,6 @@ #include "attribute.h" #include "attribute-helper.h" -#ifdef HAVE_BOOST_UNITS -#include <boost/units/quantity.hpp> -#include <boost/units/systems/si.hpp> -#endif - #include <istream> #include <limits> #include <ostream> @@ -404,23 +399,6 @@ private: */ Length (Quantity quantity); -#ifdef HAVE_BOOST_UNITS - /** - * Construct a Length object from a boost::units::quantity - * - * \note The boost::units:quantity must contain a unit that derives from - * the length dimension. Passing a quantity with a Unit that is not a length - * unit will result in a compile time error - * - * \tparam U A boost::units length unit - * \tparam T Numeric data type of the quantity value - * - * \param quantity A boost::units length quantity - */ - template <class U, class T> - explicit Length (boost::units::quantity<U, T> quantity); -#endif - /** * Copy Constructor * @@ -1073,19 +1051,6 @@ Length Yards (double value); Length Miles (double value); * Copy Constructor * @@ -1073,19 +1051,6 @@ Length Yards (double value); Length Miles (double value); /**@}*/ -#ifdef HAVE_BOOST_UNITS -template <class U, class T> -Length::Length (boost::units::quantity<U, T> quantity) - : m_value (0) -{ - namespace bu = boost::units; - using BoostMeters = bu::quantity<bu::si::length, double>; - - //convert value to meters - m_value = static_cast<BoostMeters> (quantity).value (); -} -#endif - } // namespace ns3 #endif /* NS3_LENGTH_H_ */ vanet-routing-compare.cc does not run The program vanet-routing-compare.cc exits with an error: msg="GlobalValue name=VRCcumulativeBsmCaptureStart: input value is not a string", file=../src/core/model/global-value.cc, line=128 terminate called without an active exception This will be fixed in the ns-3.34 release, and is fixed if you migrate to ns-3-dev. To patch ns-3.33 release to fix this problem, apply the patch that can be downloaded from the following URL: click library fails to compile The click module relies on the click library, a version of which is installed with bake in the ns-alllinone-3.33 package. If this library fails to compile with the error: ‘SIOCGSTAMP’ was not declared in this scope This indicates the following problem: and this can be worked around by adding two include statements: diff --git a/elements/userlevel/fromdevice.cc b/elements/userlevel/fromdevice.cc index 76e2b12a3..ff8940ccf 100644 --- a/elements/userlevel/fromdevice.cc +++ b/elements/userlevel/fromdevice.cc @@ -28,6 +28,7 @@ #else # include <sys/ioccom.h> #endif +#include <linux/sockios.h> #if HAVE_NET_BPF_H # include <net/bpf.h> # define PCAP_DONT_INCLUDE_PCAP_BPF_H 1 diff --git a/elements/userlevel/rawsocket.cc b/elements/userlevel/rawsocket.cc index 40e9ce25e..1e4fb25f3 100644 --- a/elements/userlevel/rawsocket.cc +++ b/elements/userlevel/rawsocket.cc @@ -34,6 +34,7 @@ #include <sys/socket.h> #include <arpa/inet.h> #include <fcntl.h> +#include <linux/sockios.h> #ifndef __sun #include <sys/ioctl.h> The version of click referenced by future ns-3 releases will be updated to include a more recent click library.
https://www.nsnam.org/mediawiki/index.php?title=Ns-3.33_errata&direction=prev&oldid=12398
CC-MAIN-2022-05
en
refinedweb
This series of blogs on Python was compiled as I was trying to learn the language. I present it here, for someone who might want a quick introduction to the language, without digging through all the manuals. This is not a 'Complete Reference' nor is it a 'Python for Dummies'. It is meant for someone who understands development and wants to peep into the world of Python. This is a series of blogs covers these topics. (It will continue to grow as I continue to learn) As per the StackOverflow survey, Python was the fastest growing programming language of 2019. And we can expect this trend to grow as the developer community continues to move towards Python. That makes Python the language of choice for any developer. There are many different sources and distributions of Python. Foremost is available on their own site Install Python. This page has everything you need. Download the version you like. For legacy reasons, Python continues to host the Python 2.* installers. But, also warns us that it will be deprecated very soon. This "soon" has not arrived over the last 4 years that I have seen their page. But, from the trends we can say that, anyone serious about the future, should use Python 3. Specially if you are learning afresh, there is no reason to go for Python 2. (Unless you are forced to maintain a legacy Python 2 application). We also have other distributions like Anaconda that package a good amount of useful stuff along with the base Python. This is what most enthusiasts use. And then there are other commercial distributions of Python like ActiveState Python that charge you for support and packaging. That is meant for people who prefer to spend money. Python comes with its own editor (IDLE) that provides syntax highlighting. It is good for developing and running minor scripts and also for testing single commands. But, for doing anything more complex and useful, you will need a better IDE. Several open source IDE's are available on the net. The list keeps growing. Just lookup one you like from Google and you should be ready to go. I liked PyCharm, Spyder, Atom and Visual Studio Code. If you don't like these, just search for one on the net and let me know if you find something useful. We often put in a lot of effort on Traditions. We do a lot of things we don't really know why, but we do them because "that is the way"! The "Hello World" is another such tradition. None knows what is so magical about those two words. But there is something so magical that everyone wants to use just that phrase. Anyway, let's do the same here - announcing to the world that we have started learning Python.Open your IDE and create a new Project - Learn Python or HelloWorld; Create a new file with the appropriate extension (.py). If you are running on Windows, the extension should be enough. But if you are fond of Linux, the extension has no meaning. You need to explicitly indicate the interpreter using the shabang on the first line of the script. After that, add the following one line in the new script. #!/bin/python print("Hello World") (The shabang would point to python or python3 depending upon the Python distribution that you use.) That is all we need in order to start. The code prints the two words - Hello World - Nothing much for the world, but it does tell you that you have started well! Congratulations! You have started off on your journey with the Python. Now follow along as we see how the Python has engulfed the world, and grown into an Anaconda! Before we jump into the code, it is important to understand the core values that are expected of a Python Programmer. Python language was created with some important principles in mind - that were lacking in other languages. These principles are the key to its success, and are called the Zen of Python. Readability of the code is an important component of the Zen of Python. Python imposes this value in its core syntax. We have all seen code that could be more readable, only if the "author had the mercy to indent it". Indentation is the least you can do for the future developers who need to work on the code. But many programmers are sadists, who enjoy avoiding it. Python will not let you do it. Here, indentation is a part of the syntax. In Python, there are no semi-colons or curly-braces. It is just newline and indentation. A new line is a new statement. Code with similar indentation is part of a block. So your code will not do what it should, if you do not take care of indenting it properly. We will see more of this as we work with code flow, and other upcoming modules. For now, it is enough to understand that Python cares for you and enforces a level of readability in the code. Comments are another important (and the most ignored) part of any programming language. Everyone knows they are required. Everyone knows why they are required. Everyone curses the developer when they see a code without comments. But, very few are gracious enough to comment their own code. For these generous minded developers, Python provides a simple syntax for adding comments to their code - #. Any text that follows a # - till the end of line, is ignored by the interpreter - as a comment. A # inside quotes is treated simply as a part of the string, and hence does not mark any comment. Python provides for most normal functionality like data types and normal code flow structures that any normal programming language can provide. Computing started with numbers. Today, it has covered several data types. But, numbers still form a major chunk of tasks. Python provides for different types of numbers. It also provides huge functionality for processing them. We have integers, floats, Try out the below code to check out the various numeric functions: a = 10 b = 3 c = a + b # 13 print(c) c = a - b # 7 print(c) c = a * b # 30 print(c) c = a / b # 3.3333333333333335 print(c) c = a // b # 3 print(c) c = a % b # 1 print(c) c = a ** b # 1000 print(c) In addition to the integer and floating point numbers described above, Python also supports Decimal / Fraction / Complex numbers - that provide a lot more functionality. We will have a look at them later. The other most commonly used data type is that of strings. Python provides for a huge functionality to work with and manipulate strings. Strings can be defined in single quotes as well as in double quotes. Special characters need to be escaped with a '\'. There is no particular difference between a string defined in single quotes and one defined in double quotes. Naturally, they have to be consistent and a string defined in single quotes should escape a single quote character within the string and a double quoted string should escape a double quote character in the string. Python defines several useful functions for Strings. Check out the code below s = 'Single quoted String' print(s) s = "Double quoted String" print(s) s = "Adding a # inside a quoted string does not make it a comment." print(s) s = 'Single quoted string needs to escape \' character not "' print(s) s = "Double quoted string needs to escape \" character not '" print(s) s = r'use r if you \\ do not like the escape \ ' print(s) s = """\ A Multi Line String """ print(s) A feature rich language like Python naturally has the basic functionality to split / join / append / substring, and a lot more that you can explore with the auto suggest in any sensible IDE, or by looking up the manuals. Try the below code to check out the basics. s = "lEaRnInG" # Append s = s.__add__(" PyThOn") #split print(s.split()) print(s.split(sep="t")) # Splicing print(s[:]) print(s[1:]) print(s[1:-1]) print(s[5:-5]) print(s[16:0]) # Casing print(s.lower()) print(s.upper()) print(s.title()) Booleans are logical variables - used in decision making. Python defines two values True and False for Boolean variables. Although these two values are predefined in the language, Python is a bit loose about Booleans. Internally, True is just the number 1 and False is the number 0. You can verify these by adding True + True, or if you are adventurous, by dividing True/False - Don't blame me for the exception! Most other datatypes can be used in a 'Boolean context' - and they have a criteria for when they should be considered False and when True. Any non-zero number is True. Any non-empty string is True, and so on. Compound data types in Python are quite similar to collections in Java. We have lists, tuples, sets and dictionaries. Let's peep into each of them one by one. Python provides for several compound data types - that can be used to process data in groups. A list is the simplest of these. A list can contain a collection of any type of data, including other lists. To define a list, the variables need to be placed sequentially in square brackets. The code below describes the definition of a list and how its elements can be accessed. Note that unlike many other languages, Python does not insist on having a consistent data type among the elements of the list. We can have a number, string, boolean and any other data types in the same list. We can access the elements of the list using indexes in the square brackets. l = [1, 'Hello', "World", True, [2, 'Learn', "Python", False]] print(l) # [1, 'Hello', 'World', True, [2, 'Learn', 'Python', False]] print(l[0]) # 1 print(l[4]) # [2, 'Learn', 'Python', False] print(l[4][0]) # 2 print(l[4][3]) # False print(l[-3]) # World Few lines of code say a lot more than what could be said in words. We can have indexing, nested indexing as well as reverse indexing. Python lists also support splicing, and provide several utility functions to add / remove / change data in the list. Check out the code below: letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g'] print(letters) # ['a', 'b', 'c', 'd', 'e', 'f', 'g'] print(letters[:]) # ['a', 'b', 'c', 'd', 'e', 'f', 'g'] letters[2:5] = ['C', 'D', 'E'] print(letters) # ['a', 'b', 'C', 'D', 'E', 'f', 'g'] letters[2:5] = [] print(letters) # ['a', 'b', 'f', 'g'] letters[2:2] = ['c', 'd', 'e'] print(letters) # ['a', 'b', 'c', 'd', 'e', 'f', 'g'] letters.append('h') print(letters) # ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'] print(len(letters)) # 8 letters.remove('a') print(letters) # ['b', 'c', 'd', 'e', 'f', 'g', 'h'] letters.pop() print(letters) # ['b', 'c', 'd', 'e', 'f', 'g'] del letters[2] print(letters) # ['b', 'c', 'e', 'f', 'g'] del letters[2:4] print(letters) # ['b', 'c', 'g'] letters.reverse() print(letters) # ['g', 'c', 'b'] Notice that the output of print(letters) is same as that of print(letters[:]). But, there is a difference. The letters[:] is not the same object as letters. It is a copy of the original - a shallow copy. Tuples are similar to lists but, have one marked difference. Tuples are immutable. They cannot be changed once they are defined. Naturally, tuples provide most of the methods that lists provide - except any method that would modify the list. You might ask, what is the advantage of forcing such a restriction? It is speed! Due to its immutability, a tuple can be implemented differently from lists - with more focus on speed of execution. As mentioned before, Python being an interpreted language has to lag in speed. But, optimizations like these take it far ahead of others. Tuples are much faster than Lists and are used where we know that the data in the list is not going to change - this is a common scenario. One major syntactical difference between List and Tuple is that a tuple is enclosed in circular brackets '(. . .)', while List is enclosed in square brackets '[. . .]'. You can convert a list to tuple and tuple to list by typecasting. Python also allows you to define a tuple without any brackets - because it is the most natural sequence for Python. A tuple can be converted to a list and a list can be converted to a tuple. Check out the code below for more. t = (1, 2, 3) print(t) # (1, 2, 3) t = 1, 2, 3 print(t) # (1, 2, 3) t = (1, 2, 'String', (3, 4, "String 2"), [1, 2, 3]) print(t) # (1, 2, 'String', (3, 4, 'String 2'), [1, 2, 3]) print(t[4]) # [1, 2, 3] t[4].extend([2, 3, 4]) print(t) # (1, 2, 'String', (3, 4, 'String 2'), [1, 2, 3, 2, 3, 4]) l = list(t) print(l) # [1, 2, 'String', (3, 4, 'String 2'), [1, 2, 3, 2, 3, 4]] t = tuple(l) print(t) # (1, 2, 'String', (3, 4, 'String 2'), [1, 2, 3, 2, 3, 4]) Note that although the tuple is immutable, a list contained in the tuple can be modified - because the tuple just contains the reference to the list object. The reference should not change. The list itself may be modified. Sets are similar to their counterparts in other languages. As the name suggests, they ensure a distinct set of elements. Any duplicates are ignored. Sets do not have any order of elements. They are defined by data enclosed in curly braces - '{. . .}'. A set can be typecast to and from lists or tuples. Sets define various methods for manipulation. s = {1, "String", ('1', 'Tuple'), 1, 2} print(s) # {1, 'String', 2, ('1', 'Tuple')} s.add(1) print(s) # {1, 'String', 2, ('1', 'Tuple')} s.add(3) print(s) # {1, 'String', 3, 2, ('1', 'Tuple')} s.remove(1) print(s) # {'String', 3, 2, ('1', 'Tuple')} # remove throws an exception and discard just ignores any attempt to remove non existent element s.discard("Strings") print(s) # {'String', 3, 2, ('1', 'Tuple')} s.pop() print(s) # {3, 2, ('1', 'Tuple')} s.clear() print(s) # set() Sets are choosy about the elements that they allow. For example, you cannot have a list inside a set. The elements have to be immutable and "hashable". Dictionaries are a special set of keys with a value associated with each key. You can work with a dictionary as below: d = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'} print(d) # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'} print(d['key1']) # value1 d['key7'] = 'value7' print(d) # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3', 'key7': 'value7'} del d['key7'] print(d) # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'} d['key1'] = 'New Value 1' print(d) # {'key1': 'New Value 1', 'key2': 'value2', 'key3': 'value3'} The above code describes the most common functionalities of a dictionary. Programming is all about data and decisions. In the above section, we saw a few ways data can be stored an processed in Python. Now, let us check out how decisions can be made, how code can be made to flow from one line to another. Then we will see a small code snippet to demonstrate basics of code flow. One major point where Python improves over most other languages is that it forces the developers to write readable code. It forces developers to indent the code; making it more readable. There are some developers who just have to write unreadable code - and they do find ways around Python as well. Python does not use the curly braces {. . .} to define a block of code. A block of code is defined by its indentation. Consider the two blocks of code below: a = 0 while a<10: a = a+1 print(a) a = 0 while a<10: a = a+1 print(a) They show a typical while loop in Python. Note the ':' after the condition, followed by code that is indented. In the first block, both lines of code are indented, while the second block has print(a) outside the block. That made a big difference to the code flow. The print(a) when indented, was considered as a part of the while loop - hence executed 10 times. But, the same when not indented, is executed only once at the end of the loop. This holds true for any block of code - if / elif / else / for / while / def (function) / class - anything that is appropriately indented is part of the block. Else, it is not. Python does not insist on any specifics about space/tab indentation, number of spaces, etc. But the convention - that everyone follows - dictates that it should be 4 spaces. I am not going to bore you (and myself) with the details of if / elif / else / while / for... We know them too well already. There are some subtle improvements in Python and we will see them as we go. Suffices here to say - Python provides for them. We will start with syntax of the basics and then move further. I plan to just brush through the basics and jump to something more interesting. The code snippet below covers the basics of control flow. Check it out on your IDE! # Define an empty list primes = [] # Define an empty set / map divisors = {} # Loop through all the numbers from 2 to 100. for n in range(2, 100): divisors[n] = [] # Loop through all values in list of primes for p in primes: # Break out of the loop if the number is # divisible by any of the primes if (not(n % p)): divisors[n].append(p) # This else block will be executed only if the # above for loop exits normally without a break # - implying that the number is prime. else: primes.append(n) # Print the list of prime numbers and divisors print(primes) print(divisors) Anyone who understands basic programming will surely understand what this code is doing. It just identifies the lists of divisors and prime numbers among the numbers 2-100. The point to note here is the way code indentation defines the flow of the code. Most interesting is the else block. Because of its indentation, it is applied to the for loop rather than the if. Yes, Python also provides for an else block on a for loop - that is executed only if the loop reaches its natural end, without breaking off anywhere midway.. Modularity is one of the basic components of any language. Any decent language - be it a low level assembly language or a 4G language that generates code, has to provide some mechanism that allows us to reuse what we have done once. It provides for some way of extracting common functionality and hiding its complexity. Of course, Python adds its own flavor to this. Let us see how. Python provides modularity in three forms - Functions (or Methods), Modules, and Classes. 'Demonstrate Python Functions' def getFunction(full=True): 'Outer Function' print(getFunction.__doc__) def p(frm=0, to=1, step=1): 'Inner Function' print(p.__doc__) return (x ** 3 for x in range(frm, to, step)) if (full): return p else: return lambda frm = 0, to = 1, step = 1: (x ** 3 \ for x in range(frm, to, step)) print(__doc__) t = getFunction() print("Check the elaborate function") for v in t(step=1, to=10): print(v) t = getFunction(False) print("Check the lambda function") for v in t(1, 5): print(v) As shown above, functions can be abbreviated using lambda functions. That saves the lines of code and can be used to improve performance - so long as it is readable. Python provides for a concept similar to Java Docs. The first line in a function - if a single quoted string, is considered the function doc. But, Python takes this a step further. It is possible to use this value in code! Modules provide a way to reuse code. A module is simply a file containing python code that we can 'import' into our code. import re' if __name__ == '__main__': print (plural('abc')) print (plural('def')) print (plural('des')) print (plural('xyz')) Check out the code above. The first line imports a module re - that is built into Python. As the name suggests, it is meant for regular expressions. It has several methods related to regular expression search, replace, etc. All the methods / objects inside this module are invoked with the prefix of re. You can also notice the line if __name__ == '__main__': before the main code starts. This is a useful construct in any code that your write. It helps anyone who might import this code as a module. When you run this code as a standalone code, it will execute the code under this if clause. But, if anyone imports this module, it is most likely that he just wants the methods in this module, he does not want to run the code outside these methods. This construct prevents such an accident. The built in dir() function can be used to identify at runtime, the list of names defined within a given module. If you like, you can import only a part of the module by using the (from module import method) construct. Packages are a common means of avoiding name clashes. You can import a module from a package using the (from package import module) construct. Python packages are similar to Java. Each package should have its own folder. You can have sub packages in subfolders. One additional requirement that Python imposes for packages is that a package folder should have the init.py file. Python will consider a folder as a package only if it has this script. This file could be empty, or it can define the value of all. The all is the list of modules defined in the package - that would be imported if the user invokes (from package import *). This is a helpful construct - something like the main inside a module file. It prevents unwanted execution of any additional code that is saved in the package folder - it allows you to save additional code in the package folder. The init.py script can also execute any initialization code for the package. After reading the simple scripts we have seen so far, one might wonder why is it called an object oriented language. Yes, Python does not come in way of plain functional code, and allows you to write simple script to do small chunks of tasks. But Python is object oriented to its core! Everything in Python is an object - everything including the code itself! Before we jump into the Python implementation of classes, and object orientation in general. Let's introspect to ask ourselves, what exactly do we mean by object oriented code? What is good or bad about object oriented code? Why is it more maintainable and when is it not efficient? What is an object? In software, an object is defined as "something that has a defined behavior influenced by some information related to it". Any software, whether it is functional or object oriented, implements some functionality for objects. What matters is the point of focus - whether the functionality is in focus or the object itself? A functional code would have all the functionality separated from the information, whereas a good object oriented code should have the information clubbed with the behavior. The language used to do this is not so important. You can have Java code that is functional in essence and also have C code that is object oriented in essence. What matters is the spirit of clubbing together the information with the behavior of the object. There is no good or bad about functional or object oriented code. Both are equally good in their own context. What is important is identifying which one is required in the current scenario and then applying it appropriately. Most often it is a mixture. For example, a properties file used to configure the system pulls out the information - thus adding a functional flavor to it. Or a function static variables in a C code clubs the behavior with the data - making it object oriented. With that, let us get into the details of object oriented coding in Python. From the point of view of semantics, Python provides for class definition, inheritance, constructor, destructor and member variables. For some strange reason, they forgot to add private members. Python allows you all the freedom, but provides guidelines for discipline with conventions. Python does not let you enforce private members, but it is a universally accepted convention that any member variable with name starting with _ or __ has a special meaning and should not be touched by 'outsiders'. Developers use this for adding private members. Python provides two special member methods - init and del. These are similar to the constructor and destructor. Needless to say that constructor is invoked when the object is created and can be used to initialize any members, while destructor is invoked in the cleanup process and is used to perform any cleanup activity. A class can define several member methods and variables. One peculiar point about member functions is that they all must have one first parameter - self. Python compiler does not enforce the name "self". But, convention dictates it. Don't use any other word there if you feel that someone somewhere might ever peek into your code. This parameter is not passed to the method when calling it in the context of an object. But, the runtime takes care of passing a reference to the particular object in there. Let's check out this example code that gives basic details: # A Sample Base Class to demostrate basic semantics class Base: "A Sample Base Class" def __init__(self): print("Base Class Constructor") self._base_member_variable_ = 0 def __del__(self): print("Base Class Constructor") def printBaseValue(self): print("Base Class: " + str(self._base_member_variable_)) # A Sample Derrived Class to demostrate basic semantics class Derrived(Base): "A Sample Derrived Class" def __init__(self): super(Derrived, self).__init__() print("Derrived Class Constructor") self._derrived_member_variable_ = 1 def __del__(self): print("Derrived Class Constructor") super(Derrived, self).__del__() def printDerrivedValue(self): print("Derrived Class: " + str(self._derrived_member_variable_)) print("Derrived Class: " + str(self._base_member_variable_)) # A Python method to check out the classes defined above. def checkout(): o = Derrived() print(o.__class__) print(o.__doc__) o.printBaseValue() o.printDerrivedValue() checkout(); The output of this code looks like this: Base Class Constructor Derrived Class Constructor A Sample Derrived Class Base Class: 0 Derrived Class: 1 Derrived Class: 0 Derrived Class Destructor Base Class Destructor Notice the following points in the code above: A derived class id defined with the base class as a parameter in its definition. Python allows for Multiple Inheritance. In case of Multiple Inheritance, we can have clashes in method names. Python takes care of this by giving higher priority to the first parent in the list. All the methods in the class are defined with one minimum parameter (self). This is not passed to the methods when they are invoked in the context of the object. The interpreter takes care of passing a reference of the object in this parameter. A unique feature in Python is the support for code documentation. The line immediately after the class keyword is the class document. This can be a multiline or single line string. This is not just a code comment that is used for better readability, but Python allows you to use this documentation at runtime. What more, you can also modify this at runtime! Note that the member variables in the class are not explicitly declared anywhere. They are just assigned values in the code. And they are available after that. Since the variables are not private, we can always create new variables in an object - at runtime. Thus, the variables are not really member variables - they are not tied to the class definition - they are just associated with the given instance. But, Python reflection code is powerful enough to show us this association and lets us manipulate them. The object is created by invoking the constructor. We do not need a new keyword while creating objects in Python. The destructor is called when the object goes out of scope and needs to be destroyed. If the parent class constructor and destructor should be invoked, we need to call them explicitly. The method 'super' is part of the reflection API, that lets you identify the superclass of the given child class. This short structure of Python's Object Oriented code allows for infinite possibilities. Having seen the major parts of the core language, let us now look into some of additional frills that make code a lot easier.. The folks who made Python were not satisfied with simple Generators. They wanted to go one step further. Why do we have to create a new class or method for something that can be done by just one line of code? Generator expressions do just that! Naturally they are not as versatile as iterators and generators. But there are times when we really do not need all that. For example, if you want the list of first 10 cubes, you just need a single line of code! print([x**3 for x in range(10)]) Everything in Python is an object - so is an exception. Any exception is an instance of a class that extends the common base class - Exception. You can 'raise' an exception, using an object of the Exception class. Or else, you can just give the class name as a parameter to the 'raise' command. Python will take care of creating an object for it. class B. The concept of Exceptions is not new in Python. Exceptions have been used in several other languages in the past, and most developers are very familiar with them. But the interesting twist that Python provides is because of the flexibility of Python classes and objects. Now you can pass in any damn information with any exception. All you need to do is to create the instance of the exception object, set the object attributes and then raise it! try: e = Exception('Additional information') e.more_info = 'Some more information' raise e except Exception as e: print(type(e)) print(e.args) print(e.more_info) This prints the type of the Exception (Exception), followed by a tuple containing the one argument that was passed in while creating the exception. You can have multiple arguments there. Next line prints 'Some more information' about the exception. This opens infinite possibilities for passing data from the exception to the catch block. You can send out not just strings, but any object that could be useful to the catch block. Such minor flexibilities in Python open up infinite possibilities when you design and code!
https://blog.thewiz.net/python-crash-course
CC-MAIN-2022-05
en
refinedweb
Populate a stat structure #include <sys/iofunc.h> int iofunc_stat( resmgr_context_t* ctp, iofunc_attr_t* attr, struct stat* stat ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically.. QNX Neutrino iofunc_attr_t, iofunc_stat_default(), iofunc_time_update(), resmgr_context_t, stat() Writing a Resource Manager Resource Managers chapter of Getting Started with QNX Neutrino
http://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/i/iofunc_stat.html
CC-MAIN-2022-05
en
refinedweb
@auth0/auth0-spa-js@auth0/auth0-spa-js Auth0 SDK for Single Page Applications using Authorization Code Grant Flow with PKCE. Table of ContentsTable of Contents - Documentation - Installation - Getting Started - Contributing - Support + Feedback - Frequently Asked Questions - Vulnerability Reporting - What is Auth0 - License DocumentationDocumentation InstallationInstallation From the CDN: <script src=""></script> npm install @auth0/auth0-spa-js yarn add @auth0/auth0-spa-js Getting StartedGetting Started Auth0 ConfigurationAuth0 Configuration Create a Single Page Application in the Auth0 Dashboard. If you're using an existing application, verify that you have configured the following settings in your Single Page Application: - Click on the "Settings" tab of your application's page. - Ensure that "Token Endpoint Authentication Method" under "Application Properties" is set to "None" -: - Allowed Web Origins: These URLs should reflect the origins that your application is running on. Allowed Callback URLs may also include a path, depending on where you're handling the callback (see below). Take note of the Client ID and Domain values under the "Basic Information" section. You'll need these values in the next step. Creating the clientCreating the client Create an Auth0Client instance before rendering or initializing your application. You should only have one instance of the client. import createAuth0Client from '@auth0/auth0-spa-js'; //with async/await const auth0 = await createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>' }); //with promises createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>' }).then(auth0 => { //... }); //or, you can just instantiate the client on it's own import { Auth0Client } from '@auth0/auth0-spa-js'; const auth0 = new Auth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>' }); //if you do this, you'll need to check the session yourself try { await getTokenSilently(); } catch (error) { if (error.error !== 'login_required') { throw error; } } 1 - Login1 - Login <button id="login">Click to Login</button> //with async/await //redirect to the Universal Login Page document.getElementById('login').addEventListener('click', async () => { await auth0.loginWithRedirect(); }); //in your callback route (<MY_CALLBACK_URL>) window.addEventListener('load', async () => { const redirectResult = await auth0.handleRedirectCallback(); //logged in. you can get the user profile like this: const user = await auth0.getUser(); console.log(user); }); //with promises //redirect to the Universal Login Page document.getElementById('login').addEventListener('click', () => { auth0.loginWithRedirect().catch(() => { //error while redirecting the user }); }); //in your callback route (<MY_CALLBACK_URL>) window.addEventListener('load', () => { auth0.handleRedirectCallback().then(redirectResult => { //logged in. you can get the user profile like this: auth0.getUser().then(user => { console.log(user); }); }); }); 2 - Calling an API2 - Calling an API <button id="call-api">Call an API</button> /); }); //with promises document.getElementById('call-api').addEventListener('click', () => { auth0 .getTokenSilently() .then(accessToken => fetch('', { method: 'GET', headers: { Authorization: `Bearer ${accessToken}` } }) ) .then(result => result.json()) .then(data => { console.log(data); }); }); 3 - Logout3 - Logout <button id="logout">Logout</button> import createAuth0Client from '@auth0/auth0-spa-js'; document.getElementById('logout').addEventListener('click', () => { auth0.logout(); }); You can redirect users back to your app after logging out. This URL must appear in the Allowed Logout URLs setting for the app in your Auth0 Dashboard: auth0.logout({ returnTo: '' }); Data caching optionsData caching options The SDK can be configured to cache ID tokens and access tokens either in memory or in local storage. The default is in memory. This setting can be controlled using the cacheLocation option when creating the Auth0 client. To use the in-memory mode, no additional options need are required as this is the default setting. To configure the SDK to cache data using local storage, set cacheLocation as follows: await createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>', cacheLocation: 'localstorage' // valid values are: 'memory' or 'localstorage' }); Important: This feature will allow the caching of data such as ID and access tokens to be stored in local storage. Exercising this option changes the security characteristics of your application and should not be used lightly. Extra care should be taken to mitigate against XSS attacks and minimize the risk of tokens being stolen from local storage. Creating a custom cacheCreating a custom cache The SDK can be configured to use a custom cache store that is implemented by your application. This is useful if you are using this SDK in an environment where more secure token storage is available, such as potentially a hybrid mobile app. To do this, provide an object to the cache property of the SDK configuration. The object should implement the following functions. Note that all of these functions can optionally return a Promise or a static value. Here's an example of a custom cache implementation that uses sessionStorage to store tokens and apply it to the Auth0 SPA SDK: const sessionStorageCache = { get: function (key) { return JSON.parse(sessionStorage.getItem(key)); }, set: function (key, value) { sessionStorage.setItem(key, JSON.stringify(value)); }, remove: function (key) { sessionStorage.removeItem(key); }, // Optional allKeys: function () { return Object.keys(sessionStorage); } }; await createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>', cache: sessionStorageCache }); Note: The cache property takes precedence over the cacheLocation property if both are set. A warning is displayed in the console if this scenario occurs. We also export the internal InMemoryCache and LocalStorageCache implementations, so you can wrap your custom cache around these implementations if you wish. Refresh TokensRefresh Tokens Refresh tokens can be used to request new access tokens. Read more about how our refresh tokens work for browser-based applications to help you decide whether or not you need to use them. To enable the use of refresh tokens, set the useRefreshTokens option to true: await createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>', useRefreshTokens: true }); Using this setting will cause the SDK to automatically send the offline_access scope to the authorization server. Refresh tokens will then be used to exchange for new access tokens instead of using a hidden iframe, and calls the /oauth/token endpoint directly. This means that in most cases the SDK does not rely on third-party cookies when using refresh tokens. Note This configuration option requires Rotating Refresh Tokens to be enabled for your Auth0 Tenant. Refresh Token fallbackRefresh Token fallback In all cases where a refresh token is not available, the SDK falls back to the legacy technique of using a hidden iframe with prompt=none to try and get a new access token and refresh token. This scenario would occur for example if you are using the in-memory cache and you have refreshed the page. In this case, any refresh token that was stored previously would be lost. If the fallback mechanism fails, a login_required error will be thrown and could be handled in order to put the user back through the authentication process. Note: This fallback mechanism does still require access to the Auth0 session cookie, so if third-party cookies are being blocked then this fallback will not work and the user must re-authenticate in order to get a new refresh token. OrganizationsOrganizations Organizations is a set of features that provide better support for developers who build and maintain SaaS and Business-to-Business (B2B) applications. Log in to an organizationLog in to an organization Log in to an organization by specifying the organization parameter when setting up the client: createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', redirect_uri: '<MY_CALLBACK_URL>', organization: '<MY_ORG_ID>' }); You can also specify the organization when logging in: // Using a redirect client.loginWithRedirect({ organization: '<MY_ORG_ID>' }); // Using a popup window client.loginWithPopup({ organization: '<MY_ORG_ID>' }); Accept user invitationsAccept user invitations Accept a user invitation through the SDK by creating a route within your application that can handle the user invitation URL, and log the user in by passing the organization and invitation parameters from this URL. You can either use loginWithRedirect or loginWithPopup as needed. const url = new URL(invitationUrl); const params = new URLSearchParams(url.search); const organization = params.get('organization'); const invitation = params.get('invitation'); if (organization && invitation) { client.loginWithRedirect({ organization, invitation }); } Advanced optionsAdvanced options Advanced options can be set by specifying the advancedOptions property when configuring Auth0Client. Learn about the complete set of advanced options in the API documentation createAuth0Client({ domain: '<AUTH0_DOMAIN>', client_id: '<AUTH0_CLIENT_ID>', advancedOptions: { defaultScope: 'email' // change the scopes that are applied to every authz request. **Note**: `openid` is always specified regardless of this setting } });. Frequently Asked QuestionsFrequently Asked Questions For a rundown of common issues you might encounter when using the SDK, please check out the FAQ. LicenseLicense This project is licensed under the MIT license. See the LICENSE file for more info.
https://www.npmjs.com/package/@auth0/auth0-spa-js
CC-MAIN-2022-05
en
refinedweb
Grad-CAM is a popular technique for visualizing where a convolutional neural network model is looking. Grad-CAM is class-specific, meaning it can produce a separate visualization for every class present in the image: Example cat and dog Grad-CAM visualizations modified from Figure 1 of the Grad-CAM paper Grad-CAM can be used for weakly-supervised localization, i.e. determining the location of particular objects using a model that was trained only on whole-image labels rather than explicit location annotations. Grad-CAM can also be used for weakly-supervised segmentation, in which the model predicts all of the pixels that belong to particular objects, without requiring pixel-level labels for training: Part of Figure 4 of the Grad-CAM paper showing predicted motorcycle and person segmentation masks obtained by using Grad-CAM heatmaps as the seed for a method called SEC (Seed, Expand, Constrain) Finally, Grad-CAM can be used to gain better understanding of a model, for example by providing insight into model failure modes: Figure 6 of the Grad-CAM paper, showing example model failures along with Grad-CAM visualizations that illustrate why the model made incorrect predictions. The main reference for this post is the expanded version of the Grad-CAM paper: Selvaraju et al. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization.” International Journal of Computer Vision 2019. A previous version of the Grad-CAM paper was published in the International Conference on Computer Vision (ICCV) 2017. Grad-CAM as Post-Hoc Attention Grad-CAM is a form of post-hoc attention, meaning that it is a method for producing heatmaps that is applied to an already-trained neural network after training is complete and the parameters are fixed. This is distinct from trainable attention, which involves learning how to produce attention maps (heatmaps) during training by learning particular parameters. For a more in-depth discussion of post-hoc vs. trainable attention, see this post. Grad-CAM as a Generalization of CAM Grad-CAM does not require a particular CNN architecture. Grad-CAM is a generalization of CAM (class activation mapping), a method that does require using a particular architecture. CAM requires an architecture that applies global average pooling (GAP) to the final convolutional feature maps, followed by a single fully connected layer that produces the predictions: In the sketch above, the squares A1 (red), A2 (green), and A3 (blue) represent feature maps produced by the last convolutional layer of a CNN. To use the CAM method upon which Grad-CAM is based, we first take the average of each feature map to produce a single number per map. In this example we have 3 feature maps and therefore 3 numbers; the 3 numbers are shown as the tiny colored squares in the sketch. Then we apply a fully-connected layer to those 3 numbers obtain classification decisions. For the output class “cat” the prediction will be based on 3 weights (w1, w2, and w3). To make a CAM heatmap for “cat”, we perform a weighted sum of the feature maps, using the “cat” weights of the final fully-connected layer: Note that the number of feature maps doesn’t have to be three – it an be any arbitrary k. For a more detailed explanation of how CAM works, please see this post. Understanding CAM is important for understanding Grad-CAM, as the two methods are closely related. Part of the motivation for the development of Grad-CAM was to come up with a CAM-like method that does not restrict the CNN architecture. Grad-CAM Overview The basic idea behind Grad-CAM is the same as the basic idea behind CAM: we want to exploit the spatial information that is preserved through convolutional layers, in order to understand which parts of an input image were important for a classification decision. Similar to CAM, Grad-CAM uses the feature maps produced by the last convolutional layer of a CNN. The authors of Grad-CAM argue, “we can expect the last convolutional layers to have the best compromise between high-level semantics and detailed spatial information.” Here is a sketch showing the parts of a neural network model relevant to Grad-CAM: The CNN is composed of some convolutional layers (shown as “conv” in the sketch). The feature maps produced by the final convolutional layer are shown as A1, A2, and A3, the same as in the CAM sketch. At this point, for CAM we would need to do global average pooling followed by a fully connected layer. For Grad-CAM, we can do anything – for example, multiple fully connected layers – which is shown as “any neural network layers” in the sketch. The only requirement is that the layers we insert after A1, A2, and A3 have to be differentiable so that we can get a gradient. Finally, we have our classification outputs for airplane, dog, cat, person, etc. The difference between CAM and Grad-CAM is in how the feature maps A1, A2, and A3 are weighted to make the final heatmap. In CAM, we weight these feature maps using weights taken out of the last fully-connected layer of the network. In Grad-CAM, we weight the feature maps using “alpha values” that are calculated based on gradients. Therefore, Grad-CAM does not require a particular architecture, because we can calculate gradients through any kind of neural network layer we want. The “Grad” in Grad-CAM stands for “gradient.” The output of Grad-CAM is a “class-discriminative localization map”, i.e. a heatmap where the hot part corresponds to a particular class: If there are 10 possible output classes, then for a particular input image, you can make 10 different Grad-CAM heatmaps, one heatmap for each class. Grad-CAM Details First, a bit of notation: In other words, y^c is the raw output of the neural network for class c, before the softmax is applied to transform the raw score into a probability. Grad-CAM is applied to a neural network that is done training. The weights of the neural network are fixed. We feed an image into the network to calculate the Grad-CAM heatmap for that image for a chosen class of interest. Grad-CAM has three steps: Step 1: Compute Gradient The particular value of the gradient calculated in this step depends on the input image chosen, because the input image determines the feature maps A^k as well as the final class score y^c that is produced. For a 2D input image, this gradient is 3D, with the same shape as the feature maps. There are k feature maps each of height v and width u, i.e. collectively the feature maps have shape [k, v, u]. This means that the gradients calculated in Step 1 are also going to be of shape [k, v, u]. In the sketch below, k=3 so there are three u x v feature maps and three u x v gradients: Step 2: Calculate Alphas by Averaging Gradients In this step, we calculate the alpha values. The alpha value for class c and feature map k is going to be used in the next step as a weight applied to the feature map A^k. (In CAM, the weight applied to the feature map A^k is the weight w_k in the final fully connected layer.) Recall that our gradients have shape [k, v, u]. We do pooling over the height v and the width u so we end up with something of shape [k, 1, 1] or to simplify, just [k]. These are our k alpha values. Step 3: Calculate Final Grad-CAM Heatmap Now that we have our alpha values, we use each alpha value as the weight of the corresponding feature map, and calculate a weighted sum of feature maps as the final Grad-CAM heatmap. We then apply a ReLU operation to emphasize only the positive values and turn all the negative values into 0. Won’t the Grad-CAM Heatmap Be Too Small? The Grad-CAM heatmap is size u x v, which is the size of the final convolutional feature map: You may wonder how this makes sense, since in most CNNs the final convolutional features are quite a bit smaller in width and height than the original input image. It turns out it is okay if the u x v Grad-CAM heatmap is a lot smaller than the original input image size. All we need to do is up-sample the tiny u x v heatmap to match the size of the original image before we make the final visualization. For example, here is a small 12 x 12 heatmap: Now, here is the same heatmap upsampled to 420 x 420 using the Python package cv2: The code to visualize the original small low-resolution heatmap and turn it into a big high-resolution heatmap is here: import cv2 import matplotlib import matplotlib.pyplot as plt small_heatmap = CalculateGradCAM(class='cat') plt.imshow(small_heatmap, cmap='rainbow') #Upsample the small_heatmap into a big_heatmap with cv2: big_heatmap = cv2.resize(small_heatmap, dsize=(420, 420), interpolation=cv2.INTER_CUBIC) plt.imshow(big_heatmap, cmap='rainbow') Grad-CAM Implementation A Pytorch implementation of Grad-CAM is available here. More Grad-CAM Examples Grad-CAM has been applied in numerous research areas and is particularly popular in medical images. Here are a few examples: Yang et al. “Visual Explanations From Deep 3D Convolutional Neural Networks for Alzheimer’s Disease Classification” Top row is CAM, bottom row is Grad-CAM. Kim et al. “Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities.” Grad-CAM visualizations from Woo et al. “CBAM: Convolutional Block Attention Module.” This paper is an example of a trainable attention mechanism (CBAM) combined with a post-hoc attention mechanism for visualization (Grad-CAM). Caveat: Explainability is Not Interpretability. Any Post-Hoc Attention Mechanism May Not Be Optimal for High-Stakes Decisions “Explainability” is not the same as “interpretability.” “Explainability” means that it’s possible to explain how a model made its decision, although the explanation is not guaranteed to make sense to humans, and the explanation is also not constrained to follow any known rules of the natural world. For example, a model may “explain” a boat classification by highlighting the water, or a model may “explain” a “severely ill” classification by highlighting a label within a medical image that indicates that the image was taken while the patient was lying down. The explanation is also not guaranteed to be fair or free from biases. “Interpretability” means that a model has been designed from the beginning to produce a human-understandable relationship between the inputs and the outputs. For example, logistic regression is an interpretable model, in which the design of the model results in weights that show which inputs contribute more or less to the final prediction. Rule-based methods are also interpretable. Grad-CAM is a technique for “explainability” meaning that it is meant to explain what a trained CNN did. Grad-CAM does not make a model “interpretable.” While Grad-CAM heatmaps often make sense, they aren’t required to make sense, and they must be used carefully – especially for sensitive applications like medical image interpretation, an area in which Grad-CAM is particularly popular. If you are working on weakly-supervised localization or weakly-supervised segmentation, Grad-CAM is definitely a useful method. If you are interested in “debugging” a model and gaining more insight into why the model is making certain mistakes, Grad-CAM is also useful. If you are working on an application with sensitive data used for real world, high-stakes decisions, any post-hoc attention mechanism (i.e. any method for making heatmaps that is “tacked on” after a network has been trained) including Grad-CAM is potentially inappropriate, depending on how it is going to be used. If you are interested in interpretable machine learning models, I recommend this excellent paper: Cynthia Rudin “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.” Caveat: Use Vanilla Grad-CAM, Not Guided Grad-CAM As another caveat to be aware of, the Grad-CAM paper mentions a variant of Grad-CAM called “Guided Grad-CAM” which combines Grad-CAM with another CNN heatmap visualization technique called “guided backpropagation.” I discuss guided backpropagation in this post and this post. The short summary is that recent work by Adebayo et al. and Nie et al. suggests that guided backpropagation is performing partial image recovery and acting like an edge detector, rather than providing insight into a trained model. Therefore, it is best not to use guided backpropagation. The good news is that the vanilla Grad-CAM method discussed in this post (i.e., Grad-CAM without guided backpropagation) passes Adebayo et al.’s sanity checks and is a great option to use. Summary - Grad-CAM is a popular technique for creating a class-specific heatmap based off of a particular input image, a trained CNN, and a chosen class of interest. - Grad-CAM is closely related to CAM. - Grad-CAM is compatible with any CNN architecture as long the layers are differentiable. - Grad-CAM can be used for understanding a model’s predictions, weakly-supervised localization, or weakly-supervised segmentation. - Grad-CAM is a method for explainability, not interpretability, and therefore should be used with caution in any sensitive domain. - Vanilla Grad-CAM is a better choice than Guided Grad-CAM. About the Featured Image The featured image is modified from Figures 1 and 20 of the Grad-CAM paper.
https://glassboxmedicine.com/2020/05/29/grad-cam-visual-explanations-from-deep-networks/
CC-MAIN-2022-05
en
refinedweb
ITK/Code Review Check List From KitwarePublicJump to navigationJump to search Code Review Check List The following is the list of coding style issues that must be verified on every class and file during a code review. - Filename must match class name - All files must have the copyright notice at the top [] - #define for class name in the headers - Brief class doxygen description - namespace igstk - Complete class doxygen description - Constructor/Destructor private/public - No acronyms in class name or method names - no unnecessary headers #included - Justify every public method - All member variables must be private - 100% code coverage (see dashboard) - All 'non-const' method must justify why they are not 'const' - Any information that is printed or displayed has to be legible to human eyes
https://itk.org/Wiki/index.php?title=ITK/Code_Review_Check_List&oldid=8005
CC-MAIN-2022-05
en
refinedweb
[hackers] [sbase] tail: Add rudimentary support to detect file truncation || sin This message : [ Message body ] [ More options ( top , bottom ) ] Related messages : [ Next message ] [ Previous message ] Contemporary messages sorted : [ by date ] [ by thread ] [ by subject ] [ by author ] [ by messages with attachments ] From : < git_AT_suckless.org > Date : Tue, 24 Mar 2015 23:53:39 +0100 (CET) commit a25a57f6ac74174bfbceeaf1255e64a028338474 Author: sin <sin_AT_2f30.org> Date: Mon Feb 9 14:41:49 2015 +0000 tail: Add rudimentary support to detect file truncation We cannot in general detect that truncation happened. At the moment we use a heuristic to compare the file size before and after a write happened. If the new file size is smaller than the old, we correctly handle truncation and dump the entire file to stdout. If it so happened that the new size is larger or equal to the old size after the file had been truncated without any reads in between, we will assume the data was appended to the file. There is no known way around this other than using inotify or kevent which is outside the scope of sbase. diff --git a/tail.c b/tail.c index 36fec77..11af1dd 100644 --- a/tail.c +++ b/tail.c _AT_@ -1,4 +1,6 @@ /* See LICENSE file for copyright and license details. */ +#include <sys/stat.h> + #include <limits.h> #include <stdint.h> #include <stdio.h> _AT_@ -59,6 +61,7 @@ usage(void) int main(int argc, char *argv[]) { + struct stat st1, st2; FILE *fp; size_t n = 10, tmpsize; int ret = 0, newline, many; _AT_@ -96,6 +99,8 @@ main(int argc, char *argv[]) if (many) printf("%s==> %s <==\n", newline ? "\n" : "", argv[0]); + if (stat(argv[0], &st1) < 0) + eprintf("stat %s:", argv[0]); newline = 1; tail(fp, argv[0], n); _AT_@ -108,8 +113,18 @@ main(int argc, char *argv[]) fflush(stdout); } if (ferror(fp)) - eprintf("readline '%s':", argv[0]); + eprintf("readline %s:", argv[0]); clearerr(fp); + /* ignore error in case file was removed, we continue + * tracking the existing open file descriptor */ + if (!stat(argv[0], &st2)) { + if (st2.st_size < st1.st_size) { + fprintf(stderr, "%s: file truncated\n", argv[0]); + rewind(fp); + } + st1 = st2; + } + sleep(1); } } fclose(fp); Received on Tue Mar 24 2015 - 23:53:39 CET This message : [ Message body ] Next message : git_AT_suckless.org: "[hackers] [sbase] Convert tail(1) to use size_t || FRIGN" Previous message : git_AT_suckless.org: "[hackers] [sbase] No need to free the buffer for every call to getline() || sin" Contemporary messages sorted : [ by date ] [ by thread ] [ by subject ] [ by author ] [ by messages with attachments ] This archive was generated by hypermail 2.3.0 : Wed Mar 25 2015 - 00:08:33 CET
http://lists.suckless.org/hackers/1503/6566.html
CC-MAIN-2022-05
en
refinedweb
In this article, We are going to talk about “C++ Program to Build a Hangman Word Guessing Game on Command Line”. Below, We can see the possibilities to perform this task. Let’s move. C++ Program to Build a Hangman Word Guessing Game on Command Line Program Code #include <iostream.h> #include <stdlib.h> #include <string.h> #include <conio.h> const int MAXLENGTH=80; const int MAX_TRIES=5; const int MAXROW=7; int letterFill (char, char[], char[]); void initUnknown (char[], char[]); int main () { char unknown [MAXLENGTH]; char letter; int num_of_wrong_guesses=0; char word[MAXLENGTH]; char words[][MAXLENGTH] = { ates”, ”, “Uruguay”, “USA”, “Uzbekistan”, “Vanuatu”, “Vatican”, “Venezuela”, “Vietnam”, “wales”, “Yemen”, “Zambia”, “Zimbabwe”, }; //choose and copy a word from array of words randomly randomize(); int n=random(10); strcpy(word,words[n]); // Initialize the secret word with the * character. initUnknown(word, unknown); // welcome the user cout << "nnWelcome to hangman...Guess a country Name"; cout << "nnEach letter is represented by a star."; cout << "nnYou have to type only one letter in one try"; cout << "nnYou have " << MAX_TRIES << " tries to try and guess the word."; cout << "n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"; // Loop until the guesses are used up while (num_of_wrong_guesses < MAX_TRIES) { cout << "nn" << unknown; cout << "nnGuess a letter: "; cin >> letter; // Fill secret word with letter if the guess is correct, // otherwise increment the number of wrong guesses. if (letterFill(letter, word, unknown)==0) { cout << endl << "Whoops! That letter isn't in there!" << endl; num_of_wrong_guesses++; } else { cout << endl << "You found a letter! Isn't that exciting!" << endl; } // Tell user how many guesses has left. cout << "You have " << MAX_TRIES - num_of_wrong_guesses; cout << " guesses left." << endl; // Check if they guessed the word. if (strcmp(word, unknown) == 0) { cout << word << endl; cout << "Yeah! You got it!"; break; } } if(num_of_wrong_guesses == MAX_TRIES) { cout << "nSorry, you lose...you've been hanged." << endl; cout << "The word was : " << word << endl; } getch(); return 0; } /* Take a one character guess and the secret word, and fill in the unfinished guessword. Returns number of characters matched. Also, returns zero if the character is already guessed. */ int letterFill (char guess, char secretword[], char guessword[]) { int i; int matches=0; for (i = 0; secretword[i]!=''; i++) { // Did we already match this letter in a previous guess? if (guess == guessword[i]) return 0; // Is the guess in the secret word? if (guess == secretword[i]) { guessword[i] = guess; matches++; } } return matches; } // Initialize the unknown word void initUnknown (char word[], char unknown[]) { int i; int length = strlen(word); for (i = 0; i < length; i++) unknown[i]='*'; unknown[i]=''; } // Project ends here Conclusion I hope this article helps you to know about “C++ Program to Build a Hangman Word Guessing Game on Command Line“. If you face any issues please let me know via the comment section. Share this article with other C/C++ developers via social networks.
https://codingdiksha.com/cpp-program-build-a-hangman-word-guessing-game-on-command-line/
CC-MAIN-2022-05
en
refinedweb
Introducing the Telerik UI for Blazor Early Preview Progress Telerik Jan 17 Originally published at telerik.com on Jan 17, 2019 ・9 min read! - What is Blazor - What is Razor Components - Blazor Recommended Reading - Built from the Ground-Up - 100% Native .NET - Experiment With Us - Telerik UI for Blazor Early Preview Mono WebAssembly runtime (the .NET runtime compiled to wasm), thus allowing .NET to run on the client’s browser inspiring the name “Blazor” (Browser + Razor). In this configuration the application’s resources including .dll files are delivered to the client and executed the Mono WebAssembly runtime. While the WebAssembly deployment of Blazor is sill in active development a server-side deployment option was introduced called Razor Components. Blazor Component Model The Blazor component model is refreshingly simple in its design. Components can contain markup (HTML) and logic (C#) in a single Razor (cshtml) file. The component is capable of handling data binding, events, and dependency injection all without JavaScript. The Counter Component below demonstrates the basic composition of a Blazor component. The counter component uses a basic HTML button to increment a counter field which is displayed within a paragraph tag. Because Blazor operates as a single page application all of the interactions in the component happen on the client. Updates to the browser’s Document Object Model (DOM) are handled by the Blazor framework though data binding. is the way the application is deployed. Instead of Blazor running via WebAssembly on the client, the Blazor framework runs on the server as an executable. In this mode, ASP.NET Core hosts the application and provides a communication channel using SignalR technology. Using SignalR the application sends UI updates and receives changes and events as binary packets of data over a web socket connection. Since only the changes are sent over the connection, the payload is small and efficient. Since Razor Components utilizes the Blazor framework, components can be used in both deployment types. Blazor Recommended Reading If all of this is new to you, and it’s most likely that it is, then we have some blog posts to catch you up on all things Blazor. The following articles should bring you up to speed, or if you’re biting your nails to check out what’s below, we’ve included the TLDR as well. - Blazor – A New Framework for Browser-based .NET Apps (DevReach 2018) - What’s Coming in .NET Core 3.0 - Blazor Q&A with Microsoft’s Daniel Roth - Goodbye JavaScript, Hello WebAssembly - A Breakdown of Blazor Project Types - Razor Components for a JavaScript-Free Front-end in 2019 TLDR Razor Components , is what originally started as Blazor Server-Side in early 2018. Blazor is a .NET (SPA) framework that is generally associated with .NET running on Web Assembly. However, Blazor is capable of running under multiple scenarios including server-side as Razor Components. - Razor is a popular template markup syntax for .NET - (Browser + Razor) Blazor is a .NET based web framework which can run on the client using WebAssembly or as: - All Blazor hosting models, both client and server-side, utilize C# APIs instead of JavaScript - Razor Components is expected to ship in ASP.NET Core 3.0 Built from the Ground-Up - 100% Native .NET Telerik UI for Blazor will not wrap existing jQuery/JavaScript products in C# and pretend it’s something new. With Telerik UI for Blazor we are starting from scratch and writing .NET community has expressed the need for front-end web tooling that does not require JavaScript and we’re happy to have the opportunity to serve this community through Blazor. Experiment With Us Blazor is an exciting prospect for .NET developers because it allows us to create full-stack .NET web application. We have received countless feedback items asking for our support this next generation platform. Initially we’re offering Telerik UI For Blazor as an early preview release. This development model closely resembles the effort being made by Microsoft with Blazor as we aim to release small sets of functionality in hopes to gain feedback and knowledge about how our customers use the product. During the experimental phase Telerik UI for Blazor will be a free trial for all and we hope that you will continue sharing with us your use cases, experience, road-blocks, and bugs. After downloading Telerik UI for Blazor you will receive an email with instructions on how to share your feedback, wishes and Blazor experiments with us. Telerik UI for Blazor Early Preview The initial offering will be small with just a few popular UI components including the Data Grid , Tab Set , and Buttons. Through customer feedback we plan to expand the number of components and rage of APIs. We believe that building to our customers needs and recommendations is the path to success. Prerequisites Install the Blazor SDK by following the steps outlined on the Blazor website Download the Telerik UI for Blazor NuGet package directly (optional) Use the Telerik NuGet feed as a package source Your First Blazor Project We recommend starting with the “Blazor (Server-side in ASP.NET Core)” project type when using the New Project dialog. This project type is also known as “Razor Components” and will eventually ship with ASP.NET Core 3.0. However, Telerik UI for Blazor will work on all Blazor project types. With the new project created, we’ll need to install our Telerik UI for Blazor NuGet dependency. Before adding the package, be sure to navigate to the project that contains the application’s UI. This will be either <project>.App or <project>.Client depending of the template you chose. Install the Telerik.UI.for.Blazor NuGet package. This will add the component library to your application. Register the components in the application. In the root of the application, locate the _ViewImports.cshtml file and add the following code. @using Kendo.Blazor @addTagHelper *,Kendo.Blazor - We’ll also need to reference the style sheet needed for the components. Locate the index.htmlfile in the /wwwrootfolder and add the following line. <link href="//kendo.cdn.telerik.com/2019.1.115/styles/kendo.material-v2.min.css" rel="stylesheet"> Notice we’re not referencing any JavaScript files, that’s because there aren’t any JavaScript dependencies. You’re now ready to start testing Telerik UI for Blazor. Blazor Data Grid The Telerik UI for Blazor Data Grid has quite a few features in this preview. The data grid in this release is capable of: data binding, sorting, paging, themes, templates, and in-cell editing, which only supports int, string and DateTime fields. Let’s see these features in action by replacing the hand coded table in the Fetch Data example with the Telerik Data Grid. First take a moment to run and explore the Fetch Data example at localhost/fetchdata. Locate the code for Fetch Data under the /Pages folder. Replace the entire table element with a KendoGrid component. Telerik UI for Blazor components use the Kendo namespace as it is a familiar reference to our existing front-end libraries and shares CSS code with those libraries. else { <KendoGrid Data=@forecasts Pageable=true PageSize=5 Sortable=true> <KendoGridColumn Field=@nameof(WeatherForecast.Date)> <Template> @($"{(context as WeatherForecast).Date:d}") </Template> </KendoGridColumn> <KendoGridColumn Field=@nameof(WeatherForecast.TemperatureC) /> <KendoGridColumn Field=@nameof(WeatherForecast.TemperatureF) /> <KendoGridColumn Field=@nameof(WeatherForecast.Summary) /> </KendoGrid> } The KendoGrid component binds the Data property to forecasts which is an array of the WeatherForecast object. The grid also has the Pageable, PageSize, and Sortable properties enabled. Inside of the KendoGrid component, we define child components for each field we would like displayed in the grid. Because this is all C# code, we can set the Field property with C#'s nameof operator giving us type safety. In addition, templates can be used to display custom formats, images, and even other UI components. Here a template is used to format the Date field. Blazor Tab Set The other major component included in this release is the KendoTabSet. The KendoTabSet supports multiple tab positions: Top (default), Bottom, Left, and Right. We can use Blazor’s bind attribute to demonstrate the tab positions at run-time. Locate the index.cshtml page under the /pages folder. Replace the page’s content with the following code. @using Kendo.Blazor.Components.TabStrip <h1>Hello, world!</h1> <select bind=@tabPosition> <option value=@KendoTabPosition.Top>Top</option> <option value=@KendoTabPosition.Left>Left</option> <option value=@KendoTabPosition.Right>Right</option> <option value=@KendoTabPosition.Bottom>Bottom</option> </select> <KendoTabStrip TabPosition=@tabPosition> <KendoTab Title="Sofia"> <h2>22<span>ºC</span></h2> <p>Sunny weather in Sofia</p> </KendoTab> <KendoTab Title="New York"> <h2>24<span>ºC</span></h2> <p>Partly Cloudy weather in New York</p> </KendoTab> <KendoTab Title="Paris"> <h2>21<span>ºC</span></h2> <p>Rainy weather in Paris</p> </KendoTab> </KendoTabStrip> @functions { KendoTabPosition tabPosition = KendoTabPosition.Top; } In this example we start by creating a simple select list element with all of the KendoTabStrip position values, KendoTabPosition.<value>. Most importantly, KendoTabPosition is a standard C# enum type so we get strongly typed values and intellisense here. Next a KendoTabStrip component is created with several KendoTab components that display some weather forecast data. The TapPosition property is bound to both the select element and KendoTabStrip through the a simple backing field tabPosition declared in @function. Because select is using the bind attribute, it automatically updates the tabPosition value when the option is changed, this allows us to modify the position at run-time. Summary JavaScript dependencies (there’s no jQuery this time folks). Throughout the year Microsoft will be working on Blazor (aka. Seeing is believing, so register to see all the new features – WEBINARS ARE COMING UP FAST! It will help you to follow along easily if you download the latest release here. Telerik Date/Time: Friday, January 18th @ 11:00 am - 12 pm EST Register Now Kendo UI Date/Time: Tuesday, Jan 22, 11:00 AM ET - 12:00 PM ET Register Now (open source and free forever ❤️) My Programming Journey So Far. It has been about 5 months now. Since I started making the change to pursue graphics programming. I have done a ton of work in that time. Between school and learning a new language and studying a whole lot of math. And in this time, I have decided why not start documenting my process. How To Make Your .NET Applications Less Quirky Rion Williams - Feb 12 Episode 015 - Calling the Web API from the frontend - ASP.NET Core: From 0 to overkill João Antunes - Feb 12 Using VSCode with Dotnet Core Nick Shoup - Feb 11
https://dev.to/progresstelerik/introducing-the-telerik-ui-for-blazor-early-preview-2f4c
CC-MAIN-2019-09
en
refinedweb
Visual Source Safe Admin Password Reset We used to use Visual Source Safe (VSS) 6.0 for projects, and so have some older projects that are not routinely accessed. So what happens when you forget the admin password a few years down the line? There are various suggestions around the net, but this little tool should help: Reset VSS 6 admin password Simply run in the Data directory of your VSS 6.0 project (where the file um.dat is located) using a command prompt, then rename the files as instructed and your admin password will become blank. Murthy said, March 27, 2006 @ 1:42 pm Simply excellent! Small app but great result. Pramela said, July 10, 2006 @ 7:24 am IT works.. thanks a lot Krishna said, January 24, 2007 @ 1:07 pm Simply Great PushThePramALot said, February 2, 2007 @ 10:48 pm phew, thanks man. This certainly saved the day today… the lead dev who maintained VSS changed the password and walked out. Works like a champ! Jan Michael said, May 18, 2007 @ 1:45 am It works! Your the man!! thanks… Vic said, October 2, 2007 @ 8:37 am Cool tool thanks for covering my ass! saguni said, November 27, 2007 @ 3:40 am Cool. Thanks. Yann P. said, January 10, 2008 @ 8:40 pm Worked like a charm. Thanks again for sharing it. Pierre Andreasson said, February 7, 2008 @ 9:54 am Nice work! Thank you! sanjay.ivar said, March 20, 2008 @ 4:52 am It’s working Thanks a lot. and also Thanks for sharing. Sanjay MGH said, May 8, 2008 @ 1:13 pm Magic! Saved my life Ed said, July 7, 2008 @ 3:43 pm Spot on. Other sites were complicated and laborious. tony said, July 22, 2008 @ 5:21 pm So sweet! Mike said, July 28, 2008 @ 6:39 pm Wonderful work, you have my heartfelt thanks. Syed Muhammad Salman said, August 1, 2008 @ 11:43 am Thank you soo much… Really it is great utility… Vijay Khapekar said, August 7, 2008 @ 7:39 am First of all, thanks a lot. Simply Superb. simply gr8. Amos Devakumar said, August 12, 2008 @ 8:53 am Cool and great work…! Thanks a lot. Praveen said, September 4, 2008 @ 9:24 pm What about the other users who are on VSS? Will they get affected? Does it reset only the admin user password? 42 said, September 4, 2008 @ 10:54 pm Honestly, it’s been so long I can’t remember. To be safe, backup your whole VSS folder tree and then you can always go back. You could create a small test project elsewhere and try it with that… Ravi said, September 18, 2008 @ 10:29 am Hey Really it’s excellent Thanks a lot Rafael Santos said, October 9, 2008 @ 7:26 pm This application works very well and does not affect the others users. By the way, the only file that’s modified is um.dat… so this is the only one you need to backup. Hey 42, do you allow I translate this tip to pt-BR and post in my website? There’s always someone with an old repository and a lost password… Rafael Santos said, October 9, 2008 @ 7:54 pm I forgot! Obviously I will put the credits e a link to this page! Stephen Davis said, October 16, 2008 @ 9:49 pm Does this application work with VSS 2005? 42 said, October 17, 2008 @ 1:12 pm No, only VSS 6.0 VSS reset admin password - Mauricio Rojas Blog said, October 21, 2008 @ 4:30 pm […] The tool from this page […] M4ndybase said, October 21, 2008 @ 4:45 pm It’s fantastic. Thanks a lot for sharing this with us. [] Bhavika said, November 3, 2008 @ 12:31 pm Great utility !! Thanks !!!! Jeetendra Prasad said, November 9, 2008 @ 7:25 am Thanks man. You just made my day. Keep the good work. — Jeetendra Prasad BurtRM said, December 1, 2008 @ 5:18 pm You rock!!! The little exe definitely saved the day. Thanks!! M4ndybase said, December 2, 2008 @ 7:31 pm Thanks a lot. sridhar said, December 3, 2008 @ 11:12 am wonderful. great. splendid. marvelous. amazing. rocking. mind blowing. Kalyan said, December 30, 2008 @ 9:48 am Simply Great asutosha said, January 15, 2009 @ 5:42 am Thanks , it is aGreat Tool. Lepi Nesha said, February 11, 2009 @ 10:08 am Huge thank you man. HUUUUGE!!! Glenn said, March 17, 2009 @ 10:44 pm Thank you! Aruna said, March 31, 2009 @ 6:21 am Thanks a lot! Great tool ! Resetando a senha de Admin no Visual Source Safe - RafaelSantos.com said, May 29, 2009 @ 7:52 pm […] após uma breve “googlada” deparei-me com um blog que indicava um minúsculo programa para resetar a senha do usuário […] Hanna said, August 26, 2009 @ 11:53 am Gr8 tool. Works like a charm. Syed said, October 7, 2009 @ 7:44 am Simply fantastic……..Works anthony said, November 4, 2009 @ 10:10 pm Simply Great… Martin said, January 5, 2010 @ 2:31 pm Hi, I have the Visual SourceSafe Version 8.0 and forget the password for user admin. do you have any soft to resete that user in this version??.. Thanks a lot!!!.. Krishna said, January 9, 2010 @ 6:51 am …rename the files as instructed and your admin password will become blank. How many files would you have to rename ? 42 said, January 12, 2010 @ 12:25 am Nope, just for the old version. Things changed big time between now and then. 42 said, January 12, 2010 @ 12:27 am It’s been so long! 2 I believe – the current um.dat to back it up. The modified one to um.dat. The tool tells you when you run it. As always, backup everything before you start to make sure you’re safe. Baddy said, January 14, 2010 @ 6:50 am Thanks man. U r simply gr8… myths said, February 17, 2010 @ 10:35 am Thanks a ton.. helped me alot.. Deva said, March 4, 2010 @ 3:11 pm sorry boss not working Ansari said, March 25, 2010 @ 8:48 am Excellent utility Raunak said, April 5, 2010 @ 6:07 am Hey buddy, thanks a lot for this sweet little tool – it saved my life! Muhammad Azam said, April 5, 2010 @ 6:25 am compact solutino Srinivas said, June 9, 2010 @ 8:49 am Thanks a lot. Now I can reset my VSS admin pwd. SP said, June 23, 2010 @ 10:49 am very helpful , thanx a ton!!!! Tam said, June 30, 2010 @ 8:26 pm THANKS! it worked great! Simon said, July 9, 2010 @ 2:53 pm Thank you. Worked great on version VSS 6.0d. marcos said, July 19, 2010 @ 1:30 pm Tks man! It works very well. You just forgot to say that it generates a file umfix.dat that needs to be renamed to um.dat. Bunyamin TOPAN said, August 17, 2010 @ 4:51 am It worked.Thnx Parth Gandhi said, September 28, 2010 @ 2:13 pm absolutely superb.. Shyam said, October 21, 2010 @ 9:26 pm I have version 8.0. If some body have any tool to blank the password, please let me know. Luanne said, October 25, 2010 @ 6:55 am excellent utility! thank you Nogol Tardugno said, November 19, 2010 @ 7:05 pm Thanks so much! This really helped! R said, November 23, 2010 @ 2:48 pm Love it 😀 Luis Ramirez said, January 12, 2011 @ 9:15 pm Excellent!!! Thanks 😀 — Pura Vida!! Recovering Visual SourceSafe (VSS) Admin Password « Low IT said, May 16, 2011 @ 9:13 pm […] You can retrieve it right here: […] Hanu said, June 22, 2011 @ 1:36 pm Excellent tool….. Thanks a lot shailender said, January 20, 2012 @ 7:37 am thanks. it works for me. Ram said, January 25, 2012 @ 7:07 pm G8! Balaji said, February 9, 2012 @ 1:25 pm What a wonderful utility man. Awesome. it saved a lot of rework effort. 100 votes from me. Keep up the good work. vjj said, April 29, 2012 @ 2:00 pm thanks! Channaka said, June 15, 2012 @ 9:42 am Wow…Managed to recover some old source codes. Thank for the tool. nazli said, September 17, 2012 @ 10:04 am i could not understand how to use this file.i copy umfix.dat into my vss directory and delete um.dat and then rename fixum.dat to um.dat.(Without Command Prompt) Is it right? when i want to open vss admin i got error “Error reading from File” And also the size of umfix.dat is 0 kb. nazli said, September 17, 2012 @ 10:17 am I did it. it was great thanks SLL said, December 9, 2012 @ 9:53 am Run as Administrator. Perfect. Thank you. Amila Perera said, March 29, 2013 @ 11:53 am Excellent tool….. Thanks a lot for sharing this with us. Christian said, April 17, 2013 @ 3:28 pm thanks a million, we were “saved” as well by the program. Worked like a charm. Larry said, April 19, 2013 @ 2:44 pm Real Forum, running exe makes me nervous? Administrator said, April 19, 2013 @ 9:26 pm It is real, which is why I cleanse the spam posts that make it through the spam filters. Mark said, May 17, 2013 @ 11:23 pm Thanks – works like a charm. Vinay Dwivedi said, June 17, 2013 @ 7:28 am Great job.. It works .. jerry said, June 24, 2013 @ 3:41 pm //c# Source code for an app to fix the um.dat using System; using System.IO; namespace UmDatFixer { class Program { static void Main() { var fileBytes = File.ReadAllBytes(“um.dat”); var nr = fileBytes.Length; var i = 132; for ( ; i nr) { throw new Exception(“Could not find admin password.”); } fileBytes[i – 2] = 0xBC; fileBytes[i – 1] = 0x7F; fileBytes[i + 32] = 0x90; fileBytes[i + 33] = 0x6E; File.WriteAllBytes(“umfix.dat”,fileBytes); } } } Roberto said, July 24, 2013 @ 11:44 am this tool is great! thanks a lot for saving all that time! jp said, September 3, 2013 @ 9:34 pm thanks TimB said, September 5, 2013 @ 10:06 pm Still works great. Easy, too. Thanks! Ganesh said, September 26, 2013 @ 1:18 pm How do I rename the files?? can someone please provide a complete steps I need to do Mike said, October 24, 2013 @ 2:25 pm Still works. Pulling old projects out of the way back machine. Nilesh said, November 19, 2013 @ 10:05 am Excellent Tool Many thanks for sharing this tool to all the system administrator who can save there work in less time. thanks Again !!! Tuyen Nguyen said, December 12, 2013 @ 7:26 pm Thank you very much. It works for me. Raj said, March 31, 2014 @ 1:40 pm Thanks a lot buddy…. It worked..!!! Sid K said, April 16, 2014 @ 4:41 pm Thank you. This helped me get back into the source when 7 out of the 7 people who had access the repository left the company. Worked like a charm.. btw – I use VSS 2005.. Antonio Vasquez said, July 3, 2014 @ 10:17 pm Thanks works!!!! great Rick said, December 19, 2014 @ 8:17 pm Still works!!!!! Thanks!!!! Donald said, January 6, 2015 @ 10:44 am This saved my life. My wife was about to leave me and our 16 children, but I ran this utility and it fixed my login so she decided to stay and now we have 34 children. You are amazing. Iqbal said, January 6, 2015 @ 5:04 pm Thanks a bunch… It works as promised Elter Souza said, February 12, 2015 @ 12:21 pm Thanks a lot, man! This helped me so much. Joe said, March 10, 2015 @ 8:24 pm Still works like a champ!!! Had to migrate off an old VSS install on a Win2003 Server. Reinstalled on a Win2008 box; copied over the DBs, ran the utility, and……success – “admin” is back in business. THANKS so much…!!! Nadeem Akhtar said, March 24, 2015 @ 3:53 am Yes, It was very simle. Thanks.
http://not42.com/2005/06/16/visual-source-safe-admin-password-reset/
CC-MAIN-2019-09
en
refinedweb
scoring Tools and algorithms to score loops. The scoring system is split between an environment and scorers. Several scorers can be attached to the same environment containing the actual structural data of the current modelling problem. The environment is updated as the modelling proceeds and manages efficient spatial lookups to be used by the attached scorers. In this example, we load a structure, setup a score environment, link a few scorers to it and finally score some loops: from ost import io, seq from promod3 import loop, scoring # load data ent = io.LoadPDB("data/1CRN.pdb") ent_seq = seq.SequenceFromChain('A', ent.FindChain('A')) # setup score environment linked to entity score_env = scoring.BackboneScoreEnv(ent_seq) score_env.SetInitialEnvironment(ent) # setup scorers attached to that env. clash_scorer = scoring.ClashScorer() clash_scorer.AttachEnvironment(score_env) cbeta_scorer = scoring.LoadCBetaScorer() cbeta_scorer.AttachEnvironment(score_env) # calculate scores for 10 residues starting at residue number 23. # all required structural information comes from the environment # that can evolve as the modelling proceeds. print "Clash-Score", clash_scorer.CalculateScore(23, 10) print "CBeta-Score", cbeta_scorer.CalculateScore(23, 10) Contents: Enter search terms or a module, class or function name. Subrotamer Optimization Backbone Score Environment
https://openstructure.org/promod3/1.3/scoring/
CC-MAIN-2019-09
en
refinedweb
Posts Tagged ‘TensorFlow’. Updates to the TensorFlow API Introduction Last year I published a series of posts on getting up and running on TensorFlow and creating a simple model to make stock market predictions. The series starts here, however the coding articles are here, here and here. We are now a year later and TensorFlow has advanced by quite a few versions (1.3 as of this writing). In this article I’m going to rework that original Python code to use some simpler more powerful APIs from TensorFlow as well as adopt some best practices that weren’t well known last year (at least by me). This is the same basic model we used last year, which I plan to improve on going forwards. I changed the data set to record the actual stock prices rather than differences. This doesn’t work so well since most of these stocks increase over time and since we go around and around on the training data, it tends to make the predictions quite low. I plan to fix this in a future articles where I handle this time series data correctly. But first I wanted to address a few other things before proceeding. I’ve placed the updated source code tfstocksdiff13.py on my Google Drive here. Higher Level API In the original code to create a layer in our Neural Network, we needed to define the weight and bias Tensors: layer1_weights = tf.Variable(tf.truncated_normal( [NHistData * num_stocks * 2, num_hidden], stddev=0.1)) layer1_biases = tf.Variable(tf.zeros([num_hidden])) And then define the layer with a complicated mathematical expression: hidden = tf.tanh(tf.matmul(data, layer1_weights) + layer1_biases) This code is then repeated with mild variations for every layer in the Neural Network. In the original code this was quite a large block of code. In TensorFlow 1.3 there is now an API to do this: hidden = tf.layers.dense(data, num_hidden, activation=tf.nn.elu, kernel_initializer=he_init, kernel_regularizer=tf.contrib.layers.l1_l2_regularizer(), name=name + "model" + "hidden1") This eliminates a lot of repetitive variable definitions and error prone mathematics. Also notice the kernel_regularizer=tf.contrib.layers.l1_l2_regularizer() parameter. Previously we had to process the weights ourselves to add regularization penalties to the loss function, now TensorFlow will do this for you, but you still need to extract the values and add them to your loss function. reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) loss = tf.add_n([tf.nn.l2_loss( tf.subtract(logits, tf_train_labels))] + reg_losses) You can get at the actual weights and biases if you need them in a similar manner as well. Better Initialization Previously we initialized the weights using a truncated normal distribution. Back then the recommendation was to use random values to get the initial weights away from zero. However since 2010 (quite a long time ago) there have been better suggestions and the new tf.layers.dense() API supports these. The original paper was “Understanding the difficulty of training deep feedforward neural networks” by Xavier Glorot and Yoshua Bengio. If you ran the previous example you would have gotten an uninitialized variable on he_init. Here is its definition: he_init = tf.contrib.layers.variance_scaling_initializer(mode="FAN_AVG") The idea is that these initializers vary based on the number of inputs and outputs for the neuron layer. There is also tf.contrib.layers.xavier_initializer() and tf.contrib.layers.xavier_initializer_conv2d(). For this example with only two hidden layers it doesn’t matter so much, but if you have a much deeper Neural Network, using these initializers can greatly speed up training and avoid having the gradients either go to zero or explode early on. Vanishing Gradients and Activation Functions You might also notice I changed the activation function from tanh to elu. This is due to the problem of vanishing gradients. Since we are using Gradient Descent to train our system then any zero gradients will stop training improvement in that dimension. If you get large values out of the neuron then the gradient of the tanh function will be near zero and this causes training to get stalled. The relu function also has similar problems if the value ever goes negative then the gradient is zero and again training will likely stall and get stuck there. On solution to this is to use the elu function or a “leaky” relu function. Below are the graphs of elu, leaky relu and relu. Leaky relu has a low sloped linear function for negative values. Elu uses an exponential type function to flatten out a bit to the left of zero so if things go a bit negative they can recover. Although if things go more negative with elu, they will get stuck again. Elu has the advantage that its is rigged to be differentiable at 0 to avoid special cases. Practically speaking both of these activation functions have given very good results in very deep Neural Networks which would otherwise get stuck during training with tanh, sigmoid or relu. Scaling the Input Data Neural networks work best if all the data is between zero and one. Previously we didn’t scale our data properly and just did an approximation by dividing by the first value. All that code has been deleted and we now use SciKit Learn’s MinMaxScaler object instead. You fit the data using the training data and then transform any data we process with the result. The code for us is: # Scale all the training data to the range [0,1]. scaler = MinMaxScaler(copy=False) scaler.fit(train_dataset) scaler.transform(train_dataset) scaler.transform(valid_dataset) scaler.transform(test_dataset) scaler.transform(final_row) The copy=False parameter basically says to do the conversion in place rather than producing a new copy of the data. SciKit Learn has a lot of useful utility functions that can greatly help with using TensorFlow and well worth looking at even though you aren’t using a SciKit Learn Machine Learning function. Summary The field of Neural Networks is evolving rapidly and the best practices keep getting better. TensorFlow is a very dynamic and quickly evolving tool set which can sometimes be a challenge to keep up with. The main learnings I wanted to share here are: - TensorFlow’s high level APIs - More sophisticated initialization like He Initialization - Avoiding vanishing gradients with elu or leaky ReLU - Scaling the input data to between zero and one These are just a few things of the new things that I could incorporate. In the future I’ll address how to handle time series data in a better manner. An Introduction to Image Style Transfer Introduction Image Style Transfer is an AI technique that is becoming quite popular for enhancing or stylizing photos. It takes one picture (often a classical painting) and then applies the style of that picture to another picture. For example I could take this photo of the Queen of Surrey passing Hopkins Landing: Combined with the style of Vincent van Gogh’s Starry Night: To then feed these through the AI algorithm to get: In this article, we’ll be look at some of the ways you can accomplish this yourself either through using online services or running your own Neural Network with TensorFlow. Playing with Image Style Transfer There are lots of services that let you play with this. Generally to apply a canned style to your own picture is quite fast (a few seconds). To provide your own photo as the style photo is more involved, since it involves “training” the style and this can take 30 minutes (or more). Probably the most popular program is the Prisma app for either iPhone or Android. This app has a large number of pre-trained styles and can apply any of them to any photo on your phone. This app works quite well and gives plenty of variety to play with. Plus its free. Here is the ferry in Prisma’s comic theme: If you want to provide your own photo as the style reference then deepart.io is a good choice. This is available as a web app as well as either an iPhone or Android app. The good part about this for photographers is that you can copy photos from your good camera to your computer and then use this program’s website, no phone required. This site has some pre-programmed styles based on Vincent van Gogh which work really quickly and produce good results. Then it has the ability to upload a style photo. Processing a style is more work and typically takes 25 minutes (you can pay to have it processed quicker, but not that much quicker). If you don’t mind the wait this site is free and works quite well. Here is an example of the ferry picture above van Gogh’ized by deepart.io (sorry they don’t label the styles so I don’t know which painting this is styled from): Playing More Directly These programs are great fun, but I like to tinker with things myself on my computer. So can I run these programs myself? Can I get the source code? Fortunately the answer to both is yes. This turns out to be a bit easier than you first might think, largely due to a project out of the Visual Geometry Group (VGG) at the University of Oxford. They created an exceptional image recognition neural network that they trained and won several competitions with. It turns out that the backbone to doing Image Style Transfer is to have a good image recognition Neural Network. This Neural Net is 19 layers deep and Oxford released the fully trained network for anyone to use. Several people have then taken this network, figured out how to load it into TensorFlow and created some really good Image Style Transfer programs based on this. The first program I played with was Anish Athalye’s program posted on GitHub here. This program uses VGG and can train a neural network for a given style picture. Anish has quite a good write up on his blog here. Then I played with a program that expanded on Anish’s by Shafeen Tejani which is on GitHub here along with a blog post here. This program lets you keep the trained network so you can perform the transformation quickly on any picture you like. This is similar to how Prisma works. The example up in the introduction was created with this picture. To train the network you require a training set of image like the Microsoft COCO collection. Running these programs isn’t for everyone. You have to be used to running Python programs and have TensorFlow installed and working on your system. You need a few other dependent Python libraries and of course you need the VGG saved Neural Network. But if you already have Python and TensorFlow, I found both of these programs just ran and I could play with them quite easily. The writeups on all these programs highly recommend having a good GPU to speed up the calculations. I’m playing on an older MacBook Air with no GPU and was able to get quite good results. One trick I found that helped is to play with reduced resolution images to help speed up the process, then run the algorithm on a higher resolution version when you have things right. I found I couldn’t use the full resolution from my DLSR (12meg), but had to use the Apple’s “large” size (286KB). Summary This was a quick introduction to Image Style Transfer. We are seeing this in more and more places. There are applications that can apply this same technique to videos. I expect this will become a standard part of all image processing software like PhotoShop or Gimp. It also might remain the domain of specialty programs like HDR has, since it is quite technical and resource intensive. In the meantime projects like VGG have made this technology quite accessible for anyone to play with. A Crack in the TensorFlow Platform Introduction Last time we looked at how some tunable parameters through off a TensorFlow solution of a linear regression problem. This time we are going to look at a few more topics around TensorFlow and linear regression. Then we’ll look at how Google is implementing Linear Regression and some problems with their approach. TensorFlow Graphs Last time we looked at calculating the solution to a linear regression problem directly using TensorFlow. That bit of code was: #]) TensorFlow does all its calculations based on a graph where the various operators and constants are nodes that then get connected together to show dependencies. We can use TensorBoard to show the graph for the snippet of code we just reviewed here: Notice that TensorFlow overloads the standard Python numerical operators, so when we get a line of code like: “denom = (X – Xavg) ** 2”, since X and Xavg are Tensors then we actually generate TensorFlow nodes as if we had called things like tf.subtract and tf.pow. This is much easier code to write, the only downside being that there isn’t a name parameter to label the nodes to get a better graph out of TensorBoard. With TensorFlow you perform calculations in two steps, first you build the graph (everything before the with statement) and then you execute a calculation by specifying what you want. To do this you create a session and call run. In run we specify the variables we want calculated. TensorFlow then goes through the graph calculating anything it needs to, to get the variables we asked for. This means it may not calculate everything in the graph. So why does TensorFlow follow this model? It seems overly complicated to perform numerical calculations. The reason is that there are algorithms to separate graphs into separate independent components that can be calculated in parallel. Then TensorFlow can delegate separate parts of the graph to separate GPUs to perform the calculation and then combine the results. In this example this power isn’t needed, but once you are calculating a very complicated large Neural Network then this becomes a real selling point. However since TensorFlow is a general tool, you can use it to do any calculation you wish on a set of GPUs. TensorFlow’s New LinearRegressor Estimator Google has been trying to turn TensorFlow into a platform for all sorts of Machine Learning algorithms, not just Neural Networks. They have added estimators for Random Forests and for Linear Regression. However they did this by using the optimizers they created for Neural Nets rather than using the standard algorithms used in other libraries, like those implemented in SciKit Learn. The reasoning behind this is that they have a lot of support for really really big models with lots of support for one-hot encoding, sparse matrices and so on. However the algorithms that solve the problem seem to be exceedingly slow and resource hungry. Anything implemented in TensorFlow will run on a GPU, and similarly any Machine Learning algorithm can be implemented in TensorFlow. The goal here is to have TensorFlow running the Google AI Cloud where all the virtual machines have Google designed GPU like AI accelerator hardware. But I think unless they implement the standard algorithms, so they can solve things like a simple least squares regression quickly hand accurately then its usefulness will be limited. Here is how you solve our fire versus theft linear regression this way in TensorFlow: features = [tf.contrib.layers.real_valued_column("x", dimension=1)] estimator = tf.contrib.learn.LinearRegressor(feature_columns=features, model_dir='./linear_estimator') # Input builders input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, num_epochs=10000) estimator.fit(input_fn=input_fn, steps=2000) mm = estimator.get_variable_value('linear/x/weight') bb = estimator.get_variable_value('linear/bias_weight') print(mm, bb) This solves the problem and returns a slope of 1.50674927 and intercept of 13.47268105 (the correct numbers from last post are 1.31345600492 and 16.9951572327). By increasing the steps in the fit statement I can get closer to the correct answer, but it is very time consuming. The documentation for these new estimators is very limited, so I’m not 100% sure it’s solving least squares, but I tried getting the L1 solution using SciKit Learn and it was very close to least squares, so whatever this new estimator is estimating (which might be least squares), it is very slow and quite inaccurate. It is also strange that we now have a couple of tunable parameters added to make a fairly simple calculation problematic. The graph for this solution isn’t too bad, but still since we know the exact solution it is a bit disappointing. Incidentally I was planning to compare the new TensorFlow RandomForest estimator to the Scikit Learn implementation. Although the SciKit Learn one is quite fast, it uses a huge amount of memory so I kind of would like a better solution. But when I compared the two I found the TensorFlow one so bad (both slow and resource intensive) that I didn’t bother blogging it. I hope that by the time this solution becomes more mainstream in TensorFlow that it improves a lot. Summary TensorFlow is a very powerful engine for performing calculations that can be automatically parallelized and distributed over multiple GPUs for amazing computational speeds. This really does make it possible to spend a few thousand dollars and build quite a powerful supercomputer. The downside is that Google appears to have the hammer of their neural network optimizers that they really want to use. As a result they are treating everything else as a nail and hitting it with this hammer. The results are quite sub-optimal. I think they do need to spend the time to implement a few of the standard non-Neural Network algorithms properly in TensorFlow if they really want to unleash the power of this platform. Dangers of Tunable Parameters in TensorFlow Introduction One of the great benefits of the Internet era has been the democratization of knowledge. A great contributor to this is the number of great Universities releasing a large number of high quality online courses that anyone can access for free. I was going through one of these, namely Stanford’s CS 20SI: Tensorflow for Deep Learning Research and playing with TensorFlow to follow along. This is an excellent course and the course notes could be put together into a nice book on TensorFlow. I was going through “Lecture note 3: Linear and Logistic Regression in TensorFlow”, which starts with a simple example of using TensorFlow to perform a linear regression. This example demonstrates how to use TensorFlow to solve this problem iteratively using Gradient Descent. This approach will then be turned to much harder problems where this is necessary, however for linear regression we can actually solve the problem exactly. I did this and got very different results than the lesson. So I investigated and figured I’d blog a bit on why this is the case as well as provide some code for different approaches to this problem. Note that a lot of the code in this article comes directly from the Stanford course notes. The Example Problem The sample data they used was fire and theft data in Chicago to see if there is a relation between the number of fires in a neighborhood to the number of thefts. The data is available here. If we download the Excel version of the file then we can read it with Python XLRD package. import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import xlrd DATA_FILE = "data/fire_theft.xls" # Step 1: read in data from the .xls file book = xlrd.open_workbook(DATA_FILE, encoding_override="utf-8") sheet = book.sheet_by_index(0) data = np.asarray([sheet.row_values(i) for i in range(1, sheet.nrows)]) n_samples = sheet.nrows - 1 With the data loaded in we can now try linear regression on it. Solving With Gradient Descent This is the code from the course notes which solve the problem by minimizing the loss function which is defined as the square of the difference (ie least squares). I’ve blogged a bit about using TensorFlow this way in my Road to TensorFlow series of posts like this one. Its uses the GadientDecentOptimizer and iterates through the data a few times to arrive at a solution. # Step 2: create placeholders for input X (number of fire) and label Y (number of theft) X = tf.placeholder(tf.float32, name="X") Y = tf.placeholder(tf.float32, name="Y") # Step 3: create weight and bias, initialized to 0 w = tf.Variable(0.0, name="weights") b = tf.Variable(0.0, name="bias") # Step 4: construct model to predict Y (number of theft) from the number of fire Y_predicted = X * w + b # Step 5: use the square error as the loss function loss = tf.square(Y - Y_predicted, name="loss") # Step 6: using gradient descent with learning rate of 0.01 to minimize loss optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss) with tf.Session() as sess: # Step 7: initialize the necessary variables, in this case, w and b sess.run(tf.global_variables_initializer()) # Step 8: train the model for i in range(100): # run 100 epochs for xx, yy in data: # Session runs train_op to minimize loss sess.run(optimizer, feed_dict={X: xx, Y:yy}) # Step 9: output the values of w and b w_value, b_value = sess.run([w, b]) Running this results in w (the slope) as 1.71838 and b (the intercept) as 15.7892. Solving Exactly with TensorFlow We can solve the problem exactly with TensorFlow. You can find the formula for this here, or a complete derivation of the formula here. #]) This results in a slope of 1.31345600492 and intercept of 16.9951572327. Solving with NumPy My first thought was that I did something wrong in TensorFlow, so I thought why not just solve it with NumPy. NumPy has a linear algebra subpackage which easily solves this. # Calculate least squares fit exactly using numpy's linear algebra package. x = data[:, 0] y = data[:, 1] m, c = np.linalg.lstsq(np.vstack([x, np.ones(len(x))]).T, y)[0] There is a little extra complexity since it handles n dimensions, so you need to reformulate the data from a vector to a matrix for it to be happy. This then returns the same result as the exact TensorFlow, so I guess my code was somewhat correct. Visualize the Results You can easily visualize the results with matplotlib. # Plot the calculated line against the data to see how it looks. plt.plot(x, y, "o") plt.plot([0, 40], [bb, mm * 40 + bb], 'k-', lw=2) plt.show() This leads to the following pictures. First we have the plot of the bad result from GradientDecent. This course instructor looked at this and decided it wasn’t very good (which it isn’t) and that the solution was to fit the data with a parabola instead. The parabola gives a better result as far as the least squares error because it nearly goes through the point on the upper right. But I don’t think that leads to a better predictor because if you remove that one point the picture is completely different. My feeling is that the parabola is already overfitting the problem. Here is the result with the exact correct solution: To me this is a better solution because it represents the lower right data better. Looking at this gives much less impetus to replace it with a concave up parabola. The course then looks at some correct solutions, but built on the parabola model rather than a linear model. What Went Wrong? So what went wrong with the Gradient Descent solution? My first thought was that it didn’t iterate the data enough, just doing 100 iterations wasn’t enough. So I increased the number of iterations but this didn’t greatly improve the result. I know that theoretically Gradient Descent should converge for least squares since the derivatives are easy and well behaved. Next I tried making the learning rate smaller, this improved the result, and then also doing more iterations solved the problems. I found to get a reasonable result I needed to reduce the learning rate by a factor of 100 to 0.00001 and increase the iterations by 100 to 10,000. This then took about 5 minutes to solve on my computer, as opposed to the exact solution which was instantaneous. The lesson here is that too high a learning rate leads to the result circling the solution without being able to converge to it. Once the learning rate is reduced so small, it takes a long time for the solution to move from the initial guess to the correct solution which is why we need so many iterations. This highlights why many algorithms build in adaptable learning rates where they are higher when moving quickly and then they dynamically reduce to zero in on a solution. Summary Most Machine Learning algorithms can’t be double checked by comparing them to the exact solution. But this example highlights how a simple algorithm can return a wrong result, but a result that is close enough to fool a Stanford researcher and make them (in my opinion) go in a wrong direction. It shows the danger we have in all these tunable parameters to Machine Learning algorithms, how getting things like the learning rate or number of iterations incorrect can lead to quite misleading results.
https://smist08.wordpress.com/tag/tensorflow/
CC-MAIN-2019-09
en
refinedweb
We compare four numerical languages in a Vox blog post. Section 3 of the blog post compares running times for the calculation of GARCH log-likelihood. It is a good representative example of the type of numerical problem we often encounter. Because the values at any given time t depend on values at t-1, this loop is not vectorisable. The GARCH(1,1) specification is with its log-likelihood given by For the purposes of our comparison, we are only concerned with the iterative calculation involved for the second term. Hence, the constant term on the left is ignored. We coded up the log-likelihood calculation in the four languages and in C, keeping the code as similar as possible between languages. We then simulated a sample of size 10,000, loaded that in, and timed the calculation of the log-likelihood 1000 times (wall time), picking the lowest in each case. The code can be downloaded here. The fastest calculation is likely to be in C (or FORTRAN) gcc -march=native -ffast-math -Ofast run.c double likelihood(double o, double a, double b, double h, double y2, int N){ double lik=0; for (int j=1;j<N;j++){ h = o+a*y2[j-1]+b*h; lik += log(h)+y2[j]/h; } return(lik); } R does not really have a good just-in-time (JIT) compiler so it is likely slow. likelihood =function(o,a,b,y2,h,N){ lik=0 for(i in 2:N){ h=o+a*y2[i-1]+b*h lik=lik+log(h)+y2[i]/h } return(lik) } Python is likely to be quite slow, def likelihood(hh,o,a,b,y2,N): lik = 0.0 h = hh for i in range(1,N): h=o+a*y2[i-1]+b*h lik += np.log(h) + y2[i]/h return(lik) but will be significantly sped up by using Numba. from numba import jit @jit def likelihood(hh,o,a,b,y2,N): lik = 0.0 h = hh for i in range(1,N): h=o+a*y2[i-1]+b*h lik += np.log(h) + y2[i]/h return(lik) (Octave’s syntax was designed to be largely compatible with MATLAB, as an open-source alternative) function lik = likelihood(o,a,b,h,y2,N) lik=0; for i = 2:N h=o+a*y2(i-1)+b*h; lik=lik+log(h)+y2(i)/h; end end function likelihood(o,a,b,h,y2,N) local lik=0.0 for i in 2:N h = o+a*y2[i-1]+b*h lik += log(h)+y2[i]/h end return(lik) end We tried two versions of Matlab (2015a, 2018a), Octave 4.4, R 3.5, Python 3.6 and two versions of Julia (0.6 and 0.7). All runtime results are presented relative to the fastest (C). All code ran on a late model iMac. In a practical implementation, the log-likelihood calculation is only a part of the total time. We also timed the total execution time, including the starting and stopping of the software package, and compile time in the case of C. This was done in the terminal with a command like time python run.py C, even when including the compile time remains fastest, but now followed by R. Generally, languages having poorer just-in-time compiling performed worse. However, there exist many tricks that can shorten processing time significantly. For instance, Python’s Numba package allows for efficient just-in-time compiling with minimal additional code — implementing Numba led to the calculation here being 200 times faster. Cython is also possible, but it is much more involved than Numba. For all the languages, embedding C code for the loop is possible, and would lead to performance improvements. Julia’s performance is impressive, reflecting its advantage due to features like just-in-time compiling. Moreover, the difference between 0.6 and 0.7 reflects the progress made in the language’s development. The results for Octave and MATLAB 2015a/2018a are reflective of the improvements MATLAB has made in terms of speeding up its code processing, in particular the introduction of JIT compiling in MATLAB’s execution engine from MATLAB 2015b onward. One can of course bypass many of those issues by just coding up the GARCH likelihood in C, including the optimisation, and calling that within any of the languages. This is indeed what we do in our own work via Rccp.
https://www.modelsandrisk.org/appendix/speed/
CC-MAIN-2019-09
en
refinedweb
In this tutorial, we are going to solve the following problem. Given an integer n, we have to find the (1n+2n+3n+4n)%5 The number (1n+2n+3n+4n) will be very large if n is large. It can't be fit in the long integers as well. So, we need to find an alternate solution. If you solve the equation for the number 1, 2, 3, 4, 5, 6, 7, 8, 9 you will get 10, 30, 100, 354, 1300, 4890, 18700, 72354, 282340 values respectively. Observe the results of the equation carefully. You will find that the last digit of the equation result repeats for every 4th number. It's the periodicity of the equation. Without actually calculating the equation we can say that the if n%4==0 then (1n+2n+3n+4n)%5 will be 4 else 0. Let's see the code. #include <bits/stdc++.h> using namespace std; int findSequenceMod5(int n) { return (n % 4) ? 0 : 4; } int main() { int n = 343; cout << findSequenceMod5(n) << endl; return 0; } If you run the above code, then you will get the following result. 0 If you have any queries in the tutorial, mention them in the comment section.
https://www.tutorialspoint.com/find-1-n-plus-2-n-plus-3-n-plus-4-n-mod-5-in-cplusplus
CC-MAIN-2021-31
en
refinedweb
Is this a bug? import torch t = torch.HalfTensor([0])t = torch.autograd.Variable(t) This code causes these errors, Traceback (most recent call last): File "a.py", line 4, in t = torch.autograd.Variable(t)RuntimeError: Variable data has to be a tensor, but got HalfTensor Hi,CPU half tensors do not actually exist.Using cuda HalfTensors work as expected: import torch t = torch.cuda.HalfTensor([0]) t = torch.autograd.Variable(t) Thank your for reply. Are you intend to implement cpu HalfTensor? I think It is desired torch.HalfTensor is deleted or sends warnings until it... This is very unexpected behaviour.
https://discuss.pytorch.org/t/variable-failed-to-wrap-halftensor/3220
CC-MAIN-2017-47
en
refinedweb
Permission Since: BlackBerry 10.0.0 #include <bb/platform/bbm/Permission> To link against this class, add the following line to your .pro file: LIBS += -lbbplatformbbm The type of application permission. The user can set permissions for your app by using the global settings application. Overview Public Types Index Public Types The list of permission types. BlackBerry 10.0.0 - ProfileUpdatesAllowed 1 Indicates that profile updates, including activities and achievements, can be added to the user's BBM profile. - ContactInvitationsAllowed 2 Indicates whether this user has allowed other users of this app to send this user invitations to become a BBM contact.Since: BlackBerry 10.0.0 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/cascades/bb__platform__bbm__permission.html
CC-MAIN-2017-47
en
refinedweb
Opened 12 years ago Closed 10 years ago Last modified 9 years ago #2669 closed enhancement (fixed) excel/openoffice calc export Description a microsoft excel/openoffice calc export would be a great feature. it could be implemented by: - producting tab separated or comma separated format - setting the http mime type of the result to application/msexcel - set the extension of the result to ../xx.xls and it should work with most (all?) browsers. Attachments (5) Change History (37) comment:1 Changed 12 years ago by comment:2 Changed 12 years ago by I think that having a direct Excel export can be useful: it's more convenient than having to save the text output to a file, and then open that file. The following patch sets the mimetype used for Tab-separated value to be application/vnd.ms-excel. Note that the mimetype value given to the add_link is not used. Index: trac/ticket/query.py =================================================================== --- trac/ticket/query.py (revision 2831) +++ trac/ticket/query.py (working copy) @@ -381,8 +381,8 @@ 'application/rss+xml', 'rss') add_link(req, 'alternate', query.get_href('csv'), 'Comma-delimited Text', 'text/plain') - add_link(req, 'alternate', query.get_href('tab'), 'Tab-delimited Text', - 'text/plain') + add_link(req, 'alternate', query.get_href('tab'), + 'Tab-delimited Text (Excel)', 'text/plain') constraints = {} for k, v in query.constraints.items(): @@ -406,7 +406,7 @@ elif format == 'csv': self.display_csv(req, query) elif format == 'tab': - self.display_csv(req, query, '\t') + self.display_csv(req, query, '\t', 'application/vnd.ms-excel') else: self.display_html(req, query) return 'query.cs', None @@ -572,9 +572,9 @@ self.env.is_component_enabled(ReportModule): req.hdf['query.report_href'] = self.env.href.report() - def display_csv(self, req, query, sep=','): + def display_csv(self, req, query, sep=',', mimetype='text/plain'): req.send_response(200) - req.send_header('Content-Type', 'text/plain;charset=utf-8') + req.send_header('Content-Type', '%s;charset=utf-8' % mimetype) req.end_headers() cols = query.get_columns() comment:3 Changed 12 years ago by What does prevent the browser to open Excel when receiving a CSV file ? Again, I don't think it's a good idea to add references to proprietary formats in the Trac core - may be it is possible to have an extension (plugin) for such formats ? comment:4 Changed 12 years ago by It's the mimetype. If it's text/plain, the browser will show the text by itself. With a more neutral alternative to application/vnd.ms-excel, like text/csv or text/tab-separated-values, IE only propose to save the file, and Firefox starts the generic Open with… dialog. Note that the data format itself is unchanged and non-proprietary, it's only the type annotation that has been made more specific. Nothing prevents you to choose a text editor to open it. And I'm pretty sure that free MS-office alternatives like Open Office or KOffice will know what to do with the ms-excel type. comment:5 Changed 12 years ago by I agree with eblot that we shouldn't do this… if someone wants their report results in excel, it's easy enough. comment:6 Changed 12 years ago by eblot, exactly, it would be great for ticket exports as additional link on the bottom to ease peoples life a little who want to take the list of issues with them, for a meeting e.g. the purpose of the mime-type and the ending is to help the operating system or the browser to choose the way how to display the document. if the result is csv, then the mime type could be different from text to allow configuration of a different application. setting it is definitely not a crime. btw, the patch above replaces the csv link, which was not the idea. there should be two links both producing text: - one with mime type text, opens the same as displaying the source of a wiki page - one with mime type differen, wich opens a spreadsheet program on any platform by default (and the proposed solution works on linux, windows, macos). how you call this links? i'm not sure if this is important at all. e.g. csv + excel, text + csv, text + spreadsheet just to name possibilities. am i allowed to reopen the ticket pls? comment:7 Changed 12 years ago by sorry … how do i change the wrong statement above? it is already an additional link i think *sweat*. comment:8 Changed 12 years ago by My personal opinion is that Trac should not provide OS-specific format (text/csv is not, application/ms-excel is). I don't think it would be a nice thing to clutter the interface with links which are useless on MacOS or Linux, for example. People who want to add platform specific feature can edit the Trac code, or create plugins to support proprietary file format, but I wish Trac core stayed platform independent. [About the latest question: you cannot edit comments, only append new ones] comment:9 Changed 12 years ago by exactly this is the point and i agree fully with you: such a link has to work on every operating system where you have a spreadsheet program and a browser. i only tested linux and windows and it does. but as the same programs exist also for macos it would be a big surprise if it would not work there. if it is a plugin, even better. Changed 12 years ago by Additional changes on top of alect's attachment:ticket:2296:content-converter.diff comment:10 Changed 12 years ago by This ticket is now related to #2296. I attached a single file plugin implementing that feature (attachment:tickets_to_excel_tsv.py) which is using the extension points introduced in the attachment:ticket:2296:content-converter.diff patch, and some additional changes on top of that (attachment:reusable_export_csv.diff). I think we will be able to close this as worksforme as soon as the #2296 related changes will be in trunk. Changed 12 years ago by Simple but effective plugin implementing the requested feature (take 2) comment:11 Changed 12 years ago by I've applied the patch Christian and I'll upload a new version soon (once I've migrated the versioncontrol zip/diff downloads). comment:12 Changed 12 years ago by Alternatively, that Excel stuff could also be done in a more general way by pipelining the transforms… Registering once a 'text/csv' => 'application/vnd.ms-excel' converter, and for each of the XYZ converter of type * => 'text/csv', propose an additional XYZ (Excel) conversion… Changed 12 years ago by Report to Native-xls by pyExcelerator for trac 0.9.5 comment:13 follow-up: 29 Changed 12 years ago… Changed 12 years ago by Updated version for trunk API comment:14 Changed 12 years ago by comment:15 Changed 12 years ago by Let me first propose an implementation for the pipelining stuff… Here's the implementation on top of r3307: attachment:pipelining_conversions.patch I changed the text/plain MIME Type of the CSV to text/csv, as this is the registered on (cf. rfc:4180). The plugin would be extremely simple: """Convert any `text/csv` data to `application/vnd.ms-excel`.""" from trac.core import * from trac.mimeview.api import IContentConverter EXCEL_MIMETYPE = 'application/vnd.ms-excel' class CSVToExcelConverter(Component): implements (IContentConverter) # IContentConverter methods def get_supported_conversions(self): yield ('excel', 'Excel', 'csv', 'text/csv', EXCEL_MIMETYPE, 8) def convert_content(self, req, mimetype, content, k, fname=None, url=None): return (content, EXCEL_MIMETYPE) Changed 12 years ago by comment:16 Changed 11 years ago by I don't know exactly since when, but now for me the .csv files are automatically opened in Excel… Does it worksforme for other people too? comment:17 Changed 11 years ago by Yep, worksforme as well on a Mac. comment:18 Changed 11 years ago by Ok, if it works on a Mac … ;) comment:19 follow-up: 20 Changed 11 years ago by yes, it works perfect for queries. could you pls add the same mime type for reports too pls? may i reopen it for that reason. comment:20 follow-up: 21 Changed 11 years ago by comment:21 Changed 11 years ago by comment:22 Changed 11 years ago by can it be that spreadsheet is opened because the result link is called "query.csv" in case of the query? because the report link is called "1" (the id), and windows does not know that it should open excel? comment:23 Changed 11 years ago by Right, report_1.csv would certainly be a better choice of filename anyway. comment:24 Changed 11 years ago by comment:25 Changed 11 years ago by comment:26 Changed 11 years ago by tx a lot, now works correctly comment:27 follow-up: 28 Changed 11 years ago by I installed the CSVToExcelConverter plugin in my trac's plugins directory, and also set in the ini file the following fields: [components] CSVToExcelConverter.* = enabled the trac.log says 2007-02-13 12:29:35,887 Trac[init] DEBUG: Loading file plugin CSVToExcelConverter from /home/trac/plugins/CSVToExcelConverter.py but, when I open a report I only keep seeing: Download in other formats: - RSS Feed - Comma-delimited Text - Tab-delimited Text - SQL Query What am I doing wrong? Thank you. comment:28 Changed 11 years ago by What am I doing wrong? Quoting the new ticket page: Support and installation questions should be asked on the mailing list or IRC channel, not filed as tickets. Please ask for support on the MailingList rather than re-opening tickets. You may reopen ticket if the provided feature is buggy, but not to ask for support. Thanks in advance. comment:29 Changed 11 years ago by I ported it to trac 0.10.3, and uploaded it as a new project… comment:30 follow-up: 31 Changed 10 years ago by what about the possibility to have an organized xls report output with many tab, with different pie charts : one with percentage pieces differentiated for gravity, another with pieces for status, etc. Yes always as a plugin, so to not loose OS indipendence of trac. (sorry if this was not a good reason to reopen the ticket) comment:31 Changed 10 years ago by Replying to andreacolpo@yahoo.it: (sorry if this was not a good reason to reopen the ticket) Yep, as this feature won't be implemented in Trac core, there's no need to re-open the ticket. I would have said you need to report these suggestion to the th:ExcelReportPatch maintainer, but as trac.hacks.org web site is down for now, you'll have to wait to do so. comment:32 Changed 9 years ago by How can i export to spreadsheet ……… i want to use open office Can you give some examples of which kind of data you are thinking of ? CSV and TSV exports are already available for ticket reports. I don't really agree with the second/third points: I don't think Trac should export to some proprietary format. Plus, I'm not sure it is valid to generate a CSV file with a .XLSextension.
https://trac.edgewall.org/ticket/2669
CC-MAIN-2017-47
en
refinedweb
RFC 46: GDAL/OGR unification Author: Even Rouault Contact: even dot rouault at mines dash paris dot org Status: Adopted, implemented in GDAL 2.0 Summary In the 1.X series of GDAL/OGR, the GDAL/raster and OGR/vector sides are quite different on some aspects even where there is no strong reason for them to be different, particularly in the structure of drivers. This RFC aims at unifying the OGR driver structure with the GDAL driver structure. The main advantages of using the GDAL driver structure are : - metadata capabilities : description of driver, extensions, creation options, virtual IO capability ... - efficient driver identification and opening. Similarly, OGR datasource and layer classes lack the metadata mechanisms offered by the corresponding GDAL dataset and raster band classes. Another aspect is that the separation between GDAL "datasets" and OGR "datasources" is sometimes artificial. Various data containers can accept both data types. The list of drivers that have a GDAL side and OGR side is : SDTS, PDS, GRASS, KML, Spatialite/Rasterlite?, GeoPackage (raster side not yet implemented), PostGIS/PostGIS Raster, PDF, PCIDSK, FileGDB (raster side not yet implemented). For applications that are interested in both, this currently means to open the file twice with different API. And for update mode, for file-based drivers, the updates must be done sequentially to avoid opening a file twice simultaneously in update mode and making conflicting changes. Related RFCs There are a few related past RFCs that have never been adopted but strongly relate to RFC 46 : - RFC 10: OGR Open Parameters. All the functionality described in RFC 10 is included in RFC 46, mainly the GDALOpenEx() new API - RFC 25: Fast Open. RFC 25 mentions avoiding to systematically listing the sibling files to the file being opened. This can now achieved in RFC 46 by lazy loading with GDALOpenInfo::GetSiblingFiles?(). At least Identify() should not trigger GetSiblingFiles?(). - RFC 36: Allow specification of intended on GDALOpen. The new GDALOpenEx() accepts a list of a subset drivers that must be probed, as suggested by RFC36. The specification of the drivers on the command line of utilies could be easily done through a new option, but that's not in the scope of RFC 46. - RFC 38: OGR Faster Open is completely included in RFC 46 through the possibility of using Open(GDALOpenInfo*) in OGR drivers. Core changes: summary - OGRSFDriver extends GDALDriver. - Vector drivers can be implemented as GDALDriver. - OGRSFDriverRegistrar is a compatibility wrapper around GDALDriverManager for legacy OGRSFDriver. - OGRDataSource extends GDALDataSource. - GDALOpenEx() API is added to be able to open "mixed" datasets. - OGRLayer extends GDALMajorObject, thus adding metadata capability. - The methods of OGRDataSource related to layers are moved to GDALDataset, making it both a raster and vector capable container. - Performance improvements in GDALOpenInfo() mechanism. - New driver metadata item to describe open options (i.e. deprecate the use of configuration option). - New driver metadata item to describe layer creation options. Core changes: details Drivers and driver registration - The OGRSFDriver now extends GDALDriver and is meant as being a legacy way of implementing a vector driver. It is kept mainbly because, in the current implementation, not all drivers have been migrated to being "pure" GDALDriver. The CopyDataSource?() virtual method has been removed since no in-tree drivers implement it. The inheritance to GDALDriver make it possible to manage vector drivers by the GDALDriverManager, and to be able to attach metadata to them, to document driver long name, link to documentation, file extension, datasource creation options with the existing GDAL_DMD_* metadata items. - Drivers directly inheriting from GDALDriver (to be opposed to those inheriting from OGRSFDriver) should : - declare SetMetadataItem?( GDAL_DCAP_VECTOR, "YES" ). - implement pfnOpen() for dataset opening - optionaly, implement pfnCreate() for dataset creation. For vector drivers, the nBands parameter of Create() is supposed to be passed to 0. - optionaly, implement pfnDelete() for dataset deletion - The *C* OGR Driver API will still work with drivers that have been converted as "pure" GDALDrivers (this is not true of the C++ OGR Driver API). For example OGR_Dr_GetName() calls GDALDriver::GetDescription?(), OGR_Dr_CreateDatasource() calls GDALDriver::Create(), etc... - The C++ definition of GDALDriver is extended with the following function pointers so that it can work with legacy OGRSFDriver. /* For legacy OGR drivers */ GDALDataset *(*pfnOpenWithDriverArg)( GDALDriver*, GDALOpenInfo * ); GDALDataset *(*pfnCreateVectorOnly)( GDALDriver*, const char * pszName, char ** papszOptions ); CPLErr (*pfnDeleteDataSource)( GDALDriver*, const char * pszName );They are used by GDALOpenEx(), GDALDriver::Create() and GDALDriver::Delete() if the pfnOpen, pfnCreate or pfnDelete pointers are NULL. The OGRSFDriverRegistrar class has an implementation of those function pointers that calls the legacy C++ OGRSFDriver::Open(), OGRSFDriver::CreateDataSource?() and OGRSFDriver::DeleteDataSource?() virtual methods. - GDALDriver::Create() can accept nBands == 0 for a vector capable driver. - GDALDriver::DefaultCreateCopy?() can accept a dataset with 0 bands for a vector capable driver, and if the output dataset has layer creation capability and the source dataset has layers, it copies the layers from the source dataset into the target dataset. - GDALDriver::Identify() now iterates over all kinds of drivers. It has been modified to do a first pass on drivers that have an implementation of Identify(). If no match is found, it does a second pass on all drivers and use the potentially slower Open() as the identification method. - Related to the above point, the implementations of GDALDriver::pfnIdentify function pointer used to return a boolean value to indicate if the passed GDALOpenInfo was a match for the driver. For some drivers, this was too restrictive so that they were able to implement Identify(). For example where the detection logic can return "yes, I definitely recognize that file", "no, it is not for me" or "I have not enough elements in GDALOpenInfo to be able to tell". That last state can now be advertized with a negative return value. - The OGRSFDriverRegistrar is trimmed down to be mostly a wrapper around GDALDriverManager. In particular, it does not contain any longer a list of drivers. The Open(), OpenShared?(), ReleaseDataSource?(), DeregisterDriver?() and AutoLoadDrivers?() methods are removed from the class. This change can have impact on C++ code. A few adaptations in OGR utilities have been done to accomodate for those changes. The RegisterDriver?() API has been kept for legacy OGR drivers and it automatically sets SetMetadataItem?( GDAL_DCAP_VECTOR, "YES" ). The GetDriverCount?(), GetDriver?() and GetDriverByName?() methods delegate to GDALDriverManager and make sure to only take into account drivers that have the GDAL_DCAP_VECTOR capability. In the case a driver has the same name as GDAL and OGR driver, the OGR variant is internally prefixed with OGR_, and GetDriverByName?() will first try the OGR_ variant. The GetOpenDSCount() and GetOpenDS() have now a dummy implementation returning 0/NULL. For reference, neither MapServer? nor QGIS use those functions. - OGRRegisterAll() is now an alias of GDALAllRegister(). The past OGRRegisterAll() is now renamed OGRRegisterAllInternal() and called by GDALAllRegister(). So, GDALAllRegister() and OGRRegisterAll() are now equivalent and register all drivers. - GDALDriverManager has received a few changes : - use of a map from driver name to driver object to speed-up GetDriverByName?() - accept OGR_SKIP and OGR_DRIVER_PATH configuration options for backward compatibility. - The recommanded separator for driver names in GDAL_SKIP is now comma instead of space (similarly to what OGR_SKIP does). This is to make it possible to define OGR driver names in GDAL_SKIP that have spaces in their names like "ESRI Shapefile" or "MapInfo? File". If there is no comma in the GDAL_SKIP value, then space separator is assumed (backward compatibility). - removal of GetHome?()/SetHome?() methods whose purpose seemed to define an alternate path for the search directory of plugins. Those methods only existed at the C++ level, and are redundant with GDAL_DRIVER_PATH configuration option - Raster-capable drivers should declare SetMetadataItem?( GDAL_DCAP_RASTER, "YES" ). All in-tree GDAL drivers have been patched to declare it. But the registration code detects if a driver does not declare any of GDAL_DCAP_RASTER nor GDAL_DCAP_VECTOR, in which case it declares GDAL_DCAP_RASTER on behalf of the un-patched driver, with a debug message inviting to explicitely set it. - New metadata items : - GDAL_DCAP_RASTER=YES / GDAL_DCAP_VECTOR=YES at driver level. To declare that a driver has raster/vector capabilities. A driver can declare both. - GDAL_DMD_EXTENSIONS (with a final S) at driver level. This is a small evolution of GDAL_DMD_EXTENSION where one can specify several extensions in the value string. The extensions are space-separated. For example "shp dbf", "tab mif mid", etc... For ease of use, GDALDriver::SetMetadataItem?(GDAL_DMD_EXTENSION) also sets the passed value as GDAL_DMD_EXTENSIONS, if it is not already set. So new code can always use GDAL_DMD_EXTENSIONS. - GDAL_DMD_OPENOPTIONLIST at driver level. The value of this item is an XML snippet with a format similar to creation options. GDALOpenEx(), once it has identified with Identify() that a driver accepts the file, will validate the passed open option list with the authorized open option list. Below an example of such an authorized open option list in the S57 driver <OpenOptionList> <Option name="UPDATES" type="string-select" description="Should update files be incorporated into the base data on the fly" default="APPLY"> <Value>APPLY</Value> <Value>IGNORE</Value> </Option> <Option name="SPLIT_MULTIPOINT" type="boolean" description="Should multipoint soundings be split into many single point " "sounding features" default="NO" /> <Option name="ADD_SOUNDG_DEPTH" type="boolean" description="Should a DEPTH attribute be added on SOUNDG features and " "assign the depth of the sounding" default="NO" /> <Option name="RETURN_PRIMITIVES" type="boolean" description="Should all the low level geometry primitives be returned as " "special IsolatedNode, ConnectedNode, Edge and Face layers" default="NO" /> <Option name="PRESERVE_EMPTY_NUMBERS" type="boolean" description="If enabled, numeric attributes assigned an empty string as a " "value will be preserved as a special numeric value" default="NO" /> <Option name="LNAM_REFS" type="boolean" description="Should LNAM and LNAM_REFS fields be attached to features " "capturing the feature to feature relationships in the FFPT " "group of the S-57 file" default="YES" /> <Option name="RETURN_LINKAGES" type="boolean" description="Should additional attributes relating features to their underlying " "geometric primtives be attached" default="NO" /> <Option name="RECODE_BY_DSSI" type="boolean" description="Should attribute values be recoded to UTF-8 from the character " "encoding specified in the S57 DSSI record." default="NO" /> </OpenOptionList> - GDAL_DS_LAYER_CREATIONOPTIONLIST at dataset level. But can also be set at driver level because, in practice, layer creation options do not depend on the dataset instance. The value of this item is an XML snippet with a format similar to dataset creation options. If specified, the passed creation options to CreateLayer?() are validated against that authorized creation option list. Below an example of such an authorized open option list in the Shapefile driver. <LayerCreationOptionList> <Option name="SHPT" type="string-select" description="type of shape" default="automatically detected"> <Value>POINT</Value> <Value>ARC</Value> <Value>POLYGON</Value> <Value>MULTIPOINT</Value> <Value>POINTZ</Value> <Value>ARCZ</Value> <Value>POLYGONZ</Value> <Value>MULTIPOINTZ</Value> <Value>NONE</Value> <Value>NULL</Value> </Option> <Option name="2GB_LIMIT" type="boolean" description="Restrict .shp and .dbf to 2GB" default="NO" /> <Option name="ENCODING" type="string" description="DBF encoding" default="LDID/87" /> <Option name="RESIZE" type="boolean" description="To resize fields to their optimal size." default="NO" /> </LayerCreationOptionList> Datasets / Datasources - The main methods from OGRDataSource have been moved to GDALDataset : virtual int GetLayerCount() { return 0; } virtual OGRLayer *GetLayer(int) { return NULL; } virtual OGRLayer *GetLayerByName(const char *); virtual OGRErr DeleteLayer(int); virtual int TestCapability( const char * ) { return FALSE; } virtual OGRLayer *CreateLayer( const char *pszName, OGRSpatialReference *poSpatialRef = NULL, OGRwkbGeometryType eGType = wkbUnknown, char ** papszOptions = NULL ); virtual OGRLayer *CopyLayer( OGRLayer *poSrcLayer, const char *pszNewName, char **papszOptions = NULL ); virtual OGRStyleTable *GetStyleTable(); virtual void SetStyleTableDirectly( OGRStyleTable *poStyleTable ); virtual void SetStyleTable(OGRStyleTable *poStyleTable); virtual OGRLayer * ExecuteSQL( const char *pszStatement, OGRGeometry *poSpatialFilter, const char *pszDialect ); virtual void ReleaseResultSet( OGRLayer * poResultsSet ); int GetRefCount() const; int GetSummaryRefCount() const; OGRErr Release();The following matching C API is available : int CPL_DLL GDALDatasetGetLayerCount( GDALDatasetH ); OGRLayerH CPL_DLL GDALDatasetGetLayer( GDALDatasetH, int ); OGRLayerH CPL_DLL GDALDatasetGetLayerByName( GDALDatasetH, const char * ); OGRErr CPL_DLL GDALDatasetDeleteLayer( GDALDatasetH, int ); OGRLayerH CPL_DLL GDALDatasetCreateLayer( GDALDatasetH, const char *, OGRSpatialReferenceH, OGRwkbGeometryType, char ** ); OGRLayerH CPL_DLL GDALDatasetCopyLayer( GDALDatasetH, OGRLayerH, const char *, char ** ); int CPL_DLL GDALDatasetTestCapability( GDALDatasetH, const char * ); OGRLayerH CPL_DLL GDALDatasetExecuteSQL( GDALDatasetH, const char *, OGRGeometryH, const char * ); void CPL_DLL GDALDatasetReleaseResultSet( GDALDatasetH, OGRLayerH ); OGRStyleTableH CPL_DLL GDALDatasetGetStyleTable( GDALDatasetH ); void CPL_DLL GDALDatasetSetStyleTableDirectly( GDALDatasetH, OGRStyleTableH ); void CPL_DLL GDALDatasetSetStyleTable( GDALDatasetH, OGRStyleTableH );OGRDataSource definition is now reduced to : class CPL_DLL OGRDataSource : public GDALDataset { public: OGRDataSource(); virtual const char *GetName() = 0; static void DestroyDataSource( OGRDataSource * ); };The existing OGR_DS_* API is preserved. The implementation of those functions casts the OGRDataSourceH opaque pointer to GDALDataset*, so it is possible to consider GDALDatasetH and OGRDataSourceH as equivalent from the C API point of view. Note that it is not true at the C++ level ! - OGRDataSource::SyncToDisk?() has been removed. The equivalent functionnality should be implemented in existing FlushCache?(). GDALDataset::FlushCache?() nows does the job of the previous generic implementation of OGRDataSource::SyncToDisk?(), i.e. iterate over all layers and call SyncToDisk?() on them. - GDALDataset has now a protected ICreateLayer() method. virtual OGRLayer *ICreateLayer( const char *pszName, OGRSpatialReference *poSpatialRef = NULL, OGRwkbGeometryType eGType = wkbUnknown, char ** papszOptions = NULL );This method is what used to be CreateLayer?(), i.e. that drivers should rename their specialized CreateLayer?() implementations as ICreateLayer(). CreateLayer?() is kept at GDALDataset level, but its implementation does a prior validation of passed creation options against an optional authorized creation option list (GDAL_DS_LAYER_CREATIONOPTIONLIST), before calling ICreateLayer() (this is similar to RasterIO() / IRasterIO() ) A global pass on all in-tree OGR drivers has been made to rename CreateLayer?() as ICreateLayer(). - GDALOpenEx() is added to be able to open raster-only, vector-only, or raster-vector datasets. It accepts read-only/update mode, shared/non-shared mode. A list of potential candidate drivers can be passed. If NULL, all drivers are probed. A list of open options (NAME=VALUE syntax) can be passed. If the list of sibling files has already been established, it can also be passed. Otherwise GDALOpenInfo will establish it. GDALDatasetH CPL_STDCALL GDALOpenEx( const char* pszFilename, unsigned int nOpenFlags, const char* const* papszAllowedDrivers, const char* const* papszOpenOptions, const char* const* papszSiblingFiles );The nOpenFlags argument is a 'or-able' combination of the following values : /* Note: we define GDAL_OF_READONLY and GDAL_OF_UPDATE to be on purpose */ /* equals to GA_ReadOnly and GA_Update */ /** Open in read-only mode. */ #define GDAL_OF_READONLY 0x00 /** Open in update mode. */ #define GDAL_OF_UPDATE 0x01 /** Allow raster and vector drivers. */ #define GDAL_OF_ALL 0x00 /** Allow raster drivers. */ #define GDAL_OF_RASTER 0x02 /** Allow vector drivers. */ #define GDAL_OF_VECTOR 0x04 /* Some space for GDAL 3.0 new types ;-) */ /*#define GDAL_OF_OTHER_KIND1 0x08 */ /*#define GDAL_OF_OTHER_KIND2 0x10 */ /** Open in shared mode. */ #define GDAL_OF_SHARED 0x20 /** Emit error message in case of failed open. */ #define GDAL_OF_VERBOSE_ERROR 0x40The existing GDALOpen(), GDALOpenShared(), OGROpen(), OGROpenShared(), OGR_Dr_Open() are just wrappers of GDALOpenEx() with appropriate open flags. From the user point of view, their behaviour is identical to the existing one, i.e. GDALOpen() family will only returns datasets of drivers with declared raster capabilities, and similarly with OGROpen() family with vector. - GDALOpenInfo class. The following changes are done : - the second argument of the constructor is now nOpenFlags instead of GDALAccess, with same semantics as GDALOpenEx(). GDALOpenInfo uses the read-only/update bit to "compute" the eAccess flag that is heavily used in existing drivers. Drivers with both raster and vector capabilities can use the GDAL_OF_VECTOR/GDAL_OF_RASTER bits to determine the intent of the caller. For example if a caller opens with GDAL_OF_RASTER only and the dataset only contains vector data, the driver might decide to not open the dataset (if it is a read-only driver. If it is a driver with update capability, it should do that only if the opening is done in read-only mode). - the open options passed to GDALOpenEx() are stored into a papszOpenOptions member of GDALOpenInfo, so that drivers can use them. - the "FILE* fp" member is transformed into "VSILFILE* fpL". This change is motivated by the fact that most popular drivers now use the VSI Virtual File API, so they can now directly use the fpL member instead of re-opening again the file. A global pass on all in-tree GDAL drivers that used fp has been made. - A VSIStatExL() was done previously to determine the nature of the file passed. Now, we optimistically begin with a VSIFOpenL(), assuming that in most use cases the passed filename is a file. If the opening fails, VSIStatExL() is done to determine the nature of the filename. - If the requested access mode is update, the opening of the file with VSIFOpenL() is done with "rb+" permissions to be directly usable. - The papszSiblingFiles member is now private. It is accessed by a GetSiblingFiles?() method that does the ReadDir?() on demand. This can speed up the Identify() method that generally does not require to know sibling files. - A new method, TryToIngest?(), is added to read more than the first 1024 bytes of a file. This is useful for a few vector drivers, like GML or NAS, that must fetch a bit more bytes to be able to identify the file. Layer - OGRLayer extends GDALMajorObject. Drivers can now define layer metadata items that can be retrieved with the usual GetMetadata?()/GetMetadateItem?() API. - The GetInfo?() method has been removed. It has never been implemented in any in-tree drivers and has been deprecated for a long time. Other - The deprecated and unused GDALProjDefH and GDALOptionDefinition types have been removed from gdal.h - GDALGeneralCmdLineProcessor() now interprets the nOptions (combination of GDAL_OF_RASTER and GDAL_OF_RASTER) argument as the type of drivers that should be displayed with the --formats option. If set to 0, GDAL_OF_RASTER is assumed. - the --formats option of GDAL utilities outputs whether drivers have raster and/or vector capabilities - the --format option of GDAL utilities outputs GDAL_DMD_EXTENSIONS, GDAL_DMD_OPENOPTIONLIST, GDAL_DS_LAYER_CREATIONOPTIONLIST. - OGRGeneralCmdLineProcessor() use GDALGeneralCmdLineProcessor() implementation, restricting --formats to vector capable drivers. Changes in drivers - OGR PCIDSK driver has been merged into GDAL PCIDSK driver. - OGR PDF driver has been merged into GDAL PDF driver. - A global pass has been made to in-tree OGR drivers that have to open a file to determine if they recognize it. They have been converted to GDALDriver to accept a GDALOpenInfo argument and they now use its pabyHeader field to examine the first bytes of files. The number of system calls realated to file access (open/stat), in order to determine that a file is not recognized by any OGR driver, has now dropped from 46 in GDAL 1.11 to 1. The converted drivers are : AeronavFAA, ArcGEN, AVCBin, AVCE00, BNA, CSV, DGN, EDIGEO, ESRI Shapefile, GeoJSON, GeoRSS, GML, GPKG, GPSBabel, GPX, GTM, HTF, ILI1, ILI2, KML, LIBKML, MapInfo? File, MySQL, NAS, NTF, OpenAIR, OSM, PDS, REC, S57, SDTS, SEGUKOOA, SEGY, SOSI, SQLite, SUA, SVG, TIGER, VFK, VRT, WFS - Long driver descrption is set for most OGR drivers. - All classes deriving from OGRLayer have been modified to call SetDescription?() with the value of GetName?()/poFeatureDefn->GetName?(). test_ogrsf tests that it is properly set. - Following drivers are kept as OGRSFDriver, but their Open() method does early extension/prefix testing to avoid datasource object to be instanciated : CartoDB, CouchDB, DXF, EDIGEO, GeoConcept?, GFT, GME, IDRISI, OGDI, PCIDSK, PG, XPlane. - Identify() has been implemented for CSV, DGN, DXF, EDIGEO, GeoJSON, GML, KML, LIBKML, MapInfo? File, NAS, OpenFileGDB, OSM, S57, Shape, SQLite, VFK, VRT. - GDAL_DMD_EXTENSION/GDAL_DMD_EXTENSIONS set for following drivers: AVCE00, BNA, CSV, DGN, DWG, DXF, EDIGEO, FileGDB, Geoconcept, GeoJSON, Geomedia, GML, GMT, GPKG, GPX, GPSTrackMaker, IDRISI Vector, Interlis 1, Interlis 2, KML, LIBKML, MDB, MapInfo? File, NAS, ODS, OpenFileGDB, OSM, PGDump, PGeo, REC, S57, ESRI Shapefile, SQLite, SVG, WaSP, XLS, XLSX, XPlane. - Document dataset and layer creation options of BNA, DGN, FileGDB, GeoConccept?, GeoJSON, GeoRSS, GML, GPKG, KML, LIBKML, PG, PGDump and ESRI Shapefile drivers as GDAL_DMD_CREATIONOPTIONLIST / GDAL_DS_LAYER_CREATIONOPTIONLIST. - Add open options AAIGRID, PDF, S57 and ESRI Shapefile drivers. - GetFileList?() implemented in OpenFileGDB, Shapefile and OGR VRT drivers. - Rename datasource SyncToDisk?() as FlushCache?() for LIBKML, OCI, ODS, XLSX drivers. - Use poOpenInfo->fpL to avoid useless file re-opening in GTiff, PNG, JPEG, GIF, VRT, NITF, DTED. - HTTP driver: declared as GDAL_DCAP_RASTER and GDAL_DCAP_VECTOR driver. - RIK: implement Identify() - Note: the compilation and good working of the following OGR drivers (mostly proprietary) could not be tested: ArcObjects?, DWG, DODS, SDE, FME, GRASS, IDB, OCI, MSSQLSpatial(compilation OK, but not runtime tested) Changes in utilities - gdalinfo accepts a -oo option to define open options - ogrinfo accepts a -oo option to define open options - ogr2ogr accepts a -oo option to define input dataset open options, and -doo to define destination dataset open options Changes in SWIG bindings - Python and Java bindings: - add new GDALDataset methods taken from OGRDataSource : CreateLayer?(), CopyLayer?(), DeleteLayer?(), GetLayerCount?(), GetLayerByIndex?(), GetLayerByName?(), TestCapability?(), ExecuteSQL(), ReleaseResultSet?(), GetStyleTable?() and SetStyleTable?(). - make OGR Driver, DataSource? and Layer objects derive from MajorObject? - Perl and CSharp: make sure that it still compiles but some work would have to be done by their mainteners to be able to use the new capabilities Potential changes that are *NOT* included in this RFC "Natural" evolutions of current RFC : - Unifying the GDAL MEM and OGR Memory drivers. - Unifying the GDAL VRT and OGR VRT drivers. Further unification steps : - Source tree changes to move OGR drivers from ogr/ogrsf_frmts/ to frmts/ , to move ogr/ogrsf_frmts/generic/* to gcore/*, etc... - Documentation unification (pages with list of drivers, etc...) - Renaming to remove traces of OGR namespace : OGRLayer -> GDALLayer, etc... - Kill --without-ogr compilation option ? It has been preserved in a working state even if it embeds now ogr/ogrsf_frmts/generic and ogr/ogrsf_frmts/mitab for conveniency. - Unification of some utilities : "gdal info XXX", "gdal convert XXX" that would work on all kind of datasets. Backward compatibility GDALDriverManager::GetDriverCount?(), GetDriver?() now returns OGR drivers, as well as GDAL drivers The reference counting in GDAL datasets and GDAL 1.X OGR datasources was a bit different. It starts at 1 for GDAL datasets, and started at 0 for OGR datasources. Now that OGRDataSource is basically a GDALDataset, it starts at 1 for both cases. Hopefully there are very few users of the OGR_DS_GetRefCount() API. If it was deemed necessary we could restore the previous behaviour at the C API, but that would not be possible at the C++ level. For reference, neither MapServer? nor QGIS use OGR_DS_GetRefCount(). Documentation A pass should be made on the documentation to check that all new methods are properly documented. The OGR general documentation (especially C++ API Read/Write? tutorial, Driver implementation tutorial and OGR architecture) should be updated to reflect the changes. Testing Very few changes have been made so that the existing autotest suite still passes. Additions have been made to test the GDALOpenEx() API and the methods "imported" from OGRDataSource into GDALDataset. Version numbering Although the above describes changes should have very few impact on existing applications of the C API, some behaviour changes, C++ level changes and the conceptual changes are thought to deserve a 2.0 version number. Implementation Implementation will be done by Even Rouault. The proposed implementation lies in the "unification" branch of the repository. The list of changes : Voting history +1 from JukkaR, FrankW, DanielM, TamasS and EvenR.
http://trac.osgeo.org/gdal/wiki/rfc46_gdal_ogr_unification
CC-MAIN-2017-47
en
refinedweb
wakelock 0.1.4+1 Wakelock # This Flutter plugin allows you to enable and toggle the screen wakelock on Android and iOS, which prevents the screen from turning off automatically. Essentially, this allows you to keep the device awake, i.e. prevent the device from sleeping. Usage # To use this plugin, follow the installing guide. Implementation # Everything in this plugin is controlled via the Wakelock class. If you want to enable the wakelock, i.e. keep the device awake, you can simply call Wakelock.enable and to disable it again, you can use Wakelock.disable: import 'package:wakelock/wakelock.dart'; // ... // The following line will enable the Android and iOS wakelock. Wakelock.enable(); // The next line disables the wakelock again. Wakelock.disable(); For more advanced usage, you can pass a bool to Wakelock.toggle to enable or disable the wakelock and also retrieve the current wakelock status using Wakelock.isEnabled: import 'package:wakelock/wakelock.dart'; // ... // The following lines of code toggle the wakelock based on a bool value. bool on = true; // The following statement enables the wakelock. Wakelock.toggle(on: on); on = false; // The following statement disables the wakelock. Wakelock.toggle(on: on); // If you want to retrieve the current wakelock status, // you will have to be in an async scope // to await the Future returned by isEnabled. bool isEnabled = await Wakelock.isEnabled; If you want to wait for the wakelock toggle on Android or iOS to complete (which takes an insignificant amount of time), you can also await any of Wakelock.enable, Wakelock.disable, and Wakelock.toggle. Notes # This plugin is originally based on screen. Specifically, the wakelock functionality was extracted into this plugin due to lack of maintenance by the author of the screen plugin. If you want to contribute to this plugin, follow the contributing guide..3+4 # - Fixed iOS simulator issue. 0.1.3+3 # - Fixed Flutter SDK version constraint. 0.1.3+2 # - Fixed pubspec.yaml. 0.1.3+1 # - Updated pubspec.yamlto match new format. 0.1.3 # - Completed AndroidX migration. 0.1.2+8 # - Updated documentation. 0.1.2+7 # - Formatted AndroidManifest.xml. 0.1.2+6 # - Cleaned up the Android manifest. - Changed a test name in test_driver. - Updated the plugin description. - Updated README.md. - Updated CONTRIBUTING.md. - Updated .travis.yml. - Removed unnecessary Assetsdirectory from the iosfolder. 0.1.2+5 # - Expanded continuous integration to include format checking and code analysis. 0.1.2+4 # - Updated the example's README. 0.1.2+3 # - Improved Travis CI setup. - Updated badges. 0.1.2+2 # - Updated description. - Flutter master is used in integration tests now. 0.1.2+1 # - Added integration testing. - Removed unnecessary Android Manifest permission. - Added a contributing guide. - Added CI. 0.1.2 # - Changed Wakelock.toggle's parameter to a named parameter. - Improved iOS implementation. 0.1.1+2 # - Made the plugin description more concise. 0.1.1+1 # - Elaborated a bit more in description. 0.1.1 # - Renamed functions. - Improved README. 0.1.0+3 # - Added wakelock permission in Android Manifest. 0.1.0+2 # - Improved README. - Removed unnecessary files. 0.1.0+1 # - Fixed dependency issue. - Removed unnecessary dependencies. 0.1.0 # - Bumped version to indicate that the plugin is fully usable. - Improved README's. - Formatted Dart files. 0.0.1 # - Initial version. import 'package:flutter/material.dart'; import 'package:wakelock/wakelock.dart'; void main() { runApp(ExampleApp()); } /// The wakelock implementation is located inside the [FlatButton.onPressed] functions and a [FutureBuilder]. /// The [FlatButton]'s and the [FutureBuilder] sit inside the [Column] that is a child of the [Scaffold] in [_ExampleAppState]. class ExampleApp extends StatefulWidget { const ExampleApp({Key key}) : super(key: key); @override _ExampleAppState createState() => _ExampleAppState(); } class _ExampleAppState extends State<ExampleApp> { @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ FlatButton( onPressed: () { // The following code will enable the wakelock on Android or iOS using the wakelock plugin. setState(() { Wakelock.enable(); // You could also use Wakelock.toggle(on: true); }); }, child: const Text('enable wakelock'), ), FlatButton( onPressed: () { // The following code will disable the wakelock on Android or iOS using the wakelock plugin. setState(() { Wakelock.disable(); // You could also use Wakelock.toggle(on: false); }); }, child: const Text('disable wakelock'), ), FutureBuilder( future: Wakelock.isEnabled, builder: (context, AsyncSnapshot<bool> snapshot) { // The use of FutureBuilder is necessary here to await the bool value from isEnabled. if (!snapshot.hasData) return Container(); // The Future is retrieved so fast that you will not be able to see any loading indicator. return Text( 'wakelock is currently ${snapshot.data ? 'enabled' : 'disabled'}'); }, ), ], ), ), ), ); } } Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: wakelock: ^0.1:wakelock/wakelock.dart'; We analyzed this package on Mar 27, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.7.1 - pana: 0.13.6 - Flutter: 1.12.13+hotfix.8
https://pub.dev/packages/wakelock
CC-MAIN-2020-16
en
refinedweb
How to read user input in Kotlin How to take input from user in Kotlin : In this tutorial, we will learn how to take user input in Kotlin with different examples. Reading user input in Kotlin is easy. Kotlin provides one inbuilt function to make this task easy for us. readLine() function allow us to read the input that is entered by the user. This function actually read the input as a string. We can convert it to a different datatype if we want. Alternatively, we can also use Scanner class to read the content of user input. Read user input string using readLine() : Let’s try to create one simple project first. The user will enter one line, our program will read it using readLine() and then it will print out the result : fun main(args: Array) { println("Enter a string :") val userInputString = readLine() println("You have entered : $userInputString") } Sample output : Enter a string : hello world You have entered : hello world Enter a string : 1234567 You have entered : 1234567 Read content using readLine() and convert it : As we have explained earlier that readLine() actually read the contents as a string but we can also convert them to any other formats. For example, if we want to convert a string integer to an integer value, we can do it by using Integer.valueOf() or toInt() method. fun main(args: Array) { println("Enter a integer :") val userInputInt = Integer.valueOf(readLine()) println("You have entered : $userInputInt") println("Enter another integer : ") val userInputInt2 = readLine()!!.toInt() println("Using toInt() : $userInputInt2") } Sample Output : Enter a integer : 12 You have entered : 12 Enter another integer : 23 Using toInt() : 23 Similarly, we can get convert other data types in kotlin. Using Scanner class : We can also use java.util.Scanner class to read user contents in Kotlin. We can use nextInt(),nextFloat(),nextLong(),nextBoolean() and nextDouble() methods to read int,float,long,boolean and double value from a user input. import java.util.Scanner fun main(args: Array) { val scanner = Scanner(System.`in`) println("Enter a integer :") val intNum = scanner.nextInt() println("Enter a float : ") val floatNum = scanner.nextFloat() println("Entered integer : $intNum") println("Entered float : $floatNum") } Sample Output : Enter a integer : 12 Enter a float : 34.54 Entered integer : 12 Entered float : 34.54 Conclusion : As you can see that we have different ways to read the contents from a user in Kotlin. Try to run the examples above and drop one comment below if you have any queries. 3 Comments Abdulkadir · June 26, 2019 at 8:55 am As a Kotlin beginner, i am truly happy to get this amazing website teaching Kotlin in a such simple way with more examples done in a source-code editor i really like, VSCODE. I tried to setup Kotlin with VSCODE in Windows10 by watching a video from Youtube. Though i did everything done in that Youtube video but eventually ended up with this error, “no main manifest attribute, in index.jar” when running the code in VSCODE. Any help for fixing this issue ? Thanks in advance. Ahmad · February 15, 2020 at 9:44 pm Hi, How can I prevent white space input? admin · February 16, 2020 at 3:14 am If you are taking inputs in an edittext, you can do something like : android:digits=”ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890″ android:inputType=”textFilter”
https://www.codevscolor.com/read-user-input-kotlin/
CC-MAIN-2020-16
en
refinedweb
Adding validation support for Json in Django Models Saadullah Aleem ・1 min read It’s pretty cool that Django has support for JSON based features of PostgreSQL. The JSONField() can be assigned to model attributes to store JSON based data. However, enforcing pre-save validation on this field can be tricky. Doing so in your serializers and that too through using Python’s dicts can be a hassle and can get ugly pretty quickly. Fortunately, there’s a great tool available to validate JSON data. It’s called json-schema and it has implementations in multiple languages, including Python. It can check for data types, make sure that strings match enums, and allow additional properties which might not require any validation. You can also nest multiple data types like having an array of objects with each object having a set number of fields with different data types of their own. Here’s an example of a schema against which we can validate our data: { "title" : "work experience", "type" : "object", "additionalProperties": false, "properties":{ "data": {"type": "array", "items": { "properties" : { "job_title": {"type": "string"}, "speciality": {"type": "string"}, "company": {"type": "string"}, "address": {"type": "string"}, "date_from": {"type": "string", "format": "date"}, "date_to": {"type": "string", "format": "date"} } } } } } JSON schema for data holding an employee’s work experience The idea here is to link a schema to your field against which you could validate your JSON before saving it into the database. We’ll extend the JSONField() provided by Django and add pre-save validation functionality to it. from jsonschema import validate, exceptions as jsonschema_exceptions from django.core import exceptions from django.contrib.postgres.fields import JSONField class JSONSchemaField(JSONField): def __init__(self, *args, **kwargs): self.schema = kwargs.pop('schema', None) super().__init__(*args, **kwargs) @property def _schema_data(self): model_file = inspect.getfile(self.model) dirname = os.path.dirname(model_file) # schema file related to model.py path p = os.path.join(dirname, self.schema) with open(p, 'r') as file: return json.loads(file.read()) def _validate_schema(self, value): # Disable validation when migrations are faked if self.model.__module__ == '__fake__': return True try: status = validate(value, self._schema_data) except jsonschema_exceptions.ValidationError as e: raise exceptions.ValidationError(e.message, code='invalid') return status def validate(self, value, model_instance): super().validate(value, model_instance) self._validate_schema(value) def pre_save(self, model_instance, add): value = super().pre_save(model_instance, add) if value and not self.null: self._validate_schema(value) return value We’re using an implementation of jsonschema here which validates our data against a schema such as the one defined above. In the constructor, we expect a path to a json file which we then load into memory as a property in the _schema_data method. The pre_save method makes sure that validation is performed before the model instance is saved. Using this field is pretty straightforward. You’ll need to define your schema and save it before providing the relative path to it as an argument: class Employee(models.Model): bio_short = models.CharField(max_length=256) bio_long = models.TextField() work_experience = JSONSchemaField( schema='schemas/jsonschema.example.json', default=dict, blank=True) Now if we try saving an employee instance with json that doesn’t correspond to the schema defined above, a ValidationError will be thrown. This is a great way to automate json validation instead of doing it in your serializers or elsewhere. We can go further and have a configurable schema property that is different for each model instance. For that, we’d have to pass the schema to the field while saving the instance and manually call our validation method. Got questions or suggestions? Comment on this post and I’ll try my best to answer them. The software industry moves fast. But if you keep up, you can have an incredible career. Join DEV. 100% Free Forever. Impressive work, my lead. I have seen, its implementation in HAH.
https://dev.to/saadullahaleem/adding-validation-support-for-json-in-django-models-5fbm
CC-MAIN-2020-16
en
refinedweb