nginx rewrites:
>
to:
>   # wrong port!

while apache rewrites:
>
to
>

is it possible to configure nginx to behave like apache?

/etc/nginx/conf.d/test.conf :

>   server {
>       listen 81;
>       location /files {
>           alias /home/luc2/files;
>           autoindex on;
>       }
>   }

/etc/httpd/conf.d/test.conf :

>   <VirtualHost *:82>
>       Alias /files /home/luc2/files
>       <Directory /home/luc2/files>
>           Options indexes
>           Allow from all
>       </Directory>
>   </VirtualHost>

Posted at Nginx Forum: on 2014-05-24 20:54

on 2014-05-24 22:28

A listen on port 81 should not invoke a response from 8081; check to see who is listening on 8081.

Posted at Nginx Forum: on 2014-05-24 22:36

On May 24, 2014 2:53 PM, "luc2" <nginx-forum@nginx.us> wrote:
> server {
> Alias /files /home/luc2/files
> <Directory /home/luc2/files>
>     Options indexes
>     Allow from all
> </Directory>
> </VirtualHost>
>
> Posted at Nginx Forum:
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org

Aliases (and any other configurable sharing a name with something in Apache) generally don't do the same thing as in Apache. Just use "root" inside the location block.

Dustin

on 2014-05-29 01:01

On Sat, May 24, 2014 at 02:53:21PM -0400, luc2 wrote:

Hi there,

> nginx rewrites :
>
> to :
>   # wrong port !
<snip>
> is it possible to configure nginx to behave like apache ?

No. If your use case is restricted to one of the two mentioned below, then you might be able to fake it adequately.

nginx does not have a config option to do what you seem to want, which is "use the incoming http Host: header value in any generated Location: response header".

Using "port_in_redirect", you can auto-include either no port at all, or whichever port the connection actually came to nginx on (which will be one of the ports listed or implied in the "listen" directives).
If you don't want to patch the code to add your use case, then:

* if you have a fixed list of redirections, you could add a number of locations of the form

      location = /dir1 {
          return 301 $scheme://$http_host/dir1/;
      }

* or if there is exactly one host/port that you will always want to return (e.g. server:8081), then you could use

      port_in_redirect off;
      server_name_in_redirect on;
      server_name server:8081;

But otherwise, I don't think it can be done.

	f
--
Francis Daly        francis@daoine.org
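For context, here is a minimal sketch (not from the thread) that combines Francis's first suggestion with the questioner's original server block, assuming the same paths and port 81 and that /files is the only directory needing Apache-style redirects:

```nginx
server {
    listen 81;

    # Issue the directory-slash redirect ourselves, built from the Host
    # header the client actually sent ($http_host), instead of letting
    # nginx construct a Location header from its own host/port settings.
    location = /files {
        return 301 $scheme://$http_host/files/;
    }

    location /files/ {
        alias /home/luc2/files/;
        autoindex on;
    }
}
```

This only fakes the behaviour for the listed location; as the answer says, there is no general option for it.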
- CSS 2.1. Layout interoperability with other browsers through the CSS 2.1 standard has always been a top goal for IE8. Beta 2 supported all properties in the CSS 2.1 specification and passed over 3,200 test cases. We've made significant improvements since Beta 2, and this week's RC1 passes well over twice as many test cases as Beta 2. For example, one of our favorite new features is IE8's new support for High-Res layout, which we'll blog about in more detail later. We expect very few changes between this RC and the full CSS 2.1 support in the final product, which web developers and designers can use to write their pages once and have them rendered the same across browsers.

- HTML, Document Object Model (DOM) and JavaScript. Throughout Beta 1 and Beta 2 we've talked about how IE8 is much more interoperable with other browsers in core areas, including attribute handling and element lookups like those through getElementById(). To help ensure future interoperability with other browsers and standards compliance, the Release Candidate includes the following updates, which we recently blogged about:

  - Mutable DOM prototypes include the new ECMAScript 3.1-conformant getter/setter syntax.
  - ARIA supports the dash syntax ("aria-checked") across all IE8 document modes. This means web developers can write code once that works across IE8 modes and with other browsers.
  - Cross-Domain Requests (XDR) now check the Access-Control-Allow-Origin header for a match to the origin URL as well as wildcards. As a result, data is only shared with sites whose origin the server specifies.

- Performance. Similar to interoperability, better performance helps improve developer productivity. To this end, we investigated core performance scenarios and focused on optimizing common AJAX design patterns. Web developers and end users alike will experience performance improvements since Beta 2.

Development environment

- Developer tools.
Beta 2 introduced more power with the JavaScript profiler, save to file, and console.log support. RC1 has dramatically improved stability and a much more accurate view of the HTML tree and CSS tracing. It also offers more flexibility by adding a menu option for viewing source with Notepad, the built-in viewer, or any other choice of viewer.

- Documentation. We think good documentation and communication are an important part of the development environment. To this end, we have updated our Internet Explorer Readiness Toolkit and MSDN IE Development Center for web developers and designers to use as references.

Stay tuned to the blog for much more detail on improvements we've made since Beta 2 and tips for upgrading to IE8 Standards Mode!

PingBack from

Now speaking of interoperability… RC1 appears to score one less than Beta 2 on Acid3 (20 versus 21). What gives?

And by the way, I tried to post this in reply to "Upgrading to RC1", but then all comments were rejected and I don't know if anybody looks there now. It appears that the IE installer is incompatible with Adobe Reader 9; see the discussion at. I can confirm this issue still persists for RC1. I think it's probably an Adobe bug, but I'd expect a mention of it in the release notes.
Is there any news on what's after IE8? Is Microsoft working on IE9, or whatever post-IE8 is called? Is there any idea of what it will include? Will it include any CSS3 or XHTML or anything? Also, from what I've heard, Microsoft has another layout engine; why doesn't Microsoft make IE9 use the Expression Web engine, or a combination of it and Trident?

@Marc Silbey
> element lookups like those through getElementByID().
It is not getElementByID(). It is getElementById(). Lower case "d"!
Regards, Gérard

Margins are not collapsed correctly in some cases: Test case 3

Hi, it is definitely better and better. Ctrl+T however still opens a tab very slowly, which is very disturbing – I'm used to the instant Ctrl+T, Ctrl+V combo, but now I have to open the new tab, wait two seconds, and paste whatever I want to open/search for. Thanks

Too slow, too slow! I don't understand why IE became so slow… Chrome & Safari rock… You should use V8 for the JavaScript engine. I think a lot of users would use IE if it were faster than Firefox… you should consider this (2 sec for tab opening… you kidding?). I don't use IE anymore since Chrome.

I really hope that the performance of IE8 will be much better, since the only authorized browser at my work is IE.

I installed RC1 today, and now my entire GUI is "delayed" or slowed down. If I drag any window in Windows, I see trails of slow GUI updates (redraws, whatever you want to call it), and explorer.exe sky-rockets in CPU cycles until all the GUI updates are done. All worked fine last night: RC1 installation, plus disabling the One Note, Research, Discussion (whatever that is), Send To Bluetooth (I never use it), and Windows Messenger (no publisher on that, surprisingly) add-ons. I've been VERY excited about the advances in IE8 – it really does take web browsing to the next level and makes it more productive. However, having to uninstall IE8 today was disappointing, and I have high hopes Microsoft can get this cleaned up soon.
Installed IE8 RC1 and subsequent Live goodies… all working well. On our systems, it is faster than Firefox 3.1 b2. I have one request: we need a password recovery/editor for IE. Since it's wise to change page passwords, a method of recovering/editing/exporting the passwords saved would be extremely useful and wonderful.

Update: Uninstalled IE8 RC1 and now back at IE7 – all GUI delays gone. Farewell, for now, IE8; I REALLY look forward to your return, as I'm already missing the address bar enhancements!

For those of you seeing slow tab open times, click Tools, Manage Add-ons. If you don't use an add-on, disable it. The far right column of the Add-on manager will show load times. The worst offenders I've seen so far are the following:
Research – unknown, but slows new tabs down for sure
Java – about 1 second
Norton NCO BHO – 0.77 seconds
Norton Intrusion Prevention – 0.38 seconds
Also, IE8 RC1 completely breaks the following website:

Nice that there is a possibility to DISABLE add-ons. But how can I DELETE them? Maybe something for the next version, IE8.5. It would be great anyhow to release an IE8.5 only to update functionality not related to the Trident engine, like UI changes and things like removing add-ons.

Why does IE not respect the "Open links from other programs: open new tab in current window" setting when clicking the email button in Windows Live Messenger?

I have to access some sites that do not support anything beyond IE 6.0. For those I use the User Agent String Utility v2.0. Unfortunately, that utility does not support IE8. Will there be a new version of the User Agent String Utility?
@Jason: There are some cases (specifically, cases where IE is instantiated via COM) where opening new windows in tabs is not possible.

@Jeff: The UAPick plugin here () can be used in IE8 RC1 to emulate any other browser or version. I'm curious: are these sites on the public internet?

For everybody who is having a slow new-tab opening problem, disable the Java SSV Helper under add-ons. The Java console can stay, but the Java SSV Helper is known to make tab opening extremely laggy. If you don't have Java installed, try running IE in no-addons mode and see if it works faster – if it does, you'll just have to find out which add-on is causing the problem.

@Mitch 74: At 100% zoom, the "margin collapse" test passes in the current internal builds. Are you using a non-100% zoom?

EricLaw: On the public IE8 RC1 build, test case 3 on that page will not pass on the first opening. You have to resize the browser window for it to pass. Tested in fullscreen mode, no zooming. Are you saying that this has been fixed in later internal builds?

I would definitely focus on tab opening speed. Beta 2 sometimes took 10+ seconds to open a new tab. RC1 seems to take up to 5 seconds, which is better but still irritates me, having to wait just to open a blank tab (considering other browsers do it in 1 second or less).

@EricLaw [MSFT]: I looked into Mitch's testcase and filed: After unhovering the green box, the element with negative margin is incorrectly repositioned (in this testcase it's generated content, but any element would expose the bug as well).

@Jim: You almost certainly have one or more buggy add-ons installed. How long do new tabs take if you start IE without add-ons? On my computers, new tabs open in well under 1 second.

@Arieta: yes, the case passes on current internal builds at 100% zoom.

Webslices stopped working when I upgraded Beta 2 to RC1. Connection problem: "Internet Explorer cannot display the webpage." Not a firewall problem, because I tried that already. Any ideas?
Hi, I think IE8 is really great and I don't have any problems with it. It is definitely the best IE ever! Of course it would be nice if you would add CSS3, but it is still much better than IE7. I know that you don't plan an RC1 for Windows 7, but I really hope that you will reconsider that decision, because RC1 is MUCH better than Beta 2 or whatever is included in Windows 7. Another thing: the WYSIWYG HTML editor in IE (also in IE8) produces very bad HTML. Wouldn't it be a good idea to borrow from one of your other tools, like Visual Studio or Microsoft Expression Web, which produce really nice HTML?

Not that I'm complaining, but why are all browsers adding support for ARIA at this time? I'm aware of what it is and how helpful it is to those that need accessibility tools, but is the assistive-technologies lobby (best term I can think of for them) stronger with browsers than, say, those pushing for SVG? Is it very little work on your part to add support for something like this (relatively speaking), compared to adding W3C DOM event handling? Again, I'm not digging at Microsoft for their choice here, because it is an admirable one; it just seems a highly odd one to me. Shameless request for IE9: W3C Range/Selection support! The differences between the W3C Range and the MS TextRange objects are much more difficult to work around compared to the differences in the event models (which are well documented, with solutions for most cases already found).

I have a site: That comes up with compatibility issues and is forced to IE7. The site validates on HTML, CSS and the feed, so I have no clue what IE8 is having a problem with. Is there any tool coming, or perhaps a validation script, that can give us developers feedback as to "what" is incompatible with IE8 mode? Rocky

Thanks for a nice upgrade to the browser. I have one feature request for you: allow the user to decide which side the scrollbar should appear on.
With more and more touch and tablet devices and the growing number of left-handed individuals, it only makes sense to support those of us who prefer using the left hand. This is a major annoyance when surfing the web with a slate tablet PC. Thanks for working so hard on making a good browser.

Can I recommend increasing the timeout before IE complains about slow JavaScript? Right now, when we try to have it run JS that is a bit more complicated, say the drag-and-drop photo management interface on Sosauce, it will start complaining very quickly. We're constantly testing with all browsers, and the other three don't have such behavior, whereas in the IE series the timeout is the same from IE6 to IE8. We believe people's client computers are much better these days at handling complex JS and should therefore be allowed to run more complicated JS for longer before throwing a warning at the user. If you need a sample page showing how IE6–8 is seriously over-cautious on slow script compared to the other three competitors, please feel free to drop me a comment on Sosauce. Thanks.

IE8 RC1 crashes on these simple little lines: and

IE8 RC1 is not ready for primetime yet according to Ajaxian (and anyone I've talked to): so the question is, when will RC2 be out?

Re: "On my computers, new tabs open in well under 1 second." Well, you must be pretty special! I don't have _any_ of my IE8 RC1 installs opening a new tab in less than a second. I've disabled (and/or removed) every add-on except Adobe PDF, but I have yet to see a new tab open in a respectable time frame. According to this blog (the Internet at large) and IE Feedback on Connect, I am not alone. In fact, every user *except* EricLaw[MSFT] experiences painfully slow tab loading in IE, regardless of how many add-ons (buggy or not) are loaded.
From the Connect bug report (and chats with the IE team) it has been discussed and disclosed that there is a MAJOR architectural flaw with the tab/add-on implementation that is guilty of the slow new-tab rendering. Please stop blaming buggy IE add-ons (unless you are going to point DIRECT, EXPLICIT fingers at the guilty ones). Running a browser without add-ons is *NOT* an acceptable workaround.

”> is not a valid URL, nor is about:Tabs, but that is what we all get when trying to paste a URL in the location bar after opening a new tab.

The slow tab opening (whether you claim it is a bug or by design) is unacceptable. The problem is not with add-ons either. In every other browser, opening a tab is immediate. In IE8 there is *always* a lag. This had better be fixed by the final release; otherwise you will continue losing market share, because it makes IE8 feel extremely slow and clunky. That is the reality of the situation. Opening a new tab should be immediate. Any lag at all is unacceptable.

Please consider adding extensive tests for your VML support. It's a shame (and embarrassing) to see that VML is still broken in RC1 and only works for the very simplest of cases. Working with VML programmatically is especially buggy, to say the least. Also annoying: if you enter a URL in the address field right when IE has started up (but is still not fully loaded because of the long startup time), IE will tell you that it can't complete the request.

JP: my tabs open almost faster than I can click the stopwatch (200 milliseconds?). How long do yours take with no plugins? As for specific plugins that are slow, the following three [Java SSV, Windows Live Toolbar, Office Research] have all been mentioned as culprits. IE8 has a cool "Load time" column in the Add-on manager that tells you how long each takes.

> In fact every user *except* EricLaw[MSFT] experiences painfully slow tab loading in IE

New tabs open instantly on my end.
Like I said, the only thing affecting them is the Java SSV Helper, which I've disabled. Have you just disabled all plugins, or did you run IE in its no-addons mode? It's not the same thing.

I took the advice Arieta gave about disabling the Java SSV Helper add-on, and sure enough IE loads much faster and tabs open much quicker, in less than a second. I just wish I had known about that sooner.

Please make it so you can slipstream the final version of IE8 with nLite. Please make the browser opening time faster; also, the beta version on Windows 7 causes a flicker on the taskbar when changing windows. While competing browsers can work fast, why not IE?

@windowsfanboy: Microsoft does not support nLite.

Installed IE8 on Vista and it required a restart to use. I don't understand why this is necessary. It's just a browser. There is no reason for it to be so integrated into the OS that it requires a restart. Makes no sense to me!

There's STILL no console object with a logging function. When scripts are loaded and executed on demand and there's an error, the line number for the error is very high; it fluctuates around 80 million or so. That's definitely a bug. And there's still no reference to the JavaScript file that is producing the error. I do a lot of JavaScript development, and IE is still not even close to being a development platform. And while I'm ranting: why on earth did the IE team not use an already proven safe, fast, and standards-driven web engine like Gecko or WebKit? Why do you guys keep insisting on re-inventing the wheel? (Although your wheel isn't quite round and breaks often.) Why are you not focusing on JavaScript speed and development? Do you not realize the future of the web? I mean, come on, you guys are Microsoft! Where's the much-requested border-radius implementation? I guess you still haven't learned from the past. Maybe you'll start to understand when IE's browser share drops below 50%.
I'm sure the programmers are trying their best, but they have to answer to someone who just doesn't get it. Oh well. </rant>

"There's STILL no console object with a logging function." Yes, there is. I'm not at a machine where I have IE8 RC1, but I noticed that with IE8 Beta 2, in the developer toolbar at least, console.info(window.external) prints garbage, as if it's dereferencing an invalid pointer.

element.addEventListener() still not implemented :/

"Since the release of IE8 Beta 2 we've listened to feedback from many channels." Apparently heard nothing about users wanting SVG, canvas, CSS opacity, MathML, or proper event handling. Need I go on? IE8 is a step in the right direction, and people like me would probably stop moaning if the team could announce that they will continue to work at a similar pace on IE9 and start adding requested features. This browser is not going to stop the continued decline of IE market share, because it is still so far behind the rest.

Orl: it has been mentioned on this blog for many months; I think I first heard about it back in October.

Those slow (I would even call them buggy) add-ons are a big nuisance. Especially the MS Office Research add-on. This is an almost unused plugin that should not interfere with new-tab behaviour unless the Research plugin is actively used by users (which is extremely rare). You should seriously be spanking your Office team for introducing such a horrible add-on. I suggest having it updated by the Office team before the IE8 release. The Java helper is terrible too, of course.

When will a 64-bit version be online for the French version? thx

IE8 RC1 performance / application locking: when IE8 RC1 is executing a long-running JavaScript loop, there is a complete lock-up of IE8. You can't view the IE console/dev tool window if it is under the current IE window (which is always, if it is detached for readability).
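A note on the addEventListener complaint above: IE8-era code commonly bridged the two event models with a small shim. A minimal sketch follows; the helper name "addEvent" is illustrative, not an IE or W3C API.

```javascript
// Cross-browser event binding: IE8 and earlier expose attachEvent()
// instead of the W3C addEventListener(). "addEvent" is a hypothetical
// helper name used here for illustration.
function addEvent(target, type, handler) {
  if (typeof target.addEventListener === 'function') {
    // W3C event model (Firefox, Safari, Opera, Chrome)
    target.addEventListener(type, handler, false);
  } else if (typeof target.attachEvent === 'function') {
    // Microsoft event model (IE8 and earlier); note the "on" prefix
    target.attachEvent('on' + type, handler);
  } else {
    // Last resort: DOM level 0 property assignment
    target['on' + type] = handler;
  }
}
```

One caveat often noted at the time: with attachEvent, the handler's `this` is not the element, so heavier shims also wrap the handler to normalize that.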
In addition, the status bar says "waiting for about:blank", which is obviously incorrect. Performance goes in the toilet with logging to the console in IE8 RC1… setting innerHTML in the current web page is much, much faster! Oh, one last thing… scrolling performance in the console log is dreadful. I'm not sure how it was developed, but it needs better off-screen buffering/loading of content.

I have noticed that with the latest version of the Skype beta, having the IE add-on turned on makes tabs open slowly. If I disable the Skype IE add-on, tabs open extremely fast.

I have one simple wish: IE8 RC1 for Windows 7. (The included beta has a lot of regressions, and I obviously cannot install the RC1 for older systems.)

@Glen Fingerholz: Invoking console.log() generates a JS error in IE8 RC1. I read that link, but it's simply not true, unless of course I need to enable the console object; I've found no reference to that. Also, that link refers to JScript, which is NOT JavaScript, but rather Microsoft's implementation of JavaScript. I can only hope that's just an old reference and IE8 is not actually using JScript. IE8 is shaping up to be an EPIC FAIL. Apparently the console object only works after you open up the development tools and click on the Script tab, which doesn't make sense to me. What if a developer left some console commands in the code for the live site by accident? It'll generate an error when there really isn't one.

@ajaxwiz: You are correct in that it does not work like the Firebug one. For some reason, you have to open the developer toolbar or you get errors. It probably doesn't create the console object until the dev toolbar is loaded. As for JScript, I can assure you that IE8 uses JScript. They have not changed their JavaScript engine out for JScript.NET or anything like that. IE8 is a mixed bag for developers.
We have better tools and a better rendering engine with IE8 (and Fiddler was great with IE6 and IE7 – much more powerful than what Firebug comes with), new DOM APIs (and corrections for old ones). We still don't have XHTML, the W3C JavaScript event model, a more compliant JavaScript engine (and "for each" would really be nice), SVG and/or canvas. That said, fairly compliant rendering should be a huge time-saver.

console.info(), console.log(), console.error(), and console.warn() all throw errors if the IE Developer Tools window isn't open yet.

Speaking of logging, is there a way to have the logging appear on the left rather than in that small window on the right? The HTML tree is fine and all, but it's not even close to Firebug in terms of usefulness. It's a minor thing, but why can't IE look sexy? The browser, the dev tools – they all look like my kid brother's VB6 applications: lame and boring.

@Glen Fingerholz: I agree. The debugger is a step in the right direction, but there is still a HUGE gap in standards/speed/features between IE8 and Gecko/WebKit. It's simply too little, too late. Microsoft is not catering to the developers, but rather to the average internet user who can't tell the difference – users who will just use what Windows comes with and not really care. This leaves the developers out in the cold, as is usual for Microsoft. Although I'm sure all .NET programmers will be fine, as usual. With the way things are going now, IE's market share will continue to decrease in favor of better products. If only the US would impose some of the EU restrictions on Microsoft so that IE becomes completely independent of Windows. But I'm pretty sure Microsoft dumps a lot of money into preventing that from happening.

About the speed of opening a new tab: in my opinion, it should be instantaneous (like it is in Firefox, Opera, and Chrome). Even 0.2 sec is too slow, because it is a noticeable lag and makes the browser feel unresponsive.
When I hit Ctrl+T, a new tab should come to the front immediately, and my cursor should immediately move to the address bar. When I say immediately, I don't mean in under one second; I mean with no perceptible time lag. In my opinion, this is the worst flaw in the user experience in IE8.

Why does clicking "Delete Browsing History" (select all) take Sooooooooooooo long to delete everything? I can do a full delete of my history, etc. in Firefox in less than 2 seconds. steve

Oh no, you guys just made IE8 RC1 behave like Chrome, which annoys the hell out of me. In Chrome or Firefox, when clicking the Back button to return to the prior page, the old page is first displayed from the top AND THEN it jumps to the old scroll position. Now IE8 RC1 is doing the same; I really have no reason to use IE8 anymore. PLEASE, PLEASE – I don't mind the slowness, but don't FLICKER the user like crazy. If you want to see the effect, go to eBay and search for something, then scroll to the middle of the resulting search page, then click an item, then click the BACK button, and you will see the "top, then old spot" page flicker. Basically all "list based" pages will have this problem. IE7 does not have this problem.

Creating new tab (Ctrl+T): too slow
-----------------------------------
I'd like to add my voice to all those who said that creating a new tab (Ctrl+T) is too slow. It is too slow, no matter which or how many add-ons are enabled.

CSS bugs
--------
There are still many, many CSS bugs remaining to be fixed. A good bunch of them.

Developer tools
---------------
Developer tools are a weak, very weak area in my opinion.

- I can not see/get a list of CSS parsing errors: so, realistically speaking, how is a web developer going to upgrade webpages, to correct errors, etc. with the IE8 dev tools?
No error console like Firefox 2+ has, like Opera 9+ has, like Safari 3+ has, like Amaya 11+ has, like iCab 3+ has, etc.

- I can not see a list of markup validation errors: so, again, the developer tools have very limited usefulness, helpfulness, relevance. I can do that instantly – offline or online – with a Firefox add-on. Already, people have been filing and reporting webpage layout issues in bug reports at IE beta feedback, and they have no clues, no hints, no ideas whatsoever that their webpage is full of markup and CSS errors. How are they, how can they, upgrade their webpages and websites without those?

- no possibility of examining computed CSS property values, except the layout box model values (offset, margin, border, padding, content), and even there, "auto", "medium", etc. still do not refer to computed values. This is one area where all other browsers (Firebug, DOM Inspector, Dragonfly, Safari Web Inspector, etc.) are clearly ahead of IE8.

- no built-in HTML Tidy, no HTML Tidy add-on: such a thing would be useful for old FrontPage-generated webpages or MS-Word-exported webpages, you know…

- view source: it's not as developer-friendly as it could and should be. I shouldn't have to set its text size all the time. Also, linefeeds and CRs should be saved too (there is a bug filed on this).

Documentation
-------------
This is the weakest area, the weakest element by far, of the IE8 package. Everywhere I read, everything turns around the same ad nauseam repeated issue: how to switch to compatibility view, how to use the meta tag to opt in to IE7 rendering, to click that broken-document-switch-to-compatibility-view button, to download a compatibility list, etc.
Nowhere at MSDN can I read anything instructive, helpful, or useful about how to upgrade webpages, how to upgrade "tag soup" webpages, how to correct (validation, parsing) errors, what to look for (typical errors, misnesting, doctype declarations, recommendations, tips, web standards references), how to create a lean, performant and efficient stylesheet, how to properly structure a webpage, what "classitis" and "divitis" are and why they're bad, etc. None of this type of documentation exists at MSDN. Zero. User-agent string detection as a way to sniff, to detect, the browser is promoted again in several documents at MSDN, when lots of expert web designers on the subject of cross-browser development have said that object/method support detection is more reliable, more manageable, more forward-compatible. Gérard Talbot

@Those who complained about the developer tools: First of all, at least they are there to help a little bit. It's a new feature, so it probably will have bugs. In addition, the looks don't matter. It seems to fit well with the browser's default theme, so does it really matter what it looks like? The IE team is restricted in what they can do with it anyway because of Windows itself, unless you want them to use a bunch of custom images and everything to slow the browser down more (yes, I'm aiming this at those that complained about new tabs being too slow, despite the fact that they are slow here, too).

@Gérard Talbot: The documentation isn't there because it isn't needed for authors. Authors should know how to fix these things. After all, stylesheets, doctypes, etc. aren't part of IE's problems. They're the authors' problems. The only standard that IE seems to want to follow is ECMAScript, aside from some RFCs regarding internationalization and the Unicode standard. The W3C follows standards, and they also make use of their own recommendations. Notice that I said "recommendations". That's all they are; they are not standards. None of them actually need to be followed.
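Gérard's point above, preferring object/method support detection over user-agent sniffing, can be sketched as follows (a minimal illustration; "hasMethod" and "describeEventSupport" are hypothetical helper names, not from any MSDN document):

```javascript
// Capability detection: ask whether the method you need exists,
// rather than parsing navigator.userAgent strings.
function hasMethod(obj, name) {
  return !!obj && typeof obj[name] === 'function';
}

// Example: branch on the event API actually present, never on the
// browser's claimed identity. "win" stands in for window.
function describeEventSupport(win) {
  if (hasMethod(win.document, 'addEventListener')) return 'w3c';
  if (hasMethod(win.document, 'attachEvent')) return 'microsoft';
  return 'unknown';
}
```

This stays correct when a browser changes its UA string or a new browser appears, which is the forward-compatibility argument the comment makes.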
IE implements some of them because they're useful (like CSS, for example) and because it would be dead already if it didn't.

@IE Team: Great job with the new IE thus far. Keep up the good work, and you might regain the share you lost to alternative browsers like Mozilla Firefox.

Gotta concur with the other people complaining about tab performance. Browsers that are virtually instant in creating new tabs: Mozilla Firefox 3.0.5 (latest, with around 18 extensions enabled), Safari for Windows 3.2.1 (latest). Still faster than IE8: Google Chrome 1.0.154.46. Two-year-old laptop with XP MCE, SP3. Intel Core Duo T2050 (1.60 GHz, 533 MHz FSB), 504 MB of usable RAM. No known malware infection. Not ideal (more RAM would be desirable), but fast enough (except for Visual Studio 2008, which is a DOG even on the fairly new, powerful machines we have at work – an unrelated rant). I would be more forgiving if I had the security enhancements that Vista and Windows 7 people get (and that it appears Chrome was able to implement on XP).

> No matter which or how many add-ons are enabled.

New tab opening seems fast enough. Less than or close to a second, and I have an AVG plugin enabled that takes about 0.5 of a second to start.

What a lame post. Why not simply start to adhere to international standards, once and for all? Pick WebKit. It's already out there and it works. There would be nothing wrong with that; MS already picked up code from Mosaic a long time ago. It's never too late to fix things.

@EricLaw: Daniel's report does the trick. I can't test under Vista, so it may be a platform-specific (XP-only) bug.

@Mitch, I can see Daniel's testcase failing in IE8 RC1 on Vista. Not sure if the testcase is correct, but it does fail on Vista as well.
Well, according to Web Bug Track, Microsoft has acknowledged the bug with slow tab opening but has indicated they have no intention of fixing it before IE8 ships. Bug reference: search Web Bug Track. (I tried to put a link in here, but the anti-spam bot for this site is so aggressive that any post with a link is considered spam!) This is too bad! Wish this had been one of the first things addressed in IE8 development, since the issue was only introduced in IE7.

@Norman: I'm not sure what "acknowledgement" you're referring to exactly, but I can tell you two things: 1> slowness in opening tabs is demonstrably related primarily to slow browser add-ons, and 2> we continue to work hard on improving the performance of the browser in targeted scenarios, including startup.

@ajaxwiz
> Also, that link refers to JScript, which is NOT JavaScript, but rather Microsoft's implementation of JavaScript. I can only hope that's just an old reference and IE8 is not actually using JScript.

JScript is IE8's implementation of ECMAScript, not of JavaScript. I can only hope that you also take issue with all of Mozilla's non-standard extensions to JavaScript, their implementation of ECMAScript.

@DT: Yes, you're right. I only take issue with Microsoft's implementation because it's so slow.

No CSS3 (whatever that will eventually mean exactly) is just SO pathetic. Take a look around, guys. You high-five yourselves over CSS 2.1 compatibility while your competition has nearly arrived at CSS 3 (WebKit, Gecko, Presto).

After disabling Research, Java (sorry, applet developers), and Acrobat-related plug-ins (leaving only "Diagnose Connection Problems" and "Report a Webpage Problem" enabled), creating tabs was much faster. Most users aren't going to have that setup. In fact, most users are probably going to have more toolbars and BHOs installed (Google, Yahoo!, MSN, Alexa, Ask!, McAfee, Symantec, MyWebSearch, etc.) and enabled.
Why doesn't Firefox have this issue? Or really, any other "major" browser?

@EricLaw [MSFT] Re: 1> Either list the "naughty" add-ons that cause IE7, and more so IE8, to open new tabs slowly, or refrain from trying to blame someone else's software. We've all tried IE7 and IE8 with EVERY add-on disabled, and the performance of IE opening a new tab vs. ANY other browser PROVES that IE is significantly slower. Then, when you enable the add-ons that you NEED to use on a daily basis to make IE a usable browser, the performance gets worse. Re: 2> If Microsoft *really* is working on fixing this issue with IE, then (*cough*, *cough*, you've just acknowledged this issue *again*) please post an article here on the blog indicating that it is slow, that you are working on it, and that it will be fixed before IE8 RTM ships. If not, this banter about rogue add-ons is just hearsay and a bunch of FUD to try to pull the wool over our eyes before the IE8 RTM launch.

Only hAl, Dan & Ted "the EPIC FAIL" have stated that they think new-tab opening speed is fine. Considering their reputation (MS fanboys), I don't think the evidence from everyone else that there is a serious performance issue can be dismissed by blaming third-party software.

@jesse
> Only hAl, Dan & Ted the EPIC FAIL have stated that they think new tab opening speed is fine.

What about Arieta? And tabs open fine on my end as well.

Hi everyone, regarding the console object: the console object is only added when the tools are opened. The best practice here is to check for the existence of console and then provide a custom object (possibly just a no-op) if it doesn't exist. This also accounts for any browser that doesn't support console. I believe some libraries already do this. And hopefully the script errors generated will prevent unwanted console statements from accidentally ending up in live code.
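The guarded-console practice described in that comment can be sketched roughly like this; the method list and the window/globalThis fallback are illustrative additions, not part of the original post:

```javascript
// Sketch of the "check for console, install a no-op stub" pattern.
// In IE8 the console object only exists while the Developer Tools
// are open, so stray console.log() calls can throw in production.
(function (g) {
  if (typeof g.console === "undefined") {
    g.console = {};
  }
  function noop() {}
  // Illustrative subset of methods; real pages may need more.
  var methods = ["log", "info", "warn", "error"];
  for (var i = 0; i < methods.length; i++) {
    if (typeof g.console[methods[i]] !== "function") {
      g.console[methods[i]] = noop;
    }
  }
}(typeof window !== "undefined" ? window : globalThis));

// After the shim runs, this call is safe even with the tools closed:
console.log("hello from a possibly console-less browser");
```

Libraries of the era did this kind of guarding in various shapes; the point is simply that every console method a page uses must exist before it is called.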
John [MSFT]

@hAl: Daniel basically took the test I created, isolated it on a single page, and made it simpler (replaced an unordered list with multiple list items with a single paragraph, replaced generated content with a thick border). His test is correct, and if anything "more correct" than mine, because it's simpler and doesn't rely on features added (generated content) or fixed (ghost spacing between unordered list elements) in Trident in this major revision. If it fails on your Vista machine too, then it's not platform-specific. I should mention that my IE 8 install is raw: there's no Java, no .NET, no toolbars, nothing installed on this machine apart from stock Windows XP Pro SP3, security updates, and stock IE. Heck, there aren't even third-party hardware drivers! It runs in a VM emulating either a Cirrus 5446 video card or a plain VESA-compatible card (using Bochs's open VGA BIOS), and all drivers are Microsoft's. I'll try to create a new, fresh, no-options install and see if I can still reproduce this bug.

Not sure whether this would fall under "critical issues" (probably not :)) but here's a very simple bug:
——
<span id="s1" style="border:10px solid red;padding:5px;background:#bbb;">Incorrect offsetWidth returned for inline elements with border/padding set</span>
alert(document.getElementById('s1').offsetWidth)
// expected: content width + 2×5px padding + 2×10px border; IE8 RC1 leaves the border and padding out
——
IE8 will not take the border/padding into account. Sorry for posting this here but, unfortunately, as far as I know, you still don't have a public bug-tracking system à la Bugzilla.

Addendum: just tested on a clean XP SP3 install (fresh off a Windows XP Pro + SP3 CD) + IE8 RC1, with no accelerators, no filters, no nothing (in fact, all UI improvements added over IE6/7 and offered in the startup wizard were deactivated).

@EricLaw[MSFT] Re: [I'm not sure what "acknowledgment" you're referring to exactly] – regarding slow opening and responsiveness of new tabs: quite simple, really. The first major acknowledgment was on September 22, 2008.
View this bug report in Connect, and note the comments (by Microsoft) in this "Closed (By Design)" bug report.

[QUOTE] This is a long known issue that opening a new tab in IE will be slower than other browsers. The main reasoning behind this is due to the current architectural design in IE and maintaining backward compatibility issue. [/QUOTE]

Well hello there! Honesty! Welcome to the discussion! Microsoft states: "This is a long known issue that opening a new tab in IE will be slower than other browsers."

[QUOTE] Other browsers do not build on the same architecture therefore do not suffer this problem. [/QUOTE]

What is that you say? IE has a problem that other browsers do not suffer from?

[QUOTE] We are actively looking for ways to improve performance in IE8, including this known limitation. [/QUOTE]

Known limitation? Well, apparently only IE users are aware of it, because EricLaw[MSFT] is not sure.

[SKIP] link to docs on other unrelated performance tweaks in IE [/SKIP]

[QUOTE] Best regards, The IE Team – Posted by Microsoft on 9/22/2008 at 9:40 AM [/QUOTE]

It should also be noted that this bug has: a 4.7/5 rating in terms of importance, a 93% validation level, and 0 workarounds.

Followed by the most recent IE chat (re: slow tab opening)…

[QUOTE] JohnHrv [MSFT] (Expert) [13:27]: "there may be ways to make the process more efficient and we've been working on that in IE8"… "so it's definitely an area that could use optimization" [/QUOTE]

I'm not trying to start a fight here, but let's put the "blame the user's add-ons" line to rest, admit that the issue is definitely there, and that something needs to be done about it before IE8 goes RTM.

@vasko_dinkov: Bug 389825: Inconsistent offsetHeight for inline elements – connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=389825 Regards, Gérard

@steve_web: You're pretty much missing the point, but I can't blame you when the response was so unclear.
"The main reasoning behind this is due to the current architectural design in IE and maintaining backward compatibility issue." That is referring (in a painfully obtuse way) to the fact that browser add-ons are loaded on a per-tab basis, and thus if you have poorly performing add-ons installed, they will cause slow tab startup. That architectural design maintains backward compatibility with existing extensions (which are coded on a per-tab basis). The response was entirely misleading in suggesting that the issue was not conditional upon having poorly performing add-ons. When only high-performing add-ons are running, new tabs appear in well under a second.

<< You high five yourselves with CSS 2.1 compatibility while your competition has nearly arrived at CSS 3 >> Ummm… methinks you missed the part where no other browser currently gets CSS 2.1 right. When I went to school, you didn't get to move on to the next assignment until you correctly finished the current one.

@EricLaw[MSFT]: As I and many others can assure you, the performance of opening a new tab in IE8 is slow regardless of how many add-ons you have installed, or whether you have _any_ add-ons installed. However, if you insist that it is buggy add-ons that slow things down, then for Pete's sake point the finger so we know which add-ons to remove. Likewise, when running IE8 in no-add-ons mode, the link in the Tools menu to go find add-ons that are causing trouble (e.g. to delete them) is grayed out; you need to go into Tools > Options > Addons > Manage Addons in order to see them. Finally, please fix whatever is BLOCKING the DELETE button from being displayed beside ***ANY*** add-on. I don't want to "disable" the "Research" add-on, I want to throw it off a volcano into a pit of boiling lava. It serves me no purpose other than to clutter my system and slow down my IE browser.
Believe it or not, in terms of IE development priorities I would put IE tab-opening speed WELL above SVG, W3C event listeners, and CSS rounded-corner support. Well above. This is a performance issue that EVERY SINGLE IE user will encounter almost every day, and in many cases several times an hour.

Can you add a maximize button to the print preview window? Its border can be resized, but we cannot maximize it.

From the help tips at: #5 Once you've found a broken extension, contact the manufacturer and ask for an update. Dear Microsoft, please make haste with Windows Update to completely remove the *Research* add-on from Internet Explorer. Feel free to do so with or without informing the MS Office team that you are fixing their code by removing it completely. Thank you.

Well, it seems I've uncovered a rather large failure in IE8 RC1. The "Delete" button that was grayed out for extensions was actually on an IE7 install. In IE8, NO SUCH BUTTON EVEN EXISTS! So there is NO WAY TO REMOVE an extension (good or bad).
===========================================
I see this as a monumentally major issue.
===========================================
How do I ever remove a spyware toolbar if it gets installed? Even if I open IE with extensions turned off, I have no way to delete them. What about extensions that I need one month, but not the next? (Citrix, anyone?) Adding and removing add-ons should be JUST as easy as it is in Firefox: at any time I can pick an extension and disable it, customize it, or uninstall it. – steve

Steve, if you spent more time looking into things and less time ranting, you might get more respect. The "Delete" button was renamed "Remove", and it's now behind the "More Information" link for each add-on inside Manage Addons. It's not always enabled because sometimes you must run an uninstall program. A spyware toolbar isn't going to support standard unregistration with the Delete command anyway; you use anti-spyware programs to remove malicious software.
Of course, you should probably try to avoid installing malware in the first place. You can almost always uninstall legitimate IE plugins using the Add/Remove Programs control panel. It's true that Firefox has a much different, less powerful, non-COM extensibility model. They have to, because they run on platforms other than Windows.

@Daniel – a few notes:
1.) I've submitted a bug in Connect for this.
2.) Yes, you are right, there is a "Remove" button buried deep inside the add-ons dialog interface. However, there is no non-grayed "Remove" button available for _any_ IE add-on in my list (maybe yours is different?).
3.) Case in point: there is no item in the Add/Remove Programs list for "Research" or "Microsoft Research". I have to "know" that it is buried deep in the Microsoft Office Tools application, and that I need to "Change" my MS Office install to find and remove the "Microsoft Office Research toolbar" in order to fix this.
4.) I don't expect a spyware toolbar to provide a "delete" option, but I DO expect IE to provide one for me. IMHO the install should unconditionally fail if an uninstall is not provided.
5.) I don't believe anyone is claiming that Firefox's extensions are "less powerful" (nice try). I have many extensions for Firefox that IMHO provide more power to me as an end user than any add-ons I have for IE.
6.) I'm pretty tech-savvy compared to most Windows users… if *I* had a hard time finding out where to delete extensions, how will the *average Joe* user find it? – steve

Please fix scrolling performance on pages with Flash. For example, IE8 scrolling performance is pathetic on YouTube. Firefox and Safari have no problems.

You are so out of business. IE is just ridiculous; its performance sucks compared with the other browsers, the IE user experience is the worst, and it should be really embarrassing that Google's Chrome is way better than all the other seven versions of IE.
MS, sometimes I get a dialog box warning me of an application error; IE didn't actually crash, as far as I observed. Is there any way to generate some log files for your team to inspect? The dialog box detail: Title: iexplore.exe – Application Error. Content: The instruction at "0x00d5ea13" referenced memory at "0x6363d854". The memory could not be "read". Click on OK to terminate the program.

@EricLaw [MSFT]: I agree with steve_web, IE tab-opening speed should be top priority. Tabs opening in under 1 second is not good enough. They should open *instantaneously*, with or without add-ons (just like every other browser).

vasko_dinkov, you're correct: our offsetWidth is right for blocks but wrong for inline elements. Good catch, thank you!

For people with slow tabs: you have something wrong on your end, because tabs open slowly in only two distinct cases: 1) the computer is overloaded with applications – not MS's fault; 2) there is big, bad JavaScript running in multiple tabs. I have never seen the problem you describe otherwise. Tabs load (in numbers, when instructed to) in a matter of milliseconds; there is no room for improvement. (And the only add-ons here are Orbit Downloader, Sun Java, and the Adobe Acrobat Reader plugin.) First look for problems on your side, not in IE. Maybe you managed to damage XP or Vista (like a manual fix of a problem gone wrong). But I need IE8 RC1 for Win7 – that's the only thing missing…

CSS-only scrolling tables STILL don't work in IE8 RC1, even those explicitly called out as working in IE6 and other major browsers (Firefox, Safari, Chrome). Examples: in both of these examples, Gecko and WebKit browsers display a table with a scroll bar to reveal additional content. IE8 RC1 displays a table without the scrollbar, making it impossible to see anything beyond the bottom extent of the table "frame". In IE7 compatibility mode, the first row fills the entire height of the parent table, with no scroll bar to see additional rows, which is arguably even less useful.
I really hate that I have to resort to JavaScript libraries or goofy CSS hacks – which then break table scrolling in other browsers – to make IE behave correctly. It's a waste of my time and the computer's resources (especially since JS speed in IE8 is still woefully behind the times).

Thank you for posting information about the platform improvements. However, I was hoping to find some information about any changes to HTML+TIME (XHTML+SMIL) in IE8; perhaps somebody from Microsoft could blog about this topic. I have not seen the animateMotion or transitionFilter elements work in IE8 RC1 regardless of which compatibility mode I try. I'd like to see some assurance from Microsoft that these will be working when IE8 is released. I'd also like to know whether IE8 will still support SMIL 2.0 or a more recent SMIL specification. After reading the Improved Namespace Support document, it sounds like developers will not have to use <html xmlns="" xmlns: and <?import namespace="t" implementation="#default#time2"?> in web pages; they should be able to replace these with <html xmlns: and IE8 would then use the preinstalled time2 behavior. It would be helpful if somebody could confirm whether that will be the case. Thank you for your time.

@Gérard, re: bug 389825. You are right, this bug was not fixed in the RC build. The issue will be fixed in the final release. Our apologies.

About tabs opening slowly: it is not something wrong on our end. It seems to be the implementation in IE. When I hit Ctrl-T, I first see a tab opened in the background that says "connecting". Only after a time lag (sometimes slow and sometimes fast, but never instantaneous as in other browsers) does that tab come to the front and the cursor move to the address bar. I would be perfectly happy with the following solution: when I hit Ctrl-T, IE should instantaneously bring the new tab to the front and move my cursor to the address bar so that I can type.
I don't care if something is connecting in the background; I just want to see the new tab immediately and be able to type immediately. Showing a tab "connecting" in the background, as IE does now, is pointless and annoying (yet that seems to be able to appear almost instantaneously). That visual step should be removed. All I want is for the new tab to get focus and my cursor to appear in the address bar, and I want that to happen as fast as the background "connecting" tab appears now. I don't understand why that is not possible and why it hasn't been implemented that way.

JP, I don't know why you have such a hard time understanding this. Poorly performing add-ons get loaded on every new tab and cause the problem you're seeing. When you don't run the bad add-ons, you don't have the performance problem. The IE guys made a special mode that fixes this problem, called No Add-ons mode: run IE with -extoff and watch your problem go away. Of course, you could just disable or uninstall the bad add-ons and get the same effect, but for some reason you don't seem to be getting that message.

Can Dan and Klimax clean out their ears!!! The rest of us find tab opening to be the slowest feature in IE, and far slower than in any other browser out there (hands down, no exceptions). This includes:
* Running in No Add-ons mode
* Having no other tabs beyond about:tabs or about:blank open
* Running on a new, powerful PC
WE ARE SICK AND TIRED OF BEING TOLD IT IS OUR FAULT! We are end users giving an honest opinion about a product. Outside of a few fanboys on this blog, I have found NOT A SINGLE LIVING SOUL who feels that IE's new-tab opening speed is even ACCEPTABLE. We are not talking about fast vs. super fast; we are talking so gosh-darn pathetically slow that we feel the browser is shown in a bad light by this flaw.
I don't care what the story or PR angle is – all I can tell you is that users ARE GOING TO HAVE SEVERAL add-ons RUNNING all the time: Acrobat, Java, Flash, SVG, Live Toolbar, Google Toolbar, Research, etc. The load time without these add-ons is slow, and it is frustratingly painful with the add-ons loaded. I don't care who claims responsibility, who fixes it, or what. It just needs to be fixed. – Morton

Too many fanboys can't see the truth. Run Firefox and Safari and you'll see that IE runs pathetically slowly. Safari opens tabs instantly, but there's always a delay with IE. IE is so slow that opening a new window in Safari is faster than opening a tab in IE. NO excuses. All other browsers do not have this problem. Blaming add-ons is just a smokescreen. Unfortunately, I wouldn't be surprised if this never gets fixed; Microsoft has a habit of keeping minor but annoying bugs unfixed for a very long time. But don't lose hope. Maybe this will get fixed in IE9.

Tabs in IE are opened in separate processes as a security measure. That might explain why it takes longer than in other browsers.

@Marc Silbey, EricLaw [MSFT] and others:
> We take your feedback on critical issues very seriously

Here's one which is rather serious, I'd say. Load and test thoroughly the page: move the mouse in and out of the <body> height (or from chrome – say, the status bar – to the window viewport and vice versa) after zooming in or out, or after clicking the Back or Forward buttons. Depending on your "Automatically recover page layout errors with Compatibility View" setting, RC1 build 18372 will indicate that it is struggling with the page… or the whole page will become blank. Developer Tools will also show some troubling issues with the original code: <col></col> becomes <col/></col/> in the dev tool, which is unusual and unexpected. More on all this in bug 409478. Regards, Gérard

I agree with Gérard – bug 409478 is a fairly major rendering glitch.
That, and the bugs with offsetHeight, offsetWidth, iframes not firing resize events for vertical size changes, new-tab opening speed, the (in)ability to delete add-ons, and various chrome and zoom issues. Has there been confirmation of an RC2 release yet? Or is Microsoft just going to "hope" they've fixed everything and go RTM to let developers start testing/porting their code to work in IE8? Yes, ***start***. There are too many bugs in RC1 that we developers expect to be fixed before we start coding workarounds for them (where that is even possible).

@MAx: Chrome launches tabs in separate processes as well, and it does so instantaneously. IE is the only browser that can't create tabs at a consistent speed.

Alahmnat, funny, Chrome is the only browser that doesn't support extensions. Coincidence? No.

I tested RC1 on XP. Tabs (multiple at once) loaded very quickly (milliseconds). It is a problem in your setup. When I see no problems whatsoever on a weighed-down and damaged XP machine running BOINC projects flat out, with all cores maxed out and memory already filled by projects, and it still manages to beat your system, then sorry. And Firefox is the worst: when only one tab is present, the tab bar is hidden, and it reappears when a second tab is opened. THIS IS BAD!!! And its performance is not good; it is the same, and the UI is unusable (too many bad decisions). Yes, I am on the net 20 hours a day, opening a heck of a lot of pages (4 windows, each with 5–70 tabs), and I need a reliable browser… (and FF is missing full recovery after a crash). I hear you, but the problem is on your side. I suspect the problem is in the registry (some sort of corruption – I had it as well, as a result of five crashes in a row).

@Klimax – get down from your high horse for a second. Your IE tab opening may be very fast; congratulations. I'm sorry to hear your Firefox is causing issues; the rest of us have no problem. (Klimax, stop reading here.) Now, for the rest of the planet that finds IE new-tab speed to be: a.) very slow, b.)
very slow in no-add-ons mode, c.) slower than Firefox, d.) slower than Safari, e.) slower than Opera, f.) slower than Chrome, g.) unacceptably slow, h.) ALL OF THE ABOVE – feel some comfort. You are not alone. In fact, you are in a special little group we call "The Majority". I think it is high time this blog had a running poll, so that when questions like this come up, your end users (not IE Dev Team users) can report real-world findings. E.g., today's poll: Do you find the speed of opening a new tab in IE8 RC1 to be: (_) Very Fast (_) Pretty Fast (_) Average (_) Slow (_) Very Slow. I would expect the results to favor the "slow" options by a 9:1 ratio or better. For those few who still think IE8's tab-opening speed is awesome, please post a video on YouTube (or similar) showing that you have some extensions enabled (yes, this is a real-world test, not a mythical fantasy candy-land test) and how fast your new tabs open, and post a link back here on the IE blog.

I really see no reason to be excited about IE8. Fact is, either the dev team doesn't get enough resources (coders and/or time), or MS itself doesn't really pay much attention to its browser, since users will get it no matter what kind of garbage it is… Since IE 5.5 I don't see any improvement put into IE, and the worst thing is, you guys put new numbers on it, change the GUI and some little things under the hood, and VOILÀ, it's NEW and BETTER… No, it is not. You "solved" security in IE7 (and updated 6) in the easiest and laziest way possible: just block any possible script. I'm just saddened that users will NOT see or "experience" a good IE until maybe version 15, but by then I guess it will again have been passed by other browsers…

Steps: load the page. 1- scrollWidth must always be greater than or equal to clientWidth: that's by definition (MSDN and CSSOM). So here, RC1 build 18372 has an implementation bug and a regression in comparison to IE 7 and IE 6. 2- Clicking the [right to left] radio button makes scrollLeft become -28.
That was not so in IE 7 and IE 6. Regression. 3- Clicking the right-to-left radio button should increase clientLeft by the width of the vertical scrollbar (e.g. from 24 to 41; with Display/Appearance tab/Font size: Large Fonts, from 24 to 43). That was not so in IE 7 and IE 6. Regression. Regards, Gérard

@EricLaw: a complete answer to your inquiry about the regression I mentioned:
– It is present in the Partners build and in RC1. I have no idea how it behaves in an internal build, as only you (and your team) have access to those. If I could try to see whether it reproduces, I could confirm it solved. As it stands, the latest public build fails the test and has a regression over Beta 2. If you want that to change, release an RC2.
– I installed Windows XP SP3 off a retail CD, applied all core system updates (no additions like .NET), rebooted twice, then installed IE8 RC1. I didn't tinker with settings such as zoom or developer tools, nothing: I typed the test URL, hovered over the area with the mouse cursor, then left it, and IE failed.
– Considering it is a box-sizing issue, it could have been fixed along with bug 389825, which was supposedly fixed in RC1 but wasn't. Next time you release a build, please make sure you don't forget to integrate patch sets – or you'd better release public builds more often. For me, RC1 isn't an RC; it's Beta 3.

@Those who complain about speed: without add-ons installed, IE8 RC1 is about as fast in day-to-day operation, on the same machine, as a Firefox/Minefield 3.2 daily build with JIT JavaScript enabled for the page (which is a fair speed gain over IE7, which was ten times slower than Firefox 3.0); that it fails tests such as Acid3 puts it in the "don't make creative new websites" category, but there certainly are notable improvements.

Jeffrey, I too get the maxed-out CPU when dragging windows, even if IE8 isn't running at the time. This suggests that some core GUI-related Windows files do get updated with IE8, which explains the reboot.
Interestingly, examining Task Manager, the extra CPU cycles are being used by: a) the application whose window is being dragged, and b) any applications that the dragged window is being dragged over. This suggests to me that video hardware acceleration has been turned off, much like in Safe Mode, and the applications are having to continually report to the GUI what it should be displaying during each rendering frame of the desktop.

Re earlier comments: I notice a pile of DLLs in the System32 directory that have been updated with the IE8 install, including some related to DirectX, e.g. dxtrans.dll. I also noticed, while in System32, that scrolling through the directory maxes out the CPU and the scrolling freezes. Going to roll back and see if it fixes IE8's incredibly slow JavaScript execution. Rolling back had no effect, so I'm guessing it is standard behaviour; it is the same on another PC running IE7 which has never been upgraded to IE8 RC1. Apart from the slow tabs (probably due to all the toolbars loading in a new process – I added up the load times of all of mine in the manager and it was more than 10 s), the main annoyances are slow screen rendering when scrolling and occasional "Diagnose connection" pages coming up when IE7 or Firefox have no problems. It also kills off my auto-updating Active Desktop.

James, you really seem to discount the possibility I mentioned. I do not see that you have really investigated it. (I did, since I saw the performance before and after, and the behaviour after was the same as you and others described.) I suggest using Process Monitor or a similar tool to capture activity in the IE processes. And by the way, there is some probability that an update or change to the affected part of the registry (or a DLL) will solve it. Since all DLLs are loaded on a per-tab basis, such corruption or a badly behaved DLL can cause it. It solved itself before I was able to do a full investigation, but PM showed some problems; start with those. And that is indicated by IE's No Add-ons behaviour (same).
And last, be sure to disable ANY processes/services which interact with IE (AV, monitoring tools when not needed, and such, including services!). That's why I say the IE team cannot do anything about it.

@Klimax – As several people have noted, IE8 is slow opening tabs with or without extensions, under condition x or y, etc. I've started a bug report in Connect with some real data from environments I have, with and without extensions loaded, etc. (Once I find somewhere to host them, I'll share the tools I used to track each browser's performance also.) I've shared these with some developers I associate with, and they all get similar results. See the bug report for details: 411584. As for the "better" performance with no add-ons loaded – that may (or may not) be true, but the reality is that we are all going to have them loaded and/or need them, so a no-add-ons mode for daily browsing simply is not an option.

I'd like to express my thanks to the team for the CSS improvements, particularly the page-break code. It has been a life saver!

With add-ons (Flash, SkyDrive uploader, Silverlight, Java, Spybot, PDF): (_) Very Fast (_) Pretty Fast (*) Average (_) Slow (_) Very Slow
Without: (_) Very Fast (*) Pretty Fast (_) Average (_) Slow (_) Very Slow
Time taken for a blog comment to be submitted: (_) Very Fast (_) Pretty Fast (_) Average (*) Slow (_) Very Slow

Please, please, please ditch your terrible Trident rendering engine and just use Gecko or WebKit. You have made web developers' lives a nightmare for so long.

@steve_web: OK. Only for comparison (I forgot to include this info), it was measured on a Compaq nx7300 (Core 2 Duo with 1.5 GB RAM). Your report has interesting results. A few details are, however, missing: what the add-ons in each case were, if any; whether other software likely to interact with any browser was running (AV, Spybot, and such); and the specs of the computers.
I will test out my main PC with your EXE (IE7, Core 2 E4600?, 1 GB RAM, XP SP3, PATA HDD 250 GB). And BTW, this was missing in the other reports on this blog. Thank you.

The tool referenced in steve_web's link is built against the English version of IE and does not work on IE in other languages (regardless of version). I will try to correct it, at least for the Czech version. (So far the max was 269 ms and the median 109 ms in a row of 10 tabs, in an already loaded browser, on Win7, Lenovo R500 – Core 2.) Without add-ons: (X) Very Fast. With add-ons: (X) Average.

@Klimax – hey, sorry about the language thing (one sometimes forgets about the other language installs). In particular, I noticed that the VERY FIRST tab created after the browser is opened takes quite a while (e.g. not the homepage that is loaded, but the new tab created right after that). I'm not at my normal office, so I can't post any hardware specs at the moment.

When I enable the accelerators, I get an error on every website: "The instruction at "0x…" referenced memory at "0x…". The memory could not be written. Click on OK to terminate the program. Click on Cancel to debug the program." No matter what you do, you end up in a loop that requires Ctrl-Alt-Delete and "terminate program".

If your tabs open slowly, the blame goes to Java? Great. Now it's all Java's fault that IE can't figure out how to play nice with the rest of the world. Again. As one of those rare folks who program entire website backends in Java, and even write game applets, I am tired of hearing Java blamed for your problems. I've occasionally had to uninstall Microsoft updates from my Windows machine because they caused my Java applications to fail. As a result, I am biting the bullet and porting all my remaining Windows boxes to Linux. I am tired of Windows breaking my Java.
It is as if Java is always the implicit evil here at MicrosoftCloudCuckooLand. I am dismayed to find out by reading these forums that Java has once again become the de facto bugbear for Microsoft's development woes. It is amusing, in a sad way, but mostly just depressing. Anyway, good luck on achieving that day somewhere way off in the distant future, when your browser learns to play with the rest of the kids. Squirreldude, Sun's Java Virtual Machine and IE plugin is a performance, security, and reliability nightmare. Of course, why would Sun bother to make it any good for IE/Windows, since they have every incentive to make Windows (which competes with their platform) look bad? Stuart, dxtrans is a part of IE's display filters, and isn't used by other programs. It's not a part of DirectX – it *uses* DirectX. Vinifera, IE "blocks any possible script"? WTH are you talking about?
https://blogs.msdn.microsoft.com/ie/2009/01/29/overview-of-platform-improvements-in-ie8-rc1/
CC-MAIN-2016-30
refinedweb
11,541
71.04
One of the easiest things we can do is convert an existing image from its original format to another one. Let's use this functionality to write a sort of "hello world" application. I am working in C++ (that's why I have installed the libmagick++-dev package) on a 64 bit Linux-Debian machine (and the Debian part of this explains why I installed the package using apt-get). I am using the GCC C++ compiler, and Eclipse as IDE. So, be prepared to apply some changes to fit the example to your requirements. I created a new C++ project, adding some information to let the IDE find the required includes and libraries. In Properties, C/C++ Build, Settings:
- In the GCC C++ Compiler, Includes page I added "/usr/include/ImageMagick"
- In the GCC C++ Linker, Libraries page I added: -L: /usr/lib/x86_64-linux-gnu/ -l: Magick++
So, now Eclipse knows where to find the Magick include directories, and where to find the shared object libMagick++.so that my executable needs in order to run properly. If you run ldd on the resulting program file, you will see that it also depends on other Magick shared objects, namely libMagickWand.so and libMagickCore.so. I have put a GIF file in my tmp directory, and I want my (silly) little program to convert it to PNG. Here is the code.
#include <Magick++.h>

int main() {
    Magick::Image image; // 1
    image.read("/tmp/simple.gif"); // 2
    image.write("/tmp/simple.png"); // 3
}
1. An empty Magick image.
2. ImageMagick loads the passed file into the image. What if that file does not exist? A Magick::ErrorBlob exception would be thrown. And since I haven't wrapped my code in a try/catch, this would result in a (small) catastrophe.
3. The image is saved to the newly specified file (if the application can actually write it). Notice that Magick is smart enough to figure out which format to use according to the passed file name's extension.
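Since an uncaught Magick::ErrorBlob would crash the program, here is a sketch of the same converter with error handling. This is an illustration, not part of the original post; it assumes the same file paths, needs libmagick++-dev to build, and relies on the fact that Magick++ exceptions derive from Magick::Exception (itself a std::exception):

```cpp
#include <Magick++.h>
#include <iostream>

int main() {
    Magick::Image image;
    try {
        image.read("/tmp/simple.gif");   // throws Magick::ErrorBlob if the file is missing
        image.write("/tmp/simple.png");  // throws if the target cannot be written
    } catch (const Magick::Exception& e) {
        // All Magick++ errors funnel through this base class.
        std::cerr << "ImageMagick error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```

With this in place, a missing input file produces a diagnostic message and a nonzero exit code instead of an unhandled exception.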
http://thisthread.blogspot.com/2013_06_01_archive.html
CC-MAIN-2017-51
refinedweb
333
65.83
#include <sys/types.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

int ddi_prop_op(dev_t dev, dev_info_t *dip, ddi_prop_op_t prop_op, int flags, char *name, caddr_t valuep, int *lengthp);

int ddi_getprop(dev_t dev, dev_info_t *dip, int flags, char *name, int defvalue);

int ddi_getlongprop(dev_t dev, dev_info_t *dip, int flags, char *name, caddr_t valuep, int *lengthp);

int ddi_getlongprop_buf(dev_t dev, dev_info_t *dip, int flags, char *name, caddr_t valuep, int *lengthp);

int ddi_getproplen(dev_t dev, dev_info_t *dip, int flags, char *name, int *lengthp);

Solaris DDI specific (Solaris DDI). The ddi_getlongprop(), ddi_getlongprop_buf(), ddi_getprop(), and ddi_getproplen() functions are obsolete. Use ddi_prop_lookup(9F) instead of ddi_getlongprop(), ddi_getlongprop_buf(), and ddi_getproplen(). Use ddi_prop_get_int(9F) instead of ddi_getprop().

dev — Device number associated with property or DDI_DEV_T_ANY as the wildcard device number.
dip — Pointer to a device info node.
prop_op — Property operator.
flags — Possible flag values are some combination of:
    DDI_PROP_DONTPASS — do not pass request to parent device information node if property not found
    DDI_PROP_CANSLEEP — the routine may sleep while allocating memory
    DDI_PROP_NOTPROM — do not look at PROM properties (ignored on architectures that do not support PROM properties)
name — String containing the name of the property.
valuep — If prop_op is PROP_LEN_AND_VAL_BUF, this should be a pointer to the user's buffer. If prop_op is PROP_LEN_AND_VAL_ALLOC, this should be the address of a pointer.
lengthp — On exit, *lengthp will contain the property length. If prop_op is PROP_LEN_AND_VAL_BUF then before calling ddi_prop_op(), lengthp should point to an int that contains the length of the caller's buffer.
defvalue — The value that ddi_getprop() returns if the property is not found.

The ddi_prop_op() function gets arbitrary-size properties for leaf devices. The routine searches the device's property list.
If it does not find the property at the device level, it examines the flags argument, and if DDI_PROP_DONTPASS is set, then ddi_prop_op() returns DDI_PROP_NOT_FOUND. Otherwise, it passes the request to the next level of the device info tree. If it does find the property, but the property has been explicitly undefined, it returns DDI_PROP_UNDEFINED. Otherwise it returns either the property length, or both the length and value of the property to the caller via the valuep and lengthp pointers, depending on the value of prop_op, as described below, and returns DDI_PROP_SUCCESS. If a property cannot be found at all, DDI_PROP_NOT_FOUND is returned. Usually, the dev argument should be set to the actual device number that this property applies to. However, if the dev argument is DDI_DEV_T_ANY, the wildcard dev, then ddi_prop_op() will match the request based on name only (regardless of the actual dev the property was created with). This property/dev match is done according to the property search order, which is to first search software properties created by the driver in last-in, first-out (LIFO) order, next search software properties created by the system in LIFO order, then search PROM properties if they exist in the system architecture. Property operations are specified by the prop_op argument. If prop_op is PROP_LEN, then ddi_prop_op() just sets the caller's length, *lengthp, to the property length and returns the value DDI_PROP_SUCCESS to the caller. The valuep argument is not used in this case. Property lengths are 0 for boolean properties, sizeof (int) for integer properties, and size in bytes for long (variable size) properties. If prop_op is PROP_LEN_AND_VAL_BUF, then valuep should be a pointer to a user-supplied buffer whose length should be given in *lengthp by the caller. If the requested property exists, ddi_prop_op() first sets *lengthp to the property length.
It then examines the size of the buffer supplied by the caller, and if it is large enough, copies the property value into that buffer, and returns DDI_PROP_SUCCESS. If the named property exists but the buffer supplied is too small to hold it, it returns DDI_PROP_BUF_TOO_SMALL. If prop_op is PROP_LEN_AND_VAL_ALLOC, and the property is found, ddi_prop_op() sets *lengthp to the property length. It then attempts to allocate a buffer to return to the caller using the kmem_alloc(9F) routine, so that memory can be later recycled using kmem_free(9F). The driver is expected to call kmem_free() with the returned address and size when it is done using the allocated buffer. If the allocation is successful, it sets *valuep to point to the allocated buffer, copies the property value into the buffer and returns DDI_PROP_SUCCESS. Otherwise, it returns DDI_PROP_NO_MEMORY. Note that the flags argument may affect the behavior of memory allocation in ddi_prop_op(). In particular, if DDI_PROP_CANSLEEP is set, then the routine will wait until memory is available to copy the requested property. The ddi_getprop() function returns boolean and integer-size properties. It is a convenience wrapper for ddi_prop_op() with prop_op set to PROP_LEN_AND_VAL_BUF, and the buffer is provided by the wrapper. By convention, this function returns a 1 for boolean (zero-length) properties. The ddi_getlongprop() function returns arbitrary-size properties. It is a convenience wrapper for ddi_prop_op() with prop_op set to PROP_LEN_AND_VAL_ALLOC, so that the routine will allocate space to hold the buffer that will be returned to the caller via *valuep. The ddi_getlongprop_buf() function returns arbitrary-size properties. It is a convenience wrapper for ddi_prop_op() with prop_op set to PROP_LEN_AND_VAL_BUF so the user must supply a buffer. The ddi_getproplen() function returns the length of a given property. It is a convenience wrapper for ddi_prop_op() with prop_op set to PROP_LEN. 
The ddi_prop_op(), ddi_getlongprop(), ddi_getlongprop_buf(), and ddi_getproplen() functions return:
DDI_PROP_SUCCESS — Property found and returned.
DDI_PROP_NOT_FOUND — Property not found.
DDI_PROP_UNDEFINED — Property already explicitly undefined.
DDI_PROP_NO_MEMORY — Property found, but unable to allocate memory. lengthp points to the correct property length.
DDI_PROP_BUF_TOO_SMALL — Property found, but the supplied buffer is too small. lengthp points to the correct property length.
The ddi_getprop() function returns the value of the property, or the value passed into the routine as defvalue if the property is not found. By convention, the value of zero-length properties (boolean properties) is returned as the integer value 1. These functions can be called from user, interrupt, or kernel context, provided DDI_PROP_CANSLEEP is not set; if it is set, they cannot be called from interrupt context. See attributes(5) for a description of the following attributes.

SEE ALSO
attributes(5), ddi_prop_create(9F), ddi_prop_get_int(9F), ddi_prop_lookup(9F), kmem_alloc(9F), kmem_free(9F)

Writing Device Drivers for Oracle Solaris 11.2
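As an illustrative sketch (not part of the original manual page — the driver name and the "burst-size" property are invented), a leaf driver might use the convenience wrapper in its attach(9E) entry point like this. Kernel driver code of this kind cannot be compiled or run outside a Solaris DDI build environment:

```c
/* Hypothetical leaf-driver attach routine; "burst-size" is a made-up property. */
static int
xx_attach(dev_info_t *dip, ddi_attach_cmd_t cmd)
{
        int burst;

        /*
         * Integer property lookup with a fallback default of 8.
         * DDI_PROP_DONTPASS keeps the search at this device node
         * instead of walking up the device info tree.
         */
        burst = ddi_getprop(DDI_DEV_T_ANY, dip, DDI_PROP_DONTPASS,
            "burst-size", 8);

        /* ... use burst to program the device ... */
        return (DDI_SUCCESS);
}
```

Because ddi_getprop() returns the default when the property is absent, the driver needs no separate "not found" handling for simple integer tunables.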
http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-getprop-9f.html
CC-MAIN-2016-26
refinedweb
1,010
54.42
Cache.Trash* files fill up disk space RESOLVED FIXED Status () People (Reporter: decoder, Assigned: mfinkle) Tracking Firefox Tracking Flags (firefox14 fixed, firefox15 fixed, firefox16 fixed, blocking-fennec1.0 +) Details Attachments (4 attachments, 3 obsolete attachments) While fuzzing on mobile (mozilla-central, debug build, rev 448f554f6acb) I keep seeing folders like "Cache.Trash821552478" in the profile directory, slowly filling up all disk space until I manually delete them. I don't know exactly what conditions lead to this but I had it a second time now that all the space on /data/ was completely exhausted by these. They seem to contain cache data:
$ adb shell ls -l /data/data/org.mozilla.fennec_decoder/files/mozilla/szzixceq.default/Cache.Trash821552478/
-rw------- app_35 app_35 18998 1970-01-01 12:05 _CACHE_001_
drwx------ app_35 app_35 1970-01-01 12:04 C
-rw------- app_35 app_35 276 1970-01-01 12:04 _CACHE_MAP_
drwx------ app_35 app_35 1970-01-01 12:04 2
drwx------ app_35 app_35 1970-01-01 12:04 9
drwx------ app_35 app_35 1970-01-01 12:04 0
drwx------ app_35 app_35 1970-01-01 12:04 F
drwx------ app_35 app_35 1970-01-01 12:04 E
drwx------ app_35 app_35 1970-01-01 12:04 A
drwx------ app_35 app_35 1970-01-01 12:04 7
drwx------ app_35 app_35 1970-01-01 12:04 4
drwx------ app_35 app_35 1970-01-01 12:04 8
drwx------ app_35 app_35 1970-01-01 12:04 6
-rw------- app_35 app_35 4096 1970-01-01 12:05 _CACHE_002_
drwx------ app_35 app_35 1970-01-01 12:04 D
drwx------ app_35 app_35 1970-01-01 12:04 B
drwx------ app_35 app_35 1970-01-01 12:04 1
drwx------ app_35 app_35 1970-01-01 12:04 3
drwx------ app_35 app_35 1970-01-01 12:04 5
-rw------- app_35 app_35 4183 1970-01-01 12:05 _CACHE_003_
Weird. The date is 01-01-1970? What's the date on your device set to? What device is this? (In reply to Naoki Hirata :nhirata from comment #1) > Weird. The date is 01-01-1970? What's the date on your device set to? > What device is this? These are Tegra boards.
I cannot reach them right now due to some IT failure in MV, but I'll check their date once I have access again. (In reply to Naoki Hirata :nhirata from comment #1) > What's the date on your device set to? It's indeed 01-01-1970. But I guess that's normal for the Tegra boards. I wonder if it's some sort of y2k or some similar issue? Could you try setting the device to be the current date/time and see what occurs after some time please? (In reply to Naoki Hirata :nhirata from comment #4) > I wonder if it's some sort of y2k or some similar issue? Could you try > setting the device to be the current date/time and see what occurs after > some time please? I can try that later but I doubt it has something to do with date/time at all. I encounter this mainly during minimization of testcases, which means the browser crashes a lot. I could imagine that some cleanup is done on regular exit of Fennec, which is of course not done when it crashes. And starting up lots of times and not exiting regularly might leave something over. On my device Firefox Beta is eating the storage memory till I get a memory warning and my phone is getting really slow. When I look in the app manager it shows that "data" is growing, while "cache" is still empty (0). (In reply to david from comment #6) > On my device Firefox Beta is eating the storage memory till i get a memory > warning and my phone is getting really slow. > When i see in the app manager it's shown that "data" is growing, the "cache" > is still empty (0). Is your device rooted so you could peek into the profile directory using ADB? Yes, it is rooted, running with CM7. With ADB you mean the Android Debug Bridge? I don't have it installed. Is this quickly done on Ubuntu? (In reply to david from comment #8) > Yes, it is rooted, running with CM7. > With ADB you mean the Android Debug Bridge? I don't have it installed. Is > this quickly done on Ubuntu? It should be, yes (I'm using Ubuntu as well and I remember it being really easy here).
I think you need the Android SDK available at . Once you have that installed, just connect your device with a USB cable to your computer, and using the "adb" command you should be able to run shell commands on the device (e.g. "adb shell ls"). I sometimes see an accumulation of Cache.Trash directories when testing, because I crash repeatedly, and Fennec doesn't get a chance to clean up. There's a dump of the profile directory from another user who's experiencing this: (In reply to Geoff Brown [:gbrown] from comment #10) That might explain it. I'm encountering this bug mainly during minimization of test cases, which involves many short runs which often crash. I worked around this by having our harness delete the files before each run. Nom'd for blocking since this seems to be happening in the field blocking-fennec1.0: --- → ? Looks like we wait 90 seconds before starting the deletion: I'm not really sure what the reason is for not removing the trash folders. Is 90 seconds too long? We have telemetry data on the times required to delete the trash folders: Whoops. That link is for the Cache -> Cache.Trash### rename. This is the folder deletion: Geoff, we were discussing putting these files in the android cache dir such that Android can clean them up on its own if we're running out of disk space. Assignee: nobody → gbrown blocking-fennec1.0: ? → + (In reply to Brad Lassey [:blassey] from comment #17) > Geoff, we were discussing putting these files in the android cache dir such > that android can clean them up on its own if we're running out of disk space. That's bug 742558. Firefox likes to manage its own cache; there's currently no allowance made for an outside entity cleaning it up, so it's not just a simple matter of changing the directory. Moving the trash files should be relatively safe, no? Oh, sorry, I misinterpreted. Moving an invalid cache to the Android cache directory for future deletion seems safe and easy.
Assignee: gbrown → nobody Component: General → Networking: Cache Product: Fennec Native → Core QA Contact: general → networking.cache I tested cache invalidation and trash cleanup and found everything working as designed: - max 20 MB (approx) disk cache on Android - disk cache invalidated when Fennec crashes or is killed - robocop tests always kill Fennec when test completes - Cache moved to Cache.Trash when invalid - Cache.Trash deleted in background after 90 s delay Moving the trash destination to the Android application cache directory has the advantage that the OS should clean up the .Trashes for us when free space is low. The OS cleaning also seems to happen after a delay, so we might not improve on this scenario much. Another user reporting running out of disk space: Assignee: nobody → gbrown Created attachment 628532 [details] [diff] [review] wip patch This patch changes behavior on Android such that the disk cache "trash dir" is now a sub-directory of the Android cache dir. Currently we move an invalid Cache from, say: /data/data/org.mozilla.fennec/files/mozilla/xxx.default/Cache to: /data/data/org.mozilla.fennec/files/mozilla/xxx.default/Cache.TrashNNNN With this patch, we move from /data/data/org.mozilla.fennec/files/mozilla/xxx.default/Cache to: /data/data/org.mozilla.fennec/cache/Cache.TrashNNNN The advantage is that, if the volume is low on space, Android may delete the trash folder itself, so that space is reclaimed even if Fennec has exited. This is a first draft/wip patch. It seems to work for me but is very rough/inelegant and needs testing. I am on PTO until June 11; feel free to take this bug! gcp, can you pick this up? 
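For readers skimming this bug, the scheme described above (rename the invalidated Cache directory to a Cache.Trash### directory, then delete it in the background after a delay) can be sketched in plain Python. This is an illustration only — Firefox's actual implementation is the C++ code in nsDeleteDir.cpp, and the function name here is hypothetical:

```python
import os
import random
import shutil
import threading

def move_cache_to_trash(cache_dir, trash_parent, delay_s=90.0):
    """Rename an invalidated cache directory into trash_parent as
    Cache.Trash<random>, then remove it from a background thread after
    delay_s seconds. Returns the trash directory path."""
    trash_dir = os.path.join(trash_parent,
                             "Cache.Trash%d" % random.randint(0, 2**31))
    # A rename within the same filesystem is cheap, so the caller
    # (e.g. browser startup) is not blocked by the actual deletion.
    os.rename(cache_dir, trash_dir)
    t = threading.Timer(delay_s, shutil.rmtree, args=(trash_dir,),
                        kwargs={"ignore_errors": True})
    t.daemon = True
    t.start()
    return trash_dir
```

Pointing trash_parent at a directory the OS is allowed to clean (as the patch in this bug does with the Android application cache dir) means the space is eventually reclaimed even if the process dies before the timer fires.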
Assignee: gbrown → gpascutto Comment on attachment 628532 [details] [diff] [review] wip patch Review of attachment 628532 [details] [diff] [review]: -----------------------------------------------------------------
::: netwerk/cache/nsICacheService.idl
@@ +58,5 @@
> * Event target which is used for I/O operations
> */
> readonly attribute nsIEventTarget cacheIOTarget;
> +
> + attribute nsIFile cacheTrashDir;
Well, it's a gross thing to have an IDL be platform-specific, but we need to make sure this never, ever gets used on an NTFS-based filesystem (windows): if we try to move Cache to a different directory (rather than renaming it in the current directory) on NTFS the rename (which we block browser startup on) takes as long as actually deleting everything (due to traversing all files to update NTFS permissions: see bug 670911). That could be fixed by changing our underlying NTFS file code to have a flag that allows moving w/o the check, but you don't want to go there for this bug. I'd be fine with having SetCacheDir simply throw if you try to use it on NTFS. Alas, right now we don't have a way to determine if an nsIFile is on NTFS. bbondy tells me we could use ctypes to access the win32 API: "You could also simply add an attribute to nsILocalFileWin.idl which wraps a call to GetVolumeInformation and checks for NTFS easily." But given that this is android-specific for now, and we're in a hurry (I am: are you? ;) I suggest we simply QI the nsIFile passed into SetCacheTrashDir and throw if it's a nsILocalFileWin. Make sure to add a comment that this is just to prevent NTFS usage, and that we can write a more precise test for that if we ever need one (i.e. we add Windows phone support, or something like that), and/or tweak the NTFS rename code to not do checks. Cite bug 670911 in the comment.
Created attachment 629326 [details] C code to detect NTFS in case we go ctypes route I don't expect we want to use ctypes, but in case we do, bbondy kindly gave me this C code that does the NTFS check that we could base it on. So, given that this is ifdef'd to Android, I don't think we need to care too much about NTFS. However, this is not the first time that we've wanted to change behavior based on the file system (see:) so we may want to add a method to nsIFile to get that information in the future. Could we just use the Directory Service to return a new directory for the Cache Trash folder? Instead of adding IDL for a specific folder property? Created attachment 630726 [details] [diff] [review] alt WIP patch This patch simply puts the Android cache folder into the env and grabs it in C++. We do the same thing for Downloads: The patch has some debugging crap in it. I am trying to verify the Trash folder is created and I am having a hard time. I have verified the Android Cache folder is correctly received in DeleteDir, but that's as far as I have got. Created attachment 631033 [details] [diff] [review] alt WIP patch 2 Fall back to current behavior if no Android cache folder is found, less debugging and XUL bustage fix. I sent this to Try server: Attachment #630726 - Attachment is obsolete: true Stealing from Gian-Carlo for now Assignee: gpascutto → mark.finkle Created attachment 631130 [details] [diff] [review] alt patch The patch ran fine on Try, so asking for reviews.
Attachment #631033 - Attachment is obsolete: true Attachment #631130 - Flags: review?(blassey.bugs) Comment on attachment 631130 [details] [diff] [review] alt patch Review of attachment 631130 [details] [diff] [review]: ----------------------------------------------------------------- I'm surprised this works (do you know for sure it does?), for the following reason: When we changed the code to always rename Cache -> Cache.trashRANDOM__NUMBER within the same directory (to avoid NTFS nastiness), we changed the MoveToNative() call to pass null for the "new directory" parameter. And we don't even wind up using the full 'trash' directory for the rename, but just the leafName. Poking through the MoveToNative code for Unix filesystems, I see we wind up getting the file's original directory for 'newdir' when it's passed a blank. So I strongly suspect that you're going to the trouble of getting a new trash directory name ("/foo/CACHE_DIRECTORY/Cache.trash"), but then we're grabbing just "Cache.trash" from it (in nsDeleteDir.cpp:214), appending a random number to it, then passing it to MoveToNative, where it winds up living in the same directory it started, not the one you assigned in GetTrashDir. If I'm right about this, the easiest fix is probably to keep your existing logic, but also compare 'dirIn.parent' with 'TrashDir.parent' right before the MoveToNative call:
targetDir = nsnull;
#if ANDROID
if (dirIn.parent != TrashDir.parent)
    targetDir = TrashDir.parent;
#endif
MoveToNative(targetDir, leaf);
It's vital that we pass nsnull to MoveToNative in the NTFS case (we only skip the GUI-hanging ACL checks if the param is null), so it won't work to always simply pass TrashDir.parent as the first param. Sorry this code has gotten so tricky to navigate.
::: netwerk/cache/nsDeleteDir.cpp
@@ +268,5 @@
> +  char* cachePath = getenv("CACHE_DIRECTORY");
> +  if (cachePath) {
> +    rv = NS_NewNativeLocalFile(nsDependentCString(cachePath),
> +                               true, getter_AddRefs(*result));
> +
Not checking rv here. Guard the rest of the logic in this branch with an if (NS_SUCCEEDED(rv)), and the if NS_FAILED after the #endif will handle the failure case. Attachment #631130 - Flags: review?(jduell.mcbugs) → review- Comment on attachment 631130 [details] [diff] [review] alt patch Review of attachment 631130 [details] [diff] [review]: -----------------------------------------------------------------
::: netwerk/cache/nsDeleteDir.cpp
@@ +282,2 @@
>   nsresult rv = target->Clone(getter_AddRefs(*result));
> +#endif
I'd rather have this as:
nsresult rv;
#ifdef MOZ_WIDGET_ANDROID
blah blah
} else
#endif
{
  rv = target->Clone(getter_AddRefs(*result));
}
just so that if that default behavior ever changes, we don't miss the change on Android Attachment #631130 - Flags: review?(blassey.bugs) → review+ Created attachment 631426 [details] [diff] [review] alt patch v2 Thanks a ton for the explanation Jason! This patch takes parts of Geoff's DeleteDir WIP patch, the parts Jason warned me about. It also makes the change Brad asked for regarding the fallback. I tested this on my Nexus S. I loaded a page and killed Fennec quickly a few times. It created 2 Cache.Trash#### folders in the Android cache folder. I then let the app run for a few minutes and the Cache.Trash#### folders were removed by Fennec. Attachment #631130 - Attachment is obsolete: true Attachment #631426 - Flags: review?(jduell.mcbugs) Whiteboard: [has new patch][needs review jduell] Created attachment 631933 [details] [diff] [review] just moves NTFS comment to better place in code mfinkle: please make the comment change in this patch (just apply on top of your patch) before you land.
landed with the comment fix: Status: NEW → RESOLVED Last Resolved: 7 years ago Resolution: --- → FIXED Whiteboard: [has new patch][needs review jduell] Comment on attachment 631426 [details] [diff] [review] alt patch v2 [Approval Request Comment] Bug caused by (feature/regressing bug #): none User impact if declined: unused cache files can fill up the device Testing completed (on m-c, etc.): just landed on m-c... needs to bake Risk to taking this patch (and alternatives if risky): low to med risk, but bake time should give confidence. String or UUID changes made by this patch: none Attachment #631426 - Flags: approval-mozilla-beta? Attachment #631426 - Flags: approval-mozilla-aurora? Attachment #631426 - Flags: approval-mozilla-beta? Attachment #631426 - Flags: approval-mozilla-beta+ Attachment #631426 - Flags: approval-mozilla-aurora? Attachment #631426 - Flags: approval-mozilla-aurora+ (In reply to Alex Keybl [:akeybl] from comment #40) (I'll note that this approval is based upon the fact that we think that the worst regression this could cause would be a startup time regression) This landed on Aurora and Beta but was backed out because of a build error: /builds/slave/m-aurora-andrd-xul/build/netwerk/cache/nsDeleteDir.cpp:279: error: cannot convert 'nsGetterAddRefs<nsIFile>' to 'nsILocalFile**' for argument '3' to 'nsresult NS_NewNativeLocalFile_P(const nsACString_internal&, bool, nsILocalFile**)' status-firefox14: --- → affected status-firefox15: --- → affected Target Milestone: --- → mozilla16 The patch needed a tweak to handle the nsILocalFile -> nsIFile changes on aurora and beta: status-firefox14: affected → fixed status-firefox15: affected → fixed status-firefox16: --- → fixed Target Milestone: mozilla16 → ---
https://bugzilla.mozilla.org/show_bug.cgi?id=754575
CC-MAIN-2019-09
refinedweb
2,762
62.38
Mission3_Push_Button

In the two previous projects, the LEDs turn on and off automatically. Now, you will control the LED manually using a button.

What you need
The parts you will need are all included in the Maker kit.
- SwiftIO board
- Shield
- Button module
- 4-pin cable

Circuit
The shield is a modular circuit board that makes it easier to connect circuits. The pins on the two sides are the same as those on the SwiftIO board. Besides, it has many grove connectors, and you can use a 4-pin cable to connect to a pin instead of four jumper wires. Place the shield on top of your SwiftIO board. Make sure you connect them in the right direction. Connect the button module to pin D10 using a 4-pin cable. You can notice each cable has four colors of wires: the black one is usually for ground, and the red one is for power. Ok, the circuit is complete. It is so convenient, isn't it?

Example code
// Import the SwiftIO library to use everything in it.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard

// Initialize the red onboard led.
let led = DigitalOut(Id.RED)
// Initialize the input pin D10 on the board.
let button = DigitalIn(Id.D10)

while true {
    // Read the button value. If it is pressed, turn on the led.
    if button.read() {
        led.write(false)
    } else {
        led.write(true)
    }
    // Wait for 10ms and then continue to read values.
    sleep(ms: 10)
}

Background
Button. Since the button module in your kit uses a grove connector, you can directly build the circuit without worrying about a wrong connection. Also, there is a known issue with buttons: bounce. The button module uses a hardware debounce method, so you will not run into this issue.

let led = DigitalOut(Id.RED)
let button = DigitalIn(Id.D10)

Initialize the red onboard LED and the digital pin (D10) the button connects to. In the loop, you will use the if-else statement to check the button state.
It has the following form:
if condition {
    statement1
} else {
    statement2
}
The condition has two possible results, either true or false. If it is true, statement1 will be executed; if it is false, statement2 will be executed. Then let's look back at the code.
if button.read() {
    led.write(false)
} else {
    led.write(true)
}
The method read() allows you to get the input value. The return value is either true or false, so you can know the state of the button according to the value. If the button is pressed, the value will be true, so the pin outputs a low voltage to turn on the onboard LED. Once you release the button, the input value is false, and the LED will turn off.

Reference
DigitalOut - set whether the pin outputs a high or low voltage.
DigitalIn - read the input value from a digital pin.
SwiftIOBoard - find the corresponding pin id of the SwiftIO board.
https://docs.madmachine.io/tutorials/swiftio-maker-kit/mission3
CC-MAIN-2022-21
refinedweb
491
76.93
Past two weeks, I've been porting over some Flashcom & Flex 1.5 code to Flex 2 (still using FCS 1.5 vs. FMS 2). One nice addition I found in the ActionScript 3 SharedObject class is the client property. This little gem allows you to have events that are called by SharedObject.send be called on a proxy class instead. What this means is, if someone in a Flashcom application causes the remote SharedObject to be updated, and it fires an "onChatMessageUpdate" method, it'll actually get called on the client object if you set it to something. Since a lot of intrinsic classes are now "final", meaning you can't futz with 'em as easily as you could in AS2, this is a great way to incorporate it via composition.

var my_rso:SharedObject = SharedObject.getRemote("chat", nc.uri, false);
my_rso.owner = this;
my_rso.onChat = function(str) {
    this.owner.onChatMessage(str);
};
function onChatMessage(str) {
    textArea.text += str;
}

In the past, you'd use something like the above; proxying the message yourself, or even putting the logic inside the method closure if you were in a hurry. Now, you can forward it to an anonymous object, yourself, or even a class instance.

var my_rso:SharedObject = SharedObject.getRemote("chat", nc.uri, false);
my_rso.client = this; // onChatMessage needs to be a public function in this class
my_rso.client = {onChatMessage: function(str){
    //
}}; // anonymous function
my_rso.client = new ProxyClass(); // you can make a class specifically to handle events

David Simmons from Adobe told me it was on NetConnection as well. Just checked, and she's on NetStream too. In my tests, for classes, it only works on methods in the public namespace; protected & private don't work. I didn't test custom namespaces, nor do I know how to make it "see" them. Handy! Thanks Adobe. Hi, thx, I have been trying to attach a method to the SharedObject class, which always throws a null pointer exception. Aren't AS3 classes sealed? Thanx for the reminder, forgot about client….
arocker November 2nd, 2006 May I ask you about the send method? I can't get it to work. The problem is I always get a null pointer exception as mentioned above, scratch head …. How can I implement the callback? Can you please give me a hint? Thanx in advance arocker November 2nd, 2006 2 things. First, you need to set the client property to a class that has the methods. I do it to the class using the SharedObject, like this: users_so.client = this; Secondly, the methods on the class that the SharedObject is accessing must be public. If they are protected or private, you'll get an exception. JesterXL November 2nd, 2006 Hi Good to know you were successful in porting your client ActionScript 2.0 to 3.0 I've been trying to port simple connection code from 2.0 to 3.0 and I keep getting connection failed in 3.0 whilst it connects fine in 2.0 My 2.0 client-side code:
//==================
var VideoNC:NetConnection;
VideoNC = new NetConnection();
VideoNC.onStatus = function(info) {
    trace(info.code);
    if (info.code == "NetConnection.Connect.Success") {
        trace("connected");
    } else if (info.code == "NetConnection.Connect.Closed") {
    }
}
VideoNC.connect("rtmp://10.211.55.6/learningObjects/"); // This is my local FMS server on my parallels desktop
//==================
I get the connection success info code with this. My AS 3.0 client-side code:
//==================
import flash.net.NetConnection;
import flash.net.NetStream;
import flash.events.*;

var connection:NetConnection;
var stream:NetStream;
connection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
function netStatusHandler(event:NetStatusEvent):void {
    trace(event.info.code);
    if (event.info.code == "NetConnection.Connect.Success") {
        //connection.connect();
        trace("connect to");
    }
}
connection.connect("rtmp://10.211.55.6/learningObjects/");
//==================
I get connection Failed. My main.asc file in my FMS app folder is:
//==================
application.onAppStart = function () {
    trace("load new app");
}

application.onConnect = function (client) {
    trace("connect " + client.ip);
    application.acceptConnection(client);
    return true;
}

application.onDisconnect = function(client) {
    trace("disconnect " + client.ip);
}
//==================

Do you have any ideas? Thanks

Ismael October 9th, 2007
http://jessewarden.com/2006/07/netconnection-sharedobject-netstreams-client-property-in-actionscript-3.html
And if you want some background on Gevent, you might want to check out that series at the following links:

- Introduction to Gevent
- Gevent, Threads, and Benchmarks
- Gevent and Greenlets
- Greening the Python Standard Library with Gevent
- Building TCP Servers with Gevent
- Building Web Applications with Gevent's WSGI Server

ZeroMQ "devices"

One of the nice aspects of ZeroMQ is that it decouples the communication pattern from the connection pattern of your endpoints. Without ZeroMQ, you'd commonly have a "server" socket which binds to a port and receives requests, while "clients" connect to that socket and send requests. With ZeroMQ, it's perfectly acceptable to have the "server" connect to the "client" to receive requests (and in fact you can have multiple "servers" connected to the same "client" socket).

While this is handy, in some cases you may want to have both client and server relatively dynamic, with both connecting and neither binding to a particular port. For this use case, ZeroMQ provides so-called "devices" that bind a couple of sockets and perform forwarding operations between them to support common communication patterns. The "client" and "server" then both connect to the device. In this section, I'll cover the devices provided by the ZeroMQ library.

Queue device

The Queue device is responsible for mediating the REQ (request) / REP (reply) pattern. To set up a broker that forwards between the two, you need a tiny script:

import sys
import zmq

context = zmq.Context()
s1 = context.socket(zmq.ROUTER)
s2 = context.socket(zmq.DEALER)
s1.bind(sys.argv[1])
s2.bind(sys.argv[2])
zmq.device(zmq.QUEUE, s1, s2)  # start forwarding between the two sockets

Note that we've used a couple of new socket types above: zmq.ROUTER and zmq.DEALER. These are similar to zmq.REP and zmq.REQ, respectively, but they allow us to break the strict "request-response" lockstep of REQ/REP. The way they work is the following: the ROUTER socket prefixes each incoming message with an ID identifying the connection that sent the initial request, and the DEALER socket distributes outgoing messages among its connections. By using message IDs as above, we can have multiple messages 'in flight', being handled by various servers.
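The identity-based routing just described can be illustrated without any sockets at all. In this sketch (plain Python, no ZeroMQ; the client names and handler are made up for illustration), a broker tags each request with the sender's identity, services the requests out of order, and still delivers every reply to the right client:

```python
def broker_route(requests, handle):
    """Tag requests with a sender identity (as ROUTER does), service them
    out of order, and route each reply back by its identity prefix."""
    replies = {}
    # Service in reverse arrival order to show that the identity prefix,
    # not arrival order, determines where each reply ends up.
    for client_id, payload in reversed(requests):
        replies.setdefault(client_id, []).append(handle(payload))
    return replies

out = broker_route(
    [("alice", 1), ("bob", 2), ("alice", 3)],  # three requests "in flight"
    handle=lambda x: x * 10,
)
print(out)  # {'alice': [30, 10], 'bob': [20]}
```

Even though "alice" had two requests outstanding and they were handled last-in-first-out, the identity prefix ensures both replies reach her and none leak to "bob".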
If you'd rather not use the built-in ZeroMQ device, you can build your own fairly simply. The code below shows a device that uses a pair of threads to relay messages from one socket to another:

import sys
import time
import threading

import zmq

context = zmq.Context()
s1 = context.socket(zmq.ROUTER)
s2 = context.socket(zmq.DEALER)
s1.bind(sys.argv[1])
s2.bind(sys.argv[2])

def zeromq_relay(a, b):
    '''Copy data from zeromq socket a to zeromq socket b'''
    while True:
        msg = a.recv()
        more = a.getsockopt(zmq.RCVMORE)
        if more:
            b.send(msg, zmq.SNDMORE)
        else:
            b.send(msg)

def zmq_queue_device(s1, s2):
    t1 = threading.Thread(target=zeromq_relay, args=(s1, s2))
    t2 = threading.Thread(target=zeromq_relay, args=(s2, s1))
    t1.daemon = t2.daemon = True
    t1.start()
    t2.start()
    while True:
        time.sleep(1)

Forwarder device

The Forwarder device does for the PUB/SUB pattern what the Queue device does for REQ/REP, so this time we use PUB/SUB sockets in our device:

import sys
import zmq

context = zmq.Context()
s1 = context.socket(zmq.SUB)
s2 = context.socket(zmq.PUB)
s1.bind(sys.argv[1])
s2.bind(sys.argv[2])
s1.setsockopt(zmq.SUBSCRIBE, '')
zmq.device(zmq.FORWARDER, s1, s2)

Now we can connect one or more publishers to our device's "upstream" port (sys.argv[1]), and one or more subscribers to the device's "downstream" port (sys.argv[2]) to provide a PUB/SUB broker. Note in particular that we had to subscribe to all messages in the device code, since we don't know which messages our downstream sockets are interested in. If you'd rather filter messages that get forwarded, you can subscribe to some subset of messages. Unfortunately, I'm not aware of any way to forward only messages that the downstream clients have subscribed to using built-in ZeroMQ functionality. If we want to write our own FORWARDER device, it's even simpler than the QUEUE device since it only handles unidirectional communication.
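Before reusing zeromq_relay below, it's worth convincing ourselves that it preserves multipart message boundaries. This sketch substitutes minimal stand-in objects for real sockets (the constants mirror zmq.RCVMORE and zmq.SNDMORE; no broker or pyzmq install is needed):

```python
RCVMORE, SNDMORE = 13, 2  # stand-ins for zmq.RCVMORE / zmq.SNDMORE

class FakeSocket:
    """Minimal stand-in for a zmq socket: a queue of (frame, more) pairs."""
    def __init__(self, frames=()):
        self.frames = list(frames)  # frames waiting to be received
        self.sent = []              # (frame, flags) pairs recorded by send()

    def recv(self):
        frame, self._more = self.frames.pop(0)
        return frame

    def getsockopt(self, option):
        assert option == RCVMORE
        return self._more

    def send(self, msg, flags=0):
        self.sent.append((msg, flags))

def relay_one_message(a, b):
    """One pass of the zeromq_relay loop body, frame by frame."""
    while True:
        msg = a.recv()
        more = a.getsockopt(RCVMORE)
        b.send(msg, SNDMORE if more else 0)
        if not more:
            return

# A three-frame message: identity, empty delimiter, payload.
src = FakeSocket([(b"id", 1), (b"", 1), (b"payload", 0)])
dst = FakeSocket()
relay_one_message(src, dst)
print(dst.sent)  # [(b'id', 2), (b'', 2), (b'payload', 0)]
```

The first two frames are forwarded with the SNDMORE flag and the last without it, so the receiving side sees exactly one three-frame message rather than three separate ones.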
Assuming we have the zeromq_relay function as defined above, our FORWARDER device is just the following (a single relay suffices, since the communication is unidirectional):

def zmq_forwarder_device(upstream, downstream):
    zeromq_relay(upstream, downstream)

Streaming device

Similar to the FORWARDER device, the STREAMER device just sends upstream packets downstream, but in this case in support of the PUSH/PULL pattern rather than PUB/SUB. To make a STREAMER broker, then, we just need the following code:

import sys
import zmq

context = zmq.Context()
s1 = context.socket(zmq.PULL)
s2 = context.socket(zmq.PUSH)
s1.bind(sys.argv[1])
s2.bind(sys.argv[2])
zmq.device(zmq.STREAMER, s1, s2)

And once again, if we wanted to create the device manually, it's just a relay:

def zmq_streaming_device(upstream, downstream):
    zeromq_relay(upstream, downstream)

Integration with gevent

You may have noticed that the device function doesn't return. If you want to create multiple devices within your Python program, then, you'll need to wrap the devices in threads. Another approach that you might use if you, like me, prefer the lightweight threading and asynchronous I/O of gevent, is to use the gevent-zeromq package available from PyPI:

$ pip install gevent-zeromq

Now we can use a 'green' version of ZeroMQ by just importing from the gevent-zeromq wrapper in our scripts. A "pusher" script would then look like this:

import sys
import time

from gevent_zeromq import zmq

context = zmq.Context.instance()
sock = context.socket(zmq.PUSH)
sock.connect(sys.argv[1])

while True:
    sock.send('hello')  # payload is illustrative
    time.sleep(1)

Simple, right? The reason I throw this into this seemingly-unrelated post is that if you want to use gevent, you can't use the built-in devices. This is because the built-in devices block, and they block in the ZeroMQ C library, not in Python where they could be "greened". So if you want a device with gevent, you'll have to write your own (which is, after all, pretty straightforward).

Conclusion

ZeroMQ devices provide a handy way to stitch together complex routing topologies and allow you to decouple the various components of your architecture.
Though the built-in devices are quite simple, they provide insight into how you could build more complex devices yourself to fulfill the role of "broker" in a distributed architecture. I'd be interested in hearing from you how you are using devices (or whether you find them useful at all) in ZeroMQ programming. Do you use the built-in devices? Any devices you write yourself? Do you prefer the multithreaded device approach I've used here, or using the zmq.Poller object? Let me know in the comments below!
https://dzone.com/articles/using-zeromq-devices-support
In a GridDataBoundGrid, the grid just displays the rows as they are presented to it by the underlying datasource. You cannot easily re-order rows (and maintain this order) in a sorted DataView, because the DataView wants to immediately re-sort things as soon as a change messes up the sort order. So, if you want to move rows, you currently have to do so by moving them in the underlying datasource.

One way of doing this is to add an extra DataColumn to your DataTable that holds a sortKey reflecting where you want the rows to be, and use that column to always maintain a particular sort order. You don't display this column (you can hide it using GridBoundColumn for the columns you want to see, or you can just hide it in the grid as the sample does). When you initially set the DataSource on the grid, you sort the DataTable on this column using its DefaultView.Sort property. Then, when you want to move rows, all you have to do is move the sortKey values and the rows will rearrange themselves (as they must maintain the sort order).

Once you have some kind of support for maintaining arbitrary row orders in GridDataBoundGrid, you can handle the mouse events to catch the MouseDown on the row header and manage the UI part of moving the rows. The attached sample shows how you can implement a row-moving routine that allows you to mouse down on a row header and then just drag it (i.e. single click on the row header, then drag). It uses cursors to give feedback during the move.
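The core of the routine below is nothing grid-specific: the dragged row is bubbled to its target position by a chain of adjacent swaps. Here is that loop reduced to a few lines of Python (the list of letters stands in for the rows; in the real grid, each swap exchanges sortKey values and the sorted DataView does the physical reordering):

```python
def move_row(rows, source, target):
    """Bubble the row at index `source` to index `target` with
    adjacent swaps, mirroring the sortKey exchanges in OnMouseUp."""
    step = 1 if target > source else -1  # same role as `dir` in the C# code
    i = source
    while i != target:
        rows[i], rows[i + step] = rows[i + step], rows[i]
        i += step
    return rows

print(move_row(list("abcde"), 1, 3))  # ['a', 'c', 'd', 'b', 'e']
print(move_row(list("abcde"), 3, 0))  # ['d', 'a', 'b', 'c', 'e']
```

Because only adjacent pairs ever swap, the relative order of all other rows is preserved, which is exactly what a drag-and-drop row move should do.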
Steps to drag and drop a row:

1. Move the mouse pointer to any row header.
2. Single click on the row header (now you see an OLE drag-and-drop mouse cursor).
3. Move the row to the target.

C#

public class DragRowHeaderGrid : GridDataBoundGrid
{
    public DragRowHeaderGrid()
    {
        this.AllowDrop = false;
        this.AllowSelection &= ~GridSelectionFlags.Row;
        this.SortBehavior = GridSortBehavior.SingleClick;
    }

    protected override void OnMouseUp(System.Windows.Forms.MouseEventArgs e)
    {
        if (inRowMove)
        {
            if (targetRow > 0 && targetCol == 0 && sourceRow > 0)
            {
                int dir = sourceRow > targetRow ? -1 : 1;
                int gridRow = this.CurrentCell.RowIndex;
                int tableRow = this.Binder.RowIndexToPosition(gridRow);
                CurrencyManager cm = (CurrencyManager)this.BindingContext[this.DataSource, this.DataMember];
                this.BeginUpdate();
                for (int i = sourceRow; i != targetRow; i += dir)
                {
                    if (tableRow + dir < cm.Count && tableRow < cm.Count && tableRow + dir > -1 && tableRow > -1)
                    {
                        DataRowView drv1 = (DataRowView)cm.List[tableRow + dir];
                        int val1 = (int)drv1.Row["sortKey"];
                        DataRowView drv2 = (DataRowView)cm.List[tableRow];
                        int val2 = (int)drv2.Row["sortKey"];
                        drv1.Row["sortKey"] = val2;
                        drv2.Row["sortKey"] = val1;
                        tableRow += dir;
                    }
                }
                this.CurrentCell.MoveTo(targetRow, 1);
                this.EndUpdate();
                this.Refresh();
            }
            inRowMove = false;
        }
        base.OnMouseUp(e);
    }
}

VB

Public Class DragRowHeaderGrid : Inherits GridDataBoundGrid

    Public Sub New()
        Me.AllowDrop = False
        Me.AllowSelection = Me.AllowSelection And Not GridSelectionFlags.Row
        Me.SortBehavior = GridSortBehavior.SingleClick
    End Sub

    Protected Overrides Sub OnMouseUp(ByVal e As System.Windows.Forms.MouseEventArgs)
        If inRowMove Then
            If targetRow > 0 AndAlso targetCol = 0 AndAlso sourceRow > 0 Then
                Dim dir As Integer
                If sourceRow > targetRow Then
                    dir = -1
                Else
                    dir = 1
                End If
                Dim gridRow As Integer = Me.CurrentCell.RowIndex
                Dim tableRow As Integer = Me.Binder.RowIndexToPosition(gridRow)
                Dim cm As CurrencyManager = CType(Me.BindingContext(Me.DataSource, Me.DataMember), CurrencyManager)
                Me.BeginUpdate()
                Dim i As Integer = sourceRow
                Do While i <> targetRow
                    If tableRow + dir < cm.Count AndAlso tableRow < cm.Count AndAlso tableRow + dir > -1 AndAlso tableRow > -1 Then
                        Dim drv1 As DataRowView = CType(cm.List(tableRow + dir), DataRowView)
                        Dim val1 As Integer = CInt(drv1.Row("sortKey"))
                        Dim drv2 As DataRowView = CType(cm.List(tableRow), DataRowView)
                        Dim val2 As Integer = CInt(drv2.Row("sortKey"))
                        drv1.Row("sortKey") = val2
                        drv2.Row("sortKey") = val1
                        tableRow += dir
                    End If
                    i += dir
                Loop
                Me.CurrentCell.MoveTo(targetRow, 1)
                Me.EndUpdate()
                Me.Refresh()
            End If
            inRowMove = False
        End If
        MyBase.OnMouseUp(e)
    End Sub
End Class

Here is the link with both CS and VB samples:
https://www.syncfusion.com/kb/4178/how-to-drag-and-drop-rows-or-moving-rows-by-single-click-on-the-row-header-in-griddataboundgrid
Dear all,

I've created a TimeSig datatype as a container around a "Vector Double" data type (see simple code below) and subsequently started to instantiate Num & Eq to be able to perform operations on it. Additionally I want to store info like an index, time information and eventually an inheritance log (the log is not yet in there). As I will in the end need up to 10 different datatypes, each with slightly different content (time signal, single value, distribution, ...), I ask myself how I could define a super data-type with sub-data-types that inherit from it, but then also overload certain functions (like you would do in OO). What is the best way in Haskell to achieve this? (I'm unsure whether Haskell classes are what I'm looking for.)

Cheers
Phil

########## Code below

import qualified Data.Vector.Unboxed as V

data TimeSig = TimeSig Int Double (V.Vector Double) -- signal index, timestep, data

getVect :: TimeSig -> (V.Vector Double)
getVect (TimeSig idx dt vect) = vect

getIdx :: TimeSig -> Int
getIdx (TimeSig idx dt vect) = idx

getdt :: TimeSig -> Double
getdt (TimeSig idx dt vect) = dt

pzipWith :: (Double -> Double -> Double) -> TimeSig -> TimeSig -> TimeSig
pzipWith f p1 p2 = TimeSig idx dt vect
  where vect = V.zipWith f (getVect p1) (getVect p2)
        idx = getIdx p1
        dt = getdt p1

pmap :: (Double -> Double) -> TimeSig -> TimeSig
pmap f p = TimeSig (getIdx p) (getdt p) (V.map f (getVect p))

instance Num TimeSig where
  (+) p1 p2 = pzipWith (+) p1 p2
  (-) p1 p2 = pzipWith (-) p1 p2
  negate p1 = pmap negate p1
  abs p1 = pmap abs p1
  (*) p1 p2 = pzipWith (*) p1 p2

instance Eq TimeSig where
  (==) p1 p2 = (==) (getVect p1) (getVect p2)

instance Show TimeSig where
  show (TimeSig idx dt vect) = "TimeSignal Nr: " ++ show idx ++ " dt: " ++ show dt ++ " val:" ++ show vect

main = do
  let p = TimeSig 5 0.1 (V.fromList [0..10::Double])
  putStrLn (show p)
  putStrLn (show (p+p))

--
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.
_______________________________________________ Haskell-Cafe mailing list [hidden email]
http://haskell.1045720.n5.nabble.com/Data-Type-Inheritance-ala-OO-Inheritence-howto-best-in-Haskell-td4494800.html
Is it possible to check-in primary file as attachment?890683 Sep 23, 2013 5:34 AM Hi I got a complex requirement, I'm not sure whether we can achieve this functionality in UCM. Requirement: When i check-in any content using DIS(Desktop Integration Suite) or from content console. The checked-in content should not save as primary file, rather it should be saved as attachment for a html file(HTML file need to be generated dynamically, whenever content is checked-in). So, here dynamically generated HTML file should be a "Primary File" and the checked-in content should go as an "Attachment" for that HTML file. Any help would be helpful. I didn't find any useful content to implement this functionality. Regards Raj 1. Re: Is it possible to check-in primary file as attachment?Jiri.Machotka-Oracle Sep 23, 2013 7:57 AM (in response to 890683) Yes, that's doable via a customization - a filter. It will grab a checked-in file, generate the html, replace it as the Primary and finally attach the original file. 2. Re: Is it possible to check-in primary file as attachment?890683 Sep 23, 2013 8:39 AM (in response to Jiri.Machotka-Oracle) Thank you. Could you provide some code examples generating HTML or some file dynamically and setting it as primary file. 3. Re: Is it possible to check-in primary file as attachment?Jiri.Machotka-Oracle Sep 23, 2013 8:45 AM (in response to 890683)1 person found this helpful Nope, it's your requirement, so you should know what should be in the html. A sample could be a 'Hello-World" Java program that writes a (html) file on a disk - setting it as a primary file simply means redirecting few internal variables during checkin to that file. I think you should start with a filter. Here I can give you a link to a sample - see How-To Component Samples for Oracle UCM 11g | Bex Huff 4. Re: Is it possible to check-in primary file as attachment?890683 Sep 23, 2013 9:46 AM (in response to Jiri.Machotka-Oracle) Ok sure.. 
Have a small doubt. Long ago I used RIDC in my application to check in or check out content from UCM. I used binder.addFile(String, File) to check in primary files and attachments, and it works fine. Now I have created a Filter class as you suggested, which implements FilterImplementor:

public class CheckinFilter implements FilterImplementor {
    public CheckinFilter() {
        super();
    }

    public int doFilter(Workspace workspace, DataBinder dataBinder, ExecutionContext executionContext) {
    }
}

However, this DataBinder doesn't have addFile. So how can I check in a primary file and attachments using the binder object if the addFile method is missing? The only difference is that the RIDC code in my portal application imports oracle.stellent.ridc.model.DataBinder, while the Filter class imports intradoc.data.DataBinder. How can I add primary files and attachments from the doFilter method?

5. Re: Is it possible to check-in primary file as attachment? Jiri.Machotka-Oracle Sep 23, 2013 10:20 AM (in response to 890683)

I always recommend performing the use case from the GUI and checking what services are called (by turning on requestaudit in server-wide tracing). Basically, you will call the appropriate service from your code. This would work even from RIDC, SOAP/WS, etc.

6. Re: Is it possible to check-in primary file as attachment? William Phelps Sep 23, 2013 1:34 PM (in response to Jiri.Machotka-Oracle)

I'm going to ask for clarification -- define "attachment". Is this a document that normally would be connected to another document using "related links" functionality, or is this "attachment" idea something totally different?

7. Re: Is it possible to check-in primary file as attachment? Jiri.Machotka-Oracle Sep 23, 2013 3:06 PM (in response to William Phelps)

I know this request for clarification is most likely not a reaction to my post (though it is placed as such). Either way, the "attachment" story is analyzed in this thread: Attach more than 2 files on 1 doc

8.
Re: Is it possible to check-in primary file as attachment?Jonathan Hult Sep 24, 2013 1:21 AM (in response to 890683)1 person found this helpful These blog posts have some examples and may be of some assistance. Jonathan
https://community.oracle.com/thread/2584781
Paul Hammant's Blog: Using XStream to process standardized XML documents

Background (being delisted from Javablogs)

PaulHammant.com was thrown off Javablogs.com for having one too many non-Java postings. Mike Cannon-Brookes forced me to sink an 'Irish car bomb' (with about 20 others) on Tuesday in a matter of seconds and provide a second RSS feed to get listed again. I earn about 50c a month from Google ads. It's a pretty important supplement to a paltry ThoughtWorks wage... Given I stupidly hand-edit this blog, I thought I'd try a little automation. That would mean something that could create a second RSS feed containing only Java-related blog entries. The most recent work on XStream allows XML attributes to be used as well as elements for encoding. RSS does not leverage a lot of attributes (it's quite element-normal), but there is one key one - <rss version="x.y"> - that needs the latest changes to XStream to function.

POJOs for RSS

Here are the core Java POJO classes that cover the major elements of RSS 0.91:

public class Rss {
    Version version;
    Channel channel;
}

public class Channel {
    String title, link, description, language, webMaster, pubDate;
    List items = new ArrayList();
}

public class Item {
    String title, link, description;
}

The version attribute on the root <rss> element needs a special class...

public class Version {
    private String value;
    public Version(String value) { this.value = value; }
    public String getValue() { return value; }
}

... and a special converter to link it up:

import com.thoughtworks.xstream.converters.basic.AbstractSingleValueConverter;

public class VersionConverter extends AbstractSingleValueConverter {
    public boolean canConvert(Class type) { return type.equals(Version.class); }
    public String toString(Object obj) { return obj == null ? null : ((Version) obj).getValue(); }
    public Object fromString(String str) { return new Version(str); }
}

And here's the final helper class to do the conversion between an RSS document and an object model:

public class RssConverter {
    private XStream xs;
    public RssConverter(XStream xs) {
        this.xs = xs;
        xs.alias("rss", Rss.class);
        xs.alias("item", Item.class);
        xs.addImplicitCollection(Channel.class, "items");
        xs.useAttributeFor("version", Version.class);
        xs.registerConverter(new VersionConverter());
    }
    public RssConverter() { this(new XStream(new DomDriver())); }
    public Rss fromXml(FileInputStream fileInputStream) { return (Rss) xs.fromXML(fileInputStream); }
    public void toXml(Rss rss, OutputStream stream) { xs.toXML(rss, stream); }
}

And using it...

FileInputStream fis = new FileInputStream("../blog/index.xml");
RssConverter converter = new RssConverter();
Rss rss = converter.fromXml(fis);
...
converter.toXml(rss, new FileOutputStream("../blog/java-index.xml"));

I'll speak to Joe, Mauro and Jörg about a comprehensive RSS class model in the XStream project. If anyone is interested in contributing (unit tested) code then send me an email.

Other XML forms

Actually, I'd love to see as many 'standard' XML forms tackled as possible. XStream should be able to cope with most of the simple XML-based standards. For example, RSS's competitor Atom looks like quite an easy job for XStream. Others like ebXML, XSL, XUL etc. do not look so encodable.

Published May 21st, 2006
https://paulhammant.com/blog/xstream-rss.html
Summary: Make page_len index actually useful so Special:Longpages and Special:Shortpages will be efficient

Product: MediaWiki
Version: 1.18-svn
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Priority: Normal
Component: Database
AssignedTo: wikibugs-l@lists.wikimedia.org
ReportedBy: roan.katt...@gmail.com

Presumably in an attempt to optimize Special:(Long|Short)pages, a page_len index was introduced in version 1.5 (according to a comment I found). However, because the query these special pages do is something like

SELECT stuff FROM page WHERE page_namespace IN (content namespaces) AND page_is_redirect=0 ORDER BY page_len (ASC|DESC)

an index on page_len alone doesn't suffice. For Longpages it's arguably good enough, since the WHERE clause is gonna be true for almost all of the 50/100/500/whatever longest pages. For Shortpages, however, the opposite is true. In order to fix this properly, we'd have to extend the index on (page_len) to cover (page_len, page_is_redirect, page_namespace) so the WHERE clause is fully indexed. Any existing legitimate uses of the page_len index (are there any?) will continue to work because page_len will still be the first field in the index. I'll attach a patch shortly.

--
Configure bugmail:
------- You are receiving this mail because: -------
You are the assignee for the bug.
You are on the CC list for the bug.

_______________________________________________
Wikibugs-l mailing list
Wikibugs-l@lists.wikimedia.org
https://www.mail-archive.com/wikibugs-l@lists.wikimedia.org/msg55422.html
Java ArrayList internally uses a backing array object to store the list's elements. All ArrayList methods operate on this array and its elements. This is the reason why an arraylist is an ordered collection and provides index-based access to elements.

1. How do ArrayLists grow in size?

Arrays are a fixed-size collection, whereas arraylists grow dynamically as soon as the backing array is full and more elements are added to the list. This is done in two steps:

- Create a new backing array with more size than the previous array.
- Copy all elements from the old array to this new array.

So basically, before adding any new element with the add() method, ArrayList performs a check whether there is any space left in the array, using the ensureCapacity() method. If space is available then the element is added; else a new backing array is created first.

2. ArrayList default initial size

Generally the initial size should be given in the ArrayList constructor, like new ArrayList(5). But if we do not pass any size, the default size is used, which is 10.

/**
 * Default initial capacity.
 */
private static final int DEFAULT_CAPACITY = 10;

It is advisable that if you know the correct or estimated number of elements to be stored in the list, you always pass the size in the list constructor. It eliminates the need to resize the arraylist and thus improves performance.

3. Increase arraylist capacity - ArrayList ensureCapacity() method

Normally, we do not need to do anything to increase the size, because the arraylist automatically manages its size for us and increases it when needed. But, to avoid repeated resize operations, we can ensure the desired capacity of an arraylist after it has been created using the ensureCapacity() method. This method increases the capacity of the ArrayList instance, if necessary, to ensure that it can hold at least the number of elements specified by the minimum capacity argument to the method.
It internally uses the grow() method. In the backend, this method simply creates a new array with the desired size.

4. ArrayList ensureCapacity() example

Java program using the ensureCapacity() method to increase the capacity of an arraylist after its initialization.

import java.util.ArrayList;

public class ArrayListExample {
    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>(2);

        list.add("A");
        list.add("B");
        System.out.println(list);

        list.ensureCapacity(20);

        list.add("C");
        list.add("D");
        list.add("E");
        System.out.println(list);
    }
}

Program output:

[A, B]
[A, B, C, D, E]

Happy Learning !!

Read More: A Guide to Java ArrayList, ArrayList Java Docs

Feedback, Discussion and Comments

Sachin batra

Hi Lokesh, I have one doubt related to the growing size of ArrayList. For example, we have 10 elements in the current arraylist and we are going to add one more element. The initial default size is 10. Now, by the statements below from the actual implementation of ArrayList, the new capacity will be 15:

int oldCapacity = elementData.length; // 10
int newCapacity = oldCapacity + (oldCapacity >> 1); // new value will be 15

Now the size has increased and we can add more elements. In such cases the capacity will be recalculated. My doubt is: assume we have 16000 elements in the arrayList, and the capacity is also 16000. Now I want to add one more element. As per the above, the new capacity will be calculated and it will become about 24000. For one element we have wasted roughly 8000 slots of memory allocation. Is this the way ArrayList works, or is there any other implementation provided by Java to overcome such an issue? Please provide your input on the same.
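To put numbers on the question in the comment above, the JDK's growth rule (newCapacity = oldCapacity + (oldCapacity >> 1), i.e. roughly 1.5x) is easy to simulate. The sketch below uses Python purely as a calculator for the Java formula:

```python
def growth(n, initial=10):
    """Return (final capacity, number of reallocations) after appending
    n elements one at a time, using the JDK's 1.5x grow() rule."""
    capacity, resizes, size = initial, 0, 0
    for _ in range(n):
        if size == capacity:
            capacity = capacity + (capacity >> 1)  # Java: oldCapacity + (oldCapacity >> 1)
            resizes += 1
        size += 1
    return capacity, resizes

print(growth(11))     # (15, 1): the 11th add triggers a single resize to 15
print(growth(16001))  # (21079, 19)
```

Note that the resize to roughly 1.5x the size happens when the array fills, so a list holding 16001 elements last grew from capacity 14053 to 21079: about 5,000 spare slots rather than 8,000, and only 19 reallocations in total. Passing the expected size to the constructor (or calling ensureCapacity) removes all of them.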
https://howtodoinjava.com/java/collections/arraylist/arraylist-ensurecapacity-method/
I got a few queries after releasing XamlPadX v3 regarding creating addins for the tool. So here is a small post on it. :)

Create a CustomControlLibrary project. Add a reference to the AddInView dll present in the XamlPadX folder, which is located in the Program Files folder. Also add a reference to the 3.5 System.AddIn dll. Now, in the xaml file, change the UserControl tag into AddInAddInView:

<f:AddInAddInView x:Class="customaddin.UserControl1" xmlns="" xmlns:x="" xmlns:
    <Grid>
        <Rectangle Fill="red" Height="100" Width="100"/>
    </Grid>
</f:AddInAddInView>

The namespace is exposed as in the AddInView.dll. :) Now for the code-behind:

[AddIn("MyAddin", Description="This is my addin", Publisher="MS", Version="1.0.0.0")]
public partial class UserControl1 : AddInViews.AddInAddInView

The UserControl extends from the AddInAddInView class. Also, adding an AddIn attribute makes it readable in the addin dialog. The AddIn attribute takes (AddinName, Description, Publisher, Version). You will also need to override 2 functions: SendSignal(String) and Initialize(AddInHostView).

SendSignal just passes a string ("Displaying Addin: Name") from the host to the addin. In the colorPallete addin, the function opens a message dialog. The function is a demo of host-addin communication.

The Initialize function passes the host object to the addin. The host object provides access to 3 members:

ChangeAppBackground - changes the background of the XamlPadX app (you can see this working if you click the last button in the colorPallete addin)
TextBoxCaretIndex - caret index in the editing textbox of the XamlPadX tool. This is helpful if you want to insert some text.
textBoxContents - content of the editing textbox in the tool. This is a get/set property, so you can set the entire content of the textbox.
The final part is the placement of the dll. By default the XamlPadX files are placed in ProgramFiles/XamlPadX, where there is an Addins folder. Create a folder for your addin and place your dll in this folder (do not place the AddInView dll; it already exists). Restart XamlPadX and you should see your addin in the Addin dialog.

Attached is a sample addin project.

Great !!! Thanks a lot for the post... I was missing the AddInAddInView - I was creating a UserControl instead... Silly me :) I finally managed to create a dummy plugin for XamlPadX WOWOWOW Thanks a lot

Hi, I am working on a plugin for XamlPadX. Yet I am having a problem when I call AdornerLayer.GetAdornerLayer on one of my elements.... Can you drop me an email at marlongrech@gmail.com so that we can discuss this... P.S. the plugin will be downloadable by everyone, and I will make sure to post a link to the plugin on your blog... Looking forward to hearing from you :) Regards

Hello everyone, I wrote a blog post on how to get the Adorner Layer while developing plugins (that use System.AddIn like XamlPadX does).... If you are using AdornerLayer.GetAdornerLayer, this post should make your life easier...
http://blogs.msdn.com/llobo/archive/2007/12/26/creating-addins-for-xamlpadx.aspx
import "code.soquee.net/migration" Package migration contains functions for generating and finding PostgreSQL database migrations. migration.go setup.go Generator returns a function that creates migration files at the given base path. func LastRun(ctx context.Context, migrationsTable string, fs vfs.FileSystem, tx *sql.Tx) (ident, name string, err error) LastRun returns the last migration directory by lexical order that exists in the database and on disk. Setup creates a migrations table with the given name. RunStatus is a type that indicates if a migration has been run, not run, or if we can't determine the status. Valid RunStatus values. For more information see RunStatus. WalkFunc is the type of the function called for each file or directory visited by a Walker. type Walker func(fs vfs.FileSystem, f WalkFunc) error Walker is a function that can be used to walk a filesystem and calls WalkFunc for each migration. NewWalker queries the database for migration status information and returns a function that walks the migrations it finds on the filesystem in lexical order (mostly, keep reading) and calls a function for each discovered migration, passing in its name, status, and file information. If a migration exists in the database but not on the filesystem, info will be nil and f will be called for it after the migrations that exist on the filesystem. No particular order is guaranteed for calls to f for migrations that do not exist on the filesystem. If NewWalker returns an error and a non-nil function, the function may still be used to walk the migrations on the filesystem but the status information may be wrong since the DB may not have been queried successfully. Package migration imports 11 packages (graph). Updated 2019-12-16.
https://godoc.org/code.soquee.net/migration
Am 29.09.2010 20:01, schrieb Daniel Fischer: >. Right, parsec2 or parsec-2.1.0.1 still does so. (parsec-3 behaves differently wrt error messages.) Try "ghc-pkg hide parsec" so that parsec-2.1.0.1 will be taken: import Text.ParserCombinators.Parsec import Control.Monad infixl 1 << (<<) :: Monad m => m a -> m b -> m a (<<) = liftM2 const block p = char '{' >> p << char '}' parser = block (many (digit)) main = parseTest parser "{123a}" *Main> main Loading package parsec-2.1.0.1 ... linking ... done. parse error at (line 1, column 5): unexpected "a" expecting digit or "}" >>> (1) What is the reason for this behaviour? >>> (2) Is there another combinator that behaves as I would like? >>> (3) Otherwise, how do I write one myself? ask derek.a.elkins at gmail.com (CCed) Cheers Christian >> >>. > > manyTill p end = scan > where > scan = do{ end; return [] } > <|> > do{ x <- p; xs <- scan; return (x:xs) } > > I'm not sure, but I suspect it's less efficient. > > Perhaps > > manyTill' p end = scan [] > where > scan acc = do { end; return (reverse acc) } > <|> do { x <- p; scan (x:acc) } > > is more efficient (depends on Parsec's bind which is more efficient), you > could test. > >> Also, this style is less modular, as I have to mention the >> terminator in two places. > > That's not the main problem. `manyTill' consumes the ending token, so > > block (manyTill whatever (char '}')) needs two '}' to succeed. > You would need > > block (manyTill digit (lookAhead (char '}')) > > to replicate the behaviour of block (many digit). > >> Is there a non-greedy variant of 'many' so >> that modularity gets restored and efficiency is not lost? >> >> Cheers >> Ben
http://www.haskell.org/pipermail/haskell-cafe/2010-September/084207.html
CC-MAIN-2013-20
refinedweb
268
76.32
Parse Natural Language Dates with Dateparser

We recently released Dateparser 0.3.1 with support for Belarusian and Indonesian, as well as the Jalali calendar used in Iran and Afghanistan. With this in mind, we’re taking the opportunity to introduce and demonstrate the features of Dateparser. Dateparser is an open source library we created to parse dates written using natural language into Python. It translates specific dates like ‘5:47pm 29th of December, 2015’ or ‘9:22am on 15/05/2015’, and even relative times like ‘10 minutes ago’, into Python datetime objects. From there, it’s simple to convert the datetime object into any format you like.

Who benefits from Dateparser

When scraping dates from the web, you need them in a structured format so you can easily search, sort, and compare them. Dates written in natural language aren’t suitable for this. For example, the 24th of December shows up first if you sort the 25th of November and the 24th of December alphanumerically. Dateparser solves this by taking the natural language date and parsing it into a datetime object. A bonus perk of Dateparser is that you don’t need to worry about translation. It supports a range of languages including English, French, Spanish, Russian, and Chinese. Better yet, Dateparser autodetects languages, so you don’t need to write any additional code. This makes Dateparser especially useful when you want to scrape data from websites in multiple languages, such as international job boards or real estate listings, without necessarily knowing what language the data you’re scraping is in. Think Indeed.

Why we didn’t use similar libraries

Dateparser was developed while we were working on a broad crawl project that involved scraping many forums and blogs. The websites were written in numerous languages and we needed to parse dates in a consistent format. None of the existing solutions met our needs.
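The sorting problem described above is easy to reproduce with nothing but the standard library — alphanumeric order puts December before November, and parsing into datetime objects restores chronological order:

```python
from datetime import datetime

# "24 December 2015" sorts before "25 November 2015" as a string,
# because '4' < '5' at the third character. Parsing fixes the order.
dates = ["25 November 2015", "24 December 2015"]

print(sorted(dates))  # alphanumeric: December first -- wrong

parsed = sorted(dates, key=lambda s: datetime.strptime(s, "%d %B %Y"))
print(parsed)  # chronological: November first
```

This is the structured-format point in miniature: once the strings become datetime objects, sorting and comparing work as expected.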
So we created Dateparser as a simple set of functions that sanitised the input and passed it to the dateutil library, using parserinfo objects to work with other languages. This process worked well at first. But as the crawling project matured we ran into problems with short words, strings containing several languages, and a host of other issues. We decided to move away from parserinfo objects and handle language translation on our own. With the help of contributors from the Scrapy community, we significantly improved Dateparser’s language detection feature and made it easier to add languages by using YAML to store the translations. Dateparser is simple to use and highly extendable. We have successfully used it to extract dates on over 100 million web pages. It’s well tested and robust.

Peeking under the hood

You can install Dateparser via pip. Import the library and use the dateparser.parse method to try it out:

$ pip install dateparser
…
$ python
>>> import dateparser
>>> dateparser.parse('1 week and one day ago')
datetime.datetime(2015, 9, 27, 0, 17, 59, 738361)

Contributing new languages

Supporting new languages is simple. If yours is missing and you’d like to contribute, send us a pull request after updating the languages.yaml file. Here is what the definitions for French look like:

name: French
skip: ["le", "environ", "et", "à", "er"]
monday:
  - Lundi
...
november:
  - Novembre
  - Nov
december:
  - Décembre
  - Déc
...
year:
  - an
  - année
  - années
...
simplifications:
  - avant-hier: 2 jour
  - hier: 1 jour
  - aujourd'hui: 0 jours
  - d'une: 1
  - un: 1
  - une: 1
  - (\d+)\sh\s(\d+)\smin: \1h\2m
  - (\d+)h(\d+)m?: \1:\2
  - moins\s(?:de\s)?(\d+)(\s?(?:[smh]|minute|seconde|heure)): \1\2

Language detection

When parsing dates, you don’t need to set the language explicitly.
Dateparser will detect it for you:

$ python
>>> import dateparser
>>> dateparser.parse('aujourd\'hui') # French for 'today'
datetime.datetime(2015, 10, 13, 12, 3, 19, 262752)
>>> dateparser.parse('il ya 2 jours') # French for '2 days ago'
datetime.datetime(2015, 10, 11, 12, 3, 19, 262752)

See the documentation for more examples.

How we measure up

Dateutil is the most popular Python library to parse dates. Dateparser actually uses dateutil.parser as a base, and builds its features on top of it. However, Dateutil was designed for formatted dates, e.g. ‘22-10-15’, rather than natural language dates such as ‘10pm yesterday’. Parsedatetime is closer to Dateparser in that it also parses natural language dates. One advantage of Parsedatetime is that it supports future relative dates like ‘tomorrow’. However, while Parsedatetime also supports non-English languages, you must specify the locale manually, whereas Dateparser detects the language for you. Parsedatetime also has more boilerplate code; compare:

import parsedatetime
from datetime import datetime
from time import mktime

cal = parsedatetime.Calendar()
time_struct, parse_status = cal.parse('today')
datetime.fromtimestamp(mktime(time_struct))

to:

import dateparser
dateparser.parse('today')

If you are dealing with multiple languages and want a simple API with no unnecessary boilerplate, then Dateparser is likely a good fit for your needs.

Give Dateparser a go

Dateparser is an extensible, easy-to-use, and effective method for parsing international dates from websites. Its unique features arose from the specific problems we needed to address. Namely, parsing dates from websites whose language we did not know in advance. The library has been well tested against a large number of sites in over 20 languages and we continue to refine and improve it. Contributors are most welcome, so if you’re interested, please don’t hesitate to get involved!

3 thoughts on “Parse Natural Language Dates with Dateparser”

Thanks for developing such a great library.
while scraping date from some forums, they come with text mixed with date(any format). Do we have any possibility to extract date and convert it into required format.? hey Shekar, maybe DateFinder is what you are looking for: It finds dates on a string and then uses DateParser to parse them into datetime objects. Thanks for your effort on developing such library. I try to parse “next Friday” or “next week” or ” this Sunday”, but failed is there any possibility to fix it or work around solution thank you in advance
https://blog.scrapinghub.com/2015/11/09/parse-natural-language-dates-with-dateparser/
CC-MAIN-2017-43
refinedweb
1,007
64.81
You don't need to know this. It just makes life easier, and of course helps when reading other people's work. Often it turns out that you need to refer to something indirectly through a string of properties, such as "George's mother's assistant's home's address' zipcode". This is traversal of the graph. In the N3 we have dealt with up till now, this would look something like

[is con:zipcode of [ is con:address of [ is con:home of [ is off:assistant of [ is rel:mother of :George]]]]]

which reads "that which is the zipcode of that which is the address of that which is the home of that which is the assistant of that which is the mother of :George", which isn't very convenient to read or write. And this is when, in an object-oriented language, you would just cascade methods or attributes on properties using ".". To make it easier, in N3 there is a shortcut: the above would just be written

:George.rel:mother .off:assistant .con:home .con:address .con:zipcode

The dot must be immediately followed by the next thing, with no whitespace. This is forward traversal of the graph, where with each "." you move from something to its property. So ?x.con:mailbox is x's mailbox, and in fact in English you can read the "." as " 's". You can do backward traversal, using "^" instead of "." as punctuation. So if :me.con:mailbox means my mailbox, then <mailto:ora@lassila.com>^con:mailbox is that which has <mailto:ora@lassila.com> as a mailbox. This backward traversal is less usual - you can't do it in object-oriented programming languages -- but sometimes it's what you need. Note: in this imaginary ontology, ".rel:child" is equivalent to "^rel:parent". Whatever the sequence of "." and "^", they are always read left to right across the page. These are known as "paths". If you are used to XML, think: XPath but simpler. If you think of the circles and arrows graph of data, think of a path from node to node.
Cwm doesn't currently use paths on output, so getting cwm to input and output a file will turn it into the form of N3 you already know.

A common need is to represent an ordered collection of things. This is done in RDF as an rdf:Collection. In N3, the list is represented by separating the objects with whitespace and surrounding them with parentheses. Examples are:

( "Monday" "Tuesday" "Wednesday" )
(:x :y)
( :cust.bus:order.bus:number :cust.bus:order .bus:referencedPurchaseOrder .bus:number )

These lists are actually shorthand for statements which knit blank nodes together using rdf:first and rdf:rest. rdf:first is the relationship between a list and its first component. rdf:rest is the relationship between a list and the list of everything except its first element. rdf:nil is the name for the empty list. Therefore, ( "Monday" "Tuesday" "Wednesday" ) is equivalent to

[ rdf:first "Monday"; rdf:rest [ rdf:first "Tuesday"; rdf:rest [ rdf:first "Wednesday"; rdf:rest rdf:nil ]]]

One of the common uses of lists is as a parameter to a relationship which has more than one argument.

( "Dear " ?name "," ) string:concatenation ?salutation.

for example, indicates that the salutation is the string concatenation of the three strings "Dear ", whatever ?name is, and a comma. This will be used with built-in functions which we will discuss later.

In the examples, often we have been using terms in the default namespace, like :me or :father or :People. The ":" is there because we don't want to get confused with keywords which are part of the syntax, like a, is, of, and prefix. Further, we want to be able to add more keywords later to extend the syntax, and you don't want to find that your files which happened to use that word as a name are all out of date. The solution is that if you really want to be able to write N3 without that leading ":", then you can declare which keywords you are going to use.
When you have done that, then everything which is not in the list is allowed without a colon, and will be taken as a name. To actually use the keyword, you have to use an explicit "@" as you have been doing with "prefix" all along. Here is an example in which all the keywords are removed, so "@" is used all the time:

@keywords .
@prefix : <#>.
me @a Person.
Jo @a Person; @is sister @of me.
Jo father Alan; mother Flo.
Alan father Zak; brother Ed, Julian.

Here is an example in which the normal keywords used are all declared.

@keywords a, is, of, this.
@prefix : <#>.
me a Person.
Jo a Person; is sister of me.
Jo father Alan; mother Flo.
Alan father Zak; brother Ed, Julian.

Note that if you have declared something a keyword, then you can still use the "@" in front of it if you want to, as in "@prefix" above. Obviously, you can't declare a to be a keyword, and then call your variables a, b and c!
http://www.w3.org/2000/10/swap/doc/Shortcuts.html
crawl-001
refinedweb
853
74.08
Alex Buell writes:

> No, the ioctl numbers are correct, it's ESD that's fscked.
>
> /* set the sound driver audio format for playback */
> #if defined(__powerpc__)
>     value = test = ( (esd_audio_format & ESD_MASK_BITS) == ESD_BITS16 )
>         ? /* 16 bit */ AFMT_S16_NE : /* 8 bit */ AFMT_U8;
> #else /* #if !defined(__powerpc__) */
>     value = test = ( (esd_audio_format & ESD_MASK_BITS) == ESD_BITS16 )
>         ? /* 16 bit */ AFMT_S16_LE : /* 8 bit */ AFMT_U8;
> #endif /* #if !defined(__powerpc__) */
>
> <sarcasm>
> This is such a lovely piece of code!
> <sarcasm>

Indeed...

> Anyway, I can fix it now by adding the appropriate AFMT_S16_BE statement
> guarded by a #ifdef but this sucks. Thanks to Peter Jones who spotted this
> one.

Why can't you just use AFMT_S16_NE on all platforms? That is supposed to be equal to AFMT_S16_BE on big-endian platforms and to AFMT_S16_LE on little-endian platforms. NE == native endian.

Paul.
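Paul's point — that a native-endian constant removes the need for per-platform #ifdefs — can be illustrated outside C with Python's struct module (the sample value below is arbitrary):

```python
import struct
import sys

# "Native endian" is simply whichever of little- or big-endian the host
# uses, so code that asks for the native 16-bit format never needs a
# per-platform #ifdef.
sample = 0x1234
le = struct.pack("<h", sample)   # explicit little-endian, like AFMT_S16_LE
be = struct.pack(">h", sample)   # explicit big-endian, like AFMT_S16_BE
ne = struct.pack("=h", sample)   # native byte order, like AFMT_S16_NE

print(sys.byteorder)
print(ne == le if sys.byteorder == "little" else ne == be)  # True
```

Whatever the host, the native packing matches exactly one of the two explicit orders — which is the whole argument for AFMT_S16_NE.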
https://lkml.org/lkml/2001/10/31/299
CC-MAIN-2016-44
refinedweb
153
60.82
How to use the subtree git merge strategy

This article might be perceived as a blatant ripoff of this Linux kernel document, but, on the contrary, it’s intended as an add-on, showing how to do a subtree merge (the multi-project merge strategy that’s actually doable in a heterogenous group of developers, as opposed to subprojects, which many just can’t wrap their heads around) with contemporary git (“stupid content tracker”). Furthermore, the commands are reformatted to be easier to copy/paste.

To summarise: you’re on the top level of a checkout of the project into which the “other” project (Bproject) is to be merged. We wish to merge the top level of Bproject’s “master” branch as (newly created) subdirectory “dir-B” under the current project’s top level.

$ git remote add --no-tags -f Bproject /path/to/B/.git
$ git merge -s ours --allow-unrelated-histories --no-commit Bproject/master
$ git read-tree -u --prefix=dir-B/ Bproject/master
$ git commit -m 'Merge B project as our subdirectory dir-B'

(mind the trailing slash after dir-B/ on the read-tree command!)

Later updates are easy:

$ git pull -s subtree Bproject master

Besides reformatting, the use of --allow-unrelated-histories recently became necessary. --no-tags is also usually what you want, because tags are not namespaced like branches. Another command you might find relevant is how to clean up orphaned remote branches:

$ for x in $(git remote); do git remote prune "$x"; done

This command locally deletes all remote branches (those named “origin/foo”) that have been deleted on the remote side.

Update: Natureshadow wishes you to know that there is such a command as git subtree which can do similar things to the subtree merge strategy explained above, and several more related things. It does, however, need the præfix on every subsequent pull.
“I don’t like computers” cnuke@ spotted something on the internet, and shared. Do read this, including the comments. It’s so true. (My car is 30 years old, I use computers mostly for sirc, lynx and ssh, and I especially do not buy any product that needs to be “online” to work.) Nice parts of the internet, to offset this, though, do exist. IRC as a way of cheap (affordable), mostly reliant, communication that’s easy enough to do with TELNET.EXE if necessary. Fanfiction; easy proliferation of people’s art (literature, in this case). Fast access to documentation and source code; OpenBSD’s AnonCVS was a first, nowadays almost everything (not Tom Dickey’s projects (lynx, ncurses, xterm, cdk, …), nor GNU bash, though) is on a public version control system repository. (Now people need to learn to not rewrite history, just commit whatever shit they do, to record thought process, not produce the perfect-looking patch.) Livestreams too, I guess, but ever since live365.com went dead due to a USA law change on 2016-01-02, it got bad.! httpd CVE-2016-5387 “httpoxy” fixed A small patch was applied to httpd(8) to not pass the HTTP Proxy header as HTTP_PROXY environment variable to CGI scripts, because those often call utilities such as ftp(1), lynx(1), GNU wget, etc. which may accept this as an alternative spelling of http_proxy which is used to set a proxy for outgoing connections — something e.g. the CGI scripts in MirKarte do.. mksh R52c, paxmirabilis 20160306 released; PA4 paper size PDF manpages The MirBSD Korn Shell R52c was published today as bugfix-accumulating release of low upto medium importance. Thanks to everyone who helped squashing all those bugs; this includes our bug reporters who always include reproducer testcases; you’re wonderful! MirCPIO was also resynchronised from OpenBSD, to address the CVE-2015-{1193,1194} test cases, after a downstream (wow there are so many?) reminded us of it; thanks! 
This is mostly to prevent extracting ../foo – either directly or from a symlink(7) – from actually ending up being placed in the parent directory. As such the severity is medium-high. And it has a page now – initially just a landing page / stub; will be fleshed out later. Uploads for both should make their way into Debian very soon (these are the packages mksh and pax). Uploading backports for mksh (jessie and wheezy-sloppy) have been requested by several users, but none of the four(?) DDs asked about sponsoring them even answered at all, and the regular (current) sponsors don’t have experience with bpo, so… SOL ☹ I’ve also tweaked a bug in sed(1), in MirBSD. Unfortunately, this means it now comes with the GNUism -i too: don’t use it, use ed(1) (much nicer anyway) or perlrun(1) -p/-n… Finally, our PDF manpages now use the PA4 paper size instead of DIN ISO A4, meaning they can be printed without cropping or scaling on both A4 and US-american “letter” paper. And a Бодун from the last announcement: we now use Gentium and Inconsolata as body text and monospace fonts, respectively. (And à propos, the website ought to be more legible due to text justification and better line spacing now.) I managed to hack this up in GNU groff and Ghostscript, thankfully. (LaTeX too) Currently there are PDF manpages for joe (jupp), mksh, and cpio/pax/tar. And we had Grünkohl today! Also, new console-setup package in the “WTF” APT repository since upstream managed to do actual work on it (even fixed some bugs). Read its feed if interested, as its news will not be repeated here usually. (That means, subscribe as there won’t be many future reminders in this place.) The netboot.me service appears to be gone. I’ll not remove our images, but if someone knows what became of it drop us a message (IRC or mailing list will work just fine). PS: This was originally written on 20160304 but opax refused to be merged in time… Happy Birthday, gecko2! 
In the meantime, the Street Food festival weekend provided wonderful food at BaseCamp, and headache prevented this from being finished on the fifth. Update 06.03.2016: The pax changes were too intrusive, so I decided to only backport the fixes OpenBSD did (both those they mentioned and those silently included), well, the applicable parts of them, anyway, instead. There will be a MirCPIO release completely rebased later after all changes are merged and, more importantly, tested. Another release although not set for immediate future should bring a more sensible (and mksh-like) buildsystem for improved portability (and thus some more changes we had to exclude at first). I’ve also cloned the halfwidth part of the FixedMisc [MirOS] font as FixedMiscHW for use with Qt5 applications, xfonts-base in the “WTF” APT repo. (Debian #809979) tl;dr: mksh R52c (bugfix-only, low-medium); mircpio 20160306 (security backport; high) with future complete rebase (medium) upstream and in Debian. No mksh backports due to lacking a bpo capable sponsor. New console-setup in “WTF” APT repo, and mksh there as usual. xfonts-base too. netboot.me gone?!
https://www.mirbsd.org/wlog-10.htm
CC-MAIN-2017-22
refinedweb
1,260
59.23
ToolKit LANGUAGES: C# TECHNOLOGIES: Code-Behind | XML | XSLT | HTML Help

Document Your Apps

NDoc makes it easy to create MSDN-style documentation for your ASP.NET projects.

By Ken McNamee

Professional-quality documentation has never been easier. If you're using C# to build your ASP.NET applications already and are decorating your class members with XML comments (see References), you're only a few clicks away from documentation that looks and functions much like the HTML Help included with Microsoft's .NET Framework and Visual Studio .NET. What makes this possible is NDoc (see References) - an open-source, third-party tool that provides a great deal of functionality. You need only three things to get started with NDoc: the .NET Framework, HTML Help, and the NDoc source code, which will need to be compiled. Now don't get stressed - we're not recompiling the Linux kernel here. NDoc comes with Visual Studio .NET solution and project files, so all you have to do is open the solution file and click on Build. But if you don't have a copy of Visual Studio .NET, you're not out of luck. Also included is a makefile that requires only the .NET Framework. Because NDoc is so easy to download and build, you have no excuse for not trying it, right? Well, there might be one valid excuse. It's only fair to mention that NDoc is useless unless you're coding in C#. XML documentation is not, at this time, supported in VB .NET or any of the other Common Language Specification-compliant languages, and this reason is one of the many why C# continues to be my language of choice for developing .NET applications.

Prepare Your Code

In addition to writing your code in C#, your class members must be prefixed by XML documentation sections. This code listing demonstrates what that would look like:

/// <summary>
/// Represents one table of in-memory data.
/// </summary>
public class DataTable {
    /// <summary>
    /// Computes the given expression on the current rows
    /// that pass the filter criteria.
    /// </summary>
    /// <param name="expression">The expression to compute.</param>
    /// <param name="filter">The filter to limit the rows that
    /// evaluate in the expression.</param>
    /// <returns>The result of the computation.</returns>
    public object Compute(string expression, string filter) {
        return true;
    }
}

When this code is compiled, the C# compiler strips out the special XML documentation comments and merges them into a single XML file per project. Neither Visual Studio .NET nor the C# compiler, however, will do this for you automatically. You must pass the compiler the /doc switch and the filename of the merged XML file for this process to occur. In Visual Studio .NET, you accomplish this by opening the Project Properties dialog and navigating to the Build section of the Configuration Properties. Then you simply need to fill in the XML Documentation File option and Visual Studio .NET will know to compile the project with the /doc switch. If you aren't using Visual Studio .NET and are instead directly invoking the C# compiler, this command demonstrates how to create the merged XML file:

C:\>csc /target:library NDocExample.cs /doc:NDocExample.xml

Now that you have built both the assembly and its corresponding XML documentation file, you can fire up NDoc and let it do the rest of the work. Simply point NDoc at your Visual Studio .NET solution file and it determines which files to include in the compiled help. Or, you can add the assembly and XML file manually if you want more control or are not using a Visual Studio .NET solution. Additionally, you can set a number of properties to configure the resulting HTML Help, such as whether to display Visual Basic .NET syntax for the class members and whether to include private, protected, or internal members in addition to the public ones. Once you've set all the NDoc options for your project, simply click on Build and navigate to the output folder for the HTML Help file. Open the [project name].chm file and explore the project classes and members. You should see something similar to Figure 1.

Figure 1. Look familiar?
NDoc's finished product looks and functions much like the MSDN .NET documentation. If you sell or distribute compiled class libraries for other developers to consume, NDoc can be a real timesaver when it comes to creating professional, high-quality documentation. Even if you create only internal assemblies for an Internet or intranet site, however, NDoc can still be an important part of your development toolbox. New developers on your team probably will find the learning curve a little less steep if they can reference your class members in a familiar format that integrates seamlessly with the .NET Framework Help. There is much more to NDoc and XML documentation than I've mentioned here. So do yourself a favor and explore all the other options and tags available. You even can define your own tags and tweak the XSLT to render the Help file the way you want it. Who knows, it may be as close as you're going to get to (truthfully) claiming that your code is self-documenting!

References
NDoc:
XML Docs:
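The merged XML file that the /doc switch produces (and that NDoc consumes) is a small, regular format: member ids are prefixed T: for types and M: for methods, with summary, param, and returns elements nested inside. A sketch — in Python rather than C#, with invented member names — of reading it back:

```python
import xml.etree.ElementTree as ET

# Rough shape of the compiler's /doc output; the names below are made
# up for the example. A documentation tool only needs to walk the
# <member> elements and read their children.
doc_xml = """
<doc>
  <assembly><name>NDocExample</name></assembly>
  <members>
    <member name="T:NDocExample.DataTable">
      <summary>Represents one table of in-memory data.</summary>
    </member>
    <member name="M:NDocExample.DataTable.Compute(System.String,System.String)">
      <summary>Computes the given expression.</summary>
      <param name="expression">The expression to compute.</param>
    </member>
  </members>
</doc>
"""

root = ET.fromstring(doc_xml)
for member in root.iter("member"):
    summary = member.findtext("summary", default="").strip()
    print(member.get("name"), "->", summary)
```

Because the format is this simple, tools like NDoc can focus entirely on rendering (HTML Help, MSDN styling) rather than on parsing.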
http://www.itprotoday.com/web-development/document-your-apps
CC-MAIN-2018-09
refinedweb
850
64.61
I have a DataFrame as follows, where the first row is the "columns":

id,year,type,sale
1,1998,a,5
2,2000,b,10
3,1999,c,20
4,2001,b,15
5,2001,a,25
6,1998,b,5
...

The Pandas library provides simple and efficient tools to analyze and plot DataFrames. The following assumes that the pandas library is installed and that the data are in a .csv file (matching the example you provided).

import pandas as pd
data = pd.read_csv('filename.csv')

You now have a Pandas DataFrame as follows:

   id  year type  sale
0   1  1998    a     5
1   2  2000    b    10
2   3  1999    c    20
3   4  2001    b    15
4   5  2001    a    25
5   6  1998    b     5

This is easily achieved by:

data.plot('type', 'sale', kind='bar')

which produces a bar plot of sale against type. If you want the sale for each type to be summed,

data.groupby('type').sum().plot(y='sale', kind='bar')

will do the trick (see #3 for explanation). This is basically the same command, except that you have to first sum all the sale in the same year using the groupby pandas function.

data.groupby('year').sum().plot(y='sale', kind='bar')

This produces one bar per year.

Edit: You can also unstack the different 'type' per year for each bar by using groupby on 2 variables

data.groupby(['year', 'type']).sum().unstack().plot(y='sale', kind='bar', stacked=True)

See the Pandas Documentation on visualization for more information about achieving the layout you want.
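What data.groupby('year').sum() contributes to the sale column can be written out in plain Python, using the sample rows from the question:

```python
from collections import defaultdict

# Plain-Python equivalent of the groupby-sum step for the 'sale'
# column: total sale per year, using the question's sample data.
rows = [
    (1, 1998, "a", 5),
    (2, 2000, "b", 10),
    (3, 1999, "c", 20),
    (4, 2001, "b", 15),
    (5, 2001, "a", 25),
    (6, 1998, "b", 5),
]

sale_by_year = defaultdict(int)
for _id, year, _type, sale in rows:
    sale_by_year[year] += sale

print(dict(sorted(sale_by_year.items())))
# {1998: 10, 1999: 20, 2000: 10, 2001: 40}
```

These per-year totals are exactly the bar heights in the grouped plot.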
https://codedump.io/share/TzAQMbZYqnGC/1/a-convenient-way-to-plot-bar-plot-in-python-pandas
CC-MAIN-2018-17
refinedweb
253
59.33
Performance is a big problem in software development, and caching is one solution to speed up a system. This tutorial will guide you through getting started with Spring Cache using Spring Boot. Related posts: – Couchbase – How to create Spring Cache Couchbase application with SpringBoot – SpringBoot Caffeine cache with PostgreSQL backend – SpringBoot Hazelcast cache with PostgreSQL backend

Contents

I. Technology
– Java 1.8
– Maven 3.3.9
– Spring Tool Suite – Version 3.8.1.RELEASE
– Spring Boot: 1.4.0.RELEASE

II. Overview

1. Goal

Our goal is to build a service that can receive RESTful requests from clients, cache or evict Customer data through its own component called CacheService, and return the result. Note that Spring provides only the caching abstraction; the actual caching store is not implemented by the Spring caching framework. Because Spring Boot auto-configures caching technologies, we don't have to create our own caching storage; we just cache Customer data in a target cache named ‘customers’.

2. Project Structure

– Customer will be a Plain Old Java Object.
– CacheService is the component we use to cache data, storing it in a target cache named “customers”; it is autowired in WebController to implement the caching methods.
– WebController is a RestController with request mapping methods for RESTful requests such as cacheput, cachable, and cacheevict.
– SpringBootApplication should enable caching.
– Dependencies for Spring Boot Starter are added in pom.xml.

3. Steps to do
– Create Spring Boot project & add Dependencies
– Create a DataModel class
– Create a Caching Service
– Create a Web Controller
– Enable Caching
– Run Spring Boot Application & Enjoy Result

4. Demo Video

III. Practice

1. Create Spring Boot project & add Dependencies

Open Spring Tool Suite; on the menu, choose File -> New -> Spring Starter Project, then fill in each field. Click Next, then click Finish. The Spring Boot project will be created successfully.
Now, add dependencies for Web MVC and Spring Caching. Open pom.xml and add:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
</dependency>

2. Create a DataModel class

Under package model, create class Customer. Content of Customer.java (getters and setters omitted):

public class Customer {
    private long id;
    private String firstName;
    private String lastName;

    public Customer(long id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

3. Create a Caching Service

Create a Caching Service class under package service. This is the most important part of the tutorial. Content of CacheService.java:
– @CachePut updates the cache which is stored and execute the method. + the method putCustomer is used to modify customer instance in the cache. + input parameter ID is used as a key to find customer object in the cache. If the id exists, the function will modify the firstName the same as the input string parameter. – @CacheEvict is used for removing a single cache or clearing the entire cache from the cache storage. The method evict uses the input parameter id as a key to find the object that has the same id and removes it from the cache. 4. Create a Web Controller WebController is a RestController, we autowire service for CacheService that we created at previous step. We have some methods that have request mapping: get, put, evict. Content of WebController.java: @RestController public class WebController { @Autowired CacheService service; @RequestMapping("/cacheput") public String put(@RequestParam("firstname") String firstName, @RequestParam("id")long id){ service.putCustomer(firstName, id); return "Done"; } @RequestMapping("/cachable") public Customer get(@RequestParam("id")long id){ return service.get(id); } @RequestMapping("/cacheevict") public String evict(@RequestParam("id")long id){ service.evict(id); return "Done"; } } – @RequestMapping(“/cacheput”): put a Customer to cache data to target customers by service. – @RequestMapping(“/cachable”): get a Customer from cache customers by service. – @RequestMapping(“/cacheevict”): delete a Customer with input ID from cache customers by service. 5. Enable Caching By default, Caching in spring is not enabled. We must enable the caching by annotating with @EnableCaching or declaring it in the XML file. In our tutorial, we enable caching by add annotation @EnableCaching for the SpringBootApplication class. @SpringBootApplication @EnableCaching public class SpringCacheApplication{ public static void main(String[] args) { SpringApplication.run(SpringCacheApplication.class, args); } } 6. 
Run Spring Boot Application & Enjoy the Result

– Configure the Maven build: clean install
– Run the project as a Spring Boot App
– Check the results: the first request is processed slowly, and the server displays this text on the console:

Service processing...

Now a customer Customer(1, "Jack", "Smith") has been cached.

IV. Source Code

Last updated on October 5, 2017.

3 thoughts on “How to work with Spring Cache | Spring Boot”

Good tutorial. Thank you

Can we use a third-party cache like Redis or Hazelcast in place of the custom CacheService, and how?

Yes, we can! You can refer to these articles: – SpringBoot Hazelcast cache with PostgreSQL backend – SpringBoot Caffeine cache with PostgreSQL backend – Couchbase – How to create Spring Cache Couchbase application with SpringBoot
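The three annotations used in the tutorial follow a plain cache-aside pattern; a minimal sketch of the same semantics in Python (not Spring), with made-up customer data:

```python
# Sketch (Python, not Spring) of the three cache annotations above:
# get is @Cacheable (compute once, then serve from the cache), put is
# @CachePut (always refresh the cached value), evict is @CacheEvict.
# The data and names here are invented for the illustration.
store = {1: "Jack Smith", 2: "Adam Johnson"}   # backing data store
cache = {}                                      # the "customers" cache
calls = {"load": 0}                             # counts slow loads

def get(cid):
    if cid not in cache:            # @Cacheable: only a miss hits the store
        calls["load"] += 1
        cache[cid] = store[cid]
    return cache[cid]

def put(cid, value):                # @CachePut: update store and cache
    store[cid] = value
    cache[cid] = value
    return value

def evict(cid):                     # @CacheEvict: drop the cached entry
    cache.pop(cid, None)

get(1); get(1)
print(calls["load"])  # 1 -- the second call was served from the cache
evict(1); get(1)
print(calls["load"])  # 2 -- eviction forced a reload
```

This is why the tutorial's second request skips the slow "Service processing..." path: the @Cacheable method body only runs on a cache miss.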
https://grokonez.com/spring-framework/spring-boot/cache-data-spring-cache-using-spring-boot
CC-MAIN-2021-21
refinedweb
996
57.37
Submission only takes namespaces from specific elements: instance, model and the main document node. So any namespaces declared in between are not included. It must be possible to ask the document for all namespaces visible at a certain node instead of the specific copying we are doing now. Created attachment 215118 [details] Testcase "It must be possible to ask the document for all namespaces visible at a certain node instead of the specific copying we are doing now." When I wrote the code, I couldn't figure out how to do that. smaug might know more :) It seems that transformiix does not keep namespace declarations made at the root level; it only dumps namespaces which are actually used in the resulting XML. That is a problem when trying to transform schemas, as namespaces which are declared sometimes refer to element declarations like <xsd:element. The namespace foo is not used in the markup of the schemas, but it has to be kept in the resulting XML. As far as I know, the attachment does not even describe the actual behaviour. Other XSLT processors keep the namespaces declared at the root level. RIP xforms
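The "ask the document for all namespaces visible at a certain node" approach the bug requests amounts to walking a node's ancestor chain and collecting every xmlns declaration in scope. This is not Mozilla's code — just an illustrative Python sketch of the idea using the standard library's minidom (which keeps xmlns attributes as ordinary attributes):

```python
# Sketch: collect every namespace declaration in scope at a given node by
# walking its ancestor chain, instead of copying from specific elements.
from xml.dom import minidom

def namespaces_in_scope(node):
    ns = {}
    chain = []
    while node is not None and node.nodeType == node.ELEMENT_NODE:
        chain.append(node)
        node = node.parentNode
    # Apply outermost declarations first so inner ones override them.
    for el in reversed(chain):
        for i in range(el.attributes.length):
            attr = el.attributes.item(i)
            if attr.name == "xmlns":
                ns[""] = attr.value
            elif attr.name.startswith("xmlns:"):
                ns[attr.name.split(":", 1)[1]] = attr.value
    return ns

doc = minidom.parseString(
    '<root xmlns:foo="urn:foo">'
    '<mid xmlns:bar="urn:bar"><leaf/></mid></root>')
leaf = doc.getElementsByTagName("leaf")[0]
scope = namespaces_in_scope(leaf)
```

With this, a declaration made on an intermediate element (here `bar` on `<mid>`) is visible at the leaf, which is exactly what the element-specific copying misses.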
https://bugzilla.mozilla.org/show_bug.cgi?id=330557
SUSI being an interactive personal assistant, answers questions in a variety of formats. This includes maps, RSS, table, and pie-chart. SUSI MagicMirror Module earlier provided support for only Answer Action Type. So, if you were to ask about a location, it could not show you a map for that location. Support for a variety of formats was added to SUSI Module for MagicMirror so that users can benefit from rich responses by SUSI.AI. One problem that was faced while adding UI components is that in the MagicMirror Module structure, each module needs to supply its DOM by overriding the getDom() method. Therefore, you need to manage all the UI programmatically. Managing UI programmatically in Javascript is a cumbersome task since you need to create DOM nodes, manually apply styling to them, and add them to parent DOM object which is needed to be returned. We need to write UI for each element like below: getDom: function () { .... .... const moduleDiv = document.createElement("div"); const visualizerCanvas = document.createElement("canvas"); moduleDiv.appendChild(visualizerCanvas); const mapDiv = document.createElement("div"); loadMap(mapDiv,lat, long); moduleDiv.appendChild(mapDiv); ... ... } As you can see, manually managing the DOM is neither that easy nor a recommended practice. It can be done in a more efficient way using the React Library by Facebook. React is an open source UI library by Facebook. It works on the concept of Virtual DOM i.e. the whole DOM object gets created in the memory and only the changed components are reflected on the document. Since the SUSI MagicMirror Module is primarily written in open-source TypeScript Lang (a typed superset of JavaScript), we also need to write React in TypeScript. To add React to a Typescript Project, we need to add some dependencies. They can be added using: $ yarn add react react-dom @types/react @types/react Now, we need to change our Webpack config to build .tsx files for React. 
TSX like JSX can contain HTML like syntax for representing DOM object in a syntactic sugar form. This can be done by changing resolve extensions and loaders config so that awesome typescript loaded compiles that TSX files. It is needed to be modified like below resolve: { extensions: [".js", ".ts", ".tsx", ".jsx"], }, module: { loaders: [{ test: /\.tsx?$/, loaders: ["awesome-typescript-loader"], }, { test: /\.json$/, loaders: ["json-loader"], }], }, This will allow webpack to build and load both .tsx and .ts files. Now that project is setup properly, we need to add UI for Map and RSS Action Type. The UI for Map is added with the help of React-Leaflet library. React-Leaflet module is a module build on top of Leaflet Map library for loading maps in Browser. We add the React-Leaflet library using $ yarn add react-leaflet Now, we declare a MapView Component in React and render Map in it using the React-Leaflet Library. Custom styling can be applied to it. The render function for MapView React Component is defined as follows. import * as React from "react"; import {Map, Marker, Popup, TileLayer} from "react-leaflet"; interface IMapProps { latitude: number; longitude: number; zoom: number; } export class MapView extends React.Component<IMapProps, any> { public constructor(props: IMapProps) { super(props); } public render(): JSX.Element | any | any { const center = [this.props.latitude, this.props.longitude]; console.log(center); return <Map center={center} zoom={this.props.zoom} style={{height: "300px"}}> <TileLayer url="http://{s}.tile.osm.org/{z}/{x}/{y}.png"/> <Marker position={center}> <Popup> <span> Here </span> </Popup> </Marker> </Map>; } } For making the UI for RSS Action Type, we define an RSS Card Component. An RSS feed is constituted by various RSS Cards. An RSS Card is defined as follows. 
import * as React from "react"; export interface IRssProps { title: string; description: string; link: string; } export class RSSCard extends React.Component <IRssProps, any> { constructor(props: IRssProps) { super(props); } public render(): JSX.Element | any | any { return <div className="card"> <div className="card-title">{this.props.title}</div> <div className="card-description">{this.props.description}</div> </div>; } } Now, we define an RSS feed which is constituted by various RSS Information Cards. Since screen size is limited and there is no option available to the user to scroll, we limit the number of cards displayed to 5 with slice operation on data array. import * as React from "react"; import {IRssProps, RSSCard} from "./rss-card"; export interface IRSSFeedProps { feeds: Array<IRssProps>; } export class RSSFeed extends React.Component <IRSSFeedProps, any> { public constructor(props: IRSSFeedProps) { super(props); } public render(): JSX.Element | any | any { return <div className="rss-div"> {this.props.feeds.map((feed: IRssProps) => { return <RSSCard key={feed.title} title={feed.title} description={feed.description} link={feed.link}/>; } ).slice(0, 5)} </div>; } } Now, we can add these components to UI easily and render it with ReactDOM like: ReactDOM.render(<TableView data={tableData} columns={action.columns}/>, tableDiv); Below is an example screenshot of RSS and Map View in SUSI MagicMirror. Resources: - Using React with Typescript and Webpack article on Medium. - React and Typescript usage documentation on Typescript Website. - React Leaflet Library for adding Maps.
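The cap-at-five behaviour in the RSSFeed component above is just a slice over the feed list before rendering. The same design choice, shown in plain Python for illustration (not the module's TypeScript):

```python
# Illustrative: render at most five feed cards, as the RSSFeed component's
# .slice(0, 5) does, since a mirror screen cannot scroll.
def render_feed(feeds, limit=5):
    return [f"{f['title']}: {f['description']}" for f in feeds[:limit]]
```

Slicing before mapping keeps the DOM small regardless of how many items the upstream RSS source returns.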
https://blog.fossasia.org/adding-map-and-rss-action-type-support-to-susi-magicmirror-module-with-react/
CDI producer bean with jax-rs dependency
Supriya Shyamal Apr 2, 2012 6:21 AM
I have tried an example with Weld as the CDI container and Jersey as the JAX-RS implementation, for the dependency of JAX-RS resources on a CDI producer bean. Below is my CDI producer class, which produces a unique id depending on the session id or a user-provided id from the REST path:

public class UuidProducer {
    @Inject
    private Logger log;

    @Context
    private HttpServletRequest request;

    @PathParam("uuid")
    private String uuid;

    @Produces
    @Uuid
    public String produceUuid() {
        if (StringUtils.isNotBlank(uuid)) {
            return uuid;
        }
        if (request.getSession() != null) {
            return request.getSession().getId();
        }
        log.info("Generating UUID...");
        return UUID.randomUUID().toString();
    }
}

When I tried this on JBoss AS7 with the RestEasy implementation, no JAX-RS artifact is injected into this CDI class. My question is: how can I inject JAX-RS resources into a normal CDI bean?

1. Re: CDI producer bean with jax-rs dependency
Jozef Hartinger Apr 3, 2012 9:15 AM (in response to Supriya Shyamal)
JAX-RS injection is not available in plain CDI beans.

2. Re: CDI producer bean with jax-rs dependency
Supriya Shyamal Apr 5, 2012 4:31 AM (in response to Jozef Hartinger)
But it works in the Glassfish server, which uses Weld for CDI and Jersey for JAX-RS. An interesting class I found in Jersey is com.sun.jersey.server.impl.cdi.CDIComponentProviderFactoryInitializer, which bridges the CDI and JAX-RS beans.

3. Re: CDI producer bean with jax-rs dependency
Jozef Hartinger Apr 6, 2012 9:42 AM (in response to Supriya Shyamal)
CDI and JAX-RS are bridged in JBoss AS 7 as the specification requires. You can use any CDI component as a JAX-RS resource or provider. This involves dependency injection and other CDI services. The injection you describe works the other way around and is not portable. In your code, you should still be able to get a reference to HttpServletContext and get the information you need from there. Alternatively, you'll have to inject the information you need into a JAX-RS resource/provider and put a CDI producer there.

4. Re: CDI producer bean with jax-rs dependency
Supriya Shyamal Apr 10, 2012 6:04 AM (in response to Jozef Hartinger)
If I put a CDI producer in a JAX-RS resource then there is a chance of a cyclic injection problem. For example, I have a CDI bean with a String constructor parameter which is nothing but the path variable from the REST resource bean. So I create a producer method to expose this path variable in the JAX-RS resource bean, and then comes the cyclic dependency problem.
https://developer.jboss.org/message/729088
It's Friday! So I express this particular post from a philosophical angle as well as a technical one. And there will be plenty more of these in the future coming from me. It's an opinion from my personal experience as a developer from assembly to C#, nothing more, not from Microsoft or any other source. As developers we gain experiences that transcend trends and changes in other aspects such as programming languages, which will continue to change and evolve for the better, for the most part. These aspects we can potentially carry into any programming environment as is currently understood and has been over the past few decades. So in my 25 years as a software developer, I had a lot of experience with commenting and not commenting source code. The good thing about comments is they explain the code, in ways that code cannot. However... The bad thing about comments is that often we could instead use variable names, method names, class names, namespaces, filenames, folder names etc., that reflect what the code does. This enables us that drilldown experience. Furthermore, if you are putting comments in your code and change the code, you may also have to change the comments if applicable; that's extra work. Worse still, change the code and not the comment when you should, and the comments are now out of sync with the code; now we have a problem for future developers who may take over the code base that you are working on. If you spend time thinking about the organization of the code and the flow upfront, or refactor as you go along, fewer comments are actually required. After all, as developers we prefer to write code, not comments. As developers we prefer to read code, not comments, if the code is well written. In today's era, if you marry this notion with work items or change requests, then these are often contextual through a bug or task which can change without having an effect on the code.
As I see it, if we can see a ChangeSet in TFS, for example, in which the change and the work item (bug, task, operational issue etc.) that was resolved within context to the code change, the requirement for comments in code is reduced even more so. Annotations in TFS help here, but that's another topic, I'll sometime cover soon. I see comments as a last resort, only needed if it can't already be expressed by the code or through a change request which accompanies the check-in along with something along the lines of a work item. In my experience, as I moved on in my career as a developer, I focused more on folks being able to understand my code and less on the comments. I spent more time thinking about meaningful names for variables etc and the design of the code. Some of the best code I've seen, is where there were very few comments, the code was elegantly written, highly maintainable and self-explanatory to someone else looking at it with minimal comments. The most understandable code I saw over 25 years had very few comments. And even then, that was going back to the 1980's and 1990's without work items and Change Sets. Organize the project structure, files, namespaces, classes, methods and variables upfront, means less comments. No more comments on this topic from me. Apologies for the large comment in this blog.
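A tiny, invented illustration of the point above — the same logic written so the comment does the explaining, then rewritten so the names do:

```python
# Before: the comment carries the meaning; change the code and the comment rots.
def calc(d, r):
    # apply discount rate to order total and add 20% tax
    return d * (1 - r) * 1.2

# After: the names carry the meaning, so the comment becomes unnecessary.
TAX_MULTIPLIER = 1.2

def price_after_discount_and_tax(order_total, discount_rate):
    discounted = order_total * (1 - discount_rate)
    return discounted * TAX_MULTIPLIER
```

Both functions compute the same thing; only the second is readable without a comment to keep in sync.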
http://blogs.msdn.com/b/jasonsingh/archive/2013/03/21/to-comment-code-not-that-is-the-question.aspx
This article has been excerpted from the book "Graphics Programming with GDI+". A polygon is a closed shape with three or more straight sides. Examples of polygons include triangles and rectangles. The Graphics class provides a DrawPolygon method to draw polygons. DrawPolygon draws a polygon defined by an array of points. It takes two arguments: a pen and an array of Point or PointF structures. To draw a polygon, an application first creates a pen and an array of points, and then calls the DrawPolygon method with these parameters. Listing 3.19 draws a polygon with five points.

LISTING 3.19: Drawing a polygon

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace WindowsFormsApplication11
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Paint(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            Graphics g = e.Graphics;
            // Create pens
            Pen greenPen = new Pen(Color.Green, 2);
            Pen redPen = new Pen(Color.Red, 2);
            // Create points for the polygon
            PointF pt1 = new PointF(40.0F, 50.0F);
            PointF pt2 = new PointF(60.0F, 70.0F);
            PointF pt3 = new PointF(80.0F, 34.0F);
            PointF pt4 = new PointF(120.0F, 180.0F);
            PointF pt5 = new PointF(200.0F, 150.0F);
            PointF[] ptsArray = { pt1, pt2, pt3, pt4, pt5 };
            // Draw the polygon
            e.Graphics.DrawPolygon(greenPen, ptsArray);
            // Dispose of objects
            greenPen.Dispose();
            redPen.Dispose();
        }
    }
}

Figure 3.26 shows the output from Listing 3.19.

FIGURE 3.26: Drawing a polygon

Conclusion
Hope the article has helped you in understanding how to draw a polygon in GDI+. Read other articles on GDI+ on the website. Drawing a Polygon in GDI+ Drawing Splines and Curves in GDI+
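The five vertices in Listing 3.19 define a closed shape (DrawPolygon implicitly connects the last point back to the first). As a hedged aside, outside of GDI+ the same point list can be processed numerically — for example, computing the polygon's area with the shoelace formula (Python used here for illustration; this is not part of the article):

```python
# Shoelace formula over the same five vertices used in Listing 3.19.
pts = [(40.0, 50.0), (60.0, 70.0), (80.0, 34.0), (120.0, 180.0), (200.0, 150.0)]

def polygon_area(points):
    # Sum cross-products of consecutive vertex pairs, wrapping around,
    # mirroring how DrawPolygon implicitly closes the shape.
    s = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Note the wrap-around pairing (`points[1:] + points[:1]`) is the numeric analogue of the polygon being closed rather than an open polyline.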
http://www.c-sharpcorner.com/uploadfile/mahesh/drawing-a-polygon-in-gdi/
December 31st, 2010 #7 cchhrriiss121212 Re: dvd::rip Moreover, specifying none as converter name causes clearing the converter list. -E, --external-converter-program=PATH Sets external converter program name to PATH. Practically speaking: MKV is a little smaller and looks better, MP4 is compatible with Apple devices. The variable DEFAULT_CHARSET can be used by enconv as the default target charset. Its content is interpreted before command line arguments. names. You may run into problems using it together with Enca particularly because Enca's support for surfaces not 100% compatible, because recode tries too hard to make the transformation reversible, because it Please see recode documentation for more details of this issue. Please see the standard external converters for examples. Could not read from remote repository Error: clone from ssh doesn't work The SSH path for my project doesn't work because it is missing the repositories directory. This name is used when calling an external converter, too. -f, --human-readable Prints verbal description of the detected charset and surfaces--something a human understands best. Be warned, Enca makes no difference between unrecognised charset and charset having no name in given namespace in such situations. -d, --details It used to print a few pages of details When such a name doesn't exist because RFC 1345 doesn't define a given encoding, some other name defined in some other RFC or just the name which author considers `the most A charset unknown to iconv counts as unknown. Note output type selects language name style, not charset name style here.
You can use recode(1) recoding chains or any other kind of braindead recoding specification for ENC, provided that you tell Enca to use some tool understanding it for conversion (see section However, when I try to encode using h264, I get the following error: Job 'Transcode video - title #1, pass 1' failed with error message: Command exits with failure code: Command: Typical way: RAILS_ENV=production bundle exec rails console User.where(:username => nil).each { |u| u.username = u.email.sub(/@.+/, ''); u.save! } Error: File browsing does not show the last commit of file. Solution The following rake task can "resync" all of the keys. Enca tries to detect your preferred charset from locales. By default, Enca doesn't prefix result with file name when run on a single file (including standard input). -V, --verbose Increases verbosity level (each use increases it by one). It may or may not be compiled in; run enca --version to find out its availability in your enca build (feature +librecode-interface). Though, what's the practical difference between using mkv rather than mp4? In case of multipart files (e.g. Multibyte encodings should be recognised for any Latin, Cyrillic or Greek language.
Maybe some user added an SSH key and at the time there was a file permission issue that prevented that key from being added to the authorized_keys file. OPTIONS There are several categories of options: operation mode options, output type selectors, guessing parameters, conversion parameters, general options and listings. Error: "Loading commit data..." never ends (looping forever). External converter (extern) should always be specified last, only as a last resort, since it's usually not possible to recover when it fails. On first run the homepage returns 502 bad gateway (nginx) after 30/60 seconds Fix First start seems slow so increase the unicorn timeout to something larger, say 200. (Or buy faster I/O) troubles. Sections General GitLab can help you checking if it was set up properly and locating potential sources of problems. The easiest way is by running the self diagnosis command (note that you This converter can be specified as built-in with -C. language-detection. On Ubuntu, pppoeconf is installed by default. Running $ sudo pppoeconf lets you configure the connection to the DSL line interactively. This is already resolved in 11.04, so the settings below are unnecessary. As for the pppoe connection, it didn't work even when configured to start automatically at boot, so I added pon dsl-provider to /etc/rc.local. Strangely, installing texlive-xetex also fixed my issue. Using aliases as the output type causes Enca to print a list of all accepted aliases of the detected charset. -x, --convert-to=[..]ENC Converts file to encoding ENC. libdvdnav: DVDOpenFilePath:findDVDFile /VIDEO_TS/VIDEO_TS.IFO failed libdvdnav: DVDOpenFilePath:findDVDFile /VIDEO_TS/VIDEO_TS.BUP failed libdvdread: Can't open file VIDEO_TS.IFO. Topics are: python, book review, and my hobbies. Friday, July 1, 2011: Could not find encoding file "H" On Ubuntu 10.04 LTS, trying to convert dvi -> pdf with dvipdfmx gives: Could not find encoding file "H" it does not have all the features gitlab uses). All possible values of this option. (Crazy?) surfaces. This doesn't matter since nobody does read it.
The error message is: ! There are no command line options to tune libenca parameters. The MP4 rips were around 350MB with small defects, and the MKVs were around 250MB and looked fine. fatal: The remote end hung up unexpectedly In your server logs (/var/log/auth.log): User git not allowed because account is locked Problem: On some linux distros the adduser script creates an entry Their names can be abbreviated as long as they are unambiguous. Ubuntu and Canonical are registered trademarks of Canonical Ltd. wget and similar tools do not pick up the proxy settings configured through Ubuntu's menu. On Ubuntu, for example, write the proxy settings around line 79 of /etc/wgetrc.
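Once the source charset is known (or has been guessed by a tool such as enca), the conversion step itself is straightforward. As a rough Python illustration of only that half of what `enca -x ENC` does — detection, which enca's guessing parameters control, is the hard part and is not shown here:

```python
# Re-encode bytes from a known source charset to a target charset.
# This sketch assumes the source encoding has already been determined.
def convert(data: bytes, source: str, target: str = "utf-8") -> bytes:
    return data.decode(source).encode(target)

latin1_bytes = "naïve café".encode("latin-1")
utf8_bytes = convert(latin1_bytes, "latin-1")
```

The byte sequences differ because latin-1 stores the accented characters as single bytes while UTF-8 uses two, yet both decode to the same text.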
http://gsbook.org/error-could/error-could-not-find-encoding-file-h-ubuntu-10-04.php
PDL::CallExt - call functions in external shared libraries

use PDL::CallExt;
callext('file.so', 'foofunc', $x, $y); # pass piddles to foofunc()
% perl -MPDL::CallExt -e callext_cc file.c

callext() loads in a shareable object (i.e. compiled code) using Perl's dynamic loader, calls the named function and passes a list of piddle arguments. callext_cc() allows one to compile the shared objects using Perl's knowledge of compiler flags. The named function (e.g. 'foofunc') must take a list of piddle structures as arguments; there is no way of doing portable general argument construction, hence this limitation. In detail, the code in the original file.c would look like this:

#include "pdlsimple.h" /* Declare simple piddle structs - note this .h file
                          contains NO perl/PDL dependencies so can be used
                          standalone */

int foofunc(int nargs, pdlsimple **args); /* foofunc prototype */

i.e. foofunc() takes an array of pointers to pdlsimple structs. The use is similar to that of main(int nargs, char **argv) in UNIX C applications. pdlsimple.h defines a simple N-dimensional data structure which looks like this:

struct pdlsimple {
    int datatype;    /* whether byte/int/float etc. */
    void *data;      /* Generic pointer to the data block */
    int nvals;       /* Number of data values */
    PDL_Long *dims;  /* Array of data dimensions */
    int ndims;       /* Number of data dimensions */
};

(PDL_Long is always a 4 byte int and is defined in pdlsimple.h.) This is a simplification of the internal representation of piddles in PDL, which is more complicated because of threading, dataflow, etc. It will usually be found somewhere like /usr/local/lib/perl5/site_perl/PDL/pdlsimple.h. Thus to actually use this to call real functions one would need to write a wrapper, e.g. to call a 2D image processing routine:

void myimage_processer(double* image, int nx, int ny);

int foofunc(int nargs, pdlsimple **args) {
    pdlsimple *image = args[0];
    myimage_processer( image->data, *(image->dims), *(image->dims+1) );
    ...
}

Obviously a real wrapper would include more error and argument checking. This might be compiled (e.g. Linux): cc -shared -o mycode.so mycode.c. In general Perl knows how to do this, so you should be able to get away with: perl -MPDL::CallExt -e callext_cc file.c. callext_cc() is a function defined in PDL::CallExt to generate the correct compilation flags for shared objects. If there are problems you will need to refer to your C compiler manual to find out how to generate shared libraries. See t/callext.t in the distribution for a working example. It is up to the caller to ensure the datatypes of piddles are correct - if not, peculiar results or SEGVs will result.

Call a function in an external library using Perl dynamic loading: callext('file.so', 'foofunc', $x, $y); # pass piddles to foofunc(). The file must be compiled with dynamic loading options (see callext_cc). See the module docs PDL::CallExt for a description of the API.

Compile external C code for dynamic loading. Usage: % perl -MPDL::CallExt -e callext_cc file.c -o file.so. This works portably because Perl has built-in knowledge of how to do dynamic loading on the system on which it was installed. See the module docs PDL::CallExt for a description of the API.
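Perl's callext() pattern — load a shared object, look up a named function, pass it arguments — has a close analogue in Python's ctypes. A minimal POSIX-only sketch (CDLL(None) hands back the symbols of the already-loaded C library, so no separate .so file is needed for the demo):

```python
# Load the process's C library and call strlen(): the same dlopen-and-call
# pattern as callext('file.so', 'foofunc', ...), with the argument types
# declared explicitly since C cannot be introspected.
import ctypes

libc = ctypes.CDLL(None)  # POSIX: symbols visible in the running process
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"piddle")
```

As with callext(), it is entirely up to the caller to get the argument types right; a wrong signature here produces the same peculiar results or segfaults the POD warns about.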
http://search.cpan.org/~chm/PDL-2.006/Lib/CallExt/CallExt.pm
Details - Type: Bug - Status: Resolved - Priority: Minor - Resolution: Fixed - Affects Version/s: 2.9.0, 3.0.0-alpha2 - Fix Version/s: 2.9.0, 3.0.0-alpha2 - - Labels:None - Flags:Patch Description At the moment there is no debug or trace level logs generated for KMS authorisation decisions. In order for users to understand what is going on in given scenarios this would be invaluable. Code should endeavour to keep as much work off the sunny-day-code-path as much as possible. Activity - All - Work Log - History - Activity - Transitions I like extra logs, especially those for debugging security issues, so this is welcome. - we normally just use LOG.debug(), over trace - now we are moving to SL4J as classes get updated, you can move off the LOG.trace("User: [" + ugi.getShortUserName() + "] ") code and the .isTraceEnabled() guard, and just go: LOG.debug("User [{}]", ugi.getShortUserName). No string concat will take place unless debug is enabled, so cost is minimal (one string object instantiation is all). Thanks Steve Loughran I can change to LOG.debug("Checking user {}, groups: {} for: {} " + acl.getAclString(), ugi.getShortUserName(), Arrays.toString(ugi.getGroupNames()), opType.toString() ); but we'd still be evaluating those three functions (including Arrays.toString() etc) on every access. To that end, I'd be tempted to keep the .isDebugEnabled() statements in. What do you think? Doesn't the KMSAudit expose all this via the ok() / unauthorized() and error() methods ? They seem to already be invoked by the KMSACLs::assertAccess() method Arun Suresh The KMSACLs::assertAccess() method audits unauthorized operations (not ok() or error()), but doesn't give any information as to why the operation was unauthorized (also, KMSACLs::hasAccessToKey() doesn't appear to audit anything for some reason). 
This patch gives detailed information about the reasons why operations were or were not authorised, including the full decision tree, which groups a user has, and whether they match whitelist or blacklists or key specific ACLs. KMSACLs::hasAccessToKey() doesn't appear to audit anything for some reason.. Then I guess, we should should call KMSAudit::unauthorized() there if the check fails. ... if (LOG.isDebugEnabled()) { if (blacklist == null) { LOG.debug("No blacklist for {}", type.toString()); } else if (access) { LOG.debug("user is in {}" , blacklist.getAclString()); } else { LOG.debug("user is not in {}" , blacklist.getAclString()); } .... Not really sure if we should explicitly print a user and his/her associated acl in the logs. SImilarly: .... if (LOG.isDebugEnabled()) { LOG.debug("Checking user [{}], groups: {} for: {} {} ", ugi.getShortUserName(), Arrays.toString(ugi.getGroupNames()), type.toString(), acls.get(type).getAclString() ); } .... Printing the users group information explicitly in the log might have security implications. I've updated the patch to remove the ugi.getGroupNames(). The usernames are all over the logs in most things anyway (good examples in NameNode logs, Hive Server2 or in Apache Sentry logs, and the ACLs are already in the logs. Of course you'd only do this in a controlled scenario - you'd hope that only an authorised user would be able to change the log4j.properties. Personally I'd rather keep the ugi.getGroupNames in there, but happy to cede to your judgement there. Then I guess, we should should call KMSAudit::unauthorized() there if the check fails. I agree with that, however we'd need to add a new method to KMSAudit as there is no unauthorized method that takes KeyOpType - I propose that goes in a separate JIRA, although I can add that in here if you'd like? Version3 of patch KMSAudit as there is no unauthorized method that takes KeyOpType - I propose that goes in a separate JIRA, although I can add that in here if you'd like? 
I guess it should be fine to add it here.. Made changes to log failed Key ACL requests, although because there's separate enums for Key operations as to KMS Operations, the audit logger now takes either using Object. I've attached patches based on trunk (3.x alpha and also 2.7). Thanks for updating the patch Tristan Stevens. Am not really in favor of passing Object arguments, but since the change is restricted to private methods I guess its fine. But it would probably be a good idea to probably put a javadoc note in the AuditEvent constructor. +1 pending that and the whitespace fix. Updated both patches to resolve whitespace issue and to add Javadoc for public method with Object. Also, changed some of the Object signatures to String where possible. Last patches were bad - replacing with String was not easily feasible. Reverted to just fix the whitespace. New patches attached that address Javadoc and whitespace issues. Unit tests and checkstyle appear to pass on both trunk and branch-2.7.3. +1 for the latest patch. Will commit this to trunk tomorrow if no objections from anyone else. Tristan Stevens, will you be posting a branch-2 patch (since this is targeted for 2.9.0 as well) ? Thanks Arun Suresh. The patches we have apply as follows: - trunk: HADOOP-13903-7.patch applies and tests cleanly. - branch-2: HADOOP-13903-7.patch applies and tests cleanly. - branch-2.8: No patch but the repo doesn't seem to compile cleanly at the moment anyway. Errors in org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.java. - branch-2.7: HADOOP-13903-branch2.7-7.patch applies and tests cleanly. If you want me to provide a patch for branch-2.8 then I can do. Thanks Tristan Stevens for adding the traces. Can you fix the checkstyle issues from Jenkins? Otherwise, the latest patch v7 looks good to me. Committing this shortly (I'll take care of the unused imports and the javadoc when i check in) The unused imports are actually for the javadoc. 
The two long lines are import statements which I think are okay? On 11 Jan 2017 08:08, "Arun Suresh (JIRA)" <jira@apache.org> wrote: [ com.atlassian.jira.plugin.system.issuetabpanels:comment- tabpanel&focusedCommentId=15817602#comment-15817602 ] Arun Suresh commented on HADOOP-13903: -------------------------------------- Committing this shortly (I'll take care of the unused imports and the javadoc when i check in) HADOOP-13903-5-branch2.7.patch, HADOOP-13903-5.patch, HADOOP-13903-6.patch, HADOOP-13903-6.patch, HADOOP-13903-7.patch, HADOOP-13903-branch2.7-6.patch, HADOOP-13903-branch2.7-7.patch, HADOOP-13903.patch authorisation decisions. In order for users to understand what is going on in given scenarios this would be invaluable. much as possible. – This message was sent by Atlassian JIRA (v6.3.4#6332) As Tristan pointed out, looks like the "unused" imports are actually needed, since they are referred to in the javadocs. Committed this to trunk and branch-2 after fixing the remaining checkstyle (to add the '.' at the end of the sentence). Thanks again for working on this Tristan Stevens and for the reviews Steve Loughran and Xiaoyu Yao. SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11105 (See) HADOOP-13903. Improvements to KMS logging to help debug authorization (arun suresh: rev be529dade182dd2f3718fc52133f43e83dce191f) - (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuditLogger.java - (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java - (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAudit.java version 1 of patch
https://issues.apache.org/jira/browse/HADOOP-13903
Jlookup + 5 comments

OK, I see everyone is writing this:

    def postOrder(root):
        if root:
            postOrder(root.left)
            postOrder(root.right)
            print(root.data, end=' ')

But that seems wasteful to me, because you're calling the function first, then checking to see if there's anything to call it on. So for every leaf node you're calling the function twice for no reason, and for every node with only one child you're calling it once for no reason. Why not check before calling, like this:

    def postOrder(root):
        if root.left:
            postOrder(root.left)
        if root.right:
            postOrder(root.right)
        print(root.data, end=' ')

Can anyone tell me why the first way is preferable?

raju13yadav + 1 comment

As per my understanding, the first one prints the root data after recursing on the left and right nodes, but in the second one, when the left data has been printed it goes to the right and prints the right data, and does not print the root data because it finishes execution after the right branch.

VictoriaB + 1 comment

nik0mon + 1 comment

It already does the if statement you stated, it just does it implicitly on the recursive call. In terms of savings, you aren't really saving anything, because the cost of doing the if statements would be the trade-off for doing those function calls. Which means this really doesn't change the complexity of this algorithm at all, which would still be Big-Oh(N). The biggest caveat really is if you call root.left or root.right on a None root (i.e. an empty tree) it would raise an AttributeError, which the first one would be able to handle.

alaniane + 1 comment

Actually, if statements are cheaper than function calls. Function calls result in a push to the stack, which means pushing a pointer to the function name and then pushing each of the parameters.
nik0mon + 1 comment

I agree. If you boil it down and do the exact calculation, sure, it would be cheaper. But if we are interested in just complexity, it's the same as the if call.

ZeaLot4J + 3 comments

My Java solution without recursion:

    void postOrder(Node root) {
        Node t = root;
        Deque<Node> stack = new ArrayDeque<Node>();
        stack.push(root);
        while (!stack.isEmpty() && root != null) {
            root = stack.peek();
            // nodes without children should be printed,
            // or nodes whose children have already been printed
            if ((root.left == null && root.right == null)
                    || (t == root.left || t == root.right)) {
                System.out.print(root.data + " ");
                stack.pop();
                t = root;
            } else {
                if (root.right != null) stack.push(root.right);
                if (root.left != null) stack.push(root.left);
            }
        }
    }

Ranker2017 + 1 comment

Is "&& root!=null" necessary in "while(!stack.isEmpty() && root!=null){"?

khushboo4tiwari + 6 comments

    void Postorder(Node root) {
        if (root == null) {
            return;
        }
        if (root != null) {
            Postorder(root.left);
            Postorder(root.right);
            System.out.print(root.data + " ");
        }
    }

chianghomer + 1 comment

I think the condition check for root == null is not necessary, since it's a void method.

whiteflags99 + 1 comment

If you didn't have the check I think you would end up with a null pointer exception. Something has to stop the recursion.

chianghomer + 1 comment

No, since it is void we do not need the base case to stop the recursive call. If the root (or current Node) is null it just finishes the call and does nothing.

whiteflags99 + 1 comment

I'm getting confused about what people exactly mean. It would either be

    if (root != null) {
        Postorder(root.left);
        Postorder(root.right);
        System.out.print(root.data + " ");
    }

or

    if (root == null) return;
    Postorder(root.left);
    Postorder(root.right);
    System.out.print(root.data + " ");

I agree that you don't need both of the options, but one is necessary.

Tolani + 0 comments

To be honest you don't need the second if statement if (root != null). The base case can be seen as an else construct.
I don't program in Java, but in C++ it looks like this:

    void postOrder(node *root) {
        if (root == NULL)
            return;
        postOrder(root->left);
        postOrder(root->right);
        cout << root->data << " ";
    }

I guess in Java it would be:

    void Postorder(Node root) {
        if (root == null) {
            return;
        }
        Postorder(root.left);
        Postorder(root.right);
        System.out.print(root.data + " ");
    }

vvArnia + 0 comments

Simple non-recursive solution:

    void postOrder(Node root) {
        Stack<Node> st1 = new Stack<Node>();
        Stack<Integer> st2 = new Stack<Integer>();
        st1.push(root);
        while (!st1.isEmpty()) {
            Node t = st1.pop();
            st2.push(t.data);
            if (t.left != null) {
                st1.push(t.left);
            }
            if (t.right != null) {
                st1.push(t.right);
            }
        }
        // Printing the tree's postorder traversal
        while (!st2.isEmpty()) {
            System.out.print(st2.pop() + " ");
        }
    }

laforge2001 + 4 comments

How would you solve this without using recursion?

kvothebloodless - Asked to answer + 0 comments - Endorsed by laforge2001

Create a list or an array to store the traversal, i.e. as you go left, until you encounter any leaf, then move back up and store those values. When you encounter a previous node with a right subtree, repeat the steps.

jonmcclung + 1 comment

Why would you want to?

shuhua_gao + 1 comment

To avoid stack overflow if the tree is very high.

alaniane + 0 comments

You are more likely to run out of memory trying to maintain the data structure than from the recursive calls. If you use the assumption that a 64-bit machine uses a 64-bit pointer for the function address, then you are pushing 16 bytes per call (8 bytes for the function address and 8 bytes for the heap address of the data object). When you use the stack object to store the data object you have the overhead of the stack object and, if that stack object is not implemented correctly, the possibility of the data object being passed by value and not by reference. In this case that could be anywhere from 18 to 24 bytes per object (depending on the size of the int for the data).
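The two-stack idea vvArnia uses above translates directly to Python. A minimal sketch for illustration (the Node class and function name here are mine, not HackerRank's exact definitions):

```python
# Iterative post-order using two stacks, mirroring the Java solution above.
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def post_order_iterative(root):
    if root is None:
        return []
    st1, st2 = [root], []
    while st1:
        node = st1.pop()
        st2.append(node.data)        # second stack collects root-right-left
        if node.left:
            st1.append(node.left)
        if node.right:
            st1.append(node.right)
    return st2[::-1]                 # reversed: left-right-root order

# A small tree: 1 has children 2 and 3; 2 has left child 4
tree = Node(1, Node(2, Node(4)), Node(3))
print(' '.join(map(str, post_order_iterative(tree))))  # 4 2 3 1
```

Unlike the recursive versions in the thread, this never grows the call stack with the tree height, only the explicit lists.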
pankajkmrgupta + 0 comments

Python 2, recursive:

    def postOrder(root):
        #Write your code here
        if root is None:
            return
        postOrder(root.left)
        postOrder(root.right)
        print root.data,

amoljoshi1993 + 0 comments

Python 3 code:

    def postOrder(root):
        #Write your code here
        if root != None:
            postOrder(root.left)
            postOrder(root.right)
            print(str(root.data) + " ", end="")

cellepo + 0 comments

Java - O(n) time, O(1) space. No recursion or stack - algorithm modified from Morris's for in-order (also see my):

    public static void postOrder(Node root) {
        // ensures all descendants of root that are right-children
        // (arrived at only by "recurring to right") get inner-threaded

        // prefer not to give data, but construction requires it as primitive
        final Node fakeNode = new Node(0);
        fakeNode.left = root;
        Node curOuter = fakeNode;
        // in addition to termination condition,
        // also prevents fakeNode printing
        while (curOuter != null) {
            if (curOuter.left != null) {
                final Node curOuterLeft = curOuter.left;
                // Find in-order predecessor of curOuter
                Node curOuterInOrderPred = curOuterLeft;
                while (curOuterInOrderPred.right != null
                        && curOuterInOrderPred.right != curOuter) {
                    curOuterInOrderPred = curOuterInOrderPred.right;
                }
                if (curOuterInOrderPred.right == null) {
                    // [Outer-] Thread curOuterInOrderPred to curOuter
                    curOuterInOrderPred.right = curOuter;
                    // "Recur" on left
                    curOuter = curOuterLeft;
                } else {
                    // curOuterInOrderPred already [outer-] threaded to curOuter
                    // -> [coincidentally] therefore curOuterLeft's left subtree
                    //    is done processing
                    // Prep for [inner] threading (which hasn't ever been done yet here)...
                    Node curInner = curOuterLeft;
                    Node curInnerNext = curInner.right;
                    // [Inner-] Thread curOuterLeft [immediately backwards]
                    // to curOuter [its parent]
                    curOuterLeft.right = curOuter;
                    // [Inner-] Thread the same [immediately backwards] for all
                    // curLeft descendants that are right-children
                    // (arrived at only by "recurring" to right)
                    while (curInnerNext != curOuter) {
                        // Advance [inner] Node refs down 1 level (to the right)
                        final Node backThreadPrev = curInner;
                        curInner = curInnerNext;
                        curInnerNext = curInnerNext.right;
                        // Thread immediately backwards
                        curInner.right = backThreadPrev;
                    }
                    // curInner, and all of its ancestors up to curOuterLeft,
                    // are now indeed back-threaded to each's parent.
                    // Print them in that order until curOuter:
                    while (curInner != curOuter) {
                        /* -> VISIT */
                        System.out.print(curInner.data + " ");
                        // Move up one level
                        curInner = curInner.right;
                    }
                    // "Recur" on right
                    curOuter = curOuter.right;
                }
            } else {
                // "Recur" on right
                curOuter = curOuter.right;
            }
        }
    }

ritesh_manapure + 0 comments

For post-order the traversal is left -> right -> node:

    public static void postOrder(Node root) {
        if (root == null) {
            return;
        }
        postOrder(root.left);
        postOrder(root.right);
        System.out.print(root.data + " ");
    }
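To put rough numbers on Jlookup's original question (check before the call vs. after it), here is a small self-contained sketch that counts invocations of each variant on a four-node tree. All names here are mine, not from the thread:

```python
# Counting how many times each postOrder variant gets invoked.
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

calls = {"check_after": 0, "check_before": 0}

def post_order_check_after(root, out):
    calls["check_after"] += 1          # invoked even for None children
    if root:
        post_order_check_after(root.left, out)
        post_order_check_after(root.right, out)
        out.append(root.data)

def post_order_check_before(root, out):
    calls["check_before"] += 1         # invoked only for real nodes,
    if root.left:                      # but crashes on an empty tree
        post_order_check_before(root.left, out)
    if root.right:
        post_order_check_before(root.right, out)
    out.append(root.data)

# A small tree: 1 has children 2 and 3; 2 has left child 4
tree = Node(1, Node(2, Node(4)), Node(3))
a, b = [], []
post_order_check_after(tree, a)
post_order_check_before(tree, b)
print(a, b)       # same traversal order for both
print(calls)      # check-after makes one extra call per None child
```

On this tree the check-after variant is called 9 times (4 nodes plus 5 None children) against 4 for check-before, which matches Jlookup's point, while nik0mon's caveat also shows up: only the first variant survives being handed an empty tree.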
https://www.hackerrank.com/challenges/tree-postorder-traversal/forum
Design patterns described in Martin Fowler's famous book [Patterns of Enterprise Application Architecture, 2002] have had a great influence on the way modern applications interoperate with relational databases. One of the most productive approaches in this area is known as the object-relational mapper (ORM). At first glance, the mechanism simply takes care of mapping objects of the business logic to database tables. Although implementations may differ significantly in their details, all ORM tools create a new layer of abstraction, promising to ease manipulations on the objects and relationships comprising the application's business logic. Using object-relational mapping literally shifts the development of so-called on-line transaction processing (OLTP) applications to a higher level. The choice of ORM framework heavily influences the architecture of the whole project. The benefits advertised by such frameworks could even be a reason for switching platforms in an existing project. But what overhead do we incur when using different ORM solutions? Is it really true that, as stated by the author of SQLAlchemy, Michael Bayer [The Architecture of Open Source Applications: SQLAlchemy], the performance penalties are not crucial, and furthermore that they are going to disappear as the JIT-enabled PyPy technology gains acceptance? This is not an easy question to answer, because we need to benchmark many different ORM solutions using a single set of tests. In this article we are going to compare the overhead imposed by two different ORM solutions: SQLAlchemy for Python vs. YB.ORM for C++ [see Quick Introduction here]. To this end, two special test applications were developed, both implementing the same OLTP logic, along with a test suite to test them. As the subject for the test server application we have chosen car parking automation.
Typical operations include: giving out a parking ticket, calculating the time spent on parking, payment for the service, and leaving the parking lot with the ticket returned to the system. This is the so-called postpaid schema. A prepaid schema has also been implemented, where a user pays beforehand for some time and has the option to prolong the parking session at any time. When the user leaves the parking lot, the unused amount is sent to his/her parking account, which can be used later to pay for another parking session. All of these operations fit well into what is called CRUD (Create, Read, Update and Delete) semantics. The source code of the test stand is available at GitHub. There you can find the test suite and two distinct applications implemented on two different platforms:

- folder parking – in Python, using SQLAlchemy (referred to as sa below);
- folder parkingxx – in C++, using YB.ORM (referred to as yborm below).

Both applications are driven by a single test suite, which sequentially reproduces a set of test cases against the API the applications implement. The test suite is written in Python, using the standard unittest module, and both applications show a "green light" on the tests. To run the same test suite against applications built on different platforms we need some sort of IPC (Inter-Process Communication). The most common IPC mechanism out there is the Socket API; it allows for communication between processes running on the same host or on different hosts. The test stand API is thus accessible via a TCP socket, using HTTP for request/response handling and JSON for serialization of data structures. In the Python test server application (sa) we use the SimpleHTTPServer module to implement an HTTP server. In the C++ test server application (yborm) we use the HttpServer class for that purpose (files parkingxx/src/micro_http.h, parkingxx/src/micro_http.cpp), borrowed from the folder examples/auth of the YB.ORM project.
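To make the HTTP-plus-JSON transport concrete, here is a sketch of what a test-suite call might look like. The endpoint path, port, and parameter name below are illustrative assumptions, not taken from the repository:

```python
# Hypothetical request/response shapes for the HTTP+JSON test-stand API.
import json
from urllib.parse import urlencode

def build_request_line(method, params):
    """Compose the HTTP GET request line the test suite might send."""
    return "GET /{0}?{1} HTTP/1.0".format(method, urlencode(params))

line = build_request_line("check_plate_number",
                          {"registration_plate": "ABC123"})
print(line)  # GET /check_plate_number?registration_plate=ABC123 HTTP/1.0

# A success response body would be a JSON document, e.g.:
resp = json.loads('{"status": "success", "paid": "false"}')
print(resp["paid"])  # false
```

The point of this shape is that the same plain-text exchange can be driven against either server implementation, which is what lets one Python test suite exercise both the sa and yborm applications.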
For better performance we run both the tests and the application on the same host, which has a multi-core CPU. In real-life environments there's almost always a requirement to handle incoming requests in parallel threads or processes. In CPython there is a known bottleneck with multithreading, caused by the GIL (Global Interpreter Lock); therefore Python-powered server applications tend to use multiprocessing instead. In C++, on the other hand, there are no such concerns. Since the result of the benchmark should not be affected by the quality of a thread/process pool implementation, this benchmark was run as a series of sequential requests sent to the server. We use a MySQL database server with its InnoDB transactional storage engine. Each of the two server applications runs on the same database instance. Logging is configured in such a manner as to produce roughly the same amount of data for both implementations. All messages from the WEB server and all database queries are logged. Besides, additional testing was done with no logging enabled. Both implementations work essentially the same way. To further simplify comparison of the applications, their business logic code bases were structured in a similar way. In this article we don't discuss the expressive power of the ORM tools, although it's one of the important things when it comes to such tools. Let's just compare some numbers; the volumes of the code bases are also close: 20.4 KB (sa) versus 22.5 KB (yborm) for the business logic code, plus 3.4 KB (sa) versus 5.7 KB (yborm) for the data model description.
Below is an example of the business logic code, written using SQLAlchemy and Python:

    def check_plate_number(session, version=None, registration_plate=None):
        plate_number = str(registration_plate or '')
        assert plate_number
        active_orders_count = session.query(Order).filter(
            (Order.plate_number == str(plate_number)) &
            (Order.paid_until_ts > datetime.datetime.now()) &
            (Order.finish_ts == None)).count()
        if active_orders_count >= 1:
            raise ApiResult(mk_resp('success', paid='true'))
        raise ApiResult(mk_resp('success', paid='false'))

A similar example, using YB.ORM and C++:

    ElementTree::ElementPtr check_plate_number(Session &session, ILogger &logger,
            const StringDict &params)
    {
        string plate_number = params["registration_plate"];
        YB_ASSERT(!plate_number.empty());
        LongInt active_orders_count = query<Order>(session).filter_by(
            (Order::c.plate_number == plate_number) &&
            (Order::c.paid_until_ts > now()) &&
            (Order::c.finish_ts == Value())).count();
        ElementTree::ElementPtr res = mk_resp("success");
        res->add_json_string("paid", active_orders_count >= 1 ? "true" : "false");
        throw ApiResult(res);
    }

For the test system we used Ubuntu Linux, which is very common for deploying server-side applications. Testing was done using a desktop computer with the following equipment on board: Intel(R) Core(TM) i5 760 @2.80GHz, 4GB RAM, running 64-bit Ubuntu 12.04. All software packages are installed from the stock Ubuntu repos, except for the following items: PyPy, PyMySQL, SOCI and YB.ORM. To find out what functions get called during a test suite run, look at Table 1. The functions that modify the database are emphasized. Their contribution to the total time is roughly calculated, based on logs from the configuration built with YB.ORM and the SOCI backend.
    function name        | calls per test suite run | % of total run time
    ---------------------+--------------------------+--------------------
    get_service_info     | 22                       | 8.95
    create_reservation   | 10                       | 26.67
    pay_reservation      | 8                        | 19.30
    get_user_account     | 7                        | 1.24
    stop_service         | 6                        | 13.81
    account_transfer     |                          | 13.14
    cancel_reservation   | 4                        | 10.69
    check_plate_number   | 3                        | 0.60
    leave_parking        | 1                        | 2.69
    issue_ticket         |                          | 2.91
    total                | 68                       | 100

Table 1. Test suite structure by function calls

Let's look at the structure of the test suite in terms of the SQL statements being issued, see Table 2. It could be helpful for understanding how comparable these two implementations, built on totally different platforms, really are. For the two flavors (built with SQLAlchemy or YB.ORM) the numbers in the table differ a bit, because the design of the libraries is not an exact match. But, as the assertions in the test suite show, the logic works in the same way in both applications.

    statement          | calls from SQLAlchemy app | calls from YB.ORM app
    -------------------+---------------------------+----------------------
    SELECT             | 78                        | 106
    SELECT FOR UPDATE  | 88                        | 89
    UPDATE             | 45                        | 42
    INSERT             | 27                        |
    DELETE             | 0                         |
    total              | 238                       | 264

Table 2. Test suite structure in terms of SQL statements

Of course, the kind of load the test suite generates is far from what can be found in real-life environments. Nonetheless, this test suite allows us to get an idea of how the performance levels of the ORM layers compare to each other. Our test suite was run in consecutive batches, each 20 iterations long. For each server configuration the timings were measured 5 times, to minimize errors. This yields 100 runs of the test suite per configuration, in total. Timings were measured using the standard Unix command time, which outputs the time spent by a process running in user space (user) and kernel (sys) modes, as well as the real time (real) of command execution. The latter strongly influences the user experience. For the client side, only the real timing is taken into account. Standard output and standard error were both redirected to /dev/null.
Command line sample for starting a test batch:

    $ time ( for x in `yes | head -n 20`; do python parking_http_tests.py -apu &> /dev/null ; done )
    real 0m24.762s
    user 0m3.312s
    sys  0m1.192s

For a running server process the user and sys timings are of practical interest, since they correlate with physical consumption of the CPU resource. The real timing is omitted, as it would include idle time when the server was waiting for an incoming request or some other I/O. Also, to see the measured timings in the course of running the tests, one must stop the server, as it normally works in an infinite loop. The following four configurations have been put to the test:

- PyPy + SQLAlchemy
- CPython + SQLAlchemy
- YB.ORM + SOCI
- YB.ORM + ODBC

For each of them two kinds of benchmarking have been conducted: with logging turned on and off. In total: 8 combinations. Averaging was performed on the results of 5 measurements. The measured timings of 20 consecutive test suite runs for each combination are presented in fig. 1.

Figure 1. Timings for the test suite running at the client side; less is better

From the deployment and maintenance point of view, it's important to know how much CPU time the server application consumes. The higher this number, the more CPU cores it takes to serve the same number of incoming requests, and the higher the power consumption and heat emission. The CPU time consumed is counted as the time a process spends working in user space mode plus the time spent in kernel mode. The timings measured are shown in fig. 2.

Figure 2. CPU consumption at the server side; less is better

In a search for possible ways to improve performance, one of the often-considered options is turning off the debug functionality. It's very common to have logging on while developing and testing software using an ORM: one needs the means to control what is going on under the hood.
But when it comes to a production environment, logging becomes a performance concern. Logging may be minimized or turned off, especially when the application functionality has matured. But how big are the log files really? In these test applications every message from the HTTP server gets logged, as well as every SQL statement executed, with its input and output data. As noted above, the test suite is run 100 times for each server configuration. The volume of the log files generated is shown in Table 3.

                            | pypy sa | cpython sa | yborm soci | yborm odbc
    size of log file, MB    | 19.02   | 18.97      | 20.78      | 20.95
    lines count, thousands  |        138.5         |         180.0

Table 3. The volume of the log files after 100 runs of the test suite

The conducted measurements show (Table 4) how much the timings improved, in percent, after logging was switched off. The first line is about server response time, and the second is about CPU consumption at the server side.

                            | pypy sa | cpython sa | yborm soci | yborm odbc
    client side tests, %    | 10.0    | 12.4       | 11.5       | 12.6
    CPU consumption, %      | 11.0    | 26.0       | 19.2       | 21.4

Table 4. The performance boost in % after the logging is switched off

Having considered this benchmark, we arrive at the following conclusions:

- Among these four server implementations the fastest one is implemented in C++ (yborm odbc). It performs three times better than the fastest one implemented in Python (cpython sa), while the C++ code base is bigger by only 20%.
- Server response time, i.e. the time after which a user sees the response, may under some circumstances also be shortened if YB.ORM is used. In this example the improvement was about 38%.
- The promised performance of PyPy on running multi-layered frameworks a la SQLAlchemy is not achieved yet.
- For reasons still to be discovered, using YB.ORM with the SOCI backend yields slightly poorer results than with the ODBC backend.
- Switching off the logging at the server side did not have as much impact as could have been expected.
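The percentages in Table 4 follow from the raw timings by a simple relative-difference formula; a sketch for illustration (the sample timings below are invented, not the article's raw data):

```python
# Relative improvement after disabling logging, as a percentage of the
# with-logging time. Example inputs are made up for illustration.
def improvement_pct(t_with_logging, t_without_logging):
    return round((t_with_logging - t_without_logging) / t_with_logging * 100, 1)

# e.g. a run that takes 50.0 s with logging and 45.0 s without:
print(improvement_pct(50.0, 45.0))  # 10.0
```

With this definition, a value of 26.0 for cpython sa means the no-logging run consumed 26% less CPU time than the logging run of the same configuration.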
In particular, the most significant improvement at the server side was observed with CPython+SQLAlchemy: 26%. For YB.ORM the improvement is about 20%. For a long time the Python programming language has been considered a high-productivity language. One of the reasons is the shorter "coding" – "running" cycle. And great tools such as SQLAlchemy make the platform even more attractive. At the same time, performance sometimes suffers. In some cases it may be better to use Python for prototyping, and to hand the final implementation over to C++, which does have comparable frameworks and tools today. This article, along with any associated source code and files, is licensed under The MIT License.
https://codeproject.freetls.fastly.net/Articles/870096/Performance-of-ORM-layers-in-Python-and-Cpp?display=Print
Csharp Interfaces And Multiple Inheritance

Task

I want to test what happens if a class inherits two or more interfaces with the same method. I make a silly example of a class that implements Spin in two different ways. First a ballerina interface, where Spin means that the ballerina rotates at a certain speed. Then a cat interface. The Swedish word for purring is Spin, so do not be confused if it makes no sense in your favourite language. This example is available here: [1]

Inheritance Diagram

It is this simple...

    Object
     |
     +--- IBallerina --+
     |                 |
     +--- ICat --------+--- myDancingKitten

IBallerina

Any ballerina can spin and answer if they need any toes cut to fit into a shoe of a certain size.

    // a dancer in a silly dress
    public interface IBallerina
    {
        void Spin(int RPM);
        bool NeedToCutToe(int ShoeSize);
    }

ICat

Cats can spin (purr in English) and answer if they need a shave.

    // Felis catus
    public interface ICat
    {
        bool NeedsShave();
        void Spin(int RPM);
    }

The class myDancingKitten

First we need to implement the Spin, NeedsShave and NeedToCutToe methods and the other functions in a small class.

    // A Felis catus in a silly dress
    public class myDancingKitten : IBallerina, ICat
    {
        public void Spin(int RPM)
        {
            Console.WriteLine("purr screeeeeeeech!!!!");
        }

        public bool NeedsShave()
        {
            // shaving cats is fun, right? [2]
            return true;
        }

        public bool NeedToCutToe(int ShoeSize)
        {
            // all cats have very small feet
            return false;
        }
    }

Small test class

We make a small test class with the original name Program. Here we create three functions that take a ballerina, a cat and a dancing kitten, and then call its Spin function.
    static class Program
    {
        static void PlayWithBallerina(IBallerina balle)
        {
            balle.Spin(216);
        }

        static void PlayWithCat(ICat kitty)
        {
            kitty.Spin(666);
        }

        // if I ever get a kitten that can dance I'll call it Belzeebub
        static void PlayWithMyDancingKitten(myDancingKitten Belzeebub)
        {
            Belzeebub.Spin(314);
        }

        static void Main(string[] args)
        {
            myDancingKitten Belzeebub = new myDancingKitten();
            PlayWithBallerina(Belzeebub);
            PlayWithCat(Belzeebub);
            PlayWithMyDancingKitten(Belzeebub);
            Console.Write("Press Andy to continue . . . .");
            Console.ReadKey();
        }
    }

First Output

    purr screeeeeeeech!!!!
    purr screeeeeeeech!!!!
    purr screeeeeeeech!!!!
    Press Andy to continue . . . .

More advanced usage

Since it is obvious that we intend different things when we spin the cat as if it was a cat than as if it was a ballerina (and perhaps also as if it was a dancing kitten), we add two more explicit implementations of the Spin method to the dancing kitten class:

    void IBallerina.Spin(int RPM)
    {
        Console.WriteLine("screeeeeeeech!!!!");
    }

    void ICat.Spin(int RPM)
    {
        Console.WriteLine("purr purr...");
    }

We now have three Spin methods:

- First, a method that tells us how the dancing kitten will spin if it is expected to behave as a ballerina. Clearly the poor cat does not like being tossed around like that.
- Also, a method that tells us how the cat behaves if he is purring. This is nice - the cat likes to purr.
- A final one (the first one we implemented) that tells us how the cat behaves if someone knows he is a dancing kitten and wants him to spin. This is a little strange for the cat - first he likes it, then he throws up.

Advanced output

The output is different now. The test program is the same, but the class is different.

    screeeeeeeech!!!!
    purr purr...
    purr screeeeeeeech!!!!
    Press Andy to continue . . . .

Conclusion

This is really nice: clearly a dancing kitten must be able to behave both as a cat and as a ballerina (even if he does not like it). Also, a dancing kitten can behave just as a dancing kitten.
This allows for nice and flexible inheritance, but still with some sort of strictness. This page belongs in Kategori Programmering.
http://pererikstrandberg.se/blog/index.cgi?page=CsharpInterfacesAndMultipleInheritance
Thread-safe stacks in .NET

July 7, 2015 3 Comments

The Stack object has a thread-safe equivalent in .NET called ConcurrentStack, which resides in the System.Collections.Concurrent namespace. Here's some important terminology:

- Adding an item is called a push, just like in the case of the single-threaded Stack object
- The Pop method has been replaced with TryPop, which returns false in case there was nothing to retrieve from the stack. If it returns true then the popped item is returned as an "out" parameter
- Similarly, the Peek method has been replaced with TryPeek, which works in an analogous way

The following code fills up a stack of integers with 1000 elements and starts 4 different threads that all read from the shared stack:

    public class ConcurrentStackSampleService
    {
        private ConcurrentStack<int> _integerStack = new ConcurrentStack<int>();

        public void FillStack()
        {
            for (int i = 0; i < 1000; i++)
            {
                _integerStack.Push(i);
            }
            // ... thread start-up code omitted ...
        }

        private void GetFromStack()
        {
            int res;
            bool success = _integerStack.TryPop(out res);
            while (success)
            {
                Debug.WriteLine(res);
                success = _integerStack.TryPop(out res);
            }
        }
    }

Each thread will keep popping items from the shared stack as long as TryPop returns true. Once it returns false, we know that there's nothing more left in the collection.

View the list of posts on the Task Parallel Library here.

Reblogged this on Dinesh Ram Kali.

Reblogged this on Brian By Experience.

Pingback: Summary of thread-safe collections in .NET – .NET training with Jead
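The same drain-until-empty pattern exists in other runtimes too. As an analogy (not part of the original post), Python's standard library offers queue.LifoQueue as its thread-safe LIFO; here get_nowait raising queue.Empty plays the role of TryPop returning false:

```python
# Four threads draining a shared thread-safe LIFO, mirroring GetFromStack.
import queue
import threading

stack = queue.LifoQueue()
for i in range(1000):
    stack.put(i)                     # analogous to Push

popped = []
popped_lock = threading.Lock()

def drain():
    while True:
        try:
            item = stack.get_nowait()  # analogous to TryPop succeeding
        except queue.Empty:            # analogous to TryPop returning false
            return
        with popped_lock:
            popped.append(item)

threads = [threading.Thread(target=drain) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(popped))  # 1000
```

As with ConcurrentStack, every element is consumed exactly once no matter how the four threads interleave; only the order of consumption is nondeterministic.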
https://dotnetcodr.com/2015/07/07/thread-safe-stacks-in-net/
GameFromScratch.com

Heaps is a very cool library, but not an incredibly well documented one. So that's where this tutorial series comes in. We are going to look at creating 2D and then 3D graphics using the Heaps engine. For this series I will be doing both text and video versions of each tutorial. The video version of this tutorial is available here.

This tutorial is going to assume you are using Windows; if you are on another operating system, the steps are going to vary slightly. First head to Haxe.org/download and download the appropriate Haxe installer for the most recent version. In my case I am going with the Windows Installer. Run the executable and say yes to any security messages you receive. You want to install everything, like so:

Install wherever you like:

Verify your install worked correctly. Fire up a command prompt and type haxelib version:

That was easy, eh? Next you will probably want an IDE or editor. Personally I am using HaxeDevelop, a special port of FlashDevelop. This is a Windows-only IDE though. Another option is Visual Studio Code with the Haxe language extensions. Finally we need to install the Heaps library. It's not registered with haxelib yet, so we currently have to install it from GitHub. Run the command:

    haxelib git heaps

And done. Now let's create our first application to make sure everything is up and running correctly: a simple hello world app. Assuming you are using HaxeDevelop, go ahead and create a new project via Project->New Project. I created a JavaScript project like:

Inside our project folder, we need to create a folder for our resources. I simply created a directory called res. Simply right click your project in the Project panel and select Add->New Folder...

Next we need a TTF file; I personally used this font. Simply download that zip and copy the ttf file into the newly created res directory. You can open an Explorer window to that directory by right clicking it and selecting Explore.
I personally renamed it to not be all caps; it should work either way though. If you are using HaxeDevelop, your project should look something like this:

We have two final bits of configuration. First we need to tell HaxeDevelop that we use the Heaps library, and that the resource folder is named res. Right click your project and select Properties.

Next select the Compiler Options tab. First add an entry to Compiler Options with the value -D resourcePath="res". Then add a value of heaps to Libraries. That's it; click Apply then OK.

Finally some code! First we need a WebGL canvas for our application to run in. Simply open up index.html located in the Bin folder and add a canvas. Your code should look something like:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8"/>
        <title>JSTest</title>
        <meta name="description" content="" />
    </head>
    <body>
        <canvas id="webgl" style="width:100%;height:100%"></canvas>
        <script src="JSTest.js"></script>
    </body>
    </html>

Now we need to edit our main Haxe code. By default it will be called Main.hx, and it's the entry point (and entirety) of our program.
Enter the following code:

    import h2d.Text;
    import hxd.Res;
    import hxd.res.Font;
    import js.Lib;

    class Main extends hxd.App {

        var text : Text;

        // Called on creation
        override function init() {
            // Initialize all loaders for embedded resources
            Res.initEmbed();
            // Create an instance of wireframe.ttf located in our res folder,
            // then create a 2d font of 128pt size
            var font = Res.wireframe.build(128);
            // Create a new text object using the newly created font, parented to the 2d scene
            text = new Text(font, s2d);
            // Assign the text
            text.text = "Hello World";
            // Make it red, using a hex code ( RR GG BB, each two hex characters
            // represent an RGB value from 0 - 255 )
            text.textColor = 0xFF0000;
        }

        // Called each frame
        override function update(dt:Float) {
            // simply scale our text object up until it's 3x normal size, repeat forever
            var scaleAmount = 0.01;
            if (text.scaleX < 3.0)
                text.setScale(text.scaleX + scaleAmount);
            else
                text.setScale(1);
        }

        static function main() {
            new Main();
        }
    }

Now go ahead and run your code by either hitting F5 or clicking this button:

You should see:

Congratulations, your first Haxe program using the Heaps library. Next up, we will jump into 2D graphics with Heaps.

Programming, Haxe, 2D, Tutorial
http://www.gamefromscratch.com/post/2016/05/05/Haxe-And-Heaps-Tutorial-Series-Getting-Started.aspx
Hey guys, so I just recently got back into programming and I decided to program one of those stupid "what highschool stereotype are you" quizzes to see what I still remember. Everything went great until I realized that I don't know how to check if one variable is the greatest out of all the variables in the program (which would decide the end result) without writing a ton of if statements. I asked someone how I could do it in a simpler way and they said I need to put my variables in an array so I could bubble sort them. So my question is how do I make an array with 8 int variables in it, and also if you could help me out with the whole bubble sort thing that would be great. Here's my program so far if it helps (I'm only putting in the first part cuz it's pretty long but you should get the gist of what I'm trying to do)

Code:
#include <iostream>

using namespace std;

int main ()
{
    int think;
    int bro = 0;
    int jock = 0;
    int art = 0;
    int rock = 0;
    int stoner = 0;
    int nerd = 0;
    int rich = 0;
    int ba = 0;
    int a;
    int b;
    int c;
    int d;
    int e;
    int f;
    int g;

    cout<<"THE HIGHSCHOOL STERIOTYPE QUIZ\n";
    cout<<"\n";
    cout<<"How To Play:\n";
    cout<<"You will be asked a series of questions to determine your personality, \nanswer truthfully by typing the number next to the response that fits you best \nand then hitting the enter key. In the event that multiple responses fit you, \nDO NOT type both numbers, choose the one that fits you best.\n";
    cout<<"\n";
    cout<<"Are you ready to find out what highschool steriotype you are? \nPress enter to find out!\n";
    cin.get();

    cout<<"What would you classify yourself as? (does not count towards final result)\n";
    cout<<"\n";
    cout<<"1. Lax Bro\n";
    cout<<"2. Jock\n";
    cout<<"3. Artist\n";
    cout<<"4. Rocker\n";
    cout<<"5. Stoner\n";
    cout<<"6. Nerd\n";
    cout<<"7. Rich Kid\n";
    cout<<"or are you...\n";
    cout<<"8. BEYOND AUTHORITY!!!\n";
    cout<<"\n";
    cin>> think;
    cout<<"\n";

    cout<<"QUESTION 1\n";
    cout<<"\n";
    cout<<"What's your favorite sport?\n";
    cout<<"\n";
    cout<<"1. Football\n";
    cout<<"2. Do gigs count as sports?\n";
    cout<<"3. Baseball\n";
    cout<<"4. Lacrosse\n";
    cout<<"5. Sports?\n";
    cout<<"6. I'm too good for sports\n";
    cout<<"\n";
    cin>> a;
    cout<<"\n";

    switch (a)
    {
    case 1:
        jock = jock + 1;
        break;
    case 2:
        rock = rock + 1;
        break;
    case 3:
        rich = rich + 1;
        break;
    case 4:
        bro = bro + 1;
        break;
    case 5:
        art = art + 1;
        nerd = nerd + 1;
        stoner = stoner + 1;
        break;
    case 6:
        ba = ba + 1;
        break;
    default:
        cout<<"Wow... Is it really that hard to put in the right number? GET OUT!\n";
        return 0;
    }
https://cboard.cprogramming.com/cplusplus-programming/126440-array-help-my-quiz.html
attendees in a very detailed view in the event management dashboard. He can see the statuses of all the attendees. The possible statuses are completed, placed, pending, expired and canceled, checked in and not checked in. He/she can take actions such as checking in the attendee. If the organizer wants to download the list of all the attendees as a CSV file, he or she can do it very easily by simply clicking on Export As and then on CSV. Let us see how this is done on the server.

Server side – generating the Attendees CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv

- We define a method export_attendees_csv which takes the attendees to be exported as a CSV file as the argument.
- Next, we define the headers of the CSV file. It is the first row of the CSV file.

def export_attendees_csv(attendees):
    headers = ['Order#', 'Order Date', 'Status', 'First Name', 'Last Name',
               'Email', 'Country', 'Payment Type', 'Ticket Name',
               'Ticket Price', 'Ticket Type']

- A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.

    rows = [headers]

- We iterate over each attendee in attendees and form a row for that attendee by separating the values of each of the columns by a comma. Here, every row is one attendee.
- The newly formed row is added to the rows list.
    for attendee in attendees:
        column = [str(attendee.order.get_invoice_number()) if attendee.order else '-',
                  str(attendee.order.created_at) if attendee.order and attendee.order.created_at else '-',
                  str(attendee.order.status) if attendee.order and attendee.order.status else '-',
                  str(attendee.firstname) if attendee.firstname else '',
                  str(attendee.lastname) if attendee.lastname else '',
                  str(attendee.email) if attendee.email else '',
                  str(attendee.country) if attendee.country else '',
                  str(attendee.order.payment_mode) if attendee.order and attendee.order.payment_mode else '',
                  str(attendee.ticket.name) if attendee.ticket and attendee.ticket.name else '',
                  str(attendee.ticket.price) if attendee.ticket and attendee.ticket.price else '0',
                  str(attendee.ticket.type) if attendee.ticket and attendee.ticket.type else '']
        rows.append(column)
    return rows

The rows returned by export_attendees_csv are then written out row by row:

content = export_attendees_csv(attendees)
for row in content:
    writer.writerow(row)

Obtaining the Attendees CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/attendees/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the attendees of the event as a CSV, and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/…"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.
https://blog.fossasia.org/open-event-server-export-attendees-as-csv-file/
How to overload __init__ method based on argument type?

A much neater way to get 'alternate constructors' is to use classmethods. For instance:

class MyData:
    def __init__(self, data):
        "Initialize MyData from a sequence"
        self.data = data

    @classmethod
    def fromfilename(cls, filename):
        "Initialize MyData from a file"
        data = open(filename).readlines()
        return cls(data)

    @classmethod
    def fromdict(cls, datadict):
        "Initialize MyData from a dict's items"
        return cls(datadict.items())

>>> MyData([1, 2, 3]).data
[1, 2, 3]
>>> MyData.fromfilename("/tmp/foobar").data
['foo\n', 'bar\n', 'baz\n']
>>> MyData.fromdict({"spam": "ham"}).data
[('spam', 'ham')]

The reason it's neater is that there is no doubt about what type is expected, and you aren't forced to guess at what the caller intended for you to do with the datatype it gave you. The problem with isinstance(x, basestring) is that there is no way for the caller to tell you, for instance, that even though the type is not a basestring, you should treat it as a string (and not another sequence.) And perhaps the caller would like to use the same type for different purposes, sometimes as a single item, and sometimes as a sequence of items. Being explicit takes all doubt away and leads to more robust and clearer code.

Excellent question. I've tackled this problem as well, and while I agree that "factories" (class-method constructors) are a good method, I would like to suggest another, which I've also found very useful:

Here's a sample (this is a read method and not a constructor, but the idea is the same):

def read(self, str=None, filename=None, addr=0):
    """ Read binary data and return a store object. The data store is
        also saved in the internal 'data' attribute.

        The data can either be taken from a string (str argument) or
        a file (provide a filename, which will be read in binary mode).
        If both are provided, the str will be used. If neither is
        provided, an ArgumentError is raised.
    """
    if str is None:
        if filename is None:
            raise ArgumentError('Please supply a string or a filename')
        file = open(filename, 'rb')
        str = file.read()
        file.close()
    ...
    ... # rest of code

The key idea here is using Python's excellent support for named arguments to implement this. Now, if I want to read the data from a file, I say:

obj.read(filename="blob.txt")

And to read it from a string, I say:

obj.read(str="\x34\x55")

This way the user has just a single method to call. Handling it inside, as you saw, is not overly complex.

With python3, you can use multiple dispatch: see "Implementing Multiple Dispatch with Function Annotations", as the Python Cookbook wrote:

import time

class Date(metaclass=MultipleMeta):
    def __init__(self, year:int, month:int, day:int):
        self.year = year
        self.month = month
        self.day = day

    def __init__(self):
        t = time.localtime()
        self.__init__(t.tm_year, t.tm_mon, t.tm_mday)

and it works like:

>>> d = Date(2012, 12, 21)
>>> d.year
2012
>>> e = Date()
>>> e.year
2018
https://codehunter.cc/a/python/how-to-overload-init-method-based-on-argument-type
There is a distinct difference between passing the addresses of objects and passing objects by value when using polymorphism. All the examples you’ve seen here, and virtually all the examples you should see, pass addresses and not values. This is because addresses all have the same size[58], so passing the address of an object of a derived type (which is usually a bigger object) is the same as passing the address of an object of the base type (which is usually a smaller object). As explained before, this is the goal when using polymorphism – code that manipulates a base type can transparently manipulate derived-type objects as well. If you upcast to an object instead of a pointer or reference, something will happen that may surprise you: the object is “sliced” until all that remains is the subobject that corresponds to the destination type of your cast. In the following example you can see what happens when an object is sliced:

//: C15:ObjectSlicing.cpp
#include <iostream>
#include <string>
using namespace std;

class Pet {
  string pname;
public:
  Pet(const string& name) : pname(name) {}
  virtual string name() const { return pname; }
  virtual string description() const {
    return "This is " + pname;
  }
};

class Dog : public Pet {
  string favoriteActivity;
public:
  Dog(const string& name, const string& activity)
    : Pet(name), favoriteActivity(activity) {}
  string description() const {
    return Pet::name() + " likes to " + favoriteActivity;
  }
};

void describe(Pet p) { // Slices the object
  cout << p.description() << endl;
}

int main() {
  Pet p("Alfred");
  Dog d("Fluffy", "sleep");
  describe(p);
  describe(d);
} ///:~

The function describe( ) is passed an object of type Pet by value. It then calls the virtual function description( ) for the Pet object. In main( ), you might expect the first call to produce “This is Alfred,” and the second to produce “Fluffy likes to sleep.” In fact, both calls use the base-class version of description( ). Two things are happening in this program.
First, because describe( ) accepts a Pet object (rather than a pointer or reference), any calls to describe( ) will cause an object the size of Pet to be pushed on the stack and cleaned up after the call. This means that if an object of a class inherited from Pet is passed to describe( ), the compiler accepts it, but it copies only the Pet portion of the object. It slices the derived portion off of the object, like this: Now you may wonder about the virtual function call. Dog::description( ) makes use of portions of both Pet (which still exists) and Dog, which no longer exists because it was sliced off! So what happens when the virtual function is called? You’re saved from disaster because the object is being passed by value. Because of this, the compiler knows the precise type of the object because the derived object has been forced to become a base object. When passing by value, the copy-constructor for a Pet object is used, which initializes the VPTR to the Pet VTABLE and copies only the Pet parts of the object. There’s no explicit copy-constructor here, so the compiler synthesizes one. Under all interpretations, the object truly becomes a Pet during slicing. Object slicing actually removes part of the existing object as it copies it into the new object, rather than simply changing the meaning of an address as when using a pointer or reference. Because of this, upcasting into an object is not done often; in fact, it’s usually something to watch out for and prevent. Note that, in this example, if description( ) were made into a pure virtual function in the base class (which is not unreasonable, since it doesn’t really do anything in the base class), then the compiler would prevent object slicing because that wouldn’t allow you to “create” an object of the base type (which is what happens when you upcast by value). 
This could be the most important value of pure virtual functions: to prevent object slicing by generating a compile-time error message if someone tries to do it.
http://www.linuxtopia.org/online_books/programming_books/thinking_in_c++/Chapter15_017.html
This project was inspired by an obscure piece of VB.NET code I found one day on a forum (and never saw again). This code wrapped MCI and enabled you to select a specific track from a CD and play it. The class design was already established, but the functionality was very limited and flawed in some places. I rewrote the class design in C# and added functionality to give access to all the necessary features you would expect in a comprehensive library. I would like to give credit to the authors of the original VB.NET class, but sadly don't know who they are.

First add a reference to the DeviceController.dll, and then include the DeviceController namespace.

using DeviceController;

Instantiate the CDDevice object like so...

CDDevice CDAudioObject = new CDDevice();

Now we can really get started. To set the drive device to read from, you simply use a statement like this, where 0 is the first device available:

this.CDAudioObject.ChangeDrive( 0 );

To encapsulate the task of detecting drive devices, I've included a property that gets an ArrayList of all the available devices that the class library can use to play CDs. You would use it like this in your interface...

foreach(string Drive in CDAudioObject.Drives)
{
    this.DriveView.Items.Add(Drive);
}

The next thing you will probably want to do is detect how many tracks are on the CD.

int tracks = this.CDAudioObject.TotalTracks;

The class assigns the first track on the CD as the current track, as soon as you select a device with a CD in it. So, as soon as you select a drive, you can start playing. To play, pause or stop a track from playing, simply use the following method calls:

this.CDAudioObject.Play();
this.CDAudioObject.Pause();
this.CDAudioObject.Stop();

Likewise, you can call methods to move forward or backward through the tracks. If you try to move past the limits of the track range of the CD, the method call will simply do nothing.
this.CDAudioObject.Next();
this.CDAudioObject.Previous();

To physically open and close the currently selected device, you can call the Eject() method. However, I couldn't find a way to detect if the drive is already open or closed, so the class library assumes that the drive is closed when it is first instantiated. Therefore, if the drive is open, you have to call this method twice to close the drive, and from then on, the object is synched to the drive.

this.CDAudioObject.Eject();

So that's the core functionality. I've also included a range of properties to access information about the length, current position etc., for CDs and their tracks. It's worth noting that I've doubled up on some of these properties, providing a property that gives a formatted string (such as TrackDuration), and then another property that gives an integer value in seconds (such as TrackDurationSeconds). This is just to make things easy for an interface to display these values without having to worry about formatting, while still providing functionality for other purposes. The following is a list of properties that developers will find useful; the names are fairly self-describing.

int CurrentDrive
int TotalTracks
int CurrentTrack
int TrackDurationSeconds
int TrackPositionSeconds
string TrackDuration
string TrackPosition
ArrayList Drives
PlayStatus Status
bool CDAvailable

I've written a demonstration app that implements pretty much every piece of functionality from the class library. I think it works pretty well, but don't be too critical of it. It's mainly to provide an example of how you could use the different aspects of the class library. Hopefully this class can be of use to developers, if only as a base to implement functionality on top of. It'll probably be redundant in a couple of years when Longhorn gets released anyway. I was looking at the GAC assemblies that are packaged with LongHorn the other day, and there are a range of assemblies dealing solely with multimedia and audio.
Something that's been missing from the .NET framework since it was released. Can't wait. :-)
http://www.codeproject.com/KB/audio-video/CDPlayer.aspx
The displays:

Hi, is there any way to enable it using a batch file? I'm creating an install batch file, which installs SQL 2008 and .NET 3.5 on my servers. But some of the servers are under Windows 2008, and installing .NET fails on them. What I have to do is what you said above, but I should do it using a batch file. Would you please tell me how to do that?

using System;
using System.Diagnostics;

namespace PrerequisiteHelper
{
    class Program
    {
        static void Main(string[] args)
        {
            ProcessStartInfo startInfo = new ProcessStartInfo();
            startInfo.FileName = "powershell.exe";
            startInfo.Arguments = "Import-Module ServerManager ; Add-WindowsFeature as-net-framework";
            startInfo.WindowStyle = ProcessWindowStyle.Hidden;
            startInfo.UseShellExecute = true;
            Process p = new Process();
            p.StartInfo = startInfo;
            p.Start();
            p.WaitForExit();
        }
    }
}

source: community.flexerasoftware.com/…/index.php

I've solved the problem, thanks a lot!

Thanks for this little helpful post!

You can uninstall .NET 3.0 and 2.0 and install it.

Thanks – your first sentence answered my question after looking through various forums where others were also confused by inconsistent naming conventions for .NET; many replies were from others thinking they knew the answer and were just adding confusion: The .NET Framework 3.5 SP1 (also referred to as .NET Framework 3.5.1).

Thanks

Thanks – funny how such a 'simple' thing can stop the whole show!

I am also facing the same problem, and when I check in Server Manager it shows 3.5 is already installed. I ran your command from the command shell but the problem remains the same. Please help me…

One single line can do it, and you can put it in your batch file:

powershell "Import-Module ServerManager; Add-WindowsFeature as-net-framework"

Thanks for the detailed info… I solved the problem.

Thanks a lot, it was very useful.
Greetings from Mexico

Installing a Windows language pack before installing the .NET Framework 3.5 will cause the .NET Framework 3.5 installation to fail. Install the .NET Framework 3.5 before installing any Windows language packs.

Hi, can I update .NET Framework 3.5 to 4.5? If anyone knows, please help me.

I've solved the problem, thanks a lot!

What does .NET Framework 3.5 SP1 do? Is it absolutely necessary?

Hi, I have tried both suggested ways, but: when I try through Server Manager, at the end of the installation process, the system prompts that it has been impossible to install all 3 services/roles required. If I try with PowerShell, I cannot import the ServerManager module. Any clue on this? Thank you.

We upgraded the .NET Framework 3.5 to 4.0 on Exchange Server 2010. Does it impact the Edge server 2010? Does the 2010 Edge server support .NET Framework 4.0?

Hi, when I try to install SQL Server 2008, a message appears that .NET Framework 3.5 installation failed and it is needed, and the problem is not solved. Any idea? Thank you!

I think I am having the same problem as Mohsen (June 28 post). I am trying to add SQL Server Reporting Services, and when I try to add features, the installer complains that .NET 3.5.1 installation failed and it is required for SQL Server. When I look, I verify that .NET 3.5 is installed, but I have searched all over and can not find nor figure out how to upgrade .NET 3.5 to .NET 3.5.1 on Windows Server 2012 R2. The only option I am presented with in Server Manager is .NET 3.5, and it is installed. Can anybody point me in the right direction? I think once I get the .NET framework SNAFU settled, the SQL Server install will probably work.
https://blogs.msdn.microsoft.com/sqlblog/2010/01/08/how-to-installenable-net-3-5-sp1-on-windows-server-2008-r2-for-sql-server-2008-and-sql-server-2008-r2/
TIMKEN 19150-903A2 Information | 19150-903A2 bearing in Sudan

- Timken 19150 Tapered Roller Bearing, Single Cone, Standard Tolerance, Straight Bore, Steel, Inch, 1.5000" ID, 0.6500" Width (Amazon.com: Industrial & Scientific)
- M88048 Axle Bearing | Leader Bearing: Timken Axle Bearings are designed to help the wheels of the vehicle spin smoothly
- Timken Company (/wiki/Timken_Company): The Timken Company is a global manufacturer of bearings, and in 1899 incorporated as The Timken Roller Bearing Axle Company in St. Louis.
- What's New: Boston Gear Bear-N-Bronz Plain Cylindrical Sleeve Bearing, SAE 660 Cast Bronze, Inch; NU205M Cylindrical Roller Bearing
- Timken 513188 Lowest Price - Free Shipping; Cheapest Timken 513188 For Sale
- Timken Roller Bearings On Sale: Find our Lowest Possible Price! Search for Timken Roller Bearings Prices.
- import ina rt622 bearing | Product: Leading bearing of high quality, including INA RT622 McGill CAMROL Cam Followers fag nsk skf timken ina koyo bearing; Open, INA TCJ35-N TIMKEN 19150-903A2
- Timken CAD: Browse All Product Types in The Timken Company catalog including Housed Unit Bearings, Spherical Roller Bearings, Cylindrical Roller Bearings, Tapered Roller Bearings
- Timken - T2520-903A2 - Motion Industries: Buy Timken Tapered Roller Thrust Bearings T2520-903A2 direct from Motion Industries. Your proven service leader with reliable delivery since 1972.
- TIMKEN 19150-903A2 | Leader Bearing: Bearings > Roller Bearings > Tapered Roller Bearing Assemblies > 19150-903A2; Bearing, Tapered, Standard Precision, Basic Number 19150
- TIMKEN T135-903A2: Leader Singapore is an authorized dealer in interchange of all bearing types, including TIMKEN T135-903A2 McGill CAMROL Cam Followers fag nsk skf timken ina koyo bearing
- T441 903a2 | Timken | Dalton Bearing: PHONE: 706.226.2022; FAX: 706.226.2032; Products, Brands, Services, Worldwide, Industries, News, Contact Us, Request Quote, Browse By Manufacturer
- Energy efficient timken 3mmvc9107hxvvdumfs934 bearing: timken 3mmvc9107hxvvdumfs934, timken 19150-903a2, skf 205szzc, timken t201, koyo jr25x30x26,5, skf 6009 2rs deep groove ball; NTN 566 | Leader Bearing
- Timken Company - Bearings & Mechanical Power Transmissions: The Timken Company engineers and manufactures bearings and mechanical power transmission components. We use our knowledge to make industries across the globe work better.
- timken hm256846td-903a2 bearing high quality in South Africa: all bearing types, including TIMKEN HM256846TD-3 McGill CAMROL Cam Followers fag nsk skf timken ina koyo bearing
- Timken Engineered Bearings | The Timken Company: Timken® engineered bearings deliver strong performance, consistently and reliably. Our products include tapered, spherical, cylindrical, thrust, ball, plain
- Timken – Browse Results Instantly (find.com/Timken): Search for Timken. Find Quick Results and Explore Answers Now!

Related:
- NTN F619/2 Material | F619/2 bearing in Mumbai
- SKF K 25X31X17 Wholesalers | K 25X31X17 bearing in Ecuador
- FAG 51210 Outside Diameter | 51210 bearing in Dominique
- NSK 23248CC/W33 Seals | 23248CC/W33 bearing in Tamilnadu
- NSK 6013-Z Seals | 6013-Z bearing in Bermuda
- SKF C 3076 KM OH 3076 H Manufacturers | C 3076 KM OH 3076 H bearing in Mozambique
http://welcomehomewesley.org/?id=6352&bearing-type=TIMKEN-19150-903A2-Bearing
Record trees of clusters of data, for fast neighbour finding. More... #include <mbl_cluster_tree.h> Record trees of clusters of data, for fast neighbour finding. Used to record clusters of objects of type T. D::d(T t1, T t2) is a measure of distance between two objects. It must obey the triangle inequality: D::d(t1,t2)<=D::d(t1,t3)+D::d(t2,t3). The class is designed to allow fast location of the nearest example in a set of objects to a given new object. It represents the data as a set of key point positions, together with a list of indices into the external data for each cluster. Each cluster is in turn assigned to a larger cluster at a higher level. Thus to find the nearest neighbour, we first check for proximity to the keypoints, and only consider objects in the clusters which are sufficiently close. Definition at line 27 of file mbl_cluster_tree.h. Default constructor. Definition at line 20 of file mbl_cluster_tree.txx. Append new object with index i and assign it to clusters. Assumes that new object data()[i] is available. Deduce which cluster it belongs to and add it. Create new clusters if further than max_r() from any. Assumes that new object data()[i] is available. Deduce which cluster it belongs to and add it. Create new cluster if further than max_r() from any. Definition at line 109 of file mbl_cluster_tree.txx. Load class from binary file stream. Definition at line 250 of file mbl_cluster_tree.txx. Save class to binary file stream. Definition at line 238 of file mbl_cluster_tree.txx. List of objects. Definition at line 67 of file mbl_cluster_tree.h. Empty clusters. Definition at line 27 of file mbl_cluster_tree.txx. Return index of nearest object in data() to t. Nearest object in data() to t is given by data()[nearest(t,d)]; the distance to the point is d. Definition at line 79 of file mbl_cluster_tree.txx. Print class to os. Print summary information. Definition at line 222 of file mbl_cluster_tree.txx. Print ancestry of every element.
Definition at line 205 of file mbl_cluster_tree.txx. Add an extra element to data(). Definition at line 68 of file mbl_cluster_tree.txx. Copy in data. Define external data array (pointer retained). Empty existing clusters, then process every element of data to create clusters, by calling add_object() Definition at line 57 of file mbl_cluster_tree.txx. Define number of levels and max radius of clusters at each level. Definition at line 38 of file mbl_cluster_tree.txx. Version number for I/O. Definition at line 232 of file mbl_cluster_tree.txx. Clusters. Definition at line 34 of file mbl_cluster_tree.h. Storage for objects. Definition at line 31 of file mbl_cluster_tree.h. Indicate which cluster each object is assigned to. parent_[0][i] indicates which cluster in cluster_[0] data_[i] is assigned to. parent_[j][i] (j>0) indicates which cluster in level above cluster_[j-1].p()[i] is assigned to. Definition at line 41 of file mbl_cluster_tree.h.
http://public.kitware.com/vxl/doc/release/contrib/mul/mbl/html/classmbl__cluster__tree.html
Timeline. 12/27/11: - 23:16 Ticket #32405 (mercurial: update to 2.0.1) closed by - fixed: Maintainer timeout. Committed in r88307. - 23:15 Changeset [88307] by - Fix #32405: mercurial: update to 2.0.1. - 22:26 Ticket #32676 (ruby19 installation problem on mac osx 10.6.8 (architecture mismatch)) closed by - invalid - 20:57 Changeset [88306] by - tripwire: use destroot.keepdirs instead of faking it - 20:47 Changeset [88305] by - tripwire: use notes - 20:46 Changeset [88304] by - tripwire: whitespace / formatting changes / add modeline - 20:37 Ticket #29507 (additional dependencies for gdk-pixbuf2: TIFF, jasper) closed by - duplicate: See #31678. - 20:02 Ticket #32685 (RealName account attribute missing from macports account) created by - If you run /System/Library/CoreServices/Directory\ Utility and look at the … - 18:38 Changeset [88303] by - py-bpython: unified ports, updated to 0.10.1, added bpython_select port - 16:32 Ticket #32684 (tripwire: error: use of undeclared identifier 'Equal') created by - Version 2.4.1.2 Log File.… - 15:29 Ticket #32683 (R-framework port version bump) created by - The attached patch updates the port to version 2.14.1 and adds a debug … - 15:20 Ticket #32682 (glib2-devel: Update to 2.31.8) created by - glib2-devel is out-of-date, the latest version is 2.31.6 and the current … - 13:48 Ticket #32681 (update: libtorrent-devel 0.13.0 & rtorrent-devel 0.9.0) created by - New unstable versions of libTorrent & rTorrent: … - 13:38 Ticket #32680 (gdk-pixbuf2 configure error: Checks for TIFF loader failed) created by - i'd like to install ktorrent4 with port. but i get an error of … - 13:27 Ticket #32679 (vobcopy: build for the right architectures) created by - vobcopy doesn't build for the right architectures: […] The attached … - 12:49 Ticket #32678 (py27-pylibmc @1.2.2_0 Symbol not found: _strndup) created by - OSX 10.6.8, Xcode 4.0.2 Package builds without error but fails at runtime: … - 10:29 Changeset [88302] by - Version bump to 3.1.5. 
- 10:23 Ticket #32513 (Updated swi-prolog-devel portfile for version 5.11.34) closed by - fixed: Thanks; r88301.
- 10:22 Changeset [88301] by - swi-prolog-devel: maintainer update to 5.11.35; see #32513
- 10:20 Ticket #32677 (octave-ann @ 1.0.2 compile error - build failure) created by - perhaps a namespace issue? see main.log
- 10:16 Ticket #32558 (mkvtoolnix: error: ‘boost::BOOST_FOREACH’ has not been declared) reopened by - Because of that fix, it is no longer possible to have a workaround common …
- 09:09 Ticket #32676 (ruby19 installation problem on mac osx 10.6.8 (architecture mismatch)) created by - […]
- 08:11 Changeset [88300] by - sysutils/bash-completion: Ignore flags for the current action in the …
- 07:59 Changeset [88299] by - version bump to 2.8.12.1, changing to python25 group
- 07:41 Changeset [88298] by - version bump to 0.39.1, fix livecheck and distname, add a note on running …
- 07:28 Ticket #32675 (liblastfm: library install_names reference paths in the destroot) created by - What i do is: […] Then i try to fire up Amarok without luck. Following …
- 07:08 Changeset [88297] by - {p5,py,rb,rb19}-mecab: update to 0.99; fix master_sites and livecheck.
- 06:30 Ticket #32643 (py27-hgsubversion installs files with incorrect permissions) closed by - fixed: committed in r88296. Thanks for testing.
- 06:29 Changeset [88296] by - ticket:32643 - files in hgsubversion-1.3-py2.7.egg-info have to be world …
- 04:45 Ticket #32674 (mariadb: build fails with clang) created by - The current port of mariadb (5.2.9) is completely broken, as it doesn't …
- 03:52 Ticket #32673 (php5-uploadprogress: Update to 1.0.3.1) created by - Update php5-uploadprogress to version 1.0.3.1 Tested and working.
- 02:50 Changeset [88295] by - kyotocabinet: update to version 1.2.72
- 02:42 Changeset [88294] by - py-msgpack: update to version 0.1.11
- 00:05 Ticket #32672 (vobcopy: build fails when using clang and running 64-bit kernel) created by - I'm on OS X Lion 10.7.2 with Xcode 4.2.1, i7 processor. (Apple clang …

12/26/11:

- 20:48 Ticket #32671 (boost @1.48.0 patch for handling Python bytes objects) created by - Would it be possible to add this patch? …
- 20:29 Ticket #32670 (mapnik2 @2.0.0 - New port submission) created by - The Mapnik2 API [ breaks …
- 18:59 Ticket #32669 (py-numpy, py-scipy: add gcc46 variant) created by - I've made the straightforward changes to enable a gcc46 variant for …
- 18:52 Ticket #32668 (fftw-3: add gcc46 variant) created by - I've made the straightforward changes to enable a gcc46 variant for …
- 16:04 Ticket #32667 (linuxdoc-tools: sgmlsasp isn't installed for the correct architecture) created by - […]
- 16:02 Ticket #32663 (linuxdoc-tools: expand: stdin: Illegal byte sequence) closed by - fixed: Fixed incorrectly in r88292 and correctly in r88293.
- 16:02 Changeset [88293] by - linuxdoc-tools: oh, just use C locale; it exists, for one thing; see …
- 15:59 Changeset [88292] by - linuxdoc-tools: override MacPorts' UTF-8 locale to fix destroot failure; …
- 15:58 Changeset [88291] by - GraphicsMagick: update to 1.3.13, use xz distfile
- 15:44 Changeset [88290] by - whois: update to 5.0.14
- 15:42 Ticket #32666 (py-zdaemon: Dependency 'py24-zdaemon' not found) created by - […]
- 13:42 Changeset [88289] by - graphviz-devel, graphviz-gui-devel, gvedit-devel: update to …
- 13:11 Ticket #32631 (gtkwave: update to 3.3.28) closed by - fixed: r88288.
- 13:11 Changeset [88288] by - gtkwave: update to 3.3.28 (fixes build with clang), switch to sourceforge …
- 13:10 Ticket #26406 (.packlist files contain each line twice) closed by - fixed: Updated perl5-1.0 portgroup in r88287.
- 13:09 Changeset [88287] by - perl5-1.0.tcl portgroup: don't create duplicate lines in .packlist files …
- 12:54 Ticket #31748 (gtkwave @ 3.3.15: fails to build on Mac OS X Lion with Xcode 4.2 (clang) ...) closed by - duplicate
- 12:09 Changeset [88286] by - gcc46: *actually* remove gcj-mp-4.6 from the select file; because the port …
- 12:06 Ticket #30357 (gcc46 select error if no gcj installed) closed by - fixed: Actually, gcc46 doesn't even have a variant to build java anymore; I don't …
- 12:03 Changeset [88285] by - gcc46: remove gcj-mp-4.6 from the select file; because the port doesn't …
- 11:51 Changeset [88284] by - linuxdoc-tools: simplify getting archflags
- 11:39 Ticket #32659 (glib2 build fails when system python has been replaced with python 3) closed by - invalid: How / why did you replace Mac OS X's python with version 3.2.2? That's …
- 11:33 Ticket #32665 (fuse4x: bump to 0.8.14) closed by - fixed: Thanks! r88283.
- 11:32 Changeset [88283] by - fuse4x*: update to v0.8.14 (#32665)
- 09:46 Ticket #32665 (fuse4x: bump to 0.8.14) created by - This release fixes 2 deadlocks in 64bit code
- 09:19 Changeset [88282] by - open_jtalk: update to 1.05.
- 09:19 Changeset [88281] by - hts_engine_API: update to 1.06.
- 08:57 Changeset [88280] by - julius: update to 4.2.1.
- 08:30 Changeset [88279] by - MMDAgent: update to 1.2.
- 08:27 Changeset [88278] by - mecab, mecab-{base,sjis,utf8}: update to 0.99; modify license; install doc …
- 07:57 Ticket #32664 (php5-unit @3.5.15: Failed opening required 'PHP/CodeCoverage/Filter.php') created by - Hi * When trying the shell command "phpunit" I get an error (see below). …
- 07:16 Ticket #32663 (linuxdoc-tools: expand: stdin: Illegal byte sequence) created by - Whenever trying to install linuxdoc-tools port I receive errors on both …
- 01:07 Ticket #32662 (netbsd-iscsi-initiator) created by - 1. port netbsd-iscsi-initiator won't compile with clang in Xcode 4.2. As …

12/25/11:

- 16:02 Ticket #32661 (fife: python_select: no such file or directory) created by - using latest versions of macports and Xcode While building fife, get this …
- 11:35 Ticket #32660 (boost@1.48.0_3+python24 build fails) created by - I can successfully build the +python25, +python26, and +python27 …
- 11:21 Ticket #32350 (dmapd @0.0 - build fails) closed by - fixed: r88277
- 11:20 Changeset [88277] by - libdmapsharing: Add dependency on gst-plugins-base. (#32350)
- 10:11 Ticket #32659 (glib2 build fails when system python has been replaced with python 3) created by - When I try to install ffmpeg, glib2 fails to install with the following …
- 08:46 Ticket #32655 (py26-zodb depends on nonexistent py26-zdaemon) closed by - fixed: Fixed in r88276
- 08:46 Changeset [88276] by - py-zdaemon: unified portfile
- 08:37 Ticket #32658 (OpenOCD update to 0.5.0) created by - Dear all, openocd version 0.5.0 is released and I tested it and updated …
- 05:00 Ticket #32657 (p5-redis @1.904 new submission) created by - perl binding for Redis database
- 04:34 Ticket #32656 (kdepim4 @4.7.4 akonadisender.cpp build error) created by - […] (this is after forcing configure.compiler=llvm-gcc-4.2, which …

12/24/11:

- 23:48 Changeset [88275] by - ejabberd: update to version 2.1.10
- 22:09 Ticket #32654 (p5-datemanip: missing dependency on p5-yaml-syck) closed by - fixed: r88274
- 22:08 Changeset [88274] by - p5-datemanip: add missing dependency on p5-yaml-syck (#32654)
- 19:32 Ticket #32655 (py26-zodb depends on nonexistent py26-zdaemon) created by - i get the following output: […]
- 18:56 Changeset [88273] by - fuse4x-kext: fix build architecture on PPC machines * only build …
- 15:14 Changeset [88272] by - cross/arm-none-eabi-gcc: Update to gcc 4.6.1, newlib 1.20.0
- 15:01 Ticket #32654 (p5-datemanip: missing dependency on p5-yaml-syck) created by - Fetch quotes results in "Unknown error" after update to version 2.4.8. …
- 12:19 Ticket #32653 ([mysql5-server-devel] [5.5.2] [update package to 5.5.19]) closed by - duplicate: Duplicate of #25751.
- 10:49 Ticket #32653 ([mysql5-server-devel] [5.5.2] [update package to 5.5.19]) created by - The package is quite old (5.5.2). The current mysql-server for 5.5 branch …
- 07:55 Ticket #32652 (gvfs cannot open remote urls) created by - This may be related more deeply to Gio but gvfs cannot open remote uri's. …
- 02:18 Changeset [88271] by - groovy-devel: version upgrade to 2.0.0-beta-2
- 02:05 Changeset [88270] by - groovy: version upgrade to 1.8.5
- 00:48 Changeset [88269] by - kdegraphics4 subports: upgrade to 4.7.4
- 00:40 Changeset [88268] by - kate: upgrade to 4.7.4
- 00:40 Ticket #32651 (xfce4-panel @4.6.4: unknown type name 'G_CONST_RETURN') created by - hi, i am having troubles installing xfce on osx lion. please see attached …
- 00:33 Changeset [88267] by - konsole: upgrade to 4.7.4
- 00:32 Changeset [88266] by - kstars: upgrade to 4.7.4
- 00:12 Ticket #32650 (xml2rfc 1.36 available) created by - xml2rfc 1.36 is available upstream. It's a …

Note: See TracTimeline for information about the timeline view.
https://trac.macports.org/timeline?from=2011-12-28T04%3A20%3A34-0800&precision=second
This Notebook will walk you through the basic process of how to import data from Text files (.txt) and Excel files (.xls or .xlsx). In order to complete this activity, you need to first upload your data set (e.g. Sample Data.txt or Sample Data.xlsx) into your Callysto Hub (hub.callysto.ca).

First, we need to import "pandas", a library that contains many useful tools for working with data. Pandas is short for "Python Data Analysis Library". You only need to include this line once, before the rest of your code.

import pandas as pd

Next, use the line below to read the file and assign it the variable name "dataset". This name can be anything you choose, and can be used to refer to the data from now on. This code assumes the columns in your data set are separated by a tab character (known as a "tab-delimited file"). If your columns are separated by commas, you will need to replace sep = "\t" with sep = ","

dataset = pd.read_csv("Sample Data.txt", sep = "\t")

Now you can simply use the variable name to display your data:

dataset

Now that our data is loaded into the notebook, we can perform simple calculations. For example, if we wanted to find the maximum of the numbers in the second column, we do the following:

dataset["Temperature"].max()
26.1

And if we wanted to figure out the average temperature during this 5-year period, we do the following:

dataset["Temperature"].mean()
25.4

import sys
!{sys.executable} -m pip install xlrd
Requirement already satisfied: xlrd in /srv/conda/lib/python3.7/site-packages (1.2.0)

exceldata = pd.read_excel("Sample Data.xlsx")
#Displaying the data:
exceldata

Now that the Excel file is loaded, we can perform calculations using it just like before. This time, let's find the maximum and average values in the first column.
exceldata["Year"].max()
2018

exceldata["Year"].mean()
2016.0

import matplotlib.pyplot as plt

plt.bar(exceldata["Year"], exceldata["Temperature"])
plt.xlabel('Year')
plt.ylabel('Global Temperature')
plt.title('Evidence of Climate Change')
plt.show()

plt.plot(exceldata["Year"], exceldata["Temperature"], marker='o')
plt.xlabel('Year')
plt.ylabel('Global Temperature')
plt.title('Evidence of Climate Change')
plt.show()
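To make the delimiter point concrete without needing the sample files, here is a small standard-library sketch of what sep = "\t" means and what .max() and .mean() compute. The data values below are invented for illustration (chosen so the results match the outputs shown above); they are not the actual contents of Sample Data.txt.

```python
import csv
import io
from statistics import mean

# Hypothetical stand-in for "Sample Data.txt": tab-delimited, one header row.
raw = "Year\tTemperature\n2014\t24.8\n2015\t25.1\n2016\t25.4\n2017\t25.6\n2018\t26.1\n"

# This is what sep = "\t" tells pandas: split each line on tab characters.
rows = list(csv.reader(io.StringIO(raw), delimiter="\t"))
header, data = rows[0], rows[1:]

# Pull out the "Temperature" column as floats.
temps = [float(r[header.index("Temperature")]) for r in data]

print(max(temps))             # 26.1, what dataset["Temperature"].max() computes
print(round(mean(temps), 1))  # 25.4, what dataset["Temperature"].mean() computes
```

Swapping delimiter="\t" for delimiter="," is exactly the sep = "\t" versus sep = "," choice described above.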
https://nbviewer.jupyter.org/github/richardhoshino/callysto/blob/master/Notebook%20%2304%20-%20Importing%20Data%20into%20a%20Jupyter%20Notebook.ipynb
A library to produce ansi color output and colored highlighting and diffing

Project description

Python version support: CPython 2.6, 2.7, 3.2, 3.3 and PyPy.

Introduction

ansicolor is a library that makes it easy to use ansi color markup in command line programs.

Installation

$ pip install ansicolor

Getting started

To highlight using colors:

from ansicolor import cyan
from ansicolor import green
from ansicolor import red

print("Let's try two colors: %s and %s!" % (red("red"), green("green")))
print("It's also easy to produce text in %s," % (red("bold", bold=True)))
print("...%s," % (green("reverse", reverse=True)))
print("...and %s." % (cyan("bold and reverse", bold=True, reverse=True)))

This will emit ansi escapes into the string: one when starting a color, another to reset the color back to the default:

>>> from ansicolor import green
>>> green("green")
'\x1b[0;0;32mgreen\x1b[0;0m'

If I want to be able to pass a color as an argument I can also use the colorize function:

from ansicolor import Colors
from ansicolor import colorize

print(colorize("I'm blue", Colors.Blue))

I can also apply color on a portion of a string:

from ansicolor import Colors
from ansicolor import wrap_string

print(wrap_string("I'm blue, said the policeman.", 8, Colors.Blue))

Sometimes I may have a string that contains markup and I'll want to do something with it that concerns only the text, so I can strip the markup:

>>> from ansicolor import red
>>> from ansicolor import strip_escapes
>>> from ansicolor import yellow
>>> message = "My favorite colors are %s and %s" % (yellow("yellow"), red("red"))
>>> print("The length of this string is not: %d" % len(message))
The length of this string is not: 67
>>> print("The length of this string is: %d" % len(strip_escapes(message)))
The length of this string is: 37

Going further

Take a look at the demos to see what's possible.
$ python -m ansicolor.demos --color
$ python -m ansicolor.demos --highlight
$ python -m ansicolor.demos --diff

Also see the API documentation.
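The escape sequences shown above follow a simple pattern, which also explains why stripping them with a regular expression works. The sketch below is not the ansicolor implementation, just a minimal standard-library illustration of the same idea; the function names mirror the library's API for readability.

```python
import re

# The escape shape ansicolor emits, as shown in the doctest above:
# ESC[<attrs>m starts a style, ESC[0;0m resets it.
GREEN = "\x1b[0;0;32m"
RESET = "\x1b[0;0m"

def green(text):
    # Wrap text in a start-color escape and a reset escape.
    return GREEN + text + RESET

# A regex matching any "ESC[...m" style escape, similar in spirit
# to what a strip_escapes() helper needs to remove.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_escapes(text):
    return ANSI_RE.sub("", text)

message = "the color %s" % green("green")
print(len(message))                 # counts the invisible escape codes too
print(len(strip_escapes(message)))  # counts only the visible characters
```

This is why len(message) over-reports above: the escapes are real characters in the string even though the terminal never displays them.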
https://pypi.org/project/ansicolor/0.1.4/
Hi,

What is a Foundation Production and what does it do? We are currently on HealthShare Health Connect 15.03 and we are starting the process of moving to HealthShare Health Connect 2019.1. The 2019.1 Installation Guide is pretty clear that it is essential, but I'm having trouble working out exactly what it does.

Leading on from this is what should I call it? From the installation guide:

5. For Local Name, enter the name that will identify the Foundation production. This name will become the namespace that contains the class definitions for the production

So the name I give it will be the name of the production and the namespace. The name in the guide is FHIR234 - which makes me wonder if the Foundation Production is ONLY for FHIR interfaces? (should I just call it FHIR?)

Any advice or guidance appreciated.

Kind regards,
Stephen

PS: Our current naming strategy has a separate namespace for HL7v2 and Document interfaces (no FHIR yet), for historical reasons called PRODUCTION and DOCUMENTS, with our test server replicating this naming strategy exactly (so we have a PRODUCTION namespace on both our test and live servers).

The installation guide:

Install and activate a Foundation production, which is critical to the functionality of Health Connect.

- Update: Creating additional namespaces in 2019.1 causes foundation productions to be created automatically (with Ens.Activity.Operation.Local operation). What 'useful business hosts' does it include?
https://community.intersystems.com/post/what-foundation-production
- Download the template and extract it from this zip file.
- Add it to your solution and give it the same name as your EDMX file (except with a .tt extension rather than .edmx).
- Set the Build Action on the .designer.cs file to None (since you will now use the code generated by the T4 template rather than what is generated by the EDM designer).

The output now is mostly the same as what the shipping codegen creates except that:

- I don't have a VB version yet. Sorry VB fans. This goes on the to-do list, and it really shouldn't be that hard.
- I added a bunch of using statements at the top of the file so that the code throughout the file can be shorter and easier to read--this means that if you have a model with an entity type whose name conflicts with a type in one of the included namespaces, then you will get a compiler error--you could always modify the template to fully qualify things instead.
- I added a series of #region directives so that you can nicely collapse things and navigate the generated code.
- There are a few things that I haven't implemented yet (see the to-do comments at the top of the template).

– Danny

Hi Danny,.

– Danny

In my MSDN Magazine article on SOA Data Access I recommend exposing Data Transfer Objects (DTOs) from
https://blogs.msdn.microsoft.com/dsimmons/2008/10/27/using-t4-templates-to-generate-ef-classes/
There are several kinds of plugins:

To add your own Gauge you have two main solutions: What is GaugeFactory designed for? Imagine a custom gauge is parameterized. You'll surely want to register it several times with different parameters. If you use the Gauge SPI you'll need to do N implementations (which makes the parameters useless). With GaugeFactory you just need to return the built instances:

public class MyGaugeFactory implements GaugeFactory {
    @Override
    public Gauge[] gauges() {
        return new Gauge[] {
            new MyGauge(1),
            new MyGauge(2),
        };
    }
}

To extend the reporting GUI just write your own org.apache.commons.monitoring.reporting.web.plugin.Plugin. Here too it relies on the Java ServiceLoader (SPI) mechanism. Here is the Plugin interface:

public interface Plugin {
    String name();
    Class<?> endpoints();
    String mapping();
}

A plugin has basically a name (what will identify it in the webapp and in the GUI - it will be the name of the plugin tab), a mapping, ie which base subcontext it will use for its own pages (for instance /jmx, /myplugin …) and a class representing endpoints. To make it more concrete we'll use a sample (the standard Hello World). So first we define our HelloPlugin:

public class HelloPlugin implements Plugin {
    public String name() {
        return "Hello";
    }

    public Class<?> endpoints() {
        return HelloEndpoints.class;
    }

    public String mapping() {
        return "/hello";
    }
}

The HelloEndpoints class defines all the urls accessible for the hello plugin. It uses the org.apache.commons.monitoring.reporting.web.handler.api.Regex annotation:

public class HelloEndpoints {
    @Regex // will match "/hello"
    public Template home() {
        return new Template("hello/home.vm",
            new MapBuilder<String, Object>().set("name", "world").build());
    }

    @Regex("/world/([0-9]*)/([0-9]*)") // will match "/hello/world/1/2"
    public String jsonWorld(final long start, final long end) {
        return "{ \"name\": \"world\", \"start\":\"" + start + "\",\"end\":\"" + end + "\"}";
    }
}

The first home method uses a template.
The GUI relies on Velocity, and HTML templates need to be in the classloader in a templates directory. So basically the home method will search for the templates/hello/home.vm Velocity template. It is only the “main” part of the GUI (the tabs are automatically added). Twitter Bootstrap (2.3.2) and jQuery are available. Here is a sample:

<h1>Hello</h1>
<div>
    Welcome to $name
</div>

If you need resources put them in the classloader too, in a “resources” folder. Note: if you want to do links in the template you can use the $mapping variable as the base context of your link. For instance: <a href="$mapping/foo">Foo</a>. If you want to filter some resources you can add a custom endpoint:

@Regex("/resources/myresource.css")
public void filterCss(final TemplateHelper helper) {
    helper.renderPlain("/resources/myresource.css");
}

@Regex allows you to get injected path segments, here is what is handled: For instance @Regex("/operation/([^/]*)/([^/]*)/(.*)") will match foo(String, String, String[]). If the url is /operation/a/b/c/d/e you’ll get foo("a", "b", { "c", "d", "e" }).
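As a rough illustration of the segment-to-parameter mapping described above, here is a small Python sketch (not the Commons Monitoring dispatch code): the URL is matched against the pattern, captured groups become arguments, and the final catch-all group is split into the array parameter.

```python
import re

# Sketch of @Regex-style dispatch: match the URL against the pattern
# and pass the captured groups to the endpoint method.
def dispatch(pattern, url, func):
    m = re.fullmatch(pattern, url)
    if m is None:
        return None  # no endpoint matched this URL
    return func(*m.groups())

# Models foo(String, String, String[]): the last group captures the
# remaining path, which is split into the array argument.
def foo(a, b, rest):
    return (a, b, rest.split("/"))

result = dispatch(r"/operation/([^/]*)/([^/]*)/(.*)", "/operation/a/b/c/d/e", foo)
print(result)  # ('a', 'b', ['c', 'd', 'e'])
```

This reproduces the foo("a", "b", { "c", "d", "e" }) example from the docs above.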
https://commons.apache.org/sandbox/commons-monitoring/plugins.html
Created attachment 22166 [details] Demonstrate view inconsistency. In a four node cluster, using NonBlockingCoordinator, if two nodes fail at the same time, the remaining two nodes get different views and never converge. When the other nodes restart, they never install a view at all. I've attached the relevant demo code. Run it on 4 machines, wait for view installation, then CTRL-C two of them. The other two will never print the same UniqueId. Start a new node, view is always null. Immediately after the two node failure, one of the surviving nodes issues this stack trace; WARN - Member send is failing for:tcp://{-64, -88, -91, 34}:4000 ; Setting to su spect and retrying. ERROR - Error processing coordination message. Could be fatal. org.apache.catalina.tribes.ChannelException: Send failed, attempt:2 max:1; Fault y members:tcp://{-64, -88, -91, 34}:4000; at org.apache.catalina.tribes.transport.nio.ParallelNioSender.doLoop(Par allelNioSender.java:172) at org.apache.catalina.tribes.transport.nio.ParallelNioSender.sendMessag e(ParallelNioSender.java:78) at org.apache.catalina.tribes.transport.nio.PooledParallelSender.sendMes sage(PooledParallelSender.java:53) at org.apache.catalina.tribes.transport.ReplicationTransmitter.sendMessa ge(ReplicationTransmitter.java:80) at org.apache.catalina.tribes.group.ChannelCoordinator.sendMessage(Chann elCoordinator.java:78) at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(C hannelInterceptorBase.java:75) at org.apache.catalina.tribes.group.interceptors.NonBlockingCoordinator. handleMyToken(NonBlockingCoordina So, I understand this better now and have a proposed fix. Here's the procedure to reproduce the problem. 1) start four nodes. 2) see a view installation with four members. 3) kill two non-coordinator nodes in quick succession (a second or two) From this point onwards, until it is killed, the coordinator is oscillating between two states. 
It recognizes that the state is inconsistent as it receives heartbeats from the other node and the UniqueIds of its view do not match the coordinator's. It then forces an election, which fails as it believes an election is already running. This cycle repeats forever. When the first node crashed, memberDisappeared() is called on the coordinator. It then starts sending messages as part of an election. A method throws here with a connection timeout (it was attempting to send to the second node, which just crashed). It never handles this case, leaving the 'election in progress' flag on. Forever. Clearing suggestedViewId when the ChannelException is thrown is the fix;

@@ -500,6 +500,7 @@ public class NonBlockingCoordinator extends ChannelInterceptorBase {
             processCoordMessage(cmsg, msg.getAddress());
         }catch ( ChannelException x ) {
             log.error("Error processing coordination message. Could be fatal.",x);
+            suggestedviewId = null;
         }

this probably should only be done under some circumstances, so this isn't obviously a safe patch. Hopefully the author will have a better fix!

hi Rob, the non blocking coordinator is still work in progress. It's one piece of code that got a bit over complicated once I started developing it, and I think it can be greatly simplified. I will take a look at this beginning of next week

Filip

I made my own coordinator which simply uses a sorted list of getMembers() + getLocalMember(), though it only installs views if the membership remains unchanged for a few seconds to avoid a little storm of view changes. Obviously it's a much weaker form of view management than you're attempting, but it's probably good enough for my purposes. Let me know when you get to this, I can test it out.

Created attachment 22179 [details] An alternative coordinator that makes local decisions based on membership service

Happy to release this class under the Apache License. Let me know what you need from me.
Just submit a CLA and email a scanned copy to secretary [at) apache [dot] org

For a contribution of a single class, the statement in comment #4 is more than enough. No need for a CLA. Many thanks for the patch. I have applied it to trunk and proposed it for 6.0.x. I made the following changes:

- changed package to org.apache.catalina.tribes.group.interceptors
- changed class name to SimpleCoordinator
- added the AL2 text to the beginning of the file

Thanks. I have since moved on to use a custom stack for group membership. I found an excellent paper which describes a robust mechanism for leader election. The paper also extends that algorithm to make a robust group membership protocol too.

An updated patch is always welcome.

My comment was misleading. The "custom stack" in question is not based on Tribes at all.

This has been applied to 6.0.x and will be included in 6.0.19 onwards.
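The failure mode analyzed above — an in-progress flag set before a send that can throw, and never cleared when it does — can be sketched in a few lines. This is a toy Python model for illustration, not the Tribes code; the names and return strings are invented.

```python
class SendFailed(Exception):
    pass

class Coordinator:
    """Toy model of the stuck-election bug (illustration, not Tribes)."""

    def __init__(self, send_ok=True):
        self.election_in_progress = False
        self.send_ok = send_ok

    def send_token(self):
        # Stands in for the coordination message send that can time out.
        if not self.send_ok:
            raise SendFailed()

    def start_election_buggy(self):
        if self.election_in_progress:
            return "skipped: election already running"
        self.election_in_progress = True
        try:
            self.send_token()
        except SendFailed:
            # Bug: the flag is never cleared, so every later
            # election attempt is skipped forever.
            return "send failed"
        self.election_in_progress = False
        return "elected"

    def start_election_fixed(self):
        if self.election_in_progress:
            return "skipped: election already running"
        self.election_in_progress = True
        try:
            self.send_token()
        except SendFailed:
            # Fix (analogous to clearing suggestedviewId on
            # ChannelException): reset state so a new election can run.
            self.election_in_progress = False
            return "send failed, state cleared"
        self.election_in_progress = False
        return "elected"

c = Coordinator(send_ok=False)
print(c.start_election_buggy())   # send fails while a member is crashing
c.send_ok = True                  # the cluster recovers...
print(c.start_election_buggy())   # ...but elections stay blocked

c2 = Coordinator(send_ok=False)
print(c2.start_election_fixed())
c2.send_ok = True
print(c2.start_election_fixed())  # a fresh election can now succeed
```

The buggy variant reproduces the "oscillating forever" behavior in miniature: once the flag sticks, forced re-elections are skipped indefinitely.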
https://bz.apache.org/bugzilla/show_bug.cgi?id=45261
07 April 2011 20:26 [Source: ICIS news]

MUMBAI (ICIS)--Global growth in key consumption sectors for caustic soda, at an average of 4% from 2010-2015, is likely to fuel demand, an industry official from Tricon Energy said on Thursday.

“Global net export of caustic soda in 2010 was 4.5 million dry metric tonne (dmt), driven by the alumina and pulp sectors," said Shi Weimin, Tricon Energy's director for Asia Pacific, during a speech at the Vinyl India conference in Mumbai.

Consumption in the alumina sector accounts for 60% of the net export, followed by 25% for pulp and 15% for others, he added. Global caustic demand in 2010 was 61.1m dmt, and the figure is set to rise this year, in line with the growth in the core-user segments, he said.

The main uses of caustic soda are in the manufacture of alumina, pulp and paper, soap and detergents and petroleum products as well as organic and inorganic chemical production.

“Demand in northeast

Caustic soda production in northeast

“Rising crude oil, energy prices and transportation costs, coupled with tight supply and healthy demand, are likely to maintain caustic soda prices at the current level, according to Shi. However, the additional capacity of 2m dmt in the last quarter of 2010 and the first quarter of 2011 in

Prices of caustic soda were last assessed by ICIS at $400-420/dmt (€280-294/dmt) FOB (free on board) NE (northeast) Asia on 1 April.

The two-day Vinyls India conference ends on 8
http://www.icis.com/Articles/2011/04/07/9450965/global-growth-of-4-to-drive-caustic-soda-demand-tricon.html
Many people will not be able to upgrade to Windows 8 right away for various reasons. However, there is nothing to stop you from designing your WPF applications to have a similar look and feel. In this article I will show you how to create a Windows 8 style shell (Figure 1) for hosting “Tiles” that you can use to launch features of your application. The first part of this article will discuss how to create the main window, use the tiles to launch applications and how to navigate around the shell. In the second part of this article you will learn about the different user controls used to build this application. Designing the Home Screen Figure 1 shows a shell that has a Windows 8 start screen look and feel. I did add a couple of items to make it a little more familiar to Windows 7 users. In the upper right hand corner are the familiar minimize, maximize and close buttons. I also added two arrows at the bottom of the screen to inform users that they can scroll left or right through the tiles. If the user hovers over the tile area for greater than 1.5 seconds, a scroll bar appears. When they hover over a tile, a border is drawn around it to inform the user that they can click on this tile to launch that feature. Create a Borderless Main Window The first step to creating your Windows 8 style shell is to create a WPF window that has no border. Set the following attributes on your window to create this borderless window. - WindowStyle=None - ShowInTaskBar=True - AllowsTransparency=True - Background=Transparent The WindowStyle attribute allows you to set a single border, 3-D border or a Tool Window border. Setting this attribute to None eliminates the border completely. The ShowInTaskbar attribute is optional, but for a shell window like this you probably want it to show up in the Windows task bar. The next two attributes, AllowsTransparency and Background, work together. You must set AllowsTransparency to True to allow the Background to be set to Transparent. 
If you do not set these two attributes, then your border is still displayed. The first step to creating your Windows 8 style shell is to create a WPF window that has no border. You need to set two additional attributes on the shell window. You’ll set the WindowStartupLocation attribute to “CenterScreen” to ensure that the shell window displays in the middle of the screen when it first appears. The other attribute, ResizeMode, is set to “CanResizeWithGrip,” which displays a grip in the bottom right corner of the screen. This allows the user to resize the shell window. The first control in the Window is a Border with a Gray background color, a CornerRadius of 10, and a drop shadow. This border control sets the shape for this window and offsets this window from the underlying desktop. Inside of the Border is a Grid with the rows and columns needed to layout this screen. There are three rows and three columns that make up this shell window. Figure 2 shows a graphic that describes what is in each row and column. The shell window is made up of many different user controls that will be covered in Part 2 of this article. Adding Events to the Main Window Since you removed all of the chrome from the window which usually has buttons for minimizing, maximizing and closing, you need to add your own controls and thus your own code to handle the Click events on these controls. Since you removed the title bar area the user has nothing to click on to drag the window so you need to write this code as well. Drag Event Add an event handler to the window for the MouseLeftButtonDown event. In this event you’ll write code to check to see if the mouse is currently pressed and if it is you’ll invoke the DragMove method on the window. This allows the user to drag this window around the screen when the window is not minimized or maximized. 
private void Window_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { if (e.ButtonState == MouseButtonState.Pressed) this.DragMove(); } Maximize Event If the user double-clicks on the window itself, or clicks on the maximize button in the upper right-hand corner, toggle the window between a maximized and normal state. Both the MouseDoubleClick event on the window and the Click event for the maximize button call a method named ToggleWindowState (Listing 1). Maximizing and making the window a normal size is not just as simple as setting the WindowState property to Maximized or Normal. Since there is a drop shadow around the main window, when you maximize this screen you need to make that shadow go away. In addition, any margin on the border control needs to be set to 0 when maximized. When the user then wants to set the screen back to normal mode, set the drop shadow and the margin back to what they were. The ToggleWindowState method checks to see if the current window state is normal and if it is, saves the drop shadow effect and the current margin into a couple of private properties. The Effect property of the border is set to null and the border’s Margin property is set to a Thickness of 0. The Maximize button’s ToolTip property is changed to “Restore Down” and the WindowState is set to Maximized. If the WindowState property is Maximized when ToggleWindowState is called this process is reversed. Minimize Event When the user presses the minimize button in the upper right-hand corner of the window, the shell window needs to be minimized. Minimize the window by setting the WindowState to Minimized as shown in the following code snippet: private void btnMinimize_Click(object sender, RoutedEventArgs e) { this.WindowState = WindowState.Minimized; } Close Event When the user presses the Close button you should ask the user whether or not they wish to close the application. You can popup a regular Windows dialog using the MessageBox.Show method. 
However, this Windows dialog design does not match our nicely styled WPF window. Instead, I have created a custom message box that can be styled like our main window as shown in Figure 3. My class called PDSAMessageBox has a Show method with many of the same signatures as the .NET MessageBox.Show method. To display the message box shown in Figure 3 you call the PDSAMessageBox Show method as shown below: private void btnClose_Click(object sender, RoutedEventArgs e) { if (PDSAMessageBox.Show("Close this application?", "Close", MessageBoxButton.YesNo) == MessageBoxResult.Yes) this.Close(); } Working with Start Tiles In the middle of the window is an area that has all of the various start tiles that a user can click on in order to navigate from one area of your application to another. In the sample WPF application for this article, only the first tile (Products) actually goes anywhere, the rest are all just placeholders. You will see how to create the different groups of tiles later, but for now, let’s look at how you control scrolling the tiles. There is a single user control called ucStartTiles for all of the groups of the start tiles. The ucStartTiles contains three additional user controls named ucStartGroup1, ucStartGroup2 and ucStartGroup3. You can add as many groups of tiles as you need for your application. The ucStartTiles control is contained within a ScrollViewer control since you could have more groups of tiles than can fit within the current window size. This scroll viewer has the horizontal scroll bar invisible so you can control this visibility manually. The vertical scroll bar is set to Auto so if the user sizes the window where the height is no longer sufficient to fit the tiles then the vertical scroll bar is shown automatically. 
The XAML for the scroll viewer and the start tiles user control is shown in the following code snippet: <ScrollViewer Name="scrollTiles" BorderBrush="Transparent" Background="Transparent" MouseEnter="scrollTiles_MouseEnter" MouseLeave="scrollTiles_MouseLeave" ScrollViewer. <src:ucStartTiles x: </ScrollViewer> The horizontal scroll bar appears if the user hovers over the scroll viewer control for longer than 1.5 seconds. Notice the MouseEnter and MouseLeave events on the scroll viewer control. These are needed to make the scrollbar appear and disappear. In order to get the horizontal scrollbar to appear after 1.5 seconds you create a DispatcherTimer object in the window. You’ll use the DispatcherTimer in order to ensure that the Tick event runs in the same thread as the UI. Normal timers in .NET run on a separate thread and make it more difficult to affect the UI. Define the DispatcherTimer from the System.Windows.Threading namespace like the following within your main window. private DispatcherTimer _Timer = new DispatcherTimer(); In the Window_Loaded event procedure you set the Interval property for this timer to 1500 which equates to 1.5 seconds. You then hook the Tick event procedure where you can respond to the Tick event and turn on the horizontal scrollbar. private void Window_Loaded(object sender, RoutedEventArgs e) { // Set the Interval to 1.5 seconds _Timer.Interval = TimeSpan.FromMilliseconds(1500); // Setup Timer for showing scroll bar _Timer.Tick += new EventHandler(_Timer_Tick); } Turn on the scroll bar in response to the Tick event shown in the code snippet below. Once the scroll bar is visible, turn off the timer so the Tick event is not repeatedly called every 1.5 seconds. void _Timer_Tick(object sender, EventArgs e) { scrollTiles.HorizontalScrollBarVisibility = ScrollBarVisibility.Visible; _Timer.Stop(); } A timer is off by default when it is created so you need to start the timer when the user places their mouse over the scroll viewer control. 
Start the timer in the MouseEnter event for the scroll viewer as shown below: private void scrollTiles_MouseEnter(object sender, MouseEventArgs e) { _Timer.Start(); } Hide the horizontal scroll bar and stop the timer (in case it did not get stopped) in the MouseLeave event of the scroll viewer control. private void scrollTiles_MouseLeave(object sender, MouseEventArgs e) { _Timer.Stop(); scrollTiles.HorizontalScrollBarVisibility = ScrollBarVisibility.Hidden; } Left and Right Buttons In addition to using the scroll bar to scroll the tiles to the left and right, you should add left and right arrow buttons at the bottom of the screen to give the user an additional indication that they can scroll the tiles. These two buttons each have a Click event procedure that scrolls the scroll viewer programmatically. To scroll programmatically you determine how much you wish to scroll to the left or right when you click on the buttons. In this sample I am going to scroll 15% of the size of the scroll viewer. Use a constant for this percentage amount so you can change this value in just one place. private const double OFFSET = 1.15; Clicking on the right button causes the Click event procedure shown in the following code fragment to fire. private void btnRight_Click(object sender, RoutedEventArgs e) { scrollTiles.ScrollToHorizontalOffset( scrollTiles.HorizontalOffset + (scrollTiles.ActualWidth * OFFSET)); } In the btnRight_Click event procedure call the scroll viewer control's ScrollToHorizontalOffset method. Pass to this method a double that specifies where to scroll to. This is just like clicking or dragging the scroll bar directly. To calculate the offset you take the actual width of the scroll viewer control and multiply this by 1.15 in order to get a number that is 15% greater than the width. You add this number to the current HorizontalOffset property of the scroll viewer.
After setting this new offset, the scroll viewer control is shifted to the right by the width of the viewer plus 15% (or whatever factor you put into the OFFSET constant). Clicking on the left button causes the Click event procedure shown in the following code fragment to fire.

private void btnLeft_Click(object sender, RoutedEventArgs e)
{
  scrollTiles.ScrollToHorizontalOffset(
    scrollTiles.HorizontalOffset -
    (scrollTiles.ActualWidth * OFFSET));
}

This method performs the same calculation as when scrolling to the right, except it subtracts the width times the offset from the number in the HorizontalOffset property.

Clicking a Tile Sends a Message

When you click on one of the tiles, a message is sent to the shell window telling the window what to do. Since each tile is an individual user control and not a part of the main window, you need a way to pass a message from one control to another. A global message broker class helps us communicate from one control to another. I will discuss the details of this message broker class later, but the basics are pretty simple. A global message broker property is created in the App class as shown in the code below:

public partial class App : Application
{
  public PDSAMessageBroker MessageBroker { get; set; }

  public App()
  {
    MessageBroker = new PDSAMessageBroker();
  }
}

Each tile is a custom user control called PDSAucTile. This user control has properties such as ViewName, ImageUri and Text. Below is an example of what one of these tiles looks like:

<my:PDSAucTile

When the PDSAucTile control is clicked, a PDSAMessageEventArgs object is created and filled with the ViewName, ImageUri and Text properties from the tile, and a Click event is raised. In the Click event for the tile you use the global message broker to send a message with the value in the ViewName property, as shown below:

private void Tile_Click(object sender, PDSATileEventArgs e)
{
  (Application.Current as App).MessageBroker.
    SendMessage(new PDSAMessageBrokerMessage(e.ViewName, null));
}

The ViewName value becomes the MessageName that can be checked for in the MessageReceived event procedure for the message broker. For example, in the main window is the following code:

void _MessageBroker_MessageReceived(object sender,
  PDSAMessageBrokerEventArgs e)
{
  switch (e.MessageName)
  {
    case "ProductView":
      SetMainPageControls(false);
      LoadUserControl(new ucProductView());
      break;
    ...
  }
}

So the "ProductView" string in the ViewName property on the tile gets sent as a message from the tile all the way to the main window, which was set up to listen for the MessageReceived event on the global message broker. Because of this event and the message sent to it, the tiles can communicate to the main window about which tile got pressed. You simply add more "case" statements to load any windows or user controls that you want to display from the main window. Using this message broker class allows your main window to control all the navigation around your application.

In the main window of your WPF application you create an event handler for the MessageReceived event of the message broker using the following code in the Window_Loaded event procedure:

private void Window_Loaded(object sender, RoutedEventArgs e)
{
  (Application.Current as App).MessageBroker.MessageReceived +=
    new MessageReceivedEventHandler(MessageBroker_MessageReceived);
}

A good practice is to clean up any event handlers you have in your windows or user controls when they are unloaded. So in the Window_Unloaded event procedure on this main window there is the following code to remove the DispatcherTimer Tick event and the MessageBroker MessageReceived event.

private void Window_Unloaded(object sender, RoutedEventArgs e)
{
  // Remove Timer 'Tick' Event Handler
  // for Showing Scroll bar
  _Timer.Tick -= _Timer_Tick;

  // Remove Message Broker 'MessageReceived'
  // Event Handler
  (Application.Current as App).MessageBroker.
    MessageReceived -= MessageBroker_MessageReceived;
}

Displaying a User Control

Once the "ProductView" message is received by the main window, a user control is loaded into a Grid control named "contentArea." This Grid control occupies the same row and columns as the ScrollViewer control used for the start tiles. When a new screen is loaded, the start tiles are hidden and the new screen occupies the same space where the tiles were. Of course, you could instead make a new window and pop that window up over this existing shell window if you want.

When the "ProductView" message is received by the MessageReceived event procedure, two methods get called: SetMainPageControls and LoadUserControl.

case "ProductView":
  SetMainPageControls(false);
  LoadUserControl(new ucProductView());
  break;

Prior to loading a new user control you need to set the state of the main window when moving away from the start tiles. The SetMainPageControls method (Listing 2) is what sets the main window's state. When the start tiles are visible, the right and left arrow buttons are displayed. Once you navigate to another section of the application you should hide these buttons, as they no longer make sense. There is an invisible back button in the upper left corner that needs to become visible so the user has a way to move back to the home screen. In addition, the scroll viewer for the start tiles is set to invisible and the Grid control for hosting the new user control becomes visible.

After setting the visibility of the various buttons and other controls you can now load a new instance of your user control into the "contentArea." A new instance of the user control named "ucProductView" is created and passed to the LoadUserControl method shown below:

private void LoadUserControl(UserControl ctl)
{
  contentArea.Children.Clear();
  contentArea.Children.Add(ctl);
}

Figure 4 shows you what your screen looks like once the LoadUserControl method runs.
Notice the back button displayed in the upper left corner of the screen. Also notice that the two arrows on the bottom of the screen are invisible. As mentioned, you are simply reusing the space where the start tiles were on the main window. Instead of reusing the space, you could create another window and pop that window up over the main window, keeping the start tiles visible if you wish. All of this is very easy to do with just a little extra code.

When the user clicks on the Back arrow shown in the upper left corner of the screen, a Click event is fired. This Click event calls SetMainPageControls, but passes a true value to tell this method to display the start tiles again.

private void btnBack_Click(object sender, RoutedEventArgs e)
{
  SetMainPageControls(true);
}

The SetMainPageControls method (Listing 2) clears all user controls in the "contentArea" grid. In addition, the start page buttons and ScrollViewer are made visible, and the back button is made invisible. This one method is responsible for toggling between the two different views on this window.

Summary

In Part 1 of this article I showed you how to create a Windows 8 style shell. You learned how to navigate around this shell using tiles, and how the various buttons on the main window control the window size. At a high level you also saw how the tiles were created and how to scroll left and right through those tiles. A message broker is used to send messages from the various user controls back to the main window. In Part 2 of this article you will see the actual implementation of each of the user controls, such as the custom buttons, tiles, message box and the message broker.
https://www.codemag.com/Article/1211021/A-Windows-8-Look-and-Feel-for-WPF-Part-1
ugettext_lazy() as the default translation method for a particular file. Without _() in the global namespace, the developer has to think about which is the most appropriate translation function.
- The underscore character (_) is used to represent "the previous result" in Python's interactive shell and doctest tests. Installing a global _() function causes interference. Explicitly importing ugettext() as _() avoids this problem.

def my_view(request):
    output = ugettext("Welcome to my site.")
    return HttpResponse(output)

(The caveat with building strings from variables or computed values is that Django's translation-string-detecting utility, django-admin.py makemessages, won't be able to find these strings. More on makemessages later.) The strings you pass to _() or ugettext() can take placeholders.

Comments for translators also work in templates with the comment tag:

{% comment %}Translators: This is a text of the base template {% endcomment %}

The comment will then appear in the resulting .po file and should also be displayed by most translation tools.

Marking strings as no-op

Use the function django.utils.translation.ugettext_noop() to mark a string as a translation string without translating it.

Pluralization

Use the function django.utils.translation.ungettext() to specify pluralized messages. Make sure you use a single name for every extrapolated variable included in the literal; otherwise you will get an error when running django-admin.py compilemessages:

a format specification for argument 'name', as in 'msgstr[0]', doesn't exist in 'msgid'

Contextual markers
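The body of the contextual-markers section is missing above. The idea behind Django's pgettext() can be sketched with a stand-in catalog (hypothetical code, not Django's actual machinery): the same source string gets different translations depending on a context key.

```python
# Stand-in sketch of contextual markers (pgettext-style lookup).
# The catalog contents here are invented for illustration.
CATALOG = {
    ("month name", "May"): "Mai",
    ("verb", "May"): "darf",
}

def pgettext(context, message):
    # Fall back to the untranslated message when no entry exists,
    # which is what a real gettext catalog lookup does.
    return CATALOG.get((context, message), message)

print(pgettext("month name", "May"))
print(pgettext("verb", "May"))
```

The context string disambiguates homonyms for translators without changing what end users see in the untranslated case.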
Lazy translation

Model fields and relationships: verbose_name and help_text option values

For example, to translate the help text of the name field in the following model, do the following:

from django.utils.translation import ugettext_lazy

class MyThing(models.Model):
    name = models.CharField(help_text=ugettext_lazy('This is the help text'))

You can mark names of ForeignKey, ManyToManyField or OneToOneField relationships as translatable by using their verbose_name options:

from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    kind = models.ForeignKey(ThingKind, related_name='kinds',
                             verbose_name=_('kind'))

It is recommended to provide explicit verbose names rather than relying on the fallback determination Django performs by looking at the model's class name:

from django.utils.translation import ugettext_lazy

class MyThing(models.Model):
    name = models.CharField(_('name'),
                            help_text=ugettext_lazy('This is the help text'))

    class Meta:
        verbose_name = ugettext_lazy('my thing')
        verbose_name_plural = ugettext_lazy('my things')

Model methods: short_description attribute values

For model methods, you can provide translations to Django and the admin site with the short_description attribute:

from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    kind = models.ForeignKey(ThingKind, related_name='kinds',
                             verbose_name=_('kind'))

    def is_mouse(self):
        return self.kind.type == MOUSE_TYPE
    is_mouse.short_description = _('Is it a mouse?')

Working with lazy translation objects

The result of a ugettext_lazy() call can be used wherever you would use a unicode string (an object with type unicode) in Python. If you try to use it where a bytestring (a str object) is expected, things will not work as expected, since a ugettext_lazy() object doesn't know how to convert itself to a bytestring. You can't use a unicode string inside a bytestring either, so this is consistent with normal Python behavior. For example:

# This is fine: putting a unicode proxy into a unicode string.
u"Hello %s" % ugettext_lazy("people")

# This will not work, since you cannot insert a unicode object
# into a bytestring (nor can you insert our unicode proxy there)
"Hello %s" % ugettext_lazy("people")

If you ever see output that looks like "hello <django.utils.functional...>", you have tried to insert the result of ugettext_lazy() into a bytestring. That's a bug in your code.

If you don't like the long ugettext_lazy name, you can just alias it as _ (underscore), like so:

from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    name = models.CharField(help_text=_('This is the help text'))

The lazy variants ugettext_lazy() and ungettext_lazy() are used the same way.

Joining strings: string_concat()

Standard Python string joins (''.join([...])) will not work on lists containing lazy translation objects. Instead, you can use django.utils.translation.string_concat(), which creates a lazy object that concatenates its contents and converts them to strings only when the result is included in a string. For example:

from django.utils.translation import string_concat

The name and name_local attributes of the dictionary returned by get_language_info() contain the name of the language in English and in the language itself, respectively. The bidi attribute is True only for bi-directional languages. The source of the language information is the django.conf.locale module. Similar access to this information is available for template code. See below.

Internationalization: in template code

Translations in templates use the {% trans %} template tag for constant strings; its "as" form stores the result in a variable, which is useful for strings that are used in multiple places or should be used as arguments to other template tags or filters.

Internationalization: in JavaScript code

Internationalization: in URL patterns

Django provides two mechanisms to internationalize URL patterns:
- Adding the language prefix to the root of the URL patterns to make it possible for LocaleMiddleware to detect the language to activate from the requested URL.
- Making URL patterns themselves translatable via the ugettext_lazy() function.

Reversing in templates

Localization: how to create language files

Once the string literals of an application have been tagged for later translation, the translations themselves need to be written (or obtained).

Message files

The django-admin.py makemessages script should be run from one of two places:
- The root directory of your Django project.
- The root directory of your Django app.

By default, makemessages examines every file that has the .html or .txt file extension.

Warning: When creating message files from JavaScript source code you need to use the special 'djangojs' domain, not -e js.

Comments for translators extracted by django-admin.py makemessages appear in the message file in the form of a comment line prefixed with # and located above the msgid line. To compile message files, you will need the gettext utilities.

The set_language redirect view redirects the user, following this algorithm:
- Django looks for a next parameter in the POST data.
- If that doesn't exist, or is empty, Django tries the URL in the Referrer header.
- If that's empty, the user is redirected to / (the site root) as a fallback.

<form action="/i18n/setlang/" method="post">
{% csrf_token %}
<input name="next" type="hidden" value="{{ redirect_to }}" />
<select name="language">
{% get_language_info_list for LANGUAGES as languages %}
{% for language in languages %}
<option value="{{ language.code }}">{{ language.name_local }} ({{ language.code }})</option>
{% endfor %}
</select>
<input type="submit" value="Go" />
</form>

In this example, Django looks up the URL of the page to which the user will be redirected in the redirect_to context variable.

Using translations outside views and templates

If all you want to do is run Django with your native language, and a language file is available for it, all you need to do is set LANGUAGE_CODE.

LocaleMiddleware tries to determine the user's language preference by looking for a django_language key in the current user's session, then a cookie, then the Accept-Language HTTP header, and finally the global LANGUAGE_CODE setting (subject to the restrictions described in the Locale restrictions note). Once LocaleMiddleware determines the user's preference, it makes this preference available as request.LANGUAGE_CODE for each HttpRequest.

How Django discovers translations
- The directories listed in LOCALE_PATHS.
- Then, it looks for a locale directory in the project directory, or more accurately, in the directory containing your settings file.
- Finally, the Django-provided base translation in django/conf/locale is used as a fallback.

Deprecated since version 1.3: Lookup in the locale subdirectory of the directory containing your settings file (item 3 above) is deprecated since the 1.3 release and will be removed in Django 1.5. You can use the LOCALE_PATHS setting instead, by listing the absolute filesystem path of such a locale directory in the setting.
Or, you can just build a big project out of several apps and put all translations into one big common message file specific to the project you are composing. The choice is yours.

Note: If you're using manually configured settings, as described in "Using settings without setting DJANGO_SETTINGS_MODULE", translations are looked up as follows:
- All paths listed in LOCALE_PATHS in your settings file are searched for <language>/LC_MESSAGES/django.(po|mo)
- $PROJECTPATH/locale/<language>/LC_MESSAGES/django.(po|mo) (deprecated, see above)
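The "lazy object" behavior described above can be illustrated with a tiny stand-in proxy. This is hypothetical code, not Django's actual implementation, and it is shown in modern Python, where the unicode/bytestring distinction no longer applies; the point is only that translation is deferred until the proxy is rendered into a string.

```python
class LazyString:
    """Minimal stand-in for a lazy translation proxy (illustration only)."""

    def __init__(self, func, text):
        self._func = func
        self._text = text

    def __str__(self):
        # Translation happens here, at render time, not at definition time.
        return self._func(self._text)


def fake_gettext(s):
    # Stand-in for a real catalog lookup; the entries are invented.
    return {"people": "gente"}.get(s, s)


greeting = LazyString(fake_gettext, "people")
print("Hello %s" % greeting)  # the %s conversion calls str() on the proxy
```

Because the proxy converts itself only inside string formatting, it can be defined at import time (module level, model fields) before any translation catalog is active.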
https://docs.djangoproject.com/en/1.4/topics/i18n/translation/
Hi there, I am starting to design a very simple game. I am not done with it, this is just the beginning of it, but I had a problem. I get the following error:

'rng': function does not take 1 parameters

and even if I change the number of parameters used I still get that error. Is there something wrong with my code? Thank you.. -Amy

#include <iostream>
#include <time.h>
using namespace std;

int rng();

int main()
{
    char winlose;
    rng(winlose);
    cout << winlose << endl;
    return 0;
}

int rng(char state)
{
    ...                      // (several lines lost from the original post)
         << x1 << " ";       // Print it
    x2 = rand() % n + 1;     // Generate another number from the sequence
    cout << x2 << endl;      // Print it
    sum = x1 + x2;           // summing the two randomly generated numbers
    cout << sum << endl;

    // if statement for checking if the sum of the 2 randomly
    // generated numbers is 7 or 11
    if ((sum == 7) || (sum == 11))
        state = 'w';         // w stands for win
    else if ((sum == 2) || (sum == 3) || (sum == 12))
        state = 'l';         // l stands for lose
    else
        state = 'n';         // n stands for neither nor
    // end of if statement

    return state;
}
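The error comes from the mismatch between the declaration `int rng();` and the call `rng(winlose)`: the prototype says the function takes no parameters, while the definition takes one. A minimal corrected sketch follows; the six-sided dice range and the variable declarations lost from the post are assumptions, and returning the result is simpler than passing a char in (which is copied by value anyway, so the caller would never see the change).

```cpp
#include <cstdlib>

// Declaration and definition must agree on the parameter list.
char rng();   // declaration now matches the definition below

char rng()
{
    int x1 = std::rand() % 6 + 1;   // first die (range assumed)
    int x2 = std::rand() % 6 + 1;   // second die
    int sum = x1 + x2;

    if (sum == 7 || sum == 11)
        return 'w';                 // win
    if (sum == 2 || sum == 3 || sum == 12)
        return 'l';                 // lose
    return 'n';                     // neither
}
```

The caller then reads the result instead of passing a variable in: `char winlose = rng();`.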
https://cboard.cprogramming.com/cplusplus-programming/62803-error-function-does-not-take-sharp-parameters.html
namespace { int foo (void) { return 0; } } int main() { return foo (); }

> gdb ./a.out
GNU gdb (GDB; devel:gcc) 7-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see: <>.
Find the GDB manual and other documentation resources online at: <>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
warning: /etc/gdbinit.d/gdb-heap.py: No such file or directory
Reading symbols from ./a.out...done.
(gdb) b foo
Function "foo" not defined.
Make breakpoint pending on future shared library load? (y or [n]) n
(gdb) start
Temporary breakpoint 1 at 0x4005ec: file t.C, line 1.
Starting program: /tmp/a.out

Temporary breakpoint 1, main () at t.C:1
1  namespace { int foo (void) { return 0; } } int main() { return foo (); }
(gdb) b foo
Function "foo" not defined.
Make breakpoint pending on future shared library load? (y or [n]) n

so it doesn't work, not from toplevel context before starting the program nor from a context within the TU that contains the anonymous namespace. Breaks debugging of GCC big time: try to break on cfgexpand.c:pass_expand::execute. We want to be able to do that from a context that is not necessarily local to that TU.

Err.

(gdb) b '(anonymous namespace)::foo'
Breakpoint 1 at 0x4005e1: file t.C, line 1.

this can't really be the desired way to do this ... There is a DW_TAG_imported_module of the anon namespace in the TU context, so at least there, referencing 'foo' needs to work. And funnily

(gdb) b '(anonymous namespace)::pass_expand::<tab>

auto-"completes" to '(anonymous namespace):: -- how useful. All auto-completion seems to strip everything up to the first ::. Bah.

FWIW I've written a custom breakpoint-setting command using gdb's Python interface to try to work around this from the gcc side: Renaming.
It *is* possible to set a breakpoint on a function in an anonymous namespace, it's just way too tricky to do it right now, given that (A) you have to type the full qualified name, and (B) tab-completion is broken, and these make it sufficiently difficult that it's effectively not possible for a non-expert user. So please can (A) and (B) be fixed.

You can do

  gdb> break *foo

or if foo is inside an anonymous namespace of another TU, you can do

  gdb> break *'foo.c'::foo

Is this functionality sufficient for your use case?

(In reply to patrick from comment #5)
> You can do
>
> gdb> break *foo

No symbol "foo" in current context.

markus@x4 ~ % gdb -v
GNU gdb (GDB) 7.8.50.20140729-cvs

(In reply to Markus Trippelsdorf from comment #6)
> (In reply to patrick from comment #5)
> > You can do
> >
> > gdb> break *foo
> No symbol "foo" in current context.
>
> markus@x4 ~ % gdb -v
> GNU gdb (GDB) 7.8.50.20140729-cvs

The program has to be running for "break *" to work, it seems. Try:

  gdb> break main
  gdb> run
  gdb> break *foo

The workaround works to some extent, but tab-completion is still broken, and for

t1.C
-----
namespace { int foo (void) { return 0; } }
int bar(void);
int main() { return foo () + bar (); }

t2.C
-----
namespace { int foo2 (void) { return 1; } }
int bar () { return foo2 (); }

it doesn't allow me to set a breakpoint on t2.C::foo2 until I enter bar(). That is, using 'b *<name>' seems to do symbol lookup from the current scope only. For the GCC case the workaround doesn't work in practice for this reason. b '(anonymous namespace)::foo2' works from any context (but is quite awkward due to tab-completion being broken).

Created attachment 7974 [details]
Candidate patch

Here is a candidate patch that fixes this issue by allowing the user to omit the "(anonymous namespace)::" prefix when referencing symbols that are defined inside an anonymous namespace. Tab completion is fixed accordingly, e.g.
by allowing one to tab-complete "pass_exp" into "pass_expand" even though pass_expand is actually (anonymous namespace)::pass_expand. This should work in breakpoint and expression context. So the patch enables you to do

  (gdb) break pass_exp<TAB>::execu<TAB>

to break on (anonymous namespace)::pass_expand::execute.

Created attachment 8515 [details]
updated patch

I've updated Patrick's patch for gdb trunk. It works fine:

Example from comment 0:

% gdb ./a.out
Reading symbols from ./a.out...done.
(gdb) b fo<tab>
Breakpoint 1 at 0x40058a: file anon_namesp.cpp, line 1.
(gdb) run
Starting program: /home/markus/a.out

Breakpoint 1, (anonymous namespace)::foo () at anon_namesp.cpp:1

Example from comment 8:

Reading symbols from ./a.out...done.
(gdb) b fo<tab><tab>
foo()   foo2()
(gdb) b foo2()
Breakpoint 1 at 0x4005b3: file t2.cpp, line 1.
(gdb) run
Starting program: /home/markus/a.out

Breakpoint 1, (anonymous namespace)::foo2 () at t2.cpp:1
1  namespace { int foo2 (void) { return 1; } }

Any chance to move this issue forward?

I will try to.

I just ran into this today too while trying to debug a new pass I was writing for GCC which uses anonymous namespaces :).

...and so did I. This series will help with this:

[PATCH 00/40] C++ debugging improvements: breakpoints, TAB completion, more

You can find it in the users/palves/cxx-breakpoint-improvements branch meanwhile, if you want to give it a try.

The series above is finally all merged to master. These are the related improvements that you'll find there:

*.
(In reply to Dave Malcolm from comment #16) > (In reply to Pedro Alves from comment #15) > > The series above is finally all merged to master. These are the related > > improvements that you'll find there: > > [...] > > Thanks for the improvements! My pleasure! Making things better for GCC folks was one of my sekrit driving forces. :-) > To clarify, if there is a symbol: > > (anonymous namespace)::func() > > will "break func" set a breakpoint on it? (without me having to type or > tab-complete the "(anonymous namespace)" part?) > > [...] Yes.
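For context on the language side, the behavior the reporters want the debugger's name lookup to mirror is ordinary unqualified lookup: within a single translation unit, C++ finds a name in an anonymous namespace without any "(anonymous namespace)::" prefix. A minimal sketch:

```cpp
// Anonymous namespace: members get internal linkage, but within this
// translation unit unqualified lookup still finds them.
namespace {
    int foo() { return 42; }
}

int call_foo()
{
    // No "(anonymous namespace)::" qualification is needed here --
    // the same convenience the gdb patches bring to breakpoint syntax.
    return foo();
}
```

This is why typing the full "(anonymous namespace)::foo" in gdb felt so unnatural: the source code itself never spells the name that way.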
https://sourceware.org/bugzilla/show_bug.cgi?id=16874
BOOST_ARCH - architecture macros
BOOST_COMP - compiler macros
BOOST_LANG - language standards macros
BOOST_LIB - library macros
BOOST_OS - operating system macros
BOOST_PLAT - platform macros
BOOST_HW - hardware macros

This library defines a set of compiler, architecture, operating system, library, and other version numbers from the information it can gather of C, C++, Objective C, and Objective C++ predefined macros or those defined in generally available headers. The idea for this library grew out of a proposal to extend the Boost Config library to provide more, and consistent, information than the feature definitions it supports. What follows is an edited version of that brief proposal.

The idea is to define a set of macros to identify compilers and consistently represent their version, usable in #if/#elif directives, for each of the supported compilers. All macros would be defined, regardless of the compiler. The one macro corresponding to the compiler being used would be defined, in terms of BOOST_VERSION_NUMBER, to carry the exact compiler version. All other macros would expand to an expression evaluating to false (for instance, the token 0) to indicate that the corresponding compiler is not present.

The current Predef library is now both an independent library and expanded in scope. It includes detection and definition of architectures, compilers, languages, libraries, operating systems, and endianness. The key benefits are:

- ... #ifdef.
- ... #ifdef checks.
- A single #include <boost/predef.h>, so that it's friendly to precompiled header usage.
- #include <boost/predef/os/windows.h> for single checks.

An important design choice concerns how to represent compiler versions by means of a single integer number suitable for use in preprocessing directives. Let's do some calculation. The "basic" signed type for preprocessing constant-expressions is long in C90 (and C++, as of 2006) and intmax_t in C99. The type long shall at least be able to represent the number +2 147 483 647.
This means the most significant digit can only be 0, 1 or 2; and if we want all decimal digits to be able to vary between 0 and 9, the largest range we can consider is [0, 999 999 999]. Distributing evenly, this means 3 decimal digits for each version number part. So we can:

It appears relatively safe to go for the first option and set it at 2/2/5: two decimal digits for the major number, two for the minor, and five for the patch. That covers CodeWarrior and others, which are up to and past 10 for the major number. Some compilers use the build number in lieu of the patch one; five digits (which is already reached by VC++ 8) seems a reasonable limit even in this case. It might reassure the reader that this decision is actually encoded in one place in the code: the definition of BOOST_VERSION_NUMBER.

Even though the basics of this library are done, there is much work that can be done:

- The BOOST_WORKAROUND macro would benefit from a more readable syntax. As would the BOOST_TESTED_AT detail macro.
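The 2/2/5 packing decision can be written down as a macro. This sketch mirrors the shape of BOOST_VERSION_NUMBER as described above; consult the library's own headers for the canonical definition.

```cpp
// 2 decimal digits for major, 2 for minor, 5 for patch,
// so the maximum value 99'99'99999 stays within the
// guaranteed range of long (+2 147 483 647).
#define VERSION_NUMBER(major, minor, patch)       \
    ((((major) % 100) * 10000000UL) +             \
     (((minor) % 100) *   100000UL) +             \
      ((patch) % 100000UL))
```

Because the result is a plain integral constant expression, two packed versions compare correctly with ordinary <, ==, and > inside #if directives.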
https://www.boost.org/doc/libs/1_64_0/doc/html/predef.html
You're in an environment in which you'd like to use a hash function, but you would prefer to use one based on a block cipher. This might be because you have only a block cipher available, or because you would like to minimize security assumptions in your system. There are several good algorithms for doing this. We present one, Davies-Meyer, where the digest size is the same as the block length of the underlying cipher. With 64-bit block ciphers, Davies-Meyer does not offer sufficient security unless you add a nonce, in which case it is barely sufficient. Even with AES-128, without a nonce, Davies-Meyer is somewhat liberal when you consider birthday attacks. Unfortunately, there is only one well-known scheme worth using for converting a block cipher into a hash function that outputs twice the block length (MDC-2), and it is patented at the time of this writing. However, those patent issues will go away by August 28, 2004. MDC-2 is covered in Recipe 6.16. Note that such constructs assume that block ciphers resist related-key attacks. See Recipe 6.3 for a general comparison of such constructs compared to dedicated constructs like SHA1. The Davies-Meyer hash function uses the message to hash as key material for the block cipher. The input is padded, strengthened, and broken into blocks based on the key length, each block used as a key to encrypt a single value. Essentially, the message is broken into a series of keys. With Davies-Meyer, the first value encrypted is an initialization vector (IV) that is usually agreed upon in advance. You may treat it as a nonce instead, however, which we strongly recommend. (The nonce is then as big as the block size of the cipher.) The result of encryption is XOR'd with the IV, then used as a new IV. This is repeated until all keys are exhausted, resulting in the hash output. See Figure 6-1 for a visual description of one pass of Davies-Meyer. 
Traditionally, hash functions pad by appending a bit with a value of 1, then however many zeros are necessary to align to the next block of input. Input is typically strengthened by adding a block of data to the end that encodes the message length. Nonetheless, such strengthening does not protect against length-extension attacks. (To prevent against those, see Recipe 6.7.)

Matyas-Meyer-Oseas is a similar construction that is preferable in that the plaintext itself is not used as the key to a block cipher (this could make related-key attacks on Davies-Meyer easier); we'll present that as a component when we show how to implement MDC-2 in Recipe 6.16.

Here is an example API for using Davies-Meyer without a nonce:

void spc_dm_init(SPC_DM_CTX *c);
void spc_dm_update(SPC_DM_CTX *c, unsigned char *msg, size_t len);
void spc_dm_final(SPC_DM_CTX *c, unsigned char out[SPC_BLOCK_SZ]);

The following is an implementation using AES-128. This code requires linking against an AES implementation, and it also requires that the macros developed in Recipe 5.5 be defined (they bridge the API of your AES implementation with this book's API).
#include <stdlib.h>
#include <string.h>
#ifndef WIN32
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#else
#include <windows.h>
#include <winsock.h>
#endif

#define SPC_KEY_SZ 16

typedef struct {
  unsigned char h[SPC_BLOCK_SZ];
  unsigned char b[SPC_KEY_SZ];
  size_t        ix;
  size_t        tl;
} SPC_DM_CTX;

void spc_dm_init(SPC_DM_CTX *c) {
  memset(c->h, 0x52, SPC_BLOCK_SZ);
  c->ix = 0;
  c->tl = 0;
}

static void spc_dm_once(SPC_DM_CTX *c, unsigned char b[SPC_KEY_SZ]) {
  int           i;
  SPC_KEY_SCHED ks;
  unsigned char tmp[SPC_BLOCK_SZ];

  SPC_ENCRYPT_INIT(&ks, b, SPC_KEY_SZ);
  SPC_DO_ENCRYPT(&ks, c->h, tmp);
  for (i = 0;  i < SPC_BLOCK_SZ / sizeof(int);  i++)
    ((int *)c->h)[i] ^= ((int *)tmp)[i];
}

void spc_dm_update(SPC_DM_CTX *c, unsigned char *t, size_t l) {
  c->tl += l; /* if c->tl < l: abort */
  while (c->ix && l) {
    c->b[c->ix++] = *t++;
    l--;
    if (!(c->ix %= SPC_KEY_SZ))
      spc_dm_once(c, c->b);
  }
  while (l >= SPC_KEY_SZ) { /* >= so an exact final block is consumed too */
    spc_dm_once(c, t);
    t += SPC_KEY_SZ;
    l -= SPC_KEY_SZ;
  }
  c->ix = l;
  for (l = 0;  l < c->ix;  l++)
    c->b[l] = *t++;
}

void spc_dm_final(SPC_DM_CTX *c, unsigned char output[SPC_BLOCK_SZ]) {
  int i;

  c->b[c->ix++] = 0x80;
  while (c->ix < SPC_KEY_SZ)
    c->b[c->ix++] = 0;
  spc_dm_once(c, c->b);
  memset(c->b, 0, SPC_KEY_SZ - sizeof(size_t));
  c->tl = htonl(c->tl);
  for (i = 0;  i < sizeof(size_t);  i++)
    c->b[SPC_KEY_SZ - sizeof(size_t) + i] = ((unsigned char *)(&c->tl))[i];
  spc_dm_once(c, c->b);
  memcpy(output, c->h, SPC_BLOCK_SZ);
}

See Also: Recipe 5.5, Recipe 6.3, Recipe 6.7, Recipe 6.16
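To see the chaining structure without an AES dependency, here is a toy model of the Davies-Meyer compression step H' = E_m(H) XOR H. The stand-in "cipher" and all names below are invented for illustration and have no cryptographic strength whatsoever; only the shape of the construction matches the recipe.

```c
#include <stdint.h>
#include <stddef.h>

/* Throwaway keyed mixing function standing in for a real block cipher E_k. */
static uint32_t toy_encrypt(uint32_t key, uint32_t block)
{
    return (block ^ (key * 2654435761u)) + ((block << 7) | (block >> 25));
}

/* One Davies-Meyer pass: encrypt the chaining value under the message
   block as key, then XOR the old chaining value back in. */
static uint32_t dm_compress(uint32_t h, uint32_t msg_block)
{
    return toy_encrypt(msg_block, h) ^ h;
}

static uint32_t dm_hash(const uint32_t *blocks, size_t n)
{
    uint32_t h = 0x52525252u;   /* fixed IV, echoing spc_dm_init() above */
    for (size_t i = 0; i < n; i++)
        h = dm_compress(h, blocks[i]);
    return h;
}
```

Note how the message is consumed as key material, block by block, exactly as spc_dm_update() feeds 16-byte chunks of input into the AES key schedule.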
http://etutorials.org/Programming/secure+programming/Chapter+6.+Hashes+and+Message+Authentication/6.15+Constructing+a+Hash+Function+from+a+Block+Cipher/
About this blog: Even more clever description, but really it's just slightly wordier.

Space Money

Homebrew you say? Yay! I have an idea for a turn-based game, and if time allows I'll develop it further, but for now I'm content to have gotten something running on hardware that's a tad more exotic than a PC/Mac.

It Just Works(TM) (for the most part)

The default setup provides a good base, and the subsequent updates were painless. I still dislike searching through innumerable packages with Synaptic, but once the basic dev tools (gcc, g++, headers, etc.) were installed, the only imperative that remained was the video drivers. Been there, done that, wrote it down. Nvidia driver install was not an issue, but that's not to say it can't be greatly improved upon.

After the miserable failure of MinGW Dev Studio to do anything whatsoever on 5.10 without crashing, I had downloaded and installed Borland's C++ IDE. Atrocious interface aside, it worked. But before I went that route again, I wanted to give Code::Blocks a try. Last time an install would have required compiling it from source, which is not something I was (or am) willing to sacrifice any of my time for. Let's face it, life is too short and my family is too important for me to waste time compiling something which amounts to a trivial pursuit. I looked... lo and behold, there was a recent nightly build in a Debian package. What happened next was beyond my wildest dreams. I downloaded it, the Debian package installer opened, Code::Blocks landed on the hard drive and was added to the development tools menu. I opened the IDE up, converted a simple Visual Studio solution and the thing just worked. Amazing. Suffice it to say I'm impressed, as it basically Just Works(TM). This environment is very livable, if not slightly more compelling than it has been in the past.
Mind you, I still think Linux has a 'slapped together' feel about it, but for now I'll say that if I HAD to work with it, I would consider it far less painful (maybe even enjoyable) than it has ever been. Say hello to everyone, Ivy!

File Nostalgia. I had scanned them to learn texturing/material application in 3DS Max, but somehow they got shuffled off into nested folder oblivion. So I cleaned them up and threw them together and voila... They never made it into 3DS Max, but that's only because I became acquainted with Maya in the meantime.

A foray into the less-than-well-known. Previous builds of my game framework using Universal Binaries on the G5 presented no problem, but for some reason or another they just weren't happy on the P4. After shuffling the project across, the x86 builds were up and running in a native configuration. At any rate, definitely worth a day of effort to see something that almost nobody thought would ever come to pass... a Mac OS on PC hardware.

Your assistance is requested. So here's the deal. I fully intend to give this video card away. It's nothing major, but here are the specs: Now I'd like your opinions on how to give this away within the GameDev community. My first thought is to just let it go to some random respondent, but given the unique position of having already made an investment, I thought of something else. How about a raffle? Not for the sake of recouping costs, but to perpetuate this kind of giveaway. I'm not really interested in running a contest, as there are a few of those already. A raffle could have a much shorter run, and depending on the amount of entries could allow for a fast turnaround for the next one. Your thoughts?

Renewed. Had a rough patch over the last few weeks. A childhood friend of mine passed away, and the means by which this happened I'll leave at unnatural causes. But work goes on in the 'Game Project Referred To In The Journal Of Schmedly'.
The designs I have in mind are largely based around recreating certain classic arcade games, which lend themselves easily to a tile-based structure, but how to handle that in 3D? How about a 3D tile engine? Given the fact that the game designs incorporate forced perspectives, it seems natural to do it this way. It also makes culling extremely simple. Anyway, a shot of the world editor affectionately known as Schmedit. Currently limited to loading/saving and basic editing. The editor mostly needs work in the editing controls (note the hacky buttons), but I also need to figure out a more robust way to handle the tile connectivity. At this point it only supports branching nodes and will simply overlap when nodes are connected in a loop.

Sick, but still progressing. Skinned meshes: still very much in the works, but code reuse is at an all-time high here. I had written a simple .rtg model viewer ages ago, so the basic framework was used for the new viewer. Here's a shot of the original skeleton in Maya, and loaded much the same way into a very slim version of the engine. More as this progresses...

It's alive! Although there's a problem with the system timing calls, which I'm trying to figure out... everything else compiled and ran perfectly. Actually there was one namespace collision that X refused to give up on, so I renamed my object, which ended up being more accurate. As suggested by stormrunner (and somebody else a long time ago) I used MinGW Studio and it handled everything great. I had to add newlines to the end of each file to remove all warnings, but it was worth it to see a clean build. Only thing wrong with MinGW Studio is that it doesn't allow for nested folders inside the file pane. It will access the files in nested folders without issue, but it just doesn't give you the option to create a folder structure inside the IDE. Eh, it looks a little messy, but maybe a fix will come along at some point. Oh well, I get to marvel at a little coolness now..
because to me it's great when something 'just works.' EDIT: Timing calls fixed, but still not reporting the initial interval as I'd expect. Truly weird science!

The Linux, Reloaded. So the Linux endeavour continues. I *can* build the game with Anjuta running on Ubuntu, but... I *can't* build the game. You see, somewhere along the line, somebody chose to make it hard to include and build source from multiple folders. Within the confines of this Linux IDE, that is. So you see this is why I *can't* build my game. I refuse to reorganize all the source files into one directory and rewrite all the include statements. And why should I? Visual Studio and XCode have no problem dealing with this structure. This all sounds like more bitching and moaning of course, but what it boils down to is that intuition alone (given previous exposure to other IDEs) should be enough to make this build happen. But it's not. My refusal to touch a makefile certainly limits the available options to compile in Linux, but that's really not my point. If Linux is ever going to cater to a wider audience (like certain religious pundits predict) it's going to have to have 'stuff that just works.'

Why do I bother? Sure, it feels good having my project running like a charm on both Win32 and OSX, so naturally I thought, "Well I'll give Linux a try again. People say Ubuntu is soooooo good these days. Hmmm, okay that settles it." While I admit the basic install for the OS was pretty painless, the fact is that nothing for development was installed by default. Only the common desktop apps were to be found. So I gave in, yet again, and started the tedious task of tracking down all the crap I'd need. Much later, after all the googling and apt-get'ing I should have had everything. Bear in mind that I rather dislike using the command line. Some people love it, but I swear somebody once told me this is the twenty-first century, and in my opinion, I should have a GUI to do everything.
Typing poorly named commands and even worse package titles is just so 70's style. But I digress. The bright spot in using all that junk was Eclipse. It took three steps to have it running... 1) Download archive 2) Unpack archive 3) Double click eclipse icon. It doesn't get any better than that in terms of expected results. Okay, let the real pain begin. Next comes endless lib searches and all sorts of nasty linker errors from the eclipse C/C++ perspective. Sigh, I'm thinking to myself, "I could solve these problems with little to no effort in Win32 or OSX." But an hour later, even after I tracked down all the function-containing libs, I sit here with a nebulous *** [App] Error 1. That kills it folks... Screw Linux. Again. Last time it was Mandrake and KDevelop that refused to work correctly. The time before that it was ummm... Mandrake and Anjuta. The time before that it was RedHat and something-or-other. I don't even *like* Linux in the first place for crying out loud! For a few shining moments two days ago, I thought it would be cool to have my game running on Linux. Now, once again, I remember why I chose NOT to continue that effort many times before. Sheesh, if I was going to throw away two days' worth of my free time on something that produced nothing, I'd have plopped myself down in front of the TV. Questions,
https://www.gamedev.net/blogs/blog/308-clever-title/
CC-MAIN-2018-30
refinedweb
1,685
72.76
NAME
     flock — apply or remove an advisory lock on an open file

LIBRARY
     Standard C Library (libc, −lc)

SYNOPSIS
     #include <sys/file.h>

     int flock(int fd, int operation);

DESCRIPTION
     The flock() system call applies or removes an advisory lock on the file associated with the file descriptor fd. A lock is applied by specifying an operation argument that is one of LOCK_SH or LOCK_EX with the optional addition of LOCK_NB. To unlock an existing lock, operation should be LOCK_UN. If the file is already locked and LOCK_NB was included in operation, the call fails with EWOULDBLOCK instead of blocking. Processes blocked awaiting a lock may be awakened by signals.

RETURN VALUES
     The flock() function returns the value 0 if successful; otherwise the value −1 is returned and the global variable errno is set to indicate the error.

ERRORS
     The flock() system call fails if the file is locked and the LOCK_NB option was specified (EWOULDBLOCK), or if fd does not refer to a valid open file descriptor (EBADF).

SEE ALSO
     fcntl(2), fork(2), open(2), flopen(3), lockf(3)

HISTORY
     The flock() system call appeared in 4.2BSD.

BSD                           January 22, 2008                           BSD
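Python's standard library exposes the same advisory-locking primitive through the fcntl module, so the C call above can be sketched directly. Note this is a sketch of the same semantics, not part of the man page: the lock is advisory (it only coordinates processes that also call flock) and the fcntl module is Unix-only.

```python
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as f:
    # Equivalent of flock(fd, LOCK_EX | LOCK_NB): take an exclusive lock,
    # failing immediately (OSError/EWOULDBLOCK) instead of blocking.
    fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)

    # ... critical section: cooperating processes attempting the same
    # lock will block (or fail with LOCK_NB) until we release it ...

    # Equivalent of flock(fd, LOCK_UN): release the lock.
    fcntl.flock(f.fileno(), fcntl.LOCK_UN)
```

Closing the descriptor also releases the lock, so the explicit LOCK_UN is only needed when the file stays open afterwards.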
http://man.cx/flock(2)
CC-MAIN-2013-20
refinedweb
147
54.12
Bugs with latest beta - I don't know if this is intentional, but editor can't be imported by appex scripts. - Only a few photos (the ones from after updating to this beta) show in photos.pick_image. I know this one wasn't in the last beta. I have 1.6 GB of photos, and only 3 show. The first one is from before the update. Thanks!

I don't know if this is intentional, but editor can't be imported by appex scripts. — This is intentional. The editor in the extension is pretty basic and not scriptable (I don't really see a reason why you'd want to do that anyway). Only a few photos (the ones from after updating to this beta) show in photos.pick_image — Very weird. To be honest, I don't have an explanation for this. I've changed absolutely nothing in the photos module in the last couple of months, as far as I'm aware.

- fitzlarold I happened to notice that any module or script I have in 'site-packages' is utterly ignored by the scripts in 'Extension'. Is this also intentional? Or am I experiencing a fluke?

The extension is basically a completely separate app with its own sandbox, and it cannot share data with the main app except through the shared "Extension" folder. The site-packages folder lives in the main app's documents though, so it cannot be accessed from the extension at all... You could probably argue that site-packages should live in the shared folder, and it's mostly for "historical" reasons that it isn't... This might change before release.

- fitzlarold Thank you for your response! It's not really vital for my purposes, I just wanted to make sure that it wasn't just me. I'm really enjoying some of these new features. I feel like I've just barely scratched the surface. How do you recommend that I fix the photos bug? This has only changed when I updated... And I wanted to use editor in appex to convert your download gist script to a safari app extension. webmaster, it would be possible to call pythonista:// from appex to run a script in the main app.
that is how the old bookmarklet worked. @JonB Unfortunately, that is not possible. webbrowser.open() doesn't work in the extension (and as far as I'm aware, it's technically not possible to make it work there). Another application is this script I wrote to duplicate a file in pythonista.

import appex
path = appex.get_url()[7:]
filename = path.split('/')[len(path.split('/'))-1]
newpath = path[:len(path)-len(filename)]+'duplicated-file.py'
file = open(path)
newfile = open(newpath, 'wb')
newfile.write(file.read())
file.close()
newfile.close()

It is designed to be run by selecting a file in pythonista and pressing share. It (theoretically) gets the contents of the shared file, and puts them into a new file in the same directory. But it doesn't work because it says it doesn't have permission. appex scripts cannot access the main app folders. If you want the ability to copy pythonista scripts to appex, your approach would work, but not if you want to copy them into the same folder. What you want is a "File Action". Check settings for File Actions. This lets you put a custom script in the share sheet of the file browser, and it passes filenames in sys.argv. Then you'd do the same thing, just without using appex, and using sys.argv instead of getting the appex url. For the manipulations of filename, str.partition() might be better than str.split(). @JonB oh, I see that now, it wasn't in the same place so I assumed it was removed in 1.6 @ccc pretty sure partition splits at the first instance... I'd generally recommend using os.path.split for file paths, though it only really makes a difference if you write something that's supposed to run on Windows as well (which uses '\' as the path separator). str.partition() goes left-to-right and str.rpartition() goes right-to-left. Ok. Makes sense now.
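To make the thread's closing advice concrete, here is a small sketch (the path is hypothetical, chosen to resemble the duplicate-file script's input) comparing the manual split('/') slicing with os.path, which handles separators and edge cases portably:

```python
import os.path

# Hypothetical path of the kind appex.get_url() would yield after
# stripping the 'file://' prefix.
path = '/private/var/mobile/Documents/myscript.py'

# Manual slicing, as in the duplicate-file script above:
filename = path.split('/')[-1]

# The same split via os.path, which is separator-aware and portable:
directory, name = os.path.split(path)
newpath = os.path.join(directory, 'duplicated-file.py')

assert filename == name == 'myscript.py'
assert newpath == '/private/var/mobile/Documents/duplicated-file.py'
```

os.path.splitext is similarly useful when only the extension needs to change.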
https://forum.omz-software.com/topic/1979/bugs-with-latest-beta/3
CC-MAIN-2019-18
refinedweb
693
68.16
Singletons Are Good Singletons are good when they are used wisely. One must know the environment in which they are being used. Singletons are a poor choice in a clustered environment. One must be careful to update all the Singletons on all the machines in the clustered domain. This is cumbersome and should be avoided. Stateless beans are a better choice for this environment. Singletons are the only way to resolve certain system resource issues such as doling out hardware-based facilities. Singletons are at least unnecessary. Everything that can be done with a singleton can be done with a class variable or method. And if something has to be unique, it has to belong to the class and not to the objects. Therefore, good or bad, singletons are conceptually wrong. Class variable or method is just an implementation of singleton. Or, a Singleton is just a bunch of class variables and methods. -- DorKleiman "Singletons are at least unnecessary." Completely untrue. Singletons describe unique resources of a system. The simplest and most obvious is stdin/stdout. How about your PC's speaker? Are we going to claim that these are not resources or that they are not unique within a system? Oy! Would you want every third party library you import to be hooked to stdin/stdout? No. However, if a library is going to use stdin/stdout, it has to conform to the underlying system's adjudication of those resources. Applications should only care that there is some stream that they can output audio to. Why can't you have music go to one output and game sound go to some other output, or route audio output from some program to the game input (voice changer, music to play to people in game, etc) without terrible kernel/system hacks? That's great, but in real, existing systems there aren't provisions for that. We are stuck with the hardware/software platforms currently being manufactured, and those systems have many resources which are, in fact, singletons. 
Singletons don't even describe uniqueness, because the namespace of a singleton is not global; for example, in Java you can create another instance of a singleton by loading it in another classloader. Or you can just start the application twice. Thusly the need for a system-level singleton resource that is requested through an OS hook. The stdin/stdout example is, in essence, a fundamental problem with how many people understand Singleton. stdin and stdout are unique, certainly. It also makes sense to have them globally accessible. But they're instances, not classes, so Singleton does not apply since Singleton ensures "a class only has one instance" [GoF]. stdin and stdout are globals, nothing more. And as for other system resources (hardware and the like), it shouldn't be up to client code to determine how many of each resource are present, or to use a non-parameterized method of obtaining them (such as a global function). Why not pass the object an instance of a SystemResourceManager class? IsSingletonLaziness? In the EmbeddedSystems world, there are all kinds of unique pieces of hardware that need to be identified by using a Singleton. This is how one avoids hardware control conflict. Examples abound. Let's try to be a little more realistic here, eh? [ A PC may have zero, one, or many sound output channels. Sound may also need to be sent over a network protocol for remotely displayed applications. The hardware may support a limited number of simultaneous output channels, or software may supply arbitrarily many channels to be mixed down to the hardware output. Historically, many programs have assumed that there will only be one monitor - an assumption that causes poor UI behavior now that desktop PCs can commonly support multiple monitors. ] Singletons are unnecessary, as is every other design pattern, but they can be helpful. For example, my current code often requires a single instance of a class, but a future version may require multiple instances.
In the latter case, it's easy to refactor the singleton to support multiple instances. In other words, singletons are easy to get rid of and help me avoid PrematureGeneralization. Standard output is a global variable, not a singleton. Standard error is another instance of the same class, so clearly it's not a singleton. "Singletons are self-documenting". (ok - don't start your backswing to tee off yet) One thing I do appreciate about Singletons is that the API has been widely read (GoF et al.). Therefore, if I see a class with no public constructors and a getInstance() method, it is obvious what is going on. Having an immediately recognizable API is part of the point of patterns in general - irrespective of whether they are used in the correct context. I'd much rather have to debug someone's code where at minimum the APIs used are common, rather than having to hunt down globals where it isn't immediately obvious that they are global. -DavidVTHokie So you are saying you see Asd.getInstance() and it's good because it can tell you that Asd is a singleton class? Well, I will tell you something better: let's simply name those classes STAsd, because if you see the ST you know that it's a singleton class. See: SingletonPattern , SingletonsAreEvil
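As a concrete reference point for the "class variables and methods" argument above, here is the minimal GoF-style getInstance() shape in Python. The Config name is purely illustrative, and note that Python cannot actually hide the constructor, which itself echoes the page's point that the pattern is really just class-level state:

```python
class Config:
    """Classic Singleton shape: lazily create and reuse one shared instance."""

    _instance = None  # class variable holding the sole instance

    @classmethod
    def get_instance(cls):
        # Lazily construct the instance on first use, then always return it.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


a = Config.get_instance()
b = Config.get_instance()
assert a is b  # every caller sees the same object
```

Nothing stops a caller from writing Config() directly, so in Python the "one instance" guarantee is a convention, not an enforcement, much as the classloader objection above notes for Java.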
http://c2.com/cgi/wiki?SingletonsAreGood
CC-MAIN-2014-52
refinedweb
899
63.8
Test combinations of contouring, filled contouring, and image plotting. For contour labelling, see also the contour demo example. The emphasis in this demo is on showing how to make contours register correctly on images, and on how to get both of them oriented as desired. In particular, note the usage of the "origin" and "extent" keyword arguments to imshow and contour.

import matplotlib.pyplot as plt
import numpy as np
from matplotlib import cm

# Default delta is large because that makes it fast, and it illustrates
# the correct registration between image and contours.
delta = 0.5

extent = (-3, 4, -4, 3)

x = np.arange(-3.0, 4.001, delta)
y = np.arange(-4.0, 3.001, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2

# Boost the upper limit to avoid truncation errors.
levels = np.arange(-2.0, 1.601, 0.4)

norm = cm.colors.Normalize(vmax=abs(Z).max(), vmin=-abs(Z).max())
cmap = cm.PRGn

fig, _axs = plt.subplots(nrows=2, ncols=2)
fig.subplots_adjust(hspace=0.3)
axs = _axs.flatten()

cset1 = axs[0].contourf(X, Y, Z, levels, norm=norm,
                        cmap=cm.get_cmap(cmap, len(levels) - 1))
# It is not necessary, but for the colormap, we need only the
# number of levels minus 1. To avoid discretization error, use
# either this number or a large number such as the default (256).

cset2 = axs[0].contour(X, Y, Z, cset1.levels, colors='k')

# We don't really need dashed contour lines to indicate negative
# regions, so let's turn them off.
for c in cset2.collections:
    c.set_linestyle('solid')
cset3 = axs[0].contour(X, Y, Z, (0,), colors='g', linewidths=2)
axs[0].set_title('Filled contours')
fig.colorbar(cset1, ax=axs[0])

axs[1].imshow(Z, extent=extent, cmap=cmap, norm=norm)
axs[1].contour(Z, levels, colors='k', origin='upper', extent=extent)
axs[1].set_title("Image, origin 'upper'")

axs[2].imshow(Z, origin='lower', extent=extent, cmap=cmap, norm=norm)
axs[2].contour(Z, levels, colors='k', origin='lower', extent=extent)
axs[2].set_title("Image, origin 'lower'")

im = axs[3].imshow(Z, interpolation='nearest', extent=extent,
                   cmap=cmap, norm=norm)
axs[3].contour(Z, levels, colors='k', origin='image', extent=extent)
ylim = axs[3].get_ylim()
axs[3].set_ylim(ylim[::-1])
axs[3].set_title("Origin from rc, reversed y-axis")
fig.colorbar(im, ax=axs[3])

fig.tight_layout()
plt.show()

The use of the following functions, methods and classes is shown in this example:
https://matplotlib.org/3.1.0/gallery/images_contours_and_fields/contour_image.html
CC-MAIN-2020-10
refinedweb
433
54.79
Terribly sorry. It was just a small mistake. I've fixed it. I forgot that I should have returned the deferred object. Sorry everyone.

Hi everyone. Now I'm studying network programming with Python Twisted. I'm pretty familiar with deferred, and I understand asynchronous programming a little bit, but there's still this one issue I still can't solve. Let me hand you some code to understand my situation first.

from twisted.protocols import amp
from twisted.python import log
from commands import Login

class LoginAMPServer(amp.AMP):
    def login(self, username, password):
        # This is just test log, so please spare me.
        log.msg('Got username %s and password %s' % (username, password))
        return {'username': username, 'password': username}
    Login.responder(login)

When the client use callRemote to the login command, it will get the result {'username': username, 'password': username}. Everything works fine. My problem is, this login function will need to fetch some stuff from the database, which operation uses deferred objects, and the login function needs to get the result from that query, return it and yeah... I'm expecting something like this:

from twisted.protocols import amp
from twisted.python import log
from twisted.internet import defer
from commands import Login

class LoginAMPServer(amp.AMP):
    def test_return(self, result):
        log.msg('FIRED %s!' % repr(result))
        username, password = result
        return {'username': username, 'password': username}

    def test_callback(self, username, password):
        return [username, password]

    def login(self, username, password):
        log.msg('Got username %s and password %s' % (username, password))
        deferred = defer.Deferred()
        deferred.addCallback(self.test_callback)
        return deferred.callback([username, password])
        # return {'username': username, 'password': username}
    Login.responder(login)

I'm wondering if I can catch the result caught in the last callback with the parent function, so it can return it as well.
What I know is that callbacks are like a set of function calls which is started when a result is ready, and it passes its return value to the next callbacks, so I'm wondering if I can get the last result and return it with the login function. I apologize if my question isn't clear enough. I hope you people can get my question. Thank you.
http://www.gamedev.net/user/209896-herwin-p/?tab=topics&k=880ea6a14ea49e853634fbdc5015a024&setlanguage=1&langurlbits=user/209896-herwin-p/&tab=topics&langid=2
CC-MAIN-2014-52
refinedweb
362
58.89
- NAME - VERSION - SYNOPSIS - DESCRIPTION - FUNCTIONS - ARGUMENT MATCHING - TO DO - SUPPORT - AUTHOR - ACKNOWLEDGEMENTS - SEE ALSO NAME Test::Mocha - Test Spy/Stub Framework VERSION version 0.50 SYNOPSIS Test::Mocha is a test spy framework for testing code that has dependencies on other objects. use Test::More tests => 2; use Test::Mocha; use Types::Standard qw( Int ); # create the mock my $warehouse = mock; # stub method calls (with type constraint for matching argument) stub( sub { $warehouse->has_inventory($item1, Int) } )->returns(1); # execute the code under test my $order = Order->new(item => $item1, quantity => 50); $order->fill($warehouse); # verify interactions with the dependent object ok( $order->is_filled, 'Order is filled' ); called_ok( sub { $warehouse->remove_inventory($item1, 50) }, '... and inventory is removed' ); # clear the invocation history clear($warehouse); DESCRIPTION We find all sorts of excuses to avoid writing tests for our code. Often it seems too hard to isolate the code we want to test from the objects it is dependent on. I'm too lazy and impatient to code my own mocks. Mocking frameworks can help with this but they still take too long to set up the mock objects. Enough setting up! I just want to get on with the actual testing. Test::Mocha offers a simpler and more intuitive approach. Rather than setting up the expected interactions beforehand, you ask questions about interactions after the execution. The mocks can be created in almost no time. Yet they're ready to be used out-of-the-box by pretending to be any type you want them to be and accepting any method call on them. Explicit stubbing is only required when the dependent object is expected to return a specific response. And you can even use argument matchers to skip having to enter the exact method arguments for the stub. After executing the code under test, you can test that your code is interacting correctly with its dependent objects. 
Selectively verify the method calls that you are interested in only. As you verify behaviour, you focus on external interfaces rather than on internal state. FUNCTIONS mock my $mock = mock; mock() creates a new mock object. It's that quick and simple! The mock object is ready, as-is, to pretend to be anything you want it to be. Calling isa() or does() on the object will always return true. This is particularly handy when dependent objects are required to satisfy type constraint checks with OO frameworks such as Moose. ok( $mock->isa('AnyClass') ); ok( $mock->does('AnyRole') ); ok( $mock->DOES('AnyRole') ); It will also accept any method call on it. By default, method calls will return undef (in scalar context) or an empty list (in list context). ok( $mock->can('any_method') ); is( $mock->any_method(@args), undef ); You can stub ref() to specify the value it should return (see below for more info about stubbing). stub( sub{ $mock->ref } )->returns('AnyClass'); is( $mock->ref, 'AnyClass' ); is( ref($mock), 'AnyClass' ); stub stub( sub { $mock->method(@args) } )->returns|throws|executes($response) By default, the mock object already acts as a stub that accepts any method call and returns undef. However, you can use stub() to tell a method to give an alternative response. You can specify 3 types of responses: returns(@values) Specifies that a stub should return 1 or more values. stub( sub { $mock->method(@args) } )->returns(1, 2, 3); is_deeply( [ $mock->method(@args) ], [ 1, 2, 3 ] ); throws($message) Specifies that a stub should raise an exception. stub( sub { $mock->method(@args) } )->throws('exception'); ok( exception { $mock->method(@args) } ); executes($coderef) Specifies that a stub should execute the given callback. The arguments used in the method call are passed on to the callback. 
my @returns = qw( first second third ); stub( sub { $list->get(Int) } )->executes(sub { my ( $self, $i ) = @_; die "index out of bounds" if $i < 0; return $returns[$i]; }); is( $list->get(0), 'first' ); is( $list->get(1), 'second' ); is( $list->get(5), undef ); like( exception { $list->get(-1) }, qr/^index out of bounds/ ); A stub applies to the exact method and arguments specified (but see also "ARGUMENT MATCHING" for a shortcut around this). stub( sub { $list->get(0) } )->returns('first'); stub( sub { $list->get(1) } )->returns('second'); is( $list->get(0), 'first' ); is( $list->get(1), 'second' ); is( $list->get(2), undef ); Chain responses together to provide a consecutive series. stub( sub { $iterator->next } ) ->returns(1) ->returns(2) ->returns(3) ->throws('exhausted'); ok( $iterator->next == 1 ); ok( $iterator->next == 2 ); ok( $iterator->next == 3 ); ok( exception { $iterator->next } ); The last stubbed response will persist until it is overridden. stub( sub { $warehouse->has_inventory($item, 10) } )->returns(1); ok( $warehouse->has_inventory($item, 10) ) for 1 .. 5; stub( sub { $warehouse->has_inventory($item, 10) } )->returns(0); ok( !$warehouse->has_inventory($item, 10) ) for 1 .. 5; called_ok called_ok( sub { $mock->method(@args) }, [%option], [$test_name] ) called_ok() is used to test the interactions with the mock object. You can use it to verify that the correct method was called, with the correct set of arguments, and the correct number of times. called_ok() plays nicely with Test::Simple and Co - it will print the test result along with your other tests and you must count calls to called_ok() in your test plans. called_ok( sub { $warehouse->remove($item, 50) } ); # prints: ok 1 - remove("book", 50) was called 1 time(s) An option may be specified to constrain the test. times Specifies the number of times the given method is expected to be called. The default is 1 if no other option is specified.
called_ok( sub { $mock->method(@args) }, times => 3 ) # prints: ok 1 - method(@args) was called 3 time(s) at_least Specifies the minimum number of times the given method is expected to be called. called_ok( sub { $mock->method(@args) }, at_least => 3 ) # prints: ok 1 - method(@args) was called at least 3 time(s) at_most Specifies the maximum number of times the given method is expected to be called. called_ok( sub { $mock->method(@args) }, at_most => 5 ) # prints: ok 1 - method(@args) was called at most 5 time(s) between Specifies the minimum and maximum number of times the given method is expected to be called. called_ok( sub { $mock->method(@args) }, between => [3, 5] ) # prints: ok 1 - method(@args) was called between 3 and 5 time(s) An optional $test_name may be specified to be printed instead of the default. called_ok( sub { $warehouse->remove_inventory($item, 50) }, 'inventory removed' ); # prints: ok 1 - inventory removed called_ok( sub { $warehouse->remove_inventory($item, 50) }, times => 0, 'inventory not removed' ); # prints: ok 2 - inventory not removed inspect @method_calls = inspect( sub { $mock->method(@args) } ) ( $method_call ) = inspect( sub { $warehouse->remove_inventory(Str, Int) } ); is( $method_call->name, 'remove_inventory' ); is_deeply( [$method_call->args], ['book', 50] ); is_deeply( [$method_call->caller], ['test.pl', 5] ); is( "$method_call", 'remove_inventory("book", 50) called at test.pl line 5' ); inspect() returns a list of method calls matching the given method call specification. It can be useful for debugging failed called_ok() calls. Or use it in place of a complex called_ok() call to break it down into smaller tests. The method call objects have the following accessor methods: name- The name of the method called. args- The list of arguments passed to the method call. caller- The file and line number from which the method was called. They are also string overloaded. 
inspect_all @all_method_calls = inspect_all($mock) inspect_all() returns a list containing all methods called on the mock object. This is mainly used for debugging. clear clear(@mocks) Clears the method call history for one or more mocks so that they can be reused in another test. Note that this does not affect the stubbed methods. ARGUMENT MATCHING Argument matchers may be used in place of specifying exact method arguments. They allow you to be more general and will save you much time in your method specifications to stubs and verifications. Argument matchers may be used with stub(), called_ok() and inspect. Pre-defined types You may use any of the ready-made types in Types::Standard. (Alternatively, Moose types like those in MooseX::Types::Moose and MooseX::Types::Structured will also work.) use Types::Standard qw( Any ); my $mock = mock; stub( sub { $mock->foo(Any) } )->returns('ok'); print $mock->foo(1); # prints: ok print $mock->foo('string'); # prints: ok called_ok( sub { $mock->foo(Defined) }, times => 2 ); # prints: ok 1 - foo(Defined) was called 2 time(s) You may use the normal features of the types: parameterized and structured types, and type unions, intersections and negations (but there's no need to use coercions). use Types::Standard qw( Any ArrayRef HashRef Int StrMatch ); my $list = mock; $list->set(1, [1,2]); $list->set(0, 'foobar'); # parameterized type # prints: ok 1 - set(Int, StrMatch[(?^:^foo)]) was called 1 time(s) called_ok( sub { $list->set( Int, StrMatch[qr/^foo/] ) } ); Self-defined types You may also use your own types, defined using Type::Utils. 
use Type::Utils -all; # naming the type means it will be printed nicely in called_ok()'s output my $positive_int = declare 'PositiveInt', as Int, where { $_ > 0 }; # prints: ok 2 - set(PositiveInt, Any) was called 1 time(s) called_ok( sub { $list->set($positive_int, Any) } ); Argument slurping SlurpyArray and SlurpyHash are special argument matchers exported by Test::Mocha that you can use when you don't care what arguments are used. They will just slurp up the remaining arguments as though they match. called_ok( sub { $list->set(SlurpyArray) } ); called_ok( sub { $list->set(Int, SlurpyHash) } ); Because they consume the remaining arguments, you can't use further argument validators after them. But you can, of course, use them before. Note also that they will match empty argument lists. TO DO Enhanced verifications Module functions and class methods SUPPORT Bugs / Feature Requests Please report any bugs or feature requests by email to bug-test-mocha at rt.cpan.org, or through the web interface at. You will be automatically notified of any progress on the request by the system. AUTHOR Steven Lee <stevenwh.lee@gmail.com> ACKNOWLEDGEMENTS This module is a fork from Test::Magpie originally written by Oliver Charles (CYCLES). It is inspired by the popular Mockito for Java and Python by Szczepan Faber. SEE ALSO This software is copyright (c) 2013 by Steven Lee. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
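For readers coming from Python, the stub-then-verify flow of the SYNOPSIS maps fairly closely onto the standard library's unittest.mock. This rough analogue reuses the warehouse names from the Perl example purely for illustration, with the "code under test" inlined for brevity:

```python
from unittest.mock import MagicMock

# Like: my $warehouse = mock;  -- accepts any method call by default.
warehouse = MagicMock()

# Like: stub( sub { $warehouse->has_inventory(...) } )->returns(1);
warehouse.has_inventory.return_value = True

# The "code under test", inlined here instead of an Order object:
if warehouse.has_inventory('book', 50):
    warehouse.remove_inventory('book', 50)

# Like: called_ok( sub { $warehouse->remove_inventory($item, 50) } );
warehouse.remove_inventory.assert_called_once_with('book', 50)
```

The parallel is not exact — unittest.mock has no built-in type-constraint matchers like Types::Standard — but the after-the-fact verification style is the same.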
https://metacpan.org/pod/release/STEVENL/Test-Mocha-0.50/lib/Test/Mocha.pm
I am trying to create and update a field with a count of line features (tLayer) within a distance of point features (sLayer). I am attempting to use a combination of AddField_management, arcpy.da.SearchCursor, SelectLayerByLocation_management, arcpy.GetCount_management, and arcpy.da.UpdateCursor.

The code I have for this is currently updating all records for the Line_Count field with the count of the point features (i.e. 2) for only the (second?) record. Though, a print statement following the GetCount line will return the line count for all of the point features (with a few unessential iterations). What do I need to do to appropriately update the Line_Count field for all of the records?

Also, this process will be applied to a large dataset and will be extended to include 'where clauses'; are there any suggestions as to how to make this as efficient as possible? Any tips or suggestions would be helpful. Thanks in advance!

Tess

Updated Line_Count field (inaccurately recording a count of '2' for each record); actual line count values for records as returned by print statement:

    import arcpy
    from arcpy import env

    arcpy.env.OverwriteOutput = True
    defaultGdbPath = 'C:\Topo_Check_Tess_V5.gdb'
    sLayer = 'C:\Topo_Check_Tess_V5.gdb\Stations'
    tLayer = 'C:\Topo_Check_Tess_V5.gdb\Lines'
    #ppLayer = 'C:\Topo_Check_Tess_V5.gdb\Plants'

    arcpy.AddField_management(sLayer, "Line_Count", "SHORT", "", "", "", "",
                              "NULLABLE", "NON_REQUIRED", "")
    TLineCountField = "Line_Count"
    arcpy.MakeFeatureLayer_management(tLayer, "tLayer_lyr")

    with arcpy.da.UpdateCursor(sLayer, TLineCountField) as LineCountList:
        for s_row in LineCountList:
            with arcpy.da.SearchCursor(sLayer, ["OBJECTID", "SHAPE@"]) as s_list:
                for station in s_list:
                    arcpy.SelectLayerByLocation_management(
                        "tLayer_lyr", 'WITHIN_A_DISTANCE', station[1],
                        .000002, "NEW_SELECTION")
                    result = int(arcpy.GetCount_management("tLayer_lyr").getOutput(0))
                    print result
                LineCountList.updateRow(s_row)
                del station
            del s_row
    print "Done"

some severe indentation problems after the...

    for station in s_list:

...I thought it was just the code pasted above but it also exists in your image. A lot of people are having issues lately with mixing cursors as well... you might want to check those out in the python place

Speaking of what Dan said regarding the indentation issues, what IDE are you using to write your scripts? You may want to consider one like PyScripter or PyCharm that will indicate when there are errors like indentation.

Well, here we have another embedded update with a search cursor on the same feature. I don't think this is the correct approach. First read the data into a dictionary (as per other posts by rfairhur24), then do the updates. You are also deleting s_row within the inner loop, so what will the inner loop do with the updateRow without this object?

Thanks, I missed that one too.

The formatting ended up getting pretty messed up between Notepad++ and Visual Studio, and I was terrible about fixing it. I do have a lot of work to do as far as understanding loops and multiple cursors (and probably everything else). I'll check out Richard's posts. Thanks!

As much as I like scripting Python, sometimes the best tool for the job is a pre-existing geoprocessing tool. Have you looked at Generate Near Table? Not only could you determine the counts of nearby features, you could determine a lot of other information about those features as well.

This process was initially built off of a Model Builder model (for simplicity), namely using a bunch of spatial joins; though as the data set is so large, I am trying to redo the process with an object oriented script for more efficient computation and use. This count generation will be the first step of many within the new script.

So I think it is only printing the last row in 'result' for the Line_Count update statement. Is there an efficient way for result (i.e. result = int(arcpy.GetCount_management("tLayer_lyr").getOutput(0))) to be saved as a list or an array that can be written to the Line_Count field? Thanks again, Tess

A post in GIS Stack Exchange provided a solution for this as well as some additional useful comments. Though, as this script will be applied to large data sets where efficiency will be a necessity, I will need to look into wrapping more geoprocessing tools (i.e. near table, frequency table, and others) into this process. Thanks for all the great feedback!

I'm just going to reiterate what Neil said and recommend you check Richard's blog post about cursors and dictionaries (though he accidentally tagged Richard and his page instead of that particular blog post).
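Following the dictionary suggestion in the replies above, here is a rough sketch of the pattern in plain Python. The helper names and the stand-in data are hypothetical (not from the thread, and not arcpy): the point is the shape of the logic, one read pass that builds {objectid: count}, then one write pass with no nested cursors.

```python
# Hypothetical sketch of the "read into a dictionary, then update" pattern.
# Plain Python stands in for the arcpy cursors: stations is a list of
# (objectid, position) rows and lines is a list of x-positions.

def count_lines_near(stations, lines, distance):
    # First pass (the "search cursor"): build {objectid: count}.
    counts = {}
    for objectid, position in stations:
        counts[objectid] = sum(1 for x in lines if abs(x - position) <= distance)
    return counts

def apply_counts(stations, counts):
    # Second pass (the "update cursor"): one write per row, no nested cursors.
    return [(objectid, position, counts[objectid])
            for objectid, position in stations]

stations = [(1, 0.0), (2, 10.0)]
lines = [0.1, 0.2, 9.9]
counts = count_lines_near(stations, lines, 0.5)
print(apply_counts(stations, counts))
# -> [(1, 0.0, 2), (2, 10.0, 1)]
```

In the real script the first pass would use a SearchCursor (or Generate Near Table output) and the second pass a single UpdateCursor keyed on OBJECTID.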
https://community.esri.com/t5/python-questions/calculate-field-values-using-arcpy-da-updatecursor/td-p/568766
Adding and Removing Web References

A Web reference enables a project to consume one or more XML Web services. Use the Add Web Reference Dialog Box to search for Web services that are installed on your local machine or stored on servers within your company's local area network, or to search an Internet registry for companies that provide commercial Web services.

After adding a Web reference to your current project, you can use any methods provided by the indicated Web service within your application. See the code snippet included in this topic for a sample Visual Basic .NET function that calls a method of a Web service.

Common Procedures

To add a Web reference to a project

- In Solution Explorer, select the project that will consume the Web service.
- On the Project menu, choose Add Web Reference. The Add Web Reference dialog box opens.
- Select a link in the Web browser pane on the left to search for Web services installed on your local machine, available on servers within your company's local area network, or available from providers of commercial Web services listed in the Microsoft UDDI Business Registry. Note: the directory in which an XML Web service can be found will vary, depending upon the Web site you are browsing. You can also enter the complete path to a Web service in the URL field.
- Select a Web service from the Services listed in the Web browser pane on the left. The path to the selected Web service appears in the URL field. The Start Browsing for Web Services pane will search this path for Web services and display descriptions for any services found. Note: if you are developing a Web application on a machine that is behind a firewall, and your application will consume Web services found outside of the firewall, you must include the address and port of your network's proxy server in the URL. Ask your network administrator to furnish this part of the URL path.
For more information, see "The proxy settings on this computer are not configured correctly for Web discovery." A simplified URL for a commercial Web service that provides a parts catalog published by a company called "ADatum" might look like this:

A description of the elements and methods provided by the selected Web service appears in the Browser pane on the left. Note: for more information on the items associated with an XML Web service, see XML Web Services Publishing and Deployment.

- Verify that your project can use the Web service, and that any external code provided is trustworthy. Security note: for more information, see Web Application Security at Design Time in Visual Studio and Code Access Security.
- In the Web reference name field, enter a name that you will use in your code to access the selected Web service programmatically.
- Select the Add Reference button to create a reference in your application to the selected Web service. The new reference will appear.

To refresh a Web reference

- Right-click on the Web reference in Solution Explorer and select Update Web Reference from its shortcut menu. This will regenerate the proxy class for the Web service, using the latest copy of its .WSDL description file. Any methods listed in the refreshed proxy should be available from the current version of the Web service.

To remove a Web reference from a project

If your project no longer needs a Web reference to an XML Web service, you can remove the reference from your project.

- In Solution Explorer, select the Web reference you want to delete.
- On the Edit menu, choose Delete. All of the reference information is removed from the project and from local storage.

Code Snippet

The following snippet of Visual Basic .NET code calls a Web service method to define a string value. Key variable names and syntax are highlighted.

To call a Web service programmatically

Use the Web reference name as the namespace, and the name of its .WSDL file as the proxy class.
    ' Visual Basic
    Private Sub Call_Web_Service_Method()
        'Create an object that calls the name for the
        'Web service used in the added Web Reference.
        Dim CallWebService as New ServerName.WebServiceName()
        '(Substitute the names of your own Server and Web
        ' service for ServerName.WebServiceName above.)

        'Use this object to run a method of the Web service.
        'Save the value returned in a typed variable.
        Dim sGetValue as String = CallWebService.MethodName()

        'Set a property within your project
        'to the value stored in this variable.
        Label1.Text = sGetValue
    End Sub

For more information and examples of programming with XML Web services, see Programming the Web with XML Web Services.

See Also

Add Web Reference Dialog Box | Managing Project Web References | Programming the Web with XML Web Services | XML Web Service Discovery | XML Web Services Infrastructure | XML Web Services Overview
https://msdn.microsoft.com/en-us/library/d9w023sx(v=vs.71).aspx
Efficient use of Tiling

By Bevin B. (Intel), published June 15, 2016

Tiling is an optimization technique, usually applied to loops, that orders the application data accesses to maximize the number of cache hits. The idea is simple to illustrate: Imagine you are combining two lists of numbers to enter into a grid, such as a multiplication table, but you are doing so with a multiply machine that repetitively multiplies two numbers and tediously enters results. In real life, the tedious-to-enter numbers are probably huge data structures. In my current work, they are vectors containing thousands of complex numbers, and the multiply process involves complicated distance functions.

Using this multiply machine, you enter each left-hand side (LHS) number and each right-hand side (RHS) number four times, for 16 LHS loads and 16 RHS loads in all. It makes no difference which order you do the cells, so the code usually does this simple order:

Let's say a new version of the amazing multiply machine becomes available that lets you keep two numbers in the machine! If you do the cells in the same order as above, you need only enter each LHS number once, reusing it for the remaining 3 cells in the row. You must still enter the RHS number for each cell. Your total load count shrinks from 32 (16 LHS + 16 RHS) to 20 (4 LHS + 16 RHS).

But there is a better order! After you load two of the RHS numbers into the machine, it is better to load a new LHS and reuse the loaded RHS than to continue along the row. Now the order is (load#1 LHS1, load#2 RHS1), (reuse LHS1, load#3 RHS2), (load#4 LHS2, reuse RHS1), (reuse LHS2, reuse RHS2), (load#5 LHS3, reuse RHS1), (reuse LHS3, reuse RHS2), (load#6 LHS4, reuse RHS1), (reuse LHS4, reuse RHS2)… That means 6 loads to do the first two columns, and another 6 to do the second two columns, so you need 12 loads instead of 20.

This approach, breaking the big grid into tiles small enough that the tile LHS and RHS inputs all fit in the cache, is called tiling.
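The load counts in this walkthrough can be checked with a small simulation. This is a sketch of my own, not from the article: it models one resident LHS slot plus two resident RHS slots with least-recently-used eviction, and counts a load whenever a cell needs a value that is not already resident.

```python
# Count loads for the "multiply machine": one LHS register plus a small
# LRU cache of RHS values, as in the 4x4 example above.
def count_loads(cell_order, rhs_slots=2):
    loads = 0
    lhs_reg = None
    rhs_cache = []                     # most recently used at the end
    for lhs, rhs in cell_order:
        if lhs != lhs_reg:             # LHS register holds one value
            lhs_reg = lhs
            loads += 1
        if rhs in rhs_cache:           # RHS hit: refresh its LRU position
            rhs_cache.remove(rhs)
        else:                          # RHS miss: load, evicting the LRU entry
            loads += 1
            if len(rhs_cache) == rhs_slots:
                rhs_cache.pop(0)
        rhs_cache.append(rhs)
    return loads

# Simple row-major order vs 2-column tiles over a 4x4 grid.
row_major = [(l, r) for l in range(4) for r in range(4)]
tiled = [(l, r) for r0 in range(0, 4, 2)
                for l in range(4)
                for r in range(r0, r0 + 2)]

print(count_loads(row_major))  # 20 loads: 4 LHS + 16 RHS
print(count_loads(tiled))      # 12 loads, matching the text
```

The same harness makes it easy to try other orders (serpentine, larger tiles) and see how close they come to the minimum.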
There are other orders for doing the cells that will do slightly better than this, but in practice, this simple tiling gets almost all the possible win.

Importance of Tiling

While not obvious in the example above, tiling can reduce the required number of loads and stores to approximately the square root of the original number – a huge benefit on machines where each load from main memory is stalling the machine when it could be doing hundreds of instructions. Of course, if you have to write out each cell, that is also expensive and will probably dominate the resulting execution time. Still, tiling often means you can almost eliminate all the input costs and decrease execution time by 3x.

Unfortunately, tiling can be hard to tune because tile size choice depends on the data sizes, the algorithm, and the hardware. Investigate to determine if you can use a cache oblivious algorithm that does not require tuning to the data and the hardware.

Tile Loops

Below is a simple example of tiling a 2D algorithm. Note the code similarity of the two loops. If your code has just a few hot loops, the extra complexity is worth doing inline. However, if your code has many hot loops, you may want to encapsulate the outer loops into a reusable macro or template, placing the inner loop into either a virtual function or a lambda called from the shared outer scheduling code.
    #include <algorithm>
    #include <vector>
    #include <iostream>

    class Table {
        std::vector<int> cells;
    public:
        const int side;
        Table(int side) : side(side), cells(side*side) {}
        int& operator()(int lhs, int rhs) { return cells[(lhs-1)*side+(rhs-1)]; }
    } tableA(12), tableB(12);

    void example() {
        // Not tiled
        for (int lhs = 1; lhs <= tableA.side; lhs++) {
            for (int rhs = 1; rhs <= tableA.side; rhs++) {
                tableA(lhs,rhs) = lhs*rhs;
            }
        }

        // Tiled
        // Note the deliberate inversion of the order going across the tiles
        // with the order within the tile.
        // It results in better multi-level cache behavior.
        const int lhsStep = 2;
        const int rhsStep = 2;
        for (int rhsBegin = 1; rhsBegin <= tableB.side; rhsBegin += rhsStep) {
            int rhsEnd = std::min(tableB.side, rhsBegin + rhsStep);
            for (int lhsBegin = 1; lhsBegin <= tableB.side; lhsBegin += lhsStep) {
                int lhsEnd = std::min(tableB.side, lhsBegin + lhsStep);
                for (int rhs = rhsBegin; rhs <= rhsEnd; rhs++) {
                    for (int lhs = lhsBegin; lhs <= lhsEnd; lhs++) {
                        tableB(lhs,rhs) = lhs*rhs;
                    }
                }
            }
        }

        // Verify the two orders produce the same table.
        for (int lhs = 1; lhs <= tableA.side; lhs++) {
            for (int rhs = 1; rhs <= tableA.side; rhs++) {
                if (tableA(lhs,rhs) != tableB(lhs,rhs))
                    std::cerr << "Wrong value" << std::endl;
            }
        }
    }

Tile Algorithms

Tiling can be applied to more than just nested loops. I first used tiling to great effect on a 1-MB machine in the mid-1980s. A compiler processed each function of the application code through a series of phases, and because neither all the compiler instructions nor all the application functions could fit in the 1-MB memory simultaneously, I modified the compiler to push as many functions as would fit through one phase, then bring in the next phase to push those functions through, and so on.
This general idea of separating scheduling from execution, and applying a reasonable amount of runtime effort to the scheduling, can be implemented using a dependency flow graph and prioritizing the next to-do items by how many inputs are available and how many inputs will be evicted from the cache by other inputs, temporaries, and outputs. You could do deeper analysis to find an even better schedule. It is an optimization problem; you need to estimate the cost of a better solution and its benefit.

Grid Order

While the usual example of loop tiling is matrix multiply, this example does not expose the full range of possibilities when tiling a 3D problem into a multilevel cache hierarchy. Consider this problem: We have two data sets, where the points in each set are themselves vectors. For each pair of points, one from each data set, we want to compute their dot product. In this interesting case, assume each vector is too long to fit in the L1 cache.

We must perform A*B*N operations, each being two fetches and some arithmetic, in any order we prefer, where:

- A and B are the number of elements of the data sets
- N is the length of the vector

The following code tiles into both the L1 and the L2 caches. We chose an nStep small enough that the consecutive executions of the inner loop over n find the A[a,n] values still in L1 from the previous loop.

    void brick() {
        for (int nLo = 0; nLo < nLen; nLo += nStep) {
            for (int bLo = 0; bLo < bLen; bLo += bStep) {
                for (int aLo = 0; aLo < aLen; aLo += aStep) {
                    for (int a = aLo; a < std::min(aLo + aStep, aLen); a++) {
                        for (int b = bLo; b < std::min(bLo + bStep, bLen); b++) {
                            for (int n = nLo; n < std::min(nLo + nStep, nLen); n++) {
                                // combine A[a,n] and B[b,n]
                            }
                        }
                    }
                }
            }
        }
    }

The executions of the b-loop steadily load portions of B into the L2 cache. Instead of evicting the earlier Bs by loading too many of them, we stop after bStep of them are loaded, fetch the next portion of A, and combine it with all the Bs loaded so far.
This loads some As into the cache as well, so instead of evicting all of them, we stop after aStep and combine those with the next set of Bs. Done right, this reduces the total number of loads from main memory from A*B*N to approximately (A+B)*N, which may change the algorithm from memory bound to compute bound – a huge win on a multi-core or many-core machine.

Tile Size

It is difficult to estimate how full you can get the cache before further loads evict useful data because:

- Each load evicts data from the cache.
- The mapping from virtual address to cache entry is somewhat random.
- The caches vary in their associativity.

In addition, prefetching and vector sizes impact performance. For this reason, the best way to get the aStep, bStep, and nStep numbers is by experimentation.

Summary

The parent article, Why Efficient Use of the Memory Subsystem Is Critical to Performance, discussed many ways to efficiently use a memory subsystem, but tiling is by far the most effective (when it works) because it dramatically reduces the data movement in the slower or shared parts of the memory subsystem.

About the Author

Bevin Brett is a Principal Engineer at Intel Corporation, working on tools to help programmers and system users improve application performance. His visit to his daughter in Malawi, where she was serving with the Peace Corps, showed him the importance of education in improving lives. He hopes these educational articles help you improve lives.
https://software.intel.com/es-es/articles/efficient-use-of-tiling
A Filter to Display Neighbors in a List

design (5), django (72)

When you don't spend much time thinking about them, sometimes it's easy to miss out on obvious UI patterns. This really struck me recently when I was making some minor tweaks to my blog. In my last redesign I wanted to focus on articles as much as possible--modeling presentation after books and newspapers--and part of that was throwing out the sidebar. I eliminated as much redundant information as possible, but eventually reached a point where everything remaining seemed too important to cut. I expanded the header to contain as much as possible, but knew I didn't have enough room to fit everything I wanted. So I decided to try making the header content contextual.

Specifically, I broke the header into five segments. With the restrictions of spaces A, B, and C, it was pretty clear that I couldn't fit all of the content I wanted; I'd need to economize a bit. I decided to start out with a simple strategy that I could optimize on later:

- Always show additional nav links in slot C.
- If an article has a related series, show it in slot A.
- Backfill empty slots with related tags.
- Backfill an empty slot with recent entries.
- Backfill an empty slot with random entries.

This meant that navigation pages (like the Life page or the front) would always display the recent and random entries, and an entry would show fairly relevant articles (a mix of entries in the same series and entries sharing the same tags). Although simplistic, these rules do seem to surface relevant content. (I'm tinkering with the idea of replacing random entries with the top five entries by comment volume, which might do a better job of exposing interesting content.)

Since I had five navigational links in the C slot, I decided to include at most five articles in the A and B slots as well. Then I got mentally stuck for a few months: how could I handle a series of articles that was longer than five entries?
Somewhere after implementing half a dozen search prototypes at work I realized there is a pattern for this, and it exists in every search result page's UI. More specifically, what is interesting is what happens when you reach a page such that the first page of results is no longer being shown. Ah, there it is: if I have more than five entries in a series, I just need to display the current entry, the two preceding it, and the two following it. So blinding when I finally saw the light.

Implementing this concept as a template filter is fairly straightforward.

    from django import template

    register = template.Library()

    def nearby(lst, obj, count=5):
        lst = list(lst)
        l = len(lst)
        try:
            pos = lst.index(obj)
        except ValueError:
            pos = 0
        dist = count / 2
        if pos <= dist:
            return lst[:count]
        if pos >= l - dist:
            return lst[l-count:]
        else:
            return lst[pos-dist:pos+dist+1]

    register.filter('nearby', nearby)

Here is usage in a template (assuming an entry named object and a list of entries named series in the template context).

    <h1>{{ object.title }}</h1>
    <ol class="series">
      {% for entry in series|nearby:object %}
      <li>{{ entry.title }}</li>
      {% endfor %}
    </ol>

I can only wonder how it took me so long to figure this one out.
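As a standalone sanity check of the windowing logic (not part of the original post), here is the same function with one adjustment: under Python 3, the `count / 2` above must become integer division, `count // 2`, or the slice indices fail with floats.

```python
# Python 3 version of the nearby() filter body, minus the Django plumbing.
def nearby(lst, obj, count=5):
    lst = list(lst)
    l = len(lst)
    try:
        pos = lst.index(obj)
    except ValueError:
        pos = 0                      # fall back to the start of the list
    dist = count // 2                # integer division for valid slice indices
    if pos <= dist:
        return lst[:count]           # near the start: first `count` items
    if pos >= l - dist:
        return lst[l - count:]       # near the end: last `count` items
    return lst[pos - dist:pos + dist + 1]   # centered window

entries = list(range(10))
print(nearby(entries, 0))  # start of list -> [0, 1, 2, 3, 4]
print(nearby(entries, 5))  # middle        -> [3, 4, 5, 6, 7]
print(nearby(entries, 9))  # end of list   -> [5, 6, 7, 8, 9]
```

The three cases mirror a search pager: pinned to the first page, pinned to the last page, or a window centered on the current item.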
https://lethain.com/a-filter-to-display-neighbors-in-a-list/
Luis Cortes <lcortes at aodinc.com> wrote:

> Hello,
> I have a piece of code that I think should work fine, but I believe that
> I've hit a scope problem (Python 2.1b1). Does anyone out there have a hint
> as to how to fix it??
> code: ...snipped

If I use the exact same code, and add a call to getarguments, it seems to work here, but...

> def getarguments():
...
> print box, gifname, newgifname

Surely *this* print statement gave the expected outcome? It does here.

> sys.exit(0)
> return ( box, gifname, newgifname )

> THE PROBLEM: when I have more than 3 arguments, the global variables do not
> change to the correct variables, but instead print their default values.

--
groetjes, carel
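For what it's worth, the quoted snippet calls sys.exit(0) immediately before the return, so the tuple is never returned. A minimal illustration (not from the original thread; the values are made up):

```python
# sys.exit() raises SystemExit, so any code after it in the same
# function -- including a return statement -- is unreachable.
import sys

def getarguments():
    sys.exit(0)                          # raises SystemExit here...
    return ('box', 'gif', 'newgif')      # ...so this line never runs

try:
    result = getarguments()
except SystemExit:
    result = 'exited before returning'

print(result)  # -> exited before returning
```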
https://mail.python.org/pipermail/python-list/2001-March/074346.html
A fun little man of rutherford scattering

So I coded up rutherford scattering in a real dumb way (you can significantly reduce your considerations by using symmetry and stuff). I sort of monte carlo it with gaussian distributed initial conditions.

    import numpy as np
    from scipy import integrate
    import matplotlib.pyplot as plt

    z0 = -20.0
    samples = 100
    E = .5
    t = np.linspace(0, 50, 100)

    def F(x):
        return .1 * x / np.linalg.norm(x)**1.5

    def D(x, t):
        return np.concatenate((x[3:], F(x[0:3])))

    # 3d axes for the trajectory bundle
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')

    def update(E):
        ax.set_title('E=' + str(E))
        for i in range(samples):
            initxy = 5 * np.random.randn(2)
            init = np.append(initxy, [z0, 0., 0., np.sqrt(E)])
            sol = integrate.odeint(D, init, t)
            ax.plot(sol[:, 0], sol[:, 1], sol[:, 2])

    update(E)
    plt.show()

The bundles that come off look pretty cool.

Lots that one could do with this:

- Compare the outgoing distribution to the formula.
- Try to discern shape of other potentials.
- Do a little statistics to see if charge or whatever can be determined from the data.
- Show center of mass scattering.
- Try 4 particle scattering.

I guess I'm trying to play around in some high energy concepts.
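For the "compare the outgoing distribution to the formula" item: for a true repulsive 1/r (Coulomb) potential, the textbook Rutherford relation between impact parameter b and scattering angle is tan(theta/2) = k/(2 E b). A tiny stand-alone sketch of my own, reusing the snippet's k = 0.1 and E = 0.5 as assumed constants (note the snippet's force law uses norm**1.5 rather than inverse-square, so this exact relation only applies to the genuine Coulomb case):

```python
# Rutherford relation: tan(theta/2) = k / (2*E*b) for a repulsive 1/r potential.
import math

def rutherford_angle(b, k=0.1, E=0.5):
    # scattering angle as a function of impact parameter b
    return 2 * math.atan(k / (2 * E * b))

for b in (0.1, 1.0, 10.0):
    print(b, rutherford_angle(b))
# angle shrinks as the impact parameter grows; head-on (b -> 0) approaches pi
```

Histogramming the simulated final velocity directions against this curve would be the natural check.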
https://www.philipzucker.com/a-fun-little-man-of-rutherford-scattering/
I need something handy that will let me change program options without needing to compile. Right now I have everything bound to mysterious and unexplained hotkeys. There are enough of these that I'm getting confused. Hotkeys are great for turning things on and off, but terrible for fine-tuning options. It looks like I need some sort of interface for my program.

Now, I'm always banging on about how libraries should be as focused and unencumbered as possible, how you shouldn't need to go on a multi-stage fetch quest to get the thing to compile like you were trying to assemble the pieces of the Tri-Force or something. The problem is, there is pretty much no way around this. Interfaces need to use fonts, and fonts are fiendishly complex beasts. Interfaces need to render stuff, and rendering is complicated. They need to process keyboard and mouse input, and those are complicated. (It seems simple, but tracking keyboards and mouse wheels and all the different things that can happen with the CTRL, ALT, and NUMPAD… it gets very hairy.) That's a lot of things for one library to do, on top of running a window system with buttons and scrollbars and the ability to tab between interface elements and all of the other tiny details that we all take for granted.

Still, the inability to adjust options is really killing my productivity. So let's see what we can find. Let's see. We need an interface system that works with OpenGL. We've got Crazy Eddie's GUI system, FLTK, GLAM, GLUI, GL Gooey, LibUFO, Qt, Turska, wxWidgets… and probably others. Wow. That is a very daunting list.

Some of these libraries won't have the functionality I need. Some will, but will have horrible dependency issues (some may even encapsulate others, or be forks of other items in the list, I don't know yet) that will make them prohibitively difficult to use. Others will have horrible C++ interfaces that make a cluttered mess of your code.
Some will have the functionality I need, but will lack the visual customization I need. (I know a few of these are designed to create standard "windows" style interfaces, which would look just awful floating atop my Seussian fantasy world.) Some might have all of the above, but be tied to a particular platform. Or lack proper documentation. Or turn out to be long-abandoned projects that no longer work due to shifting technology.

I could easily burn a couple of days downloading each and every library, getting it to (hopefully) compile, testing it out, evaluating the interface, the looks, reviewing the documentation, and making sure it actually has all the features it claims to have. (I don't want to install the thing into my project and three days later discover that something fundamental like "text input boxes" is still on the to-do list.) This promises to be a massive investment of time.

The truth is, I'm not ready to make this kind of commitment. Really. It's not you, it's me. I just can't afford to be tied down at this point in the development cycle. I began this project as a tech demo, that could become a game. But the tech demo comes first. And adding an all-singing, all-dancing, multi-window interface falls way, way outside of that scope.

Thinking about this more, what I really need is a "Quake-style" console. If you ever hit the tilde key in Quake, you probably saw the text window that opens up. That window lets you enter commands, change variables, and generally control the application without needing menus and scroll boxes and the like. This is exactly the reason console windows exist: for the proof-of-concept / prototyping stage of a project.

I ask Google, and it brings me this: glConsole. The bad news is that it was developed on another platform (Linux, I suspect) and there are no binaries available. So I'm going to have to get it to compile.
It uses CMAKE, which is supposedly a totally cross-platform system for compiling code, a claim of which I am mildly skeptical. Well, it can't hurt. Let's see how this goes…

(Two hours later.)

I return to you a changed man. Everything I knew was wrong. CMAKE took a little self-education to use, but once I had the knowledge it did its job flawlessly. There was an odd compile problem where glConsole complained that it didn't match its own version number or some such piffle, but commenting out a couple of lines fixed that right up.

Let me tell you about glConsole…

It works. It works beautifully. It works seamlessly. It dropped directly into my project with no additional dependencies. It displays text, but I have no idea how. I don't see a bitmap among its resources and it doesn't do any font loading. It just magically makes text. I suppose I could look at the source, but so far there's been no reason to do so.

I was able to add it to my project in, I kid you not, five lines of code. I tell it when to open and close. I tell it when to draw. I feed it keypresses. That's it. Done. I don't even need to initialize it. Even better, it comes with a bunch of other features:

- Built-in environment variables. I create them and access them at will. It keeps track of them for me, saves them, remembers them between sessions. It even has auto-complete, so it can remind me of my own variable names while I'm typing them in the console.
- Environment variables can be organized into groups. I have one group: render.wireframe, render.lighting, and render.textured. And another group: mouse.invert, and mouse.sensitivity. This system can plug into another, so when I DO add a full-featured interface, I'll already have all of the functionality implemented. I'll just need to have the interface change the already-existing environment variables and it will be done.
- It can save options into any file I like. It can save groups or sub-groups of options.
So, I can stick all of the player data into a group: player.position, player.heading, player.camera_zoom, etc. Then I can save all of the player.* options into a file. Boom. My save-game system is done, and I wasn't even planning on working on that this week. I got it done as a side-effect of adding this library.

- It keeps a command history, which persists between sessions. So, I open up the console window and hit the up arrow to cycle through previously typed commands, even if I closed the program since I typed them. This is incredibly useful when I'm debugging and iterating very quickly, needing to try the same options over and over.
- I can execute functions from within the console. So, I can have commands to save the game, load the game, restart the rendering subsystem, purge texture data, or whatever else is needed for testing.

I'm dumbfounded. Not only did this library perfectly and seamlessly solve the problem I had in mind, but it solved future problems I hadn't yet considered. Author Gabe Sibley has shamed me as a coder. I've used hackjob libraries downloaded from the internet. I've used enterprise level solutions that cost tens of thousands of dollars to license. And nothing I've ever used has been as smooth to integrate and intuitively designed as glConsole.

"If I could write software like that, I'd call myself a programmer."

Onward!

87 thoughts on "Project Frontier #16: Interface'd"

It sounds like Gabe is using the force, magic, ritualistic sacrifice and prayer groups to power his library. Really, it sounds too awesome.
‘Programmer’ is secretly code for ‘Wizard’, one of the best kept secrets of the era is the fact that computers actually run on black magic. It’s magic smoke. Don’t let it out. This. ^^^ <— good read! :D He took the same route Hot Toys did for their scary realistic action figures – he sold his soul to the devil. No other explanation. My guess is that “Gabe” is a pseudonym for “John”, and “Sibley” is a pseudonym” for “Carmack”. It’d make sense, right? Maybe he sold his soul to the devil at a crossroads in exchange for programming skill? Hi. I must confess, I haven’t read the whole post yet (*hermph*work*rmbl*) – but I was reminded of GWEN* (by Garry of Garry’s mod). Haven’t had a look at it yet, but he says stuff you/we/they like about simple and lightweight and namespaces etc. I know you’ve probably committed to glConsole and that GWEN is not a console UI – but this is just a heads up kind of comment. For another project or something. Keep up the good work. :) * Assemble the Tri-Force. -snicker- Also, that code library totally does sound awesome. Onward! This is exactly the reason console windows exist: For the proof-of-concept / prototyping stage of a project. So…exactly why do PC games (Portal, Fallout 3/:NV, etc.) keep it in? I know Descent/older RTS’s had ‘chat codes’ which in single player gave you cheat capabilities, but nothing along the line of a full on console, that I remember, at least. EDIT: Sorry, reply where it was suppose to be its own post. Maybe because in the early stages it helps in debugging the game, and the it’s too much hassle to put it in before a demonstration and take it out afterwords. The real answer is probably “Why not?”. Well, probably because it’d be too much of a pain to just remove. Or there just isn’t much reason to remove it. Or if it’s in Fallout 3/NV’s case, they leave it in because they know something will break, and this way you can fix it if needed. 
Or maybe, like Shamus’ solution, removing it means writing new save game code.

My guess is: 1) people like them – they’re an easy way to get all your cheats (hello, Elder Scrolls/new Fallout games) and 2) in case there’s an unexpected error while a user is playing (door to a vault won’t unlock, perhaps?) it provides a very easy way for users to handle the bug, giving the developer a little bit of leeway to point towards as a temp fix to keep players happy-ish. Less angry, anyway.

I imagine they’re also handy when making mods. It’d be annoying for the developers to take it out and put it back in all the time too. I mean, most DLCs are just “mods” for games anyway.

This is a big reason why the console is included in the Source engine. Right now, there’s no option in the GUI to manually change multiplayer levels in Portal 2. The host has to type “changelevel [name of level]” in the console. It’s also how all of the cheats are implemented, et cetera. Other than the options menu, loading & saving, and so on, the console is the GUI for the Source engine. Everything you need to just play the game is in the menus. Everything else is a console command. Once you have a few commands memorized, it’s a snap.

EDIT: Oh yeah, the other big thing is error reporting. All errors and some miscellaneous feedback (like “game saved successfully”) will be put in the console window. It’s handy even if you’re not modding/debugging to see if playing normally is causing any errors to pop up, as they sometimes do right after an engine update breaks some minor feature of a level that they missed in testing.

EDIT 2: Oh yeah, and I’m sure it’s handy for porting to other platforms. No platform-dependent GUI (or a small, simplified GUI for the main game + console for everything else) means more time actually making the game.

Because gamers like cheat codes, and don’t like being robbed of cheat codes they could have.

But the console window doesn’t exist on consoles, right? Oh, the irony!
Because there’s never a stage in the project, including after the game has shipped and users start reporting bugs none of your testers found, that a console isn’t useful.

“So…exactly why do PC games (Portal, Fallout 3/NV, etc.) keep it in?”

Because any change to your code – including removing debugging code – has the potential to introduce new bugs and/or otherwise subtly alter behavior :)

Because both of those games were built to support user-developed mods, which will be going through their own proof-of-concept/prototyping stages. That’s my theory, anyway.

Because taking out the console is a significant change, which would require debugging afterwards; the debugging of ‘console removed for production’ would clearly have to be done without the console. It would require beta testing every feature of the release version over again, roughly doubling the testing requirements. Removing user access to the console could be done in an undocumented setting file, but why bother?

Serendipity is awesome, isn’t it? I love it when things kind of fall into place like that. It makes me feel good for days at a time.

This post series is quite rapidly turning into a useful guide book for aspiring game creators. ‘This is a thing you have to do at some point. This is what I did, and how and why it works for me. Here are some alternatives that might work better for you. Ask your doctor if Zombrex is right for you.‘ I know that if I ever give in and start a (probably terribly ill-conceived) game creation project, I’ll revisit this post series for sure.

My thoughts exactly; this series is incredibly useful and impressive. I’m a bit worried for the save system, though: will it still be practical when a lot of data clutters the save file (hp, inventory, etc.)? If I may, will we have a post on the sky system? I’d like to know how you made the day/night system.

I believe he commented briefly on it in a video (week 3, perhaps? More likely week 4… well that covers both options).
Essentially it’s just a time-dependent lighting system.

So I guess the moral is “use code by robot rocket scientists”.

Finally, using pre-made external libraries goes right for once.

Ever heard the phrase ‘too good to be true’? If it’s this good, it must be doing something truly horrible in the background. I’m guessing it contains code that makes your program part of a vast, world-spanning, cloud-based self-aware computer god. In this case I for one welcome our computer overlords…

I did a couple years of a comp. sci. degree and was always terrible at getting libraries that I didn’t write to work properly… this being when the libraries are largely given to you by the professor (which means not having to track them down, which can be a monumental task). In this case I think Gabe Sibley just created something so unsexy as to be really good. He didn’t put in any GUI, any graphics, any bells and whistles… who would ever want this? Just people who want simple functionality, but most programmers know that people like that don’t exist.

Depends. GLConsole, being the monolithic and easy to use thing it is, has of course certain limitations. If you’re okay with the default behaviour, it’s of course a great library. However, if you want it to use a specific font, or use your authoritative event queue for input reading, or would like special keybindings like the readline/emacs-type ones, or want it to read the history from something other than a plain file, etc., vanilla GLConsole is not for you. You’d either want to modify it, look for something different, or roll your own. :P

You say that like it’s a bad thing.

Maybe “Gabe Sibley” is a pseudonym for Matthew Sobol, and glConsole is part of his Daemon. (I wonder how many people will get that.)

At least one.

Shaming as a coder is one of those resources, like irony and mistakes, that the universe supplies in surprisingly generous quantities: TeX, Doom, Dunkosmiloolump, git, several Google products, etc.
Maybe I could see further if I weren’t in the shadow of giants!

Also, on a different note, surely a post about consoles should reference Neal Stephenson’s _In the beginning was the command line_.

Photoshop artists already can make themselves look like bodybuilders…

Ha! All too true. :D (Laughing quite hard here.)

At the beginning of this post I thought “Tsk, why not just use an ini file? It’s easy to adjust, even for possible end users, and it won’t require recompiles.” I giggled when you mentioned all sorts of possible libraries that would no doubt bring endless clutter, thinking “No way /our/ Shamus would use those”. Then when I saw the glConsole screenshots and explanation, I was baffled. For the umpteenth time in the saga of Project Frontier. Way to go, sir.

I actually have been using ini files until now, but it’s not really a good way to (say) turn off texturing. I’m probably going to rip out the ini code (it’s all Microsoft API calls anyway) at some point.

It uses glutBitmapCharacter(). It also uses GLUT for keyboard input. I’m a bit surprised that doesn’t clash with SDL also monitoring keyboard input.

It does? It requires me to send it input. (Thank goodness, I don’t want it intercepting input.)

Maybe there’s an option to have it manage its input. Or maybe this is because I created the window via SDL and not GLUT. Ah, okay, using KeyboardFunc(). This still calls glutGetModifiers() to get the currently active modifier keys, though.

Question: Are the dark patches on the ground shadows from cloud cover… or just a different color of dirt? Over the last few posts I had been seeing those and assuming that it was a lighting effect due to clouds… but most of those images were closer shots with not much sky visible. The first one in this post has those ‘patches’ but I’m not seeing clouds.

Second question: If those are just dirt patches, when you do eventually add sun and clouds, can they please cast shadows on the ground like that? They look great.
:) Probably not an easy task… but the world would look great with cloud shadows rolling over those hills. Oh, also, I am definitely going to take a look at glConsole.

Dark spots are just dirt, contrasting with sand. I haven’t decided on a shadowing system yet.

It looks kind of like an army of shadows is swarming out from under the grass to converge on and devour the player. Now Shamus will have to include monsters whose only visual manifestation is an eddy in the lighting system.

Who’s Eddy? *ducks* That would be kind of neat, though.

I keep forgetting the amazing extent to which my graphics experience is with incredibly simplified tools. With Game Maker (which has a coding language with fairly Java-like syntax) you just draw_text(x,y,string) to make text show up (there are other functions like draw_text_transformed and draw_text_ext for more options). And it has a built-in “debug mode” where it runs a second window at the same time as the game where you can track and change any variable. Even Java has stuff like much easier file importing and tons of built-in libraries for GUI components and sprite drawing.

Is that the finished avatar? He’s lookin’ good.

Definition of a well-designed library:

Not to rain on your parade here, but what about licensing? Currently, you’re closed source and don’t even sell it to anyone. But if you would want to go commercial, external library licenses can put a damper on that. Especially the virus-like GPL. This one is LGPL, as far as I can tell, so that’s still fine, I believe.

I’m personally very fond of the GPL so I wouldn’t use that adjective. But the point is valid – a GPLed library, or a few other licenses, could interfere with going commercial.

If you use a GPL library anywhere, you are forced to go GPL yourself. GPL works fine until you start writing software for a living, at which point it becomes a huge “implement it yourself (or be my bitch)!” sign.
Well, the thing is, without that other person/group writing the library for you, you would have to implement it yourself anyway. What alternative do you prefer, then? People always putting their code under a BSD-style license? As a FLOSS contributor myself, I certainly can understand that someone doesn’t want to see their code changed/adapted/improved but then no one else being able to benefit from those improvements.

Well, GPL IS viral. Whether or not you’re a fan is immaterial. It is intentionally designed to be viral.

Except that it doesn’t infect other code on its own, so the metaphor is rather… wonky. All it does is ensure that information that is free (as in speech) stays free, and when someone did something cool to improve on it, that information should be free too. If you want to call that “viral”, that’s your choice, but I for one think that’s a rather mean-spirited word, what with its connotations and all. Of course, the strict GPL does mean that using GPL’d libraries in a non-compatible licensed project does not fly. That’s why there’s the LGPL. In fact, off the top of my head, I can only name one common library that’s GPL instead of LGPL: libmad.

Except it does infect licenses. Of course it doesn’t touch code, it’s a license! The thing is that I cannot use a GPL library and keep my own code closed. I have to open up code that the GPL user didn’t even write. It doesn’t get much more viral than “force other people to change their license”. Good? Bad? Depends. But it is designed to be viral.

Force? Well, nobody forces you to use code you didn’t write in your project…

Good, because that is entirely beside the point.

I don’t think it is. To be considered “viral”, it would have to infect things on its own and spread. Like, you know, a virus. If you however make a conscious choice to use code written by someone else and build upon it, that you then have to adhere to the license is not viral.

Correct. The term “viral” doesn’t apply.
The GPL doesn’t come and infect your code; you make a decision to use GPL’d code yourself. GPL infects my code like pizza infects my stomach – only if I want it to.

On the other hand, it’s hard to call GPL’d code “free” (in either sense) because it doesn’t come free of restriction; it comes burdened down with restrictions. Fortunately, the LGPL exists for the majority of people who don’t get paid to release software to the world, but get paid to make software just for their company. Even more fortunately, I’ve always worked on websites, where we use software but never release software, so I’ve never really had to decide whether to recode a handy library or not :)

Depends on how you define “free”. The restrictions are there to keep information in the open. A necessary “evil”, in my eyes.

The GPL and the LGPL are two different parts of the same thing. The GPL is for stand-alone programs, the LGPL is for libraries. Both require you to make changes to the actual code open under the same license. I.e. if you change the actual code of an LGPL’d library, you still have to make those changes public. The only difference is that the LGPL allows you to, from your stand-alone program, link against the library without having to make said stand-alone program (L)GPL too. The LGPL is basically just an extra clause needed due to the unique nature of libraries. As such, there is no actual GPL vs. LGPL “dispute”. GPL is for stand-alone programs, LGPL for libraries. You’ll be hard-pressed to find more than a handful of libraries that are GPL instead of LGPL (like I said above, the only example I can think of is libmad).

I believe the usage of “viral” here is as in “viral video”.

The term “viral” certainly does apply to the GPL. GPL software easily enters other software, through the actions of programmers. The software becomes GPL, and must be published in such a manner as to be free to infect other software.
Just because it does all this in the presence of active human assistance doesn’t make it less viral. Many biological viruses require active assistance from the cells they infect.

The last screenshot reminds me of Frozen Synapse. By the way, Shamus, I have a spare copy on my Steam account (they only sell this game in pairs for some odd reason); would you like it? Or maybe someone else in the Spoiler Warning crew?

I’m not a big fan of simultaneous turn-based (why does everyone keep trying to incorporate “twitch” gaming into everything?). Is there a way to set it to just regular-old turn-based?

I don’t know; besides, I think it might spoil the fun of trying to figure out what to do next, but I saw that there are several game modes in it.

Frozen Synapse has no “twitch” gaming. Old-fashioned turn-based would actually ruin what the game does, which is trying to use tactics to determine what your opponent will do. You each take your turn by giving orders, but you don’t see the orders given by your opponent. Then, when you’ve both accepted all your orders, the game runs both turns at once and you see what happened. Then you both go back to taking turns. That’s why the game allows you to play over e-mail.

I also have an extra copy.

So, what was looking to cost you a lot of time actually ended up saving time in solving future problems already? Life can be very cool sometimes.

It makes me so happy to see you making such steady and significant progress, Shamus. I hope you’re finding it satisfying and rewarding, and I’m really looking forward to whatever you end up releasing.

This post contrasts so much with every single other post you’ve made in reference to using outside libraries. I’m glad that was so easy for you, Shamus! :)

Once you get to the stage of working on a real GUI it might be worth looking at awesomium. The Overgrowth guys use it to good effect.

And I thought that name was a funny placeholder… Cool!
I just wrote in the last post’s comments what a pain it must be to store your textures in SVG images… but awesomium can render into 3D textures, includes the Chromium engine, and Chrome renders SVGs, so there you go… crazy.

This entry made me smile. +2 library of awesome.

Despite not being to a point in our project to worry about such things, Shamus’ praise of glConsole was so strong I thought I may as well get it in now… most of a day later I’m pulling my hair out. I don’t want to sound like I’m blaming Shamus for gushing about an awesome tool I don’t understand, but I’ve only ever worked in IDEs (MSVC++ atm), and just don’t know how cmake is supposed to work. Will this compile a code file I keep in the project? Will it stick a folder in the bowels of Windows that I need to include like OpenGL? Am I going to need to compile part in VC and finish with a console command to cmake every time I build? I’ve had no luck with tutorials. Everything is so balkanized, every implementation so different, so much different terminology, that I can’t tell what’s signal; it’s all noise. Maybe it’ll all be clearer tomorrow, but any help would be very much appreciated.

CMake should create a Visual Studio project file, which you can then open and compile.

Many thanks, got it. OMG, CMAKE HAS A GUI. The app is in the bin folder in the cmake dir, and I had no reason to look there. Once I found that it was easy as silk. Need to figure out makefiles someday, but for now, I appreciate VC++ a lot more. Cheers.
https://www.shamusyoung.com/twentysidedtale/?p=12304
You've heard people say for years that Groovy is a dynamic programming language for the JVM. But what does that really mean? In this Practically Groovy installment, you'll learn about metaprogramming — Groovy's ability to add new methods to classes dynamically at run time. This flexibility goes far beyond what the standard Java language can offer. Through a series of code examples (all available for download) you'll see that metaprogramming is one of Groovy's most powerful and practical features.

Modeling the world

Our job as programmers is to model the real world in software. When the real world is kind enough to offer up a simple domain — animals with scales or feathers lay eggs, and animals with fur have live births — it's easy to generalize the behavior in software, as shown in Listing 1:

Listing 1. Modeling animals in Groovy

    class ScalyOrFeatheryAnimal{
      ScalyOrFeatheryAnimal layEgg(){
        return new ScalyOrFeatheryAnimal()
      }
    }

    class FurryAnimal{
      FurryAnimal giveBirth(){
        return new FurryAnimal()
      }
    }

Unfortunately, the real world is rife with exceptions and edge cases — duck-billed platypuses are furry and lay eggs. It's almost as if every one of our carefully considered software abstractions is being targeted by a dedicated team of contrarian ninjas. If the software language you use to model the domain is too rigid to deal with the inevitable exceptions, you can end up sounding like an obstinate civil servant mired in a petty bureaucracy — "I'm sorry, Ms. Platypus, but you're going to have to give birth to live young if you'd like to be tracked by our system." On the other hand, a dynamic language like Groovy gives you the flexibility to bend your software to model the real world more accurately, rather than presumptuously (and futilely) asking the world for concessions. If the Platypus class needs a layEgg() method, Groovy makes it possible, as shown in Listing 2:

Listing 2.
Adding a layEgg() method dynamically

    Platypus.metaClass.layEgg = {->
      return new FurryAnimal()
    }

    def baby = new Platypus().layEgg()

If all of this talk of furry animals and eggs seems frivolous, then consider the rigidity of one of the most often used classes in the Java language: the String.

Groovy's new methods on java.lang.String

One of the joys of working with Groovy is all of the new methods that it adds to java.lang.String. Methods such as padRight() and reverse() offer simple String transformations, as shown in Listing 3. (For a link to the GDK's list of all of the new methods added to String, see Resources. As the GDK cheekily says on its first page, "This document describes the methods added to the JDK to make it more groovy.")

Listing 3. Methods added to String by Groovy

    println "Introduction".padRight(15, ".")
    println "Introduction".reverse()

    //output
    Introduction...
    noitcudortnI

But the additions to String don't end with simple parlor tricks. If the String happens to be a well-formed URL, in a single line you can transform that String into a java.net.URL and return the results of an HTTP GET request, as shown in Listing 4:

Listing 4. Making an HTTP GET request

    println "".toURL().text

    //output
    <html>
    <head>
    <title>ThirstyHead: Training done right.</title>
    <!-- snip -->

For another example, running a local shell command is just as easy as making a remote network call. Normally I'd type ifconfig en0 at the command prompt to check my network card's TCP/IP settings. (If you use Windows® instead of Mac OS X or Linux®, try ipconfig.) In Groovy, I can do the same thing programmatically, as shown in Listing 5:

Listing 5. Making a shell command in Groovy

    println "ifconfig en0".execute().text

    //output
    en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
      ether 00:17:f2:cb:bc:6b
      media: autoselect
      status: inactive
    //snip

I'm not suggesting that the joy of Groovy is that you can't do the same things in the Java language. You can.
The joy is that these methods have seemingly been added directly onto the String class — which is no mean feat, given that String is a final class. (More on that in just a moment.) Listing 6 shows the String.execute().text Java equivalent:

Listing 6. Making a shell command in the Java language

    Process p = new ProcessBuilder("ifconfig", "en0").start();
    BufferedReader br =
      new BufferedReader(new InputStreamReader(p.getInputStream()));
    String line = br.readLine();
    while(line != null){
      System.out.println(line);
      line = br.readLine();
    }

It almost feels like getting bumped from one window to the next at the Department of Motor Vehicles, doesn't it? "I'm sorry, Sir, but in order to see the String you requested, you'll first need to stand in that line over there to get a BufferedReader." Yes, you can build convenience methods and utility classes to help abstract that ugliness away, but all of the com.mycompany.StringUtil workarounds in the world are a pale substitute for adding the method directly where it belongs: the String class. (Platypus.layEgg(), indeed!)

So how does Groovy do that, exactly — bolt new methods onto classes that can't be extended or modified directly? To understand, you need to know about closures and the ExpandoMetaClass.

Closures and the ExpandoMetaClass

Groovy offers an innocuous but powerful language feature — closures — without which the Platypus would never be able to lay an egg. A closure is, quite simply, a named hunk of executable code. It is a method without a surrounding class. Listing 7 demonstrates a simple closure:

Listing 7. A simple closure

    def shout = {src->
      return src.toUpperCase()
    }

    println shout("Hello World")

    //output
    HELLO WORLD

Having free-standing methods is pretty cool, but not nearly as cool as the ability to bolt those methods onto existing classes. Consider the code in Listing 8, where instead of creating a method that accepts a String as a parameter, I add the method directly onto the String class:

Listing 8.
Adding the shout method to String

    String.metaClass.shout = {->
      return delegate.toUpperCase()
    }

    println "Hello MetaProgramming".shout()

    //output
    HELLO METAPROGRAMMING

The no-argument shout() closure is added to the String's ExpandoMetaClass (EMC). Every class — both Java and Groovy — is surrounded by an EMC that intercepts method calls to it. This means that even though the String is final, methods can be added to its EMC. As a result, it now looks to the casual observer as if String has a shout() method.

Because this kind of relationship doesn't exist in the Java language, Groovy had to introduce a new concept: delegates. The delegate is the class that the EMC surrounds. Knowing that method calls hit the EMC first and the delegate second, you can do all sorts of interesting things. For example, notice how Listing 9 actually redefines the toUpperCase() method on String:

Listing 9. Redefining the toUpperCase() method

    String.metaClass.shout = {->
      return delegate.toUpperCase()
    }

    String.metaClass.toUpperCase = {->
      return delegate.toLowerCase()
    }

    println "Hello MetaProgramming".shout()

    //output
    hello metaprogramming

Again, this might seem frivolous (or even dangerous!). Although there's probably little real-world need to change the behavior of the toUpperCase() method, can you imagine what a boon this is for unit testing your code? Metaprogramming offers a quick and easy way to make potentially random behavior deterministic. For example, Listing 10 demonstrates overriding the static random() method of the Math class:

Listing 10. Overriding the Math.random() method

    println "Before metaprogramming"
    3.times{
      println Math.random()
    }

    Math.metaClass.static.random = {->
      return 0.5
    }

    println "After metaprogramming"
    3.times{
      println Math.random()
    }

    //output
    Before metaprogramming
    0.3452
    0.9412
    0.2932
    After metaprogramming
    0.5
    0.5
    0.5
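For contrast, plain Java offers no way to redefine Math.random() at run time. The usual workaround (an illustrative sketch of mine, not from the article) is to hand the code under test a java.util.Random built from a fixed seed, so every run sees the same "random" sequence:

```java
import java.util.Random;

class SeededRandomDemo {
    // Plain Java cannot override Math.random() the way Groovy's EMC can.
    // Instead, inject a Random seeded with a known value: the same seed
    // always produces the same sequence, making tests deterministic.
    static double firstDraw(long seed) {
        return new Random(seed).nextDouble();
    }

    public static void main(String[] args) {
        // Two independently constructed Randoms with the same seed agree.
        System.out.println(firstDraw(42) == firstDraw(42));
    }
}
```

This buys determinism only for code that accepts the Random as a dependency; Groovy's metaClass override also covers code that calls Math.random() directly.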
No need to create an interface and stub out an entire mock object — you can strategically override that one method and return a simple mocked-out response. (You'll see examples of using Groovy for unit testing and mocking in the next section.)

Groovy metaprogramming is a run-time phenomenon — it lasts as long as the program is up and running. But what if you want your metaprogramming to be more limited (especially important when writing unit tests)? In the next section, you'll learn how to scope the metaprogramming magic.

Scoping your metaprogramming

Listing 11 wraps the demo code I've been writing in a GroovyTestCase so that I can begin testing the output a bit more rigorously. (See "Practically Groovy: Unit test your Java code faster with Groovy" for more on working with GroovyTestCase.)

Listing 11. Exploring metaprogramming with a unit test

    class MetaTest extends GroovyTestCase{
      void testExpandoMetaClass(){
        String message = "Hello"

        shouldFail(groovy.lang.MissingMethodException){
          message.shout()
        }

        String.metaClass.shout = {->
          delegate.toUpperCase()
        }
        assertEquals "HELLO", message.shout()

        String.metaClass = null
        shouldFail{
          message.shout()
        }
      }
    }

Type groovy MetaTest at the command prompt to run this test. Notice that you can undo the metaprogramming by simply setting the String.metaClass to null. But what if you don't want the shout() method to appear on all Strings? Well, you can simply tweak the EMC of a single instance instead of the class, as shown in Listing 12:

Listing 12. Metaprogramming a single instance

    void testInstance(){
      String message = "Hola"
      message.metaClass.shout = {->
        delegate.toUpperCase()
      }
      assertEquals "HOLA", message.shout()

      shouldFail{
        "Adios".shout()
      }
    }

If you are going to be adding or overriding several methods at once, Listing 13 shows you how to define the new methods in bulk:

Listing 13.
Metaprogramming many methods at once

    void testFile(){
      File f = new File("nonexistent.file")
      f.metaClass{
        exists{-> true}
        getAbsolutePath{-> "/opt/some/dir/${delegate.name}"}
        isFile{-> true}
        getText{-> "This is the text of my file."}
      }

      assertTrue f.exists()
      assertTrue f.isFile()
      assertEquals "/opt/some/dir/nonexistent.file", f.absolutePath
      assertTrue f.text.startsWith("This is")
    }

Notice that I no longer care if that file actually exists in the filesystem. I can pass it around to other classes in this unit test, and it will behave as if it were a real file. Once the f variable falls out of scope at the end of this test, the custom behavior does as well.

While the ExpandoMetaClass is undeniably powerful, Groovy offers a second approach to metaprogramming with its own unique set of capabilities: categories.

Categories and the use block

The best way to explain a Category is to see it in action. Listing 14 demonstrates using a Category to add a shout() method onto String:

Listing 14. Using a Category for metaprogramming

    class MetaTest extends GroovyTestCase{
      void testCategory(){
        String message = "Hello"
        use(StringHelper){
          assertEquals "HELLO", message.shout()
          assertEquals "GOODBYE", "goodbye".shout()
        }

        shouldFail{
          message.shout()
          "foo".shout()
        }
      }
    }

    class StringHelper{
      static String shout(String self){
        return self.toUpperCase()
      }
    }

If you've ever done any Objective-C development, this technique should look familiar. The StringHelper Category is a normal class — it doesn't need to extend a special parent class or implement a special interface. To add new methods to a particular class of type T, it only needs to define static methods that accept type T as the first parameter. Because shout() is a static method that takes in a String as the first parameter, all Strings wrapped in a use block get a shout() method.

So, when would you choose a Category over an EMC? The EMC allows you to add methods to either a single instance or all instances of a particular class.
As you can see, defining a Category allows you to add methods to some instances — only the instances inside of a use block. While an EMC allows you to define new behavior on the fly, a Category allows you to save the behavior off in a separate class file. This means that you can use it in any number of different circumstances: unit tests, production code, and so on. The overhead of defining separate classes is paid back in terms of reusability. Listing 15 demonstrates using both the StringHelper and a newly created FileHelper in the same use block:

Listing 15. Using several categories in a use block

    class MetaTest extends GroovyTestCase{
      void testFileWithCategory(){
        File f = new File("iDoNotExist.txt")
        use(FileHelper, StringHelper){
          assertTrue f.exists()
          assertTrue f.isFile()
          assertEquals "/opt/some/dir/iDoNotExist.txt", f.absolutePath
          assertTrue f.text.startsWith("This is")
          assertTrue f.text.shout().startsWith("THIS IS")
        }

        assertFalse f.exists()
        shouldFail(java.io.FileNotFoundException){
          f.text
        }
      }
    }

    class StringHelper{
      static String shout(String self){
        return self.toUpperCase()
      }
    }

    class FileHelper{
      static boolean exists(File f){
        return true
      }

      static String getAbsolutePath(File f){
        return "/opt/some/dir/${f.name}"
      }

      static boolean isFile(File f){
        return true
      }

      static String getText(File f){
        return "This is the text of my file."
      }
    }

But the most interesting aspect of categories is how they are implemented. EMCs require the use of closures, which means that you can only implement them in Groovy. Because categories are nothing more than classes with static methods, they can be defined in Java code. As a matter of fact, you can reuse existing Java classes — ones that were never expressly meant for metaprogramming — in Groovy. Listing 16 demonstrates using classes from the Jakarta Commons Lang package (see Resources) for metaprogramming.
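Since the Category contract is purely structural, a class written in plain Java qualifies too, as long as each static method takes the target type as its first parameter. Here is a minimal hand-rolled sketch (this JavaStringHelper is my own hypothetical example, not part of the article or of Commons Lang):

```java
// A plain Java class that follows Groovy's Category pattern: static
// methods whose first parameter is the type being extended (String).
// From Groovy, use(JavaStringHelper){ ... } would make every String
// inside the block appear to have shout() and whisper() methods.
class JavaStringHelper {
    static String shout(String self) {
        return self.toUpperCase();
    }

    static String whisper(String self) {
        return self.toLowerCase();
    }
}
```

In Groovy, use(JavaStringHelper){ assert "hello".shout() == "HELLO" } would pass; from the Java side the class is just an ordinary static helper.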
All of the methods in org.apache.commons.lang.StringUtils coincidentally follow the Category pattern — static methods that accept a String as the first parameter. This means that you can use the StringUtils class right out of the box as a Category.

Listing 16. Using a Java class for metaprogramming

    import org.apache.commons.lang.StringUtils

    class CommonsTest extends GroovyTestCase{
      void testStringUtils(){
        def word = "Introduction"
        word.metaClass.whisper = {->
          delegate.toLowerCase()
        }

        use(StringUtils, StringHelper){
          //from org.apache.commons.lang.StringUtils
          assertEquals "Intro...", word.abbreviate(8)

          //from the StringHelper Category
          assertEquals "INTRODUCTION", word.shout()

          //from the word.metaClass
          assertEquals "introduction", word.whisper()
        }
      }
    }

    class StringHelper{
      static String shout(String self){
        return self.toUpperCase()
      }
    }

Type groovy -cp /jars/commons-lang-2.4.jar:. CommonsTest.groovy to run the test. (Of course, you need to change the path to where you have the JAR saved on your system.)

Metaprogramming and REST

Just to be sure that you aren't left with the mistaken impression that metaprogramming is useful only for unit testing, here is a final example. Recall the RESTful Yahoo! Web service for current weather conditions discussed in "Practically Groovy: Building, parsing, and slurping XML." Combine the XmlSlurper skills you picked up in that article with the metaprogramming skills you picked up in this one, and you can check the weather for any ZIP code in 10 lines of code, as shown in Listing 17:

Listing 17.
Adding a weather method

String.metaClass.weather={->
  if(!delegate.isInteger()){
    return "The weather() method only works with zip codes like '90201'"
  }
  def addr = "{delegate}"
  def rss = new XmlSlurper().parse(addr)
  def results = rss.channel.item.title
  results << "\n" + rss.channel.item.condition.@text
  results << "\nTemp: " + rss.channel.item.condition.@temp
}

println "80020".weather()

//output
Conditions for Broomfield, CO at 1:57 pm MDT
Mostly Cloudy
Temp: 72

As you can see, metaprogramming is all about extreme flexibility. You can use any (or all) of the techniques outlined in this article to add methods easily to one, some, or all of the classes you want.

Conclusion

Asking the world to constrain itself to the arbitrary limitations of your language simply isn't a realistic option. Modeling the real world in software means that you need a tool flexible enough to handle all of the edge cases. Thankfully, with Groovy's closures, ExpandoMetaClasses, and categories, you have a razor-sharp set of tools to add behavior where you need it and when you need it.

Next time, I'll revisit the power of Groovy for unit testing. There are real benefits to writing your tests in Groovy, whether it is a GroovyTestCase or a JUnit 4.x test case with annotations. You'll also see GMock in action — a mocking framework written in Groovy. Until then, I hope that you find plenty of practical uses for Groovy.

Download

Resources

Learn

- Groovy: Learn more about Groovy at the project Web site.
- String: See the Groovy JDK API Specification for all of the methods Groovy adds to the String class.
- Jakarta Commons Lang: These helper utilities for the java.lang API include classes you can use for metaprogramming with Groovy.
- Groovy: Download the latest Groovy ZIP file or tarball.
http://www.ibm.com/developerworks/java/library/j-pg06239/index.html
CC-MAIN-2014-52
refinedweb
2,700
57.87
@Sparkman Thanks for the suggestion. I tested some other example code and it didn't work either. I found out that the Wiznet 5200 I bought from RadioShack does not work with the default Ethernet.h. I had to download Seeed Studio's example library and modify their EthernetV2_0.cpp because it was broken out of the box. I found my solution for the .cpp on reddit, and their example code worked afterwards. Then I just replaced Ethernet.h with EthernetV2_0.h and everything ran.

As a side note, I reinstalled the Arduino IDE, deleted my Arduino folder, and started from scratch, so I had to re-enable soft SPI and change pins 14, 15, 16 to A0, A1, A2 as suggested in the Disqus comments, and now everything works.

EthernetV2.cpp code after mod:

#include "w5200.h"
#include "EthernetV2_0.h"
#include "DhcpV2_0.h"

...

W5100.setGatewayIp(gateway.raw_address());
W5100.setSubnetMask(subnet.raw_address());
}

EthernetClass Ethernet;
https://forum.mysensors.org/user/mscott
Hi guys, I have a problem with my code and I'm pretty sure you should be able to help me. It's not really complicated. I'm trying to control a stepper motor with the Grove speech recognizer, and it's working so far. My only problem is that it won't stop turning in one direction once I give the signal. Here is my code:

#include "Stepper.h"
#include <SoftwareSerial.h>

#define SOFTSERIAL_RX_PIN 2
#define SOFTSERIAL_TX_PIN 3
#define STEPS 32           // Number of steps per revolution of internal shaft

int Steps2Take;            // 2048 = 1 revolution

SoftwareSerial softSerial(SOFTSERIAL_RX_PIN, SOFTSERIAL_TX_PIN);

/*-----( Declare objects )-----*/
// Setup of proper sequencing for Motor Driver Pins
// In1, In2, In3, In4 in the sequence 1-3-2-4
Stepper small_stepper(STEPS, 8, 10, 9, 11);

void setup()
{
  softSerial.begin(9600);
  softSerial.listen();
}

void loop()
{
  int cmd;
  if (softSerial.available())
  {
    cmd = softSerial.read();
  }
  if (cmd == 15) {  // I removed the command list here; I used "Start"
    small_stepper.setSpeed(500);  // Max seems to be 700
    Steps2Take = 2048;            // Rotate CW
    small_stepper.step(Steps2Take);
    delay(2000);
  }
  if (cmd == 14) {
    small_stepper.setSpeed(500);
    Steps2Take = -2048;           // Rotate CCW
    small_stepper.step(Steps2Take);
    delay(2000);
  }
} /* --end main loop -- */

Hope you guys have a clue, but I kind of get the feeling that it can't be that hard. Thanks in advance
https://forum.arduino.cc/t/stepper-motor-with-speech-recognizer/902774
Asked by:

How to capture mouse events and prevent HTML elements from handling these events?

In my IE extension, under certain conditions I would like to capture mouse events at the document level but prevent HTML elements from receiving these captured events. By default, an HTML element receives a mouse event first, and then the event bubbles up to the document level, so cancelling the event at the document level does not have any effect. Is there any other way to capture an event and prevent HTML elements from receiving it? I still need to know what element was supposed to receive the event. I would like to find a solution that works either through injected JavaScript or in the extension.

Thank you,
Vladimir

Question

All replies

Hi Vladimir,

As far as I know, you can inject JavaScript to catch all the mouse events. In this JavaScript, you can block the onclick, onmousedown, and other events, and then you can use the IHTMLWindow2::execScript method to inject this JavaScript in the SetSite function. I hope this information helps you to solve this problem.

Best regards,
Jesse

Jesse Jiang [MSFT]
MSDN Community Support | Feedback to us
Get or Request Code Sample from Microsoft
Please remember to mark the replies as answers if they help and unmark them if they provide no help.

- No, you can't capture at the document level and prevent elements from receiving the event. You'll have to handle the event for each element.

> How can I prevent any other event listeners from firing after my own handling?

You can cancel bubbling of the event with the cancelBubble property; below is an example.
using System.Windows.Forms;

namespace WindowsFormsApplication5
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            var html = @"
<html>
<head>
    <script type='text/javascript'>
        function test(block)
        {
            if(!block)
                return;
            event.returnValue = false;
            event.cancelBubble = true;
        }
    </script>
</head>
<body onclick='alert(event.srcElement.tagName)'>
    <div style='border: 1px solid red; height: 50px; width: 50px;'
         onclick='test(true)'>test1</div>
    <div style='border: 1px solid navy; height: 50px; width: 50px;'
         onclick='test(false)'>test2</div>
</body>
</html>
";
            new WebBrowser
            {
                Parent = this,
                Dock = DockStyle.Fill,
                DocumentText = html
            };
        }
    }
}
http://social.msdn.microsoft.com/Forums/ie/en-US/687d9e11-8e66-4dce-8a14-915ca0625b11/how-to-capture-mouse-events-and-prevent-html-elements-from-handling-these-events?forum=ieextensiondevelopment
I have this in mind to use in a .cpp:

namespace
{

bool operator==(char const* const a, char const* const b) noexcept
{
  return !::std::strcmp(a, b);
}

}

std::string_view

You can't overload an operator that doesn't take a class or an enum as one of its operands, which means you can't change the behavior operators have with built-in types:

When an operator appears in an expression, and at least one of its operands has a class type or an enumeration type, then overload resolution is used to determine the user-defined function to be called among all the functions whose signatures match the following:

I'd suggest you use std::string instead of char*, which provides operator==. Then you can avoid std::strcmp(), and that kind of C-style string function, entirely. If you do need a C-style string, you can use std::basic_string::c_str() to convert back when necessary.
https://codedump.io/share/sWtkNUqwIin3/1/is-it-a-good-style-to-compare-strings-this-way
Hello, Eric.

Eric W. Biederman wrote:
> Mostly I am thinking that any non-object model users should have
> their own dedicated wrapper layer. To help keep things consistent
> and to make it hard enough to abuse the system that people will
> find that it is usually easier to do it the right way.

Hmmm... I think most current non-driver-model sysfs users are deep in kernel anyway, but I think not exporting the sysfs interface at all might be a bit too restrictive. I think we need to examine the current non-driver-model sysfs users thoroughly to determine what to do about this. But, yes, I do agree that we need to put restrictions one way or the other.

I think it would be better if namespace comes after the interface update and other new features, especially symlink renaming, but, under the current circumstances, it might delay namespace unnecessarily and I have no problem with your patches going in first. My concerns are...

* Do you think you can use the new rename implementation contained in this series? It borrows basic ideas from the implementation you used for namespace but is more generic. It would be great if you can use it without too much modification.

* I'm still against using callbacks to determine namespace tags because callbacks need to be coupled with sysfs internals more tightly and are more difficult to grasp interface-wise.

> - Farther down the road we have the device namespace.
>   The bounding requirements are:
>   - We want to restrict which set of devices a subset of
>     processes can access.
>
> Also fun is that the dev file implementation needs to be able to
> report different major:minor numbers based on which mount of
> sysfs we are dealing with.

Ah... Coming few months will be fun, won't they? :-)

--
tejun
https://lkml.org/lkml/2007/9/29/140
When I run the program, the while loop never stops, and I thought I had it right, because my while loop says while(bookTitle != "done"), which means: while bookTitle is not equal to "done", execute. But when I type in done, it still goes into the loop. Can someone tell me what I am doing wrong? Please.

// BookSales.java - This program calculates the total of daily sales for a bookstore.
// Input: Book title and book transaction amount.
// Output: Prints the total of your book sales.

import javax.swing.JOptionPane;

public class BookSales
{
    public static void main(String args[])
    {
        String bookTitle = "";   // Title of book.
        String stringAmount;
        double bookAmount;
        double sum = 0;

        bookTitle = JOptionPane.showInputDialog(
            "Enter title of book or the word done to quit.");

        while(bookTitle != "done")
        {
            stringAmount = JOptionPane.showInputDialog("Enter price of book");
            bookAmount = Double.parseDouble(stringAmount);
            sum += bookAmount;
            bookTitle = JOptionPane.showInputDialog(
                "Enter title of book or the word done to quit.");
        }

        System.out.println("Sum of daily book sales is $: " + sum);
        System.exit(0);
    } // End of main() method.
} // End of BookSales class.
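An aside on the comparison itself (my illustration, not part of the original thread): in Java, != on two String objects tests whether they are the same object, not whether they hold the same characters, so a loop guarded by bookTitle != "done" can keep running even when the user types done; .equals() is the content comparison. A minimal demonstration:

```java
// Demonstrates why bookTitle != "done" can stay true: == and != compare
// object references, while equals() compares the characters.
public class StringCompareDemo {
    public static void main(String[] args) {
        // A String constructed at runtime (like one typed into a dialog)
        // is a different object from the literal "done".
        String typed = new String("done");

        System.out.println(typed == "done");       // false: different objects
        System.out.println(typed.equals("done"));  // true: same contents
    }
}
```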
http://www.javaprogrammingforums.com/whats-wrong-my-code/12008-while-loop-wont-exit.html
How do you build a container-ready application?

8 best practices for building containerized applications

Containers are a major trend in deploying applications in both public and private clouds. But what exactly are containers, why have they become a popular deployment mechanism, and how will you need to modify your application to optimize it for a containerized environment?

What are containers?

The technology behind containers has a long history, beginning with SELinux in 2000 and Solaris Zones in 2005. Today, containers are a combination of several kernel features, including SELinux, Linux namespaces, and control groups, providing isolation of end-user processes, networking, and filesystem space.

Why are they so popular?

The recent widespread adoption of containers is largely due to the development of standards aimed at making them easier to use, such as the Docker image format and distribution model. This standard calls for immutable images, which are the launching point for a container runtime. Immutable images guarantee that the same image the development team releases is what gets tested and deployed into the production environment.

The lightweight isolation that containers provide creates a better abstraction for an application component. Components running in containers won't interfere with each other the way they might running directly on a virtual machine. They can be prevented from starving each other of system resources, and unless they are sharing a persistent volume, they won't block each other attempting to write to the same files. Containers have helped to standardize practices like logging and metric collection, and they allow for increased multi-tenant density on physical and virtual machines, all of which leads to lower deployment costs.

How do you build a container-ready application?

Changing your application to run inside of a container isn't necessarily a requirement.
The major Linux distributions have base images that can run anything that runs on a virtual machine. But the general trend in containerized applications is following a few best practices:

1. Instances are disposable

Any given instance of your application shouldn't need to be carefully kept running. If one system running a bunch of containers goes down, you want to be able to spin up new containers spread out across other available systems.

2. Retry instead of crashing

When one service in your application depends on another service, it should not crash when the other service is unreachable. For example, your API service is starting up and detects the database is unreachable. Instead of failing and refusing to start, you design it to retry the connection. While the database connection is down, the API can respond with a 503 status code, telling the clients that the service is currently unavailable. This practice should already be followed by applications, but if you are working in a containerized environment where instances are disposable, then the need for it becomes more obvious.

3. Persistent data is special

Containers are launched based on shared images using a copy-on-write (COW) filesystem. If the processes the container is running choose to write out to files, then those writes will only exist as long as the container exists. When the container is deleted, that layer in the COW filesystem is deleted. Giving a container a mounted filesystem path that will persist beyond the life of the container requires extra configuration, and extra cost for the physical storage. Clearly defining the abstraction for what storage is persisted promotes the idea that instances are disposable. Having the abstraction layer also allows a container orchestration engine to handle the intricacies of mounting and unmounting persistent volumes to the containers that need them.

4.
Use stdout, not log files

You may now be thinking, if persistent data is special, then what do I do with log files? The approach the container runtime and orchestration projects have taken is that processes should instead write to stdout/stderr, and have infrastructure for archiving and maintaining container logs.

5. Secrets (and other configurations) are special too

You should never hard-code secret data like passwords, keys, and certificates into your images. Secrets are typically not the same when your application is talking to a development service, a test service, or a production service. Most developers do not have access to production secrets, so if secrets are baked into the image, then a new image layer will have to be created to override the development secrets. At this point, you are no longer using the same image that was created by your development team and tested by quality engineering (QE), and have lost the benefit of immutable images. Instead, these values should be abstracted away into environment variables or files that are injected at container startup.

6. Don't assume co-location of services

In an orchestrated container environment, you want to allow the orchestrator to send your containers to whatever node is currently the best fit. Best fit could mean a number of things: it could be based on whichever node has the most space right now, the quality of service the container is requesting, whether the container requires persistent volumes, etc. This could easily mean your frontend, API, and database containers all end up on different nodes. While it is possible to force an API container onto each node (see DaemonSets in Kubernetes), this should be reserved for containers that perform tasks like monitoring the nodes themselves.

7. Plan for redundancy / high availability

Even if you don't have enough load to require an HA setup, you shouldn't write your service in a way that prevents you from running multiple copies of it.
This will allow you to use rolling deployments, which make it easy to move load off one node and onto another, or to upgrade from one version of a service to the next without taking any downtime.

8. Implement readiness and liveness checks

It is common for applications to have startup time before they are able to respond to requests; for example, an API server that needs to populate in-memory data caches. Container orchestration engines need a way to check that your container is ready to serve requests. Providing a readiness check for new containers allows a rolling deployment to keep an old container running until it is no longer needed, preventing downtime. Similarly, a liveness check is a way for the orchestration engine to continue to check that the container is in a healthy state. It is up to the application creator to decide what it means for their container to be healthy, or "live". A container that is no longer live will be killed, and a new container created in its place.

Want to find out more?

I'll be at the Grace Hopper Celebration of Women in Computing in October; come check out my talk: Containerization of Applications: What, Why, and How. Not headed to GHC this year? Then read on about containers, orchestration, and applications on the OpenShift and Kubernetes project sites.
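As a footnote to practice 8, the split between liveness and readiness can be sketched in a few lines (a framework-free illustration of mine; the endpoint paths and status codes are common conventions, not mandated by any orchestrator):

```python
# Illustrative sketch: a tiny dispatcher standing in for an HTTP server.
# Path names (/healthz, /readyz) are conventional choices, not requirements.

ready = False  # flips to True once startup work (e.g. cache warm-up) finishes

def health_check(path):
    if path == "/healthz":
        return 200                    # liveness: the process is up at all
    if path == "/readyz":
        return 200 if ready else 503  # readiness: safe to receive traffic?
    return 404

print(health_check("/readyz"))   # 503 while still warming up
ready = True                     # startup work complete
print(health_check("/readyz"))   # 200: orchestrator can route traffic here
print(health_check("/healthz"))  # 200 either way while the process lives
```

The point of the split is that a rolling deployment can hold traffic back from a live-but-not-ready container instead of killing it.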
https://opensource.com/life/16/9/8-best-practices-building-containerized-applications
This project is part of the @thi.ng/umbrella monorepo.

Customizable diff implementations for JS arrays (sequential) & objects (associative), with or without linear edit logs.

yarn add @thi.ng/diff

import { diffArray, DiffMode } from "@thi.ng/diff";

diffArray([1, 2, 3], [1, 2, 4], DiffMode.FULL);
// {
//   distance: 2,
//   adds: { 2: 4 },
//   dels: { 2: 3 },
//   const: { 0: 1, 1: 2 },
//   linear: [0, 0, 1, 0, 1, 2, -1, 2, 3, 1, 2, 4]
// }

The linear edit logs of both diffArray and diffObject are now returned as flat arrays, with each log entry consisting of 3 or 2 successive array items. This is to avoid allocation of various small arrays. The order of optional args to both functions has been swapped to:

diffArray(old, new, mode?, equiv?)
diffObject(old, new, mode?, equiv?)

© 2018 Karsten Schmidt // Apache Software License 2.0
https://npm.runkit.com/@thi.ng/diff
Java Reference In-Depth Information

Note that the input fields in our markup are bound to properties of a CDI named bean with a name of person. The Person bean looks like this:

package com.ensode.websocket;

import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class Person {
    // ... (properties for the bound input fields)
}

As we can see, the Person bean is a simple request-scoped CDI named bean. Now that we have a simple JSF application that uses HTML5-friendly markup, the next step is to modify it to take advantage of the Java API for WebSocket.

Developing the WebSocket server endpoint

Once we have our JSF code in place, we can add a WebSocket server endpoint to our project by going to File | New File, selecting the Web category, and selecting WebSocket Endpoint as the file type.
http://what-when-how.com/Tutorial/topic-13317l/Java-EE-7-Development-with-NetBeans-8-295.html
On Sat, 8 Jan 2005 10:53:05 -0800 (PST), Casey Schaufler <casey schaufler-ca com> wrote:
>
> --- "Timothy R. Chavez" <chavezt gmail com> wrote:
>
> > Hello,
> >
> > But last night in a dream, a giant donut told me that I should just
> > create a file, watch.list, which auditd will read when it's started
> > and insert any/all new watches into the filesystem.
>
> Donuts are notoriously one-dimensional in their approach to problems
> like this. The notion is simple and attractive, but ...
>
> If /etc/passwd is (hard) linked as /tmp/mojo, accesses to the file may
> be missed. Your scheme is monitoring the file system name space, not
> the file system objects. This is a close approximation of what you
> need, but not sufficient.

Oh, Casey. I guess I should have provided more context to what I was actually doing. We're already aware of the problem you've just described, and I've been working on both the userspace and kernel implementation of a solution. Hopefully by the end of next week (I'm going to be in an SELinux class for 3 days) I'll have out to this list the kernel code that complements the userspace code I was speaking of (although there is a good chance I'll be unable to release the userspace code until IBM is approved to do such things). The userspace interface into the kernel's audit subsystem simply provides a path ending with the filename/directory we wish to audit (I don't believe we're dealing with devices, at least not yet). Insert <some kernel magic here> and the inode of the file/dir we wish to watch and its parent's inode have all the necessary information. You'll see the patch.

> > This way, when we mount over /etc, and we're watching /etc/passwd,
> > then when we restart auditd, it will insert a watch for /etc/passwd
> > on the new device.
>
> Which is correct from a namespace view but wrong from an object view.

> > We do it this way so we minimize our impact on kernel code (not sure
> > we want to go screwing around with mount())
>
> The impact should be in the real right place, and no sneaking about.

What's better for an upstream solution? Obtrusive hooks in core VFS functions, or the inconvenience of the admin having to restart auditd (or, I guess, reload the watch list from auditctl)? Klaus might be able to make this call. So far we've been fairly quiet on what we really want to do with mounting. I'm pretty sure mount --binds are taken care of with the current implementation (that I'm working on). I'm not sure we want the audit subsystem to be dynamic like this and adapt to such changes.. it could add some unappealing complexity.

> > This might be a little cumbersome to do when we wish to remove watch
> > points, because in theory, we'd want to detect the absence of
> > /etc/passwd on a restart to know that we must remove its watch point
> > from the file system. Does this sound reasonable or do we need a
> > greater degree of flexibility with the ability to insert/remove
> > watch points without restarting auditd like we do with rules?
>
> To meet CAPP and LSPP requirements you need to address both file
> system name space and file system object audit issues.
>
> # mv /etc/passwd /tmp/foo
> # touch /etc/passwd
> # analyse /tmp/foo
>
> The object that was named /etc/passwd has been "analyse"d, and if you
> care about the object or the data it contained, you should be able to
> find that in the audit trail. It is also interesting that the current
> object located in the name space at /etc/passwd was created by touch.
> Name space alone does not tell the whole story.

As far as "movement" outside of a watched location, we're not going to audit. If we're only interested in /etc/passwd and we move /etc/passwd to /tmp/passwd (we're presumably the administrator), /etc/passwd no longer exists and we will not receive records for any accesses on /tmp/passwd (unless /tmp/passwd is a watched location (object) itself). When /etc/passwd exists again, or the inode associated with /etc/passwd still exists, generating audit records for /etc/passwd is still possible.

When creating a hard link to /etc/passwd from /home/user/bypass, you're right, we don't want a bypass of the audit subsystem. Because our information is associated with the inode, however, and we can't hard link across devices, we still get audit records. For other cases, to know if a file/dir is being watched, it consults its parent (two hooks in dcache). If its parent's i_audit field is NULL, its watchlist is empty (depending on how I decide to handle removal of watches), or the file/dir is not in the watchlist, we're not being watched; otherwise we are, and we point ourselves at our watch entry in the watchlist. When we hit permission() in fs/namei, if we are not NULL, we're being watched and we add ourselves into the audit_context of the current task via a hook. If the current task has a rule stating interest in what file system objects it's accessing, it will generate logs for them upon exit. All other filtering, if any, of file system objects will be handled in userspace. Matching a syscall with a file system object can be done with serial numbers.

> Now, was it a jelly donut, or a Krispy Kreme?

Neither. It was a crappy maple-glazed donut from H.E.B (a regional grocery store here in Texas).

I think it'd be easy for the time being to insert watch points at auditd start up and remove watch points at auditd shut down. Or if you prefer not to add code to auditd, we can do something like:

Insert watch points:
./auditctl -W watch.list

Remove watch points:
./auditctl -w watch.list

> =====
> Casey Schaufler
> casey schaufler-ca com

--
- Timothy R. Chavez
https://www.redhat.com/archives/linux-audit/2005-January/msg00097.html
I have a 2D array of integers. I want them to be put into a HashMap, but I want to access the elements from the HashMap based on array index. Something like: for A[2][5], map.get(2,5) returns a value associated with that key. But how do I create a HashMap with a pair of keys? Or, in general, multiple keys: Map<((key1, key2, ..., keyN), Value), in a way that I can access the element using get(key1, key2, ..., keyN).

EDIT: 3 years after posting the question, I want to add a bit more to it.

I came across another way for an NxN matrix. Array indices i and j can be represented as a single key the following way:

int key = i * N + j;
//map.put(key, a[i][j]);
//queue.add(key);

And the indices can be retrieved from the key in this way:

int i = key / N;
int j = key % N;

There are several options:

2 dimensions

Map of maps

Map<Integer, Map<Integer, V>> map = //...
//...
map.get(2).get(5);

Wrapper key object

public class Key {

    private final int x;
    private final int y;

    public Key(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Key)) return false;
        Key key = (Key) o;
        return x == key.x && y == key.y;
    }

    @Override
    public int hashCode() {
        int result = x;
        result = 31 * result + y;
        return result;
    }

}

Implementing equals() and hashCode() is crucial here. Then you simply use:

Map<Key, V> map = //...

and:

map.get(new Key(2, 5));

Table from Guava

Table<Integer, Integer, V> table = HashBasedTable.create();
//...
table.get(2, 5);

Table uses map of maps underneath.

N dimensions

Notice that a special Key class is the only approach that scales to n dimensions. You might also consider:

Map<List<Integer>, V> map = //...

but that's terrible from a performance perspective, as well as readability and correctness (no easy way to enforce list size). Maybe take a look at Scala, where you have tuples and case classes (replacing the whole Key class with a one-liner).
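The index-packing trick from the question's edit can be sanity-checked in a few lines (my example, not from the thread; it assumes 0 <= j < N so the division/modulo round-trip is exact):

```java
public class IndexKeyDemo {
    public static void main(String[] args) {
        int N = 8;            // matrix is N x N
        int i = 2, j = 5;     // 0 <= i, j < N

        int key = i * N + j;  // pack both indices into one int

        // Unpack: integer division recovers the row, remainder the column.
        System.out.println(key / N);  // 2
        System.out.println(key % N);  // 5
    }
}
```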
When you create your own key pair object, you should face a few things. First, you should be aware of implementing hashCode() and equals(). You will need to do this.

Second, when implementing hashCode(), make sure you understand how it works. The given user example

public int hashCode() {
    return this.x ^ this.y;
}

is actually one of the worst implementations you can do. The reason is simple: you have a lot of equal hashes! And hashCode() should return int values that tend to be rare, unique at best. Use something like this:

public int hashCode() {
    return (X << 16) + Y;
}

This is fast and returns unique hashes for keys between -2^16 and 2^16-1 (-65536 to 65535). This fits in almost any case. Very rarely are you outside these bounds.

Third, when implementing equals(), also know what it is used for and be aware of how you create your keys, since they are objects. Often you do unnecessary if statements because you will always have the same result. If you create keys like this: map.put(new Key(x,y), V); you will never compare the references of your keys, because every time you want to access the map, you will do something like map.get(new Key(x,y));. Therefore your equals() does not need a statement like if (this == obj); it will never occur. Instead of if (getClass() != obj.getClass()) in your equals(), better use if (!(obj instanceof Key)); it will be valid even for subclasses.

So the only thing you need to compare is actually X and Y. So the best equals() implementation in this case would be:

public boolean equals(final Object O) {
    if (!(O instanceof Key)) return false;
    if (((Key) O).X != X) return false;
    if (((Key) O).Y != Y) return false;
    return true;
}

So in the end your key class is like this:

public class Key {

    public final int X;
    public final int Y;

    public Key(final int X, final int Y) {
        this.X = X;
        this.Y = Y;
    }

    public boolean equals(final Object O) {
        if (!(O instanceof Key)) return false;
        if (((Key) O).X != X) return false;
        if (((Key) O).Y != Y) return false;
        return true;
    }

    public int hashCode() {
        return (X << 16) + Y;
    }

}

You can give your dimension indices X and Y a public access level, due to the fact that they are final and do not contain sensitive information. I'm not 100% sure whether a private access level works correctly in any case when casting the Object to a Key.

If you wonder about the finals: I declare anything as final whose value is set on instancing and never changes – and therefore is an object constant.

You can't have a hash map with multiple keys, but you can have an object that takes multiple parameters as the key. Create an object called Index that takes an x and y value.

public class Index {

    private int x;
    private int y;

    public Index(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int hashCode() {
        return this.x ^ this.y;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        Index other = (Index) obj;
        if (x != other.x)
            return false;
        if (y != other.y)
            return false;
        return true;
    }

}

Then have your HashMap<Index, Value> to get your result. 🙂

Two possibilities.
Either use a combined key:

class MyKey {
    int firstIndex;
    int secondIndex;
    // important: override hashCode() and equals()
}

Or a map of maps:

Map<Integer, Map<Integer, Integer>> myMap;

This is implemented in Commons Collections as MultiKeyMap.

Create a value class that will represent your compound key, such as:

class Index2D {
    int first, second;
    // override equals and hashCode properly here
}

taking care to override equals() and hashCode() correctly. If that seems like a lot of work, you might consider some ready-made generic containers, such as Pair, provided by Apache Commons among others. There are also many similar questions here, with other ideas, such as using Guava's Table, although that allows the keys to have different types, which might be overkill (in memory use and complexity) in your case, since I understand your keys are both integers.

If they are two integers, you can try a quick and dirty trick: Map<String, ?> using the key as i+"#"+j. If the key i+"#"+j should be the same as j+"#"+i, use min(i,j)+"#"+max(i,j).
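That order-insensitive string key can be illustrated as follows (my sketch, not part of the original answer):

```java
public class SymmetricKeyDemo {
    static String key(int i, int j) {
        // Sort the pair so (7, 3) and (3, 7) map to the same key.
        return Math.min(i, j) + "#" + Math.max(i, j);
    }

    public static void main(String[] args) {
        System.out.println(key(7, 3));                   // 3#7
        System.out.println(key(3, 7));                   // 3#7
        System.out.println(key(7, 3).equals(key(3, 7))); // true
    }
}
```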
You could create your key object something like this:

public class MapKey {
    public Object key1;
    public Object key2;

    public Object getKey1() {
        return key1;
    }

    public void setKey1(Object key1) {
        this.key1 = key1;
    }

    public Object getKey2() {
        return key2;
    }

    public void setKey2(Object key2) {
        this.key2 = key2;
    }

    public boolean equals(Object keyObject) {
        if (keyObject == null) return false;
        if (keyObject.getClass() != MapKey.class) return false;
        MapKey key = (MapKey) keyObject;
        if (key.key1 != null && this.key1 == null) return false;
        if (key.key2 != null && this.key2 == null) return false;
        if (this.key1 == null && key.key1 != null) return false;
        if (this.key2 == null && key.key2 != null) return false;
        if (this.key1 == null && key.key1 == null && this.key2 != null && key.key2 != null)
            return this.key2.equals(key.key2);
        if (this.key2 == null && key.key2 == null && this.key1 != null && key.key1 != null)
            return this.key1.equals(key.key1);
        // compare against key.key2, not against our own key2
        return (this.key1.equals(key.key1) && this.key2.equals(key.key2));
    }

    public int hashCode() {
        int key1HashCode = key1.hashCode();
        int key2HashCode = key2.hashCode();
        // parentheses matter here: '+' binds tighter than the shift operators
        return (key1HashCode >> 3) + (key2HashCode << 5);
    }
}

The advantage of this is that it always makes sure you are covering all the scenarios of equals as well.

NOTE: Your key1 and key2 should be immutable. Only then will you be able to construct a stable key object.

We can create a class to pass more than one key or value, and an object of this class can be used as a parameter in a map.
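One subtlety worth calling out before moving on: Java's additive operator binds tighter than the shift operators, so an unparenthesized expression like `h1 >> 3 + h2 << 5` in a hashCode() is parsed as `(h1 >> (3 + h2)) << 5`, which is rarely what was intended. A quick standalone check (class name is mine, for illustration only):

```java
public class ShiftPrecedenceDemo {
    public static void main(String[] args) {
        int a = 64, b = 2;
        // '+' has higher precedence than '>>' and '<<',
        // so this is parsed as (a >> (3 + b)) << 5 = (64 >> 5) << 5 = 64
        System.out.println(a >> 3 + 2 << 5);     // 64
        // with explicit parentheses: (64 >> 3) + (2 << 5) = 8 + 64 = 72
        System.out.println((a >> 3) + (b << 5)); // 72
    }
}
```

Whenever you combine hash codes with shifts and additions, add the parentheses explicitly.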
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.*;

class key1 {
    String b;
    String a;

    key1(String a, String b) {
        this.a = a;
        this.b = b;
    }
}

public class read2 {
    private static final String FILENAME = "E:/studies/JAVA/ReadFile_Project/nn.txt";

    public static void main(String[] args) {
        BufferedReader br = null;
        FileReader fr = null;
        Map<key1, String> map = new HashMap<key1, String>();
        try {
            fr = new FileReader(FILENAME);
            br = new BufferedReader(fr);
            String sCurrentLine;
            while ((sCurrentLine = br.readLine()) != null) {
                String[] s1 = sCurrentLine.split(",");
                key1 k1 = new key1(s1[0], s1[2]);
                map.put(k1, s1[2]);
            }
            for (Map.Entry<key1, String> m : map.entrySet()) {
                key1 key = m.getKey();
                String s3 = m.getValue();
                System.out.println(key.a + "," + key.b + " : " + s3);
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (br != null) br.close();
                if (fr != null) fr.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}

(Note that key1 does not override equals() and hashCode(), so looking a value up with a newly constructed key will not work; only iterating over the entries, as shown above, will.)

Use a Pair as the key for the HashMap. The JDK has no Pair class, but you can either use a 3rd-party library or write a Pair type of your own.

You can also use Guava's Table implementation for this. Table represents a special map where two keys can be specified in combined fashion to refer to a single value. It is similar to creating a map of maps.

// create a table
Table<String, String, String> employeeTable = HashBasedTable.create();

// initialize the table with employee details
employeeTable.put("IBM", "101", "Mahesh");
employeeTable.put("IBM", "102", "Ramesh");
employeeTable.put("IBM", "103", "Suresh");
employeeTable.put("Microsoft", "111", "Sohan");
employeeTable.put("Microsoft", "112", "Mohan");
employeeTable.put("Microsoft", "113", "Rohan");
employeeTable.put("TCS", "121", "Ram");
employeeTable.put("TCS", "122", "Shyam");
employeeTable.put("TCS", "123", "Sunil");

// get the Map corresponding to IBM
Map<String, String> ibmEmployees = employeeTable.row("IBM");
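Tying the answers above together, here is a minimal end-to-end check (class names are mine, not from any of the answers) showing that a compound key works in a HashMap precisely because equals() and hashCode() compare by content, so a *different* instance with the same coordinates still finds the stored value:

```java
import java.util.HashMap;
import java.util.Map;

public class CompoundKeyDemo {
    public static final class Key {
        final int x, y;

        Key(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return k.x == x && k.y == y;
        }

        // same scheme as the answer above: pack both ints into one hash
        @Override public int hashCode() { return (x << 16) + y; }
    }

    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        map.put(new Key(2, 3), "hello");
        // a fresh Key(2, 3) instance still finds the value:
        System.out.println(map.get(new Key(2, 3))); // hello
        // order matters: (3, 2) is a different key
        System.out.println(map.get(new Key(3, 2))); // null
    }
}
```

If you remove the equals()/hashCode() overrides, the first lookup returns null, which is the failure mode most of this thread is guarding against.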
https://exceptionshub.com/how-to-create-a-hashmap-with-two-keys-key-pair-value.html
CC-MAIN-2021-21
refinedweb
1,614
68.36
Opened 7 years ago
Closed 6 years ago
Last modified 4 years ago

#8888 closed (duplicate)

fields_for_model() should produce fields in the same order as in parameter 'fields'

Description

As per the TODO on line 126 of django/forms/models.py: "# TODO: if fields is provided, it would be nice to return fields in that order"

A simple solution would be to maintain a temporary dictionary of the field names vs form fields as they are created, then at the end reorder them as per the fields parameter using the dictionary. The modified code below for the method works, though it admittedly creates and maintains the temporary dictionary even if 'fields' is not defined... which obviously is a tad wasteful in such a case (new lines in the method marked with '>>>').

def fields_for_model(model, fields=None, exclude=None, formfield_callback=lambda f: f.formfield()):
    """
    Returns a ``SortedDict`` containing form fields for the given model.
    """
    field_list = []
>>> temp_dict = {}
    opts = model._meta
    for f in opts.fields + opts.many_to_many:
        if not f.editable:
            continue
        if fields and not f.name in fields:
            continue
        if exclude and f.name in exclude:
            continue
        formfield = formfield_callback(f)
        if formfield:
>>>         temp_dict[f.name] = formfield
            field_list.append((f.name, formfield))
>>> if fields:
>>>     # reorder to match ordering in 'fields' if it was provided
>>>     field_list = [(fn, temp_dict.get(fn)) for fn in fields if temp_dict.has_key(fn)]
    return SortedDict(field_list)

Attachments (1)

Change History (5)

comment:1 Changed 7 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Reporter changed from anonymous to andrewl

Set the reporter since I know where this has come from.

Changed 7 years ago by andrewl
SVN diff for django/forms/models.py

comment:2 Changed 6 years ago by jacob
- milestone set to 1.1
- Triage Stage changed from Unreviewed to Accepted

comment:3 Changed 6 years ago by Alex
- Resolution set to duplicate
- Status changed from new to closed

comment:4 Changed 4 years ago by jacob
- milestone 1.1 deleted
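The reordering idea in the patch can be illustrated standalone (function and variable names below are mine; SortedDict is approximated with an ordered list of pairs):

```python
def reorder_fields(field_list, fields):
    """Reorder (name, formfield) pairs to match the order given in `fields`.

    If `fields` is falsy, the original order is kept, mirroring the patch's
    behaviour when the parameter is not provided.
    """
    if not fields:
        return field_list
    by_name = dict(field_list)  # the patch's temp_dict
    # keep only names that actually produced a form field, in `fields` order
    return [(name, by_name[name]) for name in fields if name in by_name]


field_list = [("title", "CharField"), ("author", "CharField"), ("date", "DateField")]
print(reorder_fields(field_list, ["date", "title"]))
# [('date', 'DateField'), ('title', 'CharField')]
```

As in the patch, names listed in `fields` that produced no form field are silently dropped rather than mapped to None.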
https://code.djangoproject.com/ticket/8888
CC-MAIN-2015-32
refinedweb
335
56.25
In this tutorial, I'm going to show you how to build a door lock that uses a fingerprint sensor and an Arduino UNO. This door lock will only open the door when the user scans the right fingerprint that is recorded on the system; the door will remain closed upon scanning a wrong fingerprint.

Circuit Diagram and Explanation

First of all, connect the fingerprint sensor to the Arduino UNO. Make sure you get a fingerprint sensor that works with the Arduino through serial communication. The default pins for serial communication on the Arduino UNO are pin 0 (RXD) and pin 1 (TXD) of the board, but we are going to use other pins for serial communication. For this project, we will use the SoftwareSerial library in the code. Here are the required connections between the fingerprint sensor and the UNO:

Then connect the I2C LCD module to the UNO. The connections are as follows:

After that, connect the relay module to the Arduino UNO as shown in the circuit diagram below.

Fingerprint door lock circuit diagram.

For controlling the door lock, you will need a battery source from 7 to 12V—I used three 18650 cells.

Download the Project Libraries

The libraries for the fingerprint sensor and the I2C LCD are easily available. To install the Adafruit Fingerprint library, open up the Arduino Library Manager and type in "fingerprint" and you will see the Adafruit Fingerprint library pop up. Click install.

Type "fingerprint" into the Arduino Library Manager to find the correct library.

You can install the LiquidCrystal I2C library in the same way. Search for "LiquidCrystal I2C" and you will be able to see this library:

Type "LiquidCrystal I2C" into the Arduino Library Manager to find the correct library.

Code Walkthrough and Explanation

Let's take a look at the sections of code and what purpose they serve in the project. For your convenience, the full code for this project is available for download at the end of this article.
The SoftwareSerial library will allow us to use pins other than the default 0 and 1 for the serial communication. Copy the code from the section below and upload it.

#include <Adafruit_Fingerprint.h>
#include <LiquidCrystal_I2C.h>
#include <SPI.h>
#include <SoftwareSerial.h>

SoftwareSerial mySerial(2, 3);

In the setup function, set the baud rate at which the fingerprint sensor works. Then, check whether the fingerprint sensor is communicating with the Arduino or not.

finger.begin(57600);
if (finger.verifyPassword()) {
  lcd.setCursor(0, 0);
  lcd.print("   FingerPrint  ");
  lcd.setCursor(0, 1);
  lcd.print("Sensor Connected");
} else {
  lcd.setCursor(0, 0);
  lcd.print("Unable to found");
  lcd.setCursor(0, 1);
  lcd.print("Sensor");
  delay(3000);
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print("Check Connections");
  while (1) {
    delay(1);
  }
}

Now we need to set up your actual fingerprint! The following code section is for the user to place their finger on the fingerprint scanner, which will convert the fingerprint into an image.

uint8_t p = finger.getImage();
if (p != FINGERPRINT_OK) {
  lcd.setCursor(0, 0);
  lcd.print("   Waiting For");
  lcd.setCursor(0, 1);
  lcd.print("   Valid Finger");
  return -1;
}

p = finger.image2Tz();
if (p != FINGERPRINT_OK) {
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print("   Messy Image");
  lcd.setCursor(0, 1);
  lcd.print("   Try Again");
  delay(3000);
  lcd.clear();
  return -1;
}

p = finger.fingerFastSearch();
if (p != FINGERPRINT_OK) {
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print("Not Valid Finger");
  delay(3000);
  lcd.clear();
  return -1;
}

If the image is messy, it will ask you to scan your finger again in order to get a good fingerprint image that can be compared to the saved images of all the fingerprints in your system. Upon matching the image, the door will open. Otherwise, the door will remain closed.

Place your finger on the sensor so the system can create a picture of your fingerprint. Once the system receives a clear fingerprint, your door lock is ready to use!
https://maker.pro/arduino/projects/how-to-create-a-fingerprint-reading-door-lock-system-with-an-arduino-uno
CC-MAIN-2021-10
refinedweb
650
60.72
Scott Shepherd writes:
> I have an external method that returns a string containing dtml, but
> I want it to evaluate the dtml and return the result. How do I do
> that?

"Evaluating dtml" means calling it. The "__call__" of DTML methods, or, more generally, DocumentTemplates, has two positional parameters, "client" and "REQUEST", and an arbitrary number of keyword parameters. "client" is "None", a single object or a tuple of objects; "REQUEST" is "None" or a mapping; the keyword parameters are anything. The parameters determine what bindings (i.e. name -> value mappings) are available in the DocumentTemplate's namespace. If there are conflicts, the higher-priority source wins. The priority order is as follows (highest priority first):

* keyword parameters
* attributes of client (I am not sure about the relative priority in the case of more than one client. I believe to remember that later clients have higher priority)
* the mapping

Dieter
_______________________________________________
Zope maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! ** (Related lists - )

- [Zope] processing dtml in external method Scott Shepherd
- Re: [Zope] processing dtml in external method Kapil Thangavelu
- Dieter Maurer
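The priority order Dieter describes can be sketched in isolation (this is an illustration under my own names, not Zope's actual resolution code): keyword parameters beat client attributes, which beat the REQUEST mapping, and later clients beat earlier ones.

```python
def resolve(name, client=None, request=None, **kw):
    """Look up `name` using the priority order described above."""
    # highest priority: keyword parameters
    if name in kw:
        return kw[name]
    # next: attributes of the client(s); later clients win on conflict
    clients = client if isinstance(client, tuple) else ((client,) if client else ())
    for c in reversed(clients):
        if hasattr(c, name):
            return getattr(c, name)
    # lowest priority: the REQUEST mapping
    if request and name in request:
        return request[name]
    raise KeyError(name)


class Client:
    title = "from client"

print(resolve("title", client=Client(), request={"title": "from request"}))
# from client
print(resolve("title", client=Client(), request={"title": "from request"}, title="from kwargs"))
# from kwargs
```

Each source only shadows the ones below it; a name missing from every source is an error, just as an unbound name in a template would be.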
https://www.mail-archive.com/zope@zope.org/msg06663.html
CC-MAIN-2017-47
refinedweb
182
56.25
Description

With the advent of the {*} operator, which expands an empty list into nothing at all, it became clear that it would be useful in various cases if commands that normally accept a variable number of arguments at the end of the command could handle the case of no arguments at all being passed.

The TIP modified the following commands:

- file delete - Accept zero pathnames.
- file mkdir - Accept zero directories.
- global - Accept zero variable names.
- glob - Accept zero patterns and return a "no matches" error, or, when -nocomplain is provided, return a list of zero matching files.
- lassign - Accept zero variable names [1].
- linsert - Accept zero elements [2].
- lrepeat - Accept both a count of 0 and zero elements [3].
- my variable - Accept zero variable names.
- namespace upvar - Accept zero variable names.
- ::tcl::tm::path add - Accept zero paths.
- ::tcl::tm::path remove - Accept zero paths.
- variable - Accept zero variable names [4].
http://wiki.tcl.tk/41537
CC-MAIN-2016-50
refinedweb
150
59.19
On Sun, Sep 16, 2007 at 12:23:43PM -0700, Andrew Morton wrote:
> On Sun, 16 Sep 2007 08:58:03 -0400 (EDT) "Robert P. J. Day" <[EMAIL PROTECTED]> wrote:
> >
> > box:/usr/src/linux-2.6.23-rc6> grep -r '^[ ]*#[ ]*define[ ]*CONFIG_' . | wc -l
> > 415
>
> bah. They're all bugs - the CONFIG_foo namespace is (should be) reserved in
> kernel coding.

I get (after a bit more filtering) 349 hits, of which 182 are outside drivers/.
Of the 182, 25 are in arch-specific code.
I see no good reason to address this unless we touch code in that area anyway.
But avoiding new entries is good.

	Sam
_______________________________________________
kbuild-devel mailing list
kbuild-devel@lists.sourceforge.net
CC-MAIN-2017-17
refinedweb
136
78.45
The QAtResult class provides access to the results of AT modem commands and unsolicited notifications. More...

#include <QAtResult>

The QAtResult class provides access to the results of AT modem commands and unsolicited notifications.

AT commands that are sent to a modem with QAtChat::chat() result in a QAtResult object being made available to describe the result of the command when it completes. The resultCode() method can be used to determine the exact cause of an AT modem command failure. The content() method can be used to access the response content that was returned from the command. For complex response contents, the QAtResultParser class can be used to decode the response.

See also QAtChat and QAtResultParser.

Result codes for AT modem commands.

Construct a new QAtResult object. The result() will be OK, and the content() empty.

Construct a copy of other.

Destruct this QAtResult object.

Append value to the current content, after a line terminator.
See also content() and setContent().

Returns the content that was returned with an AT command's result.
See also setContent() and append().

Returns true if this result indicates a successful command; false otherwise. Success is indicated when resultCode() returns either QAtResult::OK or QAtResult::Connect. All other result codes indicate failure.
See also resultCode().

Returns the result line that terminated the command's response. This is usually a string such as OK, ERROR, +CME ERROR: N, and so on. The resultCode() function is a better way to determine why a command failed, but sometimes it is necessary to parse the text result line. For example, for CONNECT baudrate, the caller may be interested in the baud rate.
See also setResult(), resultCode(), and ok().

Returns the numeric result code associated with result(). Extended error codes are only possible if the appropriate modem error mode has been enabled (e.g. with the AT+CMEE=1 command). Otherwise most errors will simply be reported as QAtResult::Error.
See also setResultCode(), result(), and ok().

Sets the content that was returned with an AT command's result to value.
See also content() and append().

Sets the result line that terminated the command's response to value. This will also update resultCode() to reflect the appropriate code.
See also result() and resultCode().

Sets the numeric result code to value, and updates result() to reflect the value.
See also resultCode() and result().

Sets the user data associated with this result object to value.
See also userData().

Returns the user data associated with this result object.
See also setUserData().

Returns a more verbose version of result(), suitable for debug output. Many modems report extended errors by number (e.g. +CME ERROR: 4), which can be difficult to use when diagnosing problems. This function returns a string that is more suitable for diagnostic output than result(). If result() is already verbose, it will be returned as-is.
See also result().

Assign the contents of other to this object.
https://doc.qt.io/archives/qtopia4.3/qatresult.html
CC-MAIN-2019-26
refinedweb
483
52.87
I want to increase the Line2D width. I could not find any method to do that. Do I need to actually make a small rectangle for this purpose?

You should use setStroke to set a stroke on the Graphics2D object. The following code produces the image below:

import java.awt.*;
import java.awt.geom.Line2D;
import javax.swing.*;

public class FrameTest {
    public static void main(String[] args) {
        JFrame jf = new JFrame("Demo");
        Container cp = jf.getContentPane();
        cp.add(new JComponent() {
            public void paintComponent(Graphics g) {
                Graphics2D g2 = (Graphics2D) g;
                g2.setStroke(new BasicStroke(10));
                g2.draw(new Line2D.Float(30, 20, 80, 90));
            }
        });
        jf.setSize(300, 200);
        jf.setVisible(true);
    }
}

(Note that the setStroke method is not available on the Graphics object. You have to cast it to a Graphics2D object.)
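If you want to see the effect of setStroke without opening a window, the same cast-and-stroke approach works on an off-screen BufferedImage. This is a hedged variant of the snippet above (the class name and pixel checks are mine, not from the original post):

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Line2D;
import java.awt.image.BufferedImage;

public class ThickLineDemo {
    // Renders a horizontal line with the given stroke width into a
    // 100x40 white image, so the stroke's effect can be inspected.
    public static BufferedImage render(float strokeWidth) {
        BufferedImage img = new BufferedImage(100, 40, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        g2.setColor(Color.WHITE);
        g2.fillRect(0, 0, 100, 40);
        g2.setColor(Color.BLACK);
        g2.setStroke(new BasicStroke(strokeWidth)); // the width is set here
        g2.draw(new Line2D.Float(10, 20, 90, 20));  // horizontal line at y = 20
        g2.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage thin = render(1f);
        BufferedImage thick = render(10f);
        // A pixel 4 rows above the line is only painted by the 10px stroke,
        // which covers roughly y = 15..25 around the line's centerline.
        System.out.println((thin.getRGB(50, 16) & 0xFFFFFF) == 0xFFFFFF);  // true
        System.out.println((thick.getRGB(50, 16) & 0xFFFFFF) != 0xFFFFFF); // true
    }
}
```

This also runs headless, which makes it handy for unit-testing custom painting code.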
https://codedump.io/share/twLf96BDT5qn/1/java2d-increase-the-line-width
CC-MAIN-2017-04
refinedweb
148
62.95
I'm really happy to announce the release of the 3rd CTP of the Open XML SDK 2.0!

As mentioned in a previous post, the April 2009 CTP of the Open XML SDK added schema-level validation support for Office 2007 Open XML files. In the August 2009 CTP, one of the big things we added is semantic-level validation support for Office 2007 Open XML files. Semantic-level validation goes beyond restrictions or rules defined by schemas. It allows developers to validate files against restrictions defined within the prose of the Open XML documentation. These are restrictions which cannot be expressed in an XSD language.

Let's look at a semantic-level restriction example. Specifically, let's look at the element endnote (Section 17.11.2 of Part 1 in the ISO/IEC-29500 specification). The standard states that the id attribute of endnote "specifies a unique ID which shall be used to match the contents of a footnote or endnote to the associated footnote/endnote reference mark ... If more than one footnote shares the same ID, then this document shall be considered non-conformant. If more than one endnote shares the same ID, then this document shall be considered non-conformant."

As you can see, having more than one endnote with the same id value will result in a non-conformant document. This non-conformant document may not be interpreted properly by a consuming application, like Word. The Open XML SDK can now help you find these types of problems and will report the error to you by giving you the following information:

- A user-friendly description of the error. In this case, imagine seeing the following error: "Attribute 'id' should have unique value. Its current value '1' duplicates with others."
- An XPath to the exact location of the error. In this case, imagine seeing the following path: "/w:endnotes[1]/w:endnote[4]", which indicates that the problem exists in the fourth endnote element.
- The part where this error exists. In this case, imagine seeing the following part information: "DocumentFormat.OpenXml.Packaging.EndnotesPart"

We hope that you can use this type of information to more easily find and fix problems. I will devote at least one blog post in the future to go into details on the validation functionality.

Markup Compatibility/Extensibility Support

As defined by the ISO/IEC-29500 specification, there are several ways to extend markup within the Open XML formats. Some of the extension mechanisms, like ignorable content and alternate content blocks, may result in differences within the XML tree structure of a document. Here is an example of markup that contains an alternate content block:

<w:r>
  <mc:AlternateContent>
    <mc:Choice Requires="wps">
      ...
    </mc:Choice>
    <mc:Fallback>
      <w:pict>...</w:pict>
    </mc:Fallback>
  </mc:AlternateContent>
</w:r>

In the example above, the expected child of the run element differs depending on the chosen alternate content choice. The fallback choice is what one would expect from a document created in Office 2007, while the choice requiring the wps namespace is from a document created in Office 2010.

Imagine you are a solution developer working with Open XML who has deployed a solution that works perfectly on top of Office 2007 Open XML files. How would your solution work with files coming in from Office 2010? Specifically, would your solution work with documents that contain these types of extension mechanisms?
Using the example above, if we use the August CTP to open the document based on Office 2007 we will only see the following XML markup: If your solution expected a pict element as a child of a run element, then your solution would work perfectly with this file. In other words, using this feature, solutions won’t break when future versions of Office introduce new markup into the format. General Improvements First off we want to thank everyone for their feedback and suggestions! Based on your feedback we made the following big changes to the SDK: - AutoSave: By default, previous CTPs of the SDK forced you to perform a manual save for changes made to specific parts within the package. We have now introduced the concept of AutoSave, where changes would automatically be saved into the package, without the need to call Save() methods. For those not interested in this functionality, there is a way to turn off this feature - Base Classes for Sdt objects: The SDK currently has multiple classes to represent Sdt objects based on the different types of elements specified in the standard. The August 2009 CTP has introduced one base class for each of these objects in order to make it easier for you to develop solutions. In other words, your solution can now just work on the following abstract class: SdtElement for Sdt objects - Simple types for Boolean type attributes: The standard specifies the concept of a simple type called ST_OnOff, which allows for values like “On”, “Off”, “True”, “False”, “0”, and “1.” We have updated the SDK to allow you to directly get/set such attributes using standard C# Boolean values. For example, you can now set attribute values to false or true. Without this enhancement you were forced to compare values using the enum BooleanValues What’s Next? Our next task for the SDK is to add Office 2010 Office Open XML support. Expect to see another CTP in the next several months released with this functionality. 
Our goal is to be done with the Open XML SDK 2.0 around the same time as Office 2010 ships (date not public yet).

More Feedback Always Welcome

Please continue to send us your feedback, either on this blog or at our Microsoft Connect site for the Open XML SDK. We look forward to hearing from you.

Zeyad Rajabi

I tried installing the CTP for August and it does not work in Visual Studio. When I specified:

using DocumentFormat.OpenXml;

I get build errors that this type or namespace could not be found. I obviously have the .dll in my reference library, and nothing has changed in my solution. I have to roll back to the April CTP version. Can you guys tell me if there is a fix for this?

Adam – Did you try removing the dll reference from the project and then re-adding it? There is a possibility that the solution is referencing an older version of the SDK. Let me know if that helps.

Hi Brian, what is required to actually distribute OpenXML to a web server (and where do I put it)? I have a test WCF service that opens an existing Word 2007 docx and returns the raw XML to a Silverlight client. In the Visual Studio IDE this all works fine, and I can display the raw XML in the Silverlight page. I have been attempting to publish the .NET solution, and have put the DocumentFormat.OpenXml.dll and DocumentFormat.OpenXml.xml files in the bin folder of the WCF service. However, when I attempt to open the same document in the published WCF service, I get the error [Async_ExceptionOccurred] (with the "useful" information that debugging resource strings are unavailable). What am I missing?

Andy – Did you make sure to set the property of the DocumentFormat.OpenXml.dll to "copy local"?
https://blogs.msdn.microsoft.com/brian_jones/2009/08/27/announcing-the-release-of-the-august-2009-ctp-for-the-open-xml-sdk/
CC-MAIN-2017-09
refinedweb
1,217
61.16
VTD-XML 2.8 has been released. Please visit the project site to download the latest version.

- Expansion of Core VTD-XML API
  - VTDGen adds support for capturing white spaces
  - VTDNav adds support for getContentFragment(), recoverNode() and cloneNav()
  - XMLModifier adds support for the update and reparse feature
  - AutoPilot adds support for retrieving all attributes
  - BookMark is also enhanced
- Expansion of Extended VTD-XML API
  - Adds content extraction ability to extended VTD-XML
  - VTDNavHuge can now call getElementFragment() and getElementFragmentNs()
  - VTDGenHuge adds support for capturing white spaces
- XPath
  - Adds comment and processing instruction support for nodes, plus performance enhancements
  - Adds namespace axis support
  - Adds round-half-to-even()
  - A number of bug fixes and code enhancements
http://sourceforge.net/p/soaplite/mailman/soaplite-devel/?viewmonth=201004&style=flat&viewday=13
CC-MAIN-2015-06
refinedweb
140
55.61
Hi, I am trying to create an XSD so that I can create a message with a prefix on XML elements - for example, I want <enab:ExportData>, but having tried various things in the XSD, I import the XSD as an external definition and use it in a message mapping, yet I always end up with <ns1:ExportData>. Any ideas?

Hi Jonny! Actually, the xml namespace prefix in general shouldn't be the problem, since it's just a kind of alias for the namespace string itself. But you could use an XSL transformation for your message mapping. Regards, Evgeniy.

Hi Evgeniy, you are correct. I just wanted the SAP PO message mapping request to look the same as when I test it using SoapUI, so I created my own XSD, but it seems that message mapping ignores any namespace prefix I tell it to use. Thanks anyway.
https://answers.sap.com/questions/495763/xsd-xml-element-prefix.html
On Tue, Jan 09, 2007 at 09:49:35AM +0000, Christoph Hellwig wrote:
> On Mon, Jan 08, 2007 at 06:25:16PM -0500, Josef Sipek wrote:
> > > There's no such problem with bind mounts. It's surprising to see such a
> > > restriction with union mounts.
> > Bind mounts are a purely VFS level construct. Unionfs is, as the name
> > implies, a filesystem. Last year at OLS, it seemed that a lot of people
> > agreed that unioning is neither purely a fs construct, nor purely a vfs
> > construct.
> >
> > I'm using Unionfs (and ecryptfs) as guinea pigs to make linux fs stacking
> > friendly - a topic to be discussed at LSF in about a month.
>
> And unionfs is the wrong thing to use for this. Unioning is a complex
> namespace operation and needs to be implemented in the VFS or at least
> needs a lot of help from the VFS. Getting namespace cache coherency
> and especially locking right is impossible without that.

What I meant was that I use them as examples of linear and fan-out stacking. While unioning itself is a complex operation, the general idea of one set of VFS objects (dentry, inode, etc.) pointing to several lower ones is very generic and applies to all fan-out stackable filesystems.

Josef "Jeff" Sipek.

--
Linux, n.: Generous programmers from around the world all join forces to help you shoot yourself in the foot for free.
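The "fan-out" idea described in the email, one upper object resolving through several lower ones, can be sketched in a few lines of Python. This is purely an illustrative toy (dicts standing in for directories, the top branch shadowing lower ones), not how the kernel implements unioning:

```python
# A toy model of fan-out stacking: one "union" object answers lookups
# by consulting an ordered list of lower layers. Names and behavior
# are illustrative only.

class UnionDir:
    def __init__(self, *layers):
        # layers[0] is the topmost (highest priority) branch
        self.layers = layers

    def lookup(self, name):
        # First branch that knows the name wins.
        for layer in self.layers:
            if name in layer:
                return layer[name]
        raise FileNotFoundError(name)

    def listdir(self):
        # A name appearing in several branches shows up once (top wins).
        seen = {}
        for layer in self.layers:
            for name, value in layer.items():
                seen.setdefault(name, value)
        return sorted(seen)

upper = {"notes.txt": "edited copy"}
lower = {"notes.txt": "original", "readme": "hello"}
u = UnionDir(upper, lower)
print(u.lookup("notes.txt"))  # upper branch shadows lower
print(u.listdir())
```

The hard kernel problems Hellwig raises (cache coherency, locking) are exactly what this toy ignores: each `lookup` here re-walks the branches, while a real VFS must keep dentries for the union consistent with all underlying branches at once.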
https://lkml.org/lkml/2007/1/9/102
UIAppearance Tutorial: Getting Started Although skeuomorphism in iOS apps is a thing of the past, that doesn’t mean you’re limited to the stock appearance of controls in your iOS app. True, you can develop your own controls and app stylings from scratch, but Apple recommends a simpler route: the UIAppearance API, which this tutorial explores. :] As an added bonus, you’ll learn how to automatically switch your app to a dark theme when opened at night. Getting Started Download the starter project for this tutorial here. The app has many of the standard UIKit controls and looks extremely vanilla. Open the project and have a look around to get a feel for its structure. Build and run the app; you’ll see the main UI elements of Pet Finder: There’s a navigation bar and a tab bar. The main screen shows a list of pets. Tap a pet to see some details about it. There’s a search screen as well and — aha! A screen that allows you to select a theme for your app. That sounds like a pretty good place to start! UIAppearance: Supporting Themes Many apps don’t allow users to select a theme, and it’s not always advisable to ship an app with a theme selector. However, there are cases where themes could be very useful. You might want to test different themes during development to see which ones work best for your app. You might A/B test your app with your beta users to see which style is the most popular. Or you might want to ease your users’ eyes by adding a dark theme for nighttime. Create a new file named Theme.swift and replace its contents with the following:

import UIKit

enum Theme: Int {
  //1
  case `default`, dark, graphical

  //2
  private enum Keys {
    static let selectedTheme = "SelectedTheme"
  }

  //3
  static var current: Theme {
    let storedTheme = UserDefaults.standard.integer(forKey: Keys.selectedTheme)
    return Theme(rawValue: storedTheme) ?? .default
  }
}

Let’s see what this code does:
- Defines three types of themes – default, dark and graphical
- Defines a constant to help you access the selected theme
- Defines a read-only computed type property for the selected theme.
It uses UserDefaults to persist the current theme, and returns the default theme if none was previously selected. Now that you have your Theme enum set up, let’s add some style to it. Add the following code to the end of Theme, before the closing brace:

var mainColor: UIColor {
  switch self {
  case .default:
    return UIColor(red: 87.0/255.0, green: 188.0/255.0, blue: 95.0/255.0, alpha: 1.0)
  case .dark:
    return UIColor(red: 255.0/255.0, green: 115.0/255.0, blue: 50.0/255.0, alpha: 1.0)
  case .graphical:
    return UIColor(red: 10.0/255.0, green: 10.0/255.0, blue: 10.0/255.0, alpha: 1.0)
  }
}

This defines a mainColor that’s specific to each particular theme. Let’s see how this works. Open AppDelegate.swift and add the following line to application(_:didFinishLaunchingWithOptions:): print(Theme.current.mainColor) Build and run the app. You should see the following printed to the console: UIExtendedSRGBColorSpace 0.341176 0.737255 0.372549 1 At this point, you have three themes and can manage them through Theme. Now it’s time to use them in your app. Applying Themes to Your Controls Open Theme.swift and add the following method to the bottom of Theme:

func apply() {
  //1
  UserDefaults.standard.set(rawValue, forKey: Keys.selectedTheme)
  UserDefaults.standard.synchronize()

  //2
  UIApplication.shared.delegate?.window??.tintColor = mainColor
}

Here’s a quick run-through of the above code:
- Persist the selected theme using UserDefaults.
- Apply the main color to the tintColor property of your application’s window. You’ll learn more about tintColor in just a moment.

Now the only thing you need to do is call this method. Open AppDelegate.swift and replace the print() statement you added earlier with the following: Theme.current.apply() Build and run the app. You’ll see your new app looks decidedly more green: Navigate through the app. There are green accents everywhere! But you didn’t change any of your controllers or views. What is this black — er, green — magic?!
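The Theme.current pattern, read a stored raw value and fall back to a default when it's missing or invalid, is language-agnostic. Here is the same idea sketched in Python, with a plain dict standing in for UserDefaults:

```python
from enum import Enum

class Theme(Enum):
    DEFAULT = 0
    DARK = 1
    GRAPHICAL = 2

def current_theme(store):
    """Mirror of `Theme(rawValue: storedTheme) ?? .default`: missing or
    invalid stored values fall back to the default theme."""
    raw = store.get("SelectedTheme", -1)
    try:
        return Theme(raw)
    except ValueError:
        return Theme.DEFAULT

print(current_theme({}))                      # Theme.DEFAULT (nothing stored)
print(current_theme({"SelectedTheme": 1}))    # Theme.DARK
print(current_theme({"SelectedTheme": 99}))   # Theme.DEFAULT (invalid raw value)
```

The fallback matters: a stored value can become stale if you reorder or remove theme cases in a later release, and this pattern degrades gracefully instead of crashing.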
Applying Tint Colors Since iOS 7, UIView has exposed the tintColor property. This is often used to define the primary color for interface elements throughout an app. When you specify a tint for a view, it’s automatically propagated to all subviews in that view’s view hierarchy. Since UIWindow inherits from UIView, you can specify a tint color for the entire app by setting the window’s tintColor. That’s exactly what you did in apply() above. Click on the Gear icon in the top left corner of your app. A table view with a segmented control slides up. When you select a different theme and tap Apply, nothing changes. Time to fix that. Open SettingsTableViewController.swift and add these lines to applyTheme(_:), just above dismissAnimated():

if let selectedTheme = Theme(rawValue: themeSelector.selectedSegmentIndex) {
  selectedTheme.apply()
}

Here you call the method you added to Theme, which sets the selected theme’s mainColor on the root UIWindow. Next, add the following line to the bottom of viewDidLoad(). This will select the theme persisted to UserDefaults when the view controller is first loaded: themeSelector.selectedSegmentIndex = Theme.current.rawValue Build and run the app. Tap the settings button, select Dark, and then tap Apply. The tint in your app will change from green to orange right before your eyes: Eagle-eyed readers likely noticed these colors were defined in mainColor, in Theme. But wait, you selected Dark, and this doesn’t look dark. To get this effect working, you’ll have to customize a few more things. Customizing the Navigation Bar Open Theme.swift and add two more theme-specific properties, barStyle and navigationBackgroundImage, then apply them by adding the following lines to apply():

UINavigationBar.appearance().barStyle = barStyle
UINavigationBar.appearance().setBackgroundImage(navigationBackgroundImage, for: .default)

Okay, why does this work? Shouldn’t you be styling a specific navigation bar instance? The appearance() class method returns an appearance proxy for the class; customizations made on the proxy are applied to every instance of that class throughout the app.
You can override this behavior and change the rendering mode manually. Change the last line of apply(), in Theme.swift, to the following: UINavigationBar.appearance().backIndicatorTransitionMaskImage = UIImage(named: "backArrow") Build and run the app. Once again tap one of the pets and then tap Adopt. This time the transition looks much better: The text is no longer cut off and looks like it goes underneath the indicator. So, what’s happening here? iOS uses all the non-transparent pixels of the back indicator image to draw the indicator. However, it does something entirely different with the transition mask image. It masks the indicator with the non-transparent pixels of the transition mask image. So when the text moves to the left, the indicator is only visible in those areas. In the original implementation, you provided an image that covered the entire surface of the back indicator. That’s why the text remained visible through the transition. Now that you’re using the indicator image itself as the mask, it looks better. But if you look carefully, you’ll see the text disappears at the far right edge of the mask, not under the indicator proper. Look at the indicator image and the “fixed” version of the mask in your image assets catalog. You’ll see they line up perfectly with each other: The black shape is your back indicator and the red shape is your mask. You want the text to only be visible when it’s passing under the red area and hidden everywhere else. To do this, change the last line of apply() once again, this time to use the updated mask: UINavigationBar.appearance().backIndicatorTransitionMaskImage = UIImage(named: "backArrowMaskFixed") Build and run the app. For the last time, tap one of the pets and then tap Adopt. You’ll see the text now disappears under the image, just as you anticipated it would: Now that your navigation bar is pixel perfect, it’s time to give the tab bar some much-needed love.
Customizing the Tab Bar Still in Theme.swift, add the following property to Theme:

var tabBarBackgroundImage: UIImage? {
  return self == .graphical ? UIImage(named: "tabBarBackground") : nil
}

This property will provide appropriate tab bar background images for each theme. To apply these styles, add the following lines to apply():

UITabBar.appearance().barStyle = barStyle
UITabBar.appearance().backgroundImage = tabBarBackgroundImage

let tabIndicator = UIImage(named: "tabBarSelectionIndicator")?.withRenderingMode(.alwaysTemplate)
let tabResizableIndicator = tabIndicator?.resizableImage(
  withCapInsets: UIEdgeInsets(top: 0, left: 2.0, bottom: 0, right: 2.0))
UITabBar.appearance().selectionIndicatorImage = tabResizableIndicator

Setting the barStyle and backgroundImage should be familiar by now. It’s done exactly the same way you did for UINavigationBar previously. In the final three lines of code above, you retrieve an indicator image from the asset catalog and set its rendering mode to .alwaysTemplate. This is an example of one context where iOS doesn’t automatically use template rendering mode. Finally, you create a resizable image and set it as the tab bar’s selectionIndicatorImage. Build and run the app. Customizing UISegmentedControl Next, style the segmented controls by adding the following to apply() in Theme.swift:

let controlBackground = UIImage(named: "controlBackground")?
  .withRenderingMode(.alwaysTemplate)
  .resizableImage(withCapInsets: UIEdgeInsets(top: 3, left: 3, bottom: 3, right: 3))
let controlSelectedBackground = UIImage(named: "controlSelectedBackground")?
  .withRenderingMode(.alwaysTemplate)
  .resizableImage(withCapInsets: UIEdgeInsets(top: 3, left: 3, bottom: 3, right: 3))
UISegmentedControl.appearance().setBackgroundImage(controlBackground,
  for: .normal, barMetrics: .default)
UISegmentedControl.appearance().setBackgroundImage(controlSelectedBackground,
  for: .selected, barMetrics: .default)

The corner pixels of these resizable images are drawn as-is; the gray pixels, however, get stretched horizontally and vertically as required. In your image, all the pixels are black and assume the tint color of the control.
You instruct iOS how to stretch the image using UIEdgeInsets. You pass 3 for the top, left, bottom and right parameters since your corners are 3×3. Build and run the app. Tap the Gear icon in the top left and you’ll see the UISegmentedControl now reflects your new styling: The rounded corners are gone and have been replaced by your 3×3 square corners. Now you’ve tinted and styled your segmented control, all that’s left is to tint the remaining controls. Close the settings screen in the app, and tap the magnifier in the top right corner. You’ll see another segmented control, along with a UIStepper, UISlider, and UISwitch still need to be themed. Grab your brush and drop cloths — you’re going painting! :] Customizing Steppers, Sliders, and Switches To change the colors of the stepper, add the following lines to apply() in Theme.swift:

UIStepper.appearance().setBackgroundImage(controlBackground, for: .normal)
UIStepper.appearance().setBackgroundImage(controlBackground, for: .disabled)
UIStepper.appearance().setBackgroundImage(controlBackground, for: .highlighted)
UIStepper.appearance().setDecrementImage(UIImage(named: "fewerPaws"), for: .normal)
UIStepper.appearance().setIncrementImage(UIImage(named: "morePaws"), for: .normal)

You’ve used the same resizable image as you did for UISegmentedControl. The only difference here is that UIStepper segments become disabled when they reach their minimum or maximum values, so you also specified an image for this case; to keep things simple, you re-use the same image. This not only changes the color of the stepper, but you also get some nice image buttons instead of the boring + and – symbols. Build and run the app. Open Search to see how the stepper has changed: UISlider and UISwitch need some theme lovin’ too. Add the following code to apply():

UISlider.appearance().setThumbImage(UIImage(named: "sliderThumb"), for: .normal)
UISlider.appearance().setMaximumTrackImage(UIImage(named: "maximumTrack")?
  .resizableImage(withCapInsets: UIEdgeInsets(top: 0, left: 0.0, bottom: 0, right: 6.0)), for: .normal)
UISlider.appearance().setMinimumTrackImage(UIImage(named: "minimumTrack")?
  .withRenderingMode(.alwaysTemplate)
  .resizableImage(withCapInsets: UIEdgeInsets(top: 0, left: 6.0, bottom: 0, right: 0)), for: .normal)
UISwitch.appearance().onTintColor = mainColor.withAlphaComponent(0.3)
UISwitch.appearance().thumbTintColor = mainColor

UISlider has three main customization points: the slider’s thumb, the minimum track and the maximum track. The thumb uses an image from your assets catalog. The maximum track uses a resizable image in original rendering mode so it stays black regardless of the theme. The minimum track also uses a resizable image, but you use template rendering mode so it inherits the theme’s tint color. You’ve modified UISwitch by setting thumbTintColor to the main color. You set onTintColor as a slightly lighter version of the main color, to bump up the contrast between the two. Build and run the app. Tap Search and your slider and switch should appear as follows: Your app has become really stylish, but the dark theme is still missing something. The table background is too bright. Let’s fix that. Customizing UITableViewCell In Theme.swift, add the following properties to Theme:

var backgroundColor: UIColor {
  switch self {
  case .default, .graphical:
    return UIColor.white
  case .dark:
    return UIColor(white: 0.4, alpha: 1.0)
  }
}

var textColor: UIColor {
  switch self {
  case .default, .graphical:
    return UIColor.black
  case .dark:
    return UIColor.white
  }
}

These define the background color you’ll use for your table cells, and the text color for the labels in them. Next, add the following code to the end of apply():

UITableViewCell.appearance().backgroundColor = backgroundColor
UILabel.appearance(whenContainedInInstancesOf: [UITableViewCell.self]).textColor = textColor

The first line should look familiar; it simply sets the backgroundColor of all UITableViewCell instances.
The second line, however, is where things get a little more interesting. UIAppearance lets you scope the changes you want to make. In this case, you don’t want to change the entire app’s text color; you only want to change the text color inside UITableViewCell. By using whenContainedInInstancesOf: you do exactly that. You force this change to apply only to UILabel instances inside a UITableViewCell. Build and run the app, choose the dark theme, and the screen should look like this: Now this is a real dark theme! As you’ve seen by now, the appearance proxy customizes every instance of a class, but you can also customize a single control directly. For example, the species selector on the search screen sets an image for each segment: speciesSelector.setImage(UIImage(named: "cat"), forSegmentAt: 1) Here you’re simply setting the image for a segment in the species selector. Build and run the app. Open Search and you’ll see the segmented species selector looks like this: iOS inverted the colors on the selected segment’s image without any work on your part. This is because the images are automatically rendered in template mode. What about styling a single view directly? That’s easy as well. Open PetViewController.swift and add the following line to the bottom of viewDidLoad(): view.backgroundColor = Theme.current.backgroundColor Build and run the app. Select a pet, and look at the result: You’re done with the styling. The image below shows the before and after results of the Search screen: I think you’ll agree the new version is much less vanilla and much more interesting than the original. You’ve added three dazzling styles and spiced up the app. But why not take it a step further? What if you opened the app with the appropriate theme according to when the user launches it? What if you switched to the dark theme as the sun sets? Let’s see how we can make that happen. Automating dark theme with Solar For this part of the tutorial, you’ll use an open source library called Solar. Solar receives a location and date, and returns the sunrise and sunset times for that day.
Solar comes installed with the starter project for this tutorial. Note: For more on Solar, visit the library’s GitHub repository. Open AppDelegate.swift and add the following property below var window:

private let solar = Solar(latitude: 40.758899, longitude: -73.9873197)!

This defines a property of type Solar with today’s date and a given location. You can of course change the location so that it’s taken dynamically from the user’s location. Still in AppDelegate, add the following two methods:

//1
func initializeTheme() {
  //2
  if solar.isDaytime {
    Theme.current.apply()
    scheduleThemeTimer()
  } else {
    Theme.dark.apply()
  }
}

func scheduleThemeTimer() {
  //3
  let timer = Timer(fire: solar.sunset!, interval: 0, repeats: false) { [weak self] _ in
    Theme.dark.apply()
    //4
    self?.window?.subviews.forEach({ (view: UIView) in
      view.removeFromSuperview()
      self?.window?.addSubview(view)
    })
  }
  //5
  RunLoop.main.add(timer, forMode: RunLoopMode.commonModes)
}

Let’s see what’s going on here:
- You define a method to initialize theme settings.
- Solar has a convenience method to check if the time it was given is in daytime. If so, you apply the current theme and schedule a Timer that will change themes.
- You create a timer and set it to fire at sunset.
- This is an important thing to notice. Whenever UIAppearance values change, they will not be reflected until the view re-renders itself. By removing and re-adding all the views to their window, you ensure they are re-rendered according to the new theme.
- You schedule the timer by adding it to the main RunLoop.

Finally, in application(_:didFinishLaunchingWithOptions:), replace: Theme.current.apply() with: initializeTheme() Open the app after sunset (you can change the time on your phone if you’re reading this during the day). The app should appear with the dark theme selected. Where to Go From Here? You can download the finished project with all the tweaks from this tutorial here.
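The daytime check that initializeTheme() performs can be sketched outside of iOS as well. In this Python sketch the sunrise and sunset times are hard-coded placeholders; a library like Solar would compute them from coordinates and the date:

```python
from datetime import datetime, time

# Placeholder daylight window; in the tutorial, Solar computes these
# from latitude/longitude and today's date.
SUNRISE = time(6, 30)
SUNSET = time(19, 45)

def theme_for(now):
    """Mirror of initializeTheme(): keep the user's chosen theme during
    daylight hours, otherwise force the dark theme."""
    if SUNRISE <= now.time() < SUNSET:
        return "current"
    return "dark"

print(theme_for(datetime(2024, 6, 1, 12, 0)))  # current
print(theme_for(datetime(2024, 6, 1, 22, 0)))  # dark
```

The remaining piece the tutorial adds on top of this check, a timer fired at the sunset instant, is what flips the theme while the app is still running rather than only at launch.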
In addition to the tweaks you’ve already made, UIAppearance also supports customizing controls for a specific UITraitCollection. This lets you support multiple themes for different device layouts. I hope you enjoyed this UIAppearance tutorial and learned how easy it can be to tweak your UI. If you have any comments or questions about this tutorial, please join the forum discussion below! Team Each tutorial at raywenderlich.com is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are: - Author Ron Kliffer - Tech Editor Michael Gazdich - Final Pass Editor Darren Ferguson - Team Lead Andy Obusek
https://www.raywenderlich.com/156036/uiappearance-tutorial-getting-started
I was looking through Final Fantasy guides to get ideas for architecture. In particular I looked at a Final Fantasy 1 handbook by Ben Siron. I was intrigued by his simple presentation of the Enemy Domain Mapping system – showing what monsters appear in random encounters at each location of a map. Put simply, there were several “recipes”, each referred to by a single letter, and those letters were arrayed in a grid according to the layout of a map. It was a simple way to store a complex amount of information efficiently. Here is an example recipe: (OGRE,CREEP), (GrWOLF,WOLF), ASP, ARACHNID, GrIMP You can interpret this recipe as a comma-separated list of spawn slots. Items in parentheses indicate that a particular slot would be filled at random from the choices included within. In this example there would be either an Ogre or Creep, a Great Wolf or normal Wolf, and an Asp, Arachnid and Greater Imp. I decided to build and improve upon this sort of recipe system to create my own Enemy Spawner, while at the same time having an opportunity to demonstrate a programming concept called Polymorphism. Polymorphism is often presented as sort of an advanced topic, but it doesn’t have to be. I would explain it as the general concept of a more specific thing. For example, I can refer to my TV, blu-ray player, Xbox and iMac generically as “electronics”. Deep huh. In programming, this just means that I can know certain basic truths about each of those items, like that they need electricity to operate, without having to know the specific purpose or implementation of each. The system I am writing will also use comma-separated slots, but random slots will be separated by “|” and I will add the ability to have weighted random slots, by using an equal sign.
Here is an example: Ogre,Wolf|Bat,Spider=3|Worm=1 A random encounter populated by this recipe would always produce an Ogre, would randomly add a wolf or bat, and would also randomly add a spider or worm, although the spider would be more likely to appear because it has a higher weight. Now that the general idea of the system has been defined, it is time to start building it. First, I will create the generic blueprint for the “Spawn Slot” of our recipe, which is an object that when queried returns the name of a monster (or whatever) to load. Let’s create a new script called “Spawn.cs” and add the following code.

public abstract class Spawn
{
    public abstract string Name { get; }
}

The definition of this class, and its only property, include a word you may not have used before, “abstract”. An abstract class means that I cannot create instances of the class; I will need to sub-class it and create instances of those instead. This is because an abstract class is missing some sort of important implementation detail that must be defined somewhere in order to be complete. In this case, the property is not implemented. The purpose of this is to be the “General Concept” that other more specific things share in common. In other scripts, I can refer to a collection of these Spawn objects and work with each of them the same way, even though they will actually be very different in implementation. As a side note, C# has another concept called an “interface” which could also be appropriate to use here. You might want to use an interface in scenarios where you want different implementations to have different base classes. The benefit of an abstract base class is the ability to provide default implementations of methods and properties, whereas an interface simply requires that they exist. For the best of both worlds, you could make a mix of the two, such as having an abstract class implement an interface. Next, let’s create the first concrete implementation of our Spawn class.
We will call it a “FixedSpawn” because it will always return the same thing. Its implementation is nothing more than providing a backer field for the Name property, which is assigned in the class constructor.

public class FixedSpawn : Spawn
{
    public override string Name { get { return _name; } }

    private string _name;

    public FixedSpawn (string data)
    {
        _name = data;
    }
}

The property Name now has the word “override”, which means that we are replacing the implementation of the inherited class, although in our example we didn’t actually provide a default implementation. This example wasn’t terribly exciting, so let’s create another version.

public class RandomSpawn : Spawn
{
    public override string Name { get { return _members[UnityEngine.Random.Range(0, _members.Length)]; } }

    private string[] _members;

    public RandomSpawn (string data)
    {
        _members = data.Split('|');
    }
}

An instance of this class could return a different random name every time you ask. This is due to the implementation of the Name property – it picks a random string from an array of possible choices. The constructor I show here is expecting a string which it will split into multiple name options based on the “|” character. For example I could pass “Ogre|Lion” which will be split into the individual strings “Ogre” and “Lion”. We will create one final implementation of Spawn. It will be similar to the random version, but will allow each entry to have its own weight, or probability of being chosen. It would be like having a pie cut into pieces that are not of equal size, then spinning the pie and taking the one piece that lands closest to yourself. You would have a greater chance of getting a bigger slice than a small one.
public class WeightedSpawn : Spawn
{
    public override string Name
    {
        get
        {
            string retValue = _members[0];
            int roll = UnityEngine.Random.Range(0, _size) + 1;
            int sum = 0;
            for (int i = 0; i < _weights.Length; ++i)
            {
                sum += _weights[i];
                if (sum >= roll)
                {
                    retValue = _members[i];
                    break;
                }
            }
            return retValue;
        }
    }

    private string[] _members;
    private int[] _weights;
    private int _size;

    public WeightedSpawn (string data)
    {
        string[] slots = data.Split('|');
        _members = new string[slots.Length];
        _weights = new int[slots.Length];
        for (int i = 0; i < slots.Length; ++i)
        {
            string s = slots[i];
            string[] elements = s.Split('=');
            _members[i] = elements[0];
            _weights[i] = System.Convert.ToInt32(elements[1]);
            _size += _weights[i];
        }
    }
}

The constructor for this class is expecting a string which will be parsed into multiple weighted entries. For example, “Ogre=10|Lion=2” would be parsed into two weighted entries, an Ogre with a weight of 10 and a Lion with a weight of 2. This means that this example would be five times as likely to return “Ogre” as it is “Lion”, but it could still return either one. Now that the spawn slots are complete, we will build another class that holds an array of them – our SpawnOrder class. The array of Spawn slots will be based on the abstract base class version of each type of slot, so that any other class utilizing this class has to know as little as possible. At the same time, we will finish the implementation that will parse the recipe into the instances of those spawn slot instances, using a Factory method.
public class SpawnOrder
{
    public Spawn[] members;

    public static SpawnOrder Create (string recipe)
    {
        SpawnOrder retValue = new SpawnOrder();
        string[] slots = recipe.Split(',');
        retValue.members = new Spawn[slots.Length];
        for (int i = 0; i < slots.Length; ++i)
        {
            string s = slots[i];
            if (s.Contains("="))
                retValue.members[i] = new WeightedSpawn(s);
            else if (s.Contains("|"))
                retValue.members[i] = new RandomSpawn(s);
            else
                retValue.members[i] = new FixedSpawn(s);
        }
        return retValue;
    }
}

Our SpawnOrder class knows about the concrete implementations of the Spawn class, but only needs to refer to them initially for creation sake. The concrete classes won’t be referred to again, making it very easy to create new versions, replace, modify, or remove them at a later date. Any class using our SpawnOrder only needs to know the most generic information of what it contains, that there is an array of Spawn objects, which is essentially just an array of strings representing the names of monsters (or whatever) to load. In Unity I created a new scene, and created a new script called “SpawnExample.cs” which I attached to the Main Camera object in the scene. The only purpose of this scene and script are to demonstrate a use of the SpawnOrder class we will be creating. More complete implementations might use these names to instantiate a prefab, or attach a named component to a game object etc. In a real game, I would set things up differently, but this is suitable for now.

public class SpawnExample : MonoBehaviour
{
    void Start ()
    {
        string recipe = "Ogre,Wolf|Bat,Spider=3|Worm=1";
        SpawnOrder example = SpawnOrder.Create(recipe);
        for (int i = 0; i < 10; ++i)
        {
            string encounter = string.Format("i:{0}, {1}, {2}, {3}", i,
                example.members[0].Name,
                example.members[1].Name,
                example.members[2].Name);
            Debug.Log(encounter);
        }
    }
}

The example script uses the “Create” factory method to build a new instance of a SpawnOrder from a “recipe” of monsters which might appear.
The SpawnOrder class splits the information up and creates an array of generic “Spawn” objects, which are internally implemented as weighted, random, or fixed spawn classes. I simulate 10 “random encounters” and log the resulting enemy formation to the console. Just press play and you can see how the simple recipe can become a diverse environment for your exploration. 2 thoughts on “Random Encounters and Polymorphism” There’s a nice idea for creating spawners; I’d combine it with the (thing I believe is called the) “specification pattern”, sort of, so I could set the objects in the editor instead of relying on strings, which are error-prone. Thanks for the tip Marcos, I’ll check it out. I agree with the concern about string typos, but I was imagining a scenario where some sort of JSON feed, spreadsheet or database was driving the spawn orders via text, so it would be really easy to create and balance large amounts of content. I would probably create some system to validate the data during edit mode and not allow it to be modified at runtime.
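For comparison, the whole recipe system, fixed, random, and weighted slots included, fits in a short Python sketch. The grammar is the same one defined above; random.choices handles the weighted pick that WeightedSpawn implements with a cumulative sum:

```python
import random

# Recipe grammar from the post: slots separated by ",", random choices
# by "|", optional weights by "=", e.g. "Ogre,Wolf|Bat,Spider=3|Worm=1".

def parse_slot(text):
    """Return a zero-argument callable that yields a name for this slot."""
    if "=" in text:
        members, weights = [], []
        for part in text.split("|"):
            name, weight = part.split("=")
            members.append(name)
            weights.append(int(weight))
        # random.choices does the weighted draw for us.
        return lambda: random.choices(members, weights=weights)[0]
    if "|" in text:
        members = text.split("|")
        return lambda: random.choice(members)
    return lambda: text  # fixed spawn

def spawn_order(recipe):
    slots = [parse_slot(s) for s in recipe.split(",")]
    return lambda: [slot() for slot in slots]

encounter = spawn_order("Ogre,Wolf|Bat,Spider=3|Worm=1")
print(encounter())  # e.g. ['Ogre', 'Bat', 'Spider']
```

Where the C# version uses subclass polymorphism (three Spawn subclasses behind one abstract interface), this sketch uses closures for the same effect: each slot is just "something callable that returns a name", and the caller never needs to know which kind it is.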
https://theliquidfire.com/2014/12/01/random-encounters-and-polymorphism/
c++, programming, projecteuler, python The second Project Euler problem isn’t much more difficult than the first. The problem asks for the sum of the even-valued terms of the Fibonacci sequence whose values do not exceed 4,000,000. The solution is pretty straightforward: count up the Fibonacci numbers from 1 to 4,000,000, check for even ones, and add them up along the way. Here it is in C++:

#include <iostream>
using namespace std;

int main()
{
    int a = 1, b = 2, fib = 0, answer = 2;
    while (a+b < 4000000){
        fib = a+b;
        if (fib % 2 == 0) {answer += fib;}
        cout << fib << endl;
        a = b;
        b = fib;
    }
    cout << endl << "The sum of the even Fibonacci numbers less than 4,000,000 is " << answer << endl;
    return 0;
}

The algorithm rests on two ideas:
- We will always keep track of four numbers: the rolling sum which will ultimately be the answer when the loop terminates (answer), the current Fibonacci number (fib) and the previous two (a, b).
- We will use a while loop to control our iteration, exiting before we reach 4,000,000.

I started at the first even Fibonacci number (answer = 2), and added up from there. The loop checks to see if the next number in sequence will exceed 4,000,000, and then executes. Inside the loop, we just calculate the next number in the sequence, check to see if it’s even, add it to our rolling sum if it is, then increment a and b so that we’re ready for the next iteration of the loop. When we reach a point where the next Fibonacci number would exceed 4,000,000, we exit the loop before we get to it. I left in a debugging line (cout << fib << endl;), because it is fascinating to me how fast we reach 4,000,000. Here is the same algorithm in Python:

a = 1
b = 2
fib = 0
answer = 2
while (a+b < 4000000):
    fib = a+b
    if (fib%2==0):
        answer += fib
    a = b
    b = fib
    print fib
print "The sum of the even Fibonacci numbers less than 4,000,000 is " + str(answer)
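As an aside (this variant is mine, not from the post above), the same computation reads nicely with a Python 3 generator, letting itertools.takewhile handle the 4,000,000 cutoff instead of the loop condition:

```python
from itertools import takewhile

def fib():
    """Yield Fibonacci numbers lazily, starting 1, 2, 3, 5, ..."""
    a, b = 1, 2
    while True:
        yield a
        a, b = b, a + b

def even_fib_sum(limit):
    # Sum the even terms whose values do not exceed the limit.
    return sum(f for f in takewhile(lambda n: n <= limit, fib()) if f % 2 == 0)

print(even_fib_sum(100))  # 44  (2 + 8 + 34)
```

The trade-off is purely stylistic: the generator separates "produce the sequence" from "filter and sum it", while the original loop keeps everything in one pass with the same O(1) memory.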
https://jmatthewturner.wordpress.com/2014/09/30/project-euler-2-even-fibonacci-numbers/