Hi, I'm new to Python and I'm trying to write a server that can confirm connections from a client. Here is a part of the code:
import sys
import threading
from socket import *

class TelGUI(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        s = socket(AF_INET, SOCK_STREAM)
        s.bind(('', 8900))
        s.listen(5)
        while 1:
            client, addr = s.accept()
            print addr
            print "Or u want to accept connection from this host? [y/n]"
            opt = sys.stdin.read(1)
            if opt == 'y':
                pass  # establish
            else:
                s.close()  # reject

def main():
    app = TelGUI()
    app.start()
    print "Menu"
    while 1:
        # some menu operations
        op = sys.stdin.read(1)
        if op == 'x':
            break

if __name__ == "__main__":
    main()
Maybe someone has some ideas on how to block the first stdin read in the main function and hand stdin input over to the thread when there is a new connection? Thanks
...so that your main thread reads stdin and then uses an instance of Queue to post the 'y's and 'n's it receives to your server thread.
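That suggestion can be sketched like this: the main thread owns stdin and forwards each answer to the server thread over a queue. This is a rough illustration of the idea, not Rob's actual code, and it is Python 3 (the original post is Python 2, where the module is spelled `Queue`); the real `accept()` loop and stdin read are replaced by stand-ins.

```python
import queue
import threading

answers = queue.Queue()   # main thread -> server thread
accepted = []

def server_thread():
    # Stand-in for the accept() loop: for each incoming connection the
    # thread blocks here until the main thread posts a 'y' or 'n'.
    opt = answers.get()
    accepted.append(opt == 'y')

worker = threading.Thread(target=server_thread)
worker.start()
# In the real program, the main thread would read sys.stdin here and
# forward whatever key the user typed:
answers.put('y')
worker.join()
print(accepted)   # [True]
```

The point is that only one thread ever touches stdin, so the two reads can no longer race each other.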
Rob.
Albatross!
This is the personal blog of Chris Wilson, Platform Architect of the Internet Explorer Platform team at Microsoft (and ex-Group Program Manager).

I am, however, making a fresh start... moving my blog. Please move your references to this blog to my new one, at cwilso.com.

Where have I been?
I know a few people have wondered why I appear to have stopped blogging altogether.

Well, the short version is because I did. The longer version is that I got tired of dealing with the negativity – although it's been suggested that I turn off comments, that isn't really me. I don't think "not listening" is the right approach. It was also frustrating, at the time, that we (the IE team) weren't saying anything substantial about IE8 or our plans, and I got tired of walking the line. That changed, obviously, when MIX08 rolled around and we released our beta 1. Finally, and this was partly exacerbated by the previous issues, I was tired of things I said being taken too seriously, and getting jumped on. For example, the unfortunately well-publicized back and forth last fall on ES4 – which might have been okay as 1) a statement of my personal opinion, or 2) an open exchange of ideas on the future of ECMAScript. Unfortunately, it got blown into a much bigger deal than that, and that just made me depressed.

Anyhow, I'm going to take another run at it; there are a few things I wanted to comment on lately. And you can always watch what's going on with me on my Twitter, or check out my pictures on Flickr.

...feel happy, too...
...that someone got it. Thanks, Hal - standards compliance and not breaking existing websites are, in fact, why I show up to work.

BTW, I will be at Web Directions North 08, MIX08, and (this just in) SXSW08.

...that you need me to tell you this...
IE8 now correctly renders the Acid2 smiley face in IE8 standards mode.

...opinion.
Golly, I love this job. […]

Now we've established that this blog, and the last post as well as this one, are just me talking, and not the "Voice of Microsoft" (said in your best James Earl Jones voice).

So, hmm. "Open letter to Chris Wilson." "Mozilla, Microsoft drawing sabers over next JavaScript." "We can't afford another browser war." (I'm with ya there.) Front page of Slashdot. And so on. Does anyone wonder why I don't post so often on my own blog? […]

However, please do not EVER characterize me personally as pushing a proprietary language or platform over open standards based ones, unless you have proof of such action (which you do not, because I am not). I gave my opinion about ES4, not about Javascript as a long-term language for the web; I have no personal or professional opinion in support of C# in place of Javascript either. It's a shame that the last couple of days of posting, yours included, have presumed that I have any interest in a language different than Javascript. […] "helping Microsoft stall improvements to JS while they aggressively evolve C# and its runtimes" - in fact, I personally think those are orthogonal issues, and Javascript's current lack of strong typing, say, doesn't help or hurt C# adoption. I expect you have a different opinion, given your posts, and I simply respectfully disagree. I think (again, personally) that Javascript has a lot more going for it in the web ecosystem, and I don't personally see C# pushing it out of the way.

Brendan, you also said (in comments on your own blog post) that I 'reversed the logic of ScreamingMonkey to try to "prove" that ES4 requires a new VM.' […] "deferred JScript maintenance and ES3 spec-polishing," despite what you think. […] Microsoft has to lose; you should think about how we could all win.

What I think about ES4.
Dean Edwards asked me in a comment on the IEBlog what I personally thought of the ES4 proposal. ('You say that "Microsoft" think that the web is best served by the creation of a new language. Your name is at the bottom of this article. What do *you* think?' - I'll let the FUD comment bounce off. Damned if we do, damned if we don't say anything.) Consider the rest of this post to be only my opinion, because I haven't even run it by the other people on the team. […]

As I understand it, on the other hand, the ES4 proposal introduces a lot of new language functionality that essentially changes the character of the language. […] (I'd also take on other challenges first if I were redesigning Javascript - e.g. domain-aware security as a language tenet. That's Monday-morning quarterbacking the ES4 design as a new language proposal though.)

My point is that it's a fallacy to think that you're evolving Javascript if your expectation is that the scripts will have a different type param, and be handled by a separate runtime (i.e. the ScreamingMonkey approach). That doesn't seem like it will have good interop to me, at least not in a world where mashups and separate code components from disparate places (all of which are some variant of ES3 today) are the norm. […]

Hey, everyone can have an opinion.

...Spam. Evar.
"Dear Chris: This is a shameless marketing e-mail, so if you don't want to be marketed to, FOR GOODNESS SAKES, TURN OFF YOUR COMPUTER."

Oh! Oh! Wait!
I'm skeptical cat! [skeptical-cat-is-fraught-with-skepticism.jpg]

I am... sad cookie cat.
You are: Sad Cookie Cat
54% Affectionate, 40% Excitable, 60% Hungry
You are the classic Shakespearian tragedy of the lolcat universe. The sad story of baking a cookie, succumbing to gluttony, and in turn consuming the very cookie that was to be offered. Bad grammar ensues.
To see all possible results, checka dis.
Link: The Which Lolcat Are You? Test, written by GumOtaku on OkCupid Free Online Dating, home of The Dating Persona Test.

Thursday, I'm going to be going on vacation. For the first time in a very long time, I'm actually going to be totally offline for at least part of it - since I'm spending a week on a liveaboard dive boat in the Dry Tortugas (west of Key West in Florida). So, if you're trying to get ahold of me for the next week or two (until Monday June 25th) - don't. :)

Why I love my MacBook, but Microsoft too
I love my MacBook. I especially love the new photo workflow I'm almost finished setting up, which will not only have a smooth transition from developing to managing to uploading to Flickr, but also has automatic geotagging if I take along my handheld GPS when I'm shooting. (None of this is stuff I can't do on a Windows machine, by the way.)

Now - why I love Microsoft is because Microsoft has always valued backward compatibility. In my PC experience, there was one major inflection point in compatibility - the "moving to XP" inflection point. Prior to XP, home users used Win9x, and business users used Windows NT/2000. When I moved my home machine to XP, I had one piece of hardware I had to throw out because it lacked drivers (a then-old Mark of the Unicorn MIDI interface). Every other upgrade, I've managed to bring my hardware forward.

My experience getting the GPS (a Garmin eTrex Vista I've had for a long time) to work with the Mac has been less than easy. I guess I've been away from a command line too long - any time I have to poke around with UNIX shell commands to get a new driver to work, I think that's messed up.

The Garmin eTrex Vista has an RS232 serial interface (how quaint! - but if I DID have one of the newer USB ones, I'd be out of luck too, since Garmin still doesn't support the Mac). So, off to Office Depot to buy a serial-to-USB adapter - pick out a Belkin device, check the back of the box - "Mac OS 8.6 and greater." OK, cool - take it home, insert CD - "does not support this version of the operating system." Huh? Check the net - oh, great, Belkin has no support for OS X. Poke around trying to find drivers for other adapters that use the same chipset, to no avail. Fine. Take that one back, go to Fry's Electronics to find an IOGear adapter (GUC232A) that DOES have MacOS X support.

Plug THAT CD in this morning. Seems to install fine. Let it reboot, and it's still not recognizing the serial port. No error messages, but no USB serial adapter either, in the apps or in /dev. Spend a couple of hours trying reinstalls, poking around the shell trying to hand-install it. Finally figured out, thanks to one of the last comments in a forum thread, that the problem is that although it's an OS X driver, it's NOT a universal binary - and it's not working on my Intel-based Mac. OK, great - there's a GPL driver that DOES work with this.

So in short, not one but two layers of backward compatibility prevented me from getting this working, and I could only get it working by installing an open-source driver from a site unrelated to the hardware I had. Hmm.

But I did get it working, so I'm happy.

-C

PS - if you're interested in what my photo workflow looks like - I shoot in RAW with a Canon 20D, plug the CF card into a USB CF card reader, import and develop in Adobe Lightroom (Phase One Capture One LE as backup, since I used that on the PC), add keyword metadata and manage photos in iView MediaPro (now acquired by Microsoft), use GPSPhotoLinker to download tracks and geotag the photos, and use PictureSync to upload to Flickr, since it converts the keywords to tags. I'm getting a copy of Adobe Photoshop soon to do any further image editing. I also use iView to back up, both to a Western Digital portable drive and to a Wolverine portable media drive.

...from MIX...
...my talk is available on the web - go to […], type "Chris Wilson" in the search box and hit enter. Choose the first result (the second one is my talk from last year). Available in WMV and Silverlight format (which I checked out on my MacBook, and I'm pleased to report it worked well).

For all of you that aren't going to MIX07...
...there will be a lot of live information streamed from the event, and videos released afterward.

the joy of being me
So, we're having a fun thread on the HTML WG. I'm explaining what Microsoft must do, given the half-billion or so users that rely on us to not break their web experience. A certain contingent disagrees with me (okay) and seems to want to beat it out of me (not okay). […]

My wife says, "wow, people are really rude. You don't need that. Why are you still doing this?"

Hmm.

speaking engagements...
Trying to stay ahead of the game, but as usual I'm not very good at publicizing what I'm doing. Next week, I will be a panelist on the panel "The Arrival of Web 2.0: The State of the Union on Browser Technology" at the Web 2.0 Expo […] twitter or dodgeball to see where I am.

I'm also speaking at and attending the MIX'07 Microsoft conference in Las Vegas at the end of the month. A few of my fellow IE team members will also be there; my talk is entitled "IE7: Past, Present, and Future".

...again.
I seem to have two sets of people reading my blog - the people who know me from web work, and those who found this blog because I've occasionally posted about getting braces on my teeth in my mid-30s. This post is for the latter.

Danielle - in my experience, the pain (at least, the "I don't even want to eat" pain) goes away after a few days - maybe a week the first time, and re-occurs for a couple of days after every time I get my braces adjusted.

My appointment yesterday was good - because I continue to get the "your teeth are doing great, everything's moving along really well" - and bad, because I got rubber bands. I have a slight underbite - this was part of the teeth-wearing problem to begin with - and rather than having jaw surgery (eek!), I now have bands that pull my lower jaw back. They're rather uncomfortable, but I've only had them for 24 hours so far. We'll see how they are next week.

They are helping my diet, though, because it's hard to eat without taking them out, and a bit of a pain to put them back in - so I don't snack as much. :)

-C

I can't believe I forgot to post this.

If you're at SXSW, so am I. I was on Eric Meyer's panel "A Decade of Style" on Saturday morning, and even more exciting may be the "Browser Wars" panel at 10am on Tuesday morning, with Brendan Eich from Mozilla, Charles McCathie-Nevile from Opera, and myself. Don't miss it!

An interview with Jon Udell
Jon Udell called me up a week or two ago and asked me a bunch of questions about web history, IE and the HTML WG. I think it's actually broadly interesting, and for once I don't sound like a third grader on speed. Well, not too much. Check it out.

Daniel. Really.
1. It's 2007. When I said "spring of last year," I did in fact mean the 2006 Mandelieu meeting you refer to. There is not some back channel there.
2. I was not "chosen" then. Chris said, in effect, "hey, some people think you would be good." I believe my response was a chuckle, and a "thanks but I'm too busy." A month or two later, I emailed him and said "if the other potential WG members think I'm a good candidate, I guess I could make it happen." The W3C guys thought I'd be a good person to do it, I suppose, and the other browser vendors agreed to the choice of me. I'm still not "chosen" - all member companies get to vote. You're the only one that I know of who has objected. Again, I'm happy to be let off the hook, and I will be excited to participate.
3. You make it sound like the future of HTML is defined, because I was asked to be the initial chair of the HTML WG. That makes me really sad, and that you think I have some hidden agenda. You said "Just drop your current affiliation for a while" - I believe any chair should be, in effect, dropping their affiliation to perform those duties.
4. I HAVE read your post about why the WHAT-WG started, and I understand and agree, and I thought I echoed your sentiments. I get it. And I did not mean to say I was the only one who had told the W3C they were on the wrong track; but they did listen when I told them.
5. You said: […]
6. You said that extensive knowledge of the tech and the market is not mandatory to chair this WG. I must disagree - I think it is, because I think not understanding the entirety of the industry is what sunk the XHTML-2-producing HTML WG - I think they failed to recognize a few fundamental tenets they should have had.
7. You said "A joke from you about Hixie is not unprofessional, it just shows that we're only human beings. What a scoop... I just don't see the problem here." Spoken like someone who has never been misquoted and put in a bad light in the mainstream press. Whatever. I'm not religious on this point. I'll just keep my joking mouth shut and be less personable, then.
8. You said "I just do not understand why the W3C does not list FIRST the organizations and people ready to join this WG, and tries only LATER to find the correct chairperson." Great, I'm all for that. Get a couple other potential WG members to agree, put my name on the email to the W3C along with theirs that this is the way it should work. I really don't care. I just want us to get the working group together and get working on evolving HTML.
9. You said "...IPR problems or not, I think Microsoft has no choice, will have no choice here." I want you to understand that Microsoft, and anyone else who might be a target for IP infringement suits, will have to have a choice here. IP cannot be taken lightly as a risk to open specifications and standards.
10. You also said "I just don't want the W3C to, again, follow its own ideas just "listening" to the external sources." There, I think you and I completely agree, and that's why I offered to help.

-Chris

Daniel, me and the W3C (aka Reinventing HTML)
Some of you may not be aware that the W3C is recognizing that there has been an unfulfilled need for the evolution of HTML, and they are chartering a new HTML Working Group.

Over the holidays, I noticed Daniel Glazman's response to the charter, and to my proposed role as initial chair. I've known Daniel for years, and I have a tremendous amount of respect for him. I don't take any of his comments personally, as I'm 100% certain they were not meant personally. […]

I should be clear here that I am not a lawyer, and the following should not be considered legal advice. Consider it a layman's simplification of the issues, but you should consult your own lawyer. […] public discussion going on. […] Eolas case.) […]

Inside, though, I'm really hoping that you have a name of someone else to take over the chairship. :)

-Chris

Wow, a bunch of recent comments on my first post after getting my braces.

So, I've had my braces for about four and a half months now. I've had several followup appointments, the most recent one yesterday. I had my first breakage on Boxing Day, when I bit into an olive at a party not realizing there was a pit in it - I cracked off one of my upper brackets, and since it was the holidays, I had to wait a week to get it repaired.

General thoughts - the braces rubbing raw spots on the inside of your mouth/cheeks mostly goes away, but not entirely - that's what wax is for. I recommend keeping some wax around, and some Orajel to kill the pain (and prevent the sore spot from becoming infected).

Getting the braces adjusted - it's gonna hurt. As someone else discovered, immediately after the braces are put on or adjusted, it only aches a little - it's 6-8 hours later that it starts to REALLY ache, and it lasts for a variable amount of time, but usually a couple of days. Take ibuprofen or some other painkiller if you need it.

I haven't noticed any strange metallic taste while brushing my teeth - andrae, that sounds like your gums may be bleeding when you brush your teeth.

Tooth pain - hey, it's gonna hurt wherever the teeth are moving around, so yeah, it will hurt on one side more than the other sometimes.

...from the Holidays.
I've been pretty absent for a while from my blog. Sorry about that; I have a bunch of things buffered up to talk about, though, so I'll probably post quite a bit more in the next few days.

My New Year's resolutions:
1) Be 20% more funky. I'm pretty funky, but I could probably stand to be funkier. [Bonus points to the first person who correctly identifies my source for that, and no, Google won't help you.]
2) Dive more.
3) Be more organized and deliberate in my work.
4) Blog a lot more. I think I don't understand why anyone might care, so it seems conceited to post my thoughts or what I'm working on, but I recognize that it's a source of information from the Borg.
5) Lose weight. Sigh. I gained a bit last year, from stress.

...re-downloading background images
Quick. […]

-C

I'm leaving my position as IE platform Group Program Manager…
…HA! Gotcha! […]

So rest assured – this doesn't mean you'll see less of me, or that I will have less of an effect on the IE team as a whole – likely quite the opposite. I will continue to speak publicly, and hopefully will get to blog more often, etc.

-Chris

...even time to catch my breath...
...but I wanted to mention that I'm speaking next week at the Ajax Experience conference - both my usual IE7 talk (although updated a little - did you notice we shipped? :) ) and a keynote talk entitled "The Once and Future Web".

Oh, and I just dumped a bunch of pictures on Flickr, including a few from Fundamentos Web 2006.
Talk:Proposed features/Properties for Tags
I like this idea. I'm sure there will be a lot of details to sort out but it seems like as we start to keep track of more and more things with tags that an increase in complexity is inevitable. This seems like a fairly clean solution for keeping tag data together. Beau Gunderson 10:14, 12 August 2008 (UTC)
the usual way people denote this is with a ":" or an "_"... so footway:surface=something or footway_surface=something. There doesn't seem to be any need to come up with a general rule to me. It can be applied as appropriate when coming up with the individual tags. Randomjunk 15:21, 12 August 2008 (UTC)
- I haven't seen anything like that tagged in my area before. There is the sport namespace proposal that uses ":". I think it's prudent to have a general rule for 'subtags', so everyone will immediately know how to tag something like that and what such notations mean. That also means it might be easier to read by programs. Of course you can denote tags the way you want, it's just one possible guideline. --Driver2 18:53, 12 August 2008 (UTC)
- also useful for opposite bicycle traffic in oneways:
- oneway=yes
- oneway.except.access=bicycle --Cbm 20:17, 12 August 2008 (UTC)
Existing syntactic conventions for tagging
Let's have a stab at documenting what people do already before we invent yet another type of separator. This is the way I see the current usage in the Map features tags and in recent proposals:
- Moved to Any tags you like
It may be the case that the third convention for key linked above might do what you're setting out to do here.
Does your proposal add anything to this set of (admittedly rather scrappily documented and loose) conventions? Something strongly machine-readable and formally well-defined for metatagging might be possible to write down, but there also exists the question of whether there is demand for it. --achadwick 20:53, 1 January 2009 (UTC)
- This proposal is kind of abandoned. Maybe Proposed features/Tag structures would be interesting for you. I guess your analysis could add to it. It is kind of a mixture of current use and new concepts. But I'm not very good at expressing myself in English, so I'm not sure if this rather complex topic is explained very well. --Driver2 21:49, 1 January 2009 (UTC)
Sometimes you forget that GCs occur: it's hard to see their effect, but what does Garbage Collection do to your code?
A long time ago (3 decades!) I used Fortran and Assembly code on a PDP-11 16-bit computer to design real-time signal processing systems. Two of these were used to detect submarines: one would make a very loud noise (think a very big sonar ping), and the other would pretend to be a submarine by listening for the sound and echoing it back, at various set strengths (1X, 10X, 100X), as if it were a particular kind of submarine pointing in various directions. "I'm a Submarine", "I'm a Submarine", "I'M A SUBMARINE". If all 3 of these echoes were heard, then we knew we had heard all signal strengths. If only 2, then we knew by how much the target was missed.
The sound maker generated a huge amount of noise underwater by precisely controlling the release of an array of high-pressure air guns. The sound wavefront could be steered via subtle variations in firing timing. Sound travels so well and fast underwater!
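The steering-by-timing trick is simple geometry: to aim a line array's wavefront at an angle, each successive gun fires later by spacing * sin(angle) / c, where c is the speed of sound (roughly 1500 m/s in seawater). A sketch of that calculation, purely illustrative: the gun count, spacing, and angles below are made-up numbers, not the system described above.

```python
import math

def firing_delays(n_guns, spacing_m, angle_deg, c=1500.0):
    """Per-gun trigger delays (seconds) to steer a line array's wavefront.

    c defaults to an assumed ~1500 m/s speed of sound in seawater.
    """
    # Extra path length per gun, converted to a time delay.
    d = spacing_m * math.sin(math.radians(angle_deg)) / c
    delays = [i * d for i in range(n_guns)]
    offset = min(delays)              # shift so no delay is negative
    return [t - offset for t in delays]

print(firing_delays(4, 2.0, 0.0))    # [0.0, 0.0, 0.0, 0.0] - broadside, fire together
```

At zero degrees every gun fires simultaneously; steering off-axis produces a linearly increasing delay down the array.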
A typical scuba tank is about 8 inches in diameter and 20 inches long, holds maybe 3000 PSI, and is very dangerous (think exploding shark a la Jaws). Our array of 10 air guns was 10 inches in diameter and 20 feet long, at 10,000 PSI.
The listener computer on a different boat hundreds of miles away used an underwater microphone to record the sounds into memory at a sub-1 kHz sampling rate using an analog-to-digital converter. The code received the data via DMA (Direct Memory Access) and double-buffered it: as one buffer was being filled, the other would be processed, for example copied to an Array Processor to do a Fourier Transform.
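Double buffering like that is easy to mock up: a producer (standing in for the DMA engine) fills one buffer while a consumer processes the other, and a hand-off queue with capacity one provides the back-pressure. This is a simplified sketch with threads, an illustration of the pattern rather than the PDP-11 code; the "samples" and block sizes are invented.

```python
import threading
import queue

BUF_SIZE = 4
filled = queue.Queue(maxsize=1)   # hand-off: at most one full buffer pending
processed = []

def producer(blocks):
    buffers = [[0] * BUF_SIZE, [0] * BUF_SIZE]
    for n in range(blocks):
        buf = buffers[n % 2]              # fill one buffer...
        for i in range(BUF_SIZE):
            buf[i] = n * BUF_SIZE + i     # fake "samples"
        filled.put(list(buf))             # ...while the other is processed
    filled.put(None)                      # end-of-data marker

def consumer():
    while True:
        buf = filled.get()
        if buf is None:
            break
        processed.append(sum(buf))        # stand-in for the FFT/convolution step

t = threading.Thread(target=producer, args=(3,))
c = threading.Thread(target=consumer)
t.start(); c.start()
t.join(); c.join()
print(processed)   # [6, 22, 38]
```

The `maxsize=1` queue is the key: if the consumer falls behind by more than one buffer, the producer blocks, which is exactly the "CPU must keep up" constraint described below.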
The signal would then be convolved (multiplications, additions) with various prerecorded echoes from a submarine, and then the signal was D-A converted to a sound blasted via an underwater speaker.
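Convolution itself is just those multiplications and additions: each output sample is a sum of input samples weighted by the impulse response. A naive direct form in pure Python (the numbers are toy values, not sonar data; real systems use FFT-based convolution, which is why the Array Processor did Fourier Transforms):

```python
def convolve(signal, kernel):
    """Direct-form convolution: O(len(signal) * len(kernel))."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

print(convolve([1, 2, 3], [1, 1]))   # [1.0, 3.0, 5.0, 3.0]
```

Swapping `signal` and `kernel` gives the same result, since convolution is commutative.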
The CPU needed to be fast enough to keep up with the incoming raw data: if I tuned it to sample a little bit faster, bad things would occur: missing data, distortion, etc.
I had to have precise control of these real time systems for accuracy and reliability.
Designing a real-time system on a garbage-collected runtime could be quite difficult in tight performance scenarios. A GC during a sampling window could cause lost data.

What does a GC actually do? Every thread that runs managed code has to be frozen during the entire GC, which means your application pauses. All objects are scanned to see if they are referenced by any other objects. Unreferenced ("lonely") objects are considered freeable; surviving objects are moved around in memory to coalesce free space, and object references are updated with the new locations.
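You can also watch collector pauses from inside a managed program. As a cross-language illustration (the post's demo below is C#), CPython's cyclic collector exposes start/stop callbacks, so a few lines can log when collections ran and how long each took. CPython's GC behaves differently from .NET's - it neither freezes all threads for the whole collection in the same way nor compacts memory - but the pauses are just as real.

```python
import gc
import time

pauses = []   # (generation, seconds) per observed collection
_t0 = []

def _watch(phase, info):
    if phase == "start":
        _t0.append(time.perf_counter())
    else:  # phase == "stop"
        pauses.append((info["generation"], time.perf_counter() - _t0.pop()))

gc.callbacks.append(_watch)
for _ in range(50000):      # churn out cyclic garbage to trigger collections
    a = []
    a.append(a)             # self-reference: only the cycle collector frees this
gc.collect()                # force at least one full collection
gc.callbacks.remove(_watch)
print(len(pauses), "collections observed")
```

Sum the recorded durations against wall-clock time and you have a rough equivalent of the "% Time in GC" counter used later in this post.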
So how can you see or hear the effect of GC?
Start Visual Studio
File->New->Project->C# WPF Application. Name it SeeHearGC
Replace the contents of MainWindow.xaml.cs with the code below.
If you run under the debugger (F5), then Uncheck this option: Project->Properties->Debug->Enable the Visual Studio Hosting Process
Click the right mouse button for just sounds, or the left button for the color display.

The code raster-scans the form, painting pink squares, then repeats with blue. A beep sounds at the end of each horizontal scan, and if a GC occurs while a square is being painted, that square's color is inverted, so you can see (and hear) when collections happened.
You can also run PerfMon (Start->Run->PerfMon) to see the GC behavior of any managed application, even while the program is running.
Select Performance Monitor in the Treeview, delete any counters in the display (Hit the red x), then hit the green “+” to add a couple counters.
Choose <Local computer> , scroll to and expand .NET CLR Memory, and choose “#Gen 0 Collections” and “% Time in GC”. In the Instance listbox, choose the process: SeeHearGC.
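Those counters can also be read programmatically; the code below does exactly that with `PerformanceCounter`. For the same idea outside Windows, CPython exposes per-generation collection counts via `gc.get_stats()`, a rough analogue of "# Gen 0 Collections" (an illustration only, unrelated to PerfMon itself):

```python
import gc

gc.enable()   # make sure the collector is running (it is by default)
before = [s["collections"] for s in gc.get_stats()]
junk = [[i] for i in range(100000)]   # allocation churn drives gen-0 collections
after = [s["collections"] for s in gc.get_stats()]

gen0 = after[0] - before[0]
print("gen-0 collections during the loop:", gen0)
```

Polling these counts before and after a critical section tells you whether a collection landed inside it, much like watching the counter spike in PerfMon.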
Originally I wanted to control the speaker directly because human hearing can detect subtle differences in timing. However, I had to settle on Beeps.
Perhaps in a later post, I’ll write out to a WAV file or otherwise record the GC data for further analysis.
See also:
How to filter out unwanted sounds via Fourier Transform
Heartbeat: Garbage collection in VFP and .NET are similar
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Threading;
using System.Runtime.InteropServices;
using System.Diagnostics;
using System.Reflection;
namespace WpfApplication1
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
public const int xCnt = 5;
public const int yCnt = 10;
public const int xSize = 19;
public const int ySize = 19;
TextBlock[,] _btns = new TextBlock[xCnt, yCnt];
[DllImport("kernel32.dll")]
public static extern bool Beep(int BeepFreq, int BeepDuration);
PerformanceCounter _perfCounterGC;
int _nGCs = 0;
public MainWindow()
{
InitializeComponent();
// Uncheck this option: Project->Properties->Debug->Enable the Visual Studio Hosting Process
var thisAsm = System.IO.Path.GetFileNameWithoutExtension( Assembly.GetEntryAssembly().Location);
_perfCounterGC = new PerformanceCounter(".NET CLR Memory", "# Gen 0 Collections", thisAsm);
SolidColorBrush[] brushes = new SolidColorBrush[2];
brushes[0] = new SolidColorBrush(Colors.LightSalmon);
brushes[1] = new SolidColorBrush(Colors.LightBlue);
this.Title = "Click on the Red";
int i, x, y;
this.Loaded += (o, e) =>
{
var surface = new Canvas();
this.Content = surface;
for (y = 0; y < yCnt; y++)
{
for (x = 0; x < xCnt; x++)
{
var rect = new TextBlock()
{
Background = brushes[0],
HorizontalAlignment = System.Windows.HorizontalAlignment.Left,
VerticalAlignment = System.Windows.VerticalAlignment.Top,
Width = xSize,
Height = ySize
};
_btns[x, y] = rect;
surface.Children.Add(rect);
Canvas.SetLeft(rect, x * xSize);
Canvas.SetTop(rect, y * ySize);
}
}
int currentMainColor = 0;
this.MouseDown += (om, em) =>
{
bool fDoVisual = true;
if (Mouse.RightButton == MouseButtonState.Pressed)
{
fDoVisual = false;
}
var txt1 = "1";
var txtToUse = string.Empty;
int colorToUse = 0;
int curGCs = 0;
// main loop: we want to minimize memory allocations
for (i = 0; i < 5; i++)
{
if (fDoVisual)
{
currentMainColor = 1 - currentMainColor;
}
for (y = 0; y < yCnt; y++)
{
Beep(440, 160);
if (fDoVisual)
{
for (x = 0; x < xCnt; x++)
{
curGCs = (int)_perfCounterGC.NextValue();
colorToUse = currentMainColor;
if (curGCs != _nGCs)
{
txtToUse = txt1; // (curGCs - _nGCs).ToString();
_nGCs = curGCs;
colorToUse = 1 - colorToUse;
}
else
{
txtToUse = string.Empty;
}
_btns[x, y].Text = txtToUse;
_btns[x, y].Background = brushes[colorToUse];
{ // we want to wait til after render by synchronously running low pri empty code
this.Dispatcher.Invoke(DispatcherPriority.Render, new Action(() =>
{
}
));
}
}
}
else
{
System.Threading.Thread.Sleep(100);
}
}
}
};
};
}
}
}
We are proud to announce version 21.2 of the gem5 project. In this release we incorporated 790 commits from 33 unique authors, new and regular, from both academia and industry. We are, as always, thankful for all the time our community puts into maintaining and improving gem5.
Having existed since v21.1 under the now deprecated name "the components library", the v21.2 release of gem5 moves the gem5 standard library out of alpha. The purpose of the gem5 standard library is to provide gem5 users a standard set of commonly used components and utilities to aid them in their research. Our overarching goal with the standard library is to remove "boilerplate" code from gem5 configuration files, making the 95% of activities that rarely change from simulation to simulation available in an "off-the-shelf" manner to users. As an example, a user wishing to experiment with the effects of cache sizes can use the gem5 standard library to set up a processor, memory system, and test on sensible benchmarks, thus freeing them to focus completely on the impact of cache-size changes.
The gem5 standard library is provided as a Python package which contains the following:

- Components, some of which are prebuilt (e.g., `SingleChannelDDR3_1600`) and others parameterized. Components can be combined together into `boards`, which can be simulated.
- Prebuilt systems (`boards`) to use directly in gem5 simulations with minimal setup. This release includes an X86 demo board and an example of how it may be used.
Note: Usage of the gem5 standard library is optional. It does not change any established gem5 API, or how gem5 configuration scripts may be created. gem5 configuration scripts that functioned in v21.1 should continue to function in v21.2. We do, however, hope the gem5 library can aid users in creating simulations, as is the case with all libraries.
Users can find example configuration scripts that incorporate the gem5 standard library in the gem5 repository's `configs/example/gem5_library` directory.
As an example of how simple the gem5 standard library can make running a gem5 simulation, consider the following script:
```python
from gem5.prebuilt.demo.x86_demo_board import X86DemoBoard
from gem5.resources.resource import Resource
from gem5.simulate.simulator import Simulator

# Here we setup the board. The prebuilt X86DemoBoard allows for Full-System
# X86 simulation.
board = X86DemoBoard()

# We then set the workload. Here we use the 5.4.49 Linux kernel with an X86
# Ubuntu OS. If these cannot be found locally they will be automatically
# downloaded.
board.set_kernel_disk_workload(
    kernel=Resource("x86-linux-kernel-5.4.49"),
    disk_image=Resource("x86-ubuntu-18.04-img"),
)

# We then setup the Simulator and run the simulation.
simulator = Simulator(board=board)
simulator.run()
```
This script can be executed with:

```
scons build/X86/gem5.opt
./build/X86/gem5.opt <script>
```
The script will automatically obtain the correct linux kernel and a disk image containing Ubuntu 18.04 from gem5-resources (if not already present on the host system). It will then run a full-system X86 simulation to a complete boot of the operating system, then exit. Prior to the introduction of the gem5 standard library, a user would have to put in considerable effort to build such a simulation (100s of lines of python).
While we hope we have designed the standard library in an intuitive manner, users may reference the source under src/python/gem5.
In the coming month we will be updating the gem5 website with new tutorials and documentation on using the gem5 standard library.
Over the next few gem5 releases we will be expanding the standard library to include more components and features. A big goal of ours is to provide prebuilt components and systems that are proven to be representative of real-world counterparts.
The `Simulate` module will be expanded, improved, and moved out of its beta state as its role in the gem5 standard library becomes more clear.
If you wish to report a bug in the gem5 standard library or have a feature request, please submit it to gem5's Jira site. Questions regarding usage of the standard library can be made to the gem5 user's mailing list.
LupIO devices were developed by Prof. Joel Porquet-Lupine as a set of open-source I/O devices to be used for teaching. They were designed to model a complete set of I/O devices that are neither too complex to teach in a classroom setting, or too simple to translate to understanding real-world devices. A goal of two undergraduate students at UC Davis, Melissa Jost and Laura Hinman, was to work on incorporating LupIO devices into gem5. As such the gem5 v21.2 release includes a LupIO real-time clock, a random number generator, a terminal device, a block device, a system controller, a timer device, a programmable interrupt controller, and an inter-processor interrupt controller.
A more detailed outline of LupIO can be found in Prof. Porquet-Lupine's paper "LupIO: a collection of education-friendly I/O devices" and information on the wider LupLab research group can be found on their website.
Users wishing to try out LupIO devices can find an example script and README file in the `configs/example/lupv` directory.
Note: These LupIO devices have been built and tested for RISC-V. However, there is no reason these couldn't be modified to work with other ISA targets if required or desired. We welcome further development by the gem5 community.
In continued and welcome collaboration with Arm Holdings, improvements to gem5 Arm implementations have been made. They are:
- An `ArmRelease` object which models the architectural features supported by a FS/SE Arm simulation.
- Usage of the `ArmTLB` as a walk cache which stores intermediate PAs obtained during a translation table walk.
Continued efforts by, primarily, AMD, Inc. and the University of Wisconsin have improved gem5's GPU support in this release.
In recent versions of gem5, we sadly lost the ability to integrate with the Structural Simulation Toolkit (SST). In collaboration with the SST community, we have revived support for connecting gem5 cores to the SST memory system. In the v21.2 release, this has been tested for RISC-V and Arm. More information on setting up and running gem5 with SST can be found in `ext/sst/README.md`.
- [API CHANGE]: All `SimObject` declarations in SConscript files now require a `sim_objects` parameter that lists all SimObject classes declared in that file which need C++ wrappers (that is, SimObject classes which have a `type` attribute defined).
- [NEW CHANGE]: There is now an optional `enums` parameter for `SimObject` classes which must list all the Enum types defined in that SimObject file. Technically, this should only include Enum types which generate C++ wrappers though, as of v21.2, all Enums do so.
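For illustration, a hypothetical SConscript declaration using both parameters might look like this (`MyDevice` and `MyDeviceMode` are made-up names, not part of gem5):

```python
# Hypothetical SConscript fragment: MyDevice.py declares a SimObject class
# MyDevice (which has a C++ wrapper) and an Enum type MyDeviceMode.
SimObject('MyDevice.py', sim_objects=['MyDevice'], enums=['MyDeviceMode'])
```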
Missing schemas for conference info extensions
- Wednesday, April 04, 2012 3:41 PM
Hi,
The XSD schema for conference info extensions references two XSD documents that seem to be missing in the published documentation (ms-ci-ext2.xsd, endptsettings.xsd). Are these available online somewhere or, if not, can Microsoft post them?
<xs:import
<xs:import
Thanks,
-=- Vlad Vysotsky
All Replies
- Wednesday, April 04, 2012 4:13 PM (Moderator)
Hi Vlad,
Thanks for your question, one of the Open Specifications team members will respond shortly to begin working with you.
Best regards,
Tom Jebo
Escalation Engineer
Microsoft Open Specifications
- Friday, April 06, 2012 5:29 PM (Moderator)
Hi Vlad,
I am the engineer who will be working with you on this issue. I am currently researching the problem and will provide you with an update soon.
Regards,
Vilmos Foltenyi - MSFT
- Friday, April 13, 2012 7:42 PM (Moderator)
Hi Vlad,
Thank you for your patience. It looks like both namespace definitions, mentioned in your posting, are missing. The first one, confinfoextensions, has an older definition in MS-CONFBAS 8.2.
I filed a request to have this included in a future release of the documentation.
Thanks, Vilmos
- Friday, April 13, 2012 7:49 PM
Thanks, Vilmos,
indeed, the older "2005/08" version is posted, and we've used it successfully, while the "2008/12" revision is missing.
Do you have an approximate ETA for publishing the missing schemas?
Regards,
-=- Vlad
- Friday, April 20, 2012 7:55 PM (Moderator)
Hi Vlad,
The documentation and product groups are working to correct the problem: two "import namespace" declarations in MS-CONFAV 6.2 don't have matching schemas. When the corrected documents are published, I'll post the news here.
Thanks, Vilmos
- Marked as answer by Vilmos Foltenyi (MSFT, Microsoft Employee, Moderator), Friday, April 20, 2012 7:56 PM
- Saturday, April 21, 2012 5:28 PM
Perfect, thanks a lot, Vilmos.
-=- Vlad
- Friday, September 07, 2012 12:14 AM
Hi Vilmos,
I noticed that the [MS-CONFAV] document was touched at least once since April, but no updates were made.
Did you hear back from the documentation and product groups regarding the two missing schemas? Is there an ETA for getting the correction?
Regards,
-=- Vlad
- Saturday, September 08, 2012 2:48 AM (Moderator)
Hi Vlad,
The new release of the Microsoft Office Protocol Documents will contain the fix, which probably will affect two documents.
Thanks, Vilmos | http://social.msdn.microsoft.com/Forums/en-US/os_office/thread/42922430-c33f-409f-b901-38798e19348e | CC-MAIN-2013-20 | refinedweb | 400 | 51.58 |
From: Michael Kay
Yes, I know this one is a pain.
There are some security issues here. Using the current
working directory is not a good idea in a servlet
environment, especially if the stylesheet is untrusted. I'm
having trouble remembering why placing the output files
relative to the stylesheet was also considered a bad idea.
However, I think I can get over these problems by confining
any changes to the command line and not the API.
Could you take xml:base as a commandline parameter Mike?:
Michael,
Just saw on the archives that you changed this for the next release.
Thanks! I agree that it makes sense to force the output directory to be
explicit if called from an API other than the command line.
(But you know a lot of us use that command line, even if you keep
telling us that life would be so much happier if we programmed in
Java:-)
David
> > How can I get this fix?
> >
> > Thanks,
> > Priya
>
> You'll have to be patient and wait for the next release.
Hi Mike,
Will bug #1048476 (Undeclared prefix for built-in namespace) be fixed in
the next release? I have to decide whether to go with an older version
of Saxon or wait for the next release (hopefully it's imminent?) - it's
a choice I can make today if the fix isn't going to be in the next
release.
thanks
andrew
>
> Will bug #1048476 (Undeclared prefix for built-in namespace)
> be fixed in the next release?
yes.
I'm going to try and produce a release before Christmas but there are quite
a lot of loose ends to tie up so I can't guarantee it.
Michael Kay
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/saxon/mailman/saxon-help/thread/E1Ccj2P-0006d1-00@ukmail1.eechost.net/ | CC-MAIN-2017-34 | refinedweb | 367 | 71.24 |
09 April 2010 15:37 [Source: ICIS news]
By John Richardson
SINGAPORE (ICIS news)--Hope springs eternal when it comes to trying to fathom the direction of the polyolefin market in ?xml:namespace>
One particular hope rests on March import numbers from China Customs, due to be released later this month.
The data might just give a pointer to the extent that new local capacity has displaced the need for imports – and whether all the talk about credit-tightening has translated into weaker demand.
“March will be the first ‘normal’ month in 2010, when comparisons might just be valid with import volumes last year. Late January and the whole of February were distorted by the build-up to the Chinese New Year,” said a southeast Asia-based petrochemicals consultant.
New local capacity includes Tianjin Petrochemicals. Volumes from the recently-started complex are being seen in much greater quantities in the market, according to several traders and producers.
This is being absorbed as output from the
But what will make the March numbers hard to read, as so often happens with China import statistics, is a huge inventory overhang in polyethylene (PE) – the result of heavy buying by traders of overseas material late last year.
“I remember that you had asked me where all these heavy imports were going,” said a source with a major
“Now I can tell you – into warehouses! At the end of March, bonded warehouses had 2-3 times their level of normal stocks and we have no idea how bad the situation is inland, at all the warehouses where yuan-priced domestic material is stored.”
Credit remained extremely easy to obtain late last year with less concern over the Chinese government’s efforts to cool the economy down, said a Singapore-based polyolefins trader.
“Many of the traders made the dangerous assumption that the future would be the same as the past.
“Recently, the government has been talking about cutting loan growth by 22% compared with 2009.”
Such has been the inventory overhang in PE that small quantities of resin imported into China has been re-exported to Brazil, Bangladesh and Israel, the trader added.
What is strange, though, is that when you look at the graph below showing import volumes for the whole 2009, you will see that PP shipments also surged late last year – and yet the PP market is in radically different shape to that of PE.
Source of Data: International Trader Publications
“Our assessments of rolling inventory indicate that PP stocks in
“Reduced availability from the
The drop in supply from the
In short, therefore, if the March import figures show a sharp drop in both PE and PP this might tell us little about the underlying, long-term state of the market (PE numbers could be down on this huge inventory overhang with PP also lower, partly on lack of availability).
And if the statistics surprise on the upside, be careful of anybody who argues that this is a firm indication that
“OK, credit has got a little tighter locally, but there are still an awful lot of speculators out there,” continued the Singapore-located trader.
“I have done a lot of business with other traders in
“A lot of foreigners don’t understand what continues to underpin demand in
“These traders will buy resin in US dollars and then sell in Yuan at a loss to local end-users. They will then use the credit to try and make money in steel, coal and other hot commodities before the 90 days are up.”
This complex intra-trade business is now been further bolstered by rising expectations of a yuan revolution, he added.
“The hope is that if you borrow in US dollars and convert to yuan the local currency will have strengthened by the time your 90 days are up.”
This suggests that a bursting of the bubbles in steel, coal and other commodity prices would have a big knock-on to demand for polyolefins, as would a yuan revaluation.
And it also suggests that any month’s polyolefin import statistics need to be taken with a large pinch of salt.
So what’s the sentiment like among buyers then, perhaps a more useful pointer to the underlying state of the market?
“Overall, it’s one of cautious optimism over the economy. But they know there’s a lot more new capacity just around the corner,” said a Hong Kong-based polyolefins trader.
New ethylene capacity, including a lot of downstream PE, will total 9.5m tonne/year in 2010 with global demand growth in normal market conditions around 5m tonne/year, according to ICIS data.
“Increased supply from the Middle East has been particularly big in linear low density PE (LLDPE) so far this year,” continued the
“A major producer from the region plans to deliver 40,100 tonnes into warehouses in
“This same producer only sold a total of 200,000 tonnes to
He added that high density PE (HDPE) would also get ugly.
New PP capacities in 2010 include the 800,000 tonne/year Borouge plant in
The Borouge plant will start up in the third quarter and Siam Cement in the fourth quarter, according to ICIS plants and projects.
The volume of new capacities seems to be far too big to prevent a severe margin-squeeze at some stage, with most estimates indicating that this will happen in the fourth quarter this year.
But making an educated guess about what this margin-squeeze will mean | http://www.icis.com/Articles/2010/04/09/9349334/insight-bad-visibility-clouds-chinas-polyolefins-market.html | CC-MAIN-2015-14 | refinedweb | 919 | 51.82 |
Most of the Django sites I develop these days need to work on mobile. I am using Bootstrap 3 to make my pages responsive. Bootstrap’s philosophy is “mobile first”. But there are many cases where I do not want users with big screens to be penalized by a site that is optimized for a small screen.
While Bootstrap has some CSS classes for showing and hiding based on screen size, that approach is somewhat limited. It also slows the page down because the content still needs to be downloaded even if it's not shown. A more complete solution would be to have the option of detecting the device on the server side and altering the page before it's sent. This is especially easy using Django templates.
The big question is how to detect the user's device and hopefully get its screen size in Django. However, when I tried to install it on a CentOS 5, 32-bit machine (Webfaction), it seg faulted. The folks at Webfaction suggested that I migrate to one of their CentOS 6, 64-bit machines. It installed without problems on the machine. But when I tried using the code by fetching a page from my DroidX, it seg faulted. I am not sure how to debug something like that. It works locally. And when it seg faults there are no debugging messages.
Wait. All is not lost. The middleware and context processor works with Firefox on Ubuntu. I added a bunch of print statements to the middleware. The problem occurs in the call to mobile_detector.match in _match. Up until then all the parameters look reasonable. To make matters worse, if I fire up my virtualenv and python interpreter on the server and run that command from the command line, with the params that caused the seg fault, everything works fine. It looks like this is a problem with mod_wsgi.
There is some stuff on stackoverflow related to this. I tried adding WSGIApplicationGroup %{GLOBAL} but no change. This is a mess. Moving on. I am going to try a pure python package.
django-mobile looks OK, but seems to be limited to just differentiating between mobile, tablet and full. MobileESP looks interesting. The python code is just one file: mdetect.py. It's very readable and understandable. Not as many Django features as django-mobile. But most of what I want, I can roll my own. Here is quick and dirty middleware:
```python
from xxx import mdetect  # "xxx" is a placeholder path

class DetectMobile:
    def process_request(self, request):
        user_agent = request.META.get("HTTP_USER_AGENT")
        http_accept = request.META.get("HTTP_ACCEPT")
        if user_agent and http_accept:
            agent = mdetect.UAgentInfo(userAgent=user_agent, httpAccept=http_accept)
            # in case we want more information about the device
            request.mobile_esp_agent = agent
            if agent.detectMobileQuick():
                request.device_type = 'mobile'
            elif agent.detectTierTablet():
                request.device_type = 'tablet'
            else:
                request.device_type = 'desktop'
        else:
            request.mobile_esp_agent = None
            request.device_type = 'desktop'  # default
```
The careful reader will no doubt notice that after I called django-mobile limited, I set about creating those exact limits in my middleware. Ha. But an even carefuller reader will notice I include the agent in request, so I can get at the details as needed. So far this approach is working well.
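With the middleware installed (added to the middleware list in settings.py), views can branch on request.device_type. Here is a framework-free sketch of that branching; FakeRequest stands in for Django's HttpRequest and the template names are made up:

```python
class FakeRequest:
    """Stand-in for django.http.HttpRequest; the middleware sets device_type."""
    def __init__(self, device_type):
        self.device_type = device_type

def pick_template(request):
    # Serve a slimmer template to small screens; fall back to the full page.
    return {
        "mobile": "article_mobile.html",
        "tablet": "article_tablet.html",
    }.get(request.device_type, "article.html")

print(pick_template(FakeRequest("mobile")))   # article_mobile.html
print(pick_template(FakeRequest("desktop")))  # article.html
```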
```java
import java.util.*;

class SampleA { }
class SampleB extends SampleA { }
class SampleC extends SampleA { }

class VectorDemo {
    public static void main(String r[]) {
        Vector<SampleA> v = new Vector<SampleA>();
        v.add(new SampleB());
        v.add(new SampleC());
        SampleC rect = v.get(2);
    }
}
```
The output says "Incompatible types. Found SampleA, required SampleC. SampleC rect = v.get(2);"
My doubt is: first of all, isn't this like using a wildcard? Instead of saying Vector<? extends SampleA> v, we mentioned it in the form Vector<Type> v, and one of the wildcard rules says that if only <Type> is mentioned then it can accept only that type and no sub- or super-type. So why are SampleB and SampleC accepted?
Second, although it's accepted, why is v.get(2) not returning SampleC? In fact, writing v.get(1) also gives the same error! Why not return SampleB for v.get(1) and SampleC for v.get(2)?
Help please. Thanks. | https://www.daniweb.com/programming/software-development/threads/433493/doubts-in-generics-collection | CC-MAIN-2017-26 | refinedweb | 159 | 71.31 |
Catering to a larger audience.
Since RSS is XML, treat it as such
What happens? The RSS or Atom feed is an XML document. When the user clicks it, the browser downloads the document and attempts to display it as XML, that is, as raw code. To make things worse, some versions of Internet Explorer display a security warning.
The situation is not easy. Technically RSS feeds should have the application/rss+xml MIME type while Atom feeds should be identified as application/atom+xml. If the MIME type is correct and the visitor has a news aggregator properly set up on his machine, then the browser will launch it automatically.
In practice, few visitors have the proper configuration so they are more likely to see a cryptic error message. Consequently most Web sites use the text/xml or application/xml MIME type which is incorrect but at least causes the browser to display the raw XML code. It's only a slight improvement over an error message but, hey, take what you can.
To make matters worse, some sites serve XML documents as application/octet-stream due to misconfiguration. The webmaster must update the server configuration to use the most appropriate MIME type. For example, with the popular Apache Web server, this is done in the .htaccess file.
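Assuming Apache with mod_mime enabled, and feeds published with .rss and .atom file extensions (the extensions here are just a convention), the .htaccess entries might look like:

```apacheconf
AddType application/rss+xml .rss
AddType application/atom+xml .atom
```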
To alleviate the problem, the most recent browsers sniff incoming XML files to categorize them properly. Sniffing simply means that they read the first few bytes looking for RSS or Atom tags. But, again, that requires the visitor to use an RSS-aware browser.
Fortunately there's a better solution: an XSLT stylesheet. If the browser treats the feed as an XML document, it will use the stylesheet to render a sensible page. If, on the other hand, the browser recognizes an RSS and Atom feed, it will ignore the stylesheet. Voilà, the best of both worlds!
Listing 1 is an RSS document associated with a stylesheet (an excerpt from my podcast's feed). Note that the second line is an xml-stylesheet processing instruction. This is the crucial link to the stylesheet. The href is the path to the stylesheet.
Listing 1. RSS excerpt
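A minimal sketch of such a feed (placeholder titles and URLs; the href points to a hypothetical stylesheet file) is:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="rss2html.xsl"?>
<rss version="2.0">
 <channel>
  <title>Example Podcast</title>
  <link>http://www.example.com/</link>
  <description>A placeholder feed.</description>
  <item>
   <title>Episode 1</title>
   <description>First episode.</description>
   <enclosure url="http://www.example.com/ep1.mp3"
              length="1234567" type="audio/mpeg"/>
  </item>
 </channel>
</rss>
```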
Listing 2 is the stylesheet. If you are familiar with XSLT, you can probably write a similar stylesheet in minutes... but for one quirk covered in the next subsection. If you know XSLT, feel free to skip directly to the next subsection. If you are not familiar with XSLT, read on as I'll cover the bare-bone minimum needed to process RSS in the remainder of this series.
Listing 2. XSLT stylesheet
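A minimal stylesheet consistent with the description that follows (xsl:value-of, xsl:for-each over the items, xsl:if around the enclosure, curly brackets inside an href attribute) might be:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:template match="/">
  <html>
   <head><title><xsl:value-of select="rss/channel/title"/></title></head>
   <body>
    <h1><xsl:value-of select="rss/channel/title"/></h1>
    <xsl:for-each select="rss/channel/item">
     <h2><xsl:value-of select="title"/></h2>
     <p><xsl:value-of select="description"/></p>
     <xsl:if test="enclosure">
      <p><a href="{enclosure/@url}">Download the episode</a></p>
     </xsl:if>
    </xsl:for-each>
   </body>
  </html>
 </xsl:template>
</xsl:stylesheet>
```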
Note that the stylesheet is an XML document (just like the RSS or Atom stream) and it uses a namespace (like Atom elements or RSS extensions). Typically for XML documents, you must be wary of the syntax. Specifically make sure that the opening tags (<p>) have a matching closing tag (</p>). Empty tags must follow a special syntax (<br />).
The stylesheet contains XSLT statements to control the rendering (in the http://www.w3.org/1999/XSL/Transform namespace; in Listing 2 the XSLT statements begin with the xsl prefix) and HTML tags to control the layout of the page. If you want to modify Listing 2 to adapt it to your site layout, you can edit the contents of the xsl:template element. Make sure to preserve the XSLT statements. The four XSLT instructions that you will need are xsl:value-of, xsl:for-each, xsl:if and the use of curly brackets.
The xsl:value-of instruction extracts information from the RSS or Atom document and inserts it in HTML. The instruction takes one attribute called select with a path to the RSS or Atom element that you're interested in. For example, to copy the feed title, the path is rss/channel/title since the title element appears underneath channel which itself is included in rss. As you can see, the path simply lists the elements in the order in which they appear in the RSS document. To copy data from an attribute, prefix the attribute name with @ as in rss/channel/item/enclosure/@url.
xsl:for-each is the looping instruction. It loops over a set of elements (selected through the select attribute as well), in this case the various items. For each item, the stylesheet prints some basic information: title, description and a link to the enclosure. The curly brackets in attributes (and only in attributes) extract information from the RSS or Atom feed, like xsl:value-of does for regular text. In the stylesheet, curly brackets populate several href attributes.
Last but not least, the xsl:if instruction executes only if its test succeeds. In Listing 2, xsl:if tests whether it's worth printing the enclosure information or whether the enclosure tag is absent.
I have only scratched the surface of XSLT but if you make good use of copy-and-paste and Listing 2, you can adapt it to fit your site layout. Check Resources for a more complete tutorial on XSLT.
If your stylesheet does not work as expected, review the following:
- Make sure you declare the namespace exactly as shown (the xmlns:xsl attribute), do not change the URI
- If your document uses other namespaces (such as the iTunes extension), make sure you declare those as well
- If the stylesheet seems to work but you cannot extract some data, it most likely is a path problem (when I teach XSLT, incorrect path causes 80% of the problems with my students)
Most feed editors allow you to insert the required xml-stylesheet instruction. If yours does not support it, you can turn to FeedBurner to update the feed. FeedBurner even offers a default XSLT stylesheet (see Resources).
All would be good in the land of RSS and Atom if Firefox had support for the disable-output-escaping feature in XSLT but it does not. disable-output-escaping is an obscure feature in XSLT that serves only one purpose: it processes tags that appear in other tags, such as CDATA sections. And, RSS and Atom make heavy use of CDATA sections to embed HTML code. With disable-output-escaping, you should be able to lift the HTML tags from the feed and insert them right into the HTML page... but for Firefox. Firefox essentially ignores the instruction so it ends up displaying the raw HTML code.
There's been some debate in the Firefox community as to whether this behavior was standard compliant or not. Nevertheless it is a problem and one for which you need a solution.
Fortunately Sean M. Burke came up with a clever piece of JavaScript that circumvents the limitation. Mr Burke was kind enough to place his code in the public domain, enabling anyone to use it in any project. For your convenience, I include a link to a copy of his script in Resources.
For the script to work, your stylesheet must insert a
div
section with the id "cometestme." Your stylesheet must also place every item that needs escaping
in paragraphs with the name "decodeable."
Finally, you must call the script (
go_decoding()),
as you load the HTML document.
What to do in the stylesheet?
Listing the items in the RSS or Atom feed is only the beginning. After all, that content is already available elsewhere on the Web site and the feed was designed to drive subscriptions, not replicate content.
Most webmasters who attach an XSLT stylesheet to their RSS or Atom feed include instructions on how to install a news aggregator and subsequently subscribe to their feed.
While this sounds like the right thing to do, it has been my experience that visitors who are presented with such a page are unlikely to install an aggregator. With viruses and trojans, surfers are suspicious of demands to install software.
Many sites therefore include instructions that direct visitors to an online aggregator such as Google Reader or Yahoo!. While it seems like a good idea, I remain unconvinced on its efficiency. Unless they already subscribe to many feeds, visitors are not much more likely to sign up for a new service than to install new software. Assuming they do, what are the chances that they will remember to visit the online aggregator? My thinking is that if they have to bookmark a site, I'd rather they bookmark mine.
Thinking outside of the box
Personally I offer an option to subscribe through e-mail through one of the RSS-to-e-mail services. You can safely assume that every visitor has an e-mail address. I have drafted detailed instructions, outlining the options and including a very prominent e-mail subscription form. I have found that one fifth of the visitors to my podcast would rather subscribe through e-mail over subscribing through RSS.
RSS and Atom might be better technical solutions but nothing beats a familiar service... and e-mail is the most familiar service for many visitors.
To save me having to write subscription instructions twice (with the risk that they might diverge in the future), I use the stylesheet in Listing 3. It is simpler than Listing 2 and it implements an HTML redirect to send visitors to a regular page on my site.
Listing 3. The most simple solution? Redirect them!
When a visitor clicks the RSS feed, if her browser does not recognize RSS, it behaves like a redirect!
This article has shown how to put a friendly face on an RSS or Atom feed. Until they are more widely known, it is a good idea to implement this as a safeguard.
Learn
- Introduction to Syndication (Vincent Lauria, developerWorks, June 2006): Get started with RSS -- find why it is so popular and its benefits, which feed readers are available and might fit your needs. Plus learn about RSS and Atom subscriptions available to you from IBM.
- The RSS specification: Dig into this surprisingly readable spec for all the details on RSS.
- An overview of the Atom 1.0 Syndication Format (James Snell, developerWorks, June 2005): Consider Atom, an alternative to RSS.
- Hands-on training (Don Day, developerWorks, Mar 2000): Learn Extensible Stylesheet Language Transformations (XSLT) with this simple, hands-on exercise that demonstrates the principles of the XSLT.
- Process Atom 1.0 with XSLT tutorial (Uche Ogbuji, developerWorks, December 2005): Take a more in-depth look at XSLT and Atom.
- JavaScript hack: Download the original instructions and code from Sean M. Burke Web site.
- FeedBurner: If your RSS editor does not support stylesheets, you might want to sign with FeedBurner.
- developerWorks RSS feeds: Learn more about content feeds and add pre-defined or custom RSS and Atom feeds for developerWorks content to your site.
- IBM trial software: Build your next development project with trial software available for download directly from developerWorks.
Discuss
- Participate in the discussion forum.
- Atom and RSS forum: Find tips, tricks, and answers about Atom, RSS, or other syndication topics in this forum.
- XML zone discussion forums: Participate in any of several XML-centered forums.
- developerWorks blogs: Get involved in the developerWorks community.
Benoît Marchal is a Belgian consultant. He is the author of XML by Example, Second Edition and other XML books. You can contact him at bmarchal@pineapplesoft.com or through his personal site at. | http://www.ibm.com/developerworks/xml/library/x-wxxm37.html | crawl-002 | refinedweb | 1,873 | 63.9 |
auto update editRichard Suchenwirth shows how to set up a source file that automatically reloads itself into the interpreter every time you save it. Great development aid! Look for autoupdate on Braintwisters. There are some other cute ideas on that page, including dynamic variables and intgen which returns a new unique integer each time called.
re-source editJeffrey Hobbs posted on comp.lang.tcl that TkCon has a command tkcon resource that he uses all the time in development to reread the script in (it is extremely handy to not have to exit and restart wish). It not only resources in the main interpreter, but reinitializes the primary slave environments. TkCon is a gem in itself. If you don't have it, get it.
Reload a package editSee package require.
editing procedures editRichard Suchenwirth says: In my Tcl sandbox, I often don't even care to source a file at runtime, I typically rather
- display the full proc (name, args, defaults, body) into a text widget
- edit it (maybe just insert a [puts])
- eval the text widget's content, so the proc is redefined.
proc learnSelection {} { set s [selection get] uplevel \#0 eval [list $s] }DKF: Whyever not the following instead?
proc learnSelection {} { uplevel \#0 [selection get] }Sometimes you just have to wonder about people...
proc learnSelection {} {uplevel #0 [selection get]}is also equivalent - hashes matter only at beginning of command - RSunless you're using a syntax-sensitive editor like emacs or Source Navigator, which is not quite clever enough ...
new language constructs editTcl is almost unique in that you can create new language constructs if you really need them.
- For an example of adding a new loop control feature, take a look at do...until in Tcl, which also doubles as a do..while construct, if you happen to prefer that logic.
- A Java-style try ... finally ... has also been implemented.
- returneval lets you return from a procedure and evaluate a script "in its place".
- Very old (or very different) language constructs have also been considered, see Goto in Tcl, Basic in Tcl, RPN in Tcl, Playing APL.
Unknown possibilities editOne of the few Tcl rules says that the first word of a command is the command name, and you can even legally break that, allowing for constructs like
i = $j+1 ==== set i [expr $j+1] a ( 2 ) ==== lindex $a 2 {1 .. 5} ==== {1 2 3 4 5}without having procs i, a, 1 - that would be the known way. See Radical language modification, where this is done with the unknown command. Gadgets o.t.o.h. do "infix assignment" more orthodoxly, with "light-weight objects". Let unknown know if you want to build language modifications incrementally...
Tcl as OS glue editIn the amazing ETLinux project [1], an embedded system is developed that packs stripped-down Linux and Tcl (7.6, enhanced by mount, ifconfig, route, uuen/decode, dup, fork, kill, nice, ...) into a tiny 386SX/2 MB RAM/2 MB disk configuration. Tcl scripts do the init (first process at all), mail server, web server, ...
No-Op Prompts editDKF: When I'm running tclsh or wish interactively, I have my prompt set to the usual default of % and have the following little procedure in my .tclshrc (see "The RC File" for more on tclshrc)
proc % args {uplevel 1 $args}This lets me copy and paste whole lines of Tcl with the mouse triple-clicks without having to worry about stripping leading prompt symbols, so making it faster for me to edit sessions interactively (particularly when you want to change part way through an existing procedure definition.) Power-user type facility? You betcha! Useful? Certainly so for me. I just wish I could do this with other interactive scripting systems...mikeH: Note that you can also set your prompt to the semi-colon and then a space ; to achieve the same goal without doing extra evals etc.. this trick also works nicely for traditional sh style shells like bash
Swapping variables edit
foreach {a b} [list $b $a] break ;# RSor, maybe not a gem:
eval set a \{$b\} \; set b \{$a\} ;# Donald ArseneauThat eval only puts you in Quoting hell, DA. For it to work, you need to do
eval [list set a $b] \; [list set b $a]DKF: In 8.5 or with TclX, with lassign:
lassign [list $b $a] a b
Constants editConstants that shall be visible anywhere in the code must be global. Global variables need however be declared every time you use them, either with global Pi once per proc or ::Pi once per use. Or, you can proc'ify them, since procs are global too:
proc constant {name value} {proc $name {} [list return $value]} constant Pi 3.1415926535 puts "Pi = [Pi]"wdb: My approach:
proc pi args { expr 3.14159265359 $args } proc -pi args { expr -3.14159265359 $args }If I say, [pi / 4], the function returns correctly 0.785398163398.
Whole or in pieces editA proc may be called with either a variable number of non-list arguments, which is convenient interactively, or a list of such arguments (which is good if you have the list anyway - no need for eval), if you interpret the args parameter as follows:
proc lsum {args} { if {[llength $args]==1} {set args [lindex $args 0]} expr [join $args +]+0 } ;# RS lsum 1 2 3 lsum $a_list_I_got_from_somewhere
The canvas is a gem editAn analog clock in Tk is a beautifully simple and simply beautiful piece of canvas demo code.
Optional arguments editHD: Laurent Demailly has written the package opt, which looks very powerful and comprehensive, but I find the following approach very easy to implement and adequate for my needs.Let's say I start with a proc that takes a long list of arguments, each having a default value:
proc myProc {{arg1 dflt1} {arg2 dflt2} ... {arg9 dflt9}} {This requires the user to know the order of arguments, and it's a pain if you only want to specify a non-default argument for arg9. It would be nicer if we call myProc in this case as:
myProc -arg9 FredSo let's rewrite myProc as:
proc myProc {args} { # Set up default values array set the_args { -arg1 dflt1 -arg2 dftl2 (etc ...) -arg9 dflt9 } # Now overwrite with user-supplied values array set the_args $args # And off we go... puts "The ninth argument is $the_args(-arg9)" }In practice I usually have procs like this in a package:
package provide XYZ 1.1 namespace eval XYZ { variable Global array set Global {etc ...} } proc XYZ::myProc {args} { variable Global array set the_args [array get Global -*] array set the_args $args # Carry on as before }
Use profile with ?# command editChang LI [email protected]I defined the ?# command as
proc ?# {args} { if {[info exists ::tcl_platform(debug)]} { uplevel 1 eval $args } }US Very useful, but one round of evaluation too much. Should be:
proc ?# {args} { if {[info exists :tcl_platform(debug)]} { uplevel 1 $args } }? means the alternative comment. It was quite useful to add profile code and you do not need to delete them when you distribute the code later, just set the tcl_platform(debug) as 0.
set tcl_platform(debug) 1 ?# package require Tclx ?# profile -commands on # your tcl code here ?# profile off report ?# parray report
LISPlike lists editKevin Kenny emulates mutable lists built of cons cells with procs in Tcl and Lisp.
List Comparision editEd Suominen (a TCL beginner as of Sept. 2001) contributes the following simple but fast proc for comparing lists. The result are two lists containing items exclusively in one argument list or the other.
# PROCEDURE: LISTCOMP # CALLS: none # RETURNS: lists of unique elements in each list # USAGE: list1, list2 are the lists to compare. # outname1, outname2 are names of lists to use as the # output of the procedure # Copyright (c) 2001 Edwin A. Suominen, # This procedure code is freeware under the terms of the no-endorsement # version of the BSD License, incorporated herein by reference, with # <OWNER> = Edwin A. Suominen, <ORGANIZATION> = author, <YEAR> = 2001. # Inventors of any inventions that employ or are based on this # procedure retain their patent rights to such inventions. proc listcomp { list1 list2 out1Name out2Name } { ### Define empty lists in case one has no unique elements set out1 {}; set out2 {} ### Test each element of each list against all elements of other list foreach {i} $list1 {j} $list2 { # First, test for unique element in list1 if { [ lsearch -exact $list2 $i ] < 0 } { lappend out1 $i } # Then test for unique element in list2 if { [ lsearch -exact $list1 $j ] < 0 } { lappend out2 $j } } ### Put results in specified lists upvar #0 $out1Name x set x $out1 upvar #0 $out2Name x set x $out2 ### END LISTCOMP return }Setok Wouldn't it be better to sort the list first (Ologn) or build an array with the elements as keys? As far as I can tell the above can lead to On^2 time.Michael Schlenker You're absolutly right. See Some ways to do set comparision for some much faster alternatives (more than a magnitude faster). So perhaps this proc should be moved from the Gems section, especially because it returns incorrect results!
lshift for command-line argument and procedure args parsing edit
#========================================================== # NAME : lshift # PURPOSE : shift list and return first element # AUTHOR : Richard Booth # # [email protected] [email protected] # --------------------------------------------------------- # ARGUMENTS : # % inputlist # List to be shifted. # RESULTS : # * Sets inputlist to 2nd to last elements of original inputlist # * Returns first element in inputlist # NOTES : # * useful for command-line arguments and procedure args processing # EXAMPLE-CALL : # # while {[llength $argv] > 0} { # set arg [lshift argv] # switch -- $arg { # -lib {set lib [lshift argv]} # -show {set show 1} # default {lappend tests $arg} # } # } # #========================================================== proc lshift {inputlist} { upvar $inputlist argv set arg [lindex $argv 0] #set argv [lrange $argv 1 end] ;# below is much faster - lreplace can make use of unshared Tcl_Obj to avoid alloc'ing the result set argv [lreplace $argv[set argv {}] 0 0] return $arg }See also Stacks and queues. JCW - Here's a non-recursive directory walker based on it:
proc walkdirs {args} { set files {} while {[set dir [lshift args]] != ""} { foreach x [glob -nocomplain [file join $dir *]] { lappend [lindex {files args} [file isdir $x]] $x } } return $files }
RS considers this code by Lars H from Googol magnitude a true gem - it turns positive decimal integers (of any length) to bit strings: }APN Agreed. In Tcl 8.6, the %lb format specifier provides the same function.
RLH - or even the fact that Tcl programmers are better looking than most users of those other languages - Maybe I need to find another language then! ;-) | http://wiki.tcl.tk/918 | CC-MAIN-2017-51 | refinedweb | 1,739 | 60.24 |
I :)
Two.
I run into a problem today, here is the snippet:
var i = 101; console.log('101: ' + i.toString(2)); console.log('101 >> 1: ' + (i >> 1).toString(2)); var l = -101; console.log('-101: ' + l.toString(2)); console.log('-101 >> 1: ' + (l >> 1).toString(2));'
Output:
"101: 1100101" "101 >> 1: 110010" "-101: -1100101" "-101 >> 1: -110011"
Why
-101 >> 1 is
-110011 instead of
-110010?
Here is what I learned by reading Professional JavaScript for Web Developers:
When js stores a negative number, it does the following things:
- get the binary representation of the absolute value of the negative number
- replace 0s with 1s and 1s with 0s
- add 1 to the result of step 2
So in my case -101 >> 1, we first convert -101 to its binary representation:
The binary representation of Math.abs(-101) is:
0000 0000 0000 0000 0000 0000 0110 0101
invert the 0s and 1s:
1111 1111 1111 1111 1111 1111 1001 1010
add 1 to the end:
1111 1111 1111 1111 1111 1111 1001 1011
Now, shift it to the right by 1:
1111 1111 1111 1111 1111 1111 1100 1101
The binary above should be the correct result of -101 >> 1, but when logging a negative number's binary representation, Javascript simply puts a negative sign in front of the binary representation of the positive number:
var x = 15; console.log(x.toString(2)); // output: 1111 var y = -15; console.log(y.toString(2)); // output: -1111
For our example, this means that when logging the result of -101 >> 1, JS will output minus sign + the binary representation of the positive number. But the positive number is not 101 >> 1 because 101 >> 1 gives you:
(101 >> 1).toString(2); // output: 110010 (-101 >> 1).toString(2); // output: -110011, not -110010!
To get the correct result, we have to reverse the aforementioned step 1-3:
1111 1111 1111 1111 1111 1111 1100 1101 // this is the result we get from step 4
Reverse step 3 by subtracting 1, we get:
1111 1111 1111 1111 1111 1111 1100 1100
Reverse step 2 by invert 0s and 1s:
0000 0000 0000 0000 0000 0000 0011 0011
Reverse step 1 by converting this binary to integer:
parseInt(110011, 2); // output: 51
Finally, when JS logs the result of -101 >> 1, it should be in the format of minus sign + the binary representation of 51:
(51).toString(2); // output: 110011 (-101 >> 1).toString(2); // output: -110011.
I :)
Found an excellent snippet here by monikkinom, which compresses all uploaded images to jpg before saving to disk:
from PIL import Image as Img import StringIO class Images(models.Model): image = models.ImageField() def save(self, *args, **kwargs): if self.image: img = Img.open(StringIO.StringIO(self.image.read())) if img.mode != 'RGB': img = img.convert('RGB') img.thumbnail((self.image.width/1.5,self.image.height/1.5), Img.ANTIALIAS) output = StringIO.StringIO() img.save(output, format='JPEG', quality=70) output.seek(0) self.image= InMemoryUploadedFile(output,'ImageField', "%s.jpg" %self.image.name.split('.')[0], 'image/jpeg', output.len, None) super(Images, self).save(*args, **kwargs)
According to the official doc, the parameters for the
InMemoryUploadedFile are:
file, field_name, name, content_type, size, charset, content_type_extra=None
Bonus
If you want to scale the image and still keeps its aspect ratio:
new_width = 800 img.thumbnail((new_wdith, new_width * self.image.height / self.image.width), Img.ANTIALIAS)
To | https://cheng.logdown.com/?page=3 | CC-MAIN-2022-05 | refinedweb | 563 | 54.32 |
So... is anyone here part of the Microsoft Creators Program?
I applied and was accepted to the beta program.
I've been working on updating my XNA game as a Monogame UWP, and I think I have it all working. On my local PC, at least. I don't own an Xbox One, but I feel like I'm in the right position to buy one, put it in Developer Mode, and see if my game works on it.
If anyone else has been accepted to the MSCP, I'd love to hear from you, where you are in your setup, etc. There appears to be a lot of "paperwork" to fill out on the developers site. Ratings, metadata, screenshots, etc.
Or, any other general discussion regarding the MSCP, as I don't see a thread about it yet.
I'm a little new here, so I hope this isn't inappropriate. Thanks!
So, first question...
The SDK comes in configuration for both UWP and XDK, for both C++ and WinRT. If you are part of the Xbox Live Creator’s Program, you can only use C++ for UWP. If you want to use C#, you must use Unity.
So... using Monogame, how do I access the Xbox Live SDK if I can only use the C++ one?
There was mention of a transpiler, Konaju or Tom should drop by here shortly and let you know where to go...
I wanted to join but not having an Xbox at present nor a game in development, felt a bit too early to sign up... however I am happy to learn anything possible in advance so just tagging in here...
Can you PInvoke it ?
The C#-to-C++ transpiler is for games using the XDK. UWP-based games still use C# and the .NET runtime.
Where did you see "you can only use C++ for UWP"? Post a link?
I found it.
I'm not sure why they say that. I don't have a Xbox One myself, and haven't touched UWP apps yet. The Xbox Live Creators Program is only a week old, so we're still learning as well.
I'm not part of the Creators Program yet, but I do have a Dev Center account and I've been deploying and testing UWP apps to my Xbox One for the last few days. So I might be able to answer some questions about Monogame UWP apps on the Xbox One, but this is all very new to me as well.
If you'd like to read about the deployment process and some issues I've encountered, check out the Xbox One Development section in this guide I've been working on:
Hope this helps.
THANK YOU!
That's such a relief!
I should add, using the UWP Share feature will fail your app, this happened to my apps recently and as almost all my current apps make use of this feature, I had to skip XBOX as a platform, hmm I do have one that does not use such a feature, will try to add it to the supported platform list and see if it throws any other fails on my next update...
But as long as you do not use the share charm feature, you should not have that issue.
Well, like I said, I have my XNA-to-UWP game up and running on my Windows 10 PC. Maybe I'm ahead of the curve here.
Visual Studio 2015 is all linked up with my dev accout/store stuff, and I can upload the package to the dev site. I've filled in all the metadata stuff that I can, and everything appears to pass muster. That is, the package is verified when I create a package locally, as well as when I upload it.
My assumption is that I'll get my Xbox set up, put it into Dev Mode, and be able to deploy my game to it and see it work.
After that, it would be nice to fold in some Xbox Live features. Really the only thing I would like would be to get the Gamertag for each player... I currently have no idea how to do that, or if it's a show-stopping requirement.
I'll keep you updated.
There is a comparison list here:
Scroll down a little...
Alternatively, you could create your own system using a custom ASP.NET app... but how that works on XBOX, I do not know... don't EA and the lot use their own user services? or do they integrate XBLive members and cross reference everything or something?
Worth investigating...
EDIT
Found the below
I am assuming WinRT means C#... No idea...
Now, when it says...
Multiplayer Not supported for the Creator's Program...
Does it disable certain network access? hmm...
I think I need an Xbox at some point... perhaps Project Scorpio...
It would be really great if this could be integrated with the built-in XNA namespaces (if I recall correctly there was this Guide class that could be used and worked at some point with the iOS Game Center - not sure if it is still there).
I found a solution. Microsoft has a C# version of the Xbox Live API. It works like the C++ version:
We should promote that repo.
How?
Ta ta daa da da dee dum... [Well had to add more text...]
Nice. Thanks for the link to the C# Xbox Live SDK. I'll give that a shot.
In other news, my Xbox One arrived last night. I didn't have a whole lot of time, but I was able to put it into Developer Mode and get my game deployed. As far as I can tell, it ran perfectly! So that feels like pretty good news. It's not bad news, at least.
I still need to get myself familiar with the Xbox dev stuff. There's a lot of settings for Portals and Sandboxes and test accounts and stuff that I don't wholly understand. But I'm splashing around.
this github project is no longer available. Is there a replacement?
EDIT: ok, it seems to be nuget-based now.
Can you link the NuGet here?
In the nuget manager in VS, just type "xbox live" and the packages appear. EDIT: Creating an XboxLiveUser crashes though, so I don't know how good my "solution" is
The NuGet is Microsoft.Xbox.Live.SDK.WinRT.UWP
Open your MonoGame project in VS17 and do the following:1/ Right click references2/ Click Manage NuGet packages3/ Click Browse4/ Type Xbox in the search and choose Microsoft.Xbox.Live.SDK.WinRT.UWP, then click install5/ Close the solution and open it again6/ Click to expand References and click the Visual C++ reference with a yellow exclamation mark to clear it
To confirm this has worked type Using Microsoft.Xbox.Services; at the top of your project (better to type instead of paste so you can see if the autopredict can see the reference)
You will also need Microsoft.Xbox.Services.System
To pass certification at a minimum you will need to have a signed in user and display the gamer tag
Now that you have the Xbox library working you can declare the variable XboxLiveUser user;
Unfortunately this is as far as I managed to get
When I do user = new XBoxLiveUser(); in the initialize section I get a stack overflow errorWhen I put it in the game loop (using a Boolean to ensure it is called once) the game crashes out with no error
This website describes the process but in another language
Another tip is Unity comes with VS17 and it has the Xbox cs files (in C#) accessible. Follow this link
And then do the "Import plugin" step
Go to the Assets\Xbox Live\Scripts path of your project and you'll see files written in C# for users, profiles, leaderboards etc...
Some of it is hard to read as it is unity language. Others require jumping from one file to another as you attempt to follow what is going on. It will take some time for somebody to reverse engineer this and adapt it to monogame. Particularly as simply calling a new XboxLiveUser() doesn't seem to be working in three different spots I have tried
I hope I have given somebody a boost in the right direction but I really hope somebody can work this out and document it for the MonoGame community
I should be a simple step by step process that shows the minimum code required and where it needs to be implemented (just like the link I posted with the javascript version)
Thanks in advance
actually I'm at the same point as you: crash creating the XBoxLiveUser. Will report if I find a solution.
There are some extra steps I've done. I've created a new app in the Windows Store devcenter and created a sandbox to test, activated it, and added xbox live accounts .( it is very well explained in the first steps of this tutorial: here . )
However I'm unable to log in with the test account in the XBox once I change the sandbox, so I can't test the game in the new sandbox at all.
(my "theory" was that once the sandbox is created, the user creation would no longer crash, but I'm not sure of that because a function call should never fail like that, even if it hasn't permisions to use XBox Live SDK) | http://community.monogame.net/t/monogame-xbox-one-and-the-creators-program/8847?u=mrvalentine | CC-MAIN-2020-10 | refinedweb | 1,595 | 79.8 |
#include <LiquidCrystal.h>LiquidCrystal lcd(13, 12, 8, 7, 4, 2);#define LED 3void setup() { pinMode(LED, OUTPUT); lcd.begin(16, 2); lcd.print("Hello, world!");}void loop() { analogWrite(LED, 127); lcd.setCursor(0, 1); lcd.print(555);}
1) Why not using a pot? It's not an option.
Did you have a current-limiting resistor with the LED on pin 3?
Quote1) Why not using a pot? It's not an option. It is an option if you have an LCD module that works well with 0v on pin 3 (like many of mine).QuoteDid you have a current-limiting resistor with the LED on pin 3?This is more likely to be the problem.Don
Thanks for all the answers. The problem went away after I connected a series resistor to the LED. However, I'm curious to know an explanation of what actually happened, electrically speaking. Does having a high current on one digital pin affect the logical output of another digital pin? Perhaps this is due to too much current going through that pin so that there is not enough left for the other pins for accurate signalling?Today is my second day experimenting with an Arduino, and I've learned that you actually need to limit the current to an LED as the pins don't have built-in resistors, not only to avoid damaging the components, but also to avoid weird behavior.
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=69190.msg512629 | CC-MAIN-2016-30 | refinedweb | 273 | 68.06 |
local provider missing network resources under /proc
Bug Description
When create ubunut machine with juju local provider lots of settings under /proc/sys/net/core are missing.
For example, rmem_max. It prevents our application fails to start.
It happens on some my Linux host systems only.
When I work with HPCC charm Matt Bruzek, who help us view the charm also catch this on his system.
I also opened discussion thread at Ubuntu One (Ask): http://
Well, it might just be part of the required isolation by namespaces.
In fact there is way more missing in /proc/sys comparing a Host to a Container.
sudo sysctl -a | wc -l gives me:
a) # 669 in a wily Container
b) # 1335 in Trusty Host
Since a) and b) essentially share the kernel it should be the same (without isolation).
In fact we can create a List what is missing.
=> missing-
When you first look at it you might think it is a lot, but the major contributor are virtual devices like bridges existing in the Host, but not in the guest.
After cleaning those out of the files the missing lines like this:
=> cleaned-
This was done on an LXD container, I also tried a privileged LXC container but that made no difference.
Still everything that is missing are just network tunables, so it might just be "intended" for network isolation due to network namespaces.
I'm not yet deep into that, but I hope the identified lists help to shorten the bug handling.
Nothing LXC can do about this, the list of files available entirely depends on the kernel and what's namespace aware in the Linux kernel.
This is LXC issue and nothing to do with juju-core/local. I can re-produce it in LXC only environment. This ticket can be closed. We will address the issue with LXC. | https://bugs.launchpad.net/lxc/+bug/1288969/+index | CC-MAIN-2020-10 | refinedweb | 310 | 71.44 |
TimerSettime(), TimerSettime_r()
Set the expiration time for a timer
Synopsis:
#include <sys/neutrino.h> int TimerSettime( timer_t id, int flags, const struct _itimer * itime, struct _itimer * oitime ); int TimerSettime_r( timer_t id, int flags, const struct _itimer * itime, struct _itimer * oitime );
Arguments:
- id
- The ID of the timer whose an expiration date you want to set, as returned by TimerCreate() .
- flags
- The only supported flag is TIMER_ABSTIME. If specified, then nsec represents an absolute expiration date in nanoseconds from the Unix Epoch, 00:00:00 January 1, 1970 UTC. If the date specified has already passed, then the expiration event is delivered immediately.
If the flag isn't specified, nsec represents a relative expiration period that's offset from the given clock's current system time in nanoseconds.
- itime
- A pointer to a _itimer structure that specifies the expiration date. For detailed information, see Expiration date, below.
- oitime
- NULL, or a pointer to a _itimer structure where the function can store the interval timer period (i.e. previous amount of time left before the timer was to have expired), or zero if the timer was disarmed at the time of the call. The previous interval timer period is also stored in the interval_nsec member.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The TimerSettime() and TimerSettime_r() kernel calls set the expiration time of the timer specified by id.
These functions are identical except in the way they indicate errors. See the Returns section for details.
Expiration date
The expiration is specified by the itime argument. The _itimer structure contains at least the following members:
- uint64_t nsec
- The expiration time to set.
- uint64_t interval_nsec
- The interval reload time.
If the nsec member of itime is zero, then the timer is disarmed.
If the interval_nsec member of itime is nonzero, then it specifies a repeat rate which is added to the timer once the nsec period has expired. Subsequently, the timer is automatically rearmed, causing it to become repetitive with a period of interval_nsec.
If the timer is already armed when you call TimerSettime(), this call discards the previous setting and sets a new setting.
If the event notification specified by TimerCreate() has a sigev_code of SI_TIMER, then at most one event is queued. In this case, if an event is pending from a previous timer when the timer fires again, a timer overrun occurs. You can use the TimerInfo() kernel call to obtain the number of overruns that have occurred on this timer.
Blocking states
This call doesn't block.
Returns:
The only difference between these functions is the way they indicate errors:
- TimerSettime()
- If an error occurs, the function returns -1 and sets errno. Any other value returned indicates success.
- TimerSettime_r()
- Returns EOK on success. This function does NOT set errno. If an error occurs, it can return any value listed in the Errors section.
Errors:
- EINVAL
- The timer specified by id doesn't exist.
- EFAULT
- A fault occurred when the kernel tried to access itime or oitime. | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/t/timersettime.html | CC-MAIN-2020-34 | refinedweb | 467 | 63.9 |
Persistent EEM variables
Someone asked me a while ago whether it's possible to retain variable values between invocations of an EEM policy. Since a new copy of the Tcl interpreter is started for each event, global variables obviously won't work; they are lost as soon as the Tcl policy finishes. A potential solution is to modify the router's configuration and save the values you wish to preserve as event manager environment variables, but that's a time-consuming process that interferes with whatever router configuration management process you have.
The real solution is based on the appl_setinfo and appl_reqinfo calls. They work, but like many other Tcl-related IOS features they are … well … weird. This time, the programmers managed to implement WORO (Write-Once-Read-Once) memory:
- The value you want to preserve is saved with the appl_setinfo key name data value function call. Keys must be unique; you can set a key only once. If you try to set the value of a key multiple times, the function does not overwrite the previous value but fails.
- You can read the value with the appl_reqinfo key name function call. If the key value hasn't been set, it returns an empty string and sets the $_cerrno variable; otherwise it returns a list with 'data' as the first element and your value as the second list element (I have to admit I've seen simpler APIs :).
- Once you read the key value, it's gone. You cannot read it twice.

If all this sounds a bit strange, don't worry; here's a working example:
::cisco::eem::event_register_cli sync no skip no pattern "show"

namespace import ::cisco::eem::*
namespace import ::cisco::lib::*

# Set the variable value to zero (in case we haven't saved the value
# before) and read the previous value
set lastCnt 0
set getLastCnt [ appl_reqinfo key "showCounter" ]

# If the first element in the list is 'data', then the second element
# is our value.
if { [ lindex $getLastCnt 0 ] == "data" } {
  set lastCnt [ lindex $getLastCnt 1 ]
}

# Increase the counter and generate a syslog message
incr lastCnt
action_syslog priority info msg "Show command was executed $lastCnt times"

# Save the new value of the counter to be retrieved by the next
# invocation of the same policy.
appl_setinfo key "showCounter" data $lastCnt
This article is part of the You've asked for it series.
Several techniques can be used to find and isolate native code memory leaks. In general there is no single ideal solution for all platforms. The following are some techniques to diagnose leaks in native code.
Track All Memory Allocation and Free Calls
Track All Memory Allocations in JNI Library
Track Memory Allocation with Operating System Support
Find Leaks with dbx Debugger
- Find Leaks with libumem Tool

The dbx debugger's runtime checking can also detect accesses to uninitialized memory or accesses to freed memory; see Find Leaks with dbx Debugger.

Not all these types of tools will work with Java applications that use native code, and usually these tools are platform-specific. Because the virtual machine dynamically creates code at runtime, these tools can wrongly report leaks. Also note that the libumem tool requires Oracle Solaris 9 operating system update 3 or later; see Find Leaks with libumem Tool.
If you write a JNI library, then consider creating a localized way to ensure that your library does not leak memory, by using a simple wrapper approach.
The procedure in Example 3-7 is an easy localized allocation tracking approach for a JNI library. First, define the following lines in all source files.
Example 3-7 Define this Procedure in Source Files
#include <stdlib.h>
#define malloc(n) debug_malloc(n, __FILE__, __LINE__)
#define free(p) debug_free(p, __FILE__, __LINE__)
Then you can use the functions in Example 3-8 to watch for leaks.
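A sketch of the kind of debug_malloc() and debug_free() functions that Example 3-8 refers to. The bookkeeping here (a header prepended to each block, tracked on a singly linked list) and the reporting format are illustrative assumptions, not Oracle's original listing:

```c
#include <stdio.h>
#include <stdlib.h>

/* Header prepended to every allocation so that frees can be matched up. */
typedef struct AllocInfo {
    struct AllocInfo *next;
    const char *file;
    int line;
    size_t size;
} AllocInfo;

static AllocInfo *alloc_list = NULL;

void *debug_malloc(size_t size, const char *file, int line)
{
    AllocInfo *info = malloc(sizeof(AllocInfo) + size);
    if (info == NULL)
        return NULL;
    info->file = file;
    info->line = line;
    info->size = size;
    info->next = alloc_list;
    alloc_list = info;
    return info + 1;                  /* caller sees only the payload */
}

void debug_free(void *p, const char *file, int line)
{
    if (p == NULL)
        return;
    AllocInfo *info = (AllocInfo *)p - 1;
    AllocInfo **cur = &alloc_list;
    while (*cur != NULL && *cur != info)
        cur = &(*cur)->next;
    if (*cur == NULL) {               /* not ours, or freed twice */
        fprintf(stderr, "bad free at %s:%d\n", file, line);
        return;
    }
    *cur = info->next;                /* unlink, then release for real */
    free(info);
}

/* Call at exit: anything still on the list was never freed. */
void debug_report_leaks(void)
{
    const AllocInfo *i;
    for (i = alloc_list; i != NULL; i = i->next)
        printf("leak: %zu bytes from %s:%d\n", i->size, i->file, i->line);
}
```

With the macros from Example 3-7 in place, every malloc() and free() in the library is routed through these functions, and debug_report_leaks() can be called from an exit handler to list the outstanding allocations.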
Example 3-8 Functions to Watch for Leaks

Most operating systems include some form of malloc debug support. The Microsoft C++ compiler has the /Md and /Mdd compiler options that will automatically include extra support for tracking memory allocation.

Linux systems have tools such as mtrace and libnjamd to help in dealing with allocation tracking.

The Oracle Solaris operating system provides the watchmalloc tool. Oracle Solaris 9 operating system update 3 also introduced the libumem tool; see Find Leaks with libumem Tool.
The dbx debugger includes the Runtime Checking (RTC) functionality, which can find leaks. The dbx debugger is part of Oracle Solaris Studio and is also available for Linux.

Example 3-9 shows a sample dbx session.
Example 3-9 Sample dbx Session

Example 3-9 used two suppress commands to suppress the leaks reported in the virtual machine, libjvm.so, and the Java support library, libjava.so.
The libumem tool was first introduced in Oracle Solaris 9 operating system update 3. To use it, you must preload the libumem library and set an environment variable, as shown in Example 3-10.
Example 3-10 Set an Environment Variable for libumem
$ LD_PRELOAD=libumem.so
$ export LD_PRELOAD
$ UMEM_DEBUG=default
$ export UMEM_DEBUG
Now, run the Java application, but stop it before it exits. Example 3-11 uses truss to stop the process when it calls the _exit system call.
At this point you can attach the mdb debugger, as shown in Example 3-12.

The ::findleaks command is the mdb command to find memory leaks. If a leak is found, this command prints the address of the allocation call, the buffer address, and the nearest symbol.
Analyzing news headlines across the globe with Kimono and MonkeyLearn
The news is probably one of the first things people check in the morning, but how much does what you know and understand about the world depend on your news source? Will you view the world differently if you head over to CNN instead of BBC? Tools like MonkeyLearn and Kimono allow us to gather and analyze the text we encounter in our daily lives and determine what that text says behind the words – in this case, it turns out that different news sources report on different news with different weighting.
We analyzed the front page articles of five different major global news outlets – Fox, CNN, China Daily, Al Jazeera (English), and the BBC. We amassed the data and then used MonkeyLearn’s text analysis, a combination of entity extraction – a process that recognizes text corresponding to important people, places, and organizations – news classification and keyword extraction.
In this post, we share a few insights, and explain how to replicate our process so that you can analyze any text on the web. This analysis is by no means conclusive, as it is based on just one day of data but instead demonstrates the types of insights you can get when you use smart tools like Kimono and MonkeyLearn.
Heatmap of top 20 people discussed by source:
Heatmap of top 20 organizations discussed by source:
Wordmap of news:
We did this in three steps:
1. Set up Kimono APIs for top news sites
2. Format our kimono API output
3. Upload data to MonkeyLearn and run it through the appropriate classifiers
1. Set up Kimono APIs for top news sites
For each news source, we created one ‘Headlines API’ to scrape the links to the articles on the front page, and fed those links into a secondary ‘Articles API’ to crawl the full article text behind each link. You can read more about how to do this in this blog post, or you can watch this video tutorial. Setting up this two step crawl enables you to quickly scrape a broad set of content.
Here’s a sample of our unprocessed output from our BBC Articles API (this is the content behind one of the BBC front page articles):
{ "article_text": "Police in Haiti have clashed with anti-government protesters angry about the high cost of fuel." }, { "article_text": "Several people were injured as police moved in to clear roadblocks set up in the capital, Port-au-Prince." }, { "article_text": "Protest organisers said that drivers who ventured into the city centre during the two-day protest would be putting their lives at risk." }, { "article_text": "Haiti has seen months of protests against President Michel Martelly over delayed elections." }, { "article_text": "\"Because of the price of fuel, the cost of living is going up,\" said Ralph La Croix, a Port-au-Prince resident." },
To format this data to be compatible with MonkeyLearn, we need to format these ‘article_text’ properties into a single string.
2. Format our kimono API output
To format the data you can use your language of choice (or you can write a javascript function directly in the kimono web interface).
In this scenario, we downloaded the data, stitched the article_text properties for each API into one string and pushed it to MonkeyLearn. Here is the code repository along with an IPython Notebook – we suggest you clone the repository and follow the example to see how we formatted the data and called the MonkeyLearn classifiers to analyze the text.
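If you don't want to open the notebook, the heart of that formatting step is just stitching the per-paragraph 'article_text' fields into one string (field name taken from the sample output above; the space separator is our choice):

```python
import json

def articles_to_text(records):
    """Stitch the per-paragraph 'article_text' fields into one string."""
    return " ".join(r["article_text"].strip() for r in records)

# A two-record excerpt shaped like the BBC Articles API output above
raw = """[
  {"article_text": "Police in Haiti have clashed with anti-government protesters."},
  {"article_text": "Several people were injured as police moved in."}
]"""

print(articles_to_text(json.loads(raw)))
```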
3. Upload data to MonkeyLearn and run it through the appropriate classifiers
Here are links to each of the kimono APIs we used, if you would like to get the data manually:
CNN API JSON Endpoint
China Daily API JSON Endpoint
Al Jazeera API JSON Endpoint
BBC API JSON Endpoint
FOX API JSON Endpoint
Here is an excerpt of code from the IPython notebook that shows you how to run the MonkeyLearn classifiers on the formatted data:
import requests
import json

# We will store all of our results here. The first list will contain news
# classification, the second extracted entities, and the third will hold
# any extracted keywords. Positions will correspond to the position of the
# original piece of news in the objects variable.
partials = [[], [], []]

while src_data:
    # We will classify data in chunks of BATCH_SIZE.
    items = src_data[:BATCH_SIZE]

    # Prepare the data and headers for our request
    data = {'text_list': items}
    headers = {'Authorization': 'Token {0}'.format(API_KEY),
               'Content-Type': 'application/json'}

    # We will call the classifiers and extractors here, and then add the
    # result to the partials list.
    for i, url in enumerate([NEWS_CLASSIFIER, ENTITY_EXTRACTOR, KEYWORD_EXTRACTOR]):
        response = requests.post(url, data=json.dumps(data), headers=headers)
        result = json.loads(response.text)['result']
        partials[i] += result

    # Remove the classified items from the list of items.
    src_data = src_data[BATCH_SIZE:]

# Let's zip the original news with the data MonkeyLearn returned. We will
# end up with a list containing tuples that look like:
# (news text, news categories, extracted entities, extracted keywords)
data = zip(objects, *partials)
By running this scraped data through MonkeyLearn’s categorizers and extractors, we are able to analyze and graph this data to generate different insights. See a few samples below, or view more in the Notebook.
Insights:
Using this process, we garnered some preliminary insights on these news outlets, such as:
- Only 1 person and 4 organizations were consistently mentioned across all globally outlets – Barack Obama, Congress, the White House, Facebook and the United Nations
- Al Jazeera and Fox had the highest overlap – 98 terms; while Al Jazeera and the BBC had the lowest overlap – 44 terms
- Mike Pence, the infamous Indiana Governor who passed a controversial ‘religious freedom law’ in Indiana made the news in all outlets except China
- NCAA mentions only appeared on CNN – with March Madness entering the final week.
- Boko Haram appears consistently only on Fox and Al Jazeera (we ran this analysis for longer, and this is consistent across months)
- Bowe Bergdalh, the once-missing American soldier charged with deserting his unit appeared significantly more often (10x) in CNN than Fox, and was not mentioned outside the US
- The corruption charges against Governor Menendez of New Jersey (D) had a high number of mentions, but appeared exclusively on Fox and the BBC, not CNN.
We’d need to run a longer analysis to draw strong conclusions, but it is interesting to note that in this analysis, like our past news analyses – it seems that negativity wins in reporting — CNN vs Fox spend more time reporting negative content relating to the Democratic political party than they do on positive content on the Republicans . | http://blog.kimonolabs.com/blogs-about-kimono/page/2/ | CC-MAIN-2017-47 | refinedweb | 1,097 | 51.41 |
UnknownHostException:
The UnknownHostException occurs when a Java program uses a socket to create a connection to a remote host. There are two main reasons for this exception to occur.

1. This exception is usually thrown when the DNS (Domain Name Server) is unable to translate the host name into a valid IP address. This could be due to a problem with the DNS server, or the host name that you specified may be invalid.

2. The second reason is that the connection may be unavailable for some reason.
Example:
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.util.Scanner;

public class SocketTest {
    public static void main(String[] args) {
        try {
            Socket s = new Socket("net01.example.blospot.in", 2034);
            try {
                InputStream istream = s.getInputStream();
                Scanner in = new Scanner(istream);
                while (in.hasNextLine()) {
                    String line = in.nextLine();
                    System.out.println(line);
                }
            } finally {
                s.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
In the above code, the UnknownHostException would occur when the socket tries to connect to the remote host, since the host name that I have been trying to connect to is not available.

So if you come across this error, first ensure that you have given a valid address. If you indeed entered a valid address, then check whether the connection is available.
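One quick way to tell the two causes apart is to try resolving the host name on its own before opening the socket. The class and host names below are just illustrative; InetAddress.getByName() throws UnknownHostException exactly when the name cannot be translated:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    // Returns true only if DNS (or the hosts file) can translate the name.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(resolves("localhost"));            // true
        System.out.println(resolves("no-such-host.invalid")); // false
    }
}
```

If resolves() returns true for your host, name resolution is fine and the problem is the connection itself (the second cause above).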
VS.NET macros are small VB.NET functions that group together one or more actions that manipulate the development environment using the VS.NET automation object model. VS.NET makes it easy to create and use macros in a way that does not interfere with the way you develop your software: macro projects operate entirely independently of VS.NET solutions. Once you have created a macro, you can then make it available on a menu or toolbar for easy access.
The easiest way to get started using macros is to use the macro recording functionality built into VS.NET. With macro recording, you use the IDE in the normal way, but VS.NET will record all of the actions you perform and save them in a macro.
As an example, consider the common task of changing a project's default HTML layout from Grid to Flow. (See Chapter 2 for information about the HTML designer and layout issues.) Since this is a common but slightly awkward task, it would be nice to have an automated way to set the value to Flow. This is a perfect job for a macro.
To record a macro, go to Tools Macros Record TemporaryMacro (Ctrl-Shift-R). Selecting this menu item brings up a small recorder toolbar with three buttons, one to pause recording, one to cancel the recording, and one to stop recording and generate a macro from the recorded operations. After starting the recording, you can just go through the motions of the task you'd like to record. When you have finished, press the Stop Recording button (Ctrl-Shift-R). (In this example, we are changing the project default HTML layout, so we would go to the Project Properties dialog box, go down to the Designer Defaults node, and change the layout. Once finished, we would press the Stop Recording button.)
To execute your newly recorded macro, go to Tools → Macros → Run TemporaryMacro (Ctrl-Shift-P). Whenever you ask VS.NET to record a macro, it creates a temporary macro called TemporaryMacro to store the results. It will not save this macro unless you tell it to, so each time you record a new temporary macro, you will be destroying the previous one you recorded.
To store a recorded macro permanently, use Tools → Macros → Save TemporaryMacro. This will display the Macro Explorer window, which is shown in Figure 8-3, and will give you an opportunity to rename your macro. (You must rename it in order to save it; merely selecting the Save TemporaryMacro item is not enough.)
The Macro Explorer lets you see all the macros on your system. (You can display the Macro Explorer using View Other Windows Macro Explorer or with Alt-F8.) To run a macro from the Macro Explorer, you can either double-click it or right-click on it and select Run from the context menu. You can rename and delete macros from this menu. The menu also allows you to edit a macro, which is useful, because even when you create macros by recording them, you will often need to make a few modifications to the generated macro. When you choose to edit a macro, VS.NET will open the macro IDE.
The macro IDE can be invoked via Tools Macros Macros IDE (Alt-F11), or by choosing to edit a macro in the Macro Explorer. The macro IDE looks very much like a trimmed-down version of the VS.NET IDE, as Figure 8-4 shows.
The Project Explorer window (which is on the left side of the IDE by default) shows all of the macro projects that VS.NET is currently configured to use. (See the next section, Section 8.2.3 for information on how VS.NET manages the files for these projects.) The editor is the normal VB.NET editor, so editing macros works in exactly the same way as writing VB.NET code in the main IDE.
Each macro project contains "files" (although in reality all of the "files" shown are typically contained in a single binary file). When you want to add a new macro, you can either edit an existing code file or add a new one (File → Add New Item). When you add a new file, you get three choices: a module, a class, or a source file. The only difference between the three is the declarations VS.NET places in the new file. A module contains a module declaration, a class file contains a class declaration, and the source file option creates an empty file.
VS.NET stores your macros in one or more macro project directories. There is one macro project directory for each item listed under the Macros node in the Macro Explorer (Figure 8-3). These are entirely unrelated to normal VS.NET projects and solutions.
By default, macro project directories will be in either a VSMacros or a VSMacros71 directory underneath your My Documents\Visual Studio Projects directory. (You can place macro project directories wherever you likethese are just the default locations.) You will normally find two macro project directories hereMyMacros, which is intended for your own use, and Samples, which contains a set of example macros.
By default, VS.NET will put newly recorded macros in the MyMacros project. You can select a different project by right-clicking on the project in the Macro Explorer and selecting Set as Recording Project.
Macro project directories typically contain just one file, ProjectName.vsmacros, where ProjectName is the same as the containing directory name. The .vsmacros file is a COM structured storage file that contains all of the source files for the macro project.
You can have VS.NET store each of the source files for a project separately, instead of lumping them all into one structured storage file. (This would be a good idea if you wanted to place your macros into a source control system. However, you're on your own if you want to do thatVS.NET offers no integrated support for revision control of macros.) If you select the project in the Macro Explorer, the Properties panel (F4) will show a Storage Format property. By default, this is set to Binary (.vsmacros) but changing it to Text (UNICODE) will cause VS.NET to store the project as a collection of files instead of one single binary file.
Macro projects are not associated with VS.NET projects or solutions. VS.NET stores the list of macro projects in a per-user section of the registry:
HKCU\Software\Microsoft\VisualStudio\7.1\vsmacros
If you want to share your macro with someone else, you can export one of the individual files by right-clicking on it in the Project Explorer in the macro IDE, and selecting Export Filename.... This will export the macro file as a .vb file. Another developer can then import the macro on her copy of VS.NET using File → Add Existing Item, in the macro IDE. Or you can just email someone the text of the macro, and she can add it to her system using cut and paste.
Although many tasks can be recorded as macros, often you will want to edit a recorded macro to extend its functionality beyond what was initially recorded. For example, you may wish to add looping or conditional execution into your macro. Also, it is not uncommon for macro recording to miss steps (some actions, such as typing data into a dialog box, are not recordable), so recorded macros often require a little tweaking.
Example 8-10 shows the macro that we recorded earlier to change a project's default HTML designer layout property from Grid to Flow. It is typical of recorded macros, in that it needs a little work before it will be useful.
Option Strict Off
Option Explicit Off
Imports EnvDTE
Imports System.Diagnostics

Public Module RecordingModule
    Sub TemporaryMacro()
        DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate()
        DTE.ActiveWindow.Object.GetItem("NSChange\NSChange").Select( _
            vsUISelectionType.vsUISelectionTypeSelect)
        DTE.Commands.Raise("{5EFC7975-14BC-11CF-9B2B-00AA00573819}", 397, _
            Customin, Customout)
        DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate()
    End Sub
End Module
The first problem with this macro is that it is not very general purpose: it selects a particular project ("NSChange\NSChange"). Moreover, the part of the macro that does the actual work is hard to decipher: the DTE.Commands.Raise call is a generic method for invoking commands, and anybody who wanted to work out what this macro does by looking at it would have a hard time interpreting the command's GUID and ID. (See the sidebar, Interpreting Command GUIDs and IDs, for notes on how to do this.) But worst of all, the macro didn't record the actual change we were trying to make in the properties; it just activated the properties dialog window. (This illustrates the problem with that impenetrable Raise method: it is wholly unobvious that the command being invoked happens to be the one that opens the Project Properties dialog.)
In all, this recorded macro is not very helpful. The success you will have with recorded macros depends on what you are trying to do. In general, they don't work at all well for anything involving dialogs. For most other kinds of user interface activity, they fare rather better though.
The best approach when using macro recording is usually to use the recorded macro as a starting point for a new macro. Your final macro will probably look quite different, but the recorded macro may provide a quick path to learning how the object model works for a particular action.
So how do we fix the rather pointless macro in Example 8-10? The macro recorder leaves us in the lurch when it comes to project properties. To fix the code, we must use the Project object's Properties property, as we did in Example 8-2. This is a collection of Property objects that represent the project properties.
The exact set of properties that you will find in the Properties collection will depend on the project type. However, it is straightforward to write code that just ignores projects that do not have the property you are looking for. (As mentioned in Section 8.1.1.2 earlier, the VS.NET documentation describes the set of properties available for each object that supports a Properties collection.) In our case, we are looking for the property called DefaultHTMLPageLayout. The code in Example 8-11 iterates through all of the projects currently selected in the Solution Explorer and looks for that property. When it finds it, it sets it to Flow layout.
Imports EnvDTE
Imports VSLangProj

Public Module FlowModule
    Public Sub FlowLayout()
        Dim proj As Project
        For Each proj In DTE.ActiveSolutionProjects
            Dim prop As [Property]
            For Each prop In proj.Properties
                If prop.Name = "DefaultHTMLPageLayout" Then
                    prop.Value = prjHTMLPageLayout.prjHTMLPageLayoutFlow
                End If
            Next
        Next
    End Sub
End Module
This code looks nothing like the code that the macro recorder generated for us. (It also behaves nothing like itthis code actually does what it is supposed to, unlike the recorded code.) Since we know that the macro recorder often doesn't do a good job of recording the setting of properties in dialogs, in retrospect this was a bad choice for the macro recorderwe would have done better to have started out from scratch with a custom macro.
You are not required to use a recorded macro as the starting point for all of your macros. After all, the macro recorder just ends up generating code that you could have written yourself. Sometimes it will be simpler to start from scratch.
We will now work through the creation of an example custom macro that could not reasonably have been created with the macro recorder: it will transfer the contents of the TaskList to a web page. Visual Studio .NET provides a TaskList that can keep track of outstanding development chores (see 'TaskList Comments' in Chapter 2). Imagine a situation in which your team runs a daily build and you would like to make the resulting TaskList available in a web page so that management and other members of your team could see the remaining tasks. In this section, we will develop a custom macro that does just that.
Our macro will read the contents of the TaskList into a DataSet. It will then write the DataSet to disk as XML in a location accessible to the web page. The web page will load the XML back into another DataSet and bind it to a DataGrid control in order to present the results.
Example 8-12 shows the code for our macro. This example shows an entire source file, including all necessary Import statements, so you will need to add a new file to one of your macro projects if you plan to try this code out. Call the new file BuildCommentDataSet. Since this code uses the ADO.NET DataSet class, you will also need to add references to the System.Data.dll and System.Xml.dll components in your macro project.
Imports EnvDTE
Imports System.Data

Public Module BuildCommentDataSet
    Public Sub Build()
        Dim tl As TaskList
        Dim ti As TaskItem

        ' Ask VS.NET for the Task List's Window object
        Dim win As Window = _
            DTE.Windows.Item(Constants.vsWindowKindTaskList)

        ' Get the TaskList object associated with the Window
        tl = win.Object

        ' Create a new DataSet and DataTable for holding the data
        Dim ds As New DataSet("SolutionBuildDataSet")
        Dim dt As New DataTable( _
            DTE.Solution.Properties.Item("Name").Value.ToString() _
            & "Tasks")

        ' Need a column for each interesting property
        dt.Columns.Add(New DataColumn("Category", GetType(String)))
        dt.Columns.Add(New DataColumn("Priority", GetType(String)))
        dt.Columns.Add(New DataColumn("Description", GetType(String)))
        dt.Columns.Add(New DataColumn("File", GetType(String)))
        dt.Columns.Add(New DataColumn("Line", GetType(String)))

        ' Add each task to the table
        Dim dr As DataRow
        For Each ti In tl.TaskItems
            dr = dt.NewRow()
            dr.Item("Category") = ti.Category
            dr.Item("Priority") = _
                ti.Priority.ToString().Replace("vsTaskPriority", "")
            dr.Item("Description") = ti.Description
            dr.Item("File") = ti.FileName
            dr.Item("Line") = ti.Line.ToString()
            dt.Rows.Add(dr)
        Next

        ' Add the DataTable to the DataSet
        ds.Tables.Add(dt)

        ' Save the DataSet as an XML document
        ds.WriteXml("c:\inetpub\wwwroot\tasklist.xml")
    End Sub
End Module
With this DataSet generation in place, building the ASP.NET page to display the data is quick and easy. Here is code in the .aspx file:
<%@ Page language="c#" Codebehind="SolutionTasks.aspx.cs" Inherits="Automate.SolutionTasks" %>
<HTML><HEAD></HEAD>
<body>
  <form id="SolutionTasks" method="post" runat="server">
    <asp:DataGrid id="DataGrid1" runat="server">
      <HeaderStyle Font-Bold="True"></HeaderStyle>
    </asp:DataGrid>
  </form>
</body>
</HTML>
If you are just copying these files into a web directory rather than adding them to a VS.NET web project, you will need to change the Codebehind attribute to an Src attribute, in order to get ASP.NET to compile the codebehind file. Here is the codebehind file:
using System;
using System.Data;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Automate
{
    public class SolutionTasks : System.Web.UI.Page
    {
        protected DataGrid DataGrid1;

        private void Page_Load(object sender, System.EventArgs e)
        {
            DataSet ds = new DataSet();
            ds.ReadXml(MapPath("tasklist.xml"));
            DataView dv = new DataView(ds.Tables[0]);
            dv.Sort = "Priority, Category, File, Line DESC";
            DataGrid1.DataSource = dv;
            DataBind();
        }
    }
}
You can see the result in Figure 8-5.
As described earlier in the section entitled Section 8.1.1.2, the VS.NET automation object model provides objects that raise events. Each category of events (e.g., build events, debugging events, text editor events) has a corresponding event source object. Writing macros that get called when these events are raised is very easy.
Whenever you create a new macro project, the macro IDE adds a module called EnvironmentEvents. The sole purpose of this module is to let you handle events raised by the IDE. If you open this file and click on the drop-down list at the top left of the editor window, you will see a list of event sourcesBuildEvents, DebuggerEvents, DocumentEvents, and so forth. If you select one of these, the drop-down list at the top right will be populated with a list of events. If you select one of these, the IDE will add an event handler for you.
Example 8-13 shows a typical event handler. It handles the OnBuildDone event from the BuildEvents object. This example will display a message box every time a build completes.
Private Sub BuildEvents_OnBuildDone(ByVal Scope As EnvDTE.vsBuildScope, _
        ByVal Action As EnvDTE.vsBuildAction) _
        Handles BuildEvents.OnBuildDone
    MsgBox("Build complete!")
End Sub
Macros are debugged in much the same way as regular code. (See Chapter 3 for more information on VS.NET's debugging facilities.) The main difference is that the debugging occurs in the macro IDE, not in the main IDE. The main IDE becomes inaccessible when you are debugging a macro.
Macros provide a powerful way to automate and customize the IDE, but they do have certain limitations. For example, you cannot invoke a macro as part of a command-line-based automated build, because VS.NET will display the IDE when it runs the macro.
Here are some other limitations on macros:
Cannot create custom property pages for the Options dialog box on the Tools menu
Cannot create custom tool windows
Cannot dynamically enable and disable items on menus and toolbars
Cannot add contact and descriptive information to the Visual Studio .NET Help About box
Cannot build user interfaces for macros
Our TaskList DataSet macro would be much more useful if we could arrange for the DataSet to be created after a solution is built without user intervention. But we would need some way of allowing the user to configure which solutions require a DataSet to be generated and where each solution should write the XML file. This kind of configurability is difficult to achieve with a macro, because macros cannot display user interfaces. Fortunately, we can solve this problem by writing an add-in instead of a macro. | http://etutorials.org/Programming/Mastering+visual+studio+.net/Chapter+8.+Automation+Macros+and+Add-ins/8.2+Macros/ | CC-MAIN-2018-05 | refinedweb | 2,994 | 65.73 |
#include <protocol.hpp>
This structure defines the body used in FastCGI BEGIN_REQUEST records. It can be cast from raw 8-byte blocks of data and received as is. A BEGIN_REQUEST record is received when the other side wishes to make a new request.
Definition at line 240 of file protocol.hpp.
Get keep alive value from the record body.
If this value is false, the socket should be closed on our side when the request is complete. If true, the other side will close the socket when done and potentially reuse the socket and multiplex other requests on it.
Definition at line 257 of file protocol.hpp.
References flags, and keepConnBit.
Get the role field from the record body.
Definition at line 247 of file protocol.hpp.
References Fastcgipp::Protocol::readBigEndian(), and roleB1.
Flag value.
Definition at line 267 of file protocol.hpp.
Referenced by getKeepConn().
Flag bit representing the keep alive value.
Definition at line 260 of file protocol.hpp.
Referenced by getKeepConn().
Reseved for future use and body padding.
Definition at line 269 of file protocol.hpp.
Role value least significant byte.
Definition at line 265 of file protocol.hpp.
Role value most significant byte.
Definition at line 263 of file protocol.hpp. | http://www.nongnu.org/fastcgipp/doc/1.2/classFastcgipp_1_1Protocol_1_1BeginRequest.html | crawl-003 | refinedweb | 205 | 62.95 |
?
One easy way is to just pick a folder name and stick with it so you can do this:
from os.path import join
package_path = join(sublime.packages_path(), "MyFolderName")
But, if you really just gotta have the directory name from Python, there is a Python variable that is called file which will give the path to the current script file. Then you can just strip off the file name.
from os.path import dirname, realpath
print(dirname(realpath(__file__)))
Thank you very much for your reply.
I tried
[quote="facelessuser"]
[/quote]
But it prints back
/
What am I doing wrong ?
Don't know. This works for me in ST2 and ST3. I would have to see your code and know where you are putting your code.
Thanks for looking into this !
This is my code:
import commands, subprocess
import sublime, sublime_plugin
from os.path import dirname, realpath
class MyPluginCommand(sublime_plugin.TextCommand):
def run(self, edit):
self.save()
self.myplugin(edit)
def save(self):
self.view.run_command("save")
def myplugin(self, edit):
scriptPath = sublime.packages_path() + "/MyPlugin/scripts/myscript.js"
print("1: " + self.view.file_name())
print("2: " + dirname(realpath(__file__)))
print("3: " + sublime.packages_path())
Output 2 & 3 is identical and on my system is:/Users/MacBookPro/Library/Application Support/Sublime Text 2/Packages
I was expecting Output 2 to look like this:/Users/MacBookPro/Library/Application Support/Sublime Text 2/Packages/MyPlugin
So I do not need to hardcode "MyPlugin" inside python script. Possible ?
Calling it from a function is probably the problem. file is probably only valid at init time. I would save the path to some global variable, and then just reference the global.
MY_PLUGIN = dirname(realpath(__file__))
...
def myplugin(self, edit):
scriptPath = sublime.packages_path() + "/MyPlugin/scripts/myscript.js"
print("1: " + self.view.file_name())
print("2: " + MY_PLUGIN)
print("3: " + sublime.packages_path())
Thank you very much for your help !!! I really appreciate it.It worked just fine, after I moved it outside the function. | https://forum.sublimetext.com/t/current-plugin-folder-path/8784/5 | CC-MAIN-2016-18 | refinedweb | 323 | 52.76 |
MATLAB Newsgroup
(Nate)
I am trying to convert a double of known length to a string using a mex function
Ideally, this is how the mex function would work,
num2str_mex(1)
ans =
'1'
This should be fairly simple, but I'm not having a whole lot of luck.
Thanks for the help,
Nate
The actual code should look something like,
#include "mex.h"
void num2str_mex(char *output, double input)
{
/* convert input of type double to output of type char */
/* not sure what goes here */
}
void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[] )
{
double *input
char *output
/* allocate memory for output string */
output = mxCalloc(1, sizeof(char));
/* create string array for return argument */
plhs[0] = mxCreateString(output);
/* assign pointers to output */
input = mxGetPr(prhs[0]);
/* call num2str_mex subroutine */
num2str_mex(output,input);
}
Or if you don't want to bother with malloc but uses a stack memory.
#include "mex.h"
#include <stdio.h>
void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[] )
{
char output[25];
sprintf(output, "%g", *mxGetPr(prhs[0]));
plhs[0] = mxCreateString(output);
}
% Bruno
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <j332b7$7qj$1@newscl01ah.mathworks.com>...
>
Awesome, thank you.
N
"Steven_Lord" <slord@mathworks.com> wrote in message <j3357h$hsk$1@newscl01ah.mathworks
>
I am trying to write a function that converts serial dates to readable date strings, kind of like Jan Simon's DateStr2Num found here,
except for the opposite way.
Currently I am reading in Unix time (POSIX time) and converting that serial date to representative numbers. For example, I am able to turn,
1314192660
into
2011 (year)
8 (month)
24 (day)
9 (hour)
31 (minute)
0 (second)
and now I would like to turn those numbers into a string,
'08/24/2011 09:31:00'
I have written a Matlab version of this code, but I wanted to write a mex version because I was hoping that it might go faster.
Thanks again for the help,
N. | http://www.mathworks.com/matlabcentral/newsreader/view_thread/311903 | CC-MAIN-2015-48 | refinedweb | 327 | 57.5 |
Essential Java Programming - Java Packages

This lecture covers packages and visibility as a method to structure your code base. We'll cover items such as: using the package keyword to define a class within a specific package, using the four levels of accessibility or visibility, using the import keyword to declare references to classes in a specific package, using the standard type naming conventions when creating packages, classes, and/or methods, visibility within the Java Modular System, and correctly executing a Java application class that is defined within a particular package. The English language is rich, but not rich enough for computers. Thus, we often have name clashes, such as two different classes named Patent. Packages take care of isolating one name from another by providing an explicit context. Packages are a means of organizing code into larger logical groups. In Java, packages map directly to a directory structure. In the case of the java.lang.Object class, the Object class is located within the directory java/lang. Packages have two distinct sets of classes: those which are made public, and those which are hidden inside the package. Hidden classes typically are support classes for the public ones, and are not meant to be used outside of the context of the package. The convention in Java is that package names are lowercase, class names start with an uppercase letter, and field or method names begin with lowercase. This helps to disambiguate the dot operator. An example of this is java.lang.System.out.println, where java.lang is the package; inside the package, System is a class, out is a static field within System, and println is a method in the object referred to by out. Packages map to the underlying directory structure where the classes are stored.
The class path environment variable is a pointer to the root of this folder structure. The convention is that all packages will have a directory structure that maps to the reverse domain name of the company developing the code. If this scheme is adhered to, then packages themselves will not have name clashes. The package keyword: each class may belong to only one package; that is, only one package statement is allowed per file. The package statement must be the first statement within a source file. A package statement at the beginning of a .java file designates which package its contents belong to. Since the proper class name of a class is the full package path name followed by the class name, we would have to reference this fully qualified name in our source code. For classes whose actual class name, such as Date, is unique within our source code, it becomes convenient to refer to the class by its short name. However, since the class name could exist in other packages, we have to tell the compiler from which package we wish to obtain the class. For example, there is a Date class in java.sql and, in addition, there is a Date class in java.util. The import statement can be used to direct the compiler to a specific class of a specific package, or to all classes of a package. We can also have multiple import statements in our code. In the case where there would still be a conflict, say using the Date class from both packages listed above, we could import one, but then use the full path name for the other, such as java.sql.Date. Since the proper name of a class is its full name, package name plus class name, we must provide the fully qualified class name to the runtime interpreter, the JVM. This is probably the single biggest problem new developers have when trying to execute their code.
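The Date clash just described can be sketched in code like this; the class name DateDemo is our own, purely for illustration:

```java
// Demo of the java.util.Date vs. java.sql.Date clash described above.
// We import one class and refer to the other by its fully qualified name.
import java.util.Date; // the short name "Date" now means java.util.Date

public class DateDemo {
    public static void main(String[] args) {
        Date utilDate = new Date();                     // imported short name
        java.sql.Date sqlDate =                         // fully qualified name
                new java.sql.Date(utilDate.getTime());

        System.out.println(utilDate.getClass().getName()); // java.util.Date
        System.out.println(sqlDate.getClass().getName());  // java.sql.Date
    }
}
```

Both classes coexist in the same file without ambiguity, because only one of them is ever referred to by its short name.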
Typically, as a programmer, you should allow the minimum level of visibility unless there is a reason for making it more visible. This will prevent others from using the class in ways that were unintended. Java supports four levels of visibility. Public allows anyone access; typically the class constructors and working methods are made public. Default, used when there is no explicit visibility modifier, allows only classes inside the same package to access. Protected allows access to classes inside the same package, and to subclasses of the declaring class. And finally, private allows access from only the class itself; it is typically used for instance variables and hidden methods. Use this modifier if there is no need for anyone outside of the class to use its methods or variables. Java nine introduced the concept of modules. Until Java nine, all public types that reside on the class path were visible to all other types on that same class path.
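The four levels listed above can be sketched in a single class; the Account name and its members are invented for illustration:

```java
// Each member below uses a different access level, per the list above.
public class Account {                // public: accessible from any package
    private double balance = 0.0;     // private: only within Account itself

    double rawBalance() {             // default (package-private): same package only
        return balance;
    }

    protected void audit() {          // protected: same package plus subclasses
        System.out.println("audit: " + balance);
    }

    public void deposit(double amount) { // public: the class's open API
        balance += amount;
    }
}
```

Code in the same package can call rawBalance(), a subclass in another package can call audit(), but only Account itself can touch balance directly.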
With the introduction of the modular system, an additional layer of accessibility has been added. A module defines a logical grouping of Java packages. Each module has at least a unique name and a module descriptor which defines the metadata of this module. With the introduction of Java nine, the public keyword no longer means that a type is visible to all other classes within the runtime. A public type is only public to other types in the same module, unless the package in which the type resides is explicitly exported by the module. Within the module descriptor, all packages that are available to consumers of this module must be declared. As we will see later, a module does not have to export a public API; in this case, the module descriptor would not contain any export statements. A dependency on another module must be defined in the module-info.java file. The dependency is defined using the requires keyword and the unique name of the other module. Every module defines which other modules it requires at compile time and runtime. When a module requires a different module, we can say that the module reads the public API from the other module. When a module called airport.service requires the flight.data module, the public API defined by the flight.data module is readable by the airport.service module. Java naming conventions: it is very important to use consistent naming techniques; consistent names make a real difference to readability, and you'll also be able to read other code that adheres to the same Java conventions.
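The airport.service and flight.data relationship described above could be declared in a module descriptor roughly like this; the exported package name is our own invention:

```java
// module-info.java for the hypothetical airport.service module.
module airport.service {
    // Read the exported, public packages of flight.data.
    requires flight.data;

    // Only this package is visible to consumers of airport.service;
    // every other package in the module stays hidden.
    exports com.example.airport.api;
}
```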
Okay, before we complete this lecture, pause this video and consider the following questions to test yourself on the content that we have just reviewed.
Okay, the answers to the above questions are: One, packages provide a hierarchical namespace which helps to eliminate class name conflicts. Two, packages provide a framework for scope of access when access modifiers are used. Three, the three method and field access modifiers are public, private, protected, plus default. Four, importing a package tells the compiler to look into the package to resolve relative class name references. Five, the proper name of a class is its package name followed by its class name. And six, package names should be in all lowercase.). | https://cloudacademy.com/course/essential-java-programming/essential-java-programming-java-packages/?context_resource=lp&context_id=434 | CC-MAIN-2022-40 | refinedweb | 1,200 | 64.2 |
JSF - How to access each row of dataTable as it is created?
I'm trying to figure out how to access each data item in a dataTable as it is created, in a backing-bean.
I realize that I can bind an HtmlDataTable component to the dataTable but...then what? Is there an event-handler that is fired for each row as it is created? I also realize that I can access the row object by calling the getRowData() method in an action but I don't want to post the form or fire an action...this is a read-only page (report.)
Here's my dataTable (so far):
.............and many other columns beyond this.....
Here's where I've declared the HtmlDataTable in my backing bean:
public class ChangeOrdersBean
{
    private HtmlDataTable coDataTable;

    public HtmlDataTable getCoDataTable()
    {
        return coDataTable;
    }

    public void setCoDataTable(HtmlDataTable coDataTable)
    {
        this.coDataTable = coDataTable;
    }
}
...my question is; now what? This builds and runs...but how/where can I access and manipulate each data item (the entity in each individual row) as it is created?
I'm dealing w/ some particularly gnarly legacy data in which the customer is particular in how it is to be displayed, arranged, and converted to different values.
Any advice would be much appreciated, thank you! | https://www.java.net/node/660213 | CC-MAIN-2015-14 | refinedweb | 231 | 57.37 |
Tag: unshare
Found 13 results for 'unshare'.
1) linux - Why is it possible to create other namespaces without real root using user namespaces?
2) process - Why does unshare based killing only work reliably with --fork?
3) namespace - Losing permissions by adding capability?
4) namespace - How can I use a bind mount in a network namespace?
5) linux - how to unshare network for current process
6) namespace - How to `unshare -n` without changing to root?
7) process - Why are PIDs in new PID namespace not contiguous
8) chroot - Force program to use /dev/urandom
9) linux - "unshare --mount" inside a jenkins chroot environment
10) process - Why unshare -p does not imply -f and --mount-proc?
11) linux - Why unshare with chroot does not isolate /dev like /proc?
12) namespace - Why does unshare binary call clone() so many times?
13) linux - How do you get the child pid of `unshare` when using --fork for `nsenter -t <pid>`? | https://programmatic.solutions/tag/unshare | CC-MAIN-2022-40 | refinedweb | 155 | 72.87 |
Styling Components In React
Styling React components over the years has improved and become much easier with various techniques and strategies. In this tutorial, we’re going to learn how to style React components using four major styling strategies — with examples on how to use them. In the process, I will explain the cons and pros of these styling strategies, and by the end of this tutorial, you’ll know all about styling React components and how they work along with the various methods that can be used for styling these components.
Note: A basic understanding of ReactJS and CSS would be good to have for this tutorial.
What Does ‘Styling’ In React Applications Even Mean?
The reason you’ll style your React application is no different from that which you have in mind when styling other websites or web applications you have been working on. Styling in React applications describes how React components or elements are displayed on screen or any other media.
The whole essence of building frontend UIs with React is how flexible it is to build these UIs especially as components and also style them to give us a great look and experience. It is important to know that whatever styling strategy you may decide to use is still CSS — you are writing CSS like you’ve always done. The difference is that the strategies (which we’ll be looking at) help make the process easy because of the uniqueness of React.
Major Styling Strategies In React
There are various strategies to follow when planning to style React components, these strategies have also increased and evolved over the years. In this tutorial, we would be talking about the most popular and modern styling strategies, and how to use them to style our React components. These styling strategies include:
- CSS and SCSS Stylesheets
This involves using separate stylesheets, like our conventional way of styling HTML websites, written either in plain CSS or in a CSS preprocessor called SASS.
- CSS Modules
A CSS Module is a CSS file in which all class names are scoped locally by default, so a component's styles apply to that component alone.
- styled-components
styled-components lets you write component-level styles inside your JavaScript using a technique called CSS-in-JS.
- JSS
JSS is an authoring tool for CSS which allows you to use JavaScript to describe styles in a declarative, conflict-free and reusable way. It can compile in the browser, server-side or at build time in Node.
In the next section of this tutorial, we are going to be talking about each of these strategies of styling with examples of their syntax.
1. CSS And SASS Stylesheets
CSS or SCSS Stylesheets is a styling strategy that involves the use of external CSS or SASS stylesheets that can be imported into your React components depending on where you need the styling to be applied.
For example, we have a SASS file of styles called Box.scss that we need to use in a component called Box.js; below is the code for our SASS file.
// Box.scss
.Box {
  margin: 40px;
  border: 5px solid black;
}

.Box_content {
  font-size: 16px;
  text-align: center;
}
In order to make use of this styling inside our Box component all we need to do is import the SASS file directly into our
Box.js component like so:
import React from 'react';
import './Box.scss';

const Box = () => (
  <div className="Box">
    <p className="Box_content">
      Styling React Components
    </p>
  </div>
);

export default Box;
After creating the styles and importing the file into Box.js, we can then set the className attribute to match what we have in the stylesheet.
While using this strategy, you could also leverage on existing frameworks like; Bulma, Bootstrap, etc. These frameworks provide you with existing classes and components you could plug into your React application without styling every aspect of your application.
Benefits of using CSS and SASS Stylesheets
- It is much more popular than the rest of the styling strategies, so there is a ton of helpful resources when you run into a bug.
- Caching & Performance
Standard CSS files are easy for the browser to optimize for, caching the files locally for repeat visits, and ultimately giving performance wins.
- Un-opinionated and Universal
CSS and SASS is universal and has no opinion on how you render your UI making it a great choice for teams that have legacy CSS and are migrating over to a new framework or rebuilding their website or product.
- Quickly Iterate A New Design
You can very easily rip out the entire stylesheet and create a new one to refresh the look and feel of your app without digging through potentially hundreds of components.
- CSS Frameworks
CSS frameworks come in handy if you are a new developer, or you want to quickly work on a prototype without diving deep into writing your own full-blown stylesheets. CSS frameworks will provide you with building blocks to get your idea off the ground. Some of these frameworks include, Bootstrap, Bulma, Semantic UI, Materialize.
Shortcomings of using CSS and SASS Stylesheets
- Readability
If not properly structured, a CSS or SASS stylesheet can become long and difficult to navigate through as the application becomes complex.
- Legacy CSS Can Live On For Years
Most times these really large stylesheets can become so complex and long that cleaning up old, outdated or even unused styles can be a pain.
Note: “Sass has two syntaxes. The most commonly used syntax is known as “SCSS” (for “Sassy CSS”) and is a superset of CSS syntax. This means that every valid CSS stylesheet is a valid SCSS file with the same meaning.”
CSS Modules
A CSS Module is a CSS file in which all class names and animation names are scoped locally by default. When using CSS Modules, each React component is provided with its own CSS file, that is scoped to that file and component alone.
The beauty of CSS Modules happens at build time, when the local class names, which can be super simple and conflict-free, are mapped directly to automatically-generated ones and exported as a JS object literal to use within React.
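To make that build-time mapping concrete, here is a rough sketch of the object literal a bundler might export for Box.css; the generated hashes are made up:

```javascript
// The bundler turns each local class into a unique generated name and
// exports the mapping as a plain object literal, e.g.:
const styles = {
  container: "Box__container___2Xs9z",
  content: "Box__content___1a4Fq",
};

// A component then reads styles.container instead of hard-coding the class:
const markup =
  `<div class="${styles.container}">` +
  `<p class="${styles.content}">Styling React Components</p></div>`;
console.log(markup);
```

Because every component gets its own generated names, two components can both define a container class without ever colliding.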
We can make use of CSS Modules in our React applications by importing the file directly into the component file.
For example, the code below is an example of how to use a CSS module in a React Component.
//Box.css
:local(.container) {
  margin: 40px;
  border: 5px dashed pink;
}

:local(.content) {
  font-size: 15px;
  text-align: center;
}
:local(.className) is used when you use create-react-app boilerplate because of webpack configurations.
When using webpack, you can add the loader and also include the module in your webpack.config.js in order to make CSS Modules work with webpack.
{
  test: /\.css$/,
  loader: 'style!css-loader?modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]'
}
In order to make use of this CSS Module inside our Box component, we need to import the module file directly into our Box.js component and use the className instead of the style prop to access the styles, like so:
import React from 'react';
import styles from './Box.css';

const Box = () => (
  <div className={styles.container}>
    <p className={styles.content}>
      Styling React Components
    </p>
  </div>
);

export default Box;
styles here is an object that contains the styles we created in Box.css. This object will contain the classes container and content, which map to their respective styles. To make use of them, we assign the element’s className to the appropriate class we have in Box.css.
Benefits Of Using CSS Modules
- Modular and reusable CSS,
- No more styling conflicts,
- Explicit dependencies,
- Local scope,
- Clear dependencies,
- No Code duplication in case of SSR,
- No Additional costs in JS payload,
- Variables, Sharing variables in CSS and exposing it to JavaScript.
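The last point, sharing variables between CSS and JavaScript, is commonly done with the :export block from the ICSS convention that css-loader understands. A small sketch, with file and variable names of our own choosing:

```css
/* colors.css -- values defined once in CSS, readable from JavaScript */
:export {
  primaryColor: #24292e;
  spacingUnit: 8px;
}
```

On the JavaScript side you would then write import colors from './colors.css'; and read colors.primaryColor like any other object property.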
Shortcomings of using CSS Modules
- Extra build tools (e.g. webpack).
- Mixing CSS Modules and global CSS classes is cumbersome.
- When a reference is made to an undefined CSS Module, it resolves to undefined without a warning.
- Using the styles object whenever constructing a className is compulsory.
- Only allows usage of camelCase CSS class names.
styled-components
styled-components is a library for React and React Native that allows you to use component-level styles in your application that are written with a mixture of JavaScript and CSS.
It was created with the same method of operation of CSS Modules, a way to write CSS that’s scoped to a single component, and not accessible to any other element in the page or even component.
styled-components allows React developers to write plain CSS in React components without having to worry about clashing of class names.
For example, if we need to implement styling in our
Box.js file using styled components, we would first need to carry out the following steps:
- First, we need to install the styled-components library by running npm install styled-components --save.
- We then need to import the styled-components library into our component by writing import styled from 'styled-components';.
- Now we can create a variable by selecting a particular HTML element where we store our style keys.
- Then we use the name of our variable we created as a wrapper around our JSX elements.
The code below is an implementation of all the steps we mentioned above.
import React from 'react';
import styled from 'styled-components';

const Box = styled.div`
  margin: 40px;
  border: 5px solid black;
`;

const Content = styled.p`
  font-size: 16px;
  text-align: center;
`;

const BoxComponent = () => (
  <Box>
    <Content>
      Styling React Components
    </Content>
  </Box>
);

export default BoxComponent;
In the code above, we import the styled object from styled-components, which makes use of tagged template literals to style your component. We then create a variable that holds our styling and also acts as a wrapper around our content; that’s why we have the <Box> and <Content> tags. In these variables, we assign the styled object plus the HTML element we want to style, followed by the accompanying styles for that HTML element. To use the variables we created for styling, all we need to do is wrap our JSX or content in between them as tags.
Benefits Of Using styled-components
- Consistency
styled-components makes it easy for you to publish a React component to NPM. These components can be customised through props and/or extended via styled(Component), with no clashing of CSS selectors.
- Sass Syntax Out-Of-The-Box
You can get Sass-style syntax out of the box without having to install or set up Sass or any extra build tool.
- Dynamic Styling
You can make use of props to dynamically change the styles in any way that feels natural to anyone comfortable with React.
- Theming
Using React’s Context API, styled-components offers a
ThemeContextthat can you can pass a theme object directly to, making it very accessible in any of your components, and by default can be interpolated into your styled definitions.
Shortcomings Of Using styled-components
- Learning Curve
Frontend developers that are already comfortable with writing traditional CSS will have to learn a different way of styling that is different from how traditional CSS is written.
- Integration with Legacy CSS can be painful.
If you’re making use of a UI library like Material UI or even traditional CSS, integrating styled-components together with them can be confusing to locate and debug styles.
- Performance
styled-components converts all of the style definitions in your React component into plain CSS at runtime and injects everything into <style> tags in the head of the document. This affects performance in the sense that it not only increases the size of our HTML, which can have an impact on load time, but there is also no way to chunk the output CSS either.
JSS
JSS is an authoring tool for CSS which allows you to use JavaScript to describe styles in a declarative, conflict-free and reusable way. It can compile in the browser, server-side or at build time in Node. JSS is a newer styling strategy that hasn't been adopted as widely yet. It is framework agnostic and consists of multiple packages: the core, plugins, framework integrations and others.
JSS has third party API adapters that can be used to write JSS like styles but differently, these third party API adapters include:
- Styled-JSS
This is a styled-component API adapter.
- Glamor-JSS
Glamor flavored CSS with JSS under the hood.
- Aphrodite-JSS
Aphrodite like API.
React-JSS
React-JSS makes use of JSS with React using the new Hooks API. JSS and the default preset are already built into the library. According to the official React-JSS docs, the following are the benefits of using React-JSS instead of the core JSS library in your React components.
- Dynamic Theming
This allows context-based theme propagation and runtime updates.
- Critical CSS Extraction
The only CSS from rendered components gets extracted.
- Lazy Evaluation
Style Sheets are created when a component mounts and removed when it’s unmounted.
- The static part of a Style Sheet will be shared between all elements.
- Function values and rules are updated automatically with any data you pass to useStyles(data). You can pass props, state or anything from context, for example.
The code below is an example of how React-JSS is used.
import React from 'react'
import { render } from 'react-dom'
import injectSheet, { ThemeProvider } from 'react-jss'

const styles = (theme) => ({
  wrapper: {
    padding: 40,
    background: theme.background,
    textAlign: 'center'
  },
  title: {
    font: {
      size: 40,
      weight: 900,
    },
    color: props => props.color
  },
  link: {
    color: theme.color,
    '&:hover': {
      opacity: 0.5
    }
  }
})

const Comp = ({ classes }) => (
  <div className={classes.wrapper}>
    <h1 className={classes.title}>Hello React-JSS!</h1>
    <a className={classes.link}>See docs</a>
  </div>
)

const StyledComp = injectSheet(styles)(Comp)

const theme = {
  background: '#aaa',
  color: '#24292e'
}

const App = () => (
  <ThemeProvider theme={theme}>
    <StyledComp color="red"/>
  </ThemeProvider>
)

render(<App />, document.getElementById("root"))
In the code above, which is somewhat similar to using styled-components, we import injectSheet and ThemeProvider from the react-jss library. The ThemeProvider is a Higher-Order Component in React, which passes the theme object down the React tree by the use of context; it will contain the root theme of the component. injectSheet, meanwhile, is used for injecting the stylesheet we have created (in this case, styles) into the main component.
const Comp = ({ classes }) => (
  <div className={classes.wrapper}>
    <h1 className={classes.title}>Hello React-JSS!</h1>
    <a className={classes.link}>See docs</a>
  </div>
)
The code above is the main React component before it has been injected with the styles object; it contains the main markup for our component and will be styled once we inject the styles object we created.
const StyledComp = injectSheet(styles)(Comp)
The line of code above injects the styles we have created into the component, using the injectSheet() function.
const theme = {
  background: '#aaa',
  color: '#24292e'
}
The code above holds the theme object that would be passed to the
<ThemeProvider> HOC via context and it acts as the root theme of our component.
const App = () => (
  <ThemeProvider theme={theme}>
    <StyledComp color="red"/>
  </ThemeProvider>
)
In this portion of the code, we use the <ThemeProvider> HOC to render the component into which we injected the stylesheet we created, <StyledComp color="red"/>.
At the end of rendering, the browser displays a grey box containing the heading “Hello React-JSS!” in red (the color passed in as a prop) and a “See docs” link.
Benefits Of JSS
- Local Scoping
JSS supports local scoping, taking it to the next level by automating scoping, which leads to a high level of predictability.
- Encapsulation
Encapsulation facilitates maintenance and eliminates errors, as you can modify all component-related code and style in the same place, without having to worry about unexpectedly changing other parts of the application.
- Reusability
Components are reusable, so you only have to write them once, then you can run them everywhere while maintaining their styling too.
- Dynamic Styling
You can make use of props to dynamically change the styles in any way that feels natural to anyone comfortable with React.
Shortcomings Of JSS
- Learning Curve
Learning JSS can be very tricky especially frontend developers that are already used to writing traditional CSS.
- Extra Layer of Complexity
Putting a CSS-in-JS library into use adds an extra layer to your React application, which can sometimes be unnecessary.
- Code Readability
Custom or Automatically generated selectors can be very difficult to read especially when using your browser devtools to debug.
Conclusion
Each of these has its advantages and disadvantages, and it all depends on your personal/company preference and the complexity of your application. Also, whatever styling strategy you may decide to use, it is still basically CSS. You can write CSS like you’ve always done, but React and other libraries offer solutions that can also help with styling.
I hope you enjoyed working through this tutorial. You could always read more on Styling React Components from the references below. If you have any questions, leave a comment below and I’ll be happy to reply to each and every single one.
Resources
- JSS (docs)
- “Styling In React,” Jacob Evans, V School
- “Styled Components,” Flavio Copes
- “Four Ways To Style React Components,” Agata Krzywda
- “CSS-in-JS 101: All You Need To Know ,” stereobooster, GitHub
- “Styled Components vs. CSS Stylesheets,” Luke Smetham, Stream.io
- “Best Practices For Styling React Components,” Chris Parker, Pluralsight
On Tue, Jan 20, 2009 at 10:43:33AM -0700, Lance Vermilion wrote:
> Have you tried adding something like the following to you .cloginrc?

No.

> add userprompt AMS-CORE-1 "RP/0/RP1/CPU0:AMS-CORE-1"

This looks just like the prompt on the router, which clogin already
detects correctly. Anyway, I just tried now adding it to my .cloginrc
and I'm running into the same problem.

The problem is _not_ with how clogin identifies the prompt, it does
this correctly so the value stored in $prompt is correct. The problem
is when clogin matches $prompt towards the lines received thereafter.
Due to the way IOS XR behaves the $prompt variable could appear
several times though not a complete line (with a new line at the end)
and this confuses clogin. Please look at the strace output and you
will see how you receive parts of the line several times before it is
finally sent with an ending \n. That is where the problem lies. Or so
I believe ;-)

Kind regards,
   Kristian.

> On Tue, Jan 20, 2009 at 10:33 AM, Kristian Larsson
> <kristian at spritelink.net> wrote:
> > Hello!
> >
> > I have found what I believe is a bug in clogin.
> >
> > When using clogin to automatically send commands
> > and retrieve its output from an IOS XR device the
> > prompt is not correctly parsed.
> >
> > When clogin first logs in, it sends a simple
> > carriage return (\n) to the router to be able to
> > determine the prompt. Here's an strace of that..
> >
> > recvfrom(3, "RP/0/RP1/CPU0:AMS-CORE-1#", 6938, 0, NULL, NULL) = 25
> > write(1, "RP/0/RP1/CPU0:AMS-CORE-1#", 25) = 25
> > read(0, "\r", 8177) = 1
> > sendto(3, "\r\0", 2, 0, NULL, 0) = 2
> > recvfrom(3, "\r\n\rRP/0/RP1/CPU0:AMS-CORE-1#", 6913, 0, NULL, NULL) = 28
> > write(1, "\r\n\rRP/0/RP1/CPU0:AMS-CORE-1#", 28) = 28
> >
> > So far, everything is fine and dandy. The router
> > is obviously called ams-core-1 and it has the
> > prompt RP/0/RP1/CPU0:AMS-CORE-1# which clogin
> > correctly identifies.
> > Here comes the next part, clogin sends 'terminal
> > length 0'...
> >
> > read(0, "terminal length 0\r", 8176) = 18
> > sendto(3, "terminal length 0\r\0", 19, 0, NULL, 0) = 19
> > recvfrom(3, "terminal\rRP/0/RP1/CPU0:AMS-CORE-1#terminal length\rRP/0/RP1/CPU0:AMS-CORE-1#terminal length 0\r\n\r", 6885, 0, NULL, NULL) = 95
> >
> > So it reads 'terminal length 0' from clogin, sends
> > it to the CRS and then receives the output.
> >
> > Now, IOS XR does one thing a tad differently than
> > IOS, it has command completion on the space key
> > as well as on the tab key. My guess is that this
> > mechanism forces the IOS XR CLI to redraw the line
> > each and every time a space is entered.
> > If we examine the recvfrom line
> >
> > recvfrom(3, "terminal\rRP/0/RP1/CPU0:AMS-CORE-1#terminal length\rRP/0/RP1/CPU0:AMS-CORE-1#terminal length 0\r\n\r", 6885, 0, NULL, NULL) = 95
> >
> > we see that "terminal" is first echoed back to
> > the client. Then, as a space was entered, IOS XR
> > does a carriage return and redraws the entire line
> > starting with the prompt and filling up with
> > "terminal length" and the same thing over again to
> > "terminal length 0".
> >
> > As clogin merely looks for $prompt which is
> > 'RP/0/RP1/CPU0:AMS-CORE-1#', it will get a match
> > several times over and is thus unable to interpret
> > the situation correctly.
> >
> > I've tried modifying clogin in several ways
> > - adding \n to prompt or several of the places
> > where $prompt is used and so forth but so far
> > without much luck. My TCL-fu is not strong enough.
> >
> > Could some TCL guru have a look at this? I could
> > probably arrange access to an IOS XR device for
> > testing.
> >
> > Oh, I'm running clogin version 2.3.2a9
> >
> > Kind regards,
> >    Kristian.
> >
> > --
> > Kristian Larsson                               KLL-RIPE
> > Network Engineer / Internet Core    Tele2 / SWIPnet [AS1257]
> > +46 704 910401                      kll at spritelink.net
> > _______________________________________________
> > Rancid-discuss mailing list
> > Rancid-discuss at shrubbery.net
>
> _______________________________________________
> Rancid-discuss mailing list
> Rancid-discuss at shrubbery.net

--
Kristian Larsson                               KLL-RIPE
Network Engineer / Internet Core    Tele2 / SWIPnet [AS1257]
+46 704 910401                      kll at spritelink.net
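The matching problem described in the thread can be illustrated outside of Tcl. Below is a rough Python sketch (not clogin's actual Tcl/Expect code) of why a bare substring match on $prompt fires on the IOS XR line redraws, while requiring the prompt to terminate the buffer does not:

```python
import re

prompt = "RP/0/RP1/CPU0:AMS-CORE-1#"

# What IOS XR sends back for "terminal length 0": the line is redrawn
# (prompt included) after every space, so the prompt appears mid-buffer.
chunk = ("terminal\r" + prompt + "terminal length\r" +
         prompt + "terminal length 0\r\n\r")

# Naive approach: any occurrence of the prompt looks like "command done".
naive_hits = chunk.count(prompt)

# Safer approach: only accept the prompt when nothing follows it,
# i.e. the device is actually waiting for input again.
anchored = re.search(re.escape(prompt) + r"$", chunk)

print(naive_hits)        # 2 (both are redraws, i.e. false positives)
print(anchored is None)  # True: the command output has not finished yet
```

The real fix in an Expect script would be along the same lines: anchor the prompt pattern so it only matches when the prompt ends the received data.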
Hi there,
Is there an equivalent method for Presentation.SlideSize (from ppt 2003) for pptx (2007) ?
Thanks!
Hi there,
Is there an equivalent method for Presentation.SlideSize (from ppt 2003) for pptx (2007) ?
Thanks!
Hello,
Sure, pptx has equivalent for that. There is similar PresentationEx.SlideSize property which returns SlideSizeEx object. In last versions of Aspose.Slides you can read or set 3 properties there:
Hi Alcrus,
I am having the following issue.
The SIZE of the slide i.e. the Percentage Size on the bottom right of
PowerPoint slides is fixed at 60% for all the slides when the SSRS reports
are exported to PowerPoint 2007 using PPTX option.
Solution Needed: How can I change the Slide Size by
percentage to 100% by default, so that every report when exported to
PowerPoint, each slide is 100% by default.
You have mentioned PresentationEx.SlideSize property above, but I am not able to find the SlideSize property in the Aspose.Slides.ReportingServices DLL for .net.
could you please guide me to find the above mentioned property.
There is a Size property of the Slide class, which is read-only as shown below; there is no set accessor.
namespace Aspose.Slides.ReportingServices.Extension
{
public abstract class Slide
{
protected Slide();
public abstract int Index { get; }
public abstract SizeF Size { get; }
public abstract void AddImage(Stream stream, RectangleF rectangle);
public abstract void AddNote(string text, TextFormat textFormat);
public abstract void AddTextBox(string text, RectangleF rectangle, Color background,
TextFormat textFormat);
}
}
So I am assuming we can not change the size of the slides. Please let me know if I am wrong.
Thanks,
Santhosh.
Hi Santosh, | https://forum.aspose.com/t/slidesize-equivalent-for-powerpoint-2007/96631 | CC-MAIN-2022-21 | refinedweb | 269 | 57.87 |
Another important and exciting feature of object-oriented programming is operator overloading. C# supports the concept of operator overloading.
Operator overloading is a concept in which an operator can be defined to work with user-defined data types, such as structs and classes, in the same way as with the pre-defined data types.
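As a sketch (the type and members here are invented for illustration), overloading + for a user-defined struct looks like this:

```csharp
struct Money
{
    public decimal Amount;
    public Money(decimal amount) { Amount = amount; }

    // Defines what '+' means for two Money values, just as it is
    // pre-defined for int, decimal and the other built-in types.
    public static Money operator +(Money a, Money b)
    {
        return new Money(a.Amount + b.Amount);
    }
}
```

With this in place, new Money(2.50m) + new Money(1.25m) yields a Money whose Amount is 3.75m.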
An attribute is a declarative tag, which can be used to provide information to the runtime about the behavior of elements of application such as classes and assemblies.
Attributes are used to give extra information to the .NET compiler. C# and the .NET framework have a few built-in attribute types, but it is also possible to create new ones by extending the system Attribute class. Below we describe a few common uses of attributes.
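As a sketch (the attribute and class names are invented for illustration), a custom attribute extends System.Attribute, is applied as a declarative tag, and is read back through reflection:

```csharp
using System;

[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name;
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Jane")]          // the tag itself adds no behavior...
class ReportGenerator { }

class Demo
{
    static void Main()
    {
        // ...until something inspects it via reflection.
        AuthorAttribute attr = (AuthorAttribute)Attribute.GetCustomAttribute(
            typeof(ReportGenerator), typeof(AuthorAttribute));
        Console.WriteLine(attr.Name);   // prints "Jane"
    }
}
```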
In object-oriented languages, objects expose encapsulated functions called methods. Methods are encapsulated functions which run when they are invoked.
Sometimes, however, we think of the process of method invocation more grandly. In such a case, the method invocation is termed an 'event', and the running of the method is the 'handling' of the event. A standard example of an event is a user's selection of a button on a graphical user interface; this action may trigger a number of methods to 'handle' it.
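A minimal sketch (the Button class here is invented, not the Windows Forms control):

```csharp
using System;

class Button
{
    // The event exposes an encapsulated delegate; outside code may
    // only subscribe (+=) or unsubscribe (-=), not invoke it directly.
    public event EventHandler Clicked;

    public void SimulateClick()
    {
        if (Clicked != null)
            Clicked(this, EventArgs.Empty);   // run the handlers
    }
}

class Demo
{
    static void Main()
    {
        Button b = new Button();
        b.Clicked += delegate { Console.WriteLine("Button was clicked"); };
        b.SimulateClick();   // prints "Button was clicked"
    }
}
```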
A delegate is a method template which is used to implement the concept of a function pointer.
The C and C++ languages have the concept of a function pointer. This was even more useful when programming for the Microsoft Windows operating systems because the Win32 library relies on the concept of callback functions. Callback functions are used in Microsoft Windows programming to process messages. For this reason, and because of their functionality, callback functions were carried over into the .NET Framework, where they were defined with the name of delegates.
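In C#, the delegate declares a method signature, and any matching method can be assigned to it and called through it, much like a type-safe function pointer. A minimal sketch (names are invented for illustration):

```csharp
using System;

delegate int Transform(int value);   // the "method template"

class Demo
{
    static int Double(int v) { return v * 2; }

    static void Main()
    {
        Transform t = Double;        // point the delegate at a method
        Console.WriteLine(t(21));    // prints 42, via the delegate
    }
}
```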
An indexer is a member that enables objects to be indexed in the same way as an array.
If properties are 'virtual fields', indexers are more like 'virtual arrays'. Indexers permit instances of a class or struct to be indexed in the same way as arrays. Indexers are similar to properties except that their accessors take parameters. They allow a class to emulate an array, where the elements of this array are actually dynamically generated by function calls.
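A minimal sketch (the class is invented for illustration):

```csharp
class WeekTemperatures
{
    private double[] temps = new double[7];

    // The indexer's accessors take a parameter, so instances can be
    // read and written with [] just like an array, while the accessors
    // can validate or compute the values.
    public double this[int day]
    {
        get { return temps[day]; }
        set { temps[day] = value; }
    }
}
```

Here w[0] = 21.5 calls the set accessor and double t = w[0] calls the get accessor.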
A property is a feature that adds more intelligence to data fields for getting and setting their values.
Properties are named members of classes, structs and interfaces. They provide a flexible mechanism to read, write, or compute the values of private fields through accessors. Properties can be thought of as virtual fields. From the outside, a class' property looks just like a field. But from the inside, the property is generated using the actual class fields.
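A small sketch (the Account class is invented for illustration) of accessors guarding a private field:

```csharp
class Account
{
    private decimal balance;            // the real, private field

    // Looks like a field from the outside; runs code on the inside.
    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (value < 0)
                throw new System.ArgumentException("Balance cannot be negative");
            balance = value;
        }
    }
}
```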
The Collections namespace of System contains many classes and interfaces, which provide ways to define various collections of objects. These interfaces and classes are useful for every programmer developing large applications.
Basic Subjects
Social Us | http://ecomputernotes.com/csharp/advanced-csharp | CC-MAIN-2017-47 | refinedweb | 489 | 56.15 |
Changes related to "LVM"
From FedoraProject
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
10 April 2014
- (Move log); 14:08 . . Jreznik (Talk | contribs) moved page Cache Logical Volumes to Changes/Cache Logical Volumes (Missing Change namespace)
8 April 2014
- (diff | hist) . . Cache Logical Volumes; 20:29 . . (+2) . . Jbrassow (Talk | contribs) (→Release Notes: )
- (diff | hist) . . Cache Logical Volumes; 20:27 . . (-2,014) . . Jbrassow (Talk | contribs)
- (diff | hist) . . N Cache Logical Volumes; 20:12 . . (+12,158) . . Jbrassow (Talk | contribs) (Created page with "{{admon/important | Comments and Explanations | The page source contains comments providing guidance to fill out each section. They are invisible when viewing this page. To re...") | http://fedoraproject.org/w/index.php?title=Special:RecentChangesLinked&days=14&from=&target=LVM | CC-MAIN-2014-15 | refinedweb | 127 | 68.97 |
- Ingress, such as CSV readers, parsing logs and storing data in Tables.
Goals of the SDK
- Provide a way to make it easier to use Azure Storage when doing any background processing work.
- The SDK makes it easier to work with Azure Storage. You do not have to deal with writing code to read/write from storage.
- Provide a rich diagnostics and monitoring experience without your having to write any diagnostics and logging code.
Features of the SDK
Triggers
Functions will be executed when a new input is detected on a queue or a blob.
Bindings
The SDK provides model binding between C# primitive types and Azure storage such as Blobs, Tables, Queues and Service Bus. This makes it easy for a developer to read/write from blobs, tables and queues. This provides the following benefits.
- Convenience. You can pick the type that’s most useful for you to consume and the WebJobs SDK will take care of the glue code. If you’re doing string operations on a blob, you can bind directly to TextReader/TextWriter, rather than worry about how to convert to a TextWriter.
- Flushing and Closing: The WebJobs SDK will automatically flush and close outstanding outputs.
- Unit testability. The SDK makes it possible to unit test your code since you can mock primitive types like TextWriter rather than ICloudBlob.
- Diagnostics. Model binding works with the dashboard to give you real time diagnostics on your parameter usage.
Azure Queues
The SDK can be used to trigger a function on a new message on a queue. The SDK allows you to easily access the message contents by allowing you to bind to String, Poco (Plain old CLR object), byte[] and Azure storage SDK types. Following are the other mainline features available for queues. Please read these announcement posts 0.5.0-beta, 0.4.0-beta and 0.3.0-beta for more details.
- Trigger a function and bind the message contents to String, Poco (Plain old CLR object), byte[] and CloudQueueMessage.
- Send single or multiple message on queue.
- Parallel execution with Queues: The SDK will fetch messages for a queue in parallel within a QueueTrigger. This means that if a function is listening on a queue, the SDK will get a batch of 16 (default) queue messages in parallel for that queue. The functions are also executed in parallel.
- Handling of poison messages in Azure Queues
- Access DequeueCount property of the queue
- Better polling logic for Azure Queues: The SDK implements a random exponential back-off algorithm to reduce the effect of idle queue polling on storage transaction costs.
- Fast path notifications: The SDK will fast track messages if you are sending messages to multiple queues using the SDK.
- Configuration options for queue polling: The SDK exposes a few knobs where you can configure the Queue polling behavior.
- MaxPollingInterval is used when a queue remains empty: the longest period of time to wait before checking for a message. The default is 1 min.
- MaxDequeueCount controls when a Queue message is moved to a poison queue. The default is 5.
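Put together, a queue-triggered function and its host can be as small as the following sketch (the queue name and POCO are illustrative):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Order
{
    public int Id { get; set; }
    public string Product { get; set; }
}

public class Functions
{
    // Runs for each new message on the "orders" queue; the SDK binds
    // the message body to the Order POCO.
    public static void ProcessOrder(
        [QueueTrigger("orders")] Order order, TextWriter log)
    {
        log.WriteLine("Processing order {0}: {1}", order.Id, order.Product);
    }
}

class Program
{
    static void Main()
    {
        // The host indexes the functions and listens for triggers.
        JobHost host = new JobHost();
        host.RunAndBlock();
    }
}
```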
Azure Blobs
The SDK can be used to trigger a function when a new blob is detected or an existing blob is updated. The SDK allows you to access the blob contents by allowing you to bind to Stream, String, Poco (Plain old CLR object), byte[], TextReader, TextWriter and Azure storage SDK types. Please read these announcement posts 0.5.0-beta, 0.4.0-beta and 0.3.0-beta for more details.
- BlobTriggers are only triggered when a new blob is detected or an existing blob is updated.
- Retry and Error handling for Blobs.
Azure Storage Tables
The SDK allows you to bind to Tables and perform read, write, update and delete operations. Please read the announcement posts for 0.6.0-beta, 0.5.0-beta, 0.4.0-beta and 0.3.0-beta for more details. Ingress is a common scenario when you are parsing files stored in blobs and storing the values in Tables, such as with CSV readers. In these cases the Ingress function could be writing lots of rows (a million in some cases). The WebJobs SDK makes it possible to implement this functionality easily, and allows you to add real-time monitoring capabilities such as the number of rows written in the table, so you can monitor the progress of the Ingress function.
Azure Service Bus
Similar to Azure Queues, the SDK allows you to trigger functions when a new message is sent to a Service Bus Queue or topic. The SDK allows you to easily access the message contents by allowing you to bind to String, Poco (Plain old CLR object), byte[] and BrokeredMessage. Please read the announcement post for 0.3.0-beta for more details.
General
Following are some of the other features of the SDK.
- Async Support: The SDK supports async functions.
- CancellationTokens: Your functions can take a CancellationToken parameter, through which they can receive a cancellation request from the host.
- NameResolver: The SDK has an extensibility layer which allows you to specify the source for queue names or blob names. For example you can use this feature to pick up queue names from a config file. Look at this sample.
- WebJobs Shutdown notification: WebJobs has a graceful shutdown notification feature which raises a notification when the WebJob is stopping.
Dashboard for monitoring WebJobs
As WebJobs execute, you can monitor them in real time. You can see their state (Running, Stopped, Successfully completed), last run time and the logs of a particular execution. The following screenshot shows you a view of all WebJobs running in your Website. When you write a WebJob using the SDK, you get a diagnostics and monitoring experience for the functions in your program. For example, let’s say that you have an image processing WebJob called “ImageResizeAndWaterMark” that has the following flow. When a user uploads an image to a Blob container called “images-input”, the SDK will trigger the Resize function. Resize will process the image and write to the “images2-output” container, which will trigger the WaterMark function. The WaterMark function will watermark the image and write it to the “images3-output” Blob container. The following code shows the WebJob described above.
public class ImageProcessing
{
    public static void Resize(
        [BlobTrigger(@"images-input/{name}")] WebImage input,
        [Blob(@"images2-output/{name}")] out WebImage output)
    {
        var width = 80;
        var height = 80;
        output = input.Resize(width, height);
    }

    public static void WaterMark(
        [BlobTrigger(@"images2-output/{name}")] WebImage input,
        [Blob(@"image3-output/{name}")] out WebImage output)
    {
        output = input.AddTextWatermark("WebJobs", fontSize: 6);
    }
}

public class WebImageBinder : ICloudBlobStreamBinder<WebImage>
{
    public Task<WebImage> ReadFromStreamAsync(Stream input,
        System.Threading.CancellationToken cancellationToken)
    {
        return Task.FromResult<WebImage>(new WebImage(input));
    }

    public Task WriteToStreamAsync(WebImage value, Stream output,
        System.Threading.CancellationToken cancellationToken)
    {
        var bytes = value.GetBytes();
        return output.WriteAsync(bytes, 0, bytes.Length);
    }
}

When you run the WebJob in Azure, you can view the WebJobs Dashboard by clicking the logs link of the “ImageResizeAndWaterMark” WebJob in the WEBJOBS tab of the Windows Azure Websites portal.
Function execution details
When you are monitoring a particular execution of this “ImageResizeAndWaterMark” WebJob, you can view invocation details about the functions in the program such as:
- What are the parameters of this function?
- How much time did it take for the function to execute?
- How much time did it take to read from the Blob, and how many bytes were read/written?
Invoke & Replay
In the above example, if the WaterMark function fails for some reason, you can upload a new image and Replay the WaterMark function, which will trigger the execution chain and call the Resize function as well. This is useful to diagnose and debug an issue when you have a complicated graph for chaining functions together. You can also Invoke a function from the dashboard.
Causality of functions
In the above example, we know that when the Resize function writes to a Blob, it will trigger the WaterMark function. When you search for a Blob written by this WebJob, you can see that the Blob was written because the WaterMark function got executed. This is useful to diagnose and debug an issue when you have a complicated graph for chaining functions together. For more details on Search Blobs see this post.
Samples
Samples for WebJobs SDK
/* . */

/* Data type definitions and declarations for hardlink management. */

/* This file should be #included in CVS source files after cvs.h since
   it relies on types and macros defined there. */

/* The `checked_out' member of a hardlink_info struct is used only when
   files are being checked out or updated.  It is used only when
   hardlinked files are being checked out. */

#ifdef PRESERVE_PERMISSIONS_SUPPORT
struct hardlink_info
{
    Ctype status;		/* as returned from Classify_File() */
    int checked_out;		/* has this file been checked out lately? */
};

extern List *hardlist;
extern char *working_dir;

Node *lookup_file_by_inode (const char *);
void update_hardlink_info (const char *);
List *list_linked_files_on_disk (char *);
int compare_linkage_lists (List *, List *);
int find_checkedout_proc (Node *, void *);
#endif /* PRESERVE_PERMISSIONS_SUPPORT */
moonmoon
This is a small python package that simply gets an image of a given date's moon phase. It uses numpy, opencv and Ernie Wright's moon visualizations from the Dial-a-Moon project at Nasa Visualization Studio.
This package doesn't do much other than download an image from a predefined source, but I hope it's useful for someone else out there.
At time of publishing, this package can access any of the moon visualizations from 2011-2020. I suppose the way it's set up now, it'll need an update before the end of 2020, and perhaps I'll find a better way to set it up by then.
Installation
To install this package, just run
pip install moon
Usage
Currently, this package will return a numpy array representing the lunar phase, as well as some json of the lunar stats from the Dial-a-Moon Nasa site. This array is usable as an image using openCV, or can be saved to disk as a .jpg file.
You can test it out using terminal:
from moon.terminal_ui import TerminalUi

ui = TerminalUi()
ui.set_moon_image()  # defaults to today's date
ui.show()
You can alternately test it out using Jupyter notebooks:
from moon.jupyter_ui import JupyterUi

ui = JupyterUi()
ui.set_moon_image()  # defaults to today's date
ui.show()
To just use it in a project, you can use it like this:
from moon.dialamoon import Moon

moon = Moon()
moon.set_moon_phase()
and access the image array itself with
moon.image
At the moment, there isn't any built-in method for converting the numpy array to a Pillow image. I had this set up with Pillow originally, but it fell by the wayside when I moved over to playing around with OpenCV.
Updates
Please feel free to post bugs, suggestions and feature requests on this repo. Through some trial and error, I think I finally have the package as simple as possible, but I'm open to evolution. This will be my first time creating and maintaining a python package, and I am receptive to any tips or PRs as far as best-practices go.
Resources:
moon ascii art courtesy of jsg
_..._ _..._ _..._ _..._ _..._ .:::::::. .::::. `. .:::: `. .::' `. .' `. ::::::::::: :::::::. : :::::: : ::: : : : ::::::::::: :::::::: : :::::: : ::: : : : `:::::::::' `::::::' .' `::::: .' `::. .' `. .' `':::'' `'::'-' `'::.-' `':..-' `-...-' _..._ _..._ _..._ _..._ _..._ .' `. .' `::. .' ::::. .' .::::. .:::::::. : : : ::: : :::::: : :::::::: ::::::::::: : : : ::: : :::::: : :::::::: ::::::::::: `. .' `. .::' `. :::::' `. '::::::' `:::::::::' `-...-' `-..:'' `-.::'' `-.::'' `':::'' | https://libraries.io/pypi/moon | CC-MAIN-2020-45 | refinedweb | 384 | 72.36 |
Here is the code:
[code:2pyryi1c]
#include <cstdlib>
#include <fmod.hpp>
#include <stdio.h>
//#include <iostream>
using namespace std;
int main(int argc, char *argv[])
{
FMOD_RESULT result;
FMOD::System *system_;
result = FMOD::System_Create(&system_); // Create the main system object.
if (result != FMOD_OK)
{
printf("FMOD error! (%d) %s\n", result, FMOD_ErrorString(result));
exit(-1);
}
result = system_->init(100, FMOD_INIT_NORMAL, 0); // Initialize FMOD.
if (result != FMOD_OK)
{
printf("FMOD error! (%d) %s\n", result, FMOD_ErrorString(result));
exit(-1);
}
system("PAUSE"); return 0;
}
[/code:2pyryi1c]
And errors:
[code:2pyryi1c]
line 16 C:\CPP\FMOD Programmers API\api\lib\confmodtest.cpp `FMOD_ErrorString’ undeclared (first use this function)
[/code:2pyryi1c]
I tried linking with both libfmodexp.a and libfmodex.a. Both ways I got the same error. And in the documentation I can't even find "FMOD_ErrorString", only in example code.
- d4rksh4de asked 11 years ago
OK, I just forgot to include fmod_errors.h; I thought it was included in fmod.hpp already.
So now I get other errors and I guess it's because I use C++ instead of C. Is there any way to get it working on Dev-C++? I can't/don't want to use C.
- d4rksh4de answered 11 years ago | https://www.fmod.org/questions/question/forum-15727/ | CC-MAIN-2017-13 | refinedweb | 206 | 61.73 |
Opened 8 years ago
Closed 8 years ago
#9710 closed (worksforme)
Models translations in applications does not work
Description
There is a project and an application with a model like this:
from django.utils.translation import gettext_lazy as _

class MODEL(models.Model):
    name = models.TextField(_("NAME"))
When creating the *.po file by running manage.py makemessages in the application directory, then translating and compiling the messages, it does not work in the admin panel. It behaves like there isn't any translation.
All works when I create translations in the project directory, then everything is OK without any changes in the settings.py file or anything else.
I can't have translations for the whole project as I need to have some applications in many projects so I want to translate the models once.
I can't reproduce this.
Make sure your .po and .mo files are located in locale/xx_YY/LC_MESSAGES/ under your application directory as explained at and that the language you are translating your app to is one of the languages Django has been translated to as explained at (or use the workaround described there).
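For a language code xx_YY (here de is used as an example), the expected layout under the application directory is:

```
myapp/
    models.py
    locale/
        de/
            LC_MESSAGES/
                django.po
                django.mo
```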
Closing as worksforme. | https://code.djangoproject.com/ticket/9710 | CC-MAIN-2017-04 | refinedweb | 190 | 67.25 |
You already saw a simplified way to generate a 3D landscape with help of an .X model in the previous chapter. There you created the diffuse texture first and then created a normal map from that and finally you added a simple height map to generate the canyon for the XNA Shooter game.
For this game you are still going to use a very simplified approach to render the landscape because the development time was so short and developing a highly sophisticated landscape rendering engine alone can take months. The landscape engine in Arena Wars, for example, supports very big landscapes, but it was never used in the game, and it was a lot of work to get it optimized on the many available hardware configurations, especially because 24 different ground texture types are supported there.
To keep things simple in the racing game only one very big ground texture is used, but you also have a normal map for it and an additional detail texture to add a little detail in case you look very closely at the ground, which can happen in the game if you stay still and are close to the landscape ground. Originally I created a 4096×4096 texture, but it was very hard to constantly update a texture this big and have to manage a normal map for it too. Uncompressed this texture is about 64 MB just for the diffuse texture, and another 64 MB for the normal map. Now imagine you have 10 or more layers of each of these textures when creating them and a computer with only about 1 GB of RAM. The story does not end nicely. Even compressed as a DXT5 texture the 4096×4096 normal map was still about 12 MB and the loading process alone was too annoying when I was developing the landscape class with the help of the unit test in there. What finally stopped me from using 4096×4096 textures ever again in XNA was the content pipeline, which almost takes forever to convert textures this big into the .xnb content files.
I went down to 2048×2048 textures, which look almost as good as the 4096×4096 ones, but only take up 1/4 of the space and loading time. The normal map was even reduced to 1024×1024 a little later because it did not make much of a visual difference. Another reason to not use that big of a texture size is the Xbox 360, which has a limited amount of video memory (64 MB) and you should not load too much stuff or too big textures, or the performance will go down. Instead of creating a super-sized texture I added the detail texture to make the landscape look better when zooming in. The rendering texture to accomplish this is discussed in the next pages.
First you need a height map and an idea of how big your landscape will be. I initially wanted each landscape texel (1 pixel of a texture) to be 1 meter (3.3 feet) and by having a 4096×4096 texture the whole landscape would be 4x4 km big (about 2.5×2.5 miles). Even after the texture was reduced to 2048×2048 the landscape size stayed the same; now each texel is 2×2 meters.
So where do you get this many height values for your landscape? Just painting them yourself will probably not look very good and you don’t have a good program for that and certainly not enough time to write a custom landscape height-map editor. A good idea is to search for height maps on the Internet and try to use existing height-map data from one of the many sources that provide it (geological organizations, NASA, which even has height-map data for other planets, and so on).
For the racing game I tried to get some mountains and I mixed them in together. It was good enough for testing, but later I needed mountains around the landscape and I had to modify the landscape height map a bit to create a big mountain in the middle and mountains around the border of the landscape. The final height map result can be seen in Figure 12-5. Please note that the LandscapeGridHeights.png height map that is used for the game in the Landscapes directory has a size of only 257×257 pixels because the grid that is generated in the Landscape class only generates 257×257 vertices (which are already 66,049 vertices) for the 256*256*2 polygons (which are around 130,000 polygons). Having more polygons would slow down rendering too much, but handling 130,000 polygons is not a big problem for today’s graphic cards and the Xbox 360 handles it fine too (still several hundred frames per second).
Figure 12-5
You may notice flat areas on the height map, which are used for city areas where it is simpler to have all the buildings and objects generated at the same height as well as the palms at the side of the road. The white area in the middle is the big mountain and you can see the gray to white areas around the border of the map, which show the mountains at the border to hide the fact that you have nothing behind them.
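Mapping the height map onto the 257×257 vertex grid described above amounts to something like the following sketch (simplified and hypothetical, not the book's actual Landscape class code; heightValues stands in for the data loaded from LandscapeGridHeights.png, and the real code also builds normals, tangents and texture coordinates):

```csharp
// Simplified sketch: one vertex per height-map sample, 2 meters apart.
const int GridSize = 257;       // 257x257 vertices -> 256x256 quads
const float MapScale = 2.0f;    // each texel covers 2x2 meters

Vector3[] positions = new Vector3[GridSize * GridSize];
for (int y = 0; y < GridSize; y++)
    for (int x = 0; x < GridSize; x++)
    {
        float height = heightValues[x + y * GridSize];
        positions[x + y * GridSize] =
            new Vector3(x * MapScale, y * MapScale, height);
    }
// 256 * 256 quads * 2 triangles each = 131,072 polygons for the index buffer.
```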
This height map is now used together with a little bumpy grain texture to generate the normal map. Additionally you could also mix in the diffuse map as you did in the last chapter, but I changed the diffuse map so often for this landscape that the normal map is not really affected anymore by the diffuse texture. Figure 12-6 shows the used diffuse and normal maps for the landscape in the game.
Figure 12-6
Please note that I changed these textures a lot until I got them this way and I’m not completely pleased with them, but you have to stop sometime, especially if you don’t have more time to improve the landscape. The normal map, for example, looks really great from a far distance, but more closely it doesn’t have many variations and it could maybe be improved to fit better to the diffuse texture. Anyway, it looks fine in the game and I have not heard any complaints yet.
Finally, a detail map is also added for when you look closely at the landscape. You will not notice the detail map after a while, but Figure 12-7 shows the difference quite convincingly between using a detail map and not using it. You will never go without a detail texture ever again if you have such a big landscape and allow zooming in like you do for the racing game.
Figure 12-7
If you open up the Racing Game project you can see a lot of the classes from the previous chapters, but also two new namespaces, which are discussed in this chapter: Landscapes and Tracks. There is just one class in the Landscapes namespace called Landscape (see Figure 12-8), which is responsible for rendering the landscape, all objects on it, all the tracks and track objects, and basically everything in the game except your own car. In the Mission class you are just calling the Render method of this class to perform all the rendering. For shadow mapping several helper methods are available. More details about the shadow mapping are discussed in Chapter 14.
Figure 12-8
All landscape objects are generated at the creation time of the Landscape class, especially in the constructors of the Track classes, which are used inside the Landscape class. You will take a quick look at some of the used 3D models in a second.
The first thing you have to do to be able to render your landscape is to generate it first in the constructor. Before you take a look at that you should check out the TestRenderLandscape unit test of the Landscape class in the same way the unit test was written before the class was implemented. You may also notice the other unit test, GenerateLandscapeHeightFile, which generates a level file for the landscape height map the same way you had to generate a special content file for Rocket Commander XNA levels because loading bitmap data is not possible on the Xbox 360.
/// <summary>
/// Test render landscape
/// </summary>
public static void TestRenderLandscape()
{
  TestGame.Start("TestRenderLandscape",
    delegate
    {
      RacingGame.LoadLevel(RacingGame.Level.Beginner);
      RacingGame.Landscape.SetCarToStartPosition();
    },
    delegate
    {
      if (BaseGame.AllowShadowMapping)
      {
        // Generate shadows
        ShaderEffect.shadowMapping.GenerateShadows(
          delegate
          {
            RacingGame.Landscape.GenerateShadow();
            RacingGame.CarModel.GenerateShadow(
              RacingGame.Player.CarRenderMatrix);
          });
        // Render shadows
        ShaderEffect.shadowMapping.RenderShadows(
          delegate
          {
            RacingGame.Landscape.UseShadow();
            RacingGame.CarModel.UseShadow(
              RacingGame.Player.CarRenderMatrix);
          });
      } // if (BaseGame.AllowShadowMapping)

      BaseGame.UI.PostScreenGlowShader.Start();
      BaseGame.UI.RenderGameBackground();

      RacingGame.Landscape.Render();
      RacingGame.CarModel.RenderCar(0, Color.Goldenrod,
        RacingGame.Player.CarRenderMatrix);

      // And flush render manager to draw all objects
      BaseGame.MeshRenderManager.Render();

      if (BaseGame.AllowShadowMapping)
        ShaderEffect.shadowMapping.ShowShadows();

      BaseGame.UI.PostScreenGlowShader.Show();

      TestGame.UI.WriteText(2, 50, "Number of objects: " +
        RacingGame.Landscape.landscapeObjects.Count);
    });
} // TestRenderLandscape()
The unit test does a lot of things; it even shows the car and all landscape objects. Even the track and all the shadow mapping and post-screen shaders are tested here to make sure they work fine together with the landscape. If you just want to test the landscape itself, just call Render of the Landscape class; that’s already enough to test the landscape rendering.
The LoadLevel method of the RacingGame class, which is actually the main class for this game, loads one of the levels. All levels currently use the same landscape, which means you don’t have to reload it. But you should check out the code for generating the landscape vertices anyway. The landscape constructor does the following things:
Loads the map height data from the level file and builds tangent vertices with it
Generates and smoothes normals for the whole landscape, and also regenerates the tangents with the new normals.
Sets the vertex buffer with these vertices
Calculates the index buffer (very similar to the landscape triangles you saw in the last chapter)
Sets the index buffer for the landscape rendering
Loads and generates all track data for the current level, including all landscape objects
And finally, adds additional objects like the city ground planes to give the city objects a better looking ground texture.
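The index-buffer step in the list above can be sketched outside of C# as well. Here is a small Python illustration (not code from the book) of how a triangle-list index buffer for a regular vertex grid is typically built; the exact winding order used by the racing game may differ:

```python
def grid_indices(width, height):
    """Build a triangle-list index buffer for a (width x height) vertex grid."""
    indices = []
    for y in range(height - 1):
        for x in range(width - 1):
            top_left = x + y * width
            top_right = top_left + 1
            bottom_left = top_left + width
            bottom_right = bottom_left + 1
            # two triangles per grid cell
            indices += [top_left, bottom_left, top_right]
            indices += [top_right, bottom_left, bottom_right]
    return indices

idx = grid_indices(257, 257)
print(len(idx) // 3)  # (257-1)*(257-1)*2 = 131072 triangles
```

For a 257x257 height map this yields 131,072 triangles, which matches the roughly 130,000 polygons the landscape render call submits.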
The most important part of the constructor is the tangent vertex generation from the height-map data, which goes through all 257×257 points of the height map and generates vertices for you:
// Build our tangent vertices
for (int x = 0; x < GridWidth; x++)
  for (int y = 0; y < GridHeight; y++)
  {
    // Step 1: Calculate position
    int index = x + y * GridWidth;
    Vector3 pos = CalcLandscapePos(x, y, heights);
    mapHeights[x, y] = pos.Z;
    vertices[index].pos = pos;

    // Step 2: Calculate all edge vectors (for normals and tangents)
    // This involves quite complicated optimizations and mathematics,
    // hard to explain with just a comment. Read my book :D
    Vector3 edge1 = pos - CalcLandscapePos(x, y + 1, heights);
    Vector3 edge2 = pos - CalcLandscapePos(x + 1, y, heights);
    Vector3 edge3 = pos - CalcLandscapePos(x - 1, y + 1, heights);
    Vector3 edge4 = pos - CalcLandscapePos(x + 1, y + 1, heights);
    Vector3 edge5 = pos - CalcLandscapePos(x - 1, y - 1, heights);

    // Step 3: Calculate normal based on the edges (interpolate
    // from 3 cross products we build from our edges).
    vertices[index].normal = Vector3.Normalize(
      Vector3.Cross(edge2, edge1) +
      Vector3.Cross(edge4, edge3) +
      Vector3.Cross(edge3, edge5));

    // Step 4: Set tangent data, just use edge1
    vertices[index].tangent = Vector3.Normalize(edge1);

    // Step 5: Set texture coordinates, use full 0.0f to 1.0f range!
    vertices[index].uv = new Vector2(
      y / (float)(GridHeight - 1),
      x / (float)(GridWidth - 1));
  } // for for (int)
You can see that this code generates the vertices in five steps. First, the position vector is calculated. Then all edge vectors are calculated; they are used both for constructing the normal from three cross products and for assigning the tangent. Finally, the texture coordinates are assigned, with x and y flipped to make the xy rendering easier later while keeping the texture map aligned the same way it looks as a bitmap. The vertices list is already allocated at its full size when it is defined because only 257x257 height-map grids are supported. The CalcLandscapePos helper method that is used here is quite simple and just builds a position vector from the height-map data:
private Vector3 CalcLandscapePos(int x, int y, byte[] heights)
{
  // Make sure we stay on the valid map data
  int mapX = x < 0 ? 0 : x >= GridWidth ? GridWidth - 1 : x;
  int mapY = y < 0 ? 0 : y >= GridHeight ? GridHeight - 1 : y;
  float heightPercent = heights[mapX + mapY * GridWidth] / 255.0f;

  return new Vector3(
    x * MapWidthFactor,
    y * MapHeightFactor,
    heightPercent * MapZScale);
} // CalcLandscapePos(x, y, texData)
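To make the three-cross-product normal construction concrete, here is a small Python sketch (not from the book) that applies the same recipe to a tiny height map. The grid size and scale constants are made-up stand-ins for the book's GridWidth/MapWidthFactor values, and only the height lookup is clamped, mirroring CalcLandscapePos:

```python
from math import sqrt

# Hypothetical stand-ins for the book's constants.
GRID = 5          # tiny 5x5 height map for illustration
XY_SCALE = 1.0    # world units per grid step
Z_SCALE = 10.0    # world units for a height value of 1.0

heights = [[0.0] * GRID for _ in range(GRID)]  # flat terrain

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def normalize(v):
    length = sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)

def landscape_pos(x, y):
    # Clamp the height lookup only, like CalcLandscapePos does.
    cx = min(max(x, 0), GRID - 1)
    cy = min(max(y, 0), GRID - 1)
    return (x * XY_SCALE, y * XY_SCALE, heights[cy][cx] * Z_SCALE)

def vertex_normal(x, y):
    p = landscape_pos(x, y)
    e1 = sub(p, landscape_pos(x, y + 1))
    e2 = sub(p, landscape_pos(x + 1, y))
    e3 = sub(p, landscape_pos(x - 1, y + 1))
    e4 = sub(p, landscape_pos(x + 1, y + 1))
    e5 = sub(p, landscape_pos(x - 1, y - 1))
    # Same three cross products as the C# code, then normalize the sum.
    return normalize(add(add(cross(e2, e1), cross(e4, e3)), cross(e3, e5)))

print(vertex_normal(2, 2))  # flat map -> normal points straight up
```

On a flat height map every vertex normal comes out as (0, 0, 1), which is a quick sanity check that the cross products are oriented consistently.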
With all the vertices and indices generated now you can finally render the landscape with the help of the LandscapeNormalMapping.fx shader. You will not believe how easy that is now. The following lines of code will render 130,000 polygons with the LandscapeNormalMapping shader using the landscape diffuse map, the normal map, and the additional detail map:
// Render landscape (pretty easy with all the data we got here)
ShaderEffect.landscapeNormalMapping.Render(
  mat, "DiffuseWithDetail20",
  delegate
  {
    BaseGame.Device.VertexDeclaration =
      TangentVertex.VertexDeclaration;
    BaseGame.Device.Vertices[0].SetSource(vertexBuffer, 0,
      TangentVertex.SizeInBytes);
    BaseGame.Device.Indices = indexBuffer;
    BaseGame.Device.DrawIndexedPrimitives(
      PrimitiveType.TriangleList,
      0, 0, GridWidth * GridHeight,
      0, (GridWidth - 1) * (GridHeight - 1) * 2);
  });
The ShaderEffect class from Chapter 7 allows you to render the landscape material with the specified technique using the RenderDelegate code for rendering.
The Render method of the Landscape class also renders the track and all landscape objects. The track rendering is discussed for the rest of this chapter. We will not talk about all the landscape models used in the game because there are so many. Check out the TestRenderModels unit test of the Model class to view them all; a quick overview is shown in Figure 12-9.
Figure 12-9
Handling landscape engines is not easy. Even if you have developed a great-looking landscape engine that supports many different shaders and texture sets, you might find yourself worrying about performance for a while. On the other hand, if you already have a well-performing landscape engine like the one from the racing game or the shoot-'em-up game from the last chapter, you might want to improve its visual quality without affecting performance too much. It is a tricky challenge getting a landscape engine just right for your game. As I told you earlier, in the past I tried to create a more complex landscape and graphics engine than was actually required for a game like Arena Wars. It was possible to create huge landscapes 100 times as big as the ones that were finally used, but after a while it did not make sense to keep optimizing for these cases when the game never utilized those features.
Instead you should focus on what the game is doing. Take the racing game as an example. It is much easier to have one fixed landscape, which is always the same. Testing is easier and you can reuse existing parts of the landscape and the locations of many objects for other levels too without having to redesign everything. The landscape is also big enough (4096×4096 meters) for many different tracks because the first level only shows you a small part of it (maybe 20%–30%).
There are three disadvantages to a more-or-less fixed landscape rendering technique like the one used here and in the last chapter:
You can’t just change the size of the landscape that easily; it will involve a lot of work to make it smaller or bigger. If you are already using 4096×4096 textures you probably cannot increase the resolution any further to improve the texture quality, and if your game camera gets very close to the ground it may still look very blurry even with extra detail textures.
Changing the diffuse texture map is really hard. You not only have to mix all different texture types together yourself, but it is also hard to see the result and it involves a lot of testing. To make things worse the bigger the mega-texture gets, the slower the development time gets.
The landscape engine is just not powerful enough to handle advanced effects like adding craters or extra textures (tracks, roads, leaves, riffs, and so on) to the ground and you can’t even change the visual appearance dynamically. This means the landscape will always look the same. If you wanted to build a level editor you would need a more dynamic solution that would allow you to change the texture types and a way to mix them together automatically.
The best way to get around these issues is to use a landscape rendering technique called splatting, which basically takes a bunch of textures and renders them based on a texture map that uses the same resolution as the height map onto the landscape. Because it would not look very good to have different ground textures sitting side by side, you need a smooth interpolation between them. You could either fade from tile to tile, but that would still look too harsh, or you could save percentage values for each texture type.
The landscape is rendered separately for all ground texture types and you should make the lowest one completely opaque to make sure you can never look through the landscape. Then the other ground types are alpha blended on top of the previous ones until you are done with all of them (see Figure 12-10).
Figure 12-10
You can also use an alpha texture for blending or just blend softly with color values (see the example in Figure 12-10). Because you will use shaders in XNA you might also want to combine four (pixel shader 1.1) or even eight (pixel shader 2.0) textures together into one shader pass to optimize performance. If your landscape engine needs to handle more textures, however, you will need multiple passes until everything is rendered. Sometimes it can even be faster not to render all textures on top of each other, but just the ones that are visible. It becomes harder when you have to generate the vertex and index buffers, but if you really have a lot of textures, like 20 or 30 different ground types, the performance will greatly increase because each ground texture type is used only 10% of the time or less and it does not make sense to render it all the time.
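As a tiny illustration of the blending math behind splatting (this is not racing-game code, since the game uses one pre-rendered diffuse map), the sketch below blends two hypothetical ground colors per point using a percentage weight map. This is the per-texel equivalent of rendering the opaque base layer first and alpha-blending the next layer on top:

```python
# Two made-up ground "textures", reduced to one RGB color each for brevity.
grass = (60, 140, 50)
rock = (120, 115, 110)

# Weight map: how much "rock" shows through at each point (0.0 - 1.0),
# stored at the same resolution as the height map.
rock_weight = [
    [0.0, 0.25],
    [0.5, 1.0],
]

def blend(base, top, alpha):
    """Standard alpha blend: result = base * (1 - alpha) + top * alpha."""
    return tuple(round(b * (1.0 - alpha) + t * alpha)
                 for b, t in zip(base, top))

splatted = [[blend(grass, rock, w) for w in row] for row in rock_weight]
print(splatted[0][0])  # pure grass corner
print(splatted[1][1])  # pure rock corner
```

With weight 0.0 the base texture shows through unchanged and with weight 1.0 the top layer fully replaces it, which is exactly why the lowest layer must be opaque: anywhere the upper weights sum to less than one, the base still fills the gap.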
Whichever technique you choose, I strongly suggest you start with something simple like just rendering a big landscape with one texture type tiled over it and then improve it later when your game is more than just a landscape engine. You might also want to make sure your landscape can always be rendered at several hundred frames to make sure the game still runs fast enough after you add all the landscape models, 3D units, and effects plus the game logic into it.
There are many more tips and tricks about creating good landscapes. You could implement precalculated lighting and huge landscape shadows; there are also many opportunities for great looking shaders, especially if you have water involved. Grass could be done with fur shaders. The 3D appearance of rocks and cliffs could be enhanced with parallax or even offset mapping. If you are interested in these topics please pick up a good shader book like the Game Programming Gems or ShaderX, or search the Internet for many tips and tricks as well as tutorials.
Figure 12-11 shows the final appearance of the landscape in the racing game after also applying the post-screen shaders, the in-game HUD, and the sky cube mapping (see Chapters 5 and 6 for more information about these kinds of shaders if you have forgotten about them).
Figure 12-11 | https://flylib.com/books/en/4.126.1.92/1/ | CC-MAIN-2021-25 | refinedweb | 3,322 | 57.1 |
{-# LANGUAGE BangPatterns, CPP #-}
{-# OPTIONS_GHC -funbox-strict-fields #-}
{-|
Module      : Data.Atom.UF
Copyright   : (c) Thomas Schilling 2010
License     : BSD-style

Maintainer  : nominolo@gmail.com
Stability   : experimental
Portability : portable

efe's message to Erlang's eeps mailing list <>, which in turn was
inspired by the Logix implementation of Flat Concurrent Prolog.
-}
module Data.Atom.UF
  ( -- * Symbols
    --
    Symbol, intern, internInto, SymTab(..), symbolHash
    -- * Implementation
    --
    -- $impl
    --
  ) where

import Data.Word ( Word64 )
import Data.IORef
import System.IO.Unsafe
import Control.Monad ( unless )
import Control.DeepSeq

#ifndef NDEBUG
-- For testing:
import Control.Monad ( liftM2 )
import System.Mem.Weak
import System.Mem
import Data.Digest.Murmur64
import Data.Maybe ( isJust )
#endif

-- -------------------------------------------------------------------
-- Public API:

-- |)
-- @
data Symbol a = Symbol
    {-# UNPACK #-} !Word64                  -- hash
    {-# UNPACK #-} !(IORef (SymbolInfo a))

-- | Returns the hash of the symbol.
symbolHash :: Symbol a -> Word64
symbolHash (Symbol h _) = h

instance Ord a => Eq (Symbol a) where
  x == y = cmpSymbol x y == EQ

instance Ord a => Ord (Symbol a) where
  compare = cmpSymbol

instance Show a => Show (Symbol a) where
  show = show . symInfo

-- | Create a new local symbol.  For best performance use
-- 'internInto' together with a symbol table / map.
intern :: (a -> Word64) -> a -> Symbol a

class SymTab s where
  lookupSymbol :: s a -> a -> Maybe (Symbol a)
  insertSymbol :: a -> (Symbol a) -> s a -> s a

-- | Insert a symbol into an existing table.
internInto :: (SymTab s) => (a -> Word64) -> s a -> a -> (s a, Symbol a)

-- -------------------------------------------------------------------
-- Internals

newtype SymbolInfo a = SymInfo (IORef (Link a))

type Link a = Either a (SymbolInfo a)

internInto hash_fn st str =
  case lookupSymbol st str of
    Just sym -> (st, sym)
    _ ->
      let sym = intern hash_fn str in
      (insertSymbol str sym st, sym)

intern hash_fn s = unsafePerformIO $ do
  info <- newIORef (Left s)
  info' <- newIORef (SymInfo info)
  return (Symbol (hash_fn s) info')

cmpSymbol :: Ord a => Symbol a -> Symbol a -> Ordering
cmpSymbol (Symbol h1 i1) (Symbol h2 i2)
  | i1 == i2  = EQ
  | otherwise =
      case h1 `compare` h2 of
        EQ -> uncommon_case  -- not identical, but same hash
        ans -> ans
 where
   {-# NOINLINE uncommon_case #-}
   uncommon_case = unsafePerformIO $ do
     -- get representative element (performs path shortening)
     (rep1@(SymInfo rr1), s1) <- repr' i1
     (rep2@(SymInfo rr2), s2) <- repr' i2
     if rep1 === rep2
       then return EQ
       else case s1 `compare` s2 of
              EQ -> do
                -- they should be equal!
                writeIORef rr2 (Right rep1)
                writeIORef i2 rep1
                return EQ
              ans -> return ans

-- We abuse the fact that IORefs give us an identity (i.e., observable
-- sharing) and that we need the IORef anyway.
sameSym :: SymbolInfo a -> SymbolInfo a -> Bool
sameSym (SymInfo r1) (SymInfo r2) = r1 == r2

(===) = sameSym

symInfo :: Symbol a -> a
symInfo (Symbol _ r) = unsafePerformIO $ do
  fmap snd (repr' r)

repr' :: IORef (SymbolInfo a) -> IO (SymbolInfo a, a)
repr' r = do
  info <- readIORef r
  (root_info, str) <- go info
  unless (root_info === info) $
    writeIORef r root_info
  return (root_info, str)
 where
   go si@(SymInfo ir) = do
     i <- readIORef ir
     case i of
       Left str -> return (si, str)
       Right si' -> do
         (root_info, str) <- go si'
         unless (si' === root_info) $
           writeIORef ir (Right root_info)  -- is Left possible here?
         return (root_info, str)

----------------------------------------------------------
-- Tests

#ifndef NDEBUG
-- requires import Data.Digest.Murmur32
test1 = do
  let h = asWord64 . hash64
      s1@(Symbol _ r1) = intern h "foo"
      s2@(Symbol _ r2) = intern h "))
#endif

{- $doc1 test 1 -}

{- $impl:

1. If two symbols have the same reference, they are equal.  (The 'Eq'
   instance on 'IORef's implements observable sharing.)

2.).

-}
Note: PdfMiner3K is out and uses a nearly identical API to this one. Fully working code examples are available from my Github account with Python 3 examples at CrawlerAids3 and Python 2 at CrawlerAids (both currently developed)
In my previous post on pdfMiner, I wrote on how to extract information from a pdf. For completeness, I will discuss how PyPDF2 and reportlab can be used to write a pdf and manipulate an existing pdf. I am learning as I go here. This is some low hanging fruit meant to provide a fuller picture. Also, I am quite busy.
PyPDF and reportlab do not offer the completeness in extraction that pdfMiner offers. However, they offer a way of writing to existing pdfs and reportlab allows for document creation. For Java, try PDFBox.
However, PyPdf is becoming extinct and PyPDF2 has broken pages on its website. The packages are still available from pip, easy_install, and from GitHub. The mixture of reportlab and PyPdf is a bit bizarre.
PyPdf, unlike pdfMiner, is well documented. The author of the original PyPdf also wrote an in depth review with code samples. If you are looking for an in depth manual for use of the tool, it is best to start there.
Report Lab Documentation
Report lab documentation is available to build from the bitbucket repositories.
Installing PyPdf and ReportLab
Pypdf2 and reportlab are easy to install. Additionally, PyPDF2 can be installed from the python package site and reportlab can be cloned.
easy_install pypdf2
pip install pypdf2
easy_install reportlab
pip install reportlab
ReportLab Initialization
The necessary part of reportlab is the canvas object. Reportlab offers several page sizes, including letter, legal, and portrait. The canvas object is instantiated with a string and a size.
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import portrait
PyPdf2 Initialization
PyPdf2 has a relatively simple setup. A PdfFileWriter is initialized to add to the document, as opposed to the PdfFileReader, which reads from the document. The reader takes a file object as its parameter. The writer takes an output file at write time.
from PyPDF2 import PdfFileWriter, PdfFileReader

# a reader
reader = PdfFileReader(open("fpath", 'rb'))

# a writer
writer = PdfFileWriter()
outfp = open("outpath", 'wb')
writer.write(outfp)
All of this can be found under the review by the writer of the original pyPdf.
Working with PyPdf2 Pages and Objects
Before writing to a pdf, it is useful to know how to create the structure and add objects. With the PdfFileWriter, it is possible to use the following methods (an IDE or the documentation will give more depth).
- addBlankPage-create a new page
- addBookmark-add a bookmark to the pdf
- addLink- add a link in a specific rectangular area
- addMetaData- add meta data to the pdf
- insertPage-adds a page at a specific index
- insertBlankPage-insert a blank page at a specific index
- addNamedDestination-add a named destination object to the page
- addNamedDestinationObject-add a created named destination to the page
- encrypt-encrypt the pdf (setting use_128bit to True creates 128 bit encryption and False creates 40 bit encryption with a default of 128 bits)
- removeLinks-removes links by object
- removeText-removes text by text object
- setPageMode-set the page mode (e.g. /FullScreen,/UseOutlines,/UseThumbs,/UseNone
- setPageLayout-set the layout(e.g. /NoLayout,/SinglePage,/OneColumn,/TwoColumnLeft)
- getPage-get a page by index
- getLayout-get the layout
- getPageMode-get the page mode
- getOutlineRoot-get the root outline
ReportLab Objects
Report lab also contains a set of objects. Documentation can be found here. It appears that postscript or something similar is used for writing documents to a page in report lab. Using ghostscript, it is possible to learn postscript. Postscript is like assembler and involves manipulating a stack to create a page. It was developed at least in part by Adobe Systems, Inc. back in the 1980s and before my time on earth began.
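To see what that postscript-like page language actually looks like without any library at all, here is a pure-Python sketch (not from the original post, and it needs a modern Python 3 for bytes formatting) that hand-assembles a minimal one-page PDF. The content stream in object 4 uses the text operators (BT/Tf/Td/Tj/ET) that reportlab ultimately emits for you; the text is assumed not to contain parentheses, which would need escaping in a real writer:

```python
import io

def minimal_pdf(text):
    """Hand-assemble a one-page PDF to show the raw page-description language."""
    # The page's content stream: a small postscript-like program.
    content = ("BT /F1 24 Tf 72 712 Td (%s) Tj ET" % text).encode("latin-1")
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
        b"/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
        b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    ]
    out = io.BytesIO()
    out.write(b"%PDF-1.4\n")
    offsets = []
    for i, body in enumerate(objects, start=1):
        offsets.append(out.tell())          # remember byte offset for the xref
        out.write(b"%d 0 obj\n%s\nendobj\n" % (i, body))
    xref_pos = out.tell()
    out.write(b"xref\n0 %d\n" % (len(objects) + 1))
    out.write(b"0000000000 65535 f \n")     # mandatory free-list head entry
    for off in offsets:
        out.write(b"%010d 00000 n \n" % off)
    out.write(b"trailer\n<< /Size %d /Root 1 0 R >>\n" % (len(objects) + 1))
    out.write(b"startxref\n%d\n%%%%EOF\n" % xref_pos)
    return out.getvalue()

data = minimal_pdf("Hello World!")
print(data[:8])  # the PDF header bytes
```

Reportlab's canvas hides all of this bookkeeping (object numbering, the xref table, the trailer) and lets you call drawString instead of writing Tj operators by hand.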
Some canvas methods are:
- addFont-add a font object
- addOutlineEntry-add an outline type to the pdf
- addPostscriptCommand-add postscript to the document
- addPageLabel-add a page label to the document canvas
- arc-draw an arc in a postscript like manner
- beginText-create a text element
- bezier-create a postscript like bezier curve
- drawString-draw a string
- drawText-draw a text object
- drawPath-draw a postscript like path
- drawAlignedString-draw a string on a pivot character
- drawImage-draw an image
- ellipse-draw an ellipse on a bounding box
- circle-draw a circle
- rect-draw a rectangle
Write a String to a PDF
There are two things that dominate the way of writing pdf files: writing images and writing strings to the document. This is handled entirely in the canvas object.
Here, I have added some text and a circle to a pdf.
import StringIO

from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter

def writeString():
    fpath = "C:/Users/andy/Documents/temp.pdf"
    packet = StringIO.StringIO()
    cv = canvas.Canvas(packet, pagesize=letter)

    # create a string
    cv.drawString(0, 500, "Hello World!")

    # a circle. Do not add another string. This draws on a new page.
    cv.circle(50, 250, 20, stroke=1, fill=0)

    # save to string
    cv.save()

    # get back to 0
    packet.seek(0)

    # write to a file
    with open(fpath, 'wb') as fp:
        fp.write(packet.getvalue())
The output of the above code:
Page 1
Unfortunately, adding a new element occurs on a new page after calling the canvas’ save method. Luckily the “closure” of the pdf just creates a new page object. A much larger slice of documentation by reportlab goes over writing a document in more detail. The documentation includes alignment and other factors. Alignments are provided when adding an object to a page.
Manipulating a PDF
Manipulation can occur with PyPDF2, which allows for deletion of pages, insertion of pages, and creation of blank pages. The author of pyPDF goes over this in depth in his review.
This code repeats the previous pages twice in a new pdf. It is also possible to merge (overlay) pdf pages.
from PyPDF2 import PdfFileWriter, PdfFileReader

pdf1 = PdfFileReader(open("C:/Users/andy/Documents/temp.pdf", 'rb'))
pdf2 = PdfFileReader(open("C:/Users/andy/Documents/temp.pdf", 'rb'))
writer = PdfFileWriter()

# add the pages from both readers to the writer
for i in range(0, pdf1.getNumPages()):
    writer.addPage(pdf1.getPage(i))
for i in range(0, pdf2.getNumPages()):
    writer.addPage(pdf2.getPage(i))

# write to file
with open("destination.pdf", "wb") as outfp:
    writer.write(outfp)
Overall Feedback
Overall, PyPDF is useful for merging and changing existing documents in terms of the the way they look and reportlab is useful in creating documents from scratch. PyPDF deals mainly with the objects quickly and effectively and reportlab allows for in depth pdf creation. In combination, these tools rival others such as Java’s PdfBox and even exceed it in ways. However, pdfMiner is a better extraction tool.
7 thoughts on “Python PDF 2: Writing and Manipulating a PDF with PyPDF2 and ReportLab”
great article, i just wanted to ask one thing. which one is faster for writing huge amount of data, i know you said reportlab is good for writing from scratch. but i’m wondering about speed and efficiency. What im trying to do is writing thousands of datas from Django model
to pdf.
Thank you
Report lab is really the only way to write the data I know of as pypdf only reads them. Combined, they sort of make the Python version of PDFBox. However, for thousands of documents quickly, I would recommend java. Python processes are slow. With java you can place a PDFBox instance in separate Fork Join Pools and leverage the power of the entire processor. Python will always be somewhat slow.
Wow tnx for really quick reply, i’ll be following you from now on . Thanks
Well looks like i have to dig in to java 😀
Best Regards
Nice article!
I have one little question:
PDF Supports document versions. That means that the current document can be kept intact, and we can change the content and presentation of the document just adding info. That feature is specially useful to verify the look and integrity of the document in the past digital signatures.
As long as I have seen all the examples with pyPDF2 take content from one file and rebuild the content into another one. Can we do that without loosing or breaking the digital signatures for the last versions ?
Given your experience in the topic. Have you seen a way of raw copying of past versions and just build new document version reusing objects of the past ones ?
Or there is a way to directly modify a document ? I mean, load a document for read and write.
Thanks a lot
It has been a while since I looked at pyPDF as I have switched to PDF box and Scala finding what my still be a GIL a bit of a problem. As I recall, pdfs are built on coordinate systems. While you absolutely have to copy pdfs instead of edit them in pyPDF,as I understand it, there are tools like pdfminer3k which allow you to collect objects,sort them, find the correct place to add content, get all of the text and attributes, and modify the page that way. PyPdf mainly supports merging and rewriting, sorry. I have not found any way to stream unsupported objects and the like. I think you would need to do a compare with code you write in a tool like pdfminer3k as well. See and. Perhaps looking at the Java/Scala side would be of more use where pdf object streaming goes beyond page streaming into object streaming.. The closest solution in Python I could find on Stack Overflow is as follows but it too will recreate pages.. If you know a bit about bytes and pdf objects, perhaps a new library in Python is in order?
Logically, because of the fact that pdf works a lot like any other markdown despite being a special snowflake, I started digging a little more. Sadly, it seems my remarks below are true. If you don’t have the object streaming in bytes and don’t use a stronger library like PDFBox which I don’t even know at this point if it supports digital signatures, then something may get lost because of the need to copy and paste to a new pdf. It is sad this would happen since objects are marked in a standard way.
Thanks for the article!
Does PyPDF2 allow to insert numbers to existing pages of already existing PDF files?
I'm still trying to get DragonFly reliably working on VirtualBox (just because Qemu on my Ubuntu doesn't like to run with the accelerator kqemu).
- When I boot for the first time ("turn power on"), pressing a key in the boot menu (the "1. Boot DragonFly [default]" ... "7. Reboot" menu) will freeze the system.
- Or if I let the countdown timer pass, it will just execute for a very short while and then begin to spin in DELAY with delta=0 and ticks_left=2 [1].
The patch I am using is appended. I am thankful for any hints or further suggestions what the reason for this strange behaviour could be! Also note that the clock calibration returns a difference of more than 1%.
Matt, you suggested a while ago to use the APIC timer and get rid of the 8254 timer completely. If you could give me a starting point I'd like to try that out. At least I can now boot and successfully compile a kernel in VirtualBox, which is a big help in testing things out.
diff --git a/sys/platform/pc32/isa/clock.c b/sys/platform/pc32/isa/clock.c
index f02333d..170a240 100644
--- a/sys/platform/pc32/isa/clock.c
+++ b/sys/platform/pc32/isa/clock.c
@@ -152,6 +152,8 @@ static struct cputimer i8254_cputimer = {
     0, 0, 0
 };
 
+static int cold_delay_timer = 1;
+
 /*
  * timer0 clock interrupt. Timer0 is in one-shot mode and has stopped
  * counting as of this interrupt. We use timer1 in free-running mode (not
@@ -415,8 +417,18 @@ DODELAY(int n, int doswitch)
 #endif
         delta = tick - prev_tick;
         prev_tick = tick;
-        if (delta < 0)
+
+        if (delta <= 0) {
+            /* break delay loop during early boot as
+               the timer might not be correctly working */
+            if (cold_delay_timer) {
+                break;
+            } else {
+                kprintf("delta: %d, ticks_left: %d\n",
+                    delta, ticks_left);
+            }
             delta = 0;
+        }
         ticks_left -= delta;
         if (doswitch && ticks_left > 0)
             lwkt_switch();
@@ -600,6 +612,7 @@ fail:
 static void
 i8254_restore(void)
 {
+    kprintf("i8254_restore\n");
     timer0_state = ACQUIRED;
 
     clock_lock();
@@ -794,6 +807,11 @@ startrtclock(void)
 #endif
     }
 
+    /* Timer should now work correctly! */
+    cold_delay_timer = 0;
+    //if (bootverbose)
+    kprintf("cold_delay_timer -> 0\n");
+
 EVENTHANDLER_REGISTER(shutdown_post_sync, resettodr_on_shutdown,
              NULL, SHUTDOWN_PRI_LAST);
 #if !defined(SMP)
Paul Boddie wrote:
After chatting a little with people who've benchmarked cElementTree and
other toolkits on a variety of platforms, I think the general conclusion seems
to be that both libraries can parse most stuff in about the same number of
milliseconds. The main differences seems to come from 1) compilers, and
2) what Python version you're using (2.4 can be a lot faster).
My benchmarks all use "official" binary distributions, and I have no idea what
compilers the other developers have used. Nor has an ordinary user, of course.
If people want better results for their favourite toolkit, they should release better
binaries.
> I've used GNU time to report things like the elapsed, system and user times
> as well as measuring the elapsed time in Python, but I couldn't get the memory
> usage.
My test harness is basically:
import stuff
raw_input("check process size")
t0 = time.clock() # use time.time() on unix
parse(file)
t1 = time.clock() # see above
print t1 - t0
raw_input("check process size")
clean up
where the process size is checked in the usual way, and the "memory used by
the dom" is the difference between the two values.
To check for anomalies, I also run the above in a loop (minus the raw_input
calls), and watch how performance and memory use vary over time. Some
toolkits are extremely unstable, timewise (GC issues?). And I run the tests
several times over a day, to make sure the system load doesn't impact too
much.
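In the same spirit as the harness above, a minimal stdlib-only version of the "run it in a loop and watch the variance" idea might look like the sketch below. This is illustrative only: it uses the standard library's ElementTree on a small synthetic document, whereas the original benchmarks used cElementTree and large real-world files:

```python
import time
import xml.etree.ElementTree as ET

# Build a small synthetic document in memory so the example is self-contained.
xml_data = "<root>%s</root>" % "".join(
    "<item id='%d'>value %d</item>" % (i, i) for i in range(1000)
)

timings = []
for run in range(5):
    t0 = time.perf_counter()   # older Pythons used time.clock()/time.time()
    tree = ET.fromstring(xml_data)
    t1 = time.perf_counter()
    timings.append(t1 - t0)

print("runs:", ["%.4f" % t for t in timings])
print("min/max:", min(timings), max(timings))  # watch for GC-driven spikes
```

A toolkit whose timings drift upward or spike across iterations is showing exactly the kind of instability (garbage-collection pauses, memory growth) the loop is meant to expose.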
Benchmarking stuff is always hard, and when you're dealing with things that
take 0.0-0.2 seconds *and* consume lots of memory, it's even harder. When
comparing such benchmarks from different machines, you better use a rather
large fudge factor...
> Still, cElementTree looks like a very promising addition to the range of
> Python XML tools, especially given the uncomplicated installation process
> (compared to some of the other top performers, notably libxml2 and
> cDomlette)..
But sure, it's hard to beat libxml2 if you want both speed *and* support for
every XML standard you've ever heard of (and then some)...
</F>
_______________________________________________
XML-SIG maillist - XML-SIG <at> python.org | http://article.gmane.org/gmane.comp.python.xml/3170 | crawl-002 | refinedweb | 366 | 70.94 |
Adding Contacts: Part 2 (4:45) with Jason Seifer
Our address book functionality is getting there. In this video, we implement the ability to add contacts via the command line.
Code Samples
def add_contact
  contact = Contact.new
  print "First name: "
  contact.first_name = gets.chomp
  print "Middle name: "
  contact.middle_name = gets.chomp
  print "Last name: "
  contact.last_name = gets.chomp
  loop do
    puts "Add phone number or address? "
    puts "p: Add phone number"
    puts "a: Add address"
    puts "(Any other key to go back)"
    response = gets.chomp.downcase
    case response
    when 'p'
      phone = PhoneNumber.new
      print "Phone number kind (Home, Work, etc): "
      phone.kind = gets.chomp
      print "Number: "
      phone.number = gets.chomp
      contact.phone_numbers.push(phone)
    when 'a'
      address = Address.new
      print "Address Kind (Home, Work, etc): "
      address.kind = gets.chomp
      print "Address line 1: "
      address.street_1 = gets.chomp
      print "Address line 2: "
      address.street_2 = gets.chomp
      print "City: "
      address.city = gets.chomp
      print "State: "
      address.state = gets.chomp
      print "Postal Code: "
      address.postal_code = gets.chomp
      contact.addresses.push(address)
    else
      print "\n"
      break
    end
  end
  contacts.push(contact)
end
- 0:00
So, when we just left off, we had added the ability to add a contact
- 0:06
to append to the internal contacts array.
- 0:08
Now, what we're gonna do is add the ability to add addresses and
- 0:13
phone numbers to this contact when we add them.
- 0:17
So, we're gonna do that right here and we're gonna do this using another loop.
- 0:22
So we'll say loop do and end.
- 0:27
So now we'll just display another menu, so
- 0:31
when we add this contact we'll say add phone number or address.
- 0:38
And then we can use the letter p to add a phone number
- 0:44
and a to add an address.
- 0:52
And then we will ask for a response.
- 0:57
And make it lowercase.
- 1:02
And now we have our response case method.
- 1:08
Now what do we do when we have a response?
- 1:10
Well, when it's the letter p, we'll add a phone number,
- 1:14
when it's the letter a we'll add an address, and if it's anything else,
- 1:20
we'll go back which will bring us back into the main menu loop.
- 1:25
And let's go ahead and just add a little message for that,
- 1:28
that says, Any other key to go back.
- 1:34
Okay, and actually let's go ahead and print out a new line before we go back.
- 1:40
All right, so when the letter p happens, let's go ahead and add a phone number.
- 1:45
Now, the way that we do this is similar to how we just added a contact up here.
- 1:50
First we will initialize a new instance of the phone number class, and
- 1:56
then we'll just ask the user for input.
- 2:00
So it's the letter p.
- 2:01
We'll say a phone is a new phone number.
- 2:09
And then we'll print out,
- 2:12
what kind of phone number is it, Home, work etc.
- 2:21
And then the kind of phone number is standard input once again.
- 2:28
And now that we have the kind we'll ask them to input the number.
- 2:35
And then we can append
- 2:41
this phone number.
- 2:47
And that should bring us back to the menu with the option to add
- 2:51
another phone number or an address.
- 2:55
And so adding the address is going to follow the exact same pattern.
- 3:01
So I'm just gonna go ahead and paste that in here.
- 3:09
So we initialize a new address instance, ask what kind it is,
- 3:13
allow them to enter all of the address information.
- 3:17
I'm gonna append it to the internal addresses array.
- 3:22
At the end of all this, it will be pushed on to the contacts array.
- 3:28
So now, let's go ahead and run this and see how it's looking.
- 3:32
Go ahead and type in ruby Address_book.rb.
- 3:36
Let's go ahead and just print the address book.
- 3:40
Okay, there's no one in the contact list.
- 3:44
Oh, the choice is not one, it is a to add a contact,
- 3:47
Jason, middle name no middle name, last name Seifer.
- 3:51
Do I wanna add a phone number or an address?
- 3:53
Yeah, I think I'd like to add a phone number.
- 3:57
Home phone number, 123-456-7890.
- 4:02
That is my actual phone number.
- 4:03
Please do not call me.
- 4:05
Okay, and let me add an address too.
- 4:08
Add a home address, 123 Home Address.
- 4:15
And that's in Orlando, Florida.
- 4:20
- 4:22
Okay, that seems to be working.
- 4:24
Let's go back.
- 4:25
Let's go ahead and print the address book and see how it looks.
- 4:28
All right, got the contact list, with Jason Seifer, looks good.
- 4:38
So I can go ahead and press e to exit and next up we'll add the ability to search. | https://teamtreehouse.com/library/build-an-address-book-in-ruby/input-and-output-2/adding-contacts-part-2 | CC-MAIN-2018-17 | refinedweb | 934 | 84.68 |
A stack is a data collection in which the collected items are kept in reverse order; that is, the first item in the list is the last one added. The primary operations on a stack are "push," which puts a new item at the top of the stack, and "pop," which removes the item at the top. Because the last item in the stack will be the first one out, stacks are also known as Last-In-First-Out (LIFO) data structures. Stacks have many uses: calculators using the reverse Polish notation use a stack to hold values; many compilers use stacks as part of parsing the syntax of expressions and program blocks; and many virtual machines, such as Java Virtual Machines, are stack-oriented.
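In Python, for example, a plain list already behaves as a stack, which makes the push/pop behaviour described above easy to demonstrate (a generic sketch, not tied to any particular exercise):

```python
stack = []

# "push": put a new item at the top of the stack
stack.append("first")
stack.append("second")
stack.append("third")

# "pop": remove the item at the top, last in, first out
top = stack.pop()
print(top)          # third
print(stack.pop())  # second
print(stack)        # ['first']
```

Because `append` and `pop` both work at the same end of the list, the last item pushed is always the first one popped, which is exactly the LIFO behaviour described above.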
Question 3 [ 9 ]
Consider the following logical expressions and circuits:
Each proposition or circuit is logically equivalent to exactly one other proposition or circuit. For each of them, write down which one it is equivalent to and provide a proof for that equivalency. Each proof of the three logical equivalences must use a sequence of the logical equivalences that are listed in the "Logical Equivalence Laws" and "Implication"
Question 2 [ 6 ]
The truth table below defines the truth value of f for each combination of truth values of a, b and c
a. [3] Find a logic formula for f that uses each variable name at most twice. Then verify the correctness of your formula by drawing a truth table corresponding to this formula, including the truth values of all the subformulas.
Hint: try to divide the rows of the truth table which contribute to the formula into two groups based on a, find a logical formula for each group, and then combine them appropriately.
b. [3] Draw a circuit with inputs a, b and c whose output is the value f described by the truth table.
public Point halfwayTo( Point other )

The halfwayTo method should return a new Point that is halfway between itself (the original Point) and the other Point that is received. If the halfway point does not have integer x and y values, you should round to the nearest integer values.
Correct the errors. The setw statement, getline, and ending by pressing the Enter key cannot be removed. Assume that name will be a person's full name.
#include <iostream>;
using namespace std;
int main()
{
char name, c;
cout << "Enter your name: ";
cout >> setw( 20 );
cin.getline >> name;
cout << "Hi " << name << endl;
cout "Press Enter key to end this program.";
cin >> c;
return 0;
}
You are the usability tester of a website that is going live soon.
Accessibility
1. Site load‐time is reasonable
2. Adequate text‐to‐background contrast
3. Font size/spacing is easy to read
4. Flash & add‐ons are used sparingly
5. Images have appropriate ALT tags
6. Site has custom not‐found/404 page
Identity
7. Company logo is prominently placed
8. Tagline makes company’s purpose clear
9. Home‐page is digestible in 5 seconds
10. Clear path to company information
11. Clear path to contact information
Navigation
12. Main navigation is easily identifiable
13. Navigation labels are clear & concise
14. Number of buttons/links is reasonable
15. Company logo is linked to home‐page
16. Links are consistent & easy to identify
17. Site search is easy to access
Content
18. Major headings are clear & descriptive
19. Critical content is above the “fold”
20. Styles & colors are consistent
21. Emphasis (bold, etc.) is used sparingly
22. Ads & pop‐ups are unobtrusive
23. Main copy is concise & explanatory
24. URLs are meaningful & user‐friendly
25. HTML page titles are explanatory
Full Solution Please, Explained.
Part 2
Hi All
I know how to access the extended attributes on files using Shell32.dll.
However, even with impersonation (so that I can successfully access the files), I'm unable to access the folder: shell.namespace(folderPath) is always nothing.
I presume it's because the NAS isn't a Windows system.
Can anyone give any pointers on how to access the Tags property (Windows, not EXIF)?
Thanks for any ideas :)
Niki
Sincerely, Highly skilled coding monkey.
Having been beating my head a bit more on this I've come to the conclusion that the issue isn't the file system, but rather the security.
I'm going to try different methods of impersonation etc., but if anyone has any experience of this I'd really appreciate it :)
Xmmm...
May I propose: try to get whatever you are trying to do working on a small FAT partition first.
There will be no security restrictions, and you will be able to figure out the problem quickly enough.
Files of check-in [856cfc8e52] in the top-level directory
What is eddie?
A command line tool for using Haskell code as a command line filter.
It is also useful for prototyping Haskell text processing functions that you will later copy into a program.
Why is eddie?
This project actually started in response to a code bounty for a clojure shell scripting tool. That resulted in a specification for the requested program that had all the features of eddie, and then some. However, clojure code embedded on the command line looks clumsy, so the project was shelved.
In haskell, functional expression are much simpler and more elegant. In particular, in looking for haskell command line tools, I found, which convinced me that the original design would be not merely usable, but elegant if the expressions were done in haskell. Hence, eddie.
Why not
ghc -e?
Eddie adds features to make using it for shell scripting easier:
- When given file arguments, eddie feeds them to your function.
- Eddie can easily add modules to the namespace you use.
- Eddie has options for processing things a line or file at a time.
- Eddie uses either Text or ByteStrings by default, not lists of characters.
More information
The best place to start is the manual. That's available in the source tree in both man format in `eddie.1`, and in html in `eddie.html`. The latter is also available online.
Installation
Install or build with cabal as usual.
Testing
Testing is done with the haskell shelltestrunner package (which can be installed with cabal). Run `shelltest tests` in the source directory to run the tests with the currently installed eddie. Use `shelltest tests -w eddie.hs` to run them using the current source. Use `shelltest tests -w dist/build/eddie/eddie` to run them with the compiled binary.
Documentation
The documentation is built with `pandoc` from `eddie.md`. Use the `-s` flag, and convert from pandoc's markdown to your chosen format.
Nicholas Ollis
Today we’re going to look at using Objective-C’s array and dictionary subscripting syntax with older iOS and OS X versions.
If you haven’t already, I can’t recommend enough reading Part 1 and Part 2 of Mark Dalrymple’s excellent two-part series about Objective-C’s literal/boxing/subscripting syntax. The code in those posts can be found on GitHub as a Gist.
When the literal syntax for creating dictionaries and arrays (and numbers) was announced, we as a developer community sang the praises of short, concise code. Then we were introduced to the subscripting syntax for the same, and many of us squealed with joy.
In case you’ve been living under a very large rock, I’m talking about our ability to read a value (and store one) in an NS(Mutable)Array using C-array-like syntax:
NSMutableArray *cultOfSkaro = [NSMutableArray arrayWithObjects:@"Sec", @"Caan", @"Thay", @"Jast", nil];
Dalek *leader = cultOfSkaro[0]; // The new hotness
cultOfSkaro[0] = [NSNull null]; // We can also write values!
And don’t forget dictionaries!
NSMutableDictionary *theGuide = [NSMutableDictionary dictionary];
theGuide[@"Earth"] = @"Mostly harmless."; // ...mostly.
This “new” subscripting syntax is super-duper wonderful for sparing us lots of typing.
The problem, of course, is that it’s still new. Subscripting is only supported at runtime in iOS 6 and OS X 10.8, and while we can dream, most clients are not yet ready to drop support for iOS 5 and OS X 10.7.
What makes this feature not backward-compatible? Consider that:
cultOfSkaro[0]
doesn't equate to:
[cultOfSkaro objectAtIndex:0]
as we'd expect. It instead equates to:
[cultOfSkaro objectAtIndexedSubscript:0]
and that method doesn’t exist on NSArray in iOS 5 or OS X 10.7. We get the same raw deal with dictionaries, where
theGuide[@"Earth"] = @"Mostly harmless.";
is short for:
[theGuide setObject:@"Mostly harmless." forKeyedSubscript:@"Earth"];
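As an aside, this "subscripting is sugar for a method call" pattern isn't unique to Objective-C. Python dictionaries work the same way, with `__getitem__` and `__setitem__` playing the roles of the subscript methods, which may make the desugaring easier to picture:

```python
guide = {}

# The subscript syntax...
guide["Earth"] = "Mostly harmless."

# ...is sugar for an ordinary method call:
guide.__setitem__("Earth", "Mostly harmless.")

# Likewise for reads:
assert guide["Earth"] == guide.__getitem__("Earth")
print(guide["Earth"])  # Mostly harmless.
```

In both languages, the bracket syntax only works if the underlying method actually exists on the receiving type, which is exactly the problem on older OS versions below.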
If you want to be able to use array and dictionary subscripting syntax with older versions of iOS and OS X, you’re in luck!
If you’re already familiar with Objective-C categories, you can just add the subscript-related methods to NS(Mutable)Array and NS(Mutable)Dictionary yourself!
For example, you could implement a cheap -objectAtIndexedSubscript: in a category on NSArray, like so:

- (id)objectAtIndexedSubscript:(NSInteger)index
{
    NSAssert(index >= 0, @"If you want negative indices, see Mark's posts linked above.");
    return [self objectAtIndex:index];
}
And there you have it. This implementation isn’t particularly robust, but it would get you back to level ground for being allowed to use subscripting syntax for reading values.
Unfortunately, there are two problems with this approach. One is more technical and one more practical.
The technical problem is that when a category is loaded, its methods are installed onto their appropriate classes, which is a good thing. If you implement a method in a category that was already present on the class, however, you replace that method. It’s generally poor form to replace a method whose implementation you didn’t write (or don’t fully understand). On devices running iOS 5, we’d be adding the method, but under iOS 6, we’d be replacing it. Harrumph.
The practical problem is that we’d be wasting our time to begin with! Apple has already provided us with a mechanism for utilizing subscripting syntax with older deployment targets.
You’re welcome to try it yourself. Take a new command line project, set its deployment target to OS X 10.7, and give it a basic test:
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[])
{
    @autoreleasepool {
        NSArray *strings = @[@"E", @"G", @"B", @"D", @"F"];
        NSString *eString = strings[0];
        NSLog(@"%@", eString);
    }
    return 0;
}
You’ll find that this (tiny) app will build and run just fine, even on a Mac running OS X 10.7. It just works! Magic!
Welp, that was a short blog post. Cheers!
Nerds tend not to like magic. Oh, we love to see a good trick. But things that we can’t explain really tend to get under our skin.
“Now, Mikey,” you might ask, “what in tarnation is going on that makes this work?”
First, I’d laugh at you for using a word like “tarnation”. Then, I’d tell you a short story about a small library called ARCLite.
When ARC was first introduced, our minds were blown. We could feel years being added to our lives at the thought of not having to write -retain and -release calls anymore. Even better, it turned out that ARC was backward-compatible! But how?
Apple had provided a build shim in the form of a static library called libarclite. Xcode 4.2 and later knew to link against this library anytime you built a project with a deployment target of iOS 4 or Mac OS X 10.6 with ARC enabled. This library provides ARC support for older systems.
It turns out that the-little-lib-that-could has also taken up the torch of providing older systems with support for container subscripting, while also solving our technical problem above: libarclite provides implementations of -objectAtIndexedSubscript: and friends to the container classes on iOS 5 without replacing the implementations provided by iOS 6.
libarclite does this by waiting until runtime to decide whether or not to add the method, by first finding out whether the method already exists.
If you’ve ever swizzled a method, go ahead and move on. If you haven’t, you’ll want to be seated for this bit. We’re going to talk about the Objective-C runtime library, and use it to understand a bit about how Apple’s pulling off the feat of only adding a category method when it doesn’t already exist.
The Objective-C runtime is the beating heart of a Cocoa application. It’s the low-level bit that puts the “Objective” in Objective-C by teaching C how to be object-oriented. The runtime library is also almost completely open source! We’re interested in one particular and integral header, however: runtime.h.
If you take a moment to browse this file, your imagination will start to run wild with possibilities. With these functions, you can define an entire class after your program has started running! There’s not really much practical use for such a thing (in our position), but when has that ever stopped us from trying? In fact, runtime class definition is how KVO works! But that could be a totally separate post.
There’s a specific function of interest to us in runtime.h:
OBJC_EXPORT BOOL class_addMethod(Class cls, SEL name, IMP imp, const char *types)
This function allows you to attach to a class an entirely new method that didn’t appear in the class’ header or implementation file. Of course, the method must be defined somewhere in your code, but this function allows you to attach it to a class that didn’t have it before.
The class_addMethod() function takes four parameters:
a Class to which the method should be added, such as that returned by [NSString class]. Just as @"Hello" is an instance of type NSString*, NSString itself is of type Class.
a selector that names the method—yes, the same sort of selector that you use with NSTimer
a function pointer to the actual implementation of the method you wish to add, and
a C character array describing the types of arguments the method accepts
The function returns a BOOL indicating the success or failure of the method addition. The function will fail and return NO if the target Class already has a method with the specified selector.
How can we use class_addMethod() for ultimate science?
Let’s do it. Step zero? Import runtime.h so that we can use the functions declared there:
First, we’ll go ahead and create our NSArray+Subscripting category, as we discussed at the beginning of the post:
@implementation NSArray (Subscripting)

- (id)bnr_objectAtIndexedSubscript:(NSInteger)index
{
    return [self objectAtIndex:index];
}

@end
You’ll notice immediately that I’ve changed the name of the subscripting method. Remember that if I’d named it:
- (id)objectAtIndexedSubscript:(NSInteger)index
then this method would be installed on NSArray as is, regardless of the OS version. We don’t want that, so we define the method using a fake/temporary name.
For this same reason, we should always namespace our category methods like this (prefixing the method name with an identifier of some sort, such as bnr_ above) so that we don’t accidentally replace methods that we didn’t even know existed.
The second step is to find a place to put our call to class_addMethod(). We want to implement or override a method that we know will execute very early in an application run.
When an application launches, one of the very first things that happens (even before main() is called!) is that each class that the application will use is loaded into the runtime. Each class is sent the +load message (and, because it's special and doesn't play by the usual rules, to each category on each class). This is a dangerous place for most types of activity, but it's the perfect place to make changes to the runtime. That's not to say that making runtime changes is always safe, mind you.
So, here’s our NSArray+Subscripting.m:
#import "NSArray+Subscripting.h"
#import <objc/runtime.h>

@implementation NSArray (Subscripting)

+ (void)load
{
    Method method = class_getInstanceMethod(self, @selector(bnr_objectAtIndexedSubscript:));
    class_addMethod([NSArray class],
                    @selector(objectAtIndexedSubscript:),
                    method_getImplementation(method),
                    method_getTypeEncoding(method));
}

- (id)bnr_objectAtIndexedSubscript:(NSInteger)index
{
    return [self objectAtIndex:index];
}

@end
There are some fantastic bits at work here. First, class_addMethod() will fail if a method already exists with the passed-in selector. Second, arguments that initially looked incredibly daunting when we saw this function's declaration (what on earth is an IMP or a type encoding?) are satisfied by calling other runtime functions from runtime.h! It feels like cheating!
I challenge you now to spend some time looking through the documentation for the various functions in runtime.h, and experimenting on your own.
Not so fast. Remember, Apple is already doing this (or something like it) for you with libarclite! We’ve meandered down the runtime’s rabbit-hole as an exercise in pulling away the magic curtains, as it were.
There are certainly plenty of runtime hacks at varying levels of evil and danger that you may find useful in your own application, but this one’s already been done for you.
If you’re interested in learning more about the Objective-C runtime and its dark secrets, sound off in the comments.
Also, check out these excellent resources:

- Mike Ash's Friday Q&A (particularly his article on method swizzling)
- Colin Wheeler's excellent and deep treatise on the runtime
1) My SecurityFlushListener is less than a day's work. I tried with the Catalina SessionListener interface and noticed that the callback was never happening. On a deeper look in the Tomcat codebase, I realized that there is really minimal usage of the SessionListener interface. That is when I inferred that the HttpSessionListener interface has taken precedence over the Tomcat SL interface.
2) The issue with is either due to the nonavailability of the SecurityManager in the private namespace of the web component due to the EJB3 impl (very unlikely) or is due to some classloading issues introduced by the clustering framework. I personally emailed the JIRA issue reporter for a reproducible test deployment and not heard back. Doing a JNDI lookup, sometimes the CCE is thrown because of insufficient jars in the classpath. Since the user was using Netbeans, I am not sure.
I have retrofitted the SecurityFlushListener implementation to ignore the non-availability of a security manager in rare scenarios. Now, if you ask when the cache will get flushed in this scenario: it will be done as the first step of the next request.
3) My implementation with httpsessionlistener is more than sufficient for the time being. When we do the JSR-196 integration with the containers, we can revisit this issue.
4) The tomcat authenticators have been externalized. This is FYI.
5) Since all the prototyping work starts in JBoss HEAD and not in the 4.0 branch, I do want to tell you that I have already placed an implementation of JSR-196 in HEAD. The work that needs to be done is the container integration with the JASPI layer. Again, the implementation work I did was less than a fortnight, so if there are issues, they will crop up during the container integration.
I would like to see the JSR-196 implementation completed in head with a followup proposal on how to submit this as a replacement for the Realm interface as a generalization of the tomcat authorization SPI.
Regarding the control of the security cache being in jbosssx as a spec issue, the only authentication that even has any notion of a correlation with a session is FORM auth. The jbosssx certainly cannot rely on a web container level cache for many reasons including the fact that its usable without a web container.
Has a final solution been found for this problem? I'm having a problem getting my custom LoginModule's logout method to be called when there is a session timeout.
Purpose: Create a C++ class; use operators, overloading, member functions, friend functions, constructors and private data.
• Make the data members private. (This means that the non-member functions below will have to be friends of the class.)
• Create a constructor taking a month, day, and year, e.g. 'Date(12,31,1999)'. A constructor is not called like a member function. It is used to create an object, which did not yet exist, either in a variable declaration, 'Date d(12,31,1999);'
• Create operator << to output a date to an ostream, in a format like 'January 1, 2001'.
• Write operator==, so that 'a==b' returns true if two dates are the same and false otherwise. It should not be a member function. (Use friend.)
• Write a prefix operator++ that advances a date by one day. For example, if d2 is February 27, 2000, then ++d2 returns February 28, 2000 and changes d2 to February 28, 2000; incrementing again, ++d2 returns February 29, 2000 and changes d2 to February 29, 2000.
January 1, 2000 != February 29, 2000
January 1, 2000 == January 1, 2000.
This is what I have so far:
#include <iostream>
#include <iomanip>
using namespace std;

class date
{
    friend ostream &operator << (ostream&, const date &);
    friend istream &operator >> (istream&, date &);
private:
    date ();
    void set ();
};

ostream &operator << (ostream &output, const date &num)
{
    return output;
}

istream &operator >> (istream &input, date &num)
{
    return input;
}

date::date()
{
    // I have no idea what goes in the constructor
}

int main()
{
    return 0;
}
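For comparison, the operator-overloading pattern this assignment asks about has a direct analogue in Python's dunder methods. The sketch below is not a C++ answer, and the month-name formatting is just a guess at the intended output style; it only shows what operator== and the stream output operator correspond to:

```python
class Date:
    MONTHS = ["January", "February", "March", "April", "May", "June",
              "July", "August", "September", "October", "November", "December"]

    def __init__(self, month, day, year):
        self.month, self.day, self.year = month, day, year

    def __eq__(self, other):  # counterpart of C++ operator==
        return (self.month, self.day, self.year) == \
               (other.month, other.day, other.year)

    def __str__(self):        # counterpart of operator<< for output
        return "%s %d, %d" % (self.MONTHS[self.month - 1], self.day, self.year)

print(Date(1, 1, 2000) == Date(1, 1, 2000))   # True
print(Date(1, 1, 2000) == Date(2, 29, 2000))  # False
print(Date(2, 29, 2000))                      # February 29, 2000
```

The idea in both languages is the same: the comparison and output operators are ordinary functions with special names that the language wires up to the ==, <<, and print syntax.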
System.Net.Http Namespace
The System.Net.Http namespace provides a programming interface for modern HTTP applications.
Classes
Enums
Remarks
The System.Net.Http namespace is designed to provide the following:
HTTP client components that allow users to consume modern web services over HTTP.
HTTP components that can be used by both clients and servers (HTTP headers and messages, for example). This provides a consistent programming model on both the client and the server side for modern web services over HTTP.
The System.Net.Http namespace and the related System.Net.Http.Headers namespace provide the following set of components:
HttpClient - the primary class used to send and receive requests over HTTP.
HttpRequestMessage and HttpResponseMessage - HTTP messages as defined in RFC 2616 by the IETF.
HttpHeaders - HTTP headers as defined in RFC 2616 by the IETF.
HttpClientHandler - HTTP handlers responsible for producing HTTP response messages.
There are various HTTP message handles that can be used. These include the following.
DelegatingHandler - A class used to plug a handler into a handler chain.
HttpMessageHandler - A simple to class to derive from that supports the most common requirements for most applications.
HttpClientHandler - A class that operates at the bottom of the handler chain that actually handles the HTTP transport operations.
WebRequestHandler - A specialty class that operates at the bottom of the handler chain class that handles HTTP transport operations with options that are specific to the System.Net.HttpWebRequest object.
The contents of an HTTP message corresponds to the entity body defined in RFC 2616.
A number of classes can be used for HTTP content. These include the following.
ByteArrayContent - HTTP content based on a byte array.
FormUrlEncodedContent - HTTP content of name/value tuples encoded using application/x-www-form-urlencoded MIME type.
MultipartContent - HTTP content that gets serialized using the multipart/* content type specification.
MultipartFormDataContent - HTTP content encoded using the multipart/form-data MIME type.
StreamContent - HTTP content based on a stream.
StringContent - HTTP content based on a string.
If an app using the System.Net.Http and System.Net.Http.Headers namespaces intends to download large amounts of data (50 megabytes or more), then the app should stream those downloads and not use the default buffering. If the default buffering is used the client memory usage will get very large, potentially resulting in substantially reduced performance.
Classes in the System.Net.Http and System.Net.Http.Headers namespaces can be used to develop Windows Store apps or desktop apps. When used in a Windows Store app, classes in the System.Net.Http and System.Net.Http.Headers namespaces are affected by the network isolation feature, part of the application security model used by Windows 8. The appropriate network capabilities must be enabled in the app manifest of a Windows Store app for the system to allow network access. For more information, see Network Isolation for Windows Store Apps.
##Issues with JavaScript's native splice()

Running the code below, using JavaScript's native .splice() method, the original array is actually modified. This could cause unintended side effects if you're not aware of this behaviour. Also, the .splice() method actually returns an array of the elements that you removed, rather than an array without those elements. Again, this is not that useful, IMHO.
```
var sourceArray = ["wombat", "koala", "emu", "kookaburra"];
var newArray = sourceArray.splice(1, 1); // Should return 'koala' as the single item in the new array

console.log(JSON.stringify(newArray));    // ["koala"]
console.log(JSON.stringify(sourceArray)); // ["wombat", "emu", "kookaburra"]
```
##Using pureSplice()
###Syntax
var newArray = sourceArray.pureSplice(start, deleteCount);
start: index at which you want to start dropping elements. Remember, JavaScript array counts are zero-based, so element 1 is actually the second element in the array.
deleteCount: How many elements you want to drop from the array.
###Example

Running the same code below, substituting .pureSplice() for .splice(), returns a new array with the specified elements removed. Crucially, the source array is not changed:
```
var sourceArray = ["wombat", "koala", "emu", "kookaburra"];
var newArray = sourceArray.pureSplice(1, 1); // Should remove 'koala' from returned newArray

console.log(JSON.stringify(newArray));    // ["wombat", "emu", "kookaburra"]
console.log(JSON.stringify(sourceArray)); // ["wombat", "koala", "emu", "kookaburra"]
```
##How to use/install

Install from npm with:
npm install --save array.prototype.puresplice
Require or import like so for ES6:
import 'array.prototype.puresplice';
or like this for CommonJS:
require("array.prototype.puresplice");
Don't assign the package to any variable when you import/require it. When imported, the package will add the .pureSplice() method directly to JavaScript's Array.prototype.
The index.js is implemented in UMD format, so should also work for AMD/RequireJS, but I've not tested that. You can also add it as a script tag.
## Development Instructions

First run `npm install` to update the dev dependencies, basically the Babel command line tool and its dependencies.

The source code is in the `src/getsourcearray.js` file, and is in ES2015 (aka ES6) format. Run `npm run build` to have Babel transpile the code to ES5 format in the project's main file, i.e. `index.js` in the root.
The source code uses ES2015's array spread operator, together with JavaScript's native .slice() method (not to be confused with the .splice() method!). I lifted this idea from one of Dan Abramov's free Redux videos on Egghead.
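As a rough sketch of that approach (my assumption of how it might look, not the package's actual source), a non-mutating splice can be built from two `slice()` calls plus the spread operator:

```javascript
// Hypothetical standalone version (the real package patches Array.prototype):
// copy the parts before and after the removed span into a brand-new array.
function pureSplice(arr, start, deleteCount) {
  return [...arr.slice(0, start), ...arr.slice(start + deleteCount)];
}

const source = ["wombat", "koala", "emu", "kookaburra"];
const result = pureSplice(source, 1, 1);

console.log(result); // ["wombat", "emu", "kookaburra"]
console.log(source); // untouched: ["wombat", "koala", "emu", "kookaburra"]
```

Because `slice()` never mutates and the spread copies its inputs, the source array is left untouched.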
## Tests

Tests are built with mocha + chai. Run with `npm test`.
Tests check that a new array can be returned from a source array with: | https://npm.runkit.com/array.prototype.puresplice | CC-MAIN-2017-30 | refinedweb | 497 | 60.82 |
To use Bokeh you need to launch a Bokeh server and connect to it using a browser. We will use this example script (`hello_world.py`):
```python
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure
from bokeh.io import curdoc


def modify_doc(doc):
    """Add a plotted function to the document.

    Arguments:
        doc: A bokeh document to which elements can be added.
    """
    x_values = range(10)
    y_values = [x ** 2 for x in x_values]
    data_source = ColumnDataSource(data=dict(x=x_values, y=y_values))
    plot = figure(title="f(x) = x^2",
                  tools="crosshair,pan,reset,save,wheel_zoom")
    plot.line('x', 'y', source=data_source, line_width=3, line_alpha=0.6)
    doc.add_root(plot)
    doc.title = "Hello World"


def main():
    modify_doc(curdoc())


main()
```
To launch it you need to execute `bokeh` on the command line and use the `serve` command to launch the server:

```sh
$ bokeh serve --show hello_world.py
```
The `--show` parameter tells Bokeh to open a browser window and show the document defined in `hello_world.py`.

To install Bokeh with Anaconda, simply run the following in cmd.exe on Windows or terminal on Mac:

```sh
conda install bokeh
```

If you already have a version of Python then you can run the following in cmd.exe on Windows or terminal on Mac:

```sh
pip install bokeh
```
Be sure to check out the Bokeh quick start guide for several examples.
Here is a simple example of how to use Bokeh in Jupyter Notebook:
```python
import numpy as np
from bokeh.plotting import figure
# Make Bokeh push output to Jupyter Notebook.
from bokeh.io import push_notebook, show, output_notebook
from bokeh.resources import INLINE

output_notebook(resources=INLINE)

# Create some data.
x = np.linspace(0, 2*np.pi, 20)
y = np.sin(x)

# Create a new plot with a title and axis labels.
p = figure(title="Simple Line Plot in Bokeh", x_axis_label='x', y_axis_label='y')

# Add a line renderer with legend and line thickness.
p.line(x, y, legend="Value", line_width=3)

# Show the results.
show(p)
```
RabbitMQ/AMQP-based RPC Client/Server Library
Project description
RabbitRPC is an RPC-over-AMQP framework for Python. It allows users to worry less about how remote method calls are made and more about their actual code. It was written to scratch an itch that arose during the development of a much larger software project, which needed a highly scalable internal API.
As of 0.6.0, RabbitRPC is capable of completely mocking remote functions and their modules. This means that using the framework is as simple as instantiating the RPCClient class and calling ‘start()’ (after having written and registered some server-side components, of course). See below for an example of how this works.
RabbitRPC tries to keep things feeling as native as possible. As mentioned above, imports and function calls on modules work exactly like you’d use them for local code. The RPC client will also return the exact data that was produced from the server-side functions (within pickleable limits). Should an exception occur in the remote code, that exact exception will be thrown and its traceback will be shown.
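To illustrate the idea of "native-feeling" remote calls (this is only a sketch of the general technique, not RabbitRPC's actual internals), a client can fabricate an importable module whose functions forward their calls to a dispatcher; in RabbitRPC's case that dispatcher would publish the call over AMQP. All module and function names here are made up for the demo:

```python
import sys
import types

def make_proxy_module(name, function_names, dispatch):
    """Create a fake module whose functions all forward to `dispatch`."""
    mod = types.ModuleType(name)
    for fn_name in function_names:
        def proxy(*args, _fn=fn_name, **kwargs):
            return dispatch(_fn, args, kwargs)
        proxy.__name__ = fn_name
        setattr(mod, fn_name, proxy)
    sys.modules[name] = mod  # a later `import <name>` now finds this module
    return mod

# Demo dispatcher; a real one would serialize the call and send it over AMQP.
calls = []
make_proxy_module("rpcendpoints_demo", ["the_price_is_wrong"],
                  lambda fn, args, kwargs: (calls.append(fn), "proxied!")[1])

import rpcendpoints_demo
print(rpcendpoints_demo.the_price_is_wrong())  # proxied!
print(calls)                                   # ['the_price_is_wrong']
```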
Please keep in mind, this package is still a work in progress. Here’s a current list of what is planned before 1.0.0:
- Add support for remote class registrations and stateful class management from within the server, on a per-client basis.
- Authentication
- Authorization (along with the ability to create groups/roles specifying what functions/methods/classes may or may not be run by a particular account).
- Dead-letter support in AMQP backend (for those rare times when something goes wrong and you need to recover).
- ‘Versioning’ for RPC endpoints, which would allow servers to serve subsets of an API/set of endpoints
Real documentation is in the plans, my time is just limited at the moment. All of the source is well documented with doctags. Please check that out for the time being.
Bugs and Feature Requests:
Please leave them on the project’s Github tracker:
Example
For actual, working code examples, see the ‘examples’ directory in the source tree. You’ll need a RabbitMQ server set up before you run them. But you know that.
RPC Endpoints:
```python
from rabbitrpc.server import register

@register.RPCFunction
def the_price_is_wrong():
    print '-- Bob Barker'
```
RPC Server:
```python
import <your endpoint modules here>
from config import RABBITCONF

server = rpcserver.RPCServer(RABBITCONF)

try:
    server.run()
except KeyboardInterrupt:
    server.stop()
```
RPC Client:
```python
from config import RABBITCONF

# Fire up the client
client = rpcclient.RPCClient(RABBITCONF)
client.start()

# This module is dynamically created by the client, along with its function
# 'the_price_is_wrong'. Calling it will cause the client to transparently
# proxy the call out to the RPC server, via RabbitMQ.
import rpcendpoints

result = rpcendpoints.the_price_is_wrong()
print 'result: %s' % result
```
Dependencies
Tests. | https://pypi.org/project/rabbitrpc/ | CC-MAIN-2021-04 | refinedweb | 461 | 57.87 |
Odoo Help
Unable to write a value to an existing field
I got the following error message:

```
TypeError: write() takes exactly 2 arguments (5 given)
```

due to the following code:
```python
def post(self, cr, uid, ids, context=None):
    o = self.browse(cr, uid, ids[0])
    budget_obj = self.pool.get('budget.expense').browse(cr, uid, o.budget_id.id)
    budget_line = self.pool.get('budget.expense.lines')
    for line in o.budget_transaction_line:
        for item in budget_obj.budget_expense_line:
            if line.item_id.id == item.item_id.id:
                if line.tran_type in ('salary', 'expense'):
                    if item.planned_amount - item.practical_amount - line.amount >= 0:
                        b_line = budget_line.browse(cr, uid, item.id)
                        vals = {
                            'practical_amount': item.practical_amount + line.amount,
                        }
                        b_line.write(cr, uid, [item.id], vals)
```
Don't worry about the rest of the method's lines; the problem is in the last line shown in bold.
Does anyone have an idea to resolve this error ?
Hello,
Please remove `cr, uid, [item.id]` from the last line; just simply keep it as follows:

```python
b_line.write(vals)
```
Reason :
At this line: `b_line = budget_line.browse(cr, uid, item.id)` those 3 arguments are already loaded, so there is no need to provide them again in the last line.
Warning :
If "vals" dict contains the key which is not available as a field in "budget.expense.lines" model, then system will raise an error. ( This is only for knowledge purpose ).
If instead of this line b_line.write(cr,uid,[item.id],vals) you will write like self.pool.get("budget.expense.lines").write(cr,uid,[item.id],vals) then system will not raise an error. ( in case of if "vals" dict contains the key which is not available as a field in "budget.expense.lines" model )
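For anyone puzzled by the error message itself, here is a minimal stand-alone illustration (plain Python, not Odoo) of why the call fails: a browse record's `write()` only accepts `vals`, because the cursor, user id and record ids are already bound to the record.

```python
class BrowseRecord(object):
    """Toy stand-in for a browse record; write() takes only self + vals."""
    def write(self, vals):
        self.vals = vals
        return True

rec = BrowseRecord()
rec.write({'practical_amount': 10})  # correct: 2 arguments counting self

try:
    # Passing cr/uid/ids again, as in the question, over-supplies arguments:
    rec.write('cr', 'uid', [1], {'practical_amount': 10})
except TypeError as exc:
    print(type(exc).__name__)  # TypeError: too many arguments were given
```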
Thanks for the help. I tried the line `b_line.write(vals)` only; still need to try the other part of your solution!
Tutorial
Create a Lazy Loaded Wizard

A wizard is a component consisting of several steps. Each step is a different view. Let's see how to create a wizard that lazy loads its parts in Vue.js.
🧙 The Wizard Steps
Each step in a wizard is one of the screens that the wizard shows. Let's create one component per step and name them using the same naming pattern: `Wizard-step1.vue`, `Wizard-step2.vue`, `Wizard-step3.vue`. Just put some trivial content in there, for instance:

```html
<template>
  <div>
    Step 1
  </div>
</template>
```
The Wizard
This is the component where the magic happens. The wizard has a `step` counter which we'll use to lazy load, and a button which will increase that counter. Let's get that done:

```html
<template>
  <div>
    <button @click="next">Next</button>
  </div>
</template>

<script>
export default {
  data: () => ({ step: 1 }),
  methods: {
    next() {
      this.step++;
    }
  }
};
</script>
```
The idea is that the wizard loads the corresponding component depending on the current step. For lazy loading, we can use a dynamic import, but that’s asynchronous and returns a promise, while a component’s rendering is synchronous.
Vue gives us a couple of features we might be able to use for our purpose:
- Async components: Passing a function (which returns a promise) instead of an object allows us to asynchronously lazy load components, at least when using local and global registration. The problem is that those components are known beforehand, which is not the case for our example.
- Dynamic components: Using the `<component>` reserved element, we can hot-swap components dynamically. However, we have the same problem: we have to know those components beforehand.
Here’s the treasure: even though it’s not documented on the Vue.js docs, we can combine the power of both features using a computed property combined with the dynamic import, since the element also allows to get a function returning a promise which under the hood performs a kind of local registration.
The dynamic import is a JavaScript feature that allows to load a module at runtime, similar to how require works with Node.js. Some module bundlers, such as Webpack or Rollup, use the dynamic import as a code splitting point and create a separate bundle, loaded on demand when that code is reached.
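Here is a quick runnable illustration of the dynamic import itself (Node.js, using a built-in module instead of a `.vue` file, since the component files here are hypothetical):

```javascript
// Nothing is loaded until the returned promise is awaited; bundlers use
// exactly this call site as the boundary for a separate, lazily-loaded chunk.
const loader = () => import("node:os"); // a function returning a promise

(async () => {
  const os = await loader();           // the module loads here, on demand
  console.log(typeof os.platform());   // "string"
})();
```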
Let’s add that part to the component:
```html
<template>
  <div>
    <button @click="next">Next</button>
    <component :is="stepComponent"></component>
  </div>
</template>

<script>
export default {
  data: () => ({ step: 1 }),
  computed: {
    stepComponent() {
      return () => import(`./Wizard-step${this.step}.vue`);
    }
  },
  methods: {
    next() {
      this.step++;
    }
  }
};
</script>
```
I’m creating the
stepComponent computed property, which returns a function that loads the right component given the current step. Then above, I’m using the
<component> element and binding it to
stepComponent.
If you try it out, it should work. However, if you click on the next button, it won't update. This is because the computed property isn't evaluating any reactive property: `step` is only referenced inside the returned function. Computed properties in Vue are cached, and in this case it keeps returning the cached value.
You could try to use a method instead, which is not cached, but you’ll end up in an infinite rendering loop (try yourself).
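To see why the dependency is never registered, here is a tiny model of computed caching (nothing like Vue's real implementation, just the idea): a dependency only gets recorded when the reactive property is read while the computed is being evaluated, and in our broken version `step` is only read later, inside the returned function.

```javascript
function makeComputed(evaluate) {
  let cached, dirty = true, tracking = false;
  const deps = new Set();
  const track = (name) => { if (tracking) deps.add(name); };
  return {
    get() {
      if (dirty) {
        tracking = true;        // dependencies are only recorded in here
        cached = evaluate(track);
        tracking = false;
        dirty = false;
      }
      return cached;
    },
    deps,
  };
}

let step = 1;

// Broken: track("step") only runs inside the returned closure, after
// evaluation has finished, so the computed ends up with zero dependencies.
const broken = makeComputed((track) => () => { track("step"); return step; });
broken.get()();
console.log(broken.deps.size); // 0 – a change to `step` can never invalidate it

// Reading the property during evaluation (what the workaround below achieves):
const fixed = makeComputed((track) => { track("step"); return () => step; });
fixed.get();
console.log(fixed.deps.size);  // 1
```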
The workaround is to make Vue evaluate the `step` state property. For that, we can simply reference it:

```js
stepComponent() {
  this.step; // Just reference it
  return () => import(`./Wizard-step${this.step}.vue`);
}
```
Try it again, open the network tab of the browser devtools and enjoy watching how your chunks are loaded as you press the next button! ✨ | https://www.digitalocean.com/community/tutorials/vuejs-lazy-loaded-wizard | CC-MAIN-2021-31 | refinedweb | 591 | 61.87 |
The presence of whitespace in the DOM can cause layout problems and make manipulation of the content tree difficult in unexpected ways, depending on where it is located. This article explores when difficulties can occur, and looks at what can be done to mitigate resulting problems.
What is whitespace?
Whitespace is any string of text composed only of spaces, tabs or line breaks (to be precise, CRLF sequences, carriage returns or line feeds). These characters allow you to format your code in a way that will make it easily readable by yourself and other people. In fact, much of our source code is full of these whitespace characters, and we only tend to get rid of it in a production build step to reduce code download sizes.
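That definition can be written as a small predicate, matching the `is_all_ws()` helper shown later in this article:

```javascript
// True only if the string consists solely of tabs, line feeds,
// carriage returns and spaces. Note that "\s" would be wrong here:
// it also matches non-breaking spaces, among other characters.
const isAllWhitespace = (s) => !/[^\t\n\r ]/.test(s);

console.log(isAllWhitespace(" \t\r\n")); // true
console.log(isAllWhitespace(" a "));     // false
console.log(isAllWhitespace("\u00A0")); // false – NBSP is not in the set
```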
How does HTML process whitespace?
In the case of HTML, whitespace is largely ignored — whitespace in between words is treated as a single character, and whitespace at the start and end of elements and outside elements is ignored. Take the following minimal example:
```html
<!DOCTYPE html>

   <h1>   Hello      World!   </h1>
```
This source code contains a couple of line feeds after the
DOCTYPE and a bunch of space characters before, after, and inside the
<h1> element, but the browser doesn’t seem to care at all and just shows the words "Hello World!" as if these characters didn’t exist at all:
This is so that whitespace characters don't impact the layout of your page. Creating space around and inside elements is the job of CSS.
What does happen to whitespace?
They don't just disappear, however.
Any whitespace characters that are outside of HTML elements in the original document are represented in the DOM. This is needed internally so that the editor can preserve formatting of documents. This means that:
- There will be some text nodes that contain only whitespace, and
- Some text nodes will have whitespace at the beginning or end.
Take the following document, for example:
```html
<!DOCTYPE html>
<html>
  <head>
    <title>My Document</title>
  </head>
  <body>
    <h1>Header</h1>
    <p>
      Paragraph
    </p>
  </body>
</html>
```
The DOM tree for this looks like so: a tree of the element nodes above, with `#text` nodes that contain only whitespace sitting between the elements (diagram omitted).
Conserving whitespace characters in the DOM is useful in many ways, but there are certain places where this makes certain layouts more difficult to implement, and causes problems for developers who want to iterate through nodes in the DOM. We'll look at these, and some solutions, later on.
How does CSS process whitespace?
Most whitespace characters are ignored, but not all of them are. In the earlier example, one of the spaces between "Hello" and "World!" still exists when the page is rendered in a browser. There are rules in the browser engine that decide which whitespace characters are useful and which aren't. These are specified at least in part in CSS Text Module Level 3, and especially the parts about the CSS `white-space` property and whitespace processing details, but we also offer an easier explanation below.
Let’s take another really simple example. To make it easier, we’ve illustrated all spaces with ◦, all tabs with ⇥ , and all line breaks with ⏎:
This example:

```html
<h1>◦◦◦Hello◦⏎
⇥⇥⇥⇥<span>◦World!</span>⇥◦◦</h1>
```
is rendered in the browser like so: the heading reads simply "Hello World!" on a single line.

For reference, here is the same markup with real whitespace in place of the markers:

```html
<h1>   Hello
    <span> World!</span>  </h1>
```
The
<h1> element contains only inline elements. In fact it contains:
- A text node (consisting of some spaces, the word "Hello" and some tabs).
- An inline element (the
<span>, which contains a space, and the word "World!").
- Another text node (consisting only of tabs and spaces).
Because of this, it establishes what is called an inline formatting context. This is one of the possible layout rendering contexts that browser engines work with.
Inside this context, whitespace character processing can be summarized as follows:

1. First, all spaces and tabs immediately before and after a line break are ignored.
2. Next, all tab characters are handled as space characters, and all line breaks are converted to spaces.
3. After that, any space immediately following another space (even across two separate inline elements) is ignored, and sequences of spaces at the beginning and end of an element are removed.
This is why people visiting the web page will simply see the phrase "Hello World!" nicely written at the top of the page, rather than a weirdly indented "Hello" followed but an even more weirdly indented "World!" on the line below that.
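Those steps can be approximated in code. This is only a rough simulation of the effect on a single text run, not the browser's actual algorithm (real layout also trims spaces at element boundaries, as described above):

```javascript
function collapseInlineWhitespace(text) {
  return text
    .replace(/[ \t]*\n[ \t]*/g, "\n") // drop spaces/tabs around line breaks
    .replace(/\t/g, " ")              // tabs behave as spaces
    .replace(/\n/g, " ")              // line breaks become spaces
    .replace(/ +/g, " ");             // runs of spaces collapse to one
}

console.log(collapseInlineWhitespace("   Hello \n\t\t\t\t World!"));
// " Hello World!" – a leading space remains until per-element trimming
```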
Note: Firefox DevTools have supported highlighting text nodes since version 52, making it easier to see exactly what nodes whitespace characters are contained within. Pure whitespace nodes are marked with a "whitespace" label.
Whitespace in block formatting contexts
Above we just looked at elements that contain inline elements, and inline formatting contexts. If an element contains at least one block element, then it instead establishes what is called a block formatting context.
Within this context, whitespace is treated very differently. Let’s take a look at an example to explain how. We've marked the whitespace characters as before.
that contain only whitespace, one before the first
<div>, one between the 2
<divs>, and one after the second
<div>.
This renders like so:
Hidden example 2
<body> <div> Hello </div> <div> World! </div> </body>
We can summarize how the whitespace here is handled as follows (there may be some slight differences in exact behavior between browsers, but this basically works):
Because we’re inside a block formatting context, everything must be a block, so our 3 text nodes also become blocks, just like the 2
<div>s.>
This is then simplified further by applying the processing rules for whitespace in inline formatting contexts to these blocks:
```html
<block></block>
<block>Hello</block>
<block></block>
<block>World!</block>
<block></block>
```
The 3 empty blocks we now have are not going to occupy any space in the final layout, because they don't contain anything, so we'll end up with only 2 blocks taking up space in the page. People viewing the web page see the words "Hello" and "World!" on 2 separate lines, as you'd expect 2 `<div>`s to be laid out. The browser engine has essentially ignored all of the whitespace that was added in the source code.
Spaces in between inline and inline-block elements
Let's move on to look at a few issues that can arise due to whitespace, and what can be done about them. First of all, we'll look at what happens with spaces in between inline and inline-block elements. In fact, we saw this already in our very first example, when we described how whitespace is processed inside inline formatting contexts.
We said that there were rules to ignore most characters, but that word-separating characters remain. When you're only dealing with block-level elements such as `<p>` that only contain inline elements such as `<em>`, `<strong>`, `<span>`, etc., you don't normally care about this, because the extra whitespace that does make it to the layout is helpful to separate the words in the sentence.
It gets more interesting, however, when you start using `inline-block` elements. These elements behave like inline elements on the outside, and blocks on the inside, and are often used to display more complex pieces of UI than just text, side-by-side on the same line, for example navigation menu items.
Because they are blocks, many people expect that they will behave as such, but really they don’t. If there is formatting whitespace between adjacent inline elements, this will result in space in the layout, just like the spaces between words in text.
Consider this example (the whitespace between the list items is what causes the effect):

```css
.people-list {
  list-style-type: none;
  margin: 0;
  padding: 0;
}

.people-list li {
  display: inline-block;
  width: 2em;
  height: 2em;
  background: #f06;
  border: 1px solid;
}
```

```html
<ul class="people-list">

  <li></li>
  <li></li>
  <li></li>
  <li></li>
  <li></li>

</ul>
```

This renders as follows: five colored boxes in a row, with a visible gap between each one.
You probably don't want the gaps in between the blocks — depending on the use case (is this a list of avatars, or horizontal nav buttons?), you probably want the element sides flush with each other, and to be able to control any spacing yourself.
The Firefox DevTools HTML Inspector will highlight text nodes, and also show you exactly what area the elements are taking up — useful if you are wondering what is causing the problem, and are maybe thinking you've got some extra margin in there or something!
There are a few ways of getting around this problem:
Use Flexbox to create the horizontal list of items instead of trying an `inline-block` solution. This handles everything for you, and is definitely the preferred solution:

```css
ul {
  list-style-type: none;
  margin: 0;
  padding: 0;
  display: flex;
}
```
If you need to rely on `inline-block`, you could set the `font-size` of the list to 0. This only works if your blocks are not sized with ems (based on the `font-size`, so the block size would also end up being 0). rems would be a good choice here:

```css
ul {
  font-size: 0;
  ...
}

li {
  display: inline-block;
  width: 2rem;
  height: 2rem;
  ...
}
```
Or you could set negative margin on the list items:

```css
li {
  display: inline-block;
  width: 2rem;
  height: 2rem;
  margin-right: -0.25rem;
}
```
You can also solve this problem by putting your list items all on the same line in the source, which causes the whitespace nodes to not be created in the first place:
<li></li><li></li><li></li><li></li><li></li>
DOM traversal and whitespace
When trying to do DOM manipulation in JavaScript, you can also encounter problems because of whitespace nodes. For example, if you have a reference to a parent node and want to affect its first element child using `Node.firstChild`, if there is a rogue whitespace node just after the opening parent tag you will not get the result you are expecting. The text node would be selected instead of the element you want to affect.
As another example, if you have a certain subset of elements that you want to do something to based on whether they are empty (have no child nodes) or not, you could check whether each element is empty using something like `Node.hasChildNodes()`, but again, if any target elements contain text nodes, you could end up with false results.
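The pitfall is easy to reproduce with a couple of stand-in objects (a mock, not a real DOM, so it runs anywhere):

```javascript
// Modelling <div>\n  <span>hi</span></div> – the div's first child is
// the whitespace text node, not the <span>.
const span = { nodeType: 1, nodeName: "SPAN", textContent: "hi" };
const ws = { nodeType: 3, nodeName: "#text", textContent: "\n  " };
const div = { nodeType: 1, nodeName: "DIV", childNodes: [ws, span], firstChild: ws };

console.log(div.firstChild.nodeName); // "#text" – not the <span>!

// Skipping whitespace-only text nodes finds the intended element:
const firstElement = div.childNodes.find(
  (n) => !(n.nodeType === 3 && !/[^\t\n\r ]/.test(n.textContent))
);
console.log(firstElement.nodeName);   // "SPAN"
```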
Whitespace helper functions
The JavaScript code below defines several functions that make it easier to deal with whitespace in the DOM:
```js
/**
 * Throughout, whitespace is defined as one of the characters
 *  "\t" TAB \u0009
 *  "\n" LF  \u000A
 *  "\r" CR  \u000D
 *  " "  SPC \u0020
 *
 * This does not use Javascript's "\s" because that includes non-breaking
 * spaces (and also some other characters).
 */

/**
 * Determine whether a node's text content is entirely whitespace.
 *
 * @param nod  A node implementing the |CharacterData| interface (i.e.,
 *             a |Text|, |Comment|, or |CDATASection| node)
 * @return     True if all of the text content of |nod| is whitespace,
 *             otherwise false.
 */
function is_all_ws( nod ) {
  // Use ECMA-262 Edition 3 String and RegExp features
  return !(/[^\t\n\r ]/.test(nod.textContent));
}

/**
 * Determine if a node should be ignored by the iterator functions.
 *
 * @param nod  An object implementing the DOM1 |Node| interface.
 * @return     true if the node is:
 *               1) A |Text| node that is all whitespace
 *               2) A |Comment| node
 *             and otherwise false.
 */
function is_ignorable( nod ) {
  return ( nod.nodeType == 8) ||                    // A comment node
         ( (nod.nodeType == 3) && is_all_ws(nod) ); // a text node, all ws
}

/**
 * Version of |previousSibling| that skips nodes that are entirely
 * whitespace or comments.  (Normally |previousSibling| is a property
 * of all DOM nodes that gives the sibling node, the node that is
 * a child of the same parent, that occurs immediately before the
 * reference node.)
 *
 * @param sib  The reference node.
 * @return     Either:
 *               1) The closest previous sibling to |sib| that is not
 *                  ignorable according to |is_ignorable|, or
 *               2) null if no such node exists.
 */
function node_before( sib ) {
  while ((sib = sib.previousSibling)) {
    if (!is_ignorable(sib)) return sib;
  }
  return null;
}

/**
 * Version of |nextSibling| that skips nodes that are entirely
 * whitespace or comments.
 *
 * @param sib  The reference node.
 * @return     Either:
 *               1) The closest next sibling to |sib| that is not
 *                  ignorable according to |is_ignorable|, or
 *               2) null if no such node exists.
 */
function node_after( sib ) {
  while ((sib = sib.nextSibling)) {
    if (!is_ignorable(sib)) return sib;
  }
  return null;
}

/**
 * Version of |lastChild| that skips nodes that are entirely
 * whitespace or comments.  (Normally |lastChild| is a property
 * of all DOM nodes that gives the last of the nodes contained
 * directly in the reference node.)
 *
 * @param par  The reference node.
 * @return     Either:
 *               1) The last child of |par| that is not
 *                  ignorable according to |is_ignorable|, or
 *               2) null if no such node exists.
 */
function last_child( par ) {
  var res = par.lastChild;
  while (res) {
    if (!is_ignorable(res)) return res;
    res = res.previousSibling;
  }
  return null;
}

/**
 * Version of |firstChild| that skips nodes that are entirely
 * whitespace and comments.
 *
 * @param par  The reference node.
 * @return     Either:
 *               1) The first child of |par| that is not
 *                  ignorable according to |is_ignorable|, or
 *               2) null if no such node exists.
 */
function first_child( par ) {
  var res = par.firstChild;
  while (res) {
    if (!is_ignorable(res)) return res;
    res = res.nextSibling;
  }
  return null;
}

/**
 * Version of |data| that doesn't include whitespace at the beginning
 * and end and normalizes all whitespace to a single space.  (Normally
 * |data| is a property of text nodes that gives the text of the node.)
 *
 * @param txt  The text node whose data should be returned
 * @return     A string giving the contents of the text node with
 *             whitespace collapsed.
 */
function data_of( txt ) {
  var data = txt.textContent;
  // Use ECMA-262 Edition 3 String and RegExp features
  data = data.replace(/[\t\n\r ]+/g, " ");
  if (data.charAt(0) == " ")
    data = data.substring(1, data.length);
  if (data.charAt(data.length - 1) == " ")
    data = data.substring(0, data.length - 1);
  return data;
}
```
Example
The following code demonstrates the use of the functions above. It iterates over the children of an element (whose children are all elements) to find the one whose text is "This is the third paragraph.", and then changes the class attribute and the contents of that paragraph.
```js
var cur = first_child(document.getElementById("test"));
while (cur) {
  if (data_of(cur.firstChild) == "This is the third paragraph.") {
    cur.className = "magic";
    cur.firstChild.textContent = "This is the magic paragraph.";
  }
  cur = node_after(cur);
}
```
ASP.NET MVC controllers are responsible for controlling the flow of application execution. When you make a browser request against an ASP.NET MVC application, a controller is responsible for returning a response to that request.
Controllers expose one or more actions. A controller action can return different types of action results to a browser. For example, a controller action might return a view, a controller action might return a file, or a controller action might redirect you to another controller action.
In this chapter, you learn how to create controllers and controller actions. You learn how to return different types of controller action results. You also learn how to use attributes to control when a particular controller action gets invoked. We complete this chapter by discussing how you can write unit tests for your controllers and actions.
Creating a Controller
The easiest way to create a controller is to right-click the Controllers folder in the Visual Studio Solution Explorer window and select the menu option Add, Controller. Selecting this menu option displays the Add Controller dialog (see Figure 1). If you enter the name ProductController then you will get the code in Listing 1.
*** Begin Warning ***
A controller name must end with the suffix Controller. If you forget to include the Controller suffix then you won’t be able to invoke the controller.
*** End Warning ***
Figure 1 – The Add Controller dialog
Listing 1 – Controllers\ProductController.cs [C#]

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Ajax;

namespace MvcApplication1.Controllers
{
    public class ProductController : Controller
    {
        //
        // GET: /Product/

        public ActionResult Index()
        {
            return View();
        }
    }
}
```
Listing 1 – Controllers\ProductController.vb [VB]

```vb
Public Class ProductController
    Inherits System.Web.Mvc.Controller

    '
    ' GET: /Product/

    Function Index() As ActionResult
        Return View()
    End Function

End Class
```
Notice that a controller is just a class (a Visual Basic or C# class) that inherits from the base System.Web.Mvc.Controller class.
Any public method exposed by a controller is exposed as a controller action. The controller class in Listing 1 exposes one action named Index(). The Index() action is the default action that is invoked on a controller when no explicit action is specified.
*** Begin Warning ***
By default, any public method contained in a controller class can be invoked by anyone located anywhere on the Internet. Be careful about the methods that you publically expose from a controller. If you want to prevent a public controller method from being invoked, you can decorate the method with the NonAction attribute.
*** End Warning ***
Notice that the Index() action returns an ActionResult. A controller action always returns an ActionResult (even if it doesn’t appear to be returning an ActionResult). The ActionResult determines the response returned to the browser. The Index() controller returns a view as its ActionResult.
A controller typically exposes multiple actions. You add actions to a controller by adding new methods to the controller. For example, the modified Product controller in Listing 2 exposes three actions named Index(), Help(), and Details().
Listing 2 – Controllers\ProductController.cs with additional methods [C#]

```csharp
using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class ProductController : Controller
    {
        //
        // GET: /Product/

        public ActionResult Index()
        {
            return View();
        }

        //
        // GET: /Product/Help

        public ActionResult Help()
        {
            return View();
        }

        //
        // GET: /Details/1

        public ActionResult Details(int Id)
        {
            return View();
        }
    }
}
```
Listing 2 – Controllers\ProductController.vb with additional methods [VB]

```vb
Public Class ProductController
    Inherits System.Web.Mvc.Controller

    '
    ' GET: /Product/

    Function Index() As ActionResult
        Return View()
    End Function

    '
    ' GET: /Product/Help

    Function Help() As ActionResult
        Return View()
    End Function

    '
    ' GET: /Details/1

    Function Details(ByVal id As Integer) As ActionResult
        Return View()
    End Function

End Class
```
Here’s what you would type into a browser address bar to invoke the different actions:
· /Product/Index — Invokes the ProductController Index() action.
· /Product — Invokes the ProductController Index() action.
· /Product/Help — Invokes the ProductController Help() action.
· /Product/Details/34 — Invokes the ProductController Details() action with the value 34 for the Id parameter.
You invoke a controller action by following a particular pattern that looks like this:
{controller}/{action}/{id}
Notice that when you invoke a controller, you don’t include the Controller suffix in the URL. For example, you invoke the Product controller with the URL /Product/Index and not the URL /ProductController/Index.
The default controller action is the Index() action. Therefore, the URL /Product/Index and the URL /Product both invoke the product controller Index() action.
When you invoke a controller, you can supply an optional Id parameter. For example, the Details() action accepts an Id parameter. The URL /Product/Details/2 invokes the Details() action and passes the value 2 for the Id parameter. The name of the parameter is important. You must name the parameter Id.
*** Begin Note ***
The default pattern for invoking controller actions is defined by the default route in the Global.asax file. If you want to modify the URL pattern for invoking actions then you can modify this default route. To learn more about creating custom routes, see Chapter 9, Understanding Routing.
*** End Note ***
Returning Action Results
A controller action always returns an ActionResult. The ASP.NET MVC framework includes the following types of ActionResults:
· ViewResult – Represents an ASP.NET MVC view.
· PartialViewResult – Represents a fragment of an ASP.NET MVC view.
· RedirectResult – Represents a redirection to another controller action or URL.
· ContentResult – Represents raw content sent to the browser.
· JsonResult – Represents a JavaScript Object Notation result (Useful in Ajax scenarios).
· FileResult – Represents a file to be downloaded.
· EmptyResult – Represents no result returned by an action.
· HttpUnauthorizedResult – Represents an HTTP Unauthorized status code.
· JavaScriptResult – Represents a JavaScript file.
· RedirectToRouteResult – Represents a redirection to another controller action or URL using route values.
Typically, you don’t directly return an ActionResult from a controller action. Instead, you call a controller method that returns an ActionResult. For example, if you want to return a ViewResult then you call the controller View() method.
Here’s a list of controller methods that return ActionResults:
· View() – Returns a ViewResult.
· PartialView() – Returns a PartialViewResult.
· RedirectToAction() – Returns a RedirectToRouteResult.
· Redirect() – Returns a RedirectResult.
· Content() – Returns a ContentResult.
· Json() – Returns a JsonResult.
· File() – Returns a FileResult.
· JavaScript() – Returns a JavaScriptResult.
· RedirectToRoute() – Returns a RedirectToRouteResult.
In the following sections, we’ll examine several of these ActionResults in more detail.
*** Begin Note ***
We examine partial view results (AKA view user controls or partials) in Chapter 10, Understanding View Master Pages and View User Controls.
*** End Note ***
Returning a View Result
The most common ActionResult returned by a controller action is a ViewResult. A ViewResult represents an ASP.NET MVC view. You return a ViewResult when you want to return HTML to the browser.
The Details() action exposed by the Customer controller in Listing 3 returns a ViewResult.
Listing 3 – Controllers\CustomerController.cs [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class CustomerController : Controller
    {
        public ActionResult Details()
        {
            return View();
        }
    }
}
Listing 3 – Controllers\CustomerController.vb [VB]

Public Class CustomerController
    Inherits System.Web.Mvc.Controller

    Function Details() As ActionResult
        Return View()
    End Function

End Class
The Details() method calls the View() method to return a ViewResult. There are two ways that you can specify a view when calling the View() method: you can specify a view implicitly or explicitly.
In Listing 3, the name of the view is specified implicitly. The ASP.NET MVC framework determines the name of the view from the name of the action. In this case, the action returns a view at the following location:
Views\Customer\Details.aspx
The ASP.NET MVC framework follows this pattern to determine the location of a view:
Views\{controller}\{action}.aspx
If you prefer, you can specify the name of a view explicitly. In Listing 4, the View() method includes an explicit view name.
Listing 4 – Controllers\CustomerController.cs with explicit view [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class CustomerController : Controller
    {
        public ActionResult Details()
        {
            return View("Details");
        }
    }
}
Listing 4 – Controllers\CustomerController.vb with explicit view [VB]

Public Class CustomerController
    Inherits System.Web.Mvc.Controller

    Function Details() As ActionResult
        Return View("Details")
    End Function

End Class
The View() method in Listing 4 returns the very same view. However, it is explicit about the view name. Notice that you don’t include the .aspx extension when providing the name of the view.
*** Begin Tip ***
If you plan to build unit tests for your ASP.NET MVC application then it is a good idea to be explicit about your view names. Otherwise, you cannot test to see if the view with the right view name has been returned from a controller action.
*** End Tip ***
You also can provide an absolute path to a view. If you call View(“~/Details.aspx”) from the Details() action then a view from the following location is returned:
Details.aspx
Notice that when you provide an absolute path, you provide the .aspx extension.
There are multiple overloads of the View() method that accept different parameters. Here is a list of all of the possible parameters that you can pass to the View() method:
· viewName – The name of the view (or path to the view).
· masterName – The name of a view master page.
· model – The model class passed to the view.
We discuss view master pages in Chapter 10, Understanding View Master Pages and View User Controls. We discuss passing models to views in the next chapter, Understanding Views.
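As a quick illustration of the model parameter, the following sketch passes a model object to an explicitly named view. The Product class and the _repository field are hypothetical stand-ins for whatever model and data access code your application uses:

```csharp
public ActionResult Details(int id)
{
    // Retrieve the model (hypothetical repository call)
    Product product = _repository.GetProduct(id);

    // Pass both an explicit view name and the model to View()
    return View("Details", product);
}
```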
Returning a Redirect Result
Often, you’ll need to redirect from one controller action to a second controller action. You can use the RedirectToAction() method to return a RedirectResult that redirects a user from one controller action to another.
For example, the Widget controller in Listing 5 contains a Details() action. If the Details() action is invoked without a value for the id parameter, then the user is redirected to the Index() action.
Listing 5 – Controllers\WidgetController.cs [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class WidgetController : Controller
    {
        //
        // GET: /Widget/
        public ActionResult Index()
        {
            return View();
        }

        //
        // GET: /Widget/Details/1
        public ActionResult Details(int? id)
        {
            if (!id.HasValue)
                return RedirectToAction("Index");
            return View();
        }
    }
}
Listing 5 – Controllers\WidgetController.vb [VB]

Public Class WidgetController
    Inherits System.Web.Mvc.Controller

    Function Index() As ActionResult
        Return View()
    End Function

    Function Details(ByVal id As Integer?) As ActionResult
        If Not id.HasValue Then
            Return RedirectToAction("Index")
        End If
        Return View()
    End Function

End Class
*** Begin Note ***
The id parameter in Listing 5 is a nullable type. A nullable integer can have any value of an integer or the value null. You create a nullable type by placing a question mark ? after the type keyword.
*** End Note ***
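A short sketch makes the behavior of a nullable integer concrete:

```csharp
int? id = null;              // a nullable integer can hold null
bool hasValue = id.HasValue; // false while the value is null

id = 42;
int value = id.Value;        // 42 once a value has been assigned
```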
There are multiple overloads of the RedirectToAction() method. Here’s a list of all of the possible parameters that you can use with the RedirectToAction() method:
· actionName – The name of a controller action.
· controllerName – The name of a controller.
· routeValues – The route values passed to the action.
You can use the controllerName parameter to redirect from an action in one controller to another controller. When you specify the controllerName, you do not include the Controller suffix. For example, use Product and not ProductController like this:
[C#]

return RedirectToAction("Index", "Product");

[VB]

Return RedirectToAction("Index", "Product")
Providing a value for routeValues is particularly important when you need to pass an id to an action. For example, imagine that you want to redirect to the Details() action from another action and pass a value for the id parameter. In that case, you can call the RedirectToAction() method like this:
[C#]

return RedirectToAction("Details", new {id=53});

[VB]

Return RedirectToAction("Details", New With {.id = 53})
This call to the RedirectToAction() method passes the value 53 as the id parameter to the Details() action.
*** Begin Note ***
The RedirectToAction() method returns a 302 Found HTTP status code to the browser to perform the redirect to the new action. One advantage of performing a browser redirect is that it updates the browser address bar with the new URL.
*** End Note ***
Returning a Content Result
The Say() action exposed by the Hello controller in Listing 6 does not return an ActionResult. Instead, the action returns a string. If you invoke this action then the string is rendered to your browser (see Figure 2).
Figure 2 – Results of invoking Say() action
Listing 6 – Controllers\HelloController.cs [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class HelloController : Controller
    {
        public string Say()
        {
            return "Hello!";
        }
    }
}
Listing 6 – Controllers\HelloController.vb [VB]

Public Class HelloController
    Inherits System.Web.Mvc.Controller

    Function Say() As String
        Return "Hello!"
    End Function

End Class
An action method can also return DateTime values, integer values, or any type of values from the .NET framework.
Behind the scenes, the ASP.NET MVC framework converts any value that is not an ActionResult into a ContentResult: it calls the ToString() method on the value and wraps the resulting string in a ContentResult.
If you prefer, you can explicitly return a ContentResult like this:
[C#]
public ActionResult Say()
{
    return Content("Hello!");
}
[VB]
Function Say() As ActionResult
    Return Content("Hello!")
End Function
There are multiple overloads of the Content() method. Here is a list of all of the possible parameters that you can pass to this method:
· string – The string to render to the browser.
· contentType – The MIME type of the content (defaults to text/html).
· contentEncoding – The text encoding of the content (for example, Unicode or ASCII).
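For example, the contentType parameter lets you return raw XML instead of HTML. This is a minimal sketch; the action name and XML payload are made up for illustration:

```csharp
public ActionResult SiteStatus()
{
    // Return raw XML with an explicit MIME type instead of the
    // default text/html
    return Content("<status>OK</status>", "text/xml");
}
```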
Returning a JSON Result
JavaScript Object Notation (JSON) was invented by Douglas Crockford as a lightweight alternative to XML appropriate for sending data across the Internet in AJAX applications. For example, you can convert a set of database records into a JSON representation and pass the data from the server to the browser.
*** Begin Note ***
You can learn more about JSON by visiting JSON.org.
*** End Note ***
You return JSON from an action by calling the Json() method. For example, the controller in Listing 7 returns a collection of quotations.
Listing 7 – Controllers\QuotationController.cs [C#]

using System.Collections.Generic;
using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class QuotationController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult List()
        {
            var quotes = new List<string>
            {
                "Look before you leap",
                "The early bird gets the worm",
                "All hat, no cattle"
            };
            return Json(quotes);
        }
    }
}
Listing 7 – Controllers\QuotationController.vb [VB]

Public Class QuotationController
    Inherits System.Web.Mvc.Controller

    Function Index() As ActionResult
        Return View()
    End Function

    Function List() As ActionResult
        Dim quotes As New List(Of String)()
        quotes.Add("Look before you leap")
        quotes.Add("The early bird gets the worm")
        quotes.Add("All hat, no cattle")
        Return Json(quotes)
    End Function

End Class
*** Begin Note ***
Behind the scenes, the Json() method uses a class in the .NET framework called the JavaScriptSerializer class to serialize an object into a JSON representation. You can control how this class serializes objects by registering custom converters.
*** End Note ***
When the List() action is invoked, the action returns the following JSON representation of the collection of quotations:
["Look before you leap", "The early bird gets the worm", "All hat, no cattle"]
You can invoke the List() action from a view by performing an Ajax call against the server. The view in Listing 8 grabs the collection of quotations and randomly displays one of them.
Figure 3 – Using JSON to retrieve quotations.
Listing 8 – Views\Quotation\Index.aspx

<%@ Page ... %>

<asp:Content ...>
    <script src="../../Scripts/jquery-1.2.6.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(getQuote);

        function getQuote() {
            $.getJSON("Quotation/List", showQuote);
        }

        function showQuote(data) {
            var index = Math.floor(Math.random() * 3);
            $("#quote").text(data[index]);
        }
    </script>

    <p id="quote"></p>
    <button onclick="getQuote()">Get Quote</button>
</asp:Content>
*** Begin Note ***
The view in Listing 8 uses jQuery to retrieve the JSON result from the server. We discuss jQuery in detail in Chapter 17, Using jQuery.
*** End Note ***
The Json() method has several overloads and supports the following parameters:
· data – The content to serialize.
· contentType – The MIME type of the content (defaults to application/json).
· contentEncoding — The text encoding of the content (for example, Unicode or ASCII).
Returning a File Result
You can return a file from an action. For example, you can return an image file, a Microsoft Word file, or a Microsoft Excel file.
The controller in Listing 9 exposes two actions named Index() and Download(). The Index() action displays a view with a link to the Download() action. When you click the link, you are prompted with a dialog to view or save the file (see Figure 4).
Figure 4 – Downloading a file
Listing 9 – Controllers\ContentManagerController.cs [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class ContentManagerController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult Download()
        {
            return File(
                "~/Content/CompanyPlans.docx",
                "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
                "CompanyPlans.docx");
        }
    }
}
Listing 9 – Controllers\ContentManagerController.vb [VB]

Public Class ContentManagerController
    Inherits System.Web.Mvc.Controller

    Function Index() As ActionResult
        Return View()
    End Function

    Function Download() As ActionResult
        Return File( _
            "~/Content/CompanyPlans.docx", _
            "application/vnd.openxmlformats-officedocument.wordprocessingml.document", _
            "CompanyPlans.docx")
    End Function

End Class
The Download() action returns a Microsoft Word document named CompanyPlans.docx. Notice that the File() method requires three parameters: the path to the file, the content type of the file, and the name of the file. The proper MIME type for a Microsoft Word DOCX file is:
application/vnd.openxmlformats-officedocument.wordprocessingml.document
The File() method has multiple overloads and accepts the following parameters:
· filename – The path to the file to download.
· contentType – The MIME type of the file to download.
· fileDownloadName – The name of the file as it will appear in the browser dialog.
· fileContents – Instead of providing the path to the file to download, you can provide the actual file contents as a Byte array.
· fileStream – Instead of providing the path to the file to download, you can provide the actual file contents as a file stream.
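For example, the fileContents overload lets you return a file that is generated in memory rather than read from disk. This is a sketch; the action name and report text are made up for illustration:

```csharp
public ActionResult Report()
{
    // Generate the file contents in memory
    byte[] contents = System.Text.Encoding.UTF8.GetBytes("Sales report...");

    // Return the bytes as a downloadable text file
    return File(contents, "text/plain", "Report.txt");
}
```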
*** Begin Note ***
The File() method uses the HTTP Content-Disposition header to set the file download name.
*** End Note ***
Controlling How Actions are Invoked
The default algorithm for how the ASP.NET MVC framework invokes actions is pretty simple. If you type /Product/Details, for example, then the Details() method of the ProductController class is executed.
However, things can quickly become more complicated. What happens when you have multiple methods with the same name? How do you invoke an action when posting form data but not otherwise? How do you invoke a particular action when an Ajax request is made?
In this section, you learn how to use the AcceptVerbs, ActionName, and ActionMethodSelector attributes to specify when a particular action gets invoked.
Using AcceptVerbs
The AcceptVerbs attribute enables you to prevent an action from being invoked unless a particular HTTP operation is performed. For example, you can use the AcceptVerbs attribute to prevent an action from being invoked unless an HTTP POST operation is performed.
The Employee controller in Listing 10 exposes two actions named Create(). The first Create() action is used to display an HTML form for creating a new employee. The second Create() action inserts the new employee into the database.
Both Create() methods are decorated with the AcceptVerbs attribute. The first Create() action can only be invoked by an HTTP GET operation and the second Create() action can only be invoked by an HTTP POST operation.
Listing 10 – Controllers\EmployeeController.cs [C#]

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class EmployeeController : Controller
    {
        private EmployeeRepository _repository = new EmployeeRepository();

        // GET: /Employee/
        public ActionResult Index()
        {
            return View();
        }

        // GET: /Employee/Create
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult Create()
        {
            return View();
        }

        // POST: /Employee/Create
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(Employee employeeToCreate)
        {
            try
            {
                _repository.InsertEmployee(employeeToCreate);
                return RedirectToAction("Index");
            }
            catch
            {
                return View();
            }
        }

        // DELETE: /Employee/Delete/1
        [AcceptVerbs(HttpVerbs.Delete)]
        public ActionResult Delete(int id)
        {
            _repository.DeleteEmployee(id);
            return Json(true);
        }
    }
}
Listing 10 – Controllers\EmployeeController.vb [VB]

Public Class EmployeeController
    Inherits System.Web.Mvc.Controller

    Private _repository As New EmployeeRepository()

    ' GET: /Employee/
    Function Index() As ActionResult
        Return View()
    End Function

    ' GET: /Employee/Create
    <AcceptVerbs(HttpVerbs.Get)> _
    Function Create() As ActionResult
        Return View()
    End Function

    ' POST: /Employee/Create
    <AcceptVerbs(HttpVerbs.Post)> _
    Function Create(ByVal employeeToCreate As Employee) As ActionResult
        Try
            _repository.InsertEmployee(employeeToCreate)
            Return RedirectToAction("Index")
        Catch
            Return View()
        End Try
    End Function

    ' DELETE: /Employee/Delete/1
    <AcceptVerbs(HttpVerbs.Delete)> _
    Function Delete(ByVal id As Integer) As ActionResult
        _repository.DeleteEmployee(id)
        Return Json(True)
    End Function

End Class
Most people are familiar with HTTP GET and HTTP POST operations. You perform an HTTP GET operation whenever you request a page from a website by typing the address of the page in your web browser. You perform an HTTP POST operation when you submit an HTML form that has a method=”post” attribute.
Most people don’t realize that the HTTP protocol supports a number of additional types of HTTP operations:
· OPTIONS – Returns information about the communication options available.
· GET – Returns whatever information is identified by the request.
· HEAD – Performs the same operation as GET without returning the message body.
· POST – Posts new information or updates existing information.
· PUT – Stores the supplied information at the requested URL, creating or replacing a resource.
· DELETE – Deletes information.
· TRACE – Performs a message loop back.
· CONNECT – Used for SSL tunneling.
*** Begin Note ***

The HTTP operations are defined as part of the HTTP 1.1 standard (RFC 2616).
*** End Note ***
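Because HttpVerbs is a flags enumeration, you can also allow several HTTP operations on a single action by combining values. A minimal sketch:

```csharp
// Allow both GET and HEAD requests to invoke this action
[AcceptVerbs(HttpVerbs.Get | HttpVerbs.Head)]
public ActionResult Index()
{
    return View();
}
```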
You can perform these additional HTTP operations when performing Ajax requests. The controller in Listing 10 includes a Delete() action that can be invoked only with an HTTP DELETE operation. The view in Listing 11 includes a delete link that uses Ajax to perform an HTTP DELETE operation.
Listing 11 – Views\Employee\Index.aspx [C#]

<%= Ajax.ActionLink(
    "Delete",      // link text
    "Delete",      // action name
    new {id=39},   // route values
    new AjaxOptions {HttpMethod="DELETE", Confirm="Delete Employee?"}
) %>
Listing 11 – Views\Employee\Index.aspx [VB]

<%= Ajax.ActionLink( _
    "Delete", _
    "Delete", _
    New With {.id = 39}, _
    New AjaxOptions With {.HttpMethod = "DELETE", .Confirm = "Delete Employee?"} _
)%>
<%= DateTime.Now %>
In Listing 11, the Ajax.ActionLink() helper renders a link that performs an HTTP DELETE operation. The link deletes the employee with Id 39. You can verify that the link performs an HTTP DELETE operation in Firebug (see Figure 5).
Figure 5 – Performing an HTTP DELETE operation
*** Begin Note ***
Firebug is an essential tool for debugging Ajax applications. It is a free Mozilla Firefox extension.
*** End Note ***
Using ActionName
The ActionName attribute enables you to expose an action with a different name than its method name. There are two situations in which the ActionName attribute is useful.
First, when a controller has overloaded methods, you can use the ActionName attribute to distinguish the two methods. In other words, you can use the ActionName attribute to expose two methods with the same name as actions with different names.
For example, imagine that you have created a Product controller that has two overloaded methods named Details(). The first Details() method accepts an id parameter and the second Details() method does not. In that case, you can use the ActionName attribute to distinguish the two Details() methods by exposing the two Details() methods with different action names.
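This scenario can be sketched like this (the action and view names here are hypothetical, chosen only to show the disambiguation):

```csharp
public class ProductController : Controller
{
    // Exposed as /Product/AllDetails because of the ActionName attribute
    [ActionName("AllDetails")]
    public ActionResult Details()
    {
        return View("AllDetails");
    }

    // Exposed as /Product/Details/{id}
    public ActionResult Details(int id)
    {
        return View("Details");
    }
}
```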
Second, using the ActionName attribute is useful when a controller has methods with different names and you want to expose these methods as actions with the same name. For example, the controller in Listing 12 exposes two actions named Edit() that accept the same parameter.
Listing 12 – Controllers\MerchandiseController.cs [C#]

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class MerchandiseController : Controller
    {
        private MerchandiseRepository _repository = new MerchandiseRepository();

        // GET: /Merchandise/Edit
        [ActionName("Edit")]
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult Edit_GET(Merchandise merchandiseToEdit)
        {
            return View(merchandiseToEdit);
        }

        // POST: /Merchandise/Edit
        [ActionName("Edit")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit_POST(Merchandise merchandiseToEdit)
        {
            try
            {
                _repository.Edit(merchandiseToEdit);
                return RedirectToAction("Edit");
            }
            catch
            {
                return View();
            }
        }
    }
}
Listing 12 – Controllers\MerchandiseController.vb [VB]

Public Class MerchandiseController
    Inherits System.Web.Mvc.Controller

    Private _repository As New MerchandiseRepository()

    ' GET: /Merchandise/Edit
    <ActionName("Edit")> _
    <AcceptVerbs(HttpVerbs.Get)> _
    Function Edit_GET(ByVal merchandiseToEdit As Merchandise) As ActionResult
        Return View(merchandiseToEdit)
    End Function

    ' POST: /Merchandise/Edit
    <ActionName("Edit")> _
    <AcceptVerbs(HttpVerbs.Post)> _
    Function Edit_POST(ByVal merchandiseToEdit As Merchandise) As ActionResult
        Try
            _repository.Edit(merchandiseToEdit)
            Return RedirectToAction("Edit")
        Catch
            Return View()
        End Try
    End Function

End Class
You can’t have two methods with the same name and the same parameters in the same class. However, you can have two actions that have the same name and the same parameters.
The two Edit() actions in Listing 12 are distinguished by the AcceptVerbs attribute. The first Edit() action can be invoked only by an HTTP GET operation and the second Edit() action can be invoked only by an HTTP POST operation. The ActionName attribute enables you to expose these two actions with the same name.
Using ActionMethodSelector
You can build your own attributes that you can apply to controller actions to control when the controller actions are invoked. You build your own attributes by deriving a new attribute from the abstract ActionMethodSelectorAttribute class.
This is an extremely simple class. It has a single method that you must implement named IsValidForRequest(). If this method returns false, then the action method won’t be invoked.
You can use any criteria that you want when implementing the IsValidForRequest() method including the time of day, a random number generator, or the current temperature outside. The AjaxMethod attribute in Listing 13 is a more practical example of how you can use the ActionMethodSelector attribute. This attribute prevents a method from being invoked when the request is not an Ajax request.
Listing 13 – Selectors\AjaxMethodAttribute.cs [C#]

using System.Reflection;
using System.Web.Mvc;

namespace MvcApplication1.Selectors
{
    public class AjaxMethod : ActionMethodSelectorAttribute
    {
        public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
        {
            return controllerContext.HttpContext.Request.IsAjaxRequest();
        }
    }
}
Listing 13 – Selectors\AjaxMethodAttribute.vb [VB]

Imports System.Reflection

Public Class AjaxMethodAttribute
    Inherits ActionMethodSelectorAttribute

    Public Overrides Function IsValidForRequest(ByVal controllerContext As ControllerContext, ByVal methodInfo As MethodInfo) As Boolean
        Return controllerContext.HttpContext.Request.IsAjaxRequest
    End Function

End Class
The selector in Listing 13 simply returns the value of the IsAjaxRequest() method as its selection criterion.
The controller in Listing 14 illustrates how you can use the AjaxMethod attribute.
Listing 14 – Controllers\NewsController.cs [C#]

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using MvcApplication1.Selectors;

namespace MvcApplication1.Controllers
{
    public class NewsController : Controller
    {
        private readonly List<string> _news = new List<string>();
        private Random _rnd = new Random();

        public NewsController()
        {
            _news.Add("Moon explodes!");
            _news.Add("Stock market up 200 percent!");
            _news.Add("Talking robot created!");
        }

        public ActionResult Index()
        {
            var selectedIndex = _rnd.Next(_news.Count);
            ViewData.Model = _news[selectedIndex];
            return View();
        }

        [AjaxMethod]
        [ActionName("Index")]
        public string Index_AJAX()
        {
            var selectedIndex = _rnd.Next(_news.Count);
            return _news[selectedIndex];
        }
    }
}
Listing 14 – Controllers\NewsController.vb [VB]

Public Class NewsController
    Inherits System.Web.Mvc.Controller

    Private ReadOnly _news As New List(Of String)
    Private _rnd As New Random()

    Sub New()
        _news.Add("Moon explodes!")
        _news.Add("Stock market up 200 percent!")
        _news.Add("Talking robot created!")
    End Sub

    Function Index() As ActionResult
        Dim selectedIndex = _rnd.Next(_news.Count)
        ViewData.Model = _news(selectedIndex)
        Return View()
    End Function

    <AjaxMethod()> _
    <ActionName("Index")> _
    Function Index_AJAX() As String
        Dim selectedIndex = _rnd.Next(_news.Count)
        Return _news(selectedIndex)
    End Function

End Class
The controller in Listing 14 exposes two actions named Index(). The first Index() action is intended to be invoked by a normal browser request. The second action is intended to be invoked by an Ajax request.
The AjaxMethod attribute is applied to the second Index() action. If this action were not decorated with the AjaxMethod attribute then you would get an Ambiguous Match Exception because the ASP.NET MVC framework would not be able to decide which of the two actions to execute (see Figure 6).
Figure 6 – An Ambiguous Match Exception
The view in Listing 15 uses the Ajax.ActionLink() helper method to render a Get News link for displaying the news. If you are using an uplevel browser – a browser that supports basic JavaScript – then clicking the link performs an Ajax request against the server. The Index() method decorated with the AjaxMethod attribute is invoked and the page is updated without performing a postback.
If, on the other hand, you are using a downlevel browser – a browser that does not support basic JavaScript – then clicking the Get News link performs a normal postback. The page still gets updated with a news item, but the user must undergo the awful experience of a postback (see Figure 7).
Figure 7 – Displaying the news
Listing 15 – Views\News\Index.aspx [C#]

Listing 15 – Views\News\Index.aspx [VB]
Handling Unknown Actions
A controller has a special method named HandleUnknownAction(). This method is called automatically when a controller cannot find an action that matches a browser request. For example, if you request the URL /Product/DoSomethingCrazy and the Product controller does not have an action named DoSomethingCrazy() then the Product controller HandleUnknownAction() method is invoked.
By default, this method throws a 404 Resource Not Found HTTP exception. However, you can override this method and do anything you want. For example, the controller in Listing 16 displays a custom error message.
Listing 16 – Controllers\CatalogController.cs [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class CatalogController : Controller
    {
        public ActionResult Create()
        {
            return View();
        }

        public ActionResult Delete(int id)
        {
            return View();
        }

        protected override void HandleUnknownAction(string actionName)
        {
            ViewData["actionName"] = actionName;
            View("Unknown").ExecuteResult(this.ControllerContext);
        }
    }
}
Listing 16 – Controllers\CatalogController.vb [VB]

Public Class CatalogController
    Inherits System.Web.Mvc.Controller

    Function Create() As ActionResult
        Return View()
    End Function

    Function Delete(ByVal id As Integer) As ActionResult
        Return View()
    End Function

    Protected Overrides Sub HandleUnknownAction(ByVal actionName As String)
        ViewData("actionName") = actionName
        View("Unknown").ExecuteResult(Me.ControllerContext)
    End Sub

End Class
If you request the URL /Catalog/Create or /Catalog/Delete then the Catalog controller will return the Create or Delete view. If you request a URL that contains an unknown action such as /Catalog/Wow or /Catalog/Eeeks then the HandleUnknownAction() method executes.
In Listing 16, the HandleUnknownAction() method adds the name of the action to view data and then renders a view named Unknown (see Figure 8).
Figure 8 – Displaying the Unknown view
Testing Controllers and Actions
The ASP.NET MVC team worked hard to make sure that controller actions were extremely easy to test. If you want to test a controller action then you simply need to instantiate the controller and call the action method.
For example, the controller in Listing 17 exposes two actions named Index() and Details(). If you invoke the Details() action without passing a value for the id parameter then you should be redirected to the Index() action.
Listing 17 – Controllers\PersonController.cs [C#]

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class PersonController : Controller
    {
        public ActionResult Index()
        {
            return View("Index");
        }

        public ActionResult Details(int? id)
        {
            if (!id.HasValue)
                return RedirectToAction("Index");
            return View("Details");
        }
    }
}
Listing 17 – Controllers\PersonController.vb [VB]

Public Class PersonController
    Inherits System.Web.Mvc.Controller

    Function Index() As ActionResult
        Return View("Index")
    End Function

    Function Details(ByVal id As Integer?) As ActionResult
        If Not id.HasValue Then
            Return RedirectToAction("Index")
        End If
        Return View("Details")
    End Function

End Class
*** Begin Warning ***
When returning a view, you must be explicit about the view name or you won’t be able to verify the name of the view in a unit test. For example, in Listing 17, the Index() method returns View(“Index”) and not View().
*** End Warning ***
The unit tests in Listing 18 illustrate how you can test the actions exposed by the Person controller. The first unit test, named DetailsWithId(), verifies that calling the Details() method with a value for the id parameter returns the Details view.
The second unit test, named DetailsWithoutId(), verifies that calling the Details() method with no value for the id parameter causes a RedirectToRouteResult to be returned.
Listing 18 – Controllers\PersonControllerTest.cs [C#]

using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MvcApplication1.Controllers;

namespace MvcApplication1.Tests.Controllers
{
    [TestClass]
    public class PersonControllerTest
    {
        [TestMethod]
        public void DetailsWithId()
        {
            // Arrange
            var controller = new PersonController();

            // Act
            var result = (ViewResult)controller.Details(33);

            // Assert
            Assert.AreEqual("Details", result.ViewName);
        }

        [TestMethod]
        public void DetailsWithoutId()
        {
            // Arrange
            var controller = new PersonController();

            // Act
            var result = (RedirectToRouteResult)controller.Details(null);

            // Assert
            Assert.AreEqual("Index", result.RouteValues["action"]);
        }
    }
}
Listing 18 – Controllers\PersonControllerTest.vb [VB]

Imports Microsoft.VisualStudio.TestTools.UnitTesting
Imports System.Web.Mvc

<TestClass()> Public Class PersonControllerTest

    <TestMethod()> _
    Public Sub DetailsWithId()
        ' Arrange
        Dim controller As New PersonController()

        ' Act
        Dim result As ViewResult = controller.Details(33)

        ' Assert
        Assert.AreEqual("Details", result.ViewName)
    End Sub

    <TestMethod()> _
    Public Sub DetailsWithoutId()
        ' Arrange
        Dim controller As New PersonController()

        ' Act
        Dim result As RedirectToRouteResult = controller.Details(Nothing)

        ' Assert
        Assert.AreEqual("Index", result.RouteValues("action"))
    End Sub

End Class
*** Begin Note ***
To learn more about creating and running unit tests, see Appendix B of this book.
*** End Note ***
Summary
This chapter was devoted to the topic of ASP.NET MVC controllers. The goal of this chapter was to provide an in-depth explanation of how you can create controllers and controller actions.
In the first part of this chapter, you were provided with an overview of the different types of ActionResults that can be returned from a controller action. You learned how to return views, redirect users to other actions, return JSON, and return downloadable files.
Next, we examined the different attributes that you can apply to a controller action to control when the controller action is invoked. You learned how to use the AcceptVerbs and ActionName attributes. You also learned how to create a custom ActionSelect attribute that enables you to execute an action only within the context of an Ajax request.
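The attribute combination described above can be sketched as follows. This is a hedged example; the controller and method names are invented for illustration:

```csharp
using System.Web.Mvc;

public class SampleController : Controller
{
    // Invoked only for GET requests
    [AcceptVerbs(HttpVerbs.Get)]
    public ActionResult Create()
    {
        return View();
    }

    // Invoked only for POST requests; exposed under the action
    // name "Create" even though the method name differs
    [AcceptVerbs(HttpVerbs.Post)]
    [ActionName("Create")]
    public ActionResult CreatePost()
    {
        // ... save the submitted data ...
        return RedirectToAction("Index");
    }
}
```

This pattern lets one URL (/Sample/Create) serve both the form (GET) and the form submission (POST) while keeping the two code paths in separate methods.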
Finally, you learned how to build unit tests for your controllers. You learned how to test whether a controller returns different ActionResults such as a ViewResult or a RedirectToRouteResult.
Hi,
I’ve already read the first three parts of the book that you have blogged; everything is very good, and I’m thinking about buying this book.
Thanks!
Great article Stephen! I’m sure your book is going to be a great success.
Hello Stephen,
“A controller name must end with the suffix Controller. If you forget to include the Controller suffix then you won’t be able to invoke the controller.”
If I forget, the “Controller” suffix should be added automatically; that support should be provided. What do you think about that?
Thanks
Great articles Stephen, I look forward on buying the book. Your explanations about MVC are so easy to assimilate. Keep up the good work 🙂
Great stuff, can’t wait for the book already!
Keep these articles flowing…
Hello Stephen…
Here are some recommendations I think you can evaluate. Excuse my English.
1. In the second Warning you can explain how “decorate the method with the NonAction attribute” or reference where in the book you explain it.
2. In Listing 2, in the third method, I think the GET comment should be:
// GET: /Product/Details/1
3. Read it: “The most common ActionResult returned by a controller action is a ViewResult. A ViewResult represents an ASP.NET MVC view. You return a ViewResult when you want to return HTML to the browser.”
Isn’t it better to write something like this: “The most common ActionResult returned by a controller action is a ViewResult. It represents an ASP.NET MVC view and you return it when you want to return HTML to the browser.”
4. In Listing 3 you use the Details view as the example, but it also appeared in Listing 2. I know it is another controller, but it can confuse. Wouldn’t it be better to use another method in this listing?
Also in the next paragraphs you said that I can explicit call the Details View. People can ask: Is there no problem if I have another Details view (the Products one). So maybe you can explain that there is no problem if you have views or master views with the same name. MVC knows which one to work with.
5. Also you mention that we can call this: View(“~/Details.aspx”). Why can you do this if the file is not in the Views folder? You could explain why it works and in which cases it is recommended.
6. In the note after Listing 5 you mention the nullable types. The example is for int types. It’ll be good to mention which types can be nullable (string?, float?, All?). Can I use any type as a nullable parameter?
7. After Listing 9, you explain the File method. The DOCX example mentions the proper MIME type. Where did you get the MIME type? Google it? What if I need to return a PDF or a ZIP or a DOC; where do I get those MIME types? Google?
8. Wowww… In Listing 10, how can the Create action receive an Employee as a parameter? How do I convert form values in the view into the Employee object? Maybe a simpler example is better here (parameters: string name, string email)… Or maybe you can explain the transformation between form values and an object.
9. In the Listing 16 you have this line: View(“Unknown”).ExecuteResult(this.ControllerContext);
You don’t explain the part: ExecuteResult(…). Is it not necessary?
Well… The book is getting great. Thanks!!!
I like the way this book is shaping up!
My comment is that I’d like the code examples to build on each other, as it stands I find I have to keep scrolling back up the page to remember why or how something is done, though I realise this could quickly become a loooong post that way.
Hi Stephen!
I’m also looking forward to your book. I like the style so far.
I think I found a typo between listing 4 and 5.
…
shouldn’t this be
Views\Customer\Subfolder\Details.aspx
kind regards,
Dick.
@Engr, @Dick — Thanks for your detailed feedback — these are good points.
These sentences took me a few reads before I got it. “You can’t have two methods with the same name and the same parameters in the same class. However, you can have two actions that have the same name and the same parameters.” Can you emphasize the words “methods” and “actions” in those two sentences so that it is very visually apparent.
Excellent chapter — it answered a couple questions that had been nagging me before proceeding on more complex projects with MVC.
Regarding the contentType in the various MIME return types; I have to agree with the point @Engr raised, but would really like to take it one step further.
Why not provide the methods with an override that would have contentType as an enumerated value?
Looking forward to the next chapter.
Stephen,
I finally decided to get into ASP’s MVC framework a couple of months ago. I’ve been waiting for it to mature enough so that what I learned wasn’t made obsolete by the next beta iteration.
After watching your videos on ASP.net I decided to find a book or two to dive deeper in. Not surprisingly, there was nothing available. I pre-ordered ASP.NET MVC in Action from Manning and got the beta (or MEAP as they call it) PDF to read.
Out of all the hundreds of computer-related books I have read, that book is shaping up to be the worst ever.
After you had posted your beta chapters online, I knew I had made a big mistake by pre-ordering the Manning book instead of yours. I really love your style of conveyance and cannot wait to read your book.
Keep up the good work and thank you for what you provide to the community.
**P.S. Your contact form is throwing up errors **
Hi Stephen
Thank you for sharing the material.
There is a typo in one of the sentences after Listing 1.
The following sentence:
“The Index() controller returns a view as its ActionResult”
should be:
“The Index() action returns a view as its ActionResult”
Hello Stephen,
Another very nice chapter. I was working through some of the sample code for this chapter and I’m not sure if you’re expecting the code to work as is. But if you are, then under the section entitled Returning a JSON Result, the code needs a slight tweak.
Evidently IE7 caches the $.getJSON() request (I haven’t checked it out on IE8). So after the first call, subsequent requests never get back to the server. This could be what you intended but if it isn’t, then you can add:
$.ajaxSetup({cache: false });
and requests will get back to the server after the initial call.
Again, thanks for putting these chapters up on your blog. Great stuff.
@Brian — Good point. I’ll update the code sample. Thanks!
“This call to the RedirectToAction() method passes the value 53 as the id parameter to the Index() action.”
Shouldn’t it be to the Details() action?
Great Post! Really very useful information given about controllers. Thanks.
Ya really its very useful. Controllers are always important in .net. Thanks Stephen.
nice blog
good!
Very cool thank you for your post!! 😀
Great Post! A very good introduction is given. Very useful also. Thanks Stephen
StreetScooter: The $7000 Open-Source Modular Electric Vehicle 151
bill_mcgonigle writes "A consortium of eighty companies and developers have developed the StreetScooter, a modular electric vehicle developed in a year using open-source methodologies. With a top speed of 74mph and an 80-mile range, the vehicle is expected to cost $7000 and be available in Germany in 2013."
import timeline (Score:2, Funny)
We'll see it in the US in 2018 for $17k.
I want one...
Re:import timeline (Score:5, Interesting).
The Smart car has stellar safety ratings all over Europe, yet it was deemed "unsafe" in the USA and needed to be retrofitted with US safety equipment. Now it has to have the big engine in it to move it, and oh, you can't have the diesel engine that gets 80mpg.
Hopefully someone will circumvent the retarded US auto laws and sell it as a "kit" so it does not have to meet ANY US safety or other laws and can be a home-made car that fits under the "experimental" rules, like they do with aircraft.
Re: (Score:2)
Re: (Score:2, Interesting)
Simply assuming that the other car will be big and heavy is the wrong way to go. Can we just add a "shall not impart more than $X force over $Y time to other object(s) when striking object(s) with the leading edge of the vehicle while going forward at $Z mph" to the requirements? Make the big cars have to take on some of the work of protecting those in smaller cars, and not simply the "ooh look, my passengers are safe so I win" sort of thing they have going now?
Granted there should be some minimum car weight/s
Re: (Score:3)
I do agree though, there is a larger weight/speed thing here in the states; we have a lot of 55MPH 2-lane roads with only a stripe protecting people from 110MPH closing-speed offset head-on crashes. Maybe the issue is our road system...
And in Europe (Well, the UK at least) there are a lot of 70MPH 2-lane roads with only a paint stripe protecting people from 140MPH closing speed crashes. The Euro-spec Smart car does fantastically well at surviving a head-on crash, as ably demonstrated here [youtube.com]. And bear in mind (as Mythbusters showed) with 2 vehicles colliding at 70MPH, the effective speed is 70MPH, not 140.
US regulations have all kinds of stupid limitations, like having the headlights being fixed a certain distance from the road (Not allowing
Re: (Score:2)
The safety regulations aren't the problem with the Fortwo, actually. The US W451 Fortwo isn't appreciably heavier than the European W451, and there were some W450s that were legally certified to come into the US, although at extensive cost increase due to the economies of scale of the modifications (as I understand, they were minor modifications, and more nitpicking than anything - had the W450 been designed for the US from the outset, it wouldn't have been any more expensive). All W451s are heavier than th
Re: (Score:2)
Also, there seems to be little to no regulation of aftermarket bumper height. This coming from driving a Saturn S-series for the last 10+ years. It is rather scary to see lifted trucks with leading front edges above the top edge of my doors (aka the bottom edge of my windows). I can't help but think that my car just isn't built for getting hit by a 3500 lb truck with 1000 lb of shit in the box across the driver's windows, and it would likely end with me being dead. As far as I'm concerned, doing that to a car/tr
Re: (Score:2)
FWIW, those brodozers are more like 6000-7000 pounds.
In any case, some states do have bumper height regulations, but they're rarely enforced.
Re: (Score:2)
And in some states there aren't many regulations (wheels, seatbelts, and indicator lights, headlights if you want to drive at night) if you have an ORV (Off road Vehicle) sticker in the window...
Re: (Score:2)
Re: (Score:2)
Hopefully someone will circumvent the retarded US auto laws and sell it as a "kit" so it does not have to meet ANY US safety or other laws and can be a home-made car that fits under the "experimental" rules, like they do with aircraft.
It doesn't work that way:
Homebuilt land vehicles (cars, motorcycles, ATVs), whether built from a kit or entirely from scratch, are regulated on a state level and must therefore comply with the regulations of the particular state in which they are licensed. Homebuilt vehicles are not regulated on a federal level - at least not formally. Normally, the state-level regulations that apply to such vehicles are less stringent than the federal regulations that apply to manufactured products, but much depends on the state in which you live. For example, the motor vehicle code of many states contains language requiring that all motor vehicles are equipped according to the federal regulations in effect when the vehicle was manufactured. Homebuilt aircraft and watercraft must comply with federal regulations.
Liability insurance should be relatively easy to obtain, and priced about on par with existing coverage. Collision and comprehensive insurance may be more costly and difficult to obtain. The difficulty with comprehensive and collision insurance comes mainly from the inherent difficulty of establishing a value for your car. Consequently, you may be asked to have it professionally appraised, in which case the total coverage will then be limited to the appraised value.
If you do not already have insurance on an existing car, it will be very difficult to find a company that will write a new policy on your homebuilt car.
Licensing And Insuring Homebuilt Vehicles [rqriley.com] [Rev. May 31st]
I live in a lake effect snow belt in upstate New York. The motor vehicle safety laws don't look half so retarded where the weather can turn lethal in a heartbeat.
Re: (Score:2)
Right, but take a look at the required roof crush strength? now if I recall correctly this is a scant 2-3 times the cars empty curb weight. I'd bet that if you fall off the road the car doesn't do so well if you end up upside down... Here [youtube.com] is an example of what a rally car can take and have the drivers get out of the car uninjured (minor scrapes don't count). Granted the car is a total write off, but he's not dead. There is no reason that these sorts of things couldn't be incorporated into modern road cars.
Re:import timeline (Score:4, Informative).
Uhm. I'd be very surprised if you can get a new smart car in Europe for $5500-6500US. I just checked the web site, the list price in germany is 10190 EUR for the base model (= USD 14043).
Re: (Score:2)
Though according to Wikipedia, the first Smarts came out in 1998. I still doubt it was that cheap then, but it was probably cheaper than it is now.
Re: (Score:2)
I agree with your premise, but I think that you are overlooking 1 vital safety concern. The average vehicle in the US is much larger than the average vehicle in Europe. So a very safe car in Europe is not necessarily a very safe car in the US.
Remember, every other American has a gigantic SUV, Hummer, Excursion, Expedition, Suburban, Tank, etc.
Smart vs Hummer = very very bad.
Re: (Score:2)
$7K, 80 mile range, 74mph, 2 passengers, I'll grey market import it if it's really that good and cheap.
StreetScooter (Score:3)
"StreetScooter", great name for a product... that isn't a scooter.
Plus, there is no way a search for StreetScooter could return ambiguous results. [vimeo.com]
Re: (Score:2)
Oh, they'll come up with a suitably marketable name for the US. They'll go the "SmartCar" naming route, or they'll go the Psion "XD" naming route.
So it will be given either a stupid name, or a faux cool name.
Re: (Score:2)
The US market needs a deadly-sounding, weaponish name to play properly to the customers insecurities. "PredaViper 2000 Insurgent Rebel X ReLoaded" sounds about right.
For the Japanese market, look for your "Let's Go Mushroom Kawaii!" at your nearest dealer.
Re: (Score:2)
The US market needs a deadly-sounding, weaponish name to play properly to the customers insecurities.
The Hammerhead Eagle i-Thrust?
Re: (Score:2)
Apple would go iMedieval on their ass.
Re: (Score:3)
It just goes to show you what open source methodologies get you so often... a well engineered product for a fraction of the price of competitors that has no clue how to market it to the customer so no one will ever use it. Sigh. I know everyone hates marketing and sales but they do in fact serve a purpose...
Re: (Score:2)
I know everyone hates marketing and sales but they do in fact serve a purpose...
I agree. I ran a small business for a while and I noticed that people forget who you are and what you are selling in a few days at best. Sales and marketing are critical, but I think most people disparage them because of the stereotypes, which have a basis in reality, but are overblown.
Re: (Score:2)
In fact stereotypes are what marketing relies on, because most of the time they are in fact true!
;)
Re: (Score:2)
most people disparage them because of the stereotypes
So what you're saying is, sales and marketing need an image boost?
Re: (Score:2)
The name of the company was on the road, so to speak: StreetScooter (road roller)
I know it is an artifact of Google Translate, but I like the name 'Road Roller'.
Re: (Score:2)
I don't. [wikipedia.org]
Re: (Score:2)
"StreetScooter", great name for a product... that isn't a scooter.
The German word for scooter (according to Google Translate) is "Roller", so I'd guess the project name is in Germlish or Engeutch or whatever. Maybe it's like the mock Swedish names IKEA gives things that make your wife say, "Oh, honey, this Dyra skithög [google.com] is just what we need to organize our string collection!"
not price-competitive (Score:1).
Re: (Score:1)
You have a point, except this thing seats 3 and is a car.
Re: (Score:2)
Okay, so it can't go up a hill with 3 passengers either.
Re: (Score:2)
I've spent some time browsing their site and I haven't found anything that indicates that it must be assembled. Just because it's "modular" doesn't mean it comes with a screwdriver and assembly guide.
I can buy a computer - fully assembled - which is still a modular system.
Re: (Score:2)
Are you aware that the vehicle in TFA is more like a smart car than a scooter? This isn't a moped or a vespa we're talking about. See the picture here: [streetscooter.eu]
Re: (Score:3)
Yes, in English, thats entirely accurate.
Too bad they don't fucking speak English as a first language in Germany.
So while you call it shitty marketing fail, I call you stupid ignorant American making comments without bothering to be informed with the information right in front of your face. People like you are an embarrassment to our country, please keep your mouth shut in public.
Re: (Score:2)
But shouldn't someone try extra hard to do a good job when translating to a new language, so to not make obvious mistakes like this?
(Yes, this to mind the probably apocryphal stories of movie titles/product slogans in other countries when translated back into English.)
Re: (Score:2)
I'm happier with my $3000 gas full-size touring motorcycle that does 55mpg city if I'm keeping my hand out of the throttle and can go up to 140mph, oh and it has no problem carrying a second rider + another 60 pounds of things in its saddle bags and trunk, as well as being highly comfortable for a 500-mile-long ride.
I bought used and got 90X more bike than I would have bought new with the same money. I can now ride the interstates very comfortably and my bike is faster than any car on the road so I dont hav
If it's open source.... (Score:2)
... then where can I download the plans?
If the source is not open, then is the methodology "open source"?
Re: (Score:2, Informative)
The methodology was open source in that every member of said community of developers were able to toss out ideas and do things instead of just "what do we need, we will do this and nothing else.".
This car sounds like the wet dream of those folks that love to mix and match things to fit what they need. The word for it is on the tip of my tongue, but I'm not sure on it.
A wet dream for them, a potential nightmare for mechanics.
Re: (Score:2)
Foremost experts are rarely experts.
The engineers of GM vehicles are outclassed HARD by garage tinkerers that love a vehicle. I know people that know more about the camaro than all the engineers at GM that designed it.
Re: (Score:2)
I was thinking much the same thing.
The last thing I want is my car sitting dead in my driveway or at the mechanics while battery leasing agent points fingers at the computer vendor who's busy blaming the motor drivers which were written by company that got bought
Re: (Score:2)
The problem is that once mindless idiots start using phrases to mean pretty much anything, they end up meaning nothing.
"Language changes" isn't an excuse for talking colmych, so why don't you go and knerp your cogglebun?
Re: (Score:2)
Language changes.
In this case, marketing assholes try to co-opt what's good about a term and apply it to what they are doing instead of coming up with their own term that is accurate. In this case, the word "collaborative" or one of its derivatives was used 5 times in the article. So they just could have called it a "collaborative design".
Re: (Score:2)
They aren't saying 'open source', slashdot is.
They just said 'Open', which slashdot fucks up EVERY SINGLE TIME it comes up. Slashdot thinks everything has to fit in one tiny definition that suits their whim, so that more or less means if the word 'open' or the word 'free' is used anywhere in the description, it must be GPL!
Pictures of the car (Score:2)
Re: (Score:2)
Here are pictures of the car: [streetscooter.eu]
RENDERS, not pictures. They didn't make anything but some renders.
Re: (Score:2)
Here are pictures of the car: [streetscooter.eu]
RENDERS, not pictures. They didn't make anything but some renders.
Surely renders not photographs. Both renders and photos are pics. Although your point is very valid.
Note to Nissan & Ford... (Score:3)
(Sidenote, if I can get a tax credit of up to $7500 from Uncle Sam for purchasing an EV, does that mean I actually MAKE $500 to drive this thing? They'll probably cite the "up to" part and give me a whopping $20 for this, but I can dream, can't I?)
Re: (Score:3, Insightful)
You're comparing vaporware to a real shipping product. What's YOUR excuse?
Re: (Score:2)
Re: (Score:2)
How is this Flamebait? He's right, this vehicle exists only on paper so far, and honestly a $7,000 electric car with an 80-mile range seems so unbelievable in 2011 that I'll believe it when I see it. Not everything on the internet is real.
From the first link:
It's not mass-market yet, but a working prototype is real and not 'only on paper'.
Re: (Score:2)
The prototype wasn't built for $7000 and the team isn't taking orders with a promise to deliver at $7000. When (if) that changes it will then be time to compare it with the Focus, the Volt and the Leaf.
Don't get me wrong: I applaud their ambition, and as long as it could handle 50-mile round trips and 60 mph speeds I would definitely buy a street-legal EV for $7000. But they're not there yet, and it's neither a short path nor a sure thing from where they are today.
Re: (Score:2)
I don't expect the final car to be very good.
I didn't say that. I'm very hopeful. And if it ever comes out and it costs anything close to $7000, I'll very likely buy one. But criticizing the Focus is as premature as criticizing Solaris in 1991 would have been. Sure Linux makes Solaris look silly now, but that's now...
Re:Note to Nissan & Ford... (Score:4, Informative)
The Leaf seems to be reasonably priced. They're unable to meet demand as-is, and while they're turning a profit on the thing, it's not a cash cow by any means.
The $7,000 price-tag of this toy seems to really only be materials cost of drive-train+frame&interior. The batteries are meant to be "leased" rather than purchased, so they're probably damn expensive, and just being excluded from the price. The Leaf's battery costs about $18,000 alone (according to the WSJ), about half the cost of the whole vehicle. And that doesn't count actual production and R&D costs. I'm betting once more plants come online, and the supply of Leafs increases, you could lease one for pretty close to the same price as this toy. Plus the Leaf is a 4-door, with cargo room, and all the modern safety features, and certification by the NHTSA.
I wouldn't drive this tiny tin can of a death trap if they were giving it away... I'm not willing to risk driving on the California freeways without nice big crumple zones giving me a fighting chance... I'm sure it will have its use, but it certainly won't be competing with the Leaf for sales.
Re: (Score:2)
"The Leaf seems to be reasonably priced."
Are you insane? The Leaf is NOT reasonably priced for an entry-level subcompact economy car class. I can buy a BMW 325 for its price.
The Leaf needs to be $14,900; THEN it's reasonably priced.
Re: (Score:2)
You've completely missed the context...
Re: (Score:2)
Once you couple in energy costs (financial costs at current market rates, not any environmental BS) , the Leaf is a great deal. The same energy amount from the socket is considerably cheaper than from gasoline. Gasoline is just FAR more compact than any electrical storage system we currently have.
Re: (Score:2)
"So even if electricity was free the leaf will be a better value than the Sentra. "
No, even if electricity was free the Leaf is a poor value compared to the Sentra. The Leaf will have to have its batteries replaced at least once by the time you reach that mileage, to the tune of $6800-8900US.
Also, you forgot the cost of installing the Leaf charging station at home, another $5000US, before you can even charge it. The eco pundits want people to ignore that small fact.
Finally, if you live somewhere other than ca
Re: (Score:2)
The bugatti Veyron is also "reasonably priced" as they only make 4 a year and they are always "sold out"
That is the ridiculousness of that statement.
Re: (Score:2)
Really? They wouldn't be that stupid?
--facepalm--
Re: (Score:2)
Didn't you actually listen to what they said? Dealers were lying about the purchase price, giving people a "discount", and the tax credits were tied to eg. KW/Hs and similar, which the vehicle in question happened to meet.
Re: (Score:2)
The Leaf and the Focus exist in the real world, and have been tested and certified by the appropriate bodies as to safety, etc.... The Street Scooter is a pile of CAD drawings, Power Points, press releases, and imaginative artist's conceptions.
It's easy to be cheap and high performance when you're vaporware.
Re: (Score:2)
The StreetScooter is probably light enough (sub 400 kg without batteries) and low enough power (sub 15 kW net, and for electrics, that's continuous, not intermittent) that it falls under Europe's heavy quadricycle laws.
Legally, a heavy quadricycle is treated as a four wheeled motorized tricycle, not a car - so safety regulations go out the window.
Also, the batteries aren't included in that $7000 price.
The Leaf and Focus EV are required to meet NHTSA regulations, which aren't too much more strict than EuroNC
"Mobility Services Provider" (Score:2)
On their Website ( [streetscooter.eu] ) they mention "Mobilitätsdienstleistungen" (chew on THAT German word for breakfast!). They want to offer "car sharing" and "leasing" packages as well. Also, that the car is for short hops (SDV: Short Distance Vehicle), just like a Straßenroller (scooter), for the most common short trips. It was started by a couple of university professors, and grew into a consortium; some of the partners: [streetscooter.eu]
Other open hardware cars (Score:2)
The Rallyfighter : [rallyfighter.com] has already been in production
Riversimple Urban Car : [40fires.org] is an electric fuel cell based open hardware car
Re: (Score:2)
The Rallyfighter is not open source, and the Riversimple urban car doesn't exist and the website is not in full swing. So spot on, other than you're totally wrong.
Re: (Score:2)
You're entirely right, except for the entirety of your post.
The car in the article isn't open source either, and no where do they claim it to be. Slashdot claimed it to be 'open source'. It isn't. You can't go get the CAD drawings yourself and try to build this car.
OSS zealots have corrupted the word 'open' into this fantasy meaning that no one else in the world shares. Slashdot sees 'open' and it becomes 'open source', because neither the submitter nor the editors have a rational grasp on the world; they to
Summary - 2 seater electric car for short trips (Score:3)
Since the links in TFA were quite unhelpful: it's a small 2-seater electric car that's intended for short trips only. The $7000 gets you the car and there's an unspecified fee to lease the battery.
Overview: [autoblog.com]
Picture: [flickr.com]
Re: (Score:2)
Ebay and some bits of wire will get me the battery without leasing it. There is a buttload of Dell laptop batteries out there for cheap. There was one company selling remanufactured Prius packs as well, out there for quite cheap.
vs the lowly 150cc, $1200 motorcycle... (Score:2)
Seriously, in developing countries, the BRIC countries in particular (Brazil, Russia, India, China), even Mexico to our south, you can buy a motorcycle, which uses global commodities like steel and rubber, for $1200 delivered. It gets 80-100mpg and will do 60mph, and can be repaired anywhere, by nearly anyone.
Don't get me wrong, electric technology is amazing, but when it comes to scooter/motorcycle technology, it's very difficult to make the argument for a $7000 scooter. 150cc motorcycle technology
Re: (Score:2)
Re: (Score:2)
They've invented the golf cart (Score:2)
Seriously. Ok, it's a fast golf cart with less range. Why does everyone get excited every time someone figures out you can put a motor to some wheels for less than $50k?
Hire a decent designer (Score:2)
80 companies involved yet they couldn't hire a decent designer? Looks like one of those Chinese ripoffs. The aesthetics definitely matter if they want this to be a car people want to purchase and drive. The interior looks like it was modeled in an old version of Solidworks.
Many people assume that this is a motorcycle. I hope they change the silly name.
More info: Battery Leasing not included in $7k est (Score:2) [rwth-aachen.de]
$7k price "envisaged". Hmm.... is that translation or are they simply hoping and guessing rather than have an idea of the actual price... battery leasing not included??
"open source methodology" is not Free Hardware (Score:2)
Unlike for example the Global Village Construction Set. [opensourceecology.org]
"Open Source - we freely publish our 3d designs, schematics, instructional videos, budgets, and product manuals on our open source wiki and we harness open collaboration with technical contributors."
Looks neat but get it for the right reasons. (Score:2)
I like the idea of kit cars, sounds like a fun thing to do and as a way to potentially save money over buying a new car. The Streetscooter looks like a cute little car that can handle stop and go, tight spaces, and other obstacles one would find in a common city commute. This kit car is also a good way for someone to experiment in automotive technologies without having to reverse engineer an entire vehicle, or take on the much bigger task of designing a whole car of their own. There are vehicles out ther
Re: (Score:2)
Re: (Score:2)
1 - Even if it produces more pollution than gas, at least you/your kids don't breathe it. 2 - It's easier to put pollution scrubbers on a stationary plant. 3 - Power plants don't idle; they either run at full or are turned off.
1, 2, & 3- I'm talking about carbon output, not pollution.
4-You can switch that coal to solar/tidal/wind/nuclear without needing a new car.
Yes, yes you can. Problem is that until the electricity actually comes from solar/tidal/nuclear the electric car is creating more carbon in the atmosphere than the gasoline or diesel fuel powered car. If the goal is to reduce carbon output NOW, then one would not buy an electric car. They'd be better off with a natural gas car, public transportation, walking, biking, and so on.
Point is that by using an electric car while the primary source of
Clever marketing (Score:2)
Re: (Score:2)
80 mile range is close to the same as the mainstream electric vehicles out now.
Re: (Score:3)
Re: (Score:1)
Yes, I do. And it's still about triple what the market will be able to bear in 2013.
Re: (Score:2)
You are saying that a $7k car won't take off in a bad economy. Well, a bad economy is a good place to sell cheap cars as opposed to expensive ones. There might be potential. And besides - while the US consumer is still hot on hybrids that consume more petrol than an average 10-year-old diesel hatchback here, electric vehicles are all the rage in the EU.
Re: (Score:2)
A bad economy is a bad place to sell anything.
The Greeks will probably be on their 962nd bailout before it's in production.
Re: (Score:2)
We are not talking about Greece. We are talking about Germany, where a $7k car is ultra-cheap compared to what people normally buy ($30-100k). The European economy is quite diversified, nothing like US states.
Re: (Score:2)
And who do you think is paying for Greece ( Greek bailout ) ?
We keep pouring money into it, while knowing that it won't solve anything. And that money comes from the taxpayer.
The problem is not the economy itself, it's that our governments are giving our money away to fill a hole that can't be filled.
But that has an impact on the economy, because people who have less money left ( after taxes), buy less.
Re: (Score:2)
It's not going to be Italy, Portugal or Ireland, is it? If money were brains they'd be on a par with a shih tzu.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I don't think the Smart car is meant as a low cost option at all. For a low cost car you'd buy some eastern European or non-Japanese-Asian brand. The Smart is more of a way to get attention (and more parking gaps). The thing's a Mercedes after all.
Re: (Score:2)
You do realize that despite the name, it's a car rather than a scooter, right? $7k is quite good
Hell, I've got a Suzuki Burgman scooter [suzukicycles.com] that costs more than $7K, which I just use for getting around town and for short out-of-town trips. When it's raining I don't use it; a $7K car I could use.
Re: (Score:2)
A 1500 USD car from 2011 in Germany :) (Score:2)
I know car prices are 3x that of Germany where I live (Denmark), but a new car at 1500 USD, I don't even think that's possible in Germany... (The cars around 1100 EURO look a bit like this [dk.auto.de], are from 1982, and have driven more than 300,000 km).
Re: (Score:2)
:facepalms:
Dude, it was autotranslated. That's why the title bar says Google Translate. And the link. And it has a header with settings for the translation.
Re: (Score:2)
Re: (Score:2)
"you can buy a nice used car for less with better range."
I can buy a used BMW 750i for that price.
I can buy a 1962 Cessna single engine for that price.
In flint I can buy 2 houses for that price.
In Montana I can buy 150 acres for that price.
In mexico I can have 4 people killed for that price.
In California I can buy 2 loaves of bread and a gallon of milk for that price.
Re: (Score:2)
Re: (Score:2)
That's 16 miles in each direction, not round trip.
Re: (Score:2)
Re: (Score:2)
So you click on the translated link
... then the original link that shows up. Is it hard for you to click the link on the next page from Google translate? | http://tech.slashdot.org/story/11/11/05/0041259/streetscooter-the-7000-open-source-modular-electric-vehicle?sdsrc=prev | CC-MAIN-2015-06 | refinedweb | 5,093 | 71.14 |
Python Program to Add All the Digits of a Given Number

In this tutorial, we are going to learn how to add all the digits of a given number in Python. So let's get started.

Adding all digits of a number in Python

To add all the digits of a given number, we need to follow some steps.
Steps to follow:
- User has to enter a value.
- Using a loop, we will get each digit of the number by taking the number modulo 10.
- Add the digits to some variable.
- Divide the number by 10 (integer division) to remove the last digit.
- Print the sum.
In this program, we create a function named reverse. Despite its name, the function does not reverse the number: it takes a number as an argument and returns the sum of its digits.

def reverse(number):
    y = 0
    while number >= 1:
        z = number % 10        # get the last digit
        y = y + z              # add it to the running total
        number = number // 10  # integer division drops the last digit
    return y
Explanation:

The last digit of the number can be obtained with the modulus operator (number % 10). In the above program, we use a while loop to extract each digit; each extracted digit is added to the variable y.

We then divide the number by 10 (integer division) to remove the last digit. The loop terminates when the value of the number reaches 0, and the function returns the variable y to the caller.
Final code:
def reverse(number):
    y = 0
    while number >= 1:
        z = number % 10        # get the last digit
        y = y + z              # add it to the running total
        number = number // 10  # integer division drops the last digit
    return y

Number = int(input("Enter a number: "))
reverse_number = reverse(Number)
print("sum of the digits of the number", Number, "is", reverse_number)
Output:

Enter a number: 56
sum of the digits of the number 56 is 11
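As a side note, the same digit sum can be computed a bit more compactly with Python's built-in divmod. This is a sketch of mine, not part of the original tutorial, and the name digit_sum is my own:

```python
def digit_sum(number):
    # divmod returns the quotient and the remainder in one step,
    # so each pass peels off the last digit and shrinks the number
    total = 0
    while number > 0:
        number, digit = divmod(number, 10)
        total += digit
    return total

print(digit_sum(56))  # 11
```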
So guys, I hope you really enjoyed this tutorial. Feel free to leave a comment if you have any doubts.
class Employee
{
    public Employee(string firstName, string lastName, int age, double payRate)
    {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
        this.payRate = payRate;
    }

    protected string firstName;
    protected string lastName;
    protected int age;
    protected double payRate;

    public double CalculatePay(int hoursWorked)
    {
        // Calculate pay here.
        return (payRate * (double)hoursWorked);
    }
}
To instantiate this class and use it, we have to declare an instance of it in a method similar to this:

public static void Main()
{
    Employee emp = new Employee("Amy", "Anderson", 28, 100);
}
In this example, emp is declared as type Employee and is instantiated using the new operator. The variable emp represents an instance of the Employee class and is considered an Employee object. After instantiation, we can communicate with this object through its public members. For example, we can call the emp object's CalculatePay method. We can't do this if we don't have an actual object. (There is one exception to this, and that's when we're dealing with static members. I'll discuss static members in both Chapter 5 and Chapter 6, "Methods.")
Have a look at the following C# code:

public static void Main()
{
    // Employee defines no parameterless constructor, so arguments are required
    Employee emp = new Employee("Amy", "Anderson", 28, 100);
    Employee emp2 = new Employee("Bob", "Brown", 35, 110);
}
Here we have two instances, emp and emp2, of the same Employee class. While programmatically each object has the same capabilities, each instance will contain its own instance data and can be treated separately. By the same token, we can create an entire array or collection of these Employee objects. Chapter 7, "Properties, Arrays, and Indexers," will cover arrays in detail. However, the point I want to make here is that most object-oriented languages support the ability to define an array of objects. This, in turn, gives you the ability to easily group objects and iterate through them by calling methods of the object array or by subscripting the array. Compare this to the work you'd have to do with a linked list, in which case you'd need to manually link each item in the list to the item that comes before and after it.
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
In the famous paper "Can Programming Be Liberated from the von Neumann Style?", Backus describes the programming language FP. It has many properties that are clearly advantageous:
There is one more prominent property: FP is a pointfree language. My question is, how advantageous is this really? Obviously it does make manipulation of programs easier in some sense, but its benefits seem to pale in comparison to the benefits derived from other properties (such as its first order nature).
There is also a big limitation that results from not having any sort of "substitution form": It is not possible to "lift" objects into functions. For example, if one wants to write a function that takes some value X and a list and multiplies every element in that list by X, there's no way to "lift" that X into the 'map' combining form. One must instead use 'distr' to transform the list into a list of pairs where one element of each pair is 'X', after which the multiplication function can be mapped over the list. Besides being a bit opaque, this is needlessly inefficient.
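To make the distr detour concrete, here is a rough Python rendering of the two styles. The names distr, scale_fp, and scale_lifted are mine, not FP's actual primitives:

```python
def distr(pair):
    """FP-style distr: pair every element of a list with a constant."""
    xs, x = pair
    return [(e, x) for e in xs]

def mul(pair):
    return pair[0] * pair[1]

def scale_fp(xs, x):
    # No partial application available: distribute X over the list first,
    # then map the binary multiplication over the resulting pairs.
    return list(map(mul, distr((xs, x))))

def scale_lifted(xs, x):
    # With lambdas/partial application, the detour disappears.
    return list(map(lambda e: e * x, xs))

print(scale_fp([10, 20, 30], 5))      # [50, 100, 150]
print(scale_lifted([10, 20, 30], 5))  # [50, 100, 150]
```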
Of course, one could always write a new combining form to handle this particular situation (if the language were to permit it), but there's no way to write it in terms of the existing 'map' form without substitution or higher order functions (which would permit partial application); it must be written from scratch, and as such, can't trivially inherit any properties of 'map'. In some sense, the lack of substitution seems to work against the idea of programming with a well-understood set of combining forms.
This is hardly an attack on the idea of pointfree languages. I'm well-versed in concatenative languages (which typically do offer variables as an escape hatch), I've written non-trivial programs in FP, and I utilize the pointfree style in Haskell heavily. I do very much enjoy pointfree programming; I simply am having trouble justifying it in an objective manner.
All thoughts and pointers to relevant literature will be much appreciated. (To be clear, I'm looking for materials that directly address the benefits of doing away with variables, not the other properties of FP or SQUIGOL-like languages such as programming with a rich set of combining forms or morphisms.) More specifically, if anyone were able to comment on what one would lose if a substitution form were added to FP, you'd have my gratitude.
EWD692 ;-)
[On your example, I would think that optimizing compilers would do that transform automatically for you. Although ML experience says: just don't optimize - the few program fragments which need optimization will typically be optimized by the programmer.]
Always wanted to write that subject...
In all seriousness, I'm not sure he is understanding a particular element of FP. He makes the claim that arguments are distinguished by ordinals instead of names, and then sarcastically questions why the ALGOL committee didn't require formal parameters to be named 'par1', 'par2', etc.
In FP, the term '1' denotes a function that returns the head of the list passed to it. The term '2' is equivalent to the head of the tail, and so on. It's not the case that they're formal parameters as he seems to imply; they're simply normal functions.
I think a lot of his comments were ill-placed given what we know now. A substantial amount of his critique must have derived from trying to push Algol-like languages instead of math-like languages. Some of it sticks (why introduce all those Greekisms? it looks interesting, but here I agree, it is just a modern form of shamanism), some doesn't (being able to reason by equality is way more intuitive than being able to reason by pre-/postcondition).
But he was way too smart not to understand. Maybe he just didn't like Backus?
[But the conclusion is right: unjustifiable aggressive overselling.]
[I think his comment on using '1' and '2' for functions/arguments has more to do with tidy programming where argument names should somehow relate to whatever the arguments denote (graph, db_connection, ...).]
On the point about (par)1, (par)2, etc., Dijkstra would have to be right, because natural numbers are hardly self-documenting. Backus generally did not use parameter names because he favored numbers as selectors. Variables contradicted referential transparency, and so he renounced them. But nothing speaks against instance variables (map data type / associative data elements); these do not contradict referential transparency and can therefore be handled like local variables, apart from slower processing. Immutable data types are generally a bit sluggish. Of course, variables are also a little fancier than #namex, #namey, etc., but that must serve as the distinction from the global function names.
// Example, "FP" with instance variables:
((#a+#b*sin°'2*pi*#c)<-a;b;c;):(100;50;0.3;)
I confess to being slightly uneasy about the application of the term "point-free" to FP languages, since this seems to mean that definitions are given in eta-contracted form and don't make use of lambdas, and this seems to have only the most superficial relation to the point-free topologies that the term is supposed to be derived from. So I always think I am missing something...
But when I look at Backus' article, he gives such definitions in FP of the identity combinator as: id:x=x, which definitely doesn't look point-free to me.
My feeling that I am missing something only increases.
I don't believe you're missing anything. The programs themselves do not contain variables, but they certainly show up when reasoning about said programs.
As I've been thinking about this exchange between Backus and Dijkstra, it has dawned on me that part of the problem we have understanding them is a loss of historical context.
We take it for granted that you can have "variables" that aren't really variable: "functional", "referentially-transparent", "stateless" variables. I suspect this idea hadn't taken hold in 1977. (To be fair, it still hasn't taken hold in most of the programming world in 2009... ;-) )
If Backus found it hard to imagine having a labeled "variable" that someone wouldn't "accidentally" assign a new, changed value to, you can see that he might feel that the only way to get functional programming would be to go "point-free", i.e. if there aren't any explicit label assignments, you can't re-assign them. In this reading, the virtues of "point-free" being expounded are really about the same as the virtues of "functional programming" as we currently understand it.
If this is true, Backus sounds odd because he is at the start of a line of development: this article caused others to think about the properties he was interested in, and find better ways to express and think about them. Since we are the inheritors of these later developments, we find it hard to understand the mind-set that produced the ideas and terminology that started the chain.
That's an interesting and very plausible hypothesis, Marc. I can understand Charles' complaint about terminology, although I don't truly appreciate it, as I am not familiar with the pre-existing topological definition.
Myself, I prefer a relatively point-free style. However, I also think that the difference is more stylistic than substantive, and taking the point-free approach too far is indeed "pointless." Yes, that is an overused pun, but it has a large grain of of truth behind it, as a completely "point-free" program is often obfuscated.
I find the clearest programs are a happy compromise between composition and explicitly named bindings. There has been much ado made over the fact that the point-free approach is supposed to make it easier to algebraically transform a given program, but I find I can usually perform (informal) algebraic transformations pretty easily either way.
My preference is in part due to the fact I hate making up and learning names. If a name must be given, it should be particularly suggestive, or at least idiomatic. One of the minor annoyances I have with Haskell is that there isn't a "case-lambda" analogue for anonymous functions, so you have to either introduce a name for the function, or use a (\x -> case x of {...}) construct.
(\x -> case x of {...})
A very loose rule of thumb I have is that if a bound variable is used only once, then perhaps it shouldn't be given a name at all. If I want to refer to something more than once, I tend to prefer to give it an explicit name. In particular, I don't like relegating complicated, "one-off" variable-shuffling to combinators. Above all, I prefer whatever I think is the clearest, which I admit is a very subjective criterion.
While I rather like Leslie Lamport's How to Write a Proof, I have a few complaints. For example, the "informal" English sketch tends to mirror the formal structure much too closely. In Figure 5, for example, I think it would have been much more clear to write "Assume \sqrt{2} \in Q" instead of "Assume r^2 = 2, r \in Q", however this stilted style of "informal" exposition permeates mathematics.
Backus was well aware that variables didn't have to be mutable. His FL language was released in 1989, and he certainly would've been familiar with ML-like languages by then. He very explicitly tried to differentiate between his "function level" approach and languages centered around lambda abstractions. He argued that the restrictions of the former enabled one to reason more easily about programs.
Search the paper linked to in my initial post from 1977 for the word "lambda" (which appears 27 times) and read what he had to say. I think you'll agree with my assessment.
Search the paper linked to in my initial post from 1977 for the word "lambda" (which appears 27 times) and read what he had to say
I did just that, but it hasn't changed my sense that some concern that he hasn't quite managed to get out in the open is at the root of his qualms, and that this implicit problem is in fact the statefulness inherent in the von Neumann architecture.
His main purported reason for rejecting lambda expressions centers around the "complexity" of substitution rules that attend variables. At one point he explicitly lists:
"Each of these requires a complex mechanism to be built into the framework so that variables, subscripted variables, pointers, file names, procedure names, call-by-value formal parameters, call-by-name formal parameters, and so on, can all be properly interpreted."
He also keeps relating the issues he is raising to the von Neumann architecture. He explicitly mentions the Funarg problem, a problem of implicit state that goes away when you introduce proper stateless closures.
It is quite possible that Backus had been exposed to the notion of stateless variables, but if so he is not explicitly weighing it here as a possible solution to his qualms. I think this is because he is "thinking close to the machine", and the machine he has in mind is implicitly stateful.
From the point of view of the history of ideas, this is another interesting shift in mentality. Nowadays we take for granted the idea of a "virtual machine", that you might use to logically define the environment of a PL, independent of its hardware architecture.
But many of us will remember that as recently as the early 90s it was still quite common to think to debug programs by examining the values in stack, machine registers, etc. (Also, remember not-so-long-ago threads on LtU where FP newbies expressed incredulity that stateless languages could be implemented on stateful machines.)
Backus shows definite signs of not having made the shift in mental model between the logical and implementational aspects of his proposals, and again, I think this makes it harder for us to understand his concerns on the other side of that paradigmatic divide.
So let me turn this around, John. What aspect of Backus' concerns about "algebraic reasoning" are not addressed by a stateless language with explicit labeling?
Backus shows definite signs of not having made the shift in mental model between the logical and implementational aspects of his proposals
I simply can't agree that Backus definitely shows any such signs. You may want to watch Backus's explanation of FL (if you haven't already). In the video, he directly compares the function level approach with "the style of programming used in Lisp and in most functional languages" (4:07).
Ultimately, I think this issue is unimportant as I'm not sure what can be gained from diving into the psychology of the situation. I don't believe we have a need to guess what sort of "variables" he was talking about; he was very clear about what he was arguing for.
So let me turn this around, John. What aspect of Backus' concerns about "algebraic reasoning" are not addressed by a stateless language with explicit labeling?
Backus's main concern, it seems to me, was that the power of lambda abstractions as they commonly appear in functional languages is simply too great. Instead of being required to build up a program in a hierarchical manner, it's too easy to just write something from scratch. In the worst case, you may never even discover the patterns underlying your program at all.
I think there's strong evidence for this claim if you look at the concatenative programming languages and compare them to the situation in ML or Haskell. In concatenative languages, even the simplest of patterns are classified. For example, there is a family of combinators for applying multiple functions to a single element, another family for applying a list of functions to their corresponding inputs on a stack, et cetera. You don't get the same sort of fine granularity in your abstractions in other languages. (To be fair, Haskell does have the concatenative languages beat on larger abstractions at the moment, but this mostly due to Haskell's type system enabling them and large differences in the communities involved.)
To make this a bit more concrete, I'll give an example of a program. Say you want to write a function that, given two values, returns their sum and their product. In Haskell, you'd probably do something like this:
\x y -> (x + y, x * y)
In FL, you'd do the following:
[+, *]
Now obviously this is a bit of an unfair example, but I do think it demonstrates my point. Since FL does not offer variables in the sense that Haskell does, you have no choice but to write the program as above. In the Haskell version, you could write some construction combinator that, given two functions, applied them both to some input value. In practice though, you don't; or, at least, I've never seen it done. Instead, you use a lambda to implement everything from scratch and do not make the pattern of construction explicit.
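The construction combinator described above is easy to sketch in Python with tupled arguments. The name construct is my own; this is an illustration of the idea, not FL itself:

```python
def construct(*fs):
    # FP/FL-style construction: apply each function to the same input
    # and collect the results in a tuple
    return lambda x: tuple(f(x) for f in fs)

# sum and product of a pair, written as a construction of + and *
add = lambda p: p[0] + p[1]
mul = lambda p: p[0] * p[1]

sum_and_product = construct(add, mul)
print(sum_and_product((3, 4)))  # (7, 12)
```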
(I should also note that, even if you wanted to make such patterns explicit in Haskell, things get pretty painful. First, the syntax doesn't give you a way to write with nearly the concision you can in FL. Second, and more importantly, the curried functions in Haskell get in the way; pointfree programming is much more comfortable when arguments are tupled.)
Essentially, I think Backus's thinking was that the lack of variables would force a hierarchical construction of programs. Such a construction would give enough structure to allow powerful -- and most critically for Backus, very simple -- reasoning about programs in a way you wouldn't otherwise have.
It is a rather enjoyable video with an interesting historical context, thanks for posting it!
I'd say your speculations are also rather plausible, however, I would point out the motivations Backus gives for FL around 8:30 into the video. None of them really specifically apply to either functional languages or imperative languages, except one, namely his comment that "while" doesn't have nice algebraic properties, which would support Marc's thesis. But I also agree with you that he was trying to get at something deeper, whether or not he was actually onto something I'm a little unsure.
After all, almost every "functional" programming language at that point in time was not purely functional. Unfortunately, we are unlikely to be able to definitively answer these questions unless somebody who knew Backus personally can elaborate.
Backus's influence in the functional programming arena is undeniable. I find myself using point-free combinators from Control.Arrow more and more these days.
Unfortunately we would unlikely to be able to definitively answer these questions, unless somebody who knew Backus personally can elaborate.
I worked with John on functional programming for several years starting in 1974. We were definitely familiar with stateless functional languages, e.g., ISWIM (Landin). John's languages were stateless at the time I worked with him, although I believe I influenced him to think about modeling systems with state. I also suggested allowing lambda variables when defining a new combining form such as "Apply to all", but he resisted. I was never able to fully understand Backus's reasons for avoiding all variables. I think John Rowan has come closest to understanding Backus's point of view.
Point-free must imply stateless, otherwise, probably, equational laws are broken. I assume a pointfree monadic variant must exist, it should be possible to handle state.
I think you should read Dijkstra's comment in another light: He just found it a badly written paper. Too many badly presented abstract ideas, too often repeated, with too many unjustified claims. It just didn't compare very well with a good article exhibiting a nice algorithm with concrete proofs of properties (like Knuth's articles).
All ideas can easily be broken down. Guess he was wearing his wrong hat that day.
I think the idea of a pointfree language is great, but one needs defining constructs in such a language, and lambda abstractions are just too convenient to miss (and most defining pointfree constructs would translate to lambda terms anyway).
But it opens the way to algebraic optimizations (optimization by equality), which I always thought would be nice to have in an FP.
Could you expand a little on what you mean by optimize by equality? To me that sounds exactly the same as what a simplifier in a functional compiler does.
The ability to state higher-order equalities which may not be derived directly by the compiler.
I think ghc has it build in with the RULES pragma. I am not sure how much it is used though.
[Googled it: look here]
The rules used in GHC do mention the objects involved:
forall f g xs. map f (map g xs) = map (f.g) xs
Accordingly, I'm not sure why you claim this is an advantage of pointfree languages. Perhaps it is simply a question of the degree of clarity involved. For example, the pointfree rule equivalent to the above is as follows:
forall f g. map f . map g = map (f . g)
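Both formulations express the same map-fusion law; here is a quick Python check on sample data (a sketch of mine, not from the thread):

```python
f = lambda x: x + 1
g = lambda x: x * 3
xs = [1, 2, 3, 4]

pointed = list(map(f, map(g, xs)))        # map f (map g xs)
fused = list(map(lambda x: f(g(x)), xs))  # map (f . g) xs

print(pointed == fused)  # True
```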
As I indicated in my initial post, it seems to me that the advantage of FP-like languages has more directly to do with working in terms of function composition rather than avoiding the use of variables per se. Perhaps such an avoidance is important simply because it is a precondition for the compositional style.
If that were indeed the case, an approach to variables that "lifted" them to functions would offer the best of both worlds. In FP for example, one could express the 'swap' function as a directed rewrite rule:
forall f g. swap·[f,g] → [g,f]
Given the semantics of the functional form of construction:
forall f g x. [f,g]:x → <f:x,g:x>
... the above definition for 'swap' would imply the following:
swap:<1,2> → <2,1>
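The proposed rule can be sanity-checked with a small Python rendering of the construction semantics (the names are mine, and FP's forms are here modeled as ordinary functions on tuples):

```python
def construction(f, g):
    # [f, g]:x -> <f:x, g:x>
    return lambda x: (f(x), g(x))

def swap(pair):
    a, b = pair
    return (b, a)

f = lambda x: x + 1
g = lambda x: x * 2

# swap . [f, g]  ==  [g, f], checked on sample inputs
print(all(swap(construction(f, g)(x)) == construction(g, f)(x)
          for x in range(5)))  # True
```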
I should note that Factor, a stack-based concatenative programming language, already does something similar to this. For example, the 'x' and 'y' in the definition below are composed as if they were functions even though they're understood to be objects:
:: hypot ( x y -- h ) x x * y y * + sqrt ;
Accordingly, I'm not sure why you claim this is an advantage of pointfree languages. Perhaps it is simply a question of the degree of clarity involved.
Yes, that is what I meant. Equational reasoning might be more intuitive than reasoning with pre-/postconditions. Though someone well versed in pre-/postconditions, like Dijkstra, apparently thinks differently, and well, I have only seen academic point-free examples so far.
'Introduction to Algorithms' is still presented in a Pascal-like style, and for good reason. Point-free is a valid line of research, but until that book is rewritten in point-free style, I think it will remain a niche.
forall f g. map f . map g = map (f . g)
Don't see why this is point free. You are still mentioning the arguments? But I wonder what other automatic rewrites become possible. Some Knuth-Bendix based rewrite system?
But I don't know this field very well. I glanced at the (very nice) theses by Alcino da Cunha [fixed the name]. I really like the idea of point-free languages, but I don't find the definitions readable or intuitive.
You should discuss this with someone else.
In my forthcoming *cough* language, I was partly inspired by point-free/combinator calculi. That's one of the reasons I used a block-style syntax for abstractions.
def args: options -> text -> list text =
    [ oo, o ->
        aa = list.filter [ (o0, a0) -> o0 == o ] oo;
        aa = list.map [ (o, a) -> a ] aa;
        aa ]
I.e., a block denotes/is a combinator.
[But once you get entrenched in mundane problems, all academic niceties go out the window.]
'Introduction to Algorithms' is still presented in a Pascal-like style, and for good reason.
What are those good reasons? They become less clear to me with every passing day.
Higher abstraction comes with a significant cost. The more hardware is abstracted away, the harder it is to understand what is going on at the low level. If you don't understand what is going on at a low level, everything gets wishy-washy. Algorithms are not made to make sure that people (programmers) understand what they do; they are primarily made to ensure (run-time) efficiency.
I for one think that algorithms should be taught using LLVM assembly.
Algorithms are not made to make sure that people (programmers) understand what they do, they are primarily made to ensure (run-time) efficiency.
I just opened CLRS to a random page. Line 2 of the algorithm on that page is "do determine j such that i \in K_j". What do you think the time complexity of "determine" is? And how is this not abstracting away far more than if this were written in Scheme or Haskell?
... would probably be the best answer.
But also, best understood, clearest readability, cleanest operational model, best high-level language which allows low-level optimizations, simplest expressive language? Take your pick.
Some of the arguments are vague, sure, but would you really know a better paradigm to present algorithms except for the imperative one?
[Post ended up somewhere else. This is a response to why a Pascal-like syntax is used in Introductions.]
Brent Yorgey has a nice, compelling example for the point free style in his Typeclassopedia article in the latest Monad Reader.
Namely he defines the operator "fish" (>=>) as
f >=> g = \x -> f x >>= g
and then the monad laws are quite nice, corresponding to the definition of a monoid:
f >=> return == f
return >=> g == g
(f >=> g) >=> h == f >=> (g >=> h)
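These laws can be spot-checked with a small Python sketch of Kleisli composition, modeling Maybe with None (my encoding, not from the article):

```python
def fish(f, g):
    # Kleisli composition for Maybe-as-None: stop at the first None
    def h(x):
        y = f(x)
        return None if y is None else g(y)
    return h

unit = lambda x: x  # 'return' in this None-encoding

def safe_half(x):
    return x // 2 if x % 2 == 0 else None

def safe_dec(x):
    return x - 1 if x > 0 else None

for x in range(10):
    assert fish(safe_half, unit)(x) == safe_half(x)  # f >=> return == f
    assert fish(unit, safe_half)(x) == safe_half(x)  # return >=> g == g
    lhs = fish(fish(safe_half, safe_dec), safe_half)(x)
    rhs = fish(safe_half, fish(safe_dec, safe_half))(x)
    assert lhs == rhs                                # associativity
print("monoid laws hold on samples")
```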
On the other hand, I have an example that I doubt has an insightful point-free description:
enQ e = CorecQ' (mapContT (liftM (e:)) (modReader inc_len))
The above definition is mostly pointfree, which I like, and depends on a few non-standard definitions contained in the paper, but I don't think that eliminating "e" from the mix clarifies anything. Then again, maybe choosing a different set of functions from the outset would lead to a clear point-free definition.
Unfortunately no substitution found. (except perhaps lambda variables)
But maybe two helpful variants.
--- Variant A ---
Space efficient solution with lazy list.
lazydistr == (id lazy '((isatom°[0])->();((head°[0]),[1],),(tail°[0]) lazydistr [1]))°ee
Application:
(+ aa) ° '(10;20;30;40;50;) lazydistr '5
--- Variant B ---
Code generation before the map application. (aa == arg map head°term)
Application:
([0] map prop°'id,'+,(cquote°[1]),):((10;20;30;40;50;);5;)
[UPDATE]
--- Variant C ---
Combination of applytoall and distr.
daar == arg dmapr top ° term
dmapr == ((isatom°[0]°[0])->();([1] app ((head°[0]),[1],)°[0]),(((tail°[0]),[1],)°[0]) dmapr [1]) ° ee
(+ daar):((10;20;30;40;50;);5;)
*(using)
I just summed up the average point made in this discussion.
Edit: This is subtle, dry humor.
FP/PF is like low code -- lack of anything unnecessary.
We like programming to be simple -- mathematical functions.
We love to program with handicap and retrenchment (referential
transparency and combinators), to be forced on the right path.
We want programming without context (no global and no lambda variables);
each context must be in the argument of the functions. (*)
-- The little spirit keeps order, only the genius overlooks the chaos. --
Avoiding lambda variables -- For people with big brains, it's like a joke.
(*) I don't know why, but John Backus wrote:
"... Advantages of FP systems. The main reason FP systems are considerably simpler than either conventional languages or lambda-calculus-based languages is that they use only the most elementary fixed naming system (naming a function in a definition) with a simple fixed rule of substituting a function for its name. Thus they avoid the complexities both of the naming systems of conventional languages and of the substitution rules of the lambda calculus. FP systems permit the definition of different naming systems (see Sections 13.3.4 and 14.7) for various purposes. These need not be complex, since many programs can do without them completely. Most importantly, they treat names as functions that can be combined with other functions without special treatment. ..."
How do you form your opinion?
While browsing a Julia tutorial I came across the bullet point "scoping of variables".
That gave me an idea.
Variables need a scope in the program code. Associative data structures do not need a scope in the program code for the validity of their local instance variables.
If you want to introduce lambda variables in FP, for example with the following construct:
(<name> lambda <term with scope>) : <value>
This form is not chosen very well, but maybe someone else has a better idea.
Now, one would usually like to nest these "lambdas" as well, as this raises the question of the binding strategy. A dynamic binding strategy brings hard-to-find errors (example will be provided on demand).
So, a closure technique has to be chosen, which requires a complicated structure that is based on the function definitions; Simply substituting function names for their assigned value (for automatic reasoning?) might go back to dynamic binding. Who knows about this?
But another possibility is to use an associative data structure and its local instance variables instead of lambda variables that presuppose a scope:
You could play with the function definitions as a kind of "Goto" (with Tailrecursion).
It would not require the complete listing of all variables as is the case with lambda - but could continue to use the local instance variables in the other functions that you jumped with "Goto" call.
Depending on the language, there are many reasons why a lambda cannot be inlined, but sometimes it is indeed possible. Things that prevent it: (i) When the lambda itself can be used as a value that can escape the scope of its declaration; (ii) in the presence of recursion (or potential for recursion); (iii) when a pointer (or reference) to the lambda is passed as an argument.
In Ecstasy, for example, the lambda syntax looks like:
param -> expression
(param1, param2) -> expression
(Type1 param1, Type2 param2) -> {body}
So a nonsensical example of a function that takes an int and returns a string might look like:
function String(Int) f = n -> {
StringBuffer buf = new StringBuffer();
while (n != 0)
{
buf.append(n);
n >>>= 1;
}
return buf.toString();
};
You could invoke it directly as well (yeah it's a stupid example but you could do it):
String s = n -> {
StringBuffer buf = new StringBuffer();
while (n != 0)
{
buf.append(n);
n >>>= 1;
}
return buf.toString();
}(42);
To inline the lambda, you just drop the parameters and the invocation:
Int n = 42;
String s = {
StringBuffer buf = new StringBuffer();
while (n != 0)
{
buf.append(n);
n >>>= 1;
}
return buf.toString();
};
The obvious side-effect is that the reference to the separate lambda function no longer exists.
Does the use of free variables violate the Leibniz principle of identity substitution?
(the Leibniz principle is important for "lazy evaluation")
An example:
the term
... ('5+'3*sin°2pi*id):0.7 ...
can be replaced by
... '2.14683045111454 ...
the term with a variable
... (x lambda '5+'3*sin°2pi*x):0.7 ...
can also be replaced by
But what is that?
the term within the lambda expression with the (free) variable amp
... (amp lambda ... (('5+amp*sin°2pi*id):0.7) ... ):3 ...
would be replaced by
... (amp lambda ... '2.14683045111454 ... ):3 ...
and
the term within the lambda expression with another argument for the lambda variable
... (amp lambda ... (('5+amp*sin°2pi*id):0.7) ... ):42 ...
... (amp lambda ... '_34.94437368439645 ... ):42 ...
Is that correct?
... Is that correct?
and
... Implicit was an expectation that there should be a well-defined understanding of when two program phrases were semantically equivalent ...
The use of nested scopes is a common way in Lisp (see LET-function).
In Backus FP the terms are always closed, so the term substitution is always possible.
With free variables, the terms are not closed, so term replacement is not always possible.
Backus needed for his Algebra the possibility that the terms are always deformable / replaceable.
It is not that you have to completely renounce variables, there is the possibility to give instance variables
in a "similar format of closures" as arguments to the terms / functions.
But what John Nowak seeks, he will not get under the term of term deformation / term replacement.
It's my opinion.
Edit: Look at --> "Closed applicative languages (1971 - 1976)" ff.
I think you are laughing in tail-recursion.
I like a physical 'code as material' metaphor in language design. Less like natural language and more like Lego bricks. The essential properties for this metaphor are locality and compositionality. Locality is our ability to decompose, model, and understand behavior with minimal syntactic context.
As a consequence, most languages I design are point-free. It isn't that point-free is intrinsically good. Rather, the semantics of lambda variable substitution, free variables, and alpha conversion are intrinsically non-local, and do not fit my metaphor.
However, giving up on lambda variables doesn't imply abandoning naming systems entirely. In his paper, John Backus describes a naming system that is essentially an association list with atomic symbol values (a row-polymorphic record, if we must assign a type). However, the nature of names will shift from 'variables' outside the model to 'labeled structures' within the model.
Although we don't "need" names, they remain convenient both for lightweight extensibility (e.g. it's often more convenient to add a field to a record than to change a pair to a triple), commutativity (order of fields within a record can be opaque or deterministic), and as a form of documentation. Any one of these conditions are sufficient to motivate support for label-based names.
My most recent language, Glas, heavily uses labeled structure. Instead of a combinatory logic, Glas is based on graph rewriting. Nodes in the graph have directed, labeled edges. There are two rewrite rules - application and unification. When nodes are unified, outbound edges with the same label are also unified. A function is a bounded subgraph with a unique outbound 'open' edge. When applied, the function body is copied and the open edge is connected to the applicand, enabling access to input-output parameters via unification.
Glas is a point-free language for some definitions of "point-free". There are no lambda variables, no substitution semantics, the nodes are anonymous, and function application is implicitly parameterized by location in graph.
OTOH, the graph can represent cyclic structure. When a cyclic graph is represented *textually*, we inevitably encounter the challenges of graph representation, e.g. using references by relative path or offset or temporary node names. Depending on how we define 'point-free', we could consider any graph-based PL that allows cycles to have 'points'. In that case, violation of "point-free" is just an artifact of using 1-dimensional syntax to express K-dimensional computations.
In any case, we can have rich naming systems without the semantic weaknesses of lambda substitution. I believe this is a path worth developing. | http://lambda-the-ultimate.org/node/3233 | CC-MAIN-2019-51 | refinedweb | 5,596 | 62.07 |
Contains information about a module loaded into a process. More...
#include "snapshot/mac/process_reader_mac.h"
Contains information about a module loaded into a process.
An image reader for the module.
The lifetime of this MachOImageReader is scoped to the lifetime of the ProcessReaderMac that created it.
This field may be
nullptr if a reader could not be created for the module.
The module’s timestamp.
This field will be
0 if its value cannot be determined. It can only be determined for images that are loaded by dyld, so it will be
0 for the main executable and for dyld itself. | https://crashpad.chromium.org/doxygen/structcrashpad_1_1ProcessReaderMac_1_1Module.html | CC-MAIN-2019-13 | refinedweb | 101 | 59.19 |
Scenarios for Project Server Development
Last modified: February 15, 2011
Applies to: Office 2010 | Project 2010 | Project Server 2010 | SharePoint Server 2010
Developers have created thousands of custom solutions for the architecture that was introduced in Microsoft Office Project 2007. There are many more possible scenarios for customization and development of solutions for Microsoft Project Server 2010.
For a summary of types of Project Server integration with external systems, see the blog article, Integrating Project Server 2010 with Line Of Business Applications. Following are some of the new scenarios for development.
One of the biggest areas for new development is demand management for project proposals and portfolio analyses. You can develop or customize workflows, Web Parts, and project detail pages for each organization and for each department within an organization.
Create Custom Proposal Templates
One of the key benefits of Project Server 2010 is the ability to create customized templates for the life cycles of different kinds of projects. Project life cycles are represented by different project proposal types or enterprise project templates (EPTs).
For more information about EPTs, see Workflow and Demand Management. For an example of how to create an EPT to test a workflow, see the Testing a Project Server Workflow section in How to: Install and Test a Project Server Workflow.
Create a Web Page to Submit Proposals
Develop a project detail page (PDP) to capture information in a manner that is logical and pleasing to the user. Because PDPs are based on the SharePoint platform, PDPs in Project Web App can provide a familiar user experience.
Following are tasks included in creating a PDP for a project proposal:
Add existing SharePoint and Project Server Web Parts.
Add custom Web Parts that capture custom fields in other Web Parts in the PDP.
Customize the PDP layout.
Define the Life Cycle of a Project
Phases and stages enable clear distinctions between sections within a project life cycle. For more information about phases and stages, see Workflow and Demand Management.
Following are tasks included in creating phases and stages for demand management:
Define the project life cycle.
Define the stages in each phase of the project life cycle.
Develop the workflow logic between stages.
Use the workflow to help users open related documents for the project life cycle.
For an example workflow, see How to: Create a Branching Workflow.
Steps for the PMO, End Users, and Project Managers
After the project management office (PMO) specifies the approval process for a project proposal, a developer creates a workflow and PDPs for an EPT that uses the workflow. After an administrator deploys the workflow and EPT to Project Web App, end users can submit project proposals, the PMO or another management group can evaluate proposals, and project managers can create plans for approved projects.
Create the Approval Process for a Proposal
The approval process is a mechanism that allows decision-makers the ability to tightly control which projects can continue. Decision-makers can also provide early input to the people who submit project proposals about what needs to change for the project to proceed.
Normally, approvals are placed between stages and phases. However, not all stages or phases require an approval to proceed. Instead, a workflow can include internal checks and processing between stages and phases.
Following are tasks included in creating an approval process:
Grant people approval permissions.
Define approval criteria for a workflow.
Set up approval notifications.
Submit a Proposal
Proposal submission includes the following tasks:
Create a project proposal by using the Project Center page.
Edit an existing project proposal.
Submit a project proposal for review.
Evaluate Proposals Based on Business Strategy
Project Server 2010 provides a great advantage in being able to compare how a proposal directly affects the business strategy of an organization.
Following are tasks included in evaluating a proposal:
Define key performance indicators (KPIs) and priorities for the business.
Open related documents that the developer or administrator has linked on the PDP.
Evaluate how each proposal impacts the KPIs.
Approve Proposals
Views in Project Center can show all proposals and the stages they are in, how many resources each proposal requires, and other information obtained through the demand management process. The process helps ensure that only the most advantageous projects are selected and that proposals do not conflict.
Following are tasks included in approving a proposal:
Create Project Center views to filter for proposal properties, such as the following:
Approved, rejected, or waiting proposals.
Approvals in progress or within a date range.
Review submitted proposals.
Select proposals to approve.
Create Project Schedule and Assign Resources
After a proposal is approved, it is in the management phase. Project Server 2010 enables users to directly manage a project either with Project Professional or Project Web App. Because the experience in both clients is similar, project management can be a smooth extension of the planning phases.
Following are some tasks included in managing projects:
Edit a project schedule by using Project Web App.
Use Project Professional to add and schedule tasks in a proposal.
Define role-based resource requirements; link to related project documents.
In addition to demand management, new Project Server 2010 development scenarios address a very wide range of customization and integration with external systems and Microsoft Office Business Applications (OBAs).
Customize the Ribbon in Project Web App
The ribbon component in Project Web App uses the core ribbon API of SharePoint Foundation 2010. Because most Project Web App pages in Project Server 2010 use the ribbon and Web Parts, and many of the pages use the customizable JS Grid control Control, Project Web App is much easier than previous versions to customize and extend.
Following are some development scenarios for the Server Ribbon:
Add the Ribbon to your own Web pages in Project Web App or to any other page or Web Part in SharePoint Foundation 2010 or SharePoint Server 2010. Custom Web pages in Project Web App can take advantage of the enhanced Project Server 2010 global and category permissions.
Build a new Ribbon control type.
Design a new Ribbon tab by using pre-existing controls.
Add a command to the Ribbon on a specific page—with or without administrator permissions.
Replace an existing Ribbon command.
Preview how a new Ribbon looks, before deploying it.
Change localization strings for the Ribbon.
Add the user interface for a Web Part to the Ribbon.
Provide upgrade information—on the Ribbon itself—about the Ribbon and custom actions or customized pages.
For more information, see How to: Modify the Ribbon in PWA and Walkthrough: Customizing the PWA Ribbon and Accessing the JS Grid.
Use WCF for Integration with External Systems
Windows Communication Foundation (WCF) simplifies service-oriented development and enables secure and reliable messaging to external systems such as LOB applications. For example, a developer can write a connector that reads resource and timesheet data from Project Server and updates data for a human resources (HR) product on a Linux server.
Because the WCF binding can be changed with configuration files, you can use a basic binding in the prototype stages of development and then tighten the binding between Project Server and the HR system after a security review, with no code change or recompiling. For example, during development you can specify a local domain, transport mechanism, and credential type in the app.config configuration file. On deployment, you can change the WCF binding for tighter security with the external HR system.
Although Project Server 2010 does not support the transaction capabilities of WCF, an event handler or middleware component can combine multiple PSI operations into one server-side operation that reduces roundtrip traffic. For example, to keep a SharePoint list in synchronization with a project, you can write a SharePoint event handler for changes in list items that updates a specific Project Web App instance running on a different computer. When a list item is updated, to update the related task in Project Web App, you can use the WCF interface of the PSI to perform the following steps:
Update the task.
If the resource has changed, update the resource on the task.
Send the updated ProjectDataSet to Project Server.
Read the project after the update completes to get the new calculated start and finish dates for the task.
Update the list item with the new start and finish dates.
For more information, see Overview of WCF and the PSI.
Create Custom Portfolio Management Web Parts
Successful portfolio management can require data with specialized queries from the Reporting database, from custom OLAP cubes, and from sources outside of Project Server. New features in SharePoint Foundation 2010 and SharePoint Server 2010 enable Web Parts that can use the new SharePoint Language-Integrated Query (LINQ) provider to access data in other SharePoint sites, include partially trusted code for secure access to potentially insecure data, or even include a Microsoft Silverlight animation. For more information, see the See Also section.
Extend the JS Grid Control
The JS Grid control is customizable and extensible by third-party developers. The JS Grid control can show a Gantt chart and can be added to custom Web Parts for use in Project Web App pages, project site pages, and other Web pages in any SharePoint Foundation 2010 farm.
Check Workflow Data with an Event Handler
The Microsoft.Office.Project.Server.Events namespace includes the WorkflowEventReceiver class with the OnRunning method. You can create an event handler when the workflow starts running that checks data associated with the workflow or starts another workflow.
For an example of an event handler, see How to: Create a Project Server Event Handler and Log an Event. | https://msdn.microsoft.com/en-us/library/office/ee767682(v=office.14) | CC-MAIN-2015-18 | refinedweb | 1,601 | 53.21 |
»
Frameworks
»
Struts
Author
problem with LookupDispatchAction
Andre LeDuc
Greenhorn
Joined: Mar 08, 2006
Posts: 7
posted
Apr 05, 2006 09:45:00
0
I'm using org.apaches.struts.actions.LookupDispatchAction to process different form submissions from the same page. There are several buttons on the page that will submit the form. I want a specific method in the Action class to process a specific form button. That's what LookupDispatchAction is for, right?
My problem is that it seems that buttons must be labelled distinctly, but I want to label them all with the same value.
Here's a represenative button declaration on the page:
<html:submit <fmt:message </html:submit>
This is how I've declared the labels in ApplicationResources:
button.1=Sum button.2=Sum button.3=Sum
In my Action class (which extends LookupDispatchAction), the keyMethodMap looks like this:
keyMethodMap = new HashMap(); keyMethodMap.put("button.1", "sumInventory"); keyMethodMap.put("button.2", "sumAcquisitions"); keyMethodMap.put("button.3", "sumSales");
My struts-config is set up properly and the page renders correctly. But when I click one of the submit buttons, the most observed behavior is this: The method that gets called is the one associated with whatever message key appears last in ApplicationResources. In the scenario above, sumSales() is called regardless of which button I click. If I recode ApplicationResources such that the order is reversed (button,3, button.2, button.1), then sumInventory() is called - whatever button I click. If I rename the labels in ApplicationResources, e.g.
button.1=Sum1 button.2=Sum2 button.3=Sum3
everything works fine.
I thought the whole point of LookupDispatchAction was to be able to use buttons with the same label but call different methods? I'm using
Struts
1.1. Can this be a bug?
Merrill Higginson
Ranch Hand
Joined: Feb 15, 2005
Posts: 4864
posted
Apr 05, 2006 11:28:00
0
The point of LookupDispatchAction is to allow you to use internationalization with your pages, so that regarless of which language your page is displaying, the dispatch Action will still work. Apparently it doesn't work if the actual text being displayed is identical.
All I can do is suggest a work-around: Use DispatchAction instead of LookupDispatchAction. Then change all your buttons to <html:button> and create an <html:hidden field. In the on click event of each button, set the value of the hidden field to the method you want executed and submit the form.
Merrill
Consultant,
Sima Solutions
Andre LeDuc
Greenhorn
Joined: Mar 08, 2006
Posts: 7
posted
Apr 05, 2006 12:14:00
0
Thanks Merril, that makes sense actually. DispatchAction is my next stop.
A somewhat related problem: For all my other actions I've coded an abstract class BaseAction that extends org.apache.struts.action.Action and my "user actions" extend BaseAction. In BaseAction I've got some methods that I consistently use from all actions. The execute() method is coded like this:
public ActionForward execute(...) throws Exception { doSetup(); ActionForward forward = executeAction(mapping, form, request, response); doTeardown(); return forward; }
executeAction() is an abstract method that I implement in my sub-classes.
Is there a way I can use Struts' other action classes (like DispatchAction) and still use the code I've coded in BaseAction without merely creating a BaseDispatchAction and repeating myself?
Liju Cherian
Ranch Hand
Joined: Mar 17, 2006
Posts: 44
posted
Apr 06, 2006 00:00:00
0
I don't see the reason why you can't extend DispatchAction in place of Action, in the BaseAction class, coz it doesn't matter if you extend Action, DispatchAction or LookUpDispatchAction; the execute() method is always called first. Only after calling the execute() method, the remaining methods get called. Hope this answers your query.
Cheers,
Liju
"What's in a name? That which we call a rose by any other name would smell just as sweet."<br />(William Shakespeare, Romeo and Juliet, Act II, Scene II)
Andre LeDuc
Greenhorn
Joined: Mar 08, 2006
Posts: 7
posted
Apr 06, 2006 08:07:00
0
Here's what I really want to do:
public class MyDispatchAction extends org.apache.struts.actions.DispatchAction, MyBaseAction {}
but of course
Java
does not support multiple inheritance.
Merrill Higginson
Ranch Hand
Joined: Feb 15, 2005
Posts: 4864
posted
Apr 06, 2006 11:50:00
0
I think you're going to be better off just creating a separate base action for DispatchAction. If you tried to do everything in one base action, it could get messy. For a DispatchAction, it's the execute() method in the superclass that routes the request to the named method. If you tried to cover all situations in one base class, you'd have to call super.execute() in all cases, and then catch the error that it throws when there is no action parameter and no defined methods to execute. To me this is messy, and I think it's better just to use a separate base class when DispatchAction is required.
Brent Sterling
Ranch Hand
Joined: Feb 08, 2006
Posts: 948
posted
Apr 06, 2006 15:11:00
0
This is part of the reason that I have never been real excited about DispatchAction. I am sure there is a way to interject common functionality, but it is not obvious to me. Have you looked at the source code for DispatchAction? It is actually a very small class with one a hundred or so lines of code, and half of that is exception handling. You could easily do "reuse from cut and paste" if your really wanted the functionality of DispatchAction.
- Brent
I agree. Here's the link:
subject: problem with LookupDispatchAction
Similar Threads
LookupDispatchAction
LookupDispatchAction Query
LookupDispatchAction + html:cancel
Dynamic Submit Buttons
Dynamic Submit Buttons
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/53424/Struts/LookupDispatchAction | CC-MAIN-2015-32 | refinedweb | 973 | 62.88 |
How Apache Commons BCEL got to where it is currently.
1. I wanted to release a version of BCEL which would support Java 6 and 7.
2. I updated several classes that handled the new instructions and new code attributes.
3. This required new methods on several interfaces.
4. These new methods broke binary compatibility.
5. Whenever binary compatibility is broken, Apache Commons policy is to update the Maven GAV
to prevent jar hell.
6. Part of updating GAV is to also update the package names.
7. I created a release candidate which was deemed unsuitable for several reasons, mostly due
to FindBugs warnings.
8. Multiple refactorings were completed (including moving the Constants interface to a class) to handle
FindBugs warnings.
9. Refactoring died out after no response from users as to Apache direction.
10. Recent new interest has us revisiting these changes.
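Steps 3 and 4 describe a classic Java evolution trap: adding an abstract method to a published interface breaks every implementation compiled against the old version - recompilation fails, and an already-compiled class throws AbstractMethodError when the new method is invoked on it. Here is a minimal sketch (the type and method names are hypothetical, not the actual BCEL API), which also shows the Java 8 default-method escape hatch - an option only if the library can require Java 8:

```java
public class InterfaceEvolutionSketch {

    // The v1 contract that third-party detectors were compiled against.
    public interface Visitor {
        String visitCode();
    }

    // Adding a plain abstract method to Visitor (e.g. for a new Java 7
    // attribute) makes every existing implementation incomplete:
    // recompiling it fails, and an already-compiled implementation
    // throws AbstractMethodError when the new method is invoked.
    // A Java 8 default method is the only way to add a method
    // without breaking implementers:
    public interface VisitorV2 extends Visitor {
        default String visitBootstrapMethods() {
            return "skipped";
        }
    }

    // An "old" detector that only knows about the v1 method.
    public static class OldDetector implements VisitorV2 {
        @Override
        public String visitCode() {
            return "visited code";
        }
    }

    public static void main(String[] args) {
        VisitorV2 v = new OldDetector();
        System.out.println(v.visitCode());             // visited code
        System.out.println(v.visitBootstrapMethods()); // skipped
    }
}
```

Before Java 8 there was simply no non-breaking way to grow an interface, which is why supporting the new Java 6/7 class-file features forced the compatibility decision in steps 4 and 5.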
At this point, we’re somewhat stuck.
1. Do we break Apache Commons policy regarding binary compatibility, GAV, and package names?
2. Do we ignore the FindBugs warnings?
Personally, I am against either of those. I also believe that to fix BCEL correctly, we'll
end up with an API sufficiently different that users will have a non-trivial porting task.
It might be saner if Apache Commons moves BCEL into the attic and suggests that our clients
migrate to either ASM or Javassist.
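One concrete example of that porting task is the "moving Interface to Class" refactoring from step 8: BCEL 5 published its class-file constants on an interface (org.apache.bcel.Constants), which many detectors implemented solely to inherit the fields unqualified - the classic "constant interface" anti-pattern. Turning it into a class (Const) removes that inheritance. A simplified sketch of the pattern - ACC_PUBLIC is a real JVM access-flag value, but the surrounding class shapes are illustrative only, not the BCEL source:

```java
public class ConstantInterfaceSketch {

    // BCEL 5 style: constants live on an interface, so a detector can
    // write "implements Constants" and use ACC_PUBLIC unqualified.
    public interface Constants {
        short ACC_PUBLIC = 0x0001; // real JVM access flag value
    }

    public static class OldStyleDetector implements Constants {
        public boolean isPublic(int accessFlags) {
            return (accessFlags & ACC_PUBLIC) != 0; // inherited, unqualified
        }
    }

    // BCEL 6 style: constants move to a non-instantiable class. Nothing
    // can implement it any more, so every unqualified reference above
    // stops compiling and must be rewritten with an explicit qualifier.
    public static final class Const {
        public static final short ACC_PUBLIC = 0x0001;
        private Const() {} // no instances
    }

    public static class NewStyleDetector {
        public boolean isPublic(int accessFlags) {
            return (accessFlags & Const.ACC_PUBLIC) != 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(new OldStyleDetector().isPublic(0x0001)); // true
        System.out.println(new NewStyleDetector().isPublic(0x0008)); // false
    }
}
```

The mechanical fix in client code is to qualify every use (Const.ACC_PUBLIC), but for a class that implemented Constants purely as a mixin, every unqualified reference becomes a compile error - exactly the kind of non-trivial porting task at stake.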
regards,
chas
> On Jun 6, 2016, at 11:27 AM, Andrey Loskutov <loskutov@gmx.de> wrote:
>
> Hi all,
>
> this is a follow-up on.
>
> I'm cross-posting this to dev@commons.apache.org because the discussion on the FindBugs mailing
list is related to the BCEL 6 API future, and because I would like to know the opinions from
the BCEL community on the upcoming BCEL 6 release compatibility story.
>
> Please see my answers inline.
>
> On Monday 06 June 2016 17:30 sebb wrote:
>> On 6 June 2016 at 16:23, Andrey Loskutov <loskutov@gmx.de> wrote:
>>> Hi all,
>>>
>>> here is the current state of adapting FindBugs to Java 9.
>>>
>>> 1) The FindBugs standalone and Eclipse plugins now run fine on/with Java 9; the latest
code is on the java9 branch (not on master yet!), see [0, 1]. If there is interest, I can provide
binary snapshots.
>>>
>>> 2) I have difficulties using the BCEL 6 trunk, see [2]. It looks like even after fixing
all compile errors due to the various API breakages in BCEL 6 (see [3]), the current BCEL 6 head
can't be used directly as a replacement for our old BCEL 5.2 fork, see [4]. If any of the
FB and/or BCEL gurus could check this, it would be really helpful. Either our BCEL 5.2 patches
were not fully propagated upstream to BCEL, or the BCEL 6 trunk has a few regressions, or I missed
something during the update? I have no idea, because of the next point below. The experimental
BCEL 6 port is on an extra branch on top of the Java 9-related changes; see the commits prefixed with
BCEL6 on the java9_bcel6 branch at [5].
>>>
>>> 3) I would be very happy if someone (Bill?) would explain how the *current* BCEL 5.2
fork used by FindBugs was built. It was added in commit [6], but I am missing instructions on how it
differs from the original BCEL code and so am unable to re-build it.
>>>
>>> 4) Assuming the BCEL 6 bugs/FB errors would be fixed (see [4]), transitioning to the
current BCEL 6 head would break each and every FindBugs client, because BCEL 6 in its current state
uses a different namespace and also adds some other API-breaking changes. If we choose this
path, none of the 3rd party detectors will work anymore, and therefore we must bump the FindBugs
version to 4.0.
>>
>> This is useful to know.
>> So do the 3rd party detectors depend on the BCEL namespace?
>
> Yes
>
>> Or is it because of the BCEL API changes?
>
> Also yes.
>
>> If so, which ones?
>
> The biggest one is the package namespace change, because this affects each existing BCEL
class/interface.
> See commit
which affects ~400 files in FindBugs.
>
> Much smaller (but still API-breaking) changes were the class name changes Constants ->
Const and StackMapTable[Entry] -> StackMap[Entry], and the move of the constants defined in Constants
from the interface to the class, thus breaking everyone who implemented the interface and
now misses the constants. The rename of StackMapTable/Entry additionally broke every detector
implemented on top of PreorderVisitor. StackMapTableEntry not only changed its name, but also
changed its signature: getByteCodeOffsetDelta -> getByteCodeOffset, which gives you an additional
piece of happiness.
>
> Finally, VisitorSupportsInvokeDynamic interface was removed, which broke all FB visitors
based on it via AbstractFrameModelingVisitor and 8 methods were added to the Visitor interface.
>
> That's all I saw in our FB code (where we have a lot of detectors); probably others will
report additional API breakage too, I can't say for sure.
>
> But the main issue is the namespace change - it is really unnecessary and surprising. I've
read through the commons mailing list, and I was surprised that I saw no real request for it
or any discussion about it (I haven't read through all the years, only around the namespace
change last summer). The only thing I saw was the Jira request,
out of nowhere, and a few commits later the BCEL 6 API was made incompatible with every existing client.
:-(
>
>> I'm a bit surprised that the BCEL API should affect the detectors, but
>> perhaps there's a good reason for that.
>
> FB analyses bytecode, and although we also have a few recently added ASM-based detectors
(which are mostly BCEL free), most of the detectors (and unfortunately many places in the
FB API) use BCEL data structures. It was a natural choice 15 years ago, when BCEL was the
only bytecode framework...
>
> One way to "fix" the current FindBugs misery is to replace BCEL with ASM (the asm.tree package
& Co.), but this requires a lot of effort because the API and design of ASM do not match BCEL 1:1
- and it would also break every FB client in a much harder way than the BCEL 6 API breakage does today.
Doing this would effectively mean a complete fork/rewrite of the FindBugs code, and no one is willing
to spend time on it.
>
>>> Question is: should we go this way? Alternative is: undo BCEL package renaming
and revert few API changes were made. This sounds complicated, but is doable, see BCEL 6 fork
where I renamed all packages back to the old namespace in few minutes [7]. Fixing other "smaller"
breaking BCEL API changes is not that complicated either. However, it is also possible that
BCEL 6 will be released without breaking API's, if I understood right the discussion on the
apache commons-dev mailing list [8]. If anyone from BCEL is listening to this mailing list,
it would be nice to get some feedback on BCEL 6 plans.
>>
>> I have done quite a bit of work trying to eliminate the API breaks
>> without compromising the BCEL 6 updates.
>
> I really appreciate your effort. Please keep it going.
>
>> Though I have yet to revert the Java and Maven namespace changes as I
>> wanted agreement with the approach first.
>>
>> From my point of view I would be happy to see a compatible version of
>> BCEL using the original namespaces.
>> I'm not sure what other Commons devs think.
>
> I hope to see a binary compatible BCEL 6 release, which is might be not 1:1 drop-in replacement
of BCEL 5 but at least 99%. Some changes must happen, API must evolve, this is natural and
no one can keep on the old API forever.
> But! After walking over the all BCEL renamings etc I do not really see a real, functional
reason to break *everything*, and a behavior change with annotations parsing is something
one can live with for a major release. Not all detectors rely on annotations and BCEL behavior
change can probably be fixed in FB core code (so hidden from 3rd party libraries).
>
>> There are still some Java8/Java9 features that are not fully supported.
>> This is true regardless of the namespace issue.
>
> That's absolutely fine and understandable.
>
> My main goal it to get rid of private BCEL forks which cannot be rebuilt/updated anymore
as we see it today, so that we can compile FB on BCEL head, catching all the fixes you will
provide in future BCEL versions. ...And in my ideal world, the new FindBugs release based
on BCEL 6 will not break any existing 3rd party FindBugs detectors library, or eventually
only require a few trivial changes.
>
> Thanks!
>
>>> [0]
>>> [1]
>>> [2]
>>> [3]
>>> [4]
>>> [5]
>>> [6]
>>> [7]
>>> [8]
>>>
>>> On Sunday 05 June 2016 12:55 Andrey Loskutov wrote:
>>>> Hi all,
>>>>
>>>> I've got some free time and now working on Java 9 support for FindBugs,
>>>> the first draft works already, but need some more polishing.
>>>>
>>>> The main goal is to support FB running on Java 9 JRE, to support reading
>>>> Java 9 class files, and to support FB running on Java 8 but analyzing
>>>> Java 9 code. Nice to have (but not in my scope right now) is to support
>>>> any new Java 8/9 constructs like lambdas, type annotations etc.
>>>>
>>>> I've documented briefly tasks coming to my mind via [1].
>>>>
>>>> I plan to push my changes on new java9 branch ASAP.
>>>>
>>>> Main discussion points I see so far:
>>>>
>>>> 1) We must bump the required JRE for FB to 1.8. I see no reason trying
>>>> to support obsoleted 1.7 JRE. If someone wants run FB on 1.7, the old FB
>>>> 3.0.1 should be used. Objections?
>>>>
>>>> 2) Since there are no official releases from ASM/BCEL with Java 9
>>>> support yet, we can release first version based on our own FB private
>>>> snapshot versions. The maven folks will cry but this is a chicken and
>>>> egg problem, so I don't care about maven support for the first round (of
>>>> course any help is welcome). Objections?
>>>>
>>>> 3) Due the JRE version bump I would propose to bump FB version to 3.1.0.
>>>> Objections?
>>>>
>>>> Please give your feedback either on the lists or on the github task [1].
>>>>
>>>> [1]
>>>>
>>>
>>> --
>>> Kind regards,
>>> google.com/+AndreyLoskutov
>>> _______________________________________________
>>> Findbugs-discuss mailing list
>>> Findbugs-discuss@cs.umd.edu
>>>
>
> --
> Kind regards,
> google.com/+AndreyLoskutov
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
>
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org | http://mail-archives.apache.org/mod_mbox/commons-dev/201606.mbox/%3CADEA2E6B-8D3D-4BEB-937C-AF37CE7E3620@honton.org%3E | CC-MAIN-2017-47 | refinedweb | 1,742 | 63.59 |
Difference between revisions of "Provisional API Guidelines"
From Eclipsepedia
Latest revision as of 09:54, 19 April 2010
Please also see the Provisional API Guidelines Update Proposal
[edit] Overview
Please also see the Provisional API Guidelines Update Proposal)
- Internal.
[edit] Before.
[edit].
[edit] Bundle manifest
All API packages should be exported unconditionally by the bundle manifest. If internal packages are exported, they should be marked x-internal.
[edit]>..
[edit] compatibility) does not apply. Clients who think they must use internal code may do so at their own risk, or with slightly less risk if they can reach an agreement with the team that developed the internal code. Note also that in such cases, the required versions for plug-in dependencies need to be specified with great care.
[edit] Package naming
All internal code must be in a package whose name contains the segment "internal". The "internal.provisional" naming convention can be used, but is not required, to separate code that may become API in the future from other internal code within the internal package namespace.
[edit] Bundle manifest
All API packages should be exported unconditionally by the bundle manifest. If internal packages are exported, they should be marked x-internal.
[edit].
[edit] provisional APIs slated for change or removal in at least one integration build before making the change. Although not required, adding such a temporary tag can ease the transition for early adopters of the API:
* @deprecated This API will be removed in I20380119. Method {@link #blort()} should be used instead.
Note that there are no restrictions on changing internal code before or after the API freeze. Since all such code is in internal packages after this point, it can change arbitrarily without notice up to and including the final build of the release. | http://wiki.eclipse.org/index.php?title=Provisional_API_Guidelines&diff=196962&oldid=2454 | CC-MAIN-2014-41 | refinedweb | 295 | 51.89 |
The following form allows you to view linux man pages.
#include <numaif.h>
int set_mempolicy(int mode, unsigned long *nodemask,
unsigned long maxnode);
Link with -lnuma.
set_mempolicy() sets the NUMA memory policy of the calling process,
which consists of a policy mode and zero or more nodes, to the values
specified by the mode, nodemask and maxnode arguments.
A NUMA machine has different memory controllers with different dis-
tances to specific CPUs. The memory policy defines from which node
memory is allocated for the process.
This system call defines the default policy for the process. The pro-
cess applied
only when a new page is allocated for the process. For anonymous mem-
ory sup-
ported mode flags are:
MPOL_F_STATIC_NODES (since Linux 2.6.26)
A nonempty nodemask specifies physical node ids. Linux does
will not remap the nodemask when the process moves to a differ-
ent mask of node IDs that contains up to maxnode
bits. The bit mask size is rounded to the next multiple of
sizeof(unsigned long), but the kernel will use bits only up to maxnode.
A NULL value of nodemask or a maxnode value of zero specifies the empty
set of nodes. If the value of maxnode is zero, the nodemask argument
is, allocate memory on the node of the CPU that triggered the alloca-
tion. nodemask must be specified as NULL. If the "local node" con-
tains no free memory, the system will attempt to allocate memory from a
"near by" node.
The MPOL_BIND mode defines a strict policy that restricts memory allo-
cation to the nodes specified in nodemask. If nodemask specifies more
than one node, page allocations will come from the node with the lowest
numeric node ID first, until that node contains no free memory. Allo-
cations will then come from the node with the next highest node ID
specified in nodemask and so forth, until none of the specified nodes
contain free memory. Pages will not be allocated from any node not
specified in the nodemask.
MPOL_INTERLEAVE sets the preferred node for allocation. The kernel will
try to allocate pages from this node first and fall back to "near by"
nodes if the preferred node is low on free memory. If nodemask speci-
fies more than one node ID, the first node in the mask will be selected
as the preferred node. If the nodemask and maxnode arguments specify
the empty set, then the policy specifies "local allocation" (like the
system default policy discussed above).
Then-
ode.
get_mempolicy(2), getcpu(2), mbind(2), mmap(2), numa(3), cpuset(7),
numa(7), numactl(8)
webmaster@linuxguruz.com | http://www.linuxguruz.com/man-pages/set_mempolicy/ | CC-MAIN-2017-39 | refinedweb | 436 | 62.17 |
JetBrains News Archive (2011)
ReSharper 6.1 released, along with dotCover 1.2 and dotTrace 4.5.2 Performance
20.12.2011
We are happy to announce that ReSharper 6.1 has been released, along with compatible updates to dotCover and dotTrace Performance.
Here's a summary of news and improvements in ReSharper 6.1:
- Performance and bug fixes. Since releasing version 6.0, we have fixed 140 performance problems and over 800 bugs!
- New settings management engine. ReSharper settings are now layered, which lets you define global, per-solution, per-team or individual settings. More than that, you can now export and import all or selected settings to/from a single file. For an introduction to the new settings engine, see JetBrains .NET Tools blog.
- Support for Async CTP. The increasingly popular asynchronous programming syntax for C# and VB.NET is now supported in ReSharper 6.1 with a set of code inspections and quick-fixes.
- Optimizing assembly references. ReSharper 6.1 helps you detect unused assembly references per-project, and provides a UI to remove all or some of them.
- New code inspections and quick-fixes for C# and VB.NET. ReSharper 6.1 detects more cases of redundant return statements, access to disposed closure and covariant array conversions, plus adds a pack of previously unavailable VB.NET code inspections.
- XAML support improvements including enhanced code completion, batch generation of event subscriptions, improved attached property awareness, creating namespace aliases from usage etc.
- Other improvements including a new refactoring to convert iterators to methods, internationalization for VB.NET projects; experimental support for Visual Studio 11 Developer Preview (a separate build is available on ReSharper downloads), ReSharper SDK for easier ReSharper plug-in development, and more.
Both dotTrace 4.5.2 Performance and dotCover 1.2 fully support ReSharper 6.1. In addition, dotCover 1.2 works side-by-side with TypeMock 6.2.x, provides performance improvements in report generation, reduces HTML report size and memory consumption.
Download ReSharper 6.1, dotCover 1.2 and dotTrace 4.5.2 Performance today!
WebStorm 3.0: The JavaScript IDE!
20.12.2011
WebStorm 3.0 is out!
Latest trends like CoffeeScript, JSLint validation and the new star of server-side JavaScript development — Node.JS — are at your fingertips:
The key new functionality in WebStorm 3.0 includes:
- Support for Node.JS allows you to use all the IDE power for editing and debugging server-side JavaScript.
- CoffeeScript navigation, completions, refactoring, error-checking and more.
- Integration with JSLint.
For more details please read what's new, and download WebStorm 3.0!
PyCharm 2: Double the charm, double the power!
13.12.2011
To address the needs of more Python developers PyCharm takes a major step forward with its new release.
PyCharm 2.0 is available with the following changes and improvements included:
- Support for Mako and Jinja2 template languages.
- Built-in analysis of code coverage by unit tests.
- More code inspections, e.g. detection of parameter type mismatch and usages of deprecated modules, classes and functions.
- Support for IPython in the built-in Python console.
- Full support for coding with Cython.
- New JavaScript debugger based on Mozilla Firefox and Google Chrome browsers.
- Specific code assistance for internationalizing Django applications.
- Running applications using PyPy interpreter.
- Ability to open multiple projects inside one IDE window.
- Many improvements in the IDE UI for every supported platform.
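The new inspections are easiest to picture with a concrete case. Below is a hypothetical sketch (the function and values are invented, not taken from PyCharm's own examples) of the kind of parameter type mismatch a static inspection flags before the code is ever run:

```python
def area(width: float, height: float) -> float:
    """Compute a rectangle's area from two numeric sides."""
    return width * height

# A static inspection flags this call: a str is passed where a float
# is expected. At runtime the same mistake surfaces as a TypeError.
try:
    area("3", 4.0)
    mismatch_detected = False
except TypeError:
    mismatch_detected = True

print(mismatch_detected)  # True
```

The point of catching this in the editor is that the error is visible while typing, rather than at the moment the broken call path is finally executed.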
For more details on the changes, please read what's new, and then download PyCharm 2.0. Note that PyCharm 2.0 is a free update for everyone who purchased their PyCharm license after Dec 12, 2010.
As usual, a 30-day trial is available if you want to try Pycharm as your new Python/Django IDE.
YouTrack Goes to Cloud
8.12.2011
We are happy to announce the release of YouTrack InCloud, a cloud-based version of our intelligent issue and bug tracker.
For details on subscription plan options please see YouTrack InCloud Pricing.
Keep tracking with pleasure! JetBrains YouTrack Team
IntelliJ IDEA 11: Get Ready for a Productivity Takeoff!
6.12.2011
Please welcome IntelliJ IDEA 11, the fastest, the smartest, and best looking version of our IDE for polyglot development.
The newest version of IntelliJ IDEA reveals a custom, streamlined user interface along with IDE performance optimizations and numerous adjustments throughout the whole environment. In addition, IntelliJ IDEA is now a better citizen on all of its supported platforms, with many platform-specific features added such as Mac OS X Lion full-screen support or drag-n-drop and symbolic links support on Linux.
For more details check out what's new, and download IntelliJ IDEA 11!
The JetBrains Team
PhpStorm 3.0: Keep Your Code in Its Best Shape!
29.11.2011
PhpStorm 3.0 is out!
This major release introduces the Smart Duplicated Code Detector to help you quickly find similar blocks through your entire code base, a new integrated UML tool to quickly get a bird's-eye view of your project structure, and Profiler to ensure that your code works fast.
The key new features are the Duplicated Code Detector, the UML tool, and the Profiler.
For more details please read what's new, and download PhpStorm 3.0!
The JetBrains Team
ReSharper 6.1 Early Access Program Opens
04.11.2011
Great news: the Early Access Program for ReSharper 6.1 is now open!
AppCode 1.0 is Here: New Objective-C IDE for iOS and Mac OS Developers
25.10.2011
AppCode is generally available now — a new alternative Objective-C IDE that extends the already wide range of technology-specific development environments JetBrains offers to developers.
In line with JetBrains' vision for development best practices and code quality standards, AppCode features tight Xcode integration and a strong accent on developer productivity and code quality.
AppCode 1.0 key features include:
- Support for the latest Objective-C language features, like Automatic Reference Counting.
- Version Control Systems integration — unified UI for most popular VCSs such as Git, SVN, Mercurial, Perforce and CVS.
Read more about JetBrains AppCode and download a free 30-day trial.
Separate personal and commercial licenses are available for AppCode each including free IDE updates during 1 year after the purchase date.
The JetBrains Team
AppCode goes Beta with a 50% discount for 1.0
16.09.2011
We are excited to announce the availability of Beta release for JetBrains AppCode!
AppCode is an alternative Objective-C IDE for Mac OS X and iOS developers concentrating of an improved coding productivity.
With the Beta released, only a short final stretch remains to the 1.0 release, which is due out later this Fall.
During the Beta period everyone can get a 50% discount coupon for AppCode by submitting a request to JetBrains. Once version 1.0 is out the coupons can be used to purchase the IDE at half-price. We've also revealed the planned pricing for AppCode.
But enough words, download AppCode Beta and make sure you request the discount to save on AppCode 1.0!
The JetBrains Team
Back to School Sale! Up to 50% OFF on YouTrack, PyCharm
01.09.2011
Do you miss back-to-school rush time? Do you remember how it felt?
We would like to remind you about the wonderful time we all spent preparing for a new school year by announcing a Back to School Sale! Start the new business year with the best tools for your development team!
During the entire month of September you are entitled to special promotion that allows you to get up to 50% OFF when purchasing YouTrack, our bug and issue tracker, and 50% OFF PyCharm personal license — Python and Django IDE.
Hurry up, the offer is valid until September 30 only. Start the new year with the most intelligent issue & bug tracker and a smart development environment.
Best Regards, JetBrains Team
MPS 2.0 Release
22.08.2011
After several milestones, the long-awaited second version of MPS is now generally available to the public. Major 2.0 highlights are:
- Support for creating standalone customized language IDEs
- Type-system improvements plus the ability to trace type calculations
- Scripting and customization of the build process
- Redesigned UI for merging different versions safely and intuitively
- Intelligent paste code from Java
- The Dataflow Analyzer language
- And much more
For a detailed list of all the novelties in this new release, please refer to the What's New page.
And now it's time to download MPS 2.0 and consider doing some serious business with it.
The JetBrains Team
dotCover, dotTrace Performance get updated to support ReSharper 6
5.08.2011
Great news for everyone who uses ReSharper 6 along with other JetBrains .NET tools: you can now download dotCover 1.1.1 and dotTrace 4.5.1 Performance that both support ReSharper 6.
Other noteworthy changes in these new minor versions of the code coverage tool and performance profiler include:
- dotCover 1.1.1 introduces support for NUnit dynamic tests (such as those marked with TestCaseSource or Sequential attributes); persists excluded nodes info between unit test session runs; and adds a shortcut to cover all tests from solution (Ctrl+Alt+/)
- dotTrace 4.5.1 Performance brings support for Adjust Time on special functions like Garbage Collector or Native Code; and adds a shortcut (Ctrl+R) for quick editing of profiling configurations
Both releases are free upgrades. To learn more about improvements introduced in dotCover 1.1.1 and dotTrace 4.5.1 Performance, please read this entry on our .NET tools blog.
The JetBrains Team
PhpStorm 2.1.2 and WebStorm 2.1.2 are available for download.
19.07.2011
PhpStorm 2.1.2 and WebStorm 2.1.2, fresh updates to the recently released products are available for download.
This release brings significant performance improvements in JavaScript, PHP and CSS support and is a strongly recommended update for all users.
Download WebStormDownload PhpStorm
The JetBrains Team
YouTrack Hosted 3.0 - Free Use Until Fall'11
13.07.2011
Great news just in the nick of time! YouTrack Hosted has successfully migrated to version 3.0 and is open for new registrations. Welcome to YouTrack 3.0 Hosted!
Enjoy all the benefits of intelligent bug tracking without taking care of server administration and support. And even better - it's free! We've extended the free beta period for YouTrack Hosted until Fall 2011.
Why should you try YouTrack 3.0 hosted?
First of all, signing up for a hosted YouTrack is the fastest and easiest way to try all the brand new features, and then decide whether to stick with the hosted service - or migrate to stand-alone, if you would like to keep control of your server in your own hands.
And finally, let me tell you what is so great about YouTrack 3.0!
Workflow management is definitely the greatest addition. Finally, YouTrack is able to fully cover all your issue tracking needs, even for the most complicated of processes or the largest of businesses. The problem is solved in a truly intelligent and 'geeky' way. You get a dedicated JetBrains style editor enhanced with auto-completion, suggestions, inspections and more, just like your favorite IDE, to create and edit your workflow rules using a custom-made domain-specific language. Please watch this short screencast to get an idea of how custom workflow works in YouTrack. Download YouTrack Workflow Editor and don't hold yourself back from making your own rules.
As a teaser, here's a workflow rule that requires commenting an issue when closing it as won't fix:
when issue.State.becomes({Won't fix}) {
  assert comments.added.isNotEmpty: "Please leave a comment";
}
Isn't it cool?
There's much more impressive stuff in the new YouTrack, including the set of issue attributes that can be customized just the way you want; JetBrains' own database; mailbox integration to create issues right from email; 'add watcher' to subscribe other users to issue updates; applying issue commands right from VCS commit comment; import from any bug tracker and integration with GitHub. Please take a look at What's New In YouTrack 3.0 to get a better overview of all the goodies now available in YouTrack Hosted as well. Get your YouTrack Hosted instance now and continue tracking with more pleasure than ever before!
The JetBrains Team
RubyMine 3.2 update adds Rails 3.1, CoffeeScript and YARD support
6.07.2011
RubyMine 3.2 is available for download. The major reason behind this update is the support of Ruby on Rails 3.1. But actually there's more.
Major enhancements include:
- Ruby on Rails 3.1 support
- CoffeeScript support
- YARD support
Read more about what's new and download RubyMine 3.2.
The JetBrains Team
ReSharper 6 is Released!
30.06.2011
We are extremely happy and relieved to announce the long-awaited release of the new ReSharper: you can download ReSharper 6.0 right now.
Highlights of ReSharper 6 include:
- Rich support for JavaScript, CSS, and ASP.NET MVC 3 Razor view engine including code inspections, navigation and usage search, and extended code completion.
- Navigation and search improved with new features and usability enhancements.
- Bundled decompiler with the entire range of ReSharper navigation inside.
- New code transformations, including transforming LINQ to loops.
- Multiple new code inspections and quick-fixes for C#.
- Auto-importing namespaces on pasting code and in-place refactorings.
- Code quality analysis and quick-fixes for VB.NET code.
- Unit test runner improvements including support for NUnit test cases and feedback on build failure.
- Improvements in XAML support.
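The "LINQ to loops" transformation is specific to C#, but the idea translates to any language with query-style expressions. As a language-neutral sketch in Python (the data and names are made up for illustration), the refactoring rewrites a comprehension into the explicit loop it abbreviates:

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Query-style form, analogous to a LINQ where/select chain:
squares_of_evens = [n * n for n in data if n % 2 == 0]

# The explicit loop form such a transformation would produce:
result = []
for n in data:
    if n % 2 == 0:
        result.append(n * n)

print(result)                      # [16, 4, 36]
print(squares_of_evens == result)  # True
```

Going from the query form to the loop form is handy when you need to add logging, breakpoints, or early exits that the declarative style makes awkward.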
For details on new and improved features, please read What's New in ReSharper 6 and watch this screencast on JetBrains TV.
We hope you're excited with this release as much as we are!
ReSharper 6.0 RC2; dotCover and dotTrace EAPs for ReSharper 6 Integration
28.06.2011
First of all, we have the new ReSharper 6.0 Release Candidate 2 build available for you to download. Secondly, EAP builds of dotCover and dotTrace Performance that integrate with ReSharper 6 are now available. If you have been hesitant to try out ReSharper 6 due to integration issues with other JetBrains tools, this may be the time to reconsider.
ReSharper 6.0 Release Candidate
23.06.2011
We're making ReSharper 6.0 Release Candidate available today, a build that is a step away from the official release. It brings further changes on top of Beta 3. If you run into any problems, please let us know early since we're counting days to release.
ReSharper 6.0 Beta 3
20.06.2011
We're going exceptionally fast these days with Beta releases. Today, here goes the third Beta release of ReSharper 6: please welcome to download and try ReSharper 6.0 Beta 3. Here's why it's better:
- Less memory consumption: we've been removing memory leaks and exceptional memory traffic extensively this past week.
- No more never-ending process of scanning source files on solution load that some users experienced with Beta 2.
Download ReSharper 6.0 Beta 3, and please let us know as soon as possible if you're facing any painful issues: file a bug report, or fill out this feedback form. There's also the "ReSharper Misbehaves" icon in Visual Studio for quick feedback.
ReSharper 6.0 Beta 2
16.06.2011
Following exceptional feedback from the community, we're publishing the new Beta release of ReSharper 6: please download and try ReSharper 6.0 Beta 2.
We'd like to make sure that, if you've experienced issues with the first Beta, they're gone for you, so please download ReSharper 6.0 Beta 2. If anything is still going wrong for you with this new Beta release, be it regression cases, performance or memory consumption problems, please let us know.
JetBrains Hits a Perfect Three-Pointer With YouTrack 3.0
15.06.2011
We are happy to announce the release of YouTrack 3.0, a fully customizable version of our intelligent keyboard-centric issue and bug tracker, with a Free Pack now available for small teams. Download YouTrack 3.0 Free Pack.
For more information on the YouTrack pricing and licensing options, please visit the Buy & Upgrade page.
Keep tracking with pleasure! JetBrains YouTrack Team
ReSharper 6.0 Goes Beta
14.06.2011
Here's some long-awaited news for .NET developers: after months of early access, we're making ReSharper 6.0 Beta available for you to download and test-drive.
ReSharper 6.0 is a very substantial update that includes (but is not limited to):
- Rich support for JavaScript, CSS, and ASP.NET MVC 3 (Razor).
- Bundled decompiler.
- New navigation actions and valuable additions to existing navigation features.
- Code inspections and quick-fixes for on-the-fly code quality control in VB.NET code.
- Multiple new code inspections and quick-fixes for C#.
- Color assistance in C#, XAML, VB.NET, and CSS.
- Bug fixes and new actions in XAML code.
For more details, please read What's New in ReSharper 6 and follow JetBrains .NET Tools Blog for regular feature overviews.
Please note that since we're committed to release ReSharper 6 very soon, we desperately need your feedback. If you face any significant issues with ReSharper 6.0 Beta, please contact ReSharper support, file a bug report, or fill out this feedback form. You can also use the "ReSharper Misbehaves" icon in Visual Studio to quickly submit feedback.
We hope you enjoy ReSharper 6.0 Beta, as it's so much larger than anything we've ever released!
dotTrace 4.5 Performance, dotCover 1.1 Released
9.06.2011
We're releasing updates to our performance profiler and code coverage tool for .NET developers today: please download the release builds of dotTrace 4.5 Performance and dotCover 1.1.
New features in dotTrace 4.5 Performance include:
- Reworked presentation of profiling results.
- New options to estimate potential performance gains.
- Analyzing the performance of a certain class along the lines of its API.
dotCover 1.1 highlights include:
- Generating code coverage reports as XML, HTML, JSON, and XML for NDepend.
- Per-solution coverage settings.
- Code coverage analysis results can be directly requested from a TeamCity server to Visual Studio.
Additional resources for you to learn more details on the two releases include:
- What's new in dotCover 1.1
- Screencast: Introducing dotCover 1.1
- What's new in dotTrace 4.5 Performance
- Screencast: What's new in dotTrace 4.5 Performance.
PyCharm 1.5: Ultimate documentation support, Django templates debugging and more
6.06.2011
This release brings a range of improvements to make Python and Django development more enjoyable and productive in PyCharm.
PyCharm 1.5 highlights:
- Ultimate documentation support (external documentation, Epydoc and reStructuredText markup, etc.)
- Django Templates Debugging
- Database support in Django applications (autodetection, .sql files editing aids, database tables browsing, etc.)
- New 'Move Class/Function' Refactoring
- Many IDE improvements and bug-fixes
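To illustrate the documentation support, here is a minimal sketch (the function name and conversion are invented, not from PyCharm's docs) of a docstring using reStructuredText field markup of the kind PyCharm 1.5 can render and check:

```python
def miles_to_km(value, as_meters=False):
    """Convert a distance from miles to kilometers.

    :param value: distance in miles
    :type value: float
    :param as_meters: if True, return meters instead of kilometers
    :type as_meters: bool
    :rtype: float
    """
    km = value * 1.609344
    return km * 1000 if as_meters else km

print(round(miles_to_km(10), 3))      # 16.093
print(round(miles_to_km(1, True), 3)) # 1609.344
```

An IDE that understands these fields can derive parameter types from them, flag calls that contradict the declared types, and show the rendered docstring in quick documentation popups.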
For more details on the changes, please read what's new, and then download PyCharm 1.5. Note that version 1.5 is a free update for everyone with a PyCharm license.
As usual, a 30-day trial is available if you are considering Pycharm as your Python/Django IDE.
TeamCity 6.5 Released: Set Free For Unlimited Number Of Users
26.05.2011
Hello,
We're happy to present TeamCity 6.5, a significant update to our distributed build management and continuous integration server.
This version of TeamCity comes with a new fresh UI and enhanced functionality, including advanced DVCS's usage scenarios support.
TeamCity 6.5 highlights:
- Personal Builds on branches for Git and Mercurial
- Parametrized Version Control System roots
- Muting of failing tests
- Last but not least, TeamCity Professional Edition is now rid of many previous limitations. You can now run free TeamCity with an unlimited number of registered users, support for LDAP, and flexible per-project roles and permissions.
For more details on these changes, please read What's new, and Download a FREE Professional Edition of TeamCity. For the detailed information about purchasing options and conditions, please visit TeamCity Licensing & Prices page.
PhpStorm 2.1 Released!
23.05.2011
PhpStorm 2.1 is finally out!
In this release we focused on the most 'voted for' issues and requests as well as important performance and stability improvements.
So, make sure to check it out:
- Phing support added
- Initial Twig support
- New Diff tools for comparing directories, images and DB's
- 'Extract
WebStorm 2.1 released!
20.05.2011
WebStorm 2.1 is finally out! Please read what's new, and download the edition of your choice.
Don't forget: until June 30 you can get 50% OFF on personal WebStorm Licenses! Don't miss this great offer!
dotCover 1.1, dotTrace 4.5 Performance RC builds
18.05.2011
If you're using JetBrains code coverage and performance profiling tools, please download dotCover 1.1 RC and dotTrace 4.5 Performance RC, and help us make sure that these release candidates are ready for public releases scheduled for later this month.
Please use the dotCover issue tracker and dotTrace issue tracker to let us know if there are any rough edges left.
The JetBrains Team
IntelliJ IDEA 10.5 Released! Java 7 Support, New Features and New Pricing Included
16.05.2011
IntelliJ IDEA 10.5 is released, a free update worth a full major release, loaded with support for the latest technologies as well as other numerous improvements. IntelliJ IDEA 10.5 is a recommended update for everyone.
The key features of the release include:
- Full Java 7 Support
- Reworked UI for refactorings and Search/Replace, simplified code completion
- JavaScript Debugger in Google Chrome
- Groovy 1.8 and Spring 3.1 support
- JavaScript, Android and Flex enhancements
- Jetty integration
- XSLT2 support
Read more about what IntelliJ IDEA 10.5 brings.
Another key change is new lower prices for IntelliJ IDEA commercial licenses and personal licenses and upgrades. Check out the Buy & Upgrade page to see the difference.
Download IntelliJ IDEA 10.5 now!
The JetBrains Team
Free .NET Decompiler: Please Welcome JetBrains dotPeek!
11.05.2011
We've got some timely news for the .NET developer community!
Today we're opening an Early Access Program for our new product — free .NET decompiler called dotPeek.
dotPeek is able to decompile .NET 1.0-4.0 assemblies to C#; it provides syntax highlighting, code insight, complete keyboard support and a Visual Studio look-and-feel, and brings the full power of ReSharper-style navigation to the entire .NET community.
Get an overview of dotPeek functionality, and grab an early dotPeek build!
Let us know how it feels. We're looking forward to your feedback in order to make the new decompiler bug-free and comfortable in daily use by all .NET developers.
The JetBrains Team
IntelliJ IDEA 10.5 Release Candidate
5.05.2011
We have just released IntelliJ IDEA 10.5 Release Candidate for you to download and evaluate.
Version 10.5 adds support of JDK 7 language features as well as a substantial pack of improvements, performance optimizations and fixes. Read more about what IntelliJ IDEA 10.5 brings.
We encourage you to download the RC build, give it a test drive, and if you face any problems, please let us know so we can take action before the GA release!
The JetBrains Team
dotCover 1.1, dotTrace 4.5 Performance Beta 2
4.05.2011
Following user feedback, we're rolling out Beta 2 builds of our .NET tools: please download dotCover 1.1 Beta 2 and dotTrace 4.5 Performance Beta 2.
dotCover 1.1 Beta 2 introduces improvements in configuration through the console runner and better error handling while analyzing web applications. dotTrace 4.5 Performance Beta 2 eliminates loss of focus on applying zero folding, and includes constructors when you choose to open all public methods of a class in a separate tab. Both products additionally provide fixes on multiple bug reports.
Please note that dotTrace 4.5 Performance Beta 2 is released as a development build, which means it doesn't suppress exceptions.
The JetBrains Team
Free Upgrade to ReSharper 6 Starts Today
26.04.2011
Some great news for ReSharper users!
Any ReSharper 5 new or upgrade purchases made since today, April 26, 2011, qualify for a free upgrade to the upcoming ReSharper 6!
Learn what's cool in ReSharper 5 compared to previous versions.
Find out what's new in the upcoming ReSharper 6 and/or try a pre-release build.
Buy or upgrade to ReSharper 5 and get your upgrade to ReSharper 6 for free!
The JetBrains Team
Easter Sale from JetBrains.
20.04.2011
To
Early Access Program for AppCode IDE is now open!
08.04.2011
Without too many words, the Early Access Program for AppCode IDE is now open!
AppCode is a new Objective-C IDE built on IntelliJ platform.
We are very much looking forward to your feedback to help us create a tool you would enjoy using daily. This is why this early access program exists in the first place.
So, to share information and discuss the product development with you, we have a blog, a dedicated discussion forum and an issue tracker. Your questions, ideas, features requests and bug reports are welcome!
We kindly invite you to download the latest build from the AppCode EAP page, give it a try, and let us know what you think.
JetBrains AppCode team
Beta Releases of dotTrace 4.5 Performance, dotCover 1.1 are Available
07.04.2011
Following a month of early access, we're revealing beta builds of two of our .NET tools, namely dotTrace 4.5 Performance and dotCover 1.1.
dotTrace 4.5 Performance Beta improves snapshot tree representation; allows putting all public methods of a class to a dedicated tab; provides new options to estimate potential performance gains; previews source code of .NET Framework assemblies, and introduces a new floating license handling policy.
dotCover 1.1 Beta brings a wealth of improvements including advanced presentation of class members and on-the-fly filtering in the tree of coverage results; integration with NDepend and Visual Studio TeamCity plug-in; incremental update of coverage info when rerunning tests; HTML and JSON reporting; and per-solution coverage settings.
Learn more about these beta releases on the JetBrains .NET Tools Blog, and download them from dotTrace web site and dotCover web site.
The JetBrains Team
PyCharm 1.2, ready for Django 1.3
15.03.2011
PyCharm 1.2 is a compatibility update for our Python and Django IDE with the following major changes:
- Support for Python 3.2 and Django 1.3
- Python version compatibility inspection
- Syntax highlighting for .po files as initial i18n support
New 'Academic' license is now available for PyCharm. It allows students and teachers to use the IDE for non-commercial purposes, including education and research. Learn more and apply for Academic license.
For more details about the changes please read what's new, and download PyCharm 1.2.
Note: PyCharm 1.2 is a free update for everyone with a PyCharm license.
The JetBrains Team
WebStorm 2.0 - Master the Latest Web Standards!
24.02.2011
WebStorm 2.0 is finally out!
WebStorm 2.0 brings to your hands support for the latest standards of web development: feel the safety of "use strict" mode of JavaScript, enjoy richness of HTML5, manage your CSS with LESS or SASS.
The platform improvements concern HTML and JavaScript editing, all VCS integrations and other areas which are also important for WebStorm users.
The key changes include:
- ECMAScript 5 "use strict"
- Support for Leaner CSS (LESS) and SASS/SCSS extensions for CSS added
- Relax NG for XML
- Added Mercurial support
- Integration with issue trackers is extended with GitHub, PivotalTracker and RedMine
- Reworked UI for all supported platforms: Windows, Mac OS and Linux
For more details please read what's new, and download WebStorm 2.0.
Don't forget: if you purchased a WebStorm 1.0 license download version 2.0 and upgrade right now — for free.
The JetBrains Team
RubyMine gets updated to version 3.1
15.02.2011
RubyMine 3.1 is available for download with more than 200 issues resolved in the IntelliJ platform code since version 3.0, as well as many improvements in Ruby/Rails specific functionality.
The platform improvements concern HTML and JavaScript editing, all VCS integrations and other areas which are also important for RubyMine users.
Changes specific to Ruby/Rails developers include: new autocompletion which appears instantly as you type; new 'Introduce Parameter' refactoring; a structure view for Shoudla tests and for Test::Unit "test" closures; and the ability to build a Rails app into a .WAR file from the IDE.
Read more about what's new and download RubyMine 3.1.
The JetBrains Team
You Track. We Host!
14.02.2011
Would you like to use YouTrack with no installation or initial setup? That's great because we would like to announce that YouTrack Hosted Beta is YouTrack! The JetBrains Team
PhpStorm 2.0 - Take PHP by Storm!
14.02.2011
PhpStorm 2.0 is finally out!
The new release focuses on yet more intelligence, bringing zero-configuration debugger to the scene, adds PHP 5.3 namespaces and closures support, actively used in today's PHP development, and greatly improves code-analysis capabilities.
The key changes include:
- Complete PHP 5.3 support including namespaces and closures
- Zend Debugger support & zero-configuration debugging in all browsers
- Command line tool support including completion for Zend_Tool and Symfony
- Full-fledged editors for SQL queries with editable query results
- ECMAScript 5 "use strict"
- Added Mercurial support
- Reworked UI for all supported platforms: Windows, Mac OS and Linux
For more details please read what's new, and download the edition of your choice.
Don't forget: if you purchased an PhpStorm 1.0 license download version 2.0 and upgrade right now — for free.
The JetBrains Team
IntelliJ IDEA 10.0.2 Update Released
09.02.2011
IntelliJ IDEA 10.0.2 resolves almost 900 issues from our issue tracker. The update not only includes bug fixes in every part of the IDE but also adds some minor new features such as new code inspections, new option for folding multiple end-of-line comments in Java code, JavaDoc formatter improvements, etc. and also ensures compatibility with both Mac OS X 10.5 and 10.6.
A significant number of improvements was made in JavaScript, HTML and CSS editing functionality and also in PHP support. This predicts the nearing releases of PhpStorm 2.0 and WebStorm 2.0 IDEs.
If you have IntelliJ IDEA 10 installed, 10.0.2 is a recommended update. If you are still waiting to upgrade to verison 10, it is just about time!
See the full list of changes.
Download IntelliJ IDEA 10.0.2 now.
The JetBrains Team
ReSharper 5.1.3 IntelliSense bugfix update
01.02.2011
ReSharper 5.1.3 is available starting today.
This new minor release provides exactly one fix that prevents disabling Parameter Information and auto-completion when you're using native Visual Studio IntelliSense (as opposed to ReSharper IntelliSense).
If you have ReSharper 5.1.2 installed and you're struggling with IntelliSense turning off when you open files, please download ReSharper 5.1.3. Otherwise, you can skip this upgrade.
The JetBrains Team
PyCharm 1.1.1 bugfix update available
18.01.2011
PyCharm 1.1.1 is a mainenance update that fixes some issues reported recently. See the full list of fixes.
We recommend you update your PyCharm IDE.
Also, if you still have an unused PyCharm 50% discount coupon, you have 2 more weeks to use it, before January 31, 2011. Do not miss it!
We wish you a merry Christmas! JetBrains PyCharm Team
ReSharper 5.1.2 is Released
13.01.2011
JetBrains starts the New Year with the release of ReSharper 5.1.2.
This new minor release brings several improvements, including:
- Compatibility with Visual Studio native IntelliSense in ASP.NET MVC 3 Razor syntax web pages
- Improved IntelliSense in Watch tool windows
- Bug fixes related to saving of live templates
- Support for Test Projects (MSTest) re-targeted to .NET Framework 3.5
- Faster processing of large files (10,000 LOC or more)
Download ReSharper 5.1.2 and read more about this new release.
The JetBrains Team | http://www.jetbrains.com/allnews_2011.jsp | CC-MAIN-2014-15 | refinedweb | 5,327 | 60.11 |
Working With hashcode() and equals()
Need to implement your own custom equality-checking mechanism? Here are some tips for when you need to override hashcode() and equals().
Method Definition and Default Implementation.
The Contract Between equals() and hashcode()
The default implementation is not enough to satisfy business needs, especially in a large application that considers two objects equal when some business condition holds. In some business scenarios, developers provide their own implementation in order to enforce their own equality mechanism regardless of the memory addresses.
As per the Java documentation, developers should override both methods in order to achieve a fully working equality mechanism — it's not enough to just implement the equals() method.
If two objects are equal according to the equals(Object) method, then calling the hashcode() method on each of the two objects must produce the same integer result.
In the following sections, we provide several examples that show the importance of overriding both methods and the drawbacks of overriding equals() without hashcode().
Practical Example
We define a class called Student as the following:
package com.programmer.gate.beans;

public class Student {

    private int id;
    private String name;

    public Student(int id, String name) {
        this.name = name;
        this.id = id;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
For testing purposes, we define a main class HashcodeEquals that checks whether two instances of Student (who have the exact same attributes) are considered as equal.
public class HashcodeEquals {

    public static void main(String[] args) {
        Student alex1 = new Student(1, "Alex");
        Student alex2 = new Student(1, "Alex");

        System.out.println("alex1 hashcode = " + alex1.hashCode());
        System.out.println("alex2 hashcode = " + alex2.hashCode());
        System.out.println("Checking equality between alex1 and alex2 = " + alex1.equals(alex2));
    }
}
Output:
alex1 hashcode = 1852704110
alex2 hashcode = 2032578917
Checking equality between alex1 and alex2 = false
Although the two instances have exactly the same attribute values, they are stored in different memory locations. Hence, they are not considered equal as per the default implementation of equals(). The same applies for hashcode() — a random unique code is generated for each instance.
Overriding equals()
For business purposes, we consider that two students are equal if they have the same ID, so we override the equals() method and provide our own implementation as the following:
@Override
public boolean equals(Object obj) {
    if (obj == null)
        return false;

    if (!(obj instanceof Student))
        return false;

    if (obj == this)
        return true;

    return this.getId() == ((Student) obj).getId();
}
In the above implementation, we are saying that two students are equal if and only if they are stored in the same memory address OR they have the same ID. Now if we try to run HashcodeEquals, we will get the following output:
alex1 hashcode = 2032578917
alex2 hashcode = 1531485190
Checking equality between alex1 and alex2 = true
As you noticed, overriding equals() with our custom business logic forces Java to consider the ID attribute when comparing two Student objects.
equals() With ArrayList
A very popular usage of equals() is defining an array list of Student and searching for a particular student inside it. So we modified our testing class in order to achieve this.
import java.util.ArrayList;
import java.util.List;

public class HashcodeEquals {

    public static void main(String[] args) {
        Student alex = new Student(1, "Alex");

        List<Student> studentsLst = new ArrayList<Student>();
        studentsLst.add(alex);

        System.out.println("Arraylist size = " + studentsLst.size());
        System.out.println("Arraylist contains Alex = " + studentsLst.contains(new Student(1, "Alex")));
    }
}
After running the above test, we get the following output:
Arraylist size = 1
Arraylist contains Alex = true
Overriding hashcode()
Okay, so we override equals() and we get the expected behavior, even though the hash codes of the two objects are different. So, what's the purpose of overriding hashcode()?
equals() With HashSet
Let's consider a new test scenario. We want to store all the students in a HashSet, so we update HashcodeEquals as the following:
import java.util.HashSet;

public class HashcodeEquals {

    public static void main(String[] args) {
        Student alex1 = new Student(1, "Alex");
        Student alex2 = new Student(1, "Alex");

        HashSet<Student> students = new HashSet<Student>();
        students.add(alex1);
        students.add(alex2);

        System.out.println("HashSet size = " + students.size());
        System.out.println("HashSet contains Alex = " + students.contains(new Student(1, "Alex")));
    }
}
If we run the above test, we get the following output:
HashSet size = 2
HashSet contains Alex = false
WAIT!! We already overrode equals() and verified that alex1 and alex2 are equal, and we all know that HashSet stores unique objects, so why did it consider them different objects?
HashSet stores its elements in memory buckets, with each bucket linked to a particular hash code. When you call students.add(alex1), Java stores alex1 inside a bucket linked to the value of alex1.hashcode(). From then on, any element with the same hash code that also compares equal is treated as a duplicate of alex1 and is not added again. However, since alex2 has a different hash code, it is stored in a separate bucket and is considered a totally different object.
Now when HashSet searches for an element inside it, it first generates the element's hash code and looks for a bucket which corresponds to this hash code.
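The bucket-then-equals lookup can be made visible with a small self-contained sketch. The Tag class, its field, and its deliberately constant hash code are illustrative assumptions of mine, not code from the article: forcing every instance into one bucket shows that the set only falls back to equals() within a bucket.

```java
import java.util.HashSet;

// Illustrative class (not from the article): every instance returns the
// same hash code, so all instances land in one bucket and lookup must
// fall back to equals() within that bucket.
class Tag {
    final int id;
    Tag(int id) { this.id = id; }

    @Override public boolean equals(Object o) {
        return o instanceof Tag && ((Tag) o).id == id;
    }

    @Override public int hashCode() { return 1; } // one bucket for everything
}

public class OneBucketDemo {
    public static int sizeAfterDuplicates() {
        HashSet<Tag> tags = new HashSet<>();
        tags.add(new Tag(1));
        tags.add(new Tag(1)); // same bucket, equals() true -> rejected as duplicate
        tags.add(new Tag(2)); // same bucket, equals() false -> stored alongside
        return tags.size();
    }

    public static void main(String[] args) {
        System.out.println(sizeAfterDuplicates()); // 2
    }
}
```

A constant hash code is still correct because both add() and contains() consult equals() inside the bucket; it merely degrades performance to a linear scan of that single bucket.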
Here comes the importance of overriding hashcode(), so let's override it in Student and set it to be equal to the ID so that students who have the same ID are stored in the same bucket:
@Override
public int hashCode() {
    return id;
}
Now if we try to run the same test, we get the following output:
HashSet size = 1
HashSet contains Alex = true
See the magic of hashcode()! The two elements are now considered as equal and stored in the same memory bucket, so any time you call contains() and pass a student object holding the same hash code, the set will be able to find the element.
The same applies to HashMap, Hashtable, or any data structure that uses a hashing mechanism for storing elements.
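As a brief sketch of the HashMap case (the CourseKey class and its field are illustrative, not from the article): once both methods are overridden, a map can retrieve an entry through a different but equal key instance.

```java
import java.util.HashMap;

// Illustrative key type (not from the article): equal keys share both
// equals() and hashCode(), so lookups work across instances.
class CourseKey {
    final int id;
    CourseKey(int id) { this.id = id; }

    @Override public boolean equals(Object o) {
        return o instanceof CourseKey && ((CourseKey) o).id == id;
    }

    @Override public int hashCode() { return id; }
}

public class MapDemo {
    public static String lookup() {
        HashMap<CourseKey, String> names = new HashMap<>();
        names.put(new CourseKey(7), "Algorithms");
        // A *different* instance with the same id hashes to the same bucket
        // and compares equal, so the map finds the entry:
        return names.get(new CourseKey(7));
    }

    public static void main(String[] args) {
        System.out.println(lookup()); // Algorithms
    }
}
```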
Conclusion
In order to achieve a fully working custom equality mechanism, it is mandatory to override hashcode() each time you override equals(). Follow the tips below and you'll never have leaks in your custom equality mechanism:
- If two objects are equal, they MUST have the same hash code.
- If two objects have the same hash code, they are not necessarily equal; hash codes can collide, so equals() still decides equality within a bucket.
Published at DZone with permission of Hussein Terek, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Hamachi uses a server-assisted NAT traversal technique to establish tunnels between the nodes.
Arcade vpn how to connect
Hide My Ass Pro VPN is the most expensive service on this list. It has almost every feature that a great VPN service must have: it encrypts all connections and protects the device from malware, phishing, and spam sites.
An L2TP/IPsec VPN client is built in on Windows, Mac, iOS, and Android, and it is easier to configure than OpenVPN, so L2TP/IPsec is recommended before you try OpenVPN. However, some networks or firewalls block L2TP/IPsec packets; if L2TP/IPsec fails, try OpenVPN.
I. Why People Use a Proxy Server. A proxy server is a computer that acts as an intermediary between the user's computer and the Internet. It allows client computers to make indirect network connections to other network services. When a proxy server is used, client computers first connect to the proxy server.
1. Make sure you have removed all traces of the Cisco VPN client software before proceeding. 2. Run the Windows Fixup for DNE, then reboot. 3. Install the SonicWALL VPN Client (I'll explain why in a minute). 4. This upgrades the DNE (Deterministic Network Enhancer). Note: you can also run the Citrix DNE Update, which will do the same thing. 5.
Arcade vpn how to connect in India:
Interested in viewing USPTO, EPO, or WIPO patent images? CPC Lite provides evaluation access to the powerful advanced features of CPC View, including support for multi-page legacy document image formats such as TIFF, CALS, and JEDMICS. For a complete overview, check out m.
after providing the basic configuration, to create a Web Service arcade vpn how to connect Proxy with the wizard, you must define the basic configuration.
ProxyShell Hide IP Program This is both a powerful and easy to use program which is able to hide your identity with multiple customizable surfing rules and proxies. It provides fast and encrypted proxy IPs in the UK and the US. The software allows you.
best Download Manager - FlashGet arcade vpn how to connect English Home.even if you don t know a thing about proxy switching, zenmate arcade vpn how to connect for Firefox is a proxy switcher add-on that automates the process of switching.
this list is automatically downloaded and regularly updated when the Phishing Protection feature is enabled. Please note that at this time we are using a limited arcade vpn how to connect list to test the core Phishing Protection framework within the browser.loading.if you use OpenWeb, it will work with web browsers only. Its meant for fast web streaming and surfing. As well as Kill Switch. Astrill includes App Guard which can arcade vpn how to connect block traffic from specific apps before VPN is connected,
See also other FAQs: Is it possible to use TheGreenBow VPN Client through Microsoft ISA Server 20? Which port is needed by TheGreenBow VPN Client? UDP port 500 and UDP port 4500 must be open, and ESP protocol (protocol number 50) must be allowed.
Vpn service!
AWS Direct Connect vs VPN vs Direct Connect Gateway: the AWS Direct Connect Gateway is a new addition to the AWS connectivity space. I recently wrote about the AWS Direct Connect Gateway.
Secured import and export functions: import and export functions are available both through the GUI and through direct command line options, to allow IT managers to deploy VPN configurations securely.
to configure IKEv2 Profiles in OmniSecuR1, omniSecuR2#configure terminal OmniSecuR2(config crypto download cyber ghost apk arcade vpn how to connect to IKEv1 ISAKMP Profile.) use following commands.
benazir income support Card Tracking, bisp, payment, cNIC, case Management Information System and Cash Grant 2018 BISP Tracking Cash management and information arcade vpn how to connect system of Payment 2018 is available here. Now You can Track and check your Benazir Card Payment.all unencrypted (and normall SSL browsing,) this gives arcade vpn how to connect our mobile devices the benefit of some site filters provided by our firewall appliance (another 'how to' I have planned)). Etc.) traffic emanates only from our LAN through our network's (land-based/hard-wired)) router.watch breaking arcade vpn how to connect news videos,
m/watch? . | http://babyonboard.in/LRLWZ/UmPRZ/ZoeLZ/ZfolZ/one-stop-guide-to-organic-baby-formula/arcade-vpn-how-to-connect.html | CC-MAIN-2019-30 | refinedweb | 966 | 63.49 |
using System;

public class Mono {
    public static void Main(string[] args) {
        Console.WriteLine ("Hello Mono World");
    }
}
To compile, use csc:
csc hello.cs
The compiler will create “hello.exe”, which you can run using:
mono hello.exe
The program should run and output:
Hello Mono World
HTTPS connections
To make sure HTTPS connections work, run the following command to check whether you can connect to nuget.org:
csharp -e 'new System.Net.WebClient ().DownloadString ("")'
The program prints the website contents if everything works or throws an exception if it doesn’t.
WinForms Hello World
The following program tests writing a System.Windows.Forms application.
using System;
using System.Windows.Forms;

public class HelloWorld : Form {
    static public void Main () {
        Application.Run (new HelloWorld ());
    }

    public HelloWorld () {
        Text = "Hello Mono World";
    }
}
To compile, use csc with the -r option to tell the compiler to pull in the WinForms libraries:
csc hello.cs -r:System.Windows.Forms.dll
The compiler will create “hello.exe”, which you can run using:
mono hello.exe
NOTE: on macOS you'll have to wait around a minute the very first time you run this command. You also need to use mono32, since WinForms isn't supported on 64-bit yet.
ASP.NET Hello World
Create a text file with the name hello.aspx and the content:
<%@ Page Language="C#" %>
<html>
<body>
<asp:calendar runat="server">
</asp:calendar>
</body>
</html>
Then run the xsp4 command from that directory:
xsp4 --port 9000 | https://www.mono-project.com/docs/getting-started/mono-basics/ | CC-MAIN-2019-30 | refinedweb | 227 | 67.96 |
Hi,
I am building an ASP.NET application and I want to send email from it. Can you tell me how to send an email?
If you are still here
You can very easily send an email from ASP.NET
System.Web.Mail namespace is the first thing you need:
<%@ Import Namespace="System.Web.Mail" %>
To send an email you need to provide the sender, recipient, subject, and body, which are passed to SmtpMail.Send().
And here is an example:
string from = "sender@example.com";
string to = "recipient@example.com";
string subject = "Good day!";
string body = "Any news?";
SmtpMail.SmtpServer = "mail.example.com";
SmtpMail.Send(from, to, subject, body);
SmtpMail.SmtpServer specifies the mail server used to deliver the message.
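One caveat worth adding: System.Web.Mail was deprecated starting with .NET 2.0 in favor of System.Net.Mail. A rough equivalent using the newer namespace might look like this (the addresses and server name are placeholders, not real values):

```csharp
using System;
using System.Net.Mail;

class MailExample
{
    // Builds the message; delivery is a separate step that needs a real SMTP host.
    public static MailMessage Build(string from, string to,
                                    string subject, string body)
    {
        return new MailMessage(from, to, subject, body);
    }

    static void Main()
    {
        MailMessage message = Build("sender@example.com", "recipient@example.com",
                                    "Good day!", "Any news?");

        // Placeholder server name; substitute your own mail server.
        using (SmtpClient client = new SmtpClient("mail.example.com"))
        {
            client.Send(message);
        }
    }
}
```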
Thanks for your information.
A lot of fun! However, is there any way to reduce the amount of zedz that spawn?
Hi, take a look at the file "infected\popinfected.sqf".
lines 19 to 23 define the _probability of spawning, based on the house count of the zone.
in Aegia Marina filled at 10% I can get 30 to 70 spawned units ... :/
Hello, I found a little problem in the test mission: the zombies appear at the right time but then disappear from a certain area.
Is there a solution?
Thank you
Thank you for reporting this: it seems I forgot to put the airfield and Aigia Marina triggers on repeat mode (but not the one at the gas station; that fix will be part of the next update).
Hello, when I set a waypoint marker on the map (shift+click), in game I am teleported to this location. How can I stop this?
the popinfected.sqf is missing..... Causing issues. also the dead zombie bodies are not disappearing.....
Stupid question coming in, but how do I install this?
Do I just do the same thing as I would with maps/units, or do I do something else?
#include "infected\infectedsounds.hpp";
#include "infected\cfgfunctions.hpp";
null = this spawn INF_fnc_infecthim
null = [thistrigger, ["marker01"], 100, false, true] spawn INF_fnc_infectedzone;
null = [thistrigger, ["pop000"], 15, false] spawn INF_fnc_initHorde;
Write a program to sort in ascending and descending order without using a function
Hi, @There,
# Python 3 program to sort first
# k elements in increasing order
# and remaining n-k elements in
# decreasing order

# Function to sort the array
def printOrder(arr, n, k):
    len1 = k
    len2 = n - k
    arr1 = [0] * k
    arr2 = [0] * (n - k)

    # Store the k elements
    # in an array
    for i in range(k):
        arr1[i] = arr[i]

    # Store the remaining n-k
    # elements in an array
    for i in range(k, n):
        arr2[i - k] = arr[i]

    # sorting the array from
    # 0 to k-1 places
    arr1.sort()

    # sorting the array from
    # k to n places
    arr2.sort()

    # storing the values in the
    # final array arr
    for i in range(n):
        if i < k:
            arr[i] = arr1[i]
        else:
            arr[i] = arr2[len2 - 1]
            len2 -= 1

    # printing the array
    for i in range(n):
        print(arr[i], end=" ")

# Driver code
if __name__ == "__main__":
    arr = [5, 4, 6, 2, 1,
           3, 8, 9, -1]
    k = 4
    n = len(arr)
    printOrder(arr, n, k)

# This code is contributed
# by ChitraNayal
Output:
2 4 5 6 9 8 3 1 -1
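Since the question title says "without using [a] function", which likely means avoiding the built-in sort() as well, the same ordering can be produced with explicit selection-sort loops. A sketch (the function and variable names are mine, not from the original answer):

```python
def print_order_manual(arr, k):
    """Sort arr[:k] ascending and arr[k:] descending using only loops."""
    n = len(arr)

    # Selection sort the first k elements in ascending order.
    for i in range(k):
        m = i
        for j in range(i + 1, k):
            if arr[j] < arr[m]:
                m = j
        arr[i], arr[m] = arr[m], arr[i]

    # Selection sort the remaining n-k elements in descending order.
    for i in range(k, n):
        m = i
        for j in range(i + 1, n):
            if arr[j] > arr[m]:
                m = j
        arr[i], arr[m] = arr[m], arr[i]

    return arr

print(print_order_manual([5, 4, 6, 2, 1, 3, 8, 9, -1], 4))
# [2, 4, 5, 6, 9, 8, 3, 1, -1]
```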
03 January 2006 17:02 [Source: ICIS news]
HOUSTON (ICIS news)--BASF said on Tuesday it has made an all-cash proposal to buy all outstanding shares of common stock of US catalysis and surface finishing company Engelhard for $4.9bn (€4bn) or $37/share.
BASF said it has made efforts to negotiate a friendly transaction with Engelhard’s board of directors and management, but its efforts have been rebuffed.
By acquiring Engelhard, BASF said it would become a leading provider worldwide in the growing catalyst market. “Engelhard is an excellent enhancement for the BASF portfolio,” said BASF board chairman Jürgen Hambrecht.
The proposal represents a 23% premium above the 20 December closing price of Engelhard's stock of $30.05 and a 30% premium over Engelhard's 90-day average share price (VWAP) of $28.42 as of 20 December. This price also represents a premium to Engelhard's 2005 year-end closing price of $30.15 and to the four-year-high closing price of $32.49.
“We believe that our all-cash proposal provides a compelling opportunity to achieve significant value for Engelhard’s stockholders,” said Hambrecht. “We hope that Engelhard’s Board and management will recognise this opportunity as well as the benefits for Engelhard’s employees. We look forward to continuing and strengthening business relations with Engelhard’s customers.”
BASF said that its team is prepared to meet with the Engelhard team to see whether Engelhard can demonstrate value in addition to that discernible from the publicly available information that would allow BASF to increase the price that it proposes to pay for Engelhard by $1/share. Following Engelhard’s refusal to enter into a dialogue with BASF, BASF said it sent a letter on Tuesday to Engelhard’s Board of Directors outlining its proposal.
“Given your position, we see no alternative but to take our proposal directly to your stockholders,” BASF told the Engelhard board. BASF added: “Although we are being forced to take our proposal directly to your stockholders, it remains our strong preference to work with you towards a negotiated transaction.”
BASF said it expects to commence a tender offer for all of Engelhard's shares of common stock shortly, which will be filed with the Securities and Exchange Commission (SEC) together with all relevant tender offer documents.
Launching No-Touch Deployment Applications with Command Line Arguments
Chris Sells
Microsoft Corporation
May 23, 2003
Summary: Chris Sells shares some custom code that you can use on the client- and server-side to create no-touch deployment Windows Forms applications. (11 printed pages)
Download the winforms05152003.msi sample file.
Easily one of the questions I get asked the most once folks start using no-touch deployment (NTD) applications, that is Windows Forms applications that can launched with an URL, is how they can pass command line arguments to them. Apparently folks want to provide links on their Web pages with different launch options or they want to generate URLs with command line arguments on the fly based on the current user's session. For an example of the latter, consider Figure 1.
Figure 1. A Web page with a link to an NTD application
Figure 1 shows a Web page that's clearly been personalized for the person surfing to it. The link on the Web page is to a NTD application that looks like Figure 2 when it's first run.
Figure 2. Login form for the NTD application
Notice that this application starts up pre-populated with the user's name and ID. The way I did this is to pass arguments using an URL formatted like so:
When launching a managed application from the local hard drive, command line parameters are available from the string array passed to Main:
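A minimal sketch of such an entry point (the class name is illustrative):

```csharp
using System;

class App
{
    // args holds whatever follows the EXE name on the command line,
    // e.g. "app.exe one two" yields { "one", "two" }.
    static void Main(string[] args)
    {
        foreach (string arg in args)
        {
            Console.WriteLine(arg);
        }
    }
}
```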
Unfortunately, the support for pulling command line arguments out of the launching URL is new to the Microsoft .NET Framework 1.1 and nearly undocumented. Also, since the launching URL is used to create the path to .config file, full support for command line arguments requires some code running on the server-side as well.
Note There is a workaround to enable pulling command line arguments out of the launching URL that works in .NET 1.0, too, and it's covered later in this article.
Client-Size Support for NTD Arguments
To pull the arguments out of the launching URL first requires access to the launching URL from within the NTD application. For this, .NET 1.1 provides the APP_LAUNCH_URL data variable from the application domain:
The URL used to launch the NTD application, including arguments, is provided in full from APP_LAUNCH_URL. Unfortunately, APP_LAUNCH_URL isn't available in .NET 1.0. However, since the path to a NTD application's .config file is just the URL (including arguments) with ".config" tacked onto the end, you can pull out the equivalent of the APP_LAUNCH_URL in .NET 1.0. For example, launching an application from an URL ending in foo.exe?blah yields a .config file path ending in foo.exe?blah.config.
Since the application domain provides access to the .config file path, you can use that and a little string manipulation to get access to the NTD launching URL for both .NET 1.0 and .NET 1.1:
// Only works for .NET 1.1+
AppDomain domain = AppDomain.CurrentDomain;
object obj = domain.GetData("APP_LAUNCH_URL");
string appLaunchUrl = (obj != null ? obj.ToString() : "");

// Fall back for .NET 1.0
if( appLaunchUrl == string.Empty ) {
    const string ext = ".config";
    string configFile = domain.SetupInformation.ConfigurationFile;
    appLaunchUrl = configFile.Substring(0, configFile.Length - ext.Length);
}
Either way, both default intranet and Internet permissions for .NET 1.x allow access to command line argument information from partially trusted clients. Once you've got the full URL, it can be decoded and parsed for the arguments.
Security Considerations
Before diving into how to get at the arguments, I have to make a quick note about the security implications of handling command line arguments in an NTD application. When arguments are passed to an EXE already installed on the machine, that's done through the shell or the command line console by the user, so you can trust that things are likely to be okay. However, on the Web, users are tricked all the time into opening attachments in e-mail or clicking on links in the browser that are not in their best interest. For example, imagine HotMail as an NTD application whose launching URL accepted arguments naming an action and a forwarding address. In this case, we're passing arguments specifying our intention to forward our e-mail to somebody else without any confirmation UI. If this link was embedded on a Web page and marked "Free Chocolate!" can you honestly say that you wouldn't be tempted to click it yourself? Make sure that you carefully evaluate the possibilities exposed by your command line argument combinations with an eye towards evil-doers before you unleash it onto the wild and wooly world of the Web.
Decoding and Parsing the URL
Once you're happy that you've got the security considerations well in hand, you'll need to parse the arguments from the URL itself. Sometimes you'll need to encode special characters into an URL to pass them, just as if you were passing them to a server-side piece of code. For example, encoding the space in the uname argument requires encoding the ASCII value of the space character with the %dd syntax (a space becomes %20).
On the server-side, System.Web provides not one, but two classes that know how to decode an URL (HttpServerUtility and HttpUtility). Unfortunately, NTD clients from both the intranet and Internet zones are forbidden from using the UrlDecode method from either class. Luckily, a little creative reverse-engineering of the .NET Framework Class Library yields an UrlDecode that you can use (and is provided with the source of this article). Decoding the URL expands escapes like %20 back into their actual characters.
Once an URL has been decoded, you're ready to pull the actual arguments out, using the question mark as the marker for the argument part of the URL and the ampersand as the marker between arguments. Unfortunately, while System.Web provides a class to do this (HttpValueCollection), it's used internally to implement query string parsing and isn't available by itself. Fortunately, it's not hard to reverse-engineer this string parsing functionality for our use, splitting out the arguments into key/value pairs, and the sample provides the code for that, too.
Abstract Command Line Arguments
In the sample, both the URL decoding and the query string parsing are bundled together into the WebCommandLineHelper class.
The GetCommandLineArgs static method of the WebCommandLineHelper class takes as an input parameter the arguments that were retrieved from Main. Inside the GetCommandLineArgs method, the first thing that happens is a check to see whether the application was launched from an URL or not.
The appbase of a .NET assembly is where it's from; for example, a spot on the hard drive or an URL. If a site can be extracted from the appbase, the application was launched from an URL. If the application wasn't launched from an URL, GetCommandLineArgs just returns the arguments passed in that came from Main.
In this way, the application doesn't really care about where the arguments come from. All that matters is the arguments themselves. The WebCommandLineHelper unifies the two sources of arguments for your convenience.
Server-Side Support for NTD Arguments
Unfortunately, the client-side is not all there is to handling command line arguments for NTD applications. Since the URL, including arguments, is used to produce the path to the .config file, if you want to serve up a .config file, you need some server-side code to translate requests for .config files of the form foo.exe?blah.config into requests for foo.exe.config, with the arguments stripped away.
When IIS sees foo.exe?blah.config, it maps this to a request for a .exe file, not a .config file. That means that various parts of .NET that look for .config files, like the assembly resolution process, the custom configuration reader, and Web services will all be handed the .exe bits when they request the .config bits. While the former two stacks will fail silently as if the .config file is missing, the .NET Web services stack will throw an exception when it gets the .exe bits, even if there's no .config file to serve up in the first place. Clearly, this isn't what we're after.
Letting ASP.NET Handle .EXE Files
Serving up the appropriate .config file in the presence of URL arguments is a multi-step process. Step one is to hand off requests for .exe files to ASP.NET so that you can hook up some custom code to requests for .exe files. By default, all Web applications are configured to hand out .exe files directly. To map the .exe extension to ASP.NET, use the IIS configuration tool to adjust the Configuration of your Web application, as shown in Figure 3.
Figure 3. Mapping .exe files to ASP.NET in IIS
Handling .EXE Files
Once ASP.NET is handling .exe files, you can add an HTTP handler, which is how ASP.NET lets you write custom code to handle a request. A handler is simply an implementation of IHttpHandler:
public class ConfigFileHandler : IHttpHandler {
    public void ProcessRequest(HttpContext context) {
        // Just the .exe part in the file system
        string path = context.Request.PhysicalPath;

        // The entire request URL, including args and .config
        string url = context.Request.RawUrl;

        // If someone's asking for a .config, strip the arguments
        string ext = ".config";
        if( url.ToLower().EndsWith(ext) ) {
            context.Response.WriteFile(path + ext);
        }
        // If someone's asking for the .exe, send it
        else {
            context.Response.ContentType = "application/octet-stream";
            context.Response.WriteFile(path);
        }
    }

    public bool IsReusable {
        get { return true; }
    }
}
The IHttpHandler interface only has a single interesting method—ProcessRequest. This ProcessRequest method uses some context that a handler gets, like the physical path to the file being requested by the client and the raw URL of that same request, and checks if the URL ends with ".config". If it does, we compose a physical path of the .exe file being requested (remember, IIS and ASP.NET consider the .exe file as the file being requested, not the .config file), tack ".config" on the end, and serve up the appropriate .config file.
Because we've changed the mapping for .exe files to ASP.NET, this handler will be responsible for handing out both .exe files and .exe?blah.config files. If someone is requesting the .exe file, we set the MIME type appropriately and serve that file up.
In either case, if the file being requested is missing, we'll serve up a 404 error, which .NET can deal with.
Mapping .EXE Files to the Handler
Once you've mapped .EXE files to ASP.NET and implemented a .NET handler to serve up .exe and .exe?blah.config files, you still need to let ASP.NET know which handler to use for .exe files. You do this in the web.config file for your ASP.NET application:

<configuration>
  <system.web>
    <httpHandlers>
      <add verb="*" path="*.exe"
        type="Genghis.Web.ConfigFileHandler, ConfigHandler" />
    </httpHandlers>
  </system.web>
</configuration>
When ASP.NET sees this <add> element in the <httpHandlers> section, it'll map all HTTP requests for paths ending in .exe to the ConfigHandler assembly and the Genghis.Web.ConfigFileHandler class (the IHttpHandler implementation shown previously). ASP.NET will look for the web.config file in the root of the Web application and for the assembly in the bin directory at the root of the Web application, so be sure to put the files in the correct place.
Letting IIS and ASP.NET Hand Out .config Files
After routing requests for .exe files from IIS to ASP.NET, writing the handler to pass back .config files, and routing ASP.NET requests for .exe files to that handler, there remain two additional configuration steps. The first is because IIS doesn't allow .config files to go out the door unless anonymous access is enabled. You'll need to check the Anonymous access option in the Authentication Methods dialog of your IIS Web application's Directory Security, as shown in Figure 4.
Figure 4. Enabling anonymous access to a Web application in IIS to allow .config files to be served
And, while ASP.NET is properly letting us hand out .exe files and .exe?args.config files, by default, we're not allowed to hand out .config files. In other words, of the following four URL types, we're only handing out the first three with the current configuration:

- foo.exe
- foo.exe?args
- foo.exe?args.config
- foo.exe.config
The first three URL types show up in our handler. The fourth URL type is important if you launch your NTD application without any URL arguments, but still need the .config file. The reason that ASP.NET doesn't hand out *.config files by default is because you could be keeping all kinds of sensitive information in your web.config file. To enable *.config files to go out but still keep your web.config files protected, add the following entries:
<configuration>
  <system.web>
    <httpHandlers>
      <!-- map .exe and .exe?blah.config files to our handler -->
      <add verb="*" path="*.exe"
        type="Genghis.Web.ConfigFileHandler, ConfigHandler" />

      <!-- allow .config files but disable web.config files -->
      <remove verb="*" path="*.config" />
      <add verb="*" path="web.config"
        type="System.Web.HttpForbiddenHandler"/>
    </httpHandlers>
  </system.web>
</configuration>
In this case, ASP.NET has .config files mapped to the HttpForbiddenHandler, which ensures that they're not handed back to the client. We remove that handler mapping so that .config files are allowed, but then add it back again for just web.config files so that those remain protected.
Where Are We
It's probably pretty clear by now that while the .NET Framework 1.1 provides access to the launching URL of a NTD application, there's no real support built in to handle URL arguments when launching NTD applications. However, with some creative coding, you can still do it:
- On the client, use APP_LAUNCH_URL or ConfigurationFile to get the launching URL, decoding and parsing it as necessary.
- If you're using ASP.NET on the server:
- Configure IIS to route requests for .exe files to ASP.NET.
- Implement a handler to serve up .exe and .exe?blah.config files.
- Map the handler to .exe files for your ASP.NET application.
- Configure IIS for anonymous access to allow .config files to be served.
- Remove the ASP.NET handler mapping that forbids all .config files.
- Add the ASP.NET handler to forbid web.config files.
So, while arguments aren't really supported as thoroughly as we'd like, .NET is flexible enough that with some cleverness on the client side and on the server side, you can absolutely pass arguments to NTD applications anyway.
References
- .NET Zero Deployment: Security and Versioning Models in the Windows Forms Engine Help You Create and Deploy Smart Clients, MSDN Magazine, July, 2002
- Essential ASP.NET with Examples in C#, Fritz Onion, Addison-Wesley, 2003
Note Some of this material is adapted from the forthcoming Addison-Wesley title Windows Forms Programming in C# by Chris Sells (0321116208).
Wrapper class
Subrata Saha
Greenhorn
Joined: Jun 10, 2005
Posts: 19
posted
Mar 16, 2006 03:58:00
Well, I do understand what a wrapper is, i.e. just the OO view:
int (primitive) --> Integer (wrapper)
But I heard somebody say to build a wrapper class and then do...
So what exactly is this wrapper class?
satishkumar janakiraman
Ranch Hand
Joined: May 03, 2004
Posts: 334
posted
Mar 16, 2006 04:18:00
Hi,
Wrapper classes allow primitives to be accessed as objects. For example, int and char are primitives, and they are simple types.
I hope this will help you.
bye for now
sat
Charles Lyons
Author
Ranch Hand
Joined: Mar 27, 2003
Posts: 836
posted
Mar 16, 2006 04:38:00
If I understand your question correctly, you are asking what the definition of a wrapper class is?
Essentially, a wrapper class encapsulates (or "wraps") some form of related data. The most general wrapper implements an interface (or sometimes extends a class), and then encapsulates another instance of that class, in order to perhaps moderate or intercept calls to the object.
The best way for me to explain this is to give you a concrete example - this is taken from J2EE Web, but you don't need to know any J2EE to understand it.
In J2EE Web, we have a HttpServletResponse object which (in brief) is used to write data back, via a stream, to the client from the Web server. But sometimes we don't want to write data back directly; sometimes it's appropriate to intercept that data and then decide later whether we want to send it to the client or not. In this case we use a HttpServletResponseWrapper:
public class HttpServletResponseWrapper implements HttpServletResponse {
    private HttpServletResponse wrapped;

    public HttpServletResponseWrapper(HttpServletResponse wrap) {
        this.wrapped = wrap;
    }

    ...
    // Implement all other methods of HttpServletResponse interface.
    // These all delegate to the enclosed 'wrapped' object. For example:
    public PrintWriter getWriter() throws IOException {
        return wrapped.getWriter();
    }
    ...
}
Notice that the wrapper is a HttpServletResponse. All our HttpServletResponseWrapper actually does is delegate everything to the enclosed 'wrapped' object - so our wrapper 'looks' just like the enclosed object. Alone this is pretty useless; but if we want to prevent content being written back directly, we can override one or more methods - for example, overriding getWriter() allows us to use an intermediate buffer between the server and client...
public class HttpServletResponseWrapper implements HttpServletResponse {
    private HttpServletResponse wrapped;

    public HttpServletResponseWrapper(HttpServletResponse wrap) {
        this.wrapped = wrap;
    }

    ...
    // Implement all other methods of HttpServletResponse interface.
    public PrintWriter getWriter() throws IOException {
        /* Use some new buffer; this method doesn't delegate to 'wrapped' */
        return new PrintWriter(new ByteArrayOutputStream());
    }
    ...
}
We've now got an object which looks almost like the enclosed 'wrapped' instance, but makes a few minor changes and overrides a few methods - but in essence, it shares all the same properties (one might be tempted to say it is almost the same object), and that's what a wrapper is all about - having the same object/properties at heart, but making some exterior modifications.
The primitive wrappers in java.lang are also doing just this, except they enclose/wrap a primitive and not an object. They then go on to add extra useful definitions to methods like toString() and equals(), and add the various xxxValue() methods (I've ignored the static methods, which don't technically form part of a wrapper class); these all add extra detail to the primitives, or modify the way in which the wrapped primitive behaves.
This is the correct way to think about wrappers - this is a well-known design strategy known as the Wrapper or Decorator pattern. Try a search for "Wrapper pattern".
The "Adapter pattern" is similar - in this case, we have a class (ClassA) which provides all the required functionality for some particular purpose, but doesn't implement the correct interface (InterfaceB) to actually be used or passed to any methods as arguments. In this case, all we do is:
public class InterfaceBWrapper implements InterfaceB {
    private ClassA wrapped;

    public InterfaceBWrapper(ClassA wrap) {
        this.wrapped = wrap;
    }

    // implement all the methods of InterfaceB, delegating to wrapped
    // where appropriate
}
instantiating a new wrapper for each instance of ClassA, and passing that wrapper to the methods which take InterfaceB as an argument...
Let me know if that has helped.
Charles Lyons (SCJP 1.4, April 2003; SCJP 5, Dec 2006; SCWCD 1.4b, April 2004)
Author of OCEJWCD Study Companion for Oracle Exam 1Z0-899 (ISBN 0955160340)
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Mar 16, 2006 10:47:00
Wrapper is a pretty generic term ... for specifics look up the Decorator, Adapter and Bridge patterns for a start. Structurally they look a lot alike but the intent and purpose is different in each. If you read up on those and still have questions, scroll on down to the UML, OO etc. forum and ask away.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Introduction: The Problem
Deep learning sometimes seems like sorcery. Its state-of-the-art applications are at times delightful and at times disturbing. The tools that achieve these results are, amazingly, mostly open source, and can work their magic on powerful hardware available to rent by the hour in the cloud.
It’s no wonder that companies are eager to apply deep learning for more prosaic business problems like better churn prediction, image curation, chatbots, time series analysis and more. Just because the tools are readily available doesn’t mean they’re easy to use well. Even choosing the right architecture, layers and activations is more art than science.
This blog won’t examine how to tune a deep learning architecture for accuracy. That process does, however, require training lots of models in a process of trial and error. This leads to a more immediate issue: scaling up the performance of deep learning training.
Tuning deep learning training doesn't work like tuning an ETL job. It requires a large amount of compute from specialized hardware, and everyone eventually finds deep learning training "too slow". Too often, when trying to scale up, users reach for solutions that may be overkill, expensive, and no faster, while overlooking basic errors that hurt performance.
This blog will instead walk through basic steps to avoid common performance pitfalls in training, and then the right steps, in order, to scale up by applying more complex tooling and more hardware. Hopefully, you will find your modeling job can move along much faster without reaching immediately for a cluster of extra GPUs.
A Simple Classification Task
Because the focus here is not on the learning problem per se, the following examples will develop a simple data set and problem to solve: classifying each of the roughly 30,000 images in the Caltech 256 dataset into one of 257 (yes, 257) categories.
The data consists of JPEG files. These need to be resized to common dimensions, 299×299, to match the pre-trained base layer described below. The images are then written to Parquet files with labels to facilitate larger-scale training, described later. This can be accomplished with the ‘binary’ files data source in Apache Spark. See the accompanying notebook for full source code, but these are the highlights:
img_size = 299

def scale_image(image_bytes):
  image = Image.open(io.BytesIO(image_bytes)).convert('RGB')
  image.thumbnail((img_size, img_size), Image.ANTIALIAS)
  x, y = image.size
  with_bg = Image.new('RGB', (img_size, img_size), (255, 255, 255))
  with_bg.paste(image, box=((img_size - x) // 2, (img_size - y) // 2))
  return with_bg.tobytes()

...

raw_image_df = spark.read.format("binaryFile").\
  option("pathGlobFilter", "*.jpg").option("recursiveFileLookup", "true").\
  load(caltech_256_path).repartition(64)

image_df = raw_image_df.select(
  file_to_label_udf("path").alias("label"),
  scale_image_udf("content").alias("image")).cache()

(train_image_df, test_image_df) = image_df.randomSplit([0.9, 0.1], seed=42)

...

train_image_df.write.option("parquet.block.size", 1024 * 1024).\
  parquet(table_path_base + "train")
test_image_df.write.option("parquet.block.size", 1024 * 1024).\
  parquet(table_path_base + "test")
It’s also possible to use Spark’s built-in ‘image’ data source type to read these as well.
Keras, the popular high-level front end for Tensorflow, can describe a straightforward deep learning model to classify the images. There’s no need to build an image classifier from scratch. Instead, this example reuses the pretrained Xception model built into Keras and adds a dense layer on top to classify. (Note that this example uses Keras as included with Tensorflow 1.13.1, in tensorflow.keras, rather than standalone Keras 2.2.4). The pretrained layers themselves will not be trained further. Take that as step #0: use transfer learning and pretrained models when working with images!
Step #1: Use a GPU
Almost the only situation where it makes sense to train a deep learning model on a CPU is when there are no GPUs available. When working in the cloud, on a platform like Databricks, it’s trivial to provision a machine with a GPU with all the drivers and libraries ready. This example will jump straight into training this model on a single K80 GPU.
This first pass will just load a 10% sample of the data from Parquet as a pandas DataFrame, reshape the image data, and train in memory on 90% of that sample. Here, training just runs for 60 epochs on a small batch size. Small side tip: when using a pretrained network, it's essential to normalize the image values to the range the network expects. Here, that's [-1,1], and Keras provides a preprocess_input function to do this.
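As a quick sanity check of that range: the "tf"-mode arithmetic that preprocess_input applies for Xception is simply x / 127.5 - 1. A plain-Python stand-in for the real Keras helper:

```python
def scale_to_xception_range(pixels):
    # Map raw [0, 255] pixel values into [-1, 1] -- the same arithmetic
    # Keras' preprocess_input performs for Xception ('tf' mode): x / 127.5 - 1
    return [p / 127.5 - 1.0 for p in pixels]

print(scale_to_xception_range([0, 127.5, 255]))  # [-1.0, 0.0, 1.0]
```

Feeding un-normalized [0, 255] pixels into a network pretrained on the [-1, 1] range is a common silent accuracy killer.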
(Note: to run this example on Databricks, select the 5.5 ML Runtime or later with GPU support, and choose a driver instance type with a single GPU. Because the example also uses Spark, you will have to also provision 1 worker.)
df_pd = spark.read.parquet("...").sample(0.1, seed=42).toPandas()

X_raw = df_pd["image"].values
X = np.array(
  [preprocess_input(
    np.frombuffer(X_raw[i], dtype=np.uint8).reshape((img_size, img_size, 3)))
   for i in range(len(X_raw))])
y = df_pd["label"].values - 1  # -1 because labels are 1-based
X_train, X_test, y_train, y_test = \
  train_test_split(X, y, test_size=0.1, random_state=42)

...

def build_model(dropout=None):
  model = Sequential()
  xception = Xception(include_top=False,
                      input_shape=(img_size, img_size, 3),
                      pooling='avg')
  for layer in xception.layers:
    layer.trainable = False
  model.add(xception)
  if dropout:
    model.add(Dropout(dropout))
  model.add(Dense(257, activation='softmax'))
  return model

model = build_model()
model.compile(optimizer=Nadam(lr=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=2, epochs=60, verbose=2)
model.evaluate(X_test, y_test)

...

Epoch 58/60
 - 65s - loss: 0.2787 - acc: 0.9280
Epoch 59/60
 - 65s - loss: 0.3425 - acc: 0.9106
Epoch 60/60
 - 65s - loss: 0.3525 - acc: 0.9173
...
[1.913768016828665, 0.7597173]
The results look good: 91.7% accuracy! However, there's an important flaw. The final evaluation on the held-out 10% validation data shows that true accuracy is more like 76%. Actually, the model has overfitted. That's not good, but worse, it means that most of the time it spent training was spent making it a little worse. It should have ended when accuracy on the validation data stopped decreasing. Not only would that have left a better model, it would have completed faster.

Step #2: Use Early Stopping

Keras provides a built-in way to do exactly that: the EarlyStopping callback. Using it means passing the validation data into the training process for evaluation on every epoch, and halting once the monitored metric stops improving:
...

early_stopping = EarlyStopping(patience=3, monitor='val_acc',
                               min_delta=0.001, restore_best_weights=True,
                               verbose=1)
model.fit(X_train, y_train, batch_size=2, epochs=60, verbose=2,
          validation_data=(X_test, y_test), callbacks=[early_stopping])
model.evaluate(X_test, y_test)

...

Epoch 12/60
 - 74s - loss: 0.9468 - acc: 0.7689 - val_loss: 1.2728 - val_acc: 0.7597
Epoch 13/60
 - 75s - loss: 0.8886 - acc: 0.7795 - val_loss: 1.4035 - val_acc: 0.7456
Epoch 14/60
Restoring model weights from the end of the best epoch.
 - 80s - loss: 0.8391 - acc: 0.7870 - val_loss: 1.4467 - val_acc: 0.7420
Epoch 00014: early stopping
...
[1.3035458562230895, 0.7597173]
Now, training stops in 14 epochs, not 60, and 18 minutes. Each epoch took a little longer (75s vs 65s) because of the evaluation of the validation data. Accuracy is better too, at 76.7%.
With early stopping, note that the number of epochs passed to fit() only matters as a limit on the maximum number of epochs that will run. It can be set to a large value. This is the first of a couple of observations here that suggest the same thing: epochs don't really matter as a unit of training. They're just a number of batches of data that constitute the whole input to training. But training means passing over the data in batches repeatedly until the model is trained enough. How many epochs that represents isn't directly important. An epoch is still useful as a point of comparison for time taken to train per amount of data, though.
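To make that concrete, an "epoch" is just a count of batches that covers the data once, which is exactly how steps_per_epoch is computed in the Petastorm-based training code later in this article (the numbers here are only illustrative):

```python
def steps_per_epoch(num_examples, batch_size):
    # An "epoch" is nothing more than this many batches; the training loop
    # consumes a count of steps, not the concept of an epoch
    return num_examples // batch_size

# Roughly 30,000 Caltech 256 images at batch size 16:
print(steps_per_epoch(30000, 16))  # 1875
```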
Step #3: Max Out GPU with Larger Batch Sizes
In Databricks, cluster metrics are exposed through a Ganglia-based UI. This shows GPU utilization during training. Monitoring utilization is important to tuning as it can suggest bottlenecks. Here, the GPU is pretty well used, at about 90%.
100% is cooler than 90%. The batch size of 2 is small, and isn’t keeping the GPU busy enough during processing. Increasing the batch size would increase that utilization. The goal isn’t only to make the GPU busier, but to benefit from the extra work. Bigger batches improve how well each batch updates the model (up to a point) with more accurate gradients. That in turn can allow training to use a higher learning rate, and more quickly reach the point where the model stops improving.
Or, with extra capacity, it’s possible to add complexity to the network architecture itself to take advantage of that. This example doesn’t intend to explore tuning the architecture, but will try adding some dropout to decrease this network’s tendency to overfit.
model = build_model(dropout=0.5)
model.compile(optimizer=Nadam(lr=0.004),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=16, epochs=30, verbose=2,
          validation_data=(X_test, y_test), callbacks=[early_stopping])

...

Epoch 6/30
 - 56s - loss: 0.1487 - acc: 0.9583 - val_loss: 1.1105 - val_acc: 0.7633
Epoch 7/30
 - 56s - loss: 0.1022 - acc: 0.9717 - val_loss: 1.2128 - val_acc: 0.7456
Epoch 8/30
 - 56s - loss: 0.0853 - acc: 0.9744 - val_loss: 1.2004 - val_acc: 0.7597
Epoch 9/30
Restoring model weights from the end of the best epoch.
 - 62s - loss: 0.0811 - acc: 0.9815 - val_loss: 1.2424 - val_acc: 0.7350
Epoch 00009: early stopping
With a larger batch size of 16 instead of 2, and learning rate of 0.004 instead of 0.001, the GPU crunches through epochs in under 60s instead of 75s. The model reaches about the same accuracy (76.3%) in only 9 epochs. Total train time was just 9 minutes, much better than 65.
It’s all too easy to increase the learning rate too far, in which case training accuracy will be poor and stay poor. When increasing the batch size by 8x, it’s typically advisable to increase learning rate by at most 8x. Some research suggests that when the batch size increases by N, the learning rate can scale by about sqrt(N).
Note that there is some randomness inherent in the training process, as inputs are shuffled by Keras. Accuracy fluctuates mostly up but sometimes down over time, and coupled with early stopping, training might terminate earlier or later depending on the order the data is encountered. To even this out, the ‘patience’ of EarlyStopping can be increased at the cost of extra training at the end.
Step #4: Use Petastorm and /dbfs/ml to Access Large Data
Training above used just a 10% sample of the data, and the tips above helped bring training time down by adopting a few best practices. The next step, of course, is to train on all of the data. This should help achieve higher accuracy, but means more data will have to be processed too. The full data set is many gigabytes, which could still fit in memory, but for purposes here, let’s pretend it wouldn’t. Data needs to be loaded efficiently in chunks into memory during training with a different approach.
Fortunately, the Petastorm library from Uber is designed to feed Parquet-based data into Tensorflow (or Keras) training in this way. It can be applied by adapting the preprocessing and training code to create Tensorflow Datasets, rather than pandas DataFrames, for training. Datasets here act like infinite iterators over the data, which means steps_per_epoch is now defined to specify how many batches make an epoch. This underscores how an ‘epoch’ is somewhat arbitrary.
It’s also common to checkpoint model training progress in long-running training jobs, to recover from failures during training. This is also added as a callback.
(Note: To run this example, attach the petastorm library to your cluster.)
path_base = "/dbfs/.../"
checkpoint_path = path_base + "checkpoint"
table_path_base = path_base + "caltech_256_image/"
table_path_base_file = "file:" + table_path_base

train_size = spark.read.parquet(table_path_base_file + "train").count()
test_size = spark.read.parquet(table_path_base_file + "test").count()

# Workaround for an Arrow issue reading these Parquet files via Petastorm
underscore_files = [f for f in (os.listdir(table_path_base + "train") +
                                os.listdir(table_path_base + "test"))
                    if f.startswith("_")]
pq.EXCLUDED_PARQUET_PATHS.update(underscore_files)

img_size = 299

def transform_reader(reader, batch_size):
  def transform_input(x):
    img_bytes = tf.reshape(decode_raw(x.image, tf.uint8),
                           (-1, img_size, img_size, 3))
    inputs = preprocess_input(tf.cast(img_bytes, tf.float32))
    outputs = x.label - 1
    return (inputs, outputs)
  return make_petastorm_dataset(reader).map(transform_input).\
    apply(unbatch()).shuffle(400, seed=42).\
    batch(batch_size, drop_remainder=True)
The method above reimplements some of the preprocessing from earlier code in terms of Tensorflow’s transformation APIs. Note that Petastorm produces Datasets that deliver data in batches that depends entirely on the Parquet files’ row group size. To control the batch size for training, it’s necessary to use Tensorflow’s unbatch() and batch() operations to re-batch the data into the right size. Also, note the small workaround that’s currently necessary to avoid a problem in reading Parquet files via Arrow in Petastorm.
batch_size = 16
with make_batch_reader(table_path_base_file + "train", num_epochs=None) as train_reader:
  with make_batch_reader(table_path_base_file + "test", num_epochs=None) as test_reader:
    train_dataset = transform_reader(train_reader, batch_size)
    test_dataset = transform_reader(test_reader, batch_size)

    model = build_model(dropout=0.5)
    model.compile(optimizer=Nadam(lr=0.004),
                  loss='sparse_categorical_crossentropy',
                  metrics=['acc'])

    early_stopping = EarlyStopping(patience=3, monitor='val_acc',
                                   min_delta=0.001, restore_best_weights=True,
                                   verbose=1)
    # Note: you must set save_weights_only=True to avoid problems
    # with hdf5 files and /dbfs/ml
    checkpoint = ModelCheckpoint(checkpoint_path + "/checkpoint-{epoch}.ckpt",
                                 save_weights_only=True, verbose=1)

    model.fit(train_dataset, epochs=30,
              steps_per_epoch=(train_size // batch_size),
              validation_data=test_dataset,
              validation_steps=(test_size // batch_size),
              verbose=2, callbacks=[early_stopping, checkpoint])
More asides: for technical reasons, currently ModelCheckpoint must set save_weights_only=True when using /dbfs. It also appears necessary to use different checkpoint paths per epoch; use a path pattern that includes {epoch}. Now run:
Epoch 8/30 Epoch 00008: saving model to /dbfs/tmp/sean.owen/binary/checkpoint/checkpoint-8.ckpt - 682s - loss: 1.0154 - acc: 0.8336 - val_loss: 1.2391 - val_acc: 0.8301 Epoch 9/30 Epoch 00009: saving model to /dbfs/tmp/sean.owen/binary/checkpoint/checkpoint-9.ckpt. - 684s - loss: 1.0048 - acc: 0.8397 - val_loss: 1.2900 - val_acc: 0.8275 Epoch 10/30 Epoch 00010: saving model to /dbfs/tmp/sean.owen/binary/checkpoint/checkpoint-10.ckpt - 689s - loss: 1.0033 - acc: 0.8422 - val_loss: 1.3706 - val_acc: 0.8225 Epoch 11/30 Restoring model weights from the end of the best epoch. Epoch 00011: saving model to /dbfs/tmp/sean.owen/binary/checkpoint/checkpoint-11.ckpt - 687s - loss: 0.9800 - acc: 0.8503 - val_loss: 1.3837 - val_acc: 0.8225 Epoch 00011: early stopping
Epoch times are almost 11x longer, but recall that an epoch here is now a full pass over the training data, not a 10% sample. The extra overhead comes from the I/O in reading data from Parquet in cloud storage, and writing checkpoint files. The GPU utilization graph manifests this in “spiky” utilization of the GPU:
The upside? Accuracy is significantly better at 83%. The cost was much longer training time: 126 minutes instead of 9. For many applications, this could be well worth it.
Databricks provides an optimized implementation of the file system mount that makes the Parquet files appear as local files to training. Accessing them via /dbfs/ml/… instead of /dbfs/… can improve I/O performance. Also, Petastorm itself can cache data on local disks to avoid re-reading data from cloud storage.
path_base = "/dbfs/ml/..." checkpoint_path = path_base + "checkpoint" table_path_base = path_base + "caltech_256_image/" table_path_base_file = "file:" + table_path_base def make_caching_reader(suffix, cur_shard=None, shard_count=None): return make_batch_reader(table_path_base_file + suffix, num_epochs=None, cur_shard=cur_shard, shard_count=shard_count, cache_type='local-disk', cache_location="/tmp/" + suffix, cache_size_limit=20000000000, cache_row_size_estimate=img_size * img_size * 3)
The rest of the code is as above, just using make_caching_reader in place of make_reader.
Epoch 6/30 Epoch 00006: saving model to /dbfs/ml/tmp/sean.owen/binary/checkpoint/checkpoint-6.ckpt - 638s - loss: 1.0221 - acc: 0.8252 - val_loss: 1.1612 - val_acc: 0.8285 ... Epoch 00009: early stopping
The training time decreased from about 126 minutes to 96 minutes for roughly the same result. That’s still more than 10x the runtime for 10x the data, but not bad for a 7% increase in accuracy.
Step #5: Use Multiple GPUs
Still want to go faster, and have some budget? It’s easy to try a bigger GPU like a V100 and retune appropriately. However, at some point, scaling up means multiple GPUs. Instances with, for example, eight K80 GPUs are readily available in the cloud. Keras provides a simple utility function called multi_gpu_model that can parallelize training across multiple GPUs. It’s just a one-line code change:
num_gpus = 8 ... model = multi_gpu_model(model, gpus=num_gpus)
(Note: to run this example, choose a driver instance type with 8 GPUs.)
The modification was easy, but, to cut to the chase without repeating the training output: per-epoch time becomes 270s instead of 630s. That’s not 8x faster, not even 3x faster. Each of the 8 GPUs is only processing 1/8th of each batch of 16 inputs, so each is again effectively processing just 2 per batch. As above, it’s possible to increase the batch size by 8x to compensate, to 256, and further increase the learning rate to 0.016. (See the accompanying notebook for full code listings.)
It reveals that training is faster, at 135s per epoch. The speedup is better, but still not 8x. Accuracy is steady at around 83%, so this still progresses towards faster training. The Keras implementation is simple, but not optimal. GPU utilization remains spiky because the GPUs idle while Keras combines partial gradients in a straightforward but slow way.
Horovod is another project from Uber that helps scale deep learning training across not just multiple GPUs on one machine, but GPUs across many machines, and with great efficiency. While it’s often associated with training across multiple machines, that’s not actually the next step in scaling up. It can help this current multi-GPU setup. All else equal, it’ll be more efficient to utilize 8 GPUs connected to the same VM than spread across the network.
It requires a different modification to the code, which uses the HorovodRunner utility from Databricks to integrate Horovod with Spark:
batch_size = 32 num_gpus = 8 def train_hvd(): hvd.init() config = tf.ConfigProto() config.gpu_options.allow_growth = True config.gpu_options.visible_device_list = str(hvd.local_rank()) K.set_session(tf.Session(config=config)) pq.EXCLUDED_PARQUET_PATHS.update(underscore_files) with make_caching_reader("train", cur_shard=hvd.rank(), shard_count=hvd.size()) as train_reader: with make_caching_reader("test", cur_shard=hvd.rank(), shard_count=hvd.size()) as test_reader: train_dataset = transform_reader(train_reader, batch_size) test_dataset = transform_reader(test_reader, batch_size) model = build_model(dropout=0.5) optimizer = Nadam(lr=0.016) optimizer = hvd.DistributedOptimizer(optimizer) model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['acc']) callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0), hvd.callbacks.MetricAverageCallback(), EarlyStopping(patience=3, monitor='val_acc', min_delta=0.001, restore_best_weights=True, verbose=(1 if hvd.rank() == 0 else 0))] if hvd.rank() == 0: callbacks.append(ModelCheckpoint( checkpoint_path + "/checkpoint-{epoch}.ckpt", save_weights_only=True, verbose=1)) model.fit(train_dataset, epochs=30, steps_per_epoch=(train_size // (batch_size * num_gpus)), validation_data=test_dataset, validation_steps=(test_size // (batch_size * num_gpus)), verbose=(2 if hvd.rank() == 0 else 0), callbacks=callbacks) hr = HorovodRunner(np=-num_gpus) hr.run(train_hvd)
Again a few notes:
- The Arrow workaround must be repeated in the Horovod training function
- Use hvd.callbacks.MetricAverageCallback to correctly average validation metrics
- Make sure to only run checkpoint callbacks on one worker (rank 0)
- Set HorovodRunner’s np= argument to minus the number of GPUs to use, when local
- Batch size here is now per GPU, not overall. Note the different computation in steps_per_epoch
The output from the training is, well, noisy and so won’t be copied here in full. Total training time has come down to about 12.6 minutes, from 96, or almost 7.6x, which is satisfyingly close to the maximum possible 8x speedup! Accuracy is up to 83.5%. Compare to 9 minutes and 76% accuracy on one GPU.
Step #6: Use Horovod Across Multiple Machines
Sometimes, 8 or even 16 GPUs just isn’t enough, and that’s the most you can get on one machine today. Or, sometimes it can be cheaper to provision GPUs across many smaller machines to take advantage of varying prices per machine type in the cloud. The same Horovod example above can run on a cluster of 8 1-GPU machines instead of 1 8-GPU machine with just a single line of change. HorovodRunner manages the distributed work of Horovod on the Spark cluster by using Spark 2.4’s barrier mode support.
num_gpus = 8 ... hr = HorovodRunner(np=num_gpus)
(Note: to run this example, provision a cluster with 8 workers, each with 1 GPU.)
The only change is to specify 8, rather than -8, to select 8 GPUs on the cluster rather than on the driver. GPU utilization is pleasingly full across 8 machines’ GPUs (the idle one is the driver, which does not participate in the training):
Accuracy is again about the same as expected, at 83.6%. Total run time is almost 17 minutes rather than 12.6, which reflects the overhead of coordinating GPUs across machines. This overhead could be worthwhile in some cases for cost purposes, and is simply a necessary evil if a training job has to scale past 16 GPUs. Where possible, allocating all the GPUs on one machine is faster though.
For a problem of this moderate size, it probably won’t be possible to usefully exploit more GPU resources. Keeping them busy would mean larger learning rates and the learning rate is already about as high as it can go. For this network, a few K80 GPUs may be the right maximum amount of resource to deploy. Of course, there are much larger networks and datasets out there!
Conclusion
Deep learning is powerful magic, but we always want it to go faster. It scales in different ways though. There are new best practices and pitfalls to know when setting out to train a model. A few of these helped the small image classification problem here improve accuracy slightly while reducing runtime 7x. The first steps in scaling aren’t more resources, but looking for easy optimizations.
Scaling to train on an entire large data set in the cloud requires some new tools, but not necessarily more GPUs at first. With careful use of Petastorm and /dbfs/ml, 10x the data helped achieve 82.7% accuracy is not much more than 10x the time on the same hardware.
The next step of scaling up means utilizing multiple GPUs with tools like Horovod, but doesn’t mean a cluster of machines necessarily, unlike in ETL jobs where a cluster of machines is the norm. A single 8 GPU instance allowed training to finish almost 8x faster and achieve over 83% accuracy. Only for the largest problems are multiple GPU instances necessary, but Horovod can help scale even there without much overhead.
| https://databricks.com/blog/2019/08/15/how-not-to-scale-deep-learning-in-6-easy-steps.html | CC-MAIN-2019-47 | refinedweb | 3,811 | 59.7 |
Store Review Controller
You can ask users to rate and review your app at appropriate times throughout the user experience. Make the request when users are most likely to feel satisfaction with your app, such as when they’ve completed an action, level, or task. Make sure not to interrupt their activity.
To give users an easy way to provide feedback on the App Store without leaving your app, use the ISN_SKStoreReviewController. You can prompt for ratings up to three times in a 365-day period. Users will submit a rating through the standardized prompt, and can authenticate with Touch ID to write and submit a review.
Ensure that your support contact information is easy to find in your app and on your App Store product page. It gives users a direct way to reach you if they encounter difficulties, and gives you an opportunity to resolve negative experiences that may otherwise lead to poor reviews.
See the example below:
using SA.iOS.StoreKit; ... ISN_SKStoreReviewController.RequestReview();
The result will be a popup similar to the one on a picture below:
You can't get the result of the review request, or find out how many stars user has given your app. The reason is, Apple wants users to only do a fair review, and this is prohibited for app developers to give any kind of reward to users for a positive feedback.
Manually Request a Review
To enable a user to initiate a review as a result of an action in the UI use a deep link to the App Store page for your app with the query parameter
action=write-review appended to the URL.
using SA.iOS.UIKit; ... string writeReviewURL = ""; ISN_UIApplication.OpenURL(writeReviewURL); | https://unionassets.com/ios-native-pro/store-review-controller-631 | CC-MAIN-2019-09 | refinedweb | 285 | 58.62 |
# include <stdio.h> typedef struct { char a; } buffer; int main (void) { struct buffer { int a; } ; int x = 4 //**/ 2 ; buffer b; b.a = 655; fprintf (stderr, "b.a = %d / x = %d\n", b.a, x); }Here is a transcript of what happens when it is compiled with a conforming C compiler:
> gcc --ansi struct.c struct.c: In function `main': struct.c:10: warning: overflow in implicit constant conversion This is just a warning not a violation of language specification. Most compilers would not complain. > a.out b.a = -113 / x = 2Here is a transcript of what happens when it is compiler with a conforming C++ compiler:
> g++ --ansi struct.c > a.out b.a = 655 / x = 4
Any participant in the trial may use this as they see fit. | http://www.cs.virginia.edu/~evans/cs655-S00/mocktrial/courts2.html | CC-MAIN-2018-43 | refinedweb | 130 | 75.3 |
ROS - Arduino subscriber question
Hello,
I am quite new to ROS but familiar with Arduino. I managed to run the Arduino through ROS. However, I have some questions that I couldn't find the answers for.
I am initializing ROS by typing "roscore" and then establishing the serial communication between arduino and ros by using the "rosrun rosserial_python serial_node.py /dev/ttyUSB0" command. So far so good. Then I send the following command;
rostopic pub red std_msgs/UInt16 1000 which
which calls the "red" function and uses the input of "1000" for delay time. This also works fine. However, when I type this line, it executes only once and there appears a message on the terminal "publishing and latching message. Press ctrl-C to terminate". Meaning that I need to first execute this with keyboard command to be able to input another command.
First of all, I want to send a command and I want this command to run the related function forever unless I send a new command which calls another function and runs it forever. To be able to do that I need those subscribers to return something when they are called or somehow I need to be able to detect they are called so that I can use a simple if else statement in the main loop which serves what I want. Furthermore, I need to get rid of that "publishing and latching message. Press ctrl-C to terminate" message so that I can send another command without cancelling it.
This is what's in my mind and not sure if this is the right way for that framework but appreciate if you show me the way.
#include <ros.h> #include <std_msgs/UInt16.h> #define led_y 12 #define led_r 8 ros::NodeHandle nh; void yellow(const std_msgs::UInt16& cmd_msg) { digitalWrite(led_y,HIGH); delay(cmd_msg.data); digitalWrite(led_y,LOW); delay(cmd_msg.data); } void red(const std_msgs::UInt16& cmd_msg) { digitalWrite(led_r,HIGH); delay(cmd_msg.data); digitalWrite(led_r,LOW); delay(cmd_msg.data); } ros::Subscriber<std_msgs::UInt16> sub("red", red); ros::Subscriber<std_msgs::UInt16> sub("yellow", yellow); void setup() { digitalWrite(MS1, LOW); digitalWrite(MS2, LOW); nh.initNode(); nh.subscribe(sub); } void loop() { nh.spinOnce(); delay(1); } | https://answers.ros.org/question/215390/ros-arduino-subscriber-question/ | CC-MAIN-2019-51 | refinedweb | 365 | 57.37 |
Reuse Qt UI class [SOLVED]
I am new to Qt. I have created user class with gui form. now I want to extend this class to Employee class. i need add section for reset password. Is it possible in Qt. I am using Qt 5.
Hi and welcome to devnet,
In general you can reuse ui files in other classes; if you paste some code we could help you in a better way
I have files user.cpp user.h user.ui, this class is used show then user's details on application. now i want to extend this class to employee. employee class have password reset fuction additionally. In employe ui file it must have textfield for password. all other are same as user class. now i want to use user.ui form to create employee.ui form. just want to add some textfield additionally.
There is no inheritance of UI xml files, and the generated form classes in the Ui:: namespace reflect this.
A few approaches:
- Build a single form containing everything for the employee case and use that UI in both C++ classes. In the UserForm class setup the UI and hide the Employee extra widgets, and in the EmployeeForm set up the UI and just use the form as-is.
- Build a single form containing everything for the employee case and use that UI in a single C++ class that can be constructed in two modes. In one mode hide the extra widgets.
- Build a single form containing only the UserForm widgets. Use that form in both classes but in the EmployeeForm class programmtically add more widgets to the UI.
@ChrisW67 i have tried your solution i have created user class and employee class by extending user class. but i can't access user class widgets. i have defined them as protected. can you show me correct way to do it? | https://forum.qt.io/topic/54833/reuse-qt-ui-class-solved | CC-MAIN-2020-16 | refinedweb | 315 | 76.42 |
#include <linux/module.h>
int get_kernel_syms(struct kernel_sym *table);
int create_module(char *module_name, unsigned long size);
int init_module(char *module_name, char *code, unsigned codesize,
struct mod_routines *routines, struct symbol_table *symtab);
int delete_module(char *module_name);

struct kernel_sym {
unsigned long value;
char name[SYM_MAX_NAME];
};
These system calls have not yet been included in any library, which means
that they have to be called by the syscall(__NR_function) mechanism.

The kernel symbol table is copied into the buffer pointed to by table.
The entries are ordered in module LIFO order. For each module, an entry
that describes the module will be followed by the symbols exported by
that module.
create_module(module_name, size);
will allocate size bytes of kernel space for a module, and also
create the necessary kernel structures for the new module called
name. The module will now exist in kernel space, with the status
MOD_UNINITIALIZED.

Each name entry will actually not be an ordinary pointer, but instead
the offset of the corresponding string table entry relative to the
start of the symbol_table structure.

When all defined symbols have been listed, the status of the module
will be changed to MOD_RUNNING. Otherwise, the kernel memory
occupied by the module will be freed.
Linux version by Bas Laarhoven <bas@vimec.nl>,
0.99.14 version by Jon Tombs <jon@gtex02.us.es>,
extended by Bjorn Ekwall <bj0rn@blox.se>.
Naah...
Linux January 25, 1995 3
Recently I developed a library for ease the job of displaying texts and numbers on different LCD/OLED modules. This is mainly for my OBD-II data logger project which can be made up of different sets of Arduino hardware. I named the library as Arduino Text Display Library for Multiple LCD, or short as MultiLCD.
The source code is available on Github.
The library encapsulates several libraries for various Arduino LCD/LED display shields or modules into a set of unified APIs.
Currently it supports these hardware:
DFRobot LCD4884 shield
Nokia 3310/5100 LCD module
LCD1602 shield
SSD1306 OLED module
ZT I2C OLED module
The library includes fonts data for ASCII characters (5×7) and digits (8×8, 16×16). By using the library, it is extremely simple for display texts and numbers on desired position on a LCD screen, while very little change is needed to switch from one LCD module to another.
The library class inherits the Print class of Arduino, so that you can display texts on LCD with standard Arduino functions like this:
lcd.print("Hello, World!";) lcd.print(foo, DEC); lcd.print(bar, HEX); lcd.print(1.23) // gives "1.23" lcd.print(1.23456, 2); // gives "1.23"
Besides, it provides unified APIs for initializing and controlling the LCD, as well as some convenient operations.
void begin(); /* initializing */ void clear(); /* clear screen */ void setCursor(unsigned char column, unsigned char line); /* set current cursor */ void printInt(unsigned int n, FONT_SIZE size); /* display a integer number with desired size of font*/ void printLong(unsigned long n, FONT_SIZE size); /* display a long number with desired size of font*/
The code using the library can be extremely simple.
#include <Wire.h> #include <MultiLCD.h> LCD_SSD1306 lcd; /* for SSD1306 OLED module */ void setup() { lcd.begin(); lcd.clear(); lcd.setCursor(0, 0); lcd.print("Hello, world!"); lcd.setCursor(0, 1); lcd.printLong(1234567890, FONT_SIZE_SMALL); lcd.setCursor(0, 2); lcd.printLong(1234567890, FONT_SIZE_MEDIUM); lcd.setCursor(0, 3); lcd.printLong(12345678, FONT_SIZE_LARGE); } | https://forum.arduino.cc/t/multilcd-a-library-for-displaying-texts-on-different-lcd-oled/165085 | CC-MAIN-2021-49 | refinedweb | 331 | 58.28 |
Intro
In this example, you’re going to build a 3D engine based on perspective scaling in 19 lines of code. Before Flash 10 there was no native support for 3D and all you had were x and y coordinates to play around with. So you needed to add another dimension to get 3D – a z-axis. So the trick to creating a z-axis was to use perspective scaling. Which means that as an object moves away from you it gets small and as it moves towards you it get larger. But we need to quantify this idea (make some math out of it) so we can program it. And that comes from Thales Theorem.
Thales Theorem
Imagine that you are in your home looking out the window. As you approach the window objects outside look larger and as you get farther away from the window objects outside look smaller. Your distance from the window is called your focal length (fl), the window is your projection plane (or viewport), and your eye is the vanishing point.
Now don’t move – this fixes your focal length – and watch outside as objects move closer to and farther away from your window. As a bird flies closer to the window it looks larger and as it flies away it looks smaller. This is your z-axis – the distance between the outside object and your window pane. And the equation that governs the behavior that you see is given below;
sizeRatio = size/SIZE = fl/(fl + z)
where your sizeRatio equals “one” when your outside object is at the window and “zero” when your object (bird) flies far away (off to infinity). This equation works and is illustrated in the graphic below as a Blender monkey peering at a projection plane.
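To make the behavior concrete, here is a small sketch of the scaling equation — written in Java rather than the post's ActionScript, with the 250-pixel focal length assumed from the listing further below:

```java
public class PerspectiveScale {
    // Thales theorem: apparent size shrinks as fl / (fl + z)
    static double scale(double fl, double z) {
        return fl / (fl + z);
    }

    public static void main(String[] args) {
        double fl = 250.0; // focal length: your distance from the window
        System.out.println(scale(fl, 0));    // object at the window   -> ratio 1.0
        System.out.println(scale(fl, 250));  // one focal length away  -> ratio 0.5
        System.out.println(scale(fl, 1e9));  // bird flies off to infinity -> ~0
    }
}
```

Note how the ratio is exactly one at the window pane and falls toward zero as z grows, matching the equation above.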
Application of Thales Theorem
Z-Sorting
When dealing with multiple objects that overlap, perspective scaling alone is not enough. In the Flash Player you need to z-sort. This is even an issue with Gumbo (Flex 4). When creating a 3D carousel for example you need to z-sort the objects as they spin to put the closest one on top of the stack. The technical name is transposition and in AS2 it was easily accomplished using the swapDepths method. But in AS3 it is a little more complicated.
In AS3, the display list functions as an array and each display object has an index. The index start at zero and goes up to the number of objects on your stage where index zero is the bottom object. So since the display object is a child you can change its position using the setChildIndex method.
Since all your objects are in an array, you can sort that array by z and then set the indices of your array objects based on z. And that’s how it is presently done! Here’s a sort code snippet that illustrates the concept
private function sortmyZ():void
{
    myObjects.sortOn("myZvalue", Array.DESCENDING | Array.NUMERIC);
    for (var i:uint = 0; i < numObjects; i++)
    {
        var myObject:MYObject3D = myObjects[i];
        setChildIndex(myObject, i);
    }
}
The code snippet sorts the objects in reverse numerical order from high to low based on the myZvalue of your objects. Thus, the objects further away from your projection plane get a lower index value placing them on the bottom of the stack.
This method will need to be called each time your 3D engine iterates the position of your objects. Typically this occurs on an onEnterFrame event.
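The transposition idea in the snippet above, stripped of the Flash display-list API, can be sketched in plain Java — sort descending by z, and the array index then plays the role of the display depth (the class and field names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ZSortDemo {
    // Minimal stand-in for a 3D display object: only depth matters here.
    static class Obj3D {
        final String name;
        final double z;
        Obj3D(String name, double z) { this.name = name; this.z = z; }
    }

    // Sort descending by z: the farthest object gets index 0 (bottom of
    // the stack), the nearest object gets the highest index (top).
    static List<Obj3D> zSort(List<Obj3D> objects) {
        objects.sort(Comparator.comparingDouble((Obj3D o) -> o.z).reversed());
        return objects;
    }

    public static void main(String[] args) {
        List<Obj3D> scene = new ArrayList<>(List.of(
            new Obj3D("near", 10), new Obj3D("far", 900), new Obj3D("mid", 300)));
        for (Obj3D o : zSort(scene)) System.out.println(o.name);
        // prints far, mid, near -- bottom of the stack first
    }
}
```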
With that said, you're now ready to build your 3D engine.
3D Flash Engine in 19 Lines of Code
Here is the Demo, Download, Youtube, and Discussion.
Here are the steps for creating the program:
1. Start by importing the Sprite class; this is where you are going to draw a ball that you will animate in 3D.
import flash.display.Sprite;//imports sprite class
2. Next declare your variables zposition, Angle, and focal length.
var zposition:Number = 0;//z position
var myAngle:Number =0;//Angle of myBall
var fl:Number = 250; //focal length
3. Next create your ball and add it to the stage.
var myBall:Sprite = new Sprite();//instantiates myBall sprite
myBall.graphics.beginFill(0xFF0000);//Assigns a ball color
myBall.graphics.drawCircle(0, 0, 40);//draws your ball at (0,0)
myBall.graphics.endFill();//ends the fill
addChild(myBall);//adds the ball to the stage
4. Next create your onEnterFrame listener which loops through your equations of motion. This is the heart of all 3D engines.
addEventListener(Event.ENTER_FRAME, onEnterFrame);//loops equations
5. Finally, create the function that will be looped. These are your equations of motion which govern the perspective as it is changed and converts 3D to 2D (or projects onto the viewport).
function onEnterFrame(event:Event):void{
var scale:Number = fl / (fl + zposition);//scale perspective
myAngle=myAngle+.1;//iterates angle
if(Math.abs(myAngle)>=20) myAngle=-myAngle;//change sign
myBall.x = 300*Math.sin(myAngle)*scale+300; //ball orbit x
myBall.y = 300*Math.cos(myAngle)*scale; //ball orbit y
myBall.scaleX = scale;//scales perspective in x
myBall.scaleY = scale;//scales perspective in y
zposition = myAngle*100;} //increments z
Though this is not Papervision it illustrates a number of elements that every 3D engine possesses;
- a frame looper/renderer
- perspective (z-coordinate)
- projection onto a viewport
- primitive or basic shape
- addition of a color (or material)
And all of this in just 19 lines of code. If only it would have stayed this simple. Papervision started off with only 20 classes, now it is in the hundreds and growing. But as they say, no pain no gain.
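Those elements can be compressed into a console-only sketch — again in Java rather than ActionScript — where one "frame" of the loop computes the perspective scale and projects the orbit onto screen coordinates, just as steps 4 and 5 above (the 300-pixel orbit radius and 250 focal length are taken from the listing):

```java
public class TinyEngine {
    static final double FL = 250.0;   // focal length, as in the listing
    static double angle = 0.0;        // myAngle
    static double zpos = 0.0;         // zposition

    // One iteration of the onEnterFrame loop: returns {screenX, screenY, scale}.
    static double[] tick() {
        double scale = FL / (FL + zpos);              // perspective scale
        angle += 0.1;                                 // iterate angle
        if (Math.abs(angle) >= 20) angle = -angle;    // change sign
        double x = 300 * Math.sin(angle) * scale + 300; // ball orbit x
        double y = 300 * Math.cos(angle) * scale;       // ball orbit y
        zpos = angle * 100;                           // increment z
        return new double[] { x, y, scale };
    }

    public static void main(String[] args) {
        for (int frame = 0; frame < 3; frame++) {
            double[] p = tick();
            System.out.printf("frame %d: x=%.1f y=%.1f scale=%.2f%n",
                frame, p[0], p[1], p[2]);
        }
    }
}
```

On the first frame zpos is 0, so the scale is exactly 1.0; as the ball's z grows the scale drops and the projected coordinates pull toward the vanishing point.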
Just cut and paste the code above into Flash and watch the magic of perspective occur.
Invoking cmd.exe from .NET
One of the primary reasons I am writing this blog is to give back to the community -- technical blogs and message board archives have saved my bacon many times, so my plan is, whenever I spend an hour or two figuring something out and I wasn't able to find any concise resources about it, I'll write a post on it so that the next person may stumble across it and save some time.
Yesterday I was working on calling cmd.exe from within .NET, and had to piece together a few things from various places. Here is the code I ended up with. I've added comments to explain the reason for each line of code -- hopefully someone will find this helpful.
using System;
using System.Diagnostics;

namespace Utilities
{
    public class Command
    {
        public static string ExecuteCmd(string arguments)
        {
            // Create the Process Info object with the overloaded constructor
            // This takes in two parameters, the program to start and the
            // command line arguments.
            // The arguments parm is prefixed with "@" to eliminate the need
            // to escape special characters (i.e. backslashes) in the
            // arguments string and has "/C" prior to the command to tell
            // the process to execute the command quickly without feedback.
            ProcessStartInfo _info = new ProcessStartInfo("cmd", @"/C " + arguments);

            // The following commands are needed to redirect the
            // standard output. This means that it will be redirected
            // to the Process.StandardOutput StreamReader.
            _info.RedirectStandardOutput = true;

            // Set UseShellExecute to false. This tells the process to run
            // as a child of the invoking program, instead of on its own.
            // This allows us to intercept and redirect the standard output.
            _info.UseShellExecute = false;

            // Set CreateNoWindow to true, to suppress the creation of
            // a new window
            _info.CreateNoWindow = true;

            // Create a process, assign its ProcessStartInfo and start it
            Process _p = new Process();
            _p.StartInfo = _info;
            _p.Start();

            // Capture the results in a string
            string _processResults = _p.StandardOutput.ReadToEnd();

            // Close the process to release system resources
            _p.Close();

            // Return the output stream to the caller
            return _processResults;
        }
    }
}
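For comparison — purely as a sketch, not part of the original post — the same redirect-and-capture pattern looks like this in Java, using /bin/sh as the Unix-side analogue of cmd.exe (on Windows you would pass "cmd" and "/C" instead):

```java
import java.io.IOException;
import java.io.InputStream;

public class Command {
    // Run a command through the shell and return its standard output,
    // mirroring the C# ExecuteCmd above: no window, output redirected.
    public static String executeSh(String arguments) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", arguments);
        pb.redirectErrorStream(true); // optionally fold stderr into stdout
        Process p = pb.start();
        String output;
        try (InputStream in = p.getInputStream()) {
            output = new String(in.readAllBytes()); // capture the results
        }
        p.waitFor(); // let the child exit and release its resources
        return output;
    }

    public static void main(String[] args) throws Exception {
        System.out.print(executeSh("echo hello"));
    }
}
```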
Struts2 ajax validation example.
In this example, you will see how to validate login through Ajax in
struts2.
1-index.jsp
<html>
<...="1"><tr><td>
<s:form
action="
Ajax validation in struts2.
In this section, you will see how to validate fields of form in
struts2.
1-index.jsp
<html>
<head>..."><tr><td>
<s:form
action="
Ajax form element - Ajax
Ajax form element I have problem in passing a value into form field using ajax. I want to get the value as a parameter. Can some one help me plz..."/>
</form-beans>
<action-mappings>
<action path
Struts 2.1.8 Login Form
... to
validate the login form using Struts 2 validator framework.
About the example... in the struts.xml file:
<!-- Login Form Configuration -->
<action name
Struts 2 Ajax
and complete implementation of login form
using the Ajax (DOJO). Lets...
Struts 2 Ajax
In this section, we explain you Ajax based
development in Struts 2. Struts 2 provides built
getting null value in action from ajax call
getting null value in action from ajax call Getting null value from ajax call in action (FirstList.java)... first list is loading correctly. Need..." name="form1" action="StudentRegister" method="post" enctype="multipart/form-data
Ajax
Ajax how to include ajax in jsp page?
Hi,
Please read Ajax First Example - Print Date and Time example.
Instead of using PHP you can write your code in JSP.
Thanks
Struts2
;
Why Struts 2
The new version Struts 2.0 is a combination of the Struts action...Struts2
Apache
Struts: A brief Introduction
Apache Struts is an open with BIRT - Ajax
or not.
------------------------------------------------------------------
Read for more information about Ajax...Ajax with BIRT Hai i am currently working on BIRT in a company... is my code..
JSP and Servlet using AJAX
function getXMLObject
Ajax
in a tabular form.
Same for >33kv
We can use ajax where instead of a link a radio button can fetch the data and can populate a table.If ajax can be used
Struts2 Training
Day--2
Struts2 Actions
Simple Action
Redirect Action
Struts2
Validator Framework
Day--3... Tag
Submit Tag
Reset Tag
Day--5
A Simple Struts2 Login Application
JSP Ajax Form
Ajax Form Example
This example explains you how to develop a form that populates dynamically
from the database though Ajax call.
When user selects employee... in the box automatically.
Read more at
Login Action Class - Struts
Login Action Class Hi
Any one can you please give me example of Struts How Login Action Class Communicate with i-bat
PHP - Ajax
"
For read more information on Ajax visit to : How can ajax is used in php,plz expalin with example. Hi friend,
Code to solve the problem :
Aj
tags in struts2
;Login</title>
</head>
<body>
<s:form action="login...tags in struts2 Hello
I am using simple tags in struts2.
Like..."
pageEncoding="ISO-8859-1"%>
<%@ taglib prefix="s" uri="/struts
DOJO Tree - Ajax
DOJO Tree Hi I am able to generate a Tree structure using DOJO toolkit dynamically which is coming from my Struts (using 1.2) action classes...,
read for more information.
Login - Struts
Struts Login page I want to page in which user must login to see...;<table border="1" ><form method="post" action... type that url... Login Page after logging that page must return to my page.please
AJAX with
Ajax resources, in a digg style, allows the user to register and addd his/her
own links
Read full Description
AJAX Search
;/style>
</head>
<body>
<form id="form" action="#">...AJAX Search I have to create a project where the user enters... using PHP and MYSQL as the database. Can somebody please suggest me the AJAX
struts2
struts2 how to read properties file in jsp of struts2
Hi,
You can use the Properties class of Java in your action class:

Properties pro = new Properties();
InputStream in = getClass().getResourceAsStream("/app.properties"); // any properties file on the classpath
pro.load(in);
String p = pro.getProperty("key");
System.out.println("key : " + p);
Read
ajax doubt
ajax doubt function getentityshortname(val)
{
alert("entered...);
req.setRequestHeader('Content-Type','application/x-www-form-urlencoded...);
//document.forms[0].action=url;
//document.forms[0].submit
Struts 1.2 and ajax example
Struts 1.2 and ajax example Struts 1.2 and ajax example with data from database
Registration - Ajax
;hi friend,
registration form in jsp
function checkEmail(email...;
}
User registration form...:
--------------------------
read
AJAX REGISTRATION FORM
AJAX REGISTRATION FORM I have implemented user name, check and state selection using ajax, in the html registration form. I am loading two XML... either username or state. How to implement both in same registration form could any
Struts2.2.1 Ajax div tag example.
;
<action
name="...Struts2.2.1 Ajax div tag example.
In this section, we will introduce you to about the Ajax div tag. The div tag when used with Ajax refreshes the content
Ajax validation - Ajax
Ajax validation how to validate a form using Ajax and php ... you.
Please visit for more informaton:
Thanks.
Amarde example
Ajax example Hi,
Where can I find Ajax example program?
Thanks
Hi,
Get it at:
Ajax Tutorials
First Ajax Example
Ajax Login Example
Thanks
Struts 2 Login Form Example
you can create Login form in
Struts 2 and validate the login action...Struts 2 Login Form Example tutorial - Learn how to develop Login form....
Let's start developing the Struts 2 Login Form Example
Step 1... the ajax wll be called and retrieve me the blocks under the selected district
What is Ajax?
JavaScript and other technologies such as CSS and XML.
Read more at What is Ajax...What is Ajax? Hi,
What is Ajax and what is use of Ajax in web programming?
Thanks
Hi,
Ajax stands for AJAX stands for Asynchronous
Struts 1.2.9 (NB6.1) ? Problems with depend <html:select> and AJAX - Struts
select value as a parameter
How I do it with AJAX?? Maybe JQuery?
Sorry...Struts 1.2.9 (NB6.1) ? Problems with depend and AJAX Hi
I have 2 and one is depend to the other
The 1st select I fill it of the DB
Struts 2 Session Scope
;
In this section, you will learn to create an AJAX
application in Struts2... is created with AJAX in Struts2 Framework. Before we
start
the things, we need..." namespace="/roseindia" extends="struts-default">
<action name
AJAX Magazine
AJAX Magazine
AJAX blog focusing on new AJAX developments.
Read full Description Lessons
Ajax Lessons
AjaxLessons.com is a resource for ajax tutorials as well as information
surrounding Ajax and web 2.0.
Read full Description
Ajax Linki
Ajax Linki
Links - Ajax Contents, Books, Tutorials and everything about Ajax
Read full Description
Ajax
Ajax send the example to fetch the data from the server by using ajax in java.
for ex:-if there are states which is used to display in frontend we use ajax.
send it to me
The AJAX JSP Tag Library
JavaScript to implement an AJAX-capable web
form.
Read full Description...
The AJAX JSP Tag Library
The AJAX JSP Tag Library is a set of JSP tags that
simplify the use
Ajax in jQuery
Ajax in jQuery How ajax can be use in JQuery?
Given below the code for ajax in jquery:
$(document).ready(function() {
$('#form').submit(function(){
var number = $('#number').val();
$.ajax({type:"post",url
Struts 2.2.1 - Struts 2.2.1 Tutorial
Configuring Actions in Struts application
Login Form Application... Validators
Login form validation example
Struts 2.2.1 Tags
Type... design pattern in Java
technology. Struts 2.2.1 provides Ajax support
Main function parameter in C language - Ajax
Main function parameter in C language Please let me know how this Main(int argc,char*argv[]) works and detailed example on their roles in command line arguement.Also how is fgetpos() and fsetpos() used in file operation
Ajax Tutorial
Ajax Tutorial
Many Ajax Tutorials in a blog style
Read full Description
Aspects of AJAX
Aspects of AJAX
AJAX blog with loads of resources
Read full Description
AJAX Blog
AJAX Blog
Daily AJAX content with sreenshots and commentary.
Read full Description
AJAX Goals
AJAX Goals
AJAX site with forums, code samples, news and articles
Read full Description
AJAX Guru
AJAX Guru
AJAX blog by Kishore
Read full Description
AJAX Impact
AJAX Impact
Very nice AJAX Community with great references
Read full Description
AJAX Line
AJAX Line
AJAX Community with a blog, forum and tutorials
Read full Description
Ajax Links
Ajax Links
AJAX links, code samples and news
Read full Description
ajax
ajax how to connect ajax with mysql without using php,asp or any other scripting language.
please answer soon
form submit in ajax
ajax register form
ajax form Register
Ajax Camp
Ajax Camp
Ajax Camp is a community for learning, interacting, and asking questions
about web-based development using Javascript and Ajax
Read full Description
struts imagelink - Ajax
struts imagelink i need one dropdown menu in that i have to select image
AND i need one button when i clicks on that image will be open
AJAX World
AJAX World
[Google Group]AJAX discussion group with over a thousand members!
Read full Description | http://www.roseindia.net/tutorialhelp/comment/99615 | CC-MAIN-2014-23 | refinedweb | 1,509 | 65.83 |
XQuery
Introduction
XQuery is a language for traversing XML documents to select and aggregate items of interest and to transform them for output as XML or some other format. XPath is the element selection part of XQuery.
The Qt XML Patterns module supports using XQuery 1.0 and XPath 2.0 in Qt applications, for querying XML data and for querying non-XML data that can be modeled to look like XML. Readers who are not familiar with the XQuery/XPath language can read A Short Path to XQuery for a brief introduction.
Advantages of Using Qt XML Patterns and XQueryt XML Patterns and XQuery in your Qt programs are summarized as follows:
- Ease of development: All the C++ programming required to perform data query tasks can be replaced by a simple XQuery like the example above.
- Comprehensive functionality: The expression syntax and rich set of functions and operators provided by XQuery are sufficient for performing any data searching, selecting, and sorting tasks.
- Conformance to standards: Conformance to all applicable XML and XQuery standards ensures that Qt XML Patterns can always process XML documents generated by other conformant applications, and that XML documents created with Qt XML Patterns can be processed by other conformant applications.
- Maximal flexibility The Qt XML Patterns module can be used to query XML data and non-XML data that can be modeled to look like XML.
Using the Qt XML Patterns Module
There are two ways Qt XML Patterns can be used to evaluate queries. You can run the query engine in your Qt application using the Qt XML Patterns C++ API, or you can run the query engine from the command line using Qt's
xmlpatterns command line utility.
Running the query engine from your Qt application
If we save the example XQuery shown above in a text file (e.g.
myquery.xq), we can run it from a Qt application using a standard Qt XML Patternst XML Patterns module will not be built if exceptions are disabled, or if you compile Qt with a compiler that doesn't support member templates, e.g., MSVC 6.
See the QXmlQuery documentation for more information about the Qt XML Patterns C++ API.
Running the query engine from the command line utility.
The XQuery Data Model.
Binding program variables to XQuery variablest XML PatternsSerializer serializer(query, myOutputDevice); query.evaluateTo(&serializer);
Each program variable is passed to Qt XML Patterns as a QVariant of the type of the C++ variable or constant from which it is constructed. Note that Qt XML Patterns.
Binding To QVariant::Time.
Binding To QVariantList.
Interpreting XQuery resultst XML Patterns()) { #ifdef qdoc QXmlNodeModelIndex i = item.toNodeModelIndex(); // process node #endif // qdoc }):
Handling QXmlNames_cast>()) { #ifdef qdoc QXmlName n = qvariant_cast<QXmlName>(v); // process QXmlName n... #endif // qdoc } else { // error } break; } } else if (item.isNode()) { #ifdef qdoc QXmlNodeModelIndex i = item.toNodeModelIndex(); // process node #endif // qdoc } item = result.next(); }
To access the strings in a QXmlName returned by an XQuery evaluation, the QXmlName must be accessed with the name pool from the instance of QXmlQuery that was used for the evaluation.
No mapping for xs:time.
Using XQuery with Non-XML Data
Although the XQuery language was designed for querying XML, with Qt XML Patternst XML Patterns XQuery engine via the same API the XQuery engine uses for querying XML.
When Qt XML Patternst XML Patterns can be used out of the box to perform this query, provided myOrders.xml actually contains well-formed XML. It can be loaded directly into the default XML node model and traversed. But suppose we want Qt XML Patterns to perform queries on the hierarchical structure of the local file system. The default XML node model in Qt XML Patternst XML Patterns,t XML Patterns,t XML Patterns to query non-XML data:
- The internal representation for word processor documents
- The set of dependencies for a software build system
- The hierarchy (or graph) that links a set of HTML documents from a web crawler
- The images and meta-data in an image collection
- The set of D-Bus interfaces available in a system
- A QObject hierarchy.
See the QAbstractXmlNodeModel documentation for information about how to implement custom XML node models.
More on Using Qt XML Patterns with Non-XML Data
Subclassing QAbstractXmlNodeModel to let the query engine access non-XML data by the same API it uses for XML is the feature that enables Qt XML Patternst XML Patterns.
.
Security Considerations
Code Injection
XQuery is vulnerable to code injection attacks in the same way as the SQL language. If an XQuery is constructed by concatenating strings, and the strings come from user input, the constructed XQuery could be malevolent. The best way to prevent code injection attacks is to not construct XQueries from user-written strings, but only accept user data input using QVariant and variable bindings. See QXmlQuery::bindVariable().
The articles Avoid the dangers of XPath injection, by Robi Sen and Blind XPath Injection, by Amit Klein, discuss the XQuery code injection problem in more detail.
Denial of Service Attacks
Applications using Qt XML Patterns are subject to the same limitations of software as other systems. Generally, these can not be checked. This means Qt XML Patterns does not prevent rogue queries from consuming too many resources. For example, a query could take too much time to execute or try to transfer too much data. A query could also do too much recursion, which could crash the system. XQueries can do these things accidentally, but they can also be done as deliberate denial of service attacks.
Features and Conformance
XQuery 1.0
Qt XML Patterns aims at being a conformant XQuery processor. It adheres to Minimal Conformance and supports the Serialization Feature and the Full Axis Feature. Qt XML Patterns currently passes 97% of the tests in the XML Query Test Suite. Areas where conformance may be questionable and where behavior may be changed in future releases include:
- Some corner cases involving namespaces and element constructors are incorrect.
- XPath is a subset of XQuery and the implementation of Qt XML Patterns uses XPath 2.0 with XQuery 1.0..
Only queries encoded in UTF-8 are supported.
XSLT 2.0
Partial support for XSLT was introduced in Qt 4.5. Future releases of Qt XML Patterns will aim to support these XSLT features:
- Basic XSLT 2.0 processor
- Serialization feature
- Backwards Compatibility feature
For details, see XSL Transformations (XSLT) Version 2.0, 21 Conformance.
Note: In this release, XSLT support is considered experimental.
Unsupported or partially supported XSLT features are documented in the following table. The implementation of XSLT in Qt 4.5 can be seen as XSLT 1.0 but with the data model of XPath 2.0 and XSLT 2.0, and using the using the functionality of XPath 2.0 and its accompanying function library. When Qt XML Patterns encounters an unsupported or partially support feature, it will either report a syntax error or silently continue, unless otherwise noted in the table.
The implementation currently passes 42% of W3C's XSLT test suite, which focus on features introduced in XSLT 2.0.
The Qt XML Patterns implementation of the XPath Data Model does not include entities (due to QXmlStreamReader not reporting them). This means that functions
unparsed-entity-uri() and
unparsed-entity-public-id() always return negatively.
XPath 2.0
Since XPath 2.0 is a subset of XQuery 1.0, XPath 2.0 is supported. Areas where conformance may be questionable and, consequently, where behavior may be changed in future releases include:
- Regular expression support is currently not conformant but follows Qt's QRegExp standard syntax.
- Operators for
xs:time,
xs:date, and
xs:dateTimeare incomplete.
- Formatting of very large or very small
xs:double,
xs:float, and
xs:decimalvalues may be incorrect.
xml:id
Processing of XML files supports
xml:id. This allows elements that have an attribute named
xml:id to be looked up efficiently with the
fn:id() function. See xml:id Version 1.0 for details.
XML Schema 1.0
There are two ways Qt XML Patterns can be used to validate schemas: You can use the C++ API in your Qt application using the classes QXmlSchema and QXmlSchemaValidator, or you can use the command line utility named xmlpatternsvalidator (located in the "bin" directory of your Qt build).
The Qt XML Patterns implementation of XML Schema validation supports the schema specification version 1.0 in large parts. Known problems of the implementation and areas where conformancy may be questionable are:
- Large
minOccursor
maxOccursvalues or deeply nested ones require huge amount of memory which might cause the system to freeze. Such a schema should be rewritten to use
unboundedas value instead of large numbers. This restriction will hopefully be fixed in a later release.
- Comparison of really small or large floating point values might lead to wrong results in some cases. However such numbers should not be relevant for day-to-day usage.
- Regular expression support is currently not conformant but follows Qt's QRegExp standard syntax.
- Identity constraint checks can not use the values of default or fixed attribute definitions.
Resource Loading
When Qt XML Patterns loads an XML resource, e.g., using the
fn:doc() function, the following schemes are supported:
XML
XML 1.0 and XML Namespaces 1.0 are supported, as opposed to the 1.1 versions. When a strings is passed to a query as a QString, the characters must be XML 1.0 characters. Otherwise, the behavior is undefined. This is not checked.
URIs are first passed to QAbstractUriResolver. Check QXmlQuery::setUriResolver() for possible rewrit. | http://doc-snapshots.qt.io/qt5-5.11/xmlprocessing.html | CC-MAIN-2018-26 | refinedweb | 1,596 | 56.45 |
AI for the Enterprise
The power of Visual Recognition: How to use Watson to identify a hand of cards
September 1, 2016 | Written by: Isabel Ren
Categorized: AI for the Enterprise
Share this post: article, we went over how we use Watson’s Natural Language Classifier API to distinguish between poker and casual intent in conversation. In this blog post, we will discuss how we recognize a hand of cards.
Recognizing Playing Cards
To get our robots playing a proper game of poker, they must understand the values of the cards they have in their hands. To do this, we use an image taken from cameras on the robot and split the problem into two parts – determining the suit of the card (spades, clubs, diamonds, hearts) and determining the rank of the card (Ace, 2, 3, etc.).
We tackled the problem from a multi-step approach because doing so cuts down on the number of comparisons we have to make. Instead of 52 comparisons, we can cut it down to 17 comparisons (4 for suit and 13 for rank). We use Watson’s Visual Recognition to identify suit and a hybrid solution to identify rank. In this post, we will be focusing on how we recognize suits.
The Visual Recognition service
Using Watson’s Visual Recognition Service to classify suits
IBM Watson’s Visual Recognition is a service that allows users to understand the content of images and classify images into logical categories. In addition to classifying images, Visual Recognition also offers facial detection and text recognition services. We will focus on classifying images in this post. The Visual Recognition service does come with a pre-trained classifier for general classification, but since we want to classify our images specifically for suits, we will train a custom Visual Recognition classifier on the four suits.
In this post, I’ll show you how to train your own custom suits classifier using Watson Developer Cloud’s Python SDK .
Getting Access
As in our previous posts, we will first have to create a Visual Recognition service in Bluemix to interact with.
Go ahead and instantiate an instance of the Visual Recognition from your Bluemix console and take note of the
api_key in your service credentials. This step is the same as we have done before.
Training a Classifier
In our project, our current suits classifier uses over 300 images for each suit. Our training data only uses images taken from the camera on our NAO robots since we will only be using these photos during our game of poker. In terms of training data, we use images of each card in different angles and lighting. Over several iterations, we have improved upon the accuracy of our classifier by adding more images that the classifier struggles with, such as dimly lit photos and face cards.
To use the Visual Recognition service, you will need at least 10 images for each class you want inside of your classifier in the form of JPGs or PNGs. In our case of suits, we want 4 zipped folders of images – one for each suit. In total I will be using 40 images for this tutorial. There is also an option to pass in negative training data, but by using classes inside of the classifier, the visual recognition service implicitly puts in the training data of other classes as the negative training data for the other class.
You can download the files I used below:
clubs.zip
diamonds.zip
hearts.zip
spades.zip
test1.jpg
All of my code and all of the images I used can be found in my GitHub repository here.
The code below creates a classifier for us to use. When passing the zip folders into the service, the identifier key for each class is in the form of
classname_positive_examples. For example, for our spades class, the key would be
spades_positive_examples. Note that the file uploads may take a bit of time.__), 'hearts.zip'), 'rb') as hearts, \ open(join(dirname(__file__), 'diamonds.zip'), 'rb') as diamonds, \ open(join(dirname(__file__), 'clubs.zip'), 'rb') as clubs, \ open(join(dirname(__file__), 'spades.zip'), 'rb') as spades : print "Uploading files..." print(json.dumps(visual_recognition.create_classifier('Suits', \ hearts_positive_examples=hearts, \ diamonds_positive_examples=diamonds, \ clubs_positive_examples=clubs, \ spades_positive_examples=spades), indent=2))
If you run this code, your response should look something like this:
{ "status": "training", "name": "suits_tutorial", "created": "2016-07-18T19:27:22.429Z", "classes": [ { "class": "spades" }, { "class": "hearts" }, { "class": "diamonds" }, { "class": "clubs" } ], "owner": "{YOUR_OWNER_ID_HERE}", "classifier_id": "{YOUR_CLASSIFIER_ID_HERE}" }
Be sure to take note of your
classifier_id, as that is the id that we will be using to utilize our classifier.
Your classifier will take a few minutes to finish training. Larger sets of training data will take more time to train. To check the status of your classifier, you can run the code below.
import json from os.path import join, dirname from os import environ from watson_developer_cloud import VisualRecognitionV3 visual_recognition = VisualRecognitionV3(VisualRecognitionV3.latest_version, api_key='{YOUR_API_KEY_HERE}') print(json.dumps(visual_recognition.get_classifier('YOUR CLASSIFIER ID'), indent=2))
If the
status is
ready, the classifier is done training. Now, we are ready to classify an image using our classifier. Run the code below to classify an image (
test1.jpg).__), './test1.jpg'), 'rb') as image_file: print(json.dumps(visual_recognition.classify(images_file=image_file, threshold=0, classifier_ids=['{YOUR_CLASSIFIER_ID_HERE}']), indent=2))
Your result should look something like this:
{ "images": [ { "image": "./test1.jpg", "classifiers": [ { "classes": [ { "score": 0.0713362, "class": "clubs" }, { "score": 0.0823247, "class": "diamonds" }, { "score": 0.0638997, "class": "hearts" }, { "score": 0.840459, "class": "spades" } ], "classifier_id": "{YOUR_SUITS_ID_HERE}", "name": "suits_tutorial" } ] } ], "custom_classes": 4, "images_processed": 1 }
We can see that spades had the highest confidence out of the four suits, with a score of
.840459 out of
1. Now you can use code you’ve written to train your own custom classifier on any kind of image you want!
Take a look at a video we took during development where we combine Speech to Text and Visual Recognition to recognize a poker hand of cards.
IBM Watson: Robots Identify a Hand of Cards
In addition to the functions we have walked through above, we can also delete our classifier, list all existing classifiers, detect faces, and recognize text in a similar fashion. Check out the the Watson Developer Cloud Github for more examples.
Further Reading
- Visual Recognition landing page
- API reference
- Detailed documentation
- Other Watson services
- Watson Developer Cloud examples
Learn more about Visual Recognition. | https://www.ibm.com/blogs/watson/2016/09/power-visual-recognition-use-watson-identify-hand-cards/ | CC-MAIN-2019-22 | refinedweb | 1,063 | 54.12 |
WikiAnswers

Associates Degrees
Did Serena Williams get an associate's degree?
No
Where in Texas can you get your AAS Degree in Diagnostic Medical Sonography?
For colleges and universities within the United States and its territories, you can obtain this information by clicking on the related links section (College Board) indicated at the bottom of this answer section and using the College Board site's College MatchMaker search engine. You can research...
What can lead to failure quickly for a paralegal? Why?
Breaching client confidentiality. Because it would put her/his boss at risk.
How many credits do you need to earn your ba from cw post?
Earn your Bachelor of Science in Accountancy and prepare yourself for a career in the fields of accountancy, taxation, and business services. Founded in 1974, LIU Post's School of ... If you have completed fewer than 24 credits, you must also submit high school transcripts and SAT/ACT scores.
What government agencies employ hospital social workers?
The Veterans Administration....
I have bachelor's in Criminal Justice What is the easiest way to get my RN degree?
It will depend on how many credits from your bachelor's degree will apply to either the associate or bachelor's degree in nursing. The critical issue is how many of the science courses you have completed, if any. These would include the following: human anatomy and physiology I and...
If I already have a bachelor's degree in genetics do I need an additional bachelor's degree in sonography or just a certificate to become an ultrasound technician?
Because you already have an existing degree, the certification would be fine if you can find a college or university that offers one. Read the following according to the U.S. Department of Labor, and follow through with the link provided. Education and training. There are several avenues for entry....
How many credits for 90 quarter hours?
90 quarter hours equals 60.00 credits (semester hours)
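For reference, the conversion behind that figure: one semester hour equals 1.5 quarter hours, so quarter hours are multiplied by 2/3. A minimal sketch (the class and method names are illustrative):

```java
// Convert quarter hours to semester hours: 1 semester hour = 1.5 quarter hours
public class CreditConversion {
    public static double quarterToSemester(double quarterHours) {
        return quarterHours * 2.0 / 3.0;
    }

    public static void main(String[] args) {
        System.out.println(quarterToSemester(90)); // prints 60.0
    }
}
```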
How much does an RN earn with an associate's degree vs a diploma?
According to the U.S. Bureau of Labor Statistics the estimated mean annual wage for registered nurses as of May 2008 is, $65,000. This would amount to $31.31 per hour.
How much school is required for x-ray tech?
In two years you can earn an AS degree; in four years, a BS degree.
How long to obtain Bachelor in Management...
What is the scientific name for dentist?
In England, we call them Orthodontists. Not sure if that's used internationally!
If you would take business classes in college how many years of college would it take? may...
How long does it take to get your nursing degree if you already have a bachelors?
If you already have a bachelor's degree, you might want to think about taking your RN program through a community college. Although you have a bachelor's degree, you will still be required to complete the non-clinical coursework particular to the RN program (if you have not already done so)...
If one has a Bachelor's degree in music but want to go into an it field like network administration could someone go back and get an associate's degree or do you go another route?
Yes, many individuals attend community colleges after having acquired their bachelor's degree to pursue another career path. I would suggest you acquire a specific and clearly articulated picture of the future you intend to create for yourself. Define your specific career goal and the position...
What percent of boys go to a four year college?
24%
What kind of degrees do you have to have to be a kindergarten teacher?
Hello, I have done M.E.C during my 12th and then I have done my degree with BBM (Bachelor in Business Management) as my stream, both in Andhra Pradesh. My question is whether I am eligible to do D.pharma?
Can you find employment that requires an associate's degree while you are going to school to get your associate's degree?
No, in order to meet the qualifications for the job you must first receive you associates degree.
Write a program for the sum of n natural numbers without using any loop?
The answer is to use recursion. The code follows:

    import java.util.Scanner;

    // This program computes the sum 1 to N using recursion (no loop)
    public class SumRecursion {
        public static int sum(int n) {
            if (n <= 0) return 0;
            return n + sum(n - 1);
        }

        public static void main(String[] args) {
            System.out.println("Enter N");
            Scanner in = new Scanner(System.in);
            int n = in.nextInt();
            System.out.println("Sum = " + sum(n));
        }
    }
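The sum of the first N natural numbers can also be computed with neither a loop nor recursion, using the closed form N(N+1)/2. A minimal sketch (the class and method names are illustrative, not from the original answer):

```java
// Sum 1..N via the closed-form formula N(N+1)/2 (no loop, no recursion)
public class SumFormula {
    public static long sum(int n) {
        // cast to long before multiplying to avoid int overflow for large n
        return (long) n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sum(10)); // prints 55
    }
}
```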
Advantages and disadvantages of rdbms?
Advantages: an RDBMS is more secure and supports normalization (reducing redundancy and repetition) and concurrency control, avoiding problems when two operations are processed at the same time; these are not possible in file-processing systems. Disadvantage: it requires more knowledge to implement.
Disadvantages of rdbms?
RDBMS stands for Relational Database Management System. Some of its disadvantages are poor support for storage of complex objects, lack of efficiency in integral support and slow extraction of data.
Is an artist required to have a degree?
no.
What college do you need to go to to get a degree in game testing?
There are NO educational requirements for game testing. However, there are many colleges today that offer degrees in game creation and development and may have a course in Game Testing. The most prevalent schools offering this program are community colleges but there are some universities....
Do you have to live in the community to go to that community college?
Anyone can attend any community college no matter where they live. However, the tuition rates differ according to where you live. For individuals who live in-county, the tuition rate will be the lowest the college charges. For individuals who live out-of-county it will be higher (most often...
What training will you need to become a social worker?
What training or course would you need to become a social worker?
How much do social worker make with a BA degree in Augusta Georgia?
According to the U.S. Bureau of Labor Statistics the estimated mean annual wage for social workers as of May 2008 is, $48,180. This would amount to $23.16 per hour.
What is the cost for out of state residents at Spelman College?
Spelman College, Atlanta, Georgia. Annual College Costs (Fall 2009):
Tuition and fees: $20,926
Room and board: $10,062
Books and supplies: $1,150
Estimated personal expenses: $2,100
Transportation expense: $1,044
For the source and more detailed information concerning your...
Can you transfer Apollo College credits to Hawaii Community College?
Hawaii Community College has a regional accreditation by the Western Association of Schools and Colleges. Apollo College does not have a regional accreditation. Typically, regionally accredited colleges and universities do not accept credits in transfer from institutions that do not have a regional...
Will your credits transfer from a community college to a four year university?
Yes, as long as the college has a regional accreditation (most all do). However, you must make sure you are in a transfer program (major). Typically, the associate of arts (AA) and associate in science (AS) degrees are transfer programs, but you must be careful with programs that are the associate...
How do you get a college official transcript if you have a hold on your account?
If you have a hold on your account, the college will not release an official transcript until your account has been satisfied and the hold removed. There is no other way, unless you can make some type of arrangement with the school.
What colleges offer weekend seminars to evaluate credits and grant an associates degree?
Click on the Related Links section (Answers.com) indicated directly below this answer section. It will take you to the official web page of World Education Services (WES). WES is a widely recognized evaluation agency.
How long does it take the average criminal justice major at the University of Toledo to get the bachelor's degree?
How many community college credits are needed to go to a state college?
You may apply as a transfer at any time. Just check with the school you want to go to and see if they take transfer students. CU Boulder loves transfer students. Otherwise you have to get your AA
Which universities offer diploma in bakery and pastry?
It depends on where you live. You should check the websites of your nearest universities and see if they have any diplomas listed there for food tech.
What is an AAA in liberal arts?
The term liberal arts denotes a curriculum that imparts general knowledge and develops the student's rational thought and intellectual capabilities, unlike the professional, vocational, technical curricula emphasizing specialization. The contemporary liberal arts comprise studying literature,...
How many credits are considered full-time when attending an online school?
For colleges and universities that operate on a regular semester system, a student must be enrolled for no less the 12 credits to be considered full-time. Whether online or on campus doesn't matter; The number of credits enrolled for does.
Is Stevens-Henager a regionally accredited college?
No, it is not. Stevens-Henager is an accredited member of the Accrediting Commission of Career Schools and Colleges (ACCSC). College degree programs are accredited by national accrediting bodies recognized by the U.S. Department of Education. However, read the following very carefully, then the...
Which college is better academic wise... Ithaca College or Quinnipiac University?
Both schools have the preferred regional accreditation. Therefore, the coursework and degree you complete through either institution will be recognized by all other colleges and universities as well as employers. Quinnipiac is slightly more competitive but not by much. The applicants accepted is at...
How many credits are needed for an associate accounting degree?
The associate's in accounting can run anywhere from 60 to 64 credits in specified areas. It is designed as a two-year program of study as a full-time student, provided the student takes the program as prescribed by the college.
What is the minimum educatoin required for nursing jobs?
There are three options to become a registered nurse, as follows:
diploma program (typically three years; not recommended for individuals who do not have an already existing degree)
associate degree (two years as a full-time student once the student starts the professional phase of the program)
...
If you are already in college at ITT Tech but you just recently changed your major and the school doesn't offer what you need to take, what should you do or what could you do?
I'm going to assume you mean you intend to change your career path and ITT does not offer a program leading to that career. In that case you will have to find a college or university that does, if indeed your intended career requires a degree. However, before you research schools, this time make...
What careers require a bachelor's degree in education?
Read the below and follow through with the link provided. At the site you can research almost every career from A through Z and the educational requirements particular to those careers. For the source and more detailed information concerning your request, click on the related links section (U.S......
Education, certification, and license requirements of a psychologist?
The following is written by and according to the U.S. Department of Labor and particular to the certification and licensure required for psychologists. Licensure. Psychologists in a solo or group practice, or those who offer any type of patient care, including clinical, counseling, and school...
What is the age limit to carry a full-time college student on your taxes?
If you're providing shelter and financial assistance to that student, then you can claim them until they're no longer dependent on you financially. They just can't file taxes for themselves in the same year that you're claiming them.
How many years of college does a math professor need?
I don't know; I am in high school.
Is Santa Barbara Business College regionally accredited?
No, it is not regionally accredited. However, Santa Barbara Business College is accredited by ACICS, the Accrediting Council for Independent Colleges and Schools. ACICS is listed as a nationally recognized accrediting agency by the United States Department of Education and is recognized by the...
If you have a total of 20 credit hours for the year can you be considered a full-time student?
For colleges and universities that operate on a regular semester academic year, you must take no less than 12 credits per semester to be considered full time. They go by semester, not year.
Vocational school for physical therapist assistant?
The following is written by and according to the U.S. Department of Labor and particular to the education and training required for physical therapist assistants. Most physical therapy aides are trained on the job, while almost all physical therapist assistants earn an associate degree from an...
Does an associate's from the Community College of the Air Force mean anything?
Yes it does, however, how to apply it depends on your overall career goals and objectives.
What knowledge should social workers have?
For the source and more detailed information concerning your request, click on the related links section (U.S. Department of Labor) indicated directly below this answer section.
One reason early childhood education centers have become increasingly important?
Early childhood education centers are important because more and more children are being put in daycare. More parents are working so it is important that children are educated early on.
What type of work do you have to do to be a social worker?
For the source and more detailed information concerning your request, click on the related links section (U.S. Department of Labor) indicated directly below this answer section.
What is the abbreviation for an associates degree in computer networking and technology?
Within the United States, degrees are typically not broken down into specific programs of study (majors). It would be either an associate of arts (AA), associate in science (AS), or associate in applied science (AAS), whichever one you completed.
If I got your high school diploma can I go to Brevard Community College?
If you got my high school diploma, no, the names would not match. The question is, "do you have yours?" If you do, then yes you can.
How long does it take to get Bachelor degree if you already have an Associate degree?
If you completed the associates degree in a transfer program and stay within the same major, it should take an additional two years to complete the bachelor's degree.
How much is a social science degree worth an hour?...
How much does an associate's degree as a web developer cost?
There are no standard tuition and fee rates for colleges and universities. Each institution will have rates particular to itself. There are a number of variables to consider to include whether the institution is public or private, the geographical location, the specific program of study (some of...
What are the two semester in a school year?
Fall and Spring. (: Well, for me at least...
What is the difference between credit hours earned and quality points?
You have credits enrolled in, and the credits actually earned. A student can enroll for twelve credits, then drop or fail one three credit course. Thus, the enrolled credits would be 12, but the credits earned would be nine. Each grade is assigned quality points. For example a grade of A equals 12...
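To show how those quality points roll up into a GPA, here is a worked example of my own (assuming the common scheme where an A is worth 4.0 points per credit; grading scales vary by school):

```latex
\text{quality points} = \text{grade value} \times \text{credits}
\qquad
\text{GPA} = \frac{\sum \text{quality points}}{\sum \text{credits earned}}
```

For instance, an A (4.0) in a 3-credit course earns 4.0 x 3 = 12 quality points, matching the "grade of A equals 12" figure above; adding a B (3.0) in another 3-credit course gives GPA = (12 + 9) / (3 + 3) = 21 / 6 = 3.5.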
What courses are required for associates in business?
For the transfer program, it is basically a general education cluster with foundational courses in business, which would include some of the following:
- Accounting I and II
- Intro to Business
- Business Law I
- Business electives
- Economics (micro and macro)
- Math analysis
- Business statistical analysis
- Computers
What is the acronym for a bachelor's degree?
BA or BS
If enrolled in a four year college how many credits can you take at a junior college?
You can typically take the entire associates degree which can run between 60 and 64 credits. Just make sure you are in a transfer program at the junior or community college.
I have looked into sonography schools and some of them require some sort of degree before you are allowed into the program is that how it is for all schools?
The following is written by and according to the U.S. Department of Labor and particular to the education and training required for diagnostic medical sonographers. There are several avenues for entry into the field of diagnostic medical sonography. Sonographers may train in hospitals, vocational...
How many units are required to pass a 2 year college?
A four year college requires 128 credits so I would imagine a two year college would take half that. (64)
How do you become a CSI worker?
If you mean a CSI forensic specialist like on television, then you can find a job on Monster.com. You just have to fill out some forms and have the proper education, and then they will most likely be able to help you. If they cannot, try going on Google or other search engines, and they tend to have...
What is the difference between early childhood education and elementary education?
Typically early childhood education is from pre-K through fourth grade, and elementary education is fourth through eighth grade.
Explain how workers play a dual role in economics?
Workers act as both consumers and producers.
12 credit hours equals how many semester credits?
What is the difference between the ADN program and BSN program?
The ADN is an associate degree in nursing which is designed as a two year program once the student starts the professional phase of the program. The BSN is a bachelor's degree in nursing which is a four year program of study.
If I have a 2-year BA degree, how can I go into a master's program?
I'm not sure I understand the question. A Bachelor of Arts (BA) degree is a four-year program of study as a full-time student (there are some accelerated programs). The master's follows the bachelor's degree and would take approximately two to three years to complete.
What is a example of the careers of all nurse practitioners who graduated in 1994 being tracked to determine the appropiateness of their education to their subsequent work role?
The following is written by and according to the U.S. Department of Labor and particular to nursing specialties. Some specialties in nursing are as follows:
- Ambulatory care nurses
- Critical care nurses
- Emergency, or trauma, nurses
- Transport nurses
- Holistic nurses
- Hospice and palliative...
How many credits does 191 contact hours convert to?
It depends on whether the college or university operates on a semester, tri-semester, or quarter system. For institutions that operate on a regular semester system, 191 contact hours is close to 12 credits. One credit equals 16 hours of class contact time.
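The arithmetic behind that estimate, using the 16-hours-per-credit figure from the answer:

```latex
\frac{191\ \text{contact hours}}{16\ \text{hours per credit}} \approx 11.9\ \text{credits} \approx 12\ \text{credits}
```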
What is the difference between a bachelors in business degree and an associates in business?...
How much schooling is needed for a nurse practitioner?
First, you would need to complete a bachelor of science degree in nursing (BSN). Then there would be an additional two to three years to complete the master's degree, which is the minimum educational requirement for a nurse practitioner. Some have a doctorate.
What does accreditation matter after completion of college degree?
The school's accreditation would not matter if you completed the degree before the institution lost its accreditation. It would matter to you if the school lost its accreditation before or while you were pursuing your degree. In this case the coursework and degree would not be recognized by...
What kind of criminal justice jobs are there?
There are many different kinds of criminal justice jobs in the world. These jobs include police officers, for example.
How long does it take to earn associate's degree?
Answer 1: I see that this question is part of the "UCAS applications" category, which pertains to the UK. Associates degrees are uncommon in the UK. They're a decidedly US and Canadian (mostly US) sort of thing. Most of what's covered in a US associates degree is covered, in the UK, as part of...
How many degrees did Althea Gibson Obtain?
Education: Florida Agricultural and Mechanical University, BS, 1953. For the source and more detailed information concerning this subject, click on the related links section indicated below.
How much will it cost to get an online associate's degree in health information technology?
This completely depends on the amount of tuition the school is asking for. Every school is different, with local community colleges usually being the least expensive. Be sure to check that your online school is properly accredited and that the school is not a 'for-profit' institution (a school run...
What does a four year college degree get someone?
Depends on what you get the degree in. If you get the right degree you can work in that field and earn a good living compared to a person who doesn't have a degree.
Can you transfer your credits from university to any college?
Within the United States, if you complete coursework at a regionally accredited college or university, that coursework will be recognized by all other colleges and universities. That being said, read the below as to why some credits might not transfer. . Courses were taken at a college or...
What is the minimum education required to be a psychologist?
The following is written by and according to the U.S. Department of Labor and particular to the education and training required for psychologists. A master's or doctoral degree, and a license, are required for most psychologists. Education and training. A doctoral degree usually is required...
I'm trying to write a program that contains if/else. When I compile it there are no errors, but when you run it, it will only give you the result of what is in the IF statement. Can anyone please help me figure out what I'm doing wrong? I copied and pasted what I have so far...
Code:
#include <iostream>
#include <iomanip>

using namespace std;

const double Connect_Fee = 1.99;
const double one_to_three_min = 2.00;
const double additional_min = 0.45;

int main()
{
    int numOfMinUsed;
    double amountDue;

    cout << endl;
    cout << fixed << showpoint;
    cout << setprecision(2);
    cout << "Enter the number of minutes the call lasted from "
         << "New York to New Delhi." << endl;
    cin >> numOfMinUsed;
    cout << endl;

    if (one_to_three_min <= 3)
        amountDue = Connect_Fee
                    + one_to_three_min;
    else
        amountDue = Connect_Fee
                    + one_to_three_min
                    + (numOfMinUsed - 3) *
                    + additional_min;

    cout << "Amount due: $"
         << amountDue << endl;

    system("pause");
    return 0;
}
+60
Completed
upon selecting a word, highlight all occurrences
Enmanuel Rivera 10 years ago • updated by Azhar.eds 4 years ago • 41
upon selecting a word, highlight all occurrences of the same word in the current file. Many editors / IDEs already do this. It's a feature I have grown accoustomed, and I find it very difficult to work without.
Answer
import sublime
import sublime_plugin
class WordHighlightListener(sublime_plugin.EventListener):
def on_selection_modified(self,view):
regions = []
for sel in view.sel():
if len(sel):
regions += view.find_all(view.substr(sel))
view.add_regions("WordHighlight", regions, "comment", 0)
Let me know if you've got any comments or suggestions. :)
Perhaps only highlight after 3 or more chars have been selected? And also make this customizable. I fear that selecting a single letter in a large file (>80MB) might ddos sublime :). And selecting a single letter is always happening when starting a selection via the cursor/arrow keys...
Also whitespaces should be stripped - at least I don't really see sense in selecting a massive amount of indents :).
Regardless, strip is now actually in there, and I also changed it so it only searches when you've got an entire word selected rather than when you've got a certain number of characters selected.
As it works now it feels right to me. Thanks!
Update: Found a small bug :). When having a word selected and clicking into an empty line, the regions won't be removed. I've added 'view.erase_regions("WordHighlight")' before 'view.add_regions(…)' to fix this.
Update 2: Added ' and not sel.empty()' at the end of the if clause to prevent regex errors on empty selections.

def on_selection_modified(self, view):
    regions = []
    for sel in view.sel():
        sel = view.substr(sel).strip()
        if len(sel):
            regions += view.find_all(sel, sublime.LITERAL)
    view.add_regions("WordHighlight", regions, color_scope_name, draw_outlined)
Download the zip from and unzip it into your packages directory.
I use OS X Lion.
Yes! This is the one feature I miss from vim.
I have this; it was kinda simple to implement. The only bad thing is that you need to define fake theme colors (ST doesn't allow you to just pass a color as a hex string or something).
Is there a way to select the highlighted matches?
Both find_under_expand and find_all commands select substring matches, besides the highlighted matches.
Hi,
Would it be possible to make it so that the definition of a "word" is defined according to the word_separators in the user's keybindings settings file? I'm using R, and variable names can contain "." in them, so that foo.bar is considered as one word in the R language. It would be great if the auto-highlighting respected this as well.
Thanks!
Hi,
I have installed this on both Sublime Text 2 & 3... it worked when you select the text,
but not when your cursor is on it.
I have modified my Word Highlight.sublime-settings according to the readme instructions but am having problems. Can you please help...
{
"color_scope_name": "comment",
"case_sensitive": false,
"draw_outlined": true,
"mark_occurrences_on_gutter" : false,
// valid types: dot, circle, bookmark and cross
"icon_type_on_gutter" : "dot",
"highlight_delay": 2,
"highlight_when_selection_is_empty": true,
"highlight_word_under_cursor_when_selection_is_empty": true,
"highlight_non_word_characters":false,
"show_word_highlight_status_bar_message" : true,
"file_size_limit": 4194304,
"when_file_size_limit_search_this_num_of_characters": 20000
}
I have text and lots of duplicated instances of different words across it. It would have been really nice if most of the duplicated words were highlighted.
thanks | https://sublimetext.userecho.com/en/communities/1/topics/4513-upon-selecting-a-word-highlight-all-occurrences | CC-MAIN-2021-17 | refinedweb | 554 | 59.09 |
Working on a fix:
- use a shared object for BuildData, vs one instance per build
- integrate with PruneStaleBranch to delete obsolete data.
Hi,
As a workaround, I have created a groovy script that removes the BuildsByBranch map and persists the reduced Build object to disk.
It can be found here:
This script can be used as a System Groovy script step in a job or simply run periodically via the script console.
Feedback is welcome!
Hi,
I have updated to support Matrix Project types.
Thanks
Scott
I migrated the script to
The latest update includes a fix to better support SCM polling.
Thanks Scott for making the script available.
Got hit hard by this bug... to a point where the instance was unusable.
I finally found this bug report and your script. It saved the day; thanks again.
This has quadratic memory complexity for modern workflows!
Nicolas De Loof any progress fixing this upstream? Can I help?
Thank you for the workaround Scott Hebert!
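To spell out the quadratic claim above (a back-of-the-envelope sketch of my own, not from the ticket): suppose each new build introduces one new branch name, and every build's build.xml persists the full buildsByBranchName map accumulated so far. Then build k stores k entries, and the total stored across B builds is

```latex
\sum_{k=1}^{B} k \;=\; \frac{B(B+1)}{2} \;=\; O(B^{2})
```

so both the disk footprint and the heap needed to deserialize a job's history grow quadratically in the number of builds.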
This is very bad. It renders Jenkins unusable. In our case, jenkins lazy.BuildReference took 95% of the heap, but actually 91% of the heap is the git.util.BuildData. Those builds are by default held as soft references, which sounds good. The problem is that once those deserialized build.xml files take more than the heap, it will deserialize gigabytes of data on every job view. It will not OOM, because it frees those soft references within a single request, but it can be extremely slow. It can read gigabytes of data on every request. It is not OK. It causes timeouts and gets progressively worse with every branch and/or every record in job history.
I would be surprised if some (many) people would not leave Jenkins over this issue. It is not easy for most people to say why a page load takes 20 seconds or more.
We have run the script. I was suspicious about whether keeping only getLastBuiltRevision() is the right thing to do. It is not, at least not in combination with the multijob plugin. It triggered a rebuild of all the branches again; thousands of builds. Luckily I did it on a 30-second sanity job. If I had done it on all our jobs, it would have triggered about 20k 30-minute-long builds.
The only safe option is to remove the git branches and keep build history short. The script is not good enough.
Also, as a corollary to the previous comment, when using the MultiJob plugin the problem being discussed here is further exacerbated because a copy of the Git BuildData node is being stored for each child job (recursively) of the parent multi-job. This results in very large log files being generated (i.e.: 5-10MB or more) in cases when there are many Git branches being reported and many sub-jobs managed by a single multijob. In our test cases approximately 90% of that data is consumed by the Git BuildData sections.
Also, in cases when a multijob has any more than 20 or 30 builds in the history the Jenkins master can take minutes to load the parent job on the dashboard, and in some extreme cases (i.e.: with hundreds or thousands of builds in the history) it can actually cause the entire dashboard to become unresponsive and time out.
This defect has severe consequences and, imo, must be fixed sooner rather than later ... and with an update to the Git plugin, not with a hack or workaround like a script that gets run out-of-process.
+1 To what Kevin said. We have a MultiJob that 3-levels deep with other MultiJobs and MatrixJobs, and new runs typically have 4-5 MB build.xml files with most of that being GitBuildData. We're using the StashBuildTrigger plugin so wiping the build data is ok for us, and we saw the disk usage go from 3GB to 187MB after purging the Git BuildData with a modified version of the script above.
As a workaround, we have to have the Jenkins master on a server with 32 GB of RAM and hope that everything fits into the page cache or heap.
import hudson.matrix.*
import hudson.model.*
import com.tikal.jenkins.plugins.multijob.*

hudsonInstance = hudson.model.Hudson.instance
allItems = hudsonInstance.getAllItems(AbstractProject.class);

// Iterate over all jobs and find the ones that have a hudson.plugins.git.util.BuildData
// as an action.
//
// We then clean it by removing the useless array action.buildsByBranchName

def numJobs = 0;
def runcounter = 0;
def cleanGit;
cleanGit = { build ->
    def gitActions = build.getActions(hudson.plugins.git.util.BuildData.class)
    if (gitActions != null) {
        for (action in gitActions) {
            action.buildsByBranchName = new HashMap();
            runcounter++;
        }
    }
};

for (job in allItems) {
    numJobs++;
    def counter = 0;
    for (locbuild in job.getBuilds()) {
        // It is possible for a build to have multiple BuildData actions
        // since we can use the Multiple SCM plugin.
        def gitActions = locbuild.getActions(hudson.plugins.git.util.BuildData.class)
        if (gitActions != null) {
            for (action in gitActions) {
                counter++;
            }
        }
        if (job instanceof MatrixProject) {
            runcounter = 0;
            for (run in locbuild.getRuns()) {
                cleanGit(run);
            }
            if (runcounter > 0) {
                println(" -->> cleaned: " + runcounter + " runs");
            }
        }
        if (job instanceof MultiJobProject) {
            runcounter = 0;
            cleanGit(locbuild);
            def recurseSubBuild;
            recurseSubBuild = { sb ->
                for (bld in sb) {
                    if (bld.build != null) {
                        cleanGit(bld.build);
                        if (bld.build instanceof MultiJobBuild) {
                            if (bld.build.getSubBuilds().size() != 0) {
                                recurseSubBuild(bld.build.getSubBuilds());
                            }
                        }
                    }
                }
            };
            recurseSubBuild(locbuild.getSubBuilds());
            println("***************");
            if (runcounter > 0) {
                println(" -->> cleaned: " + runcounter + " runs");
            }
        }
        if (counter > 0) {
            println("-- cleaned: " + counter + " builds");
        }
    }
}
I have successfully tested a variation of the groovy scripts mentioned previously to remove the 90% bloat in some of our build logs. One modification I did make was to only process builds other than the last build. This appears to prevent the premature build triggers mentioned earlier, since the plugin appears to cache all of the relevant build history for every branch built previously in the last build. So by excluding that one build from the script we prevent jobs from rebuilding every branch immediately afterwards. I believe this solution is preferable to preserving the last built revision as done above.
Was fixed by Nicolas De Loof in Git plugin >= 2.4.0
Code changed in jenkins
User: Jesse Glick
Path:
src/main/java/hudson/plugins/git/GitSCM.java
Log:
JENKINS-19022 Print a warning to the build log when the job seems to be in trouble due to buildsByBranchName bloat.
Code changed in jenkins
User: Mark Waite
Path:
src/main/java/hudson/plugins/git/GitSCM.java
Log:
Merge pull request #472 from jglick/buildsByBranchName-JENKINS-19022
JENKINS-19022 Print a warning to the build log when the job seems to be in trouble due to buildsByBranchName bloat
Compare:
A warning is reported to the log by git plugin 3.1.0 released 4 Mar 2017.
If I understand this issue correctly, wouldn't it be a little relief if BuildData history were kept only per job (not per build, since 99 percent of the content is the same) and its items only for existing builds? Is there any reason to have information about a branch which was built in a build which does not exist anymore?
For sanely arranged jobs, which just build a branch and do not do anything weird, BuildData is indeed useless. But it supports the weird 5% and previous attempts have indeed bombed badly, not to mention incompatibilities in the plugin API.
It is kept per build even though it is mostly redundant. Hence the issue here.
Hello,
The log rotation strategy of "Discard old builds" seems to only remove the related build number directories, while the rotation-discarded build number entries still reside in newly created build.xml files.
If newly created build.xml files could drop the rotation-discarded build number entries, it might relieve and control the main memory & disk space growth.
I assume this isn't a Git-plugin-specific issue and is related to a general SCM issue?
Yea, I did some more digging, and it's definitely git specific. I'm not 100% sure how to fix it, but I have a possible mitigation, of adding a new extension to disable tracking beyond the "current build" for those that do not wish to use a groovy script based mitigation technique described elsewhere. I'll have a pull request Soon(TM)
Actually extricating this out separately is difficult, and I'm not sure it gains us that much since it would still have to write the contents to disk each time we add a new branch, which is every build when using gerrit as the build source.
I'm getting cryptic serialization errors, apparently related to Blueocean plugin when trying to run the workaround script. It would be nice if someone knowing about serialization and/or blueocean could check
This is definitely a specific problem of the Git plugin, which in the interest of trying to be all things to all people has historically supported rather dubious use cases involving complicated refspecs. For projects which simply build a single Git branch with no tricks—including branch project beneath a multibranch folder—BuildData is completely unnecessary. But some downstream plugins rely on it to support more exotic stuff, including the pre-multibranch system of processing pull requests as builds of a single job.
Yea, it's sort of an unfortunate side effect. I wonder what effort it would take to deprecate, and then remove, the support over some duration?
Even with solutions which migrate build data out of the per-job storage, you still have a problem: you still store a huge hash for each completed branch. At least it is not stored once per build, but it is still somewhat expensive.
I do have a solution based on using an extension to simply tell the git plugin this job does not need proper build data (and it will thus only store the current build's branches, rather than all the history). That should help enterprising administrators who could enable such an extension. Unfortunately that still leaves such configuration up to "did this administrator understand and become aware of the problem". This is not really the ideal solution, since it basically requires every job creator to decide up front whether they need this build data or not. I would much rather it be opt-in, but that has its problems of backwards compatibility.
It's not a perfect solution, but I added an extension which prevents the builddata from maintaining more than that specific build's branchname mapping. See
It's implemented as an extension, so users would have to enable the extension, but once done for a given SCM instance, they never have to clean the job data using a groovy scriptlet, so it's a better solution.
I do not have a real solution for what to do or whether we should work towards deprecating and removing the branch history.
There is a bit of movement on the pull request on GitHub, which gives me hope, but for those stuck with the groovy script to do periodic cleanups, I've measured that the cost is about 65ms per build on a typical dedicated server with 2 spinning drives in a RAID1. This means that the periodic cleanups quickly get increasingly expensive once thousands of builds start to accumulate.
Benoit Sigoure have you tried the build from the pull request in your environment? It has been running well for Jacob Keller and would be good to have additional users report their results.
Yes, please. Additional testing would be a huge benefit.
You can even use the one compiled by the pull request build tester found at
Specifically
Thanks for the link, I've installed the plugin [version: 3.9.0-SNAPSHOT (private-69892a21-jenkins)] and disabled the daily job that runs the groovy script. Anything special I should look out for? The issue with the bloat is going to take time to manifest itself again as usually it requires thousands of build.xml files to accumulate.
The biggest concern (for me) is that you enable the extension and watch for regressions in other areas of the git plugin. Does change history still display as expected? Are builds triggered as expected? Are there places where you detect the absence of the build data?
Make sure that commits aren't rebuilt when they've already been covered by a build. Make sure that job start time isn't significantly worse.
Things have been working fine for me, although we recently inadvertently upgraded to 3.9.1 and lost the fix in the process. Is there a newer version of this fix, based on 3.9.1, we could try? Any idea on the approximate timeline to merge this fix?
Benoit Sigoure I've made my attempt to resolve a merge conflict. The resulting artifacts should include a git.hpi that you can use.
Looks like a mistake was made in the initial rebase, and the latest attempt didn't pass the build?
Benoit Sigoure unlikely that there was any mistake in the rebase. The build definitely ran but there appears to have been an infrastructure error on ci.jenkins.io. I've made a trivial change to one of the files in the pull request. The build has started again.
The "resulting artifacts" link referenced in my previous comment still includes a git.hpi which you could use for testing.
Sorry I was going by the comment on GitHub that the wrong side of the merge had been initially picked to resolve a conflict. I'll try the new artifact produced by that latest build, thanks!
Hmm, the new plugin won't load because:
SEVERE: Failed Loading plugin Jenkins Git plugin v4.0.0-rc1685.659d6dcce0e8 (git)
java.io.IOException: Jenkins Git plugin v4.0.0-rc1685.659d6dcce0e8 failed to load.
 - Jenkins Git client plugin v2.7.2 is older than required. To fix, install v3.0.0-beta3 or later.
    at hudson.PluginWrapper.resolvePluginDependencies(PluginWrapper.java:655)
    at hudson.PluginManager$2$1$1.run(PluginManager.java:515)
Downloaded the latest .hpi from here. I hope that's all that was needed.
It would be enough to install git client plugin 3.0.0-beta3, the latest git client plugin from the experimental update center, but you're also welcome to use one from the CI build. That is slightly newer than 3.0.0-beta3 and should work just as well as the CI build you used.
I have completed my testing of the pull request from Jacob Keller. I need to write some documentation of the change for the release notes, then it will be ready to merge.
A week or less after that merge is completed, I intend to deliver the git client plugin 3.0.0-rc and the git plugin 4.0.0-rc builds.
>> Kevin Phillips added a comment - 2016-08-03 17:07
This "fix" broke SCM polling for our use case:
- We use a Git SCM build trigger with a pattern, so all branches that match "FOOBAR-*" are built. With this change, our Jenkins now tries to rebuild ALL OF OUR BRANCHES from the last years. As you can imagine, that is quite the joke and would take weeks until all of our builds ran through. Do you have a solution for how to fix this? I'm not so keen on having to stay on the old plugin versions forever.
- Probably not related but another issue: "ERROR: [GitHub Commit Status Setter] - Cannot retrieve Git metadata for the build, setting build result to FAILURE"
Jesse Glick What? I've just cited what Kevin Phillips wrote and later expanded on issues we're seeing.
I'm looking for a strategy to migrate to the new version of the plugin that doesn't involve our CI running for weeks to catch up the mess that was created here.
Tagging onto this ticket, because something here caught my eye that Stefan Hengelein posted
> With this change now our jenkins tries to rebuild ALL OF OUR BRANCHES from the last years.
With the upgrade to (git-plugin) 4.0.0-rc, all our branches are being rebuilt (years-old branches) after a commit to the project. This is blowing up reports, status reports, Slack messages and more. As you can guess, years of builds rerunning is creating quite the panic. I'm not even sure if the issue is resolved once this backlog of builds is done, because it might take days to get there.
What can I do to help? My first instinct is to downgrade back to 3.9.2 (git-plugin) but not sure if that will be worth it.
You should downgrade to git plugin 3.9.2 then install git plugin 3.9.3 which includes a fix for an agent / tool interaction issue that was first introduced in git plugin 3.9.2.
The git plugin 4.0.0-rc is only a release candidate, not a production ready release. I made a mistake when I chose the version number. I assumed that the "-rc" suffix would deliver the plugin only from the experimental update center and not from the general update center. I was wrong. I apologize sincerely for my mistake.
The issues detected with git plugin 4.0.0-rc indicate that it was not as close to release as my testing indicated. More testing and more fixes will be needed before git plugin 4.0.0 will be released for general availability.
You can help assure that the problems you've found are fixed in git plugin 4.0.0-rc by providing a bug report which contains numbered steps that describe how someone else can see the same bug that you are seeing. The descriptions in the comments of this specific bug report seem to indicate at least two different bugs which are outside of this bug report and should be reported separately so that they can be tracked separately to resolution.
Thanks Mark Waite - we reverted to 3.9.2, then re-upgraded to 3.9.3. Unfortunately our builds were in quite a messed up shape. New commits to a branch were triggering all branches that still lived on the remote. We tried deleting the repos from the filesystem of Jenkins, hoping a new clone would resolve that - it did not. For other googlers who stumble upon this page, we had to restore a server backup and lose about 2 days of history on builds. We lost minor configurations, but it was way better than having our system bloated with thousands of builds and building nearly everything on each commit.
Mistakes happen mate, no worries and I'll try and file a bug to keep these bugs organized.
The buildData is also required for the gitlab plugin.
gitlab plugin pipeline step "gitlabCommitStatus" isn't working anymore with 4.0.0-rc
It does not find any BuildData object. Had to downgrade to 3.9.3 where all was working fine.
This issue will be fixed as soon as the following pull request has been merged & rolled-out: | https://issues.jenkins-ci.org/browse/JENKINS-19022?focusedCommentId=360021&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2019-30 | refinedweb | 3,142 | 65.32 |
We all want to look into our data, make some sense of it, and take action. This need has produced many projects and tools that make the lives of anyone who wants to look into the data just a bit easier. But, when humans have, humans want more. And in the world of BI and analytics, "more" often comes in the form of embedding, branding, customized styling, access control and so on. This ends up meaning more work for developers and more time to account for. So, naturally, there has been a need for BI tools that will let you have it all.
Let’s make a list of challenges you may face as the builder and maintainer of these dashboards:
- You want to make the dashboards available to end users or viewers from within your own application or platform
- You want to be able to manage different dashboard collections (i.e. “integrations”)
- You want to be able to grant specific user rights to a collection of dashboards and datasets
- You want to make sure users have access to data only relevant to them
Cumul.io provides a tool we call Integrations which helps solve these challenges. In this article I’ll walk you through what integrations are, and how to set one up. The cool thing is that for most of the points above, there is minimal code required and for the most part can be set within the Cumul.io UI.
Some Background — Integrations
An Integration in Cumul.io is a structure that defines a collection of dashboards intended to be used together (e.g. in the same application). It is also what we use to embed dashboards into an application. In other words, to embed dashboards into an application, we give the application access to the integration that they belong to. You can associate dashboards to an integration and administrate what type of access rights the end users of the integration will have on these dashboards and the datasets they use. A dashboard may be a part of multiple integrations, but it may have different access rights on different integrations. When it comes to embedding, there are a number of SDKs available to make life simple regardless of what your stack looks like. 😊
Once you have a Cumul.io account and if you are an “owner” of an organization in Cumul.io, you will be able to manage and maintain all of your integrations via the Integrations tab. Let’s have a look at an example Cumul.io account. Below you can see the Dashboards that one Cumul.io user might have created:
Although these are all the dashboards this user may have created, it’s likely that not all dashboards are intended for the same end-users, or application for that matter. So, the owner of this Cumul.io account would create and maintain an Integration (or more!) 💪 Let’s have a look at what that might look like for them:
So, it looks like the owner of this Cumul.io account maintains two separate applications.
Now let’s see what the process of creating an integration and embedding its dashboards into an application would look like. The good news is, as mentioned before, a lot of the steps you will have to take can be done within the Cumul.io UI.
Disclaimer: For the purposes of this article, I’ll solely focus on the Integration part. So, I’ll be skipping everything to do with dashboard creation and design and we will be starting with a pre-made set of imaginary dashboards.
What we will be doing:
Creating an Integration
For simplicity, let’s only create one integration for now. Let’s imagine we have an analytics platform that we maintain for our company. There are three dashboards that we want to provide to our end-users: the Marketing Dashboard, the Sales Dashboard and the Leads Dashboard.
Let’s say that out of all the dashboards this account has created or has access to, for this particular project they want to use only the following:
New Integration
To create the integration, we go to the Integrations tab and select New Integration. The dialogue that pops up will already give you some idea of what your next steps will be:
Selecting Dashboards
Next up, you will be able to select which of your dashboards will be included in this integration. You will also be able to give the Integration a name, which here I’ve decided will appropriately be “Very Important Integration”:
Once you confirm your selection, you will have the option of defining a slug for each dashboard (highly recommended). These can later be used while embedding the dashboards into your application. You will later see that slugs make it easy to reference dashboards in your front-end code, and make it easier to replace dashboards if needed too (as you won’t need to worry about dashboard IDs in the front-end code).
Access Rights
You will then get to set the integration's access rights for the datasets its dashboards use. Here we set this to "Can view." For more info on access rights and what they entail, check out our documentation on associating datasets to integrations:
Filters and Parameters (and Multi-Tenant Access)
Side Note: To help with multi-tenant access — which would make sense in this imaginary set up — Cumul.io makes it possible to set parameters and filters on datasets that a dashboard uses. This means that each user that logs into your analytics platform would only see the data they personally have access to in the dashboards. You can imagine that in this scenario access would be based on which department the end user works for in the company. For more on how to set up multi-tenancy with Cumul.io, check out our article, “Multi-Tenancy on Cumul.io Dashboards with Auth0”. This can be done within the dashboard design process (which we are skipping), which makes it easier to visualize what the filters are doing. But here, we will be setting these filters in the Integration creation process.
Here, we set the filters the datasets might need to have. In this scenario, as we filter based on the users' departments, we define a department parameter and filter based on that:
And voilà! Once you’re done with setting those, you have successfully created an integration. The next dialogue will give you instructions for what will be your next steps for embedding your integration:
Now you’ll be able to see this brand new Integration in your Integration tab. This is also where you will have quick access to the Integration ID, which will later be used for embedding the dashboards.
Good news! After your Integration is created, you can always edit it. You can remove or add dashboards, change the slugs of dashboards or access rights too. So you don’t have to worry about creating new integrations as your application changes and evolves. And as editing an integration is all within the UI, you won’t need to worry about having a developer set it all up again. Non-technical users can adapt these integrations on the go.
Embedding Dashboards
Let’s see where we want to get to. We want to provide the dashboards within a custom app. Simple, user logs into an app, the app has dashboards, they see the dashboards with the data they’re allowed to see. It could look like the following for example:
Someone had a very specific vision on how they wanted to provide the dashboards to the end user. They wanted a sidebar where they could flip through each of the dashboards. It could have been something completely different too. What we will focus on is how we can embed these dashboards into our application regardless of what the host application looks like.
Cumul.io comes with a set of publicly available SDKs. Here I’ll show you what you would do if you were to use the Node SDK. Check out our developer docs to see what other SDKs are available and instructions on how to use them.
Step 1: Generate SSO Tokens For Your End Users
Before you can generate SSO tokens for your end users, you will have to make sure that you create an API key and token in Cumul.io. You can do this from your Cumul.io Profile. It should be the organization owner with access to the integration that creates and uses this API key and token to make the SSO authorization request. Once you’ve done this, let’s first create a Cumul.io client which would be done in the server side of the application:
const Cumulio = require("cumulio");

const client = new Cumulio({
  api_key: '<YOUR API KEY>',
  api_token: '<YOUR API TOKEN>',
});
Now we can create the SSO token for the end user. For more information on this API call and the required fields check out our developer documentation on generating SSO tokens.
let promise = client.create('authorization', {
  integration_id: '<THE INTEGRATION ID>',
  type: 'sso',
  expiry: '24 hours',
  inactivity_interval: '10 minutes',
  username: '< A unique identifier for your end user >',
  name: '< end-user name >',
  email: '< end-user email >',
  suborganization: '< end-user suborganization >',
  role: 'viewer',
  metadata: {}
});
Here, notice how we have added the optional metadata field. This is where you can provide the parameters and values with which you want to filter the dashboards' datasets. In the example we've been going through, we've been filtering based on department, so we would add this to the metadata. Ideally you would get this information from the authentication provider you use. See a detailed explanation of how we've done this with Auth0.
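To make this concrete, here is a sketch of assembling the authorization options with the department filled in. The user object shape and the 'Marketing' value are illustrative assumptions; in practice the department would come from your authentication provider:

```javascript
// Sketch: build the SSO authorization options for one end user.
// Everything mirrors the request above; only the metadata changes per user.
function buildAuthorizationOptions(integrationId, user) {
  return {
    integration_id: integrationId,
    type: 'sso',
    expiry: '24 hours',
    inactivity_interval: '10 minutes',
    username: user.id,
    name: user.name,
    email: user.email,
    role: 'viewer',
    // Drives the "department" parameter the datasets are filtered on.
    metadata: { department: user.department },
  };
}
```

You would then pass the result to client.create('authorization', options) on the server side.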
This request will return a JSON object that contains an authorization id and token which is later used as the key/token combination to embed dashboards in the client-side.
Something else you can optionally add here which is pretty cool is a CSS property. This would allow you to define custom look and feel for each user (or user group). For the same application, this is what the Marketing Dashboard could look like for Angelina vs Brad:
Step 2: Embed
We jumped ahead a bit there. We created SSO tokens for end users but we haven’t yet actually embedded the dashboards into the application. Let’s have a look at that. First up, you should install and import the Web component.
import '@cumul.io/cumulio-dashboard';
After importing the component you can use it as if it were an HTML tag. This is where you will embed your dashboards:
<cumulio-dashboard></cumulio-dashboard>
Here you will have a few options. You can either provide the dashboard Id for any dashboard you want to be embedding, or you can provide the dashboard slug which we defined in the Integration setup (which is why I highly recommend this, it’s much more readable doing it this way). For more detailed information on how to embed dashboards you can also check out our developer documentation.
A nice way to do this step is of course just defining the skeleton of the dashboard component in your HTML file and filling in the rest of it from the client side of your application. I’ve done the following, although it’s of course not the only way:
I’ve added the dashboard component with the ID
dashboard:
<cumulio-dashboard</cumulio-dashboard>
Then, I’ve retrieved this component in the client code as follows:
const dashboardElement = document.getElementById("dashboard");
Then I request the SSO token from the server side of my application, which returns the required key and token to add to the dashboard component. Let's assume we have a wrapper function getDashboardAuthorizationToken() that does this for us and returns the response from the server-side SSO token request. Next, we simply fill in the dashboard component accordingly:
const authorizationToken = await getDashboardAuthorizationToken();
if (authorizationToken.id && authorizationToken.token) {
  dashboardElement.authToken = authorizationToken.token;
  dashboardElement.authKey = authorizationToken.id;
  dashboardElement.dashboardSlug = "marketing|sales|leads";
}
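The getDashboardAuthorizationToken() wrapper is not an SDK function; you write it yourself. A minimal sketch, assuming a hypothetical /authorization route on your own backend that performs the server-side SSO request from step 1 (the injectable fetchImpl is only there to make the helper easy to test):

```javascript
// Hypothetical helper: ask our own backend for an SSO key/token pair.
// "/authorization" is an assumed route; use whatever your server exposes.
async function getDashboardAuthorizationToken(fetchImpl = fetch) {
  const response = await fetchImpl('/authorization', { method: 'POST' });
  if (!response.ok) {
    throw new Error('authorization request failed: ' + response.status);
  }
  // Resolves to the { id, token, ... } object from the server-side SSO request.
  return response.json();
}
```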
Notice how in the previous steps I chose to define slugs for my dashboards that are a part of this integration. This means I can avoid looking up dashboard IDs and adding dashboardId as one of the parameters of the dashboardElement. Instead I can just provide one of the slugs marketing, sales or leads and I'm done! Of course you would have to set up some sort of selection process in your application to decide where and when you embed which dashboard.
That’s it folks! We’ve successfully created an Integration in Cumul.io and in a few lines of code, we’ve been able to embed its dashboards into our application 🎉 Now imagine a scenario where you have to maintain multiple applications at once, either for within the same company or separate ones. Whatever your scenario, I’m sure you can imagine how if you have a number of dashboards where each of them have to go to different places and each of them have to have different access rights depending on where they are and on and on we go.. How it can quickly get out of hand. Integrations allow you to manage this in a simple and neat way, all in one place, and as you can see, mostly from within the Cumul.io UI.
There’s a lot more you can do here which we haven’t gone through in detail. Such as adding user specific custom themes and CSS. We also didn’t go through how you would set parameters and filters in dashboards, or how you would use them from within your host application so that you have a multi-tenant setup. Below you can find some links to useful tutorials and documentation for these steps if you are interested. | https://css-tricks.com/embedded-analytics-made-simple-with-cumul-io-integrations/ | CC-MAIN-2021-49 | refinedweb | 2,330 | 60.14 |
Desktop and Application Streaming
Application Online Trial Expiration Control with Amazon AppStream 2.0
Many customers use Amazon AppStream 2.0 to provide online trials of their desktop applications. Our ISV Workshop Series shows you how to make your applications available through your website, delivered by AppStream 2.0. However, some customers need that access to be for a limited time. Based on their requirements, there will be different expiration periods for their online trial access. For instance, some customers' online trials can last days, hours, or hours spread across multiple days.
This post shows you how to modify the ISV Workshop Series SaaS Portal workshop to add a timed element to your application’s access.
Overview
This solution is based upon the Creating a SaaS portal with Amazon AppStream 2.0 workshop. That portal lets interested customers create an account and sign up for an online trial of web-based software. Expanding on that, this post lets you control the online trial expiration of this web-based software.
The solution uses Amazon CloudWatch, Amazon Cognito, AWS Lambda, and Amazon AppStream 2.0. It uses CloudWatch Events to schedule an AWS Lambda function. The Lambda function lists all Amazon Cognito users for the online trial and determines whether their trial period has expired. If it has, the Lambda function expires any active AppStream 2.0 sessions for these users and disables them in Amazon Cognito. Finally, the Lambda function emails these users to inform them that their online trial has expired.
The high-level architecture is shown in the following diagram.
The steps to control online trial expiration are as follows:
- Create an IAM policy for a custom role.
- Create an IAM service role for Lambda function execution.
- Create and configure an AWS Lambda function.
- Configure CloudWatch Events to trigger your Lambda function.
We recommend familiarity with Amazon AppStream 2.0 and other AWS services mentioned previously. The expected time to complete this solution is about two to three hours.
Prerequisites
To follow this solution walkthrough, you must have the following resources:
- An AWS account.
- An AppStream 2.0 environment: An AppStream 2.0 image, fleet, and stack are required to complete this solution. For information about how to create these resources, see the AppStream 2.0 Administration Guide.
- The completed workshop, Create a SaaS Portal with Amazon AppStream 2.0.
Step 1: Create an IAM policy for a custom role
In this step, you create a custom IAM policy to grant Lambda permissions. This allows the Lambda function to handle your users’ online trial expiration. The IAM policy grants permissions to perform the following actions:
- AppStream 2.0 describe sessions
- Amazon Cognito list users
- Amazon Cognito disable user
- Amazon SES send email
- Amazon CloudWatch logging
Complete the following steps to create the custom IAM policy.
- Open the IAM console.
- In the navigation pane, choose Policies.
- If this is your first time choosing Policies, the Welcome to Managed Policies page appears. Choose Get started.
- Choose Create policy.
- Choose the JSON tab.
- Copy and paste the following JSON policy into the policy document box. Please use your account related information to fill in the placeholders in this policy.
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "appstream:DescribeSessions", "appstream:ExpireSession", "cognito-idp:AdminDisableUser", "cognito-idp:ListUsers" ], "Resource": [ "arn:aws:cognito-idp:<region-code>:<AWS-Account-Number>:userpool/<USEPOOLID>", "arn:aws:appstream:<region-code>:<AWS-Account-Number>:fleet/<FLEETNAME>", "arn:aws:appstream:<region-code>:<AWS-Account-Number>:stack/<STACKNAME>" ] }, { "Effect": "Allow", "Action": [ "ses:SendEmail", "logs:CreateLogStream", "logs:CreateLogGroup", "logs:PutLogEvents" ], "Resource": "*" } ] }
- When you’re done, choose Review policy.
- For Name, enter a unique name.
- Choose Create policy.
Step 2: Create an IAM service role for Lambda function execution
An IAM service role is required to allow Lambda to access resources in other services on your behalf. Complete the following steps to create an IAM service role and attach the policy created in the previous step.
- Open the IAM console.
- In the navigation pane, under Roles, choose Create role.
- For Select type of trusted entity, keep AWS service selected.
- Choose Lambda, and then choose Next: Permissions.
- In the Filter policies search box, type the policy name created in previous step. When the policy appears in the list, select the check box next to the policy name.
- Choose Next: Tags. Although you can specify a tag for the policy, a tag is not required.
- Choose Next: Review.
- For Role name, enter a unique name.
- Choose Create role.
Step 3: Create and configure a Lambda function
Complete the following steps to create a Lambda function.
- Open the Lambda console.
- Do one of the following:
- If you haven’t created any Lambda functions, a Getting started page displays. Under Getting started, choose Create a function.
- If you have created a Lambda function, in the upper right corner of the Functions page, choose Create a function.
- On the Create function page, keep Author from scratch selected.
- Under Basic information, do the following:
- For Name, enter a unique name.
- For Runtime, choose Python 3.8.
- Under Permissions, expand Choose or create an execution role. Then do the following:
- For Execution role, choose Use an existing role.
- For Existing role, choose the role created in the previous step.
- Choose Create function.
- In the Function code editor, copy the following code into the editor, overwriting any existing code. Use your account information to fill in the placeholders in the code. This Lambda function does not handle pagination of the user list returned by the Cognito API calls. If you have a large number of users for your application's online trials, you can use the AWS SDK for Python to handle the pagination.
import json
import boto3
import datetime

VERIFIED_EMAIL = '<your verified email address>'
EXPIRE_IN_DAYS = 10
USER_POOL_ID = '<your cognito user pool id>'
STACK = '<STACK_NAME>'
FLEET = '<FLEET_NAME>'

client = boto3.client('cognito-idp')
as2 = boto3.client('appstream')

def user_age(user_created_date):
    tz_info = user_created_date.tzinfo
    age = datetime.datetime.now(tz_info) - user_created_date
    user_age_str = str(age)
    if 'days' not in user_age_str:
        return 0
    days = int(user_age_str.split(',')[0].split(' ')[0])
    return days

def user_end_session(user):
    response = as2.describe_sessions(StackName=STACK, FleetName=FLEET, UserId=user)
    if response['Sessions']:
        Session = response
        SessionID = Session['Sessions'][0]['Id']
        as2.expire_session(SessionId=SessionID)
    else:
        print('No session for that user')

def send_deactivate_email(email_to):
    ses_client = boto3.client('ses')
    data = ' Your online trial has expired. Please contact us for more information.'
    response = ses_client.send_email(
        Source=VERIFIED_EMAIL,
        Destination={
            'ToAddresses': [email_to]
        },
        Message={
            'Subject': {
                'Data': 'Your Online Trial has Expired!'
            },
            'Body': {
                'Text': {
                    'Data': data
                }
            }
        })

def lambda_handler(event, context):
    users = client.list_users(UserPoolId=USER_POOL_ID)
    # iterate over the returned users and extract username and email
    for user in users['Users']:
        username = user['Username']
        user_created_date = user['UserCreateDate']
        age = user_age(user_created_date)
        print('age %s' % age)
        if age >= EXPIRE_IN_DAYS:
            response = client.admin_disable_user(
                UserPoolId=USER_POOL_ID,
                Username=username)
            user_end_session(username)
            send_deactivate_email(username)
    return 'Success!'
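As noted, list_users returns at most one page of users. One possible shape for pagination support uses the paginator that the AWS SDK for Python (boto3) provides. This is an illustrative restructuring, not code from the original post: the helper names are mine, and the boto3 call is deferred so the pure helper can be exercised without AWS credentials.

```python
def expired_usernames(pages, expire_in_days, age_fn):
    """Walk ListUsers response pages and yield users whose account age
    meets or exceeds the trial limit. `age_fn` maps a UserCreateDate
    value to an age in days (e.g. the user_age() function above)."""
    for page in pages:
        for user in page.get('Users', []):
            if age_fn(user['UserCreateDate']) >= expire_in_days:
                yield user['Username']

def list_expired_users(user_pool_id, expire_in_days, age_fn):
    """Run the same check across *all* pages of the user pool."""
    import boto3  # deferred so expired_usernames() stays testable offline
    paginator = boto3.client('cognito-idp').get_paginator('list_users')
    pages = paginator.paginate(UserPoolId=user_pool_id)
    return list(expired_usernames(pages, expire_in_days, age_fn))
```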
Step 4: Configure CloudWatch Events to invoke your Lambda function
To configure CloudWatch Events to invoke your function
- Open the Lambda console Functions page.
- Choose the function created in the previous step.
- Under Designer, choose Add trigger.
- Select the trigger EventBridge (CloudWatch Events).
- For Rule, choose Create a new rule.
- For Rule type, choose Schedule expression.
- For Schedule expression, based on your expiration granularity, you can define the schedule expression, such as per hour or per day. For more information on expressions schedules, see Schedule expressions using rate or cron.
- Configure the remaining options and choose Add.
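For reference, here are a few example schedule expressions and what they mean (illustrative values; pick whatever granularity matches your trial expiration policy):

```
rate(1 hour)         run once every hour
rate(1 day)          run once a day
cron(0 8 * * ? *)    run at 08:00 UTC every day
```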
Final considerations
To expand on this solution, you can use AppStream 2.0 on-instance session scripts. These allow you to run your own custom scripts when a user's streaming session starts and ends. Using a custom script at session start, you can record when the user's streaming session begins. At the end of the user's session, you can use a custom script to calculate the total time of their streaming session. You can then store the total time in an Amazon DynamoDB table. Finally, you can create a Lambda function to periodically check the total time against the expiration configuration. If a user's total time exceeds the configured expiration, the Lambda function can disable that user.
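If you go the session-script route, the two building blocks are a duration calculation and an accumulating write to DynamoDB. The sketch below makes several assumptions for illustration: ISO-8601 timestamps recorded by the session scripts, and a table of your own with a 'username' partition key. It is not the blog's implementation.

```python
import datetime

def session_duration_seconds(start_iso, end_iso):
    """Length of one streaming session, from timestamps the on-instance
    session start/end scripts recorded."""
    start = datetime.datetime.fromisoformat(start_iso)
    end = datetime.datetime.fromisoformat(end_iso)
    return int((end - start).total_seconds())

def add_session_time(table_name, username, seconds):
    """Accumulate the user's total streaming time in DynamoDB.
    Assumes a table with a 'username' partition key."""
    import boto3  # deferred so the duration helper stays testable offline
    table = boto3.resource('dynamodb').Table(table_name)
    table.update_item(
        Key={'username': username},
        UpdateExpression='ADD total_seconds :s',
        ExpressionAttributeValues={':s': seconds},
    )
```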
Cleaning up
When you complete testing this solution, you need to keep in mind that you are charged for any resources that remain running. Follow the step-by-step instructions to clean up the related resources created by this solution.
Conclusion
This blog provides a solution to control the expiration time of your application online trial based upon Amazon Cognito, AWS Lambda function, and Amazon AppStream 2.0. For more information about Amazon AppStream 2.0, see the following:
About the Authors
Changbin Gong is a senior solutions architect at AWS. He engages with customers to create innovative solutions that address customer business problems and accelerate the adoption of AWS services. In his spare time, Changbin enjoys reading, running, and traveling.
Ali Othman is a Customer Success Engineer with the AppStream 2.0 service team.
| https://aws.amazon.com/blogs/desktop-and-application-streaming/application-online-trial-expiration-control-with-amazon-appstream-2-0/ | CC-MAIN-2021-17 | refinedweb | 1,480 | 51.04 |
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
The fcntl() function provides for control over open files.
The fildes argument is an open file descriptor.
The fcntl() function may take a third argument, arg, whose data type, value and use depend upon the value of cmd. The cmd argument specifies the operation to be performed by fcntl().
The available values for cmd are defined in the header <fcntl.h>, which include:
The following commands are available for advisory record locking. Record locking is supported for regular files, and may be supported for other files. After a successful F_GETLK or F_GETLK64 request,
indicate which system is running that process.
If a process has an existing lock in which l_len is 0 and which includes., F_SETLK64, F_SETLKW, or F_SETLKW64(2)..
The following values for cmd are used for file share reservations. A share reservation is placed on an entire file to allow cooperating processes to control access to the file.
File share reservations are an advisory form of access control among cooperating processes, on both local and remote machines. They are most often used by DOS or Windows emulators and DOS based NFS clients. However, requested:
Valid f_deny values are:
Upon successful completion, the value returned depends on cmd as follows:
Otherwise, -1 is returned and errno is set to indicate the error.
The fcntl() function will fail if:
The cmd argument is F_FREESP, the file exists, mandatory file/record locking is set, and there are outstanding record locks on the file; or the cmd argument is F_SETLK, F_SETLK64, F_SETLKW, or F_SETLKW64, mandatory file/record locking is set, and the file is currently being mapped to virtual memory using mmap(2).
The cmd argument is F_SHARE and f_access conflicts with an existing f_deny share reservation. reservation is for write access, and fildes is not a valid file descriptor open for writing.
The cmd argument is F_SHARE, the f_access share reservation is for read access, and fildes is not a valid file descriptor open for reading.
The cmd argument is F_SHARE or F_UNSHARE and arg points to an illegal address.
The cmd argument is F_UNSHARE and a reservation with this f_id for this process does not exist.
The cmd argument is F_FREESP, mandatory record locking is enabled, O_NDELAY and O_NONBLOCK are clear and a deadlock condition was detected.
See attributes(5) for descriptions of the following attributes:
lockd(1M), chmod(2), close(2), creat(2), dup(2), exec(2), fork(2), mmap(2), open(2), pipe(2), read(2), sigaction(2), write(2), dup2(3C), attributes(5), fcntl(3HEAD)
Programming Interfaces Guide
In the past, the variable errno was set to EACCES rather than EAGAIN when a section of a file is already locked by another process. Therefore, portable application programs should expect and test for either value.
Advisory locks allow cooperating processes to perform consistent operations on files, but do not guarantee exclusive access. Files can be accessed without advisory locks, but inconsistencies may result. The network share locking protocol does not support the f_deny value of F_COMPAT. For network file systems, if f_access is F_RDACC, f_deny is mapped to F_RDDNY. Otherwise, it is mapped to F_RWDNY.
To prevent possible file corruption, the system may reject mmap() requests for advisory locked files, or it may reject advisory locking requests for mapped files. Applications that require a file be both locked and mapped should lock the entire file (l_start
and l_len both set to 0). If a file is mapped, the system may reject an unlock request.
Type: Posts; User: mcmcom
make a method called something like
myDropDownList_SelectedIndexChanged(object sender, EventArgs e)
{
Response.Redirect("somePage.aspx");
}
check this link out :
just a bit down the page...
can you be a bit more specific? what do you mean by "get" ?
do you want them to have to login with their windows username or do you want to display it on a page?
mcm
what's exactly the question here?
and where exactly does this break? error messages? stack trace? anything?
mcm
you should put a breakpoint somewhere in the pre-render code and do a Watch on the frmBudgetSummary. It may have the control but you may need to drill down to it. using watch will let you know how...
using System.Web.Mail shouldn't depend on the user's SMTP settings. It depends on your server's SMTP settings. So all you have to worry about is that the server has access to an SMTP server. For...
All you really need to do is upload the ASMX and binaries to a virtual directory or a site on IIS.
this page gives you some step by step:...
here's an article that may help. It shows you how to download in the background while the app is running.
hth,
mcm
Your attempting to use the InterOp classes for integration with MS Word. However it can't find the .dll your referencing. Add the Reference to your project by going
ADD REFERENCE (in solution...
just a quick note. i would not name your Sql Command after your button. It may end up causing confusion when you look and notice you have multiple objects with the same name.
just name the...
you can't use DSNs for SqlConnections (not to my knowledge anyway) but I think the reason SQL works but not ODBC is because of your ODBC driver versions. We had the same symptoms a while back. The...
i suppose, although i dont know how to do it.
Also what kind of database are you using? if your finding troubles with ODBC and your using a SQL Database i would suggest using SqlCommand objects...
you would have to inject html into the string your putting into the cell
so if your string was "This is my great string" and you wanted great to be bold you would first have to get the string and...
open IIS And create VirtualDirectories named
application1
application2
application3
point them all to the "application" directory
hth,
mcm
swap these lines
m_oConn.Open();
m_cmdStoredProc.Connection = m_oConn;
you have to assign the connection to the command object before you open it.
also check this article out, it...
also may want to try "ExecuteUpdate()" if your stored proc is updating (i dont know if it will help though)
hth,
mcm
OK, show me the whole code block you're using to call the SP again, please?
And verify with me that you get no errors when you execute it.
mcm
How many parameters are you passing to the stored proc from C#?
Well, it's pretty much the exact same code as for a Windows form; the calls to SQL Server are irrelevant to whether it's WinForms or a web app. You still need a:
Connection Object
Command...
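Those pieces together, in a bare-bones form that is the same whether it runs in a WinForms or web app (the connection string and query are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

class QueryDemo
{
    static DataSet GetCreditors(string connString)
    {
        // Connection object
        using (var conn = new SqlConnection(connString))
        {
            // Command object (table name is illustrative)
            var cmd = new SqlCommand("SELECT * FROM Creditors", conn);

            // DataAdapter fills a DataSet and manages opening/closing for you.
            var adapter = new SqlDataAdapter(cmd);
            var ds = new DataSet();
            adapter.Fill(ds);
            return ds;
        }
    }
}
```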
You don't need to put BEGIN and all that; it knows if it's a stored procedure.
Try:
m_cmdStoredProc.CommandType = CommandType.StoredProcedure;
m_cmdStoredProc.CommandText = storProc;
hth,...
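For context, those two lines usually sit in a call shaped roughly like this (the proc name, parameter, and connection string are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

class StoredProcDemo
{
    static int RunProc(string connString, string storProc)
    {
        using (var m_oConn = new SqlConnection(connString))
        {
            var m_cmdStoredProc = new SqlCommand();
            m_cmdStoredProc.Connection = m_oConn;   // assign the connection first
            m_cmdStoredProc.CommandType = CommandType.StoredProcedure;
            m_cmdStoredProc.CommandText = storProc; // just the proc name -- no BEGIN or EXEC

            // Add parameters if the proc expects any (name/value are illustrative):
            m_cmdStoredProc.Parameters.AddWithValue("@Amount", 100);

            m_oConn.Open();
            return m_cmdStoredProc.ExecuteNonQuery(); // rows affected
        }
    }
}
```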
Stored procedures are a bit better; that way, if you have to change something, you can usually get away without having to re-compile your app.
Firstly, you need to define what you're using. Are you going...
Can you show us the entire block of code making the call to the database?
Do you have all the necessary libraries in your "using" statements at the top?
using System.Data;
using...
I think the first thing I would try is this:
//change this line
int rows = dsCreditors.Tables[0].Rows.Count;
//to this line
DropDownList ddl =...
1. To maintain code you re-use, create a separate class for the code. For example, if you have a function that calculates a total cost based on a product, price, and quantity, you could create a...
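For illustration, such a class might look like this (the class and member names are invented):

```csharp
// Hypothetical reusable class: every page or form calls this one
// copy of the logic instead of duplicating it.
public class PriceCalculator
{
    public decimal TotalCost(string product, decimal price, int quantity)
    {
        // `product` could drive discounts, tax rules, etc.; kept simple here.
        return price * quantity;
    }
}

// Usage from any caller:
//   var calc = new PriceCalculator();
//   decimal total = calc.TotalCost("Widget", 9.99m, 3);  // 29.97m
```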
change
m.To = new MailAddress(txtEmail.Text);
to
MailAddress m = new MailAddress(txtEmail.Text); | http://forums.codeguru.com/search.php?s=cb845bc330f73c60e4e047066d9dce3d&searchid=7001355 | CC-MAIN-2015-22 | refinedweb | 705 | 75.1 |
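Note that in System.Net.Mail the message's To property is a collection, so a complete send looks roughly like this (the addresses and SMTP host are placeholders):

```csharp
using System.Net.Mail;

class MailDemo
{
    static void Send(string toAddress)
    {
        var msg = new MailMessage();
        msg.From = new MailAddress("noreply@example.com");
        msg.To.Add(new MailAddress(toAddress)); // To is a collection -- Add, don't assign
        msg.Subject = "Test";
        msg.Body = "Hello";

        // Host is a placeholder; point this at your own SMTP server.
        new SmtpClient("smtp.example.com").Send(msg);
    }
}
```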
webdriver
Provides WebDriver bindings for Dart. These use the WebDriver JSON interface, and as such, require the use of the WebDriver remote server.
Installing
Depend on it
Add this to your package's pubspec.yaml file:
dependencies:
  webdriver: any
If your package is an application package you should use any as the version constraint.
Install it
If you're using the Dart Editor, choose:
Menu > Tools > Pub Install
Or if you want to install from the command line, run:
$ pub install
Import it
Now in your Dart code, you can use:
import 'package:webdriver/io.dart';

WebDriver driver = createDriver(...);
Testing
You can run the tests with bazel (only supported on Linux):
bazel test ... | https://pub.dartlang.org/documentation/webdriver/latest/ | CC-MAIN-2018-30 | refinedweb | 114 | 64.3 |
#include <EC_Type_Filter.h>
Inheritance diagram for TAO_EC_Type_Filter:
This filter only accepts events with a predefined type/source; both the source and the type can be wildcards.
Constructor.
[private]
[virtual]
This is used for computing the scheduling dependencies:
Leaf filters check if the header could be matched, similar to the can_match() method; if it does they return 1, and 0 otherwise. Intermediate nodes always return 0.
This is used to build precise dependencies between the suppliers and the leaf filters that accept that event. Notice that only the nodes doing scheduling recurse through the list, so in configurations that do not require scheduling the recursion stops fairly soon.
Implements TAO_EC_Filter.
Returns 0 if an event with that header could never be accepted. This can be used by the suppliers to filter out consumers that couldn't possibly be interested in their events. The rt_info and
Clear any saved state, must reset and assume no events have been received.
Filter this event, returns 1 if the event is accepted, 0 otherwise. Notice that there are two versions of the method, if the event is not const then filter can take ownership of the event.
Returns the maximum size of the events pushed by this filter.
This is called by the children when they accept an event and wish to pass it up. Notice that there are two versions of the method; if the event is not const, then the filter can take ownership of the event.
Encapsulate the type/source that we must match. | http://www.theaceorb.com/1.4a/doxygen/tao/rtevent/classTAO__EC__Type__Filter.html | CC-MAIN-2017-51 | refinedweb | 255 | 62.98 |