With the release of Oracle Enterprise Repository 12c another product was released: Oracle API Catalog 12c (OAC). OAC allows you to build a catalog of your organization's APIs. It provides a layer of visibility on those APIs, so application developers know what is available and which API to use. OAC includes a simple metamodel for an API asset, automation to populate the catalog, and the ability for users to search OAC for APIs and understand their details to assess their fit in the user's application.

Installation

I'm not going to bore you with the details of the installation by writing yet another installation guide. It took me about 40 minutes from scratch (excluding download time). The steps are described in the installation guide Oracle provides. OAC is part of the OER 12c installation jar, but it can be licensed and installed, as its own managed domain, without licensing and installing OER. The default username and password is admin / weblogic1; the first time you log in you are asked to change the password.

As admin, navigate to "Admin", where you can configure Users and Departments, control Sessions, change System settings and Import/Export the catalog. This blog will go into the Admin features later on. For this blog I changed the HarvesterSettings.xml, which is described here, and added OAC and SOA server information, including the projects to harvest.

To see, edit and publish the API asset details, click on the specific row. The details page opens, showing information about the asset such as the Type of Asset, Endpoint, Harvester properties and a WSDL summary including namespace, portType and methods. Besides the details you can perform some actions (from left to right).

Publish a draft API

To publish an API you just need to change the API status to published and save the asset. On the overview page the API status is changed, and the API details can be exported to Excel and PDF. When going back to the dashboard the recently published API is visible. When you click on the name of the API asset you are redirected to the details page.

The department section gives you the opportunity to add the user to a specific department. A department is nothing more than a filter of which users belong to it; it is not necessary to add the new user to a department. Also create a user, developer1, with the Developer role.

Switching user

To switch to a different user you can sign out of the OAC console by selecting the "Sign Out" option under your user menu; just click on the arrow on the right side of your name. Logging in as Curator gives the same functions as an Administrator, but without the Admin tasks. Logging in as Developer gives even fewer options: a developer can only search published APIs and add APIs to their favorites.

When you click on the "View Usage History" link a pop-up opens which shows a graph of usage over past months and which users use the API.

For questions you can email me at robert.van.molken@amis.nl.
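As a point of reference, populating the catalog from the command line boils down to running the harvester against a WSDL file or service URL. A sketch, based on the harvest.bat usage quoted in the comments below; the install path comes from one of those comments, and the host, port and project names are illustrative:

    cd C:\OAC_Oracle\oer\tools\harvester
    harvest.bat -file C:\wsdl\MyService.wsdl
    harvest.bat -file http://soahost:8001/soa-infra/services/default/MyProject/myservice_client_ep?WSDL

After a successful run the asset shows up on the OAC overview page in Draft status, ready to be published as described above.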
11 thoughts on "In-depth look into Oracle API Catalog (OAC) 12c"

I installed these patches: p18791727_121300_Generic.zip, p18718889_121300_Generic.zip and p19721053_121300_Generic.zip (the JDeveloper 12c plugin to see OER), and I validated with: opatch.bat lsinventory. But from JDeveloper 12c I cannot see the OER Inventory Service. Any idea what the problem is?

No, sorry, no idea. Which version of JDev 12c are you using?

Good day. I tried almost everything mentioned, but what is not mentioned is that after the installation one problem can show up: when you add a service (URL) from the console via 'harvest.bat -file SERVICE_URL', you can get the error "java.lang.Exception: password must be encrypted". In my case, to complete what is mentioned, I installed ofm_oer_oac_generic, the patches, RCU, the domain, etc., and can successfully start the web console, but when I add the URL this encryption error (shown in the image) always comes out. What could it be due to?

Hi, a question: I want to install OER but it requires patches 18718889 and 18791727. Are these patches free or do they have a cost? I wonder if there is any external repository so I don't have to go through Oracle Support. Thank you.

Patches can only be downloaded from the Oracle Support site.

What about the following error when selecting an asset in OAC after running the harvester? "An error occurred while loading the specified Asset. Please contact the registrar."

What kind of resource are you harvesting? OSB, SOA or a WSDL/WADL (http or file)?

Hi Robert, I'm trying to populate OAC with API assets from a file: harvest.bat -file "C:\1\AsyncResponseService.wsdl", but the following occurs (log):

0 [main] WARN com.oracle.oer.sync.framework.impl.DefaultPluginManager – unable to initialize harvester plugin file: c:\oac_oracle\oer\tools\harvester\plugins_oac\mds.starter
2195 [main] WARN com.oracle.oer.sync.framework.impl.DefaultPluginManager – unable to initialize harvester plugin file: c:\oac_oracle\oer\tools\harvester\plugins_oac\soasuite11g.remotereader
2243 [main] INFO com.oracle.oer.sync.framework.MetadataManager – oracle enterprise_repository_harvester version: v12.1.3.0.0-141027_0001-1634845
2412 [main] WARN com.oracle.oer.sync.framework.impl.DefaultPluginManager – unable to initialize harvester plugin file: c:\oac_oracle\oac\tools\harvester\plugins_oac\mds.starter
2609 [main] WARN com.oracle.oer.sync.framework.impl.DefaultPluginManager – unable to initialize harvester plugin file: c:\oac_oracle\oac\tools\harvester\plugins_oac\soasuite11g.remotereader
9064 [main] INFO com.oracle.oer.sync.framework.MetadataManager – successfully completed the harvest
9065 [main] INFO com.oracle.oer.sync.plugin.writer.oer.OERWriter – starting oac shutdown and clean up…

A new API is listed at the console (API/AsyncResponseService, SOAP, 1.0, Draft), but the detail page shows an error: "API: {}API/AsyncResponseService (1.0) — An error occurred while loading the specified Asset. Please contact the registrar." OAC is installed in C:\OAC_Oracle\oer\tools\harvester. Can you help me? Thanks in advance.

Hi Robert, I am new to Oracle Fusion Middleware 12c and am trying to download and configure FMW, WLS and BI Publisher. The problem I'm facing is that whenever I download fmw_12.1.3.0.0_infrastructure_Disk1_1of1.zip from OTN (as a free user there) I can't unzip it; WinRAR breaks the operation with the message "Checksum error in fmw_12.1.3.0.0_infrastructure_Disk1_1of1.jar, file is corrupt". I don't know what to do; I download it again and again from OTN but get the same error. I am using Windows 7 64-bit, JDK 1.8.0_25, and have already installed Oracle Database 11g. Hope you can help 🙂

I think you can check if the file size of the download is the same as Oracle describes on the download page. But first try unzipping it with another tool like 7zip; if that does not work, email me back and I will download it and put the jar on my Dropbox.
Another thing to keep in mind is that FMW 12c does not work on JDK 1.8; it only works on 1.7. In 1.8 the security model changed and FMW 12c won't start up.
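Following up on the size check suggested above, a quick way to compare the downloaded file against the size (and, if published, the checksum) on the download page — a minimal Python sketch; the file name is the one from the thread:

    import hashlib
    import os

    path = "fmw_12.1.3.0.0_infrastructure_Disk1_1of1.zip"
    print("size:", os.path.getsize(path), "bytes")

    # stream the file in 1 MB chunks so large installers don't exhaust memory
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    print("sha256:", sha.hexdigest())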
https://technology.amis.nl/2014/11/14/in-depth-look-oracle-api-catalog-oac-12c/
Archived GeneralDiscussion.

changing header --Simon Michael, 2003/03/01 15:04 GMT
> I'd like to customize misc_/ZWiki/ZWikiPage_icon.gif
The FAQ Dean pointed out looks like it will help (yay, go FAQ!).
> I tried creating a folder named misc_ to put ZWiki/ZWikiPage_icon.gif into .. that failed due to duplicate ID
/misc_ is a special Zope internal folder for product icons and such.
> I then tried installing standard_wiki_header, it made the 'manage this page' rename/reparent stuff disappear.
The reason: using standard_wiki_header forces it to also use standard_wiki_footer (the built-in version in this case). But standard_wiki_footer is not as up to date as the wikipage template used by zwiki.org. You may want to use that instead (see Customization for nearly absolute beginners).

comment divisions --Simon Michael, 2003/03/01 15:07 GMT
Yes.. but it's harder to pick out the subject if the whole line is bold, and I want minimal decoration. It could be made customizable. It does put an icon when in a CMF site. I'm going to make it always put --.

fascinating leaked mail from Davos --Simon Michael, 2003/03/01 19:27 GMT
and discussion at lawmeme (courtesy of slashdot)

is there an easy way to add DTML to a page to list all its children? --Mike Beaton, 2003/03/02 10:57 GMT

newbie question --Mike Beaton, 2003/03/02
Do I have to submit comments via email in order to automatically get my comment title formatted in bold and my name shown? (I'm just using the comment form to submit this...)

newbie question --SimonMichael, 2003/03/02 15:41 GMT
No, web and email comments have the same effect. There used to be a "with heading" checkbox, but we got rid of it. [Customization]?

how do I add an editable property to a ZWiki? --2003/03/02 15:42 GMT
I would like to add a property to the ZWiki Page class, and make it easily editable from the management interface (and possibly from the Zwiki page itself). How do I go about it? I have no problem with Python, but the ZWiki code is definitely nontrivial, so I would like some suggestions. Thanks.

Zwiki version 0.16.0 released --Simon Michael, 2003/03/02 15:54 GMT
The "Ack! I forgot" release. Cheers, -Simon. Summary: CMF skin updates, various mail tweaks to support mailing list integration, enhancements to comment behaviour, WWML, misc. bugfixes.

newbie question --MichaelBeaton?, 2003/03/02 17:46 GMT
Ah yes, it would help if I had noticed the subject: line under the comment form ;-) Sorry! Thanks for the feedback.

fit helper, status? --DeanGoodmanson, 2003/03/03 02:41 GMT
Simon & Python Fit folks: would this HTML table helper generator be handy for the Py FIT effort? Got any specific plans for this one? Thoughts on FitNesse? I found their calling a SubPage? a SubWiki confusing. Related ORA blog entry.

free word link wiki idea --FlorianKonnertz, 2003/03/03 15:59 GMT
Hi Simon, hi folks! I want to have a hypertext wiki-like tool where all words are checked for linking, without special patterns, just some escape characters. I guess for such a wiki a LinkDatabase is needed. What do you think: is it better to modify the current ZwikiCode or write it new?

FullLinkedWiki --FlorianKonnertz, 2003/03/04 07:09 GMT
I'm starting to make a new PageType? fulllinked (maybe you know a better name for this) and making some notes on FullLinkedWiki.

fit helper, status? --Simon Michael, 2003/03/04 16:35 GMT
Thanks for the links Dean. FitNesse? looks nifty. That makes three public wiki engines with support for ZwikiAndFit.
I think their use of "subwiki" is actually the same as ours. It is an actual namespace of wiki pages, not just a single page. Differences: we put the new pages in a subfolder, they just prefix Parentpage. to the page names. Also we make links to the parent wiki automatically, they don't (you have to write .Pagename). It's interesting, and I like how a subwiki can sprout from any page (no need to have special access to create folders). It has some aspects of our subwikis and our page hierarchy. On the other hand it seems a bit confusing right now.

is there an easy way to add DTML to a page to list all its children? --Simon Michael, 2003/03/04 16:44 GMT
Yes, maybe something like these:
<dtml-var offspring>
<dtml-var offspringIdsAsList>
<dtml-var "_.wikilink(offspringIdsAsList())">

how do I add an editable property to a ZWiki? --Simon Michael, 2003/03/04 16:49 GMT
I'd start with one of the existing properties and just copy what it does. E.g. look at how last_edit_time is set up at the top of ZWikiPage. Adding your property as a class attribute and an entry in _properties may be enough (a minimal sketch follows at the end of this section). Later, you might separate your stuff into a mixin class for maintainability. Also upgrade() can add missing properties and update old ones.

how do I add an editable property to a ZWiki? --Simon Michael, 2003/03/04 17:02 GMT
Simon Michael <simon@joyful.com> writes:
> Adding your property as a class attribute and an entry in _properties may be enough.
PS.

subpage zwiki design --DeanGoodmanson, 2003/03/04 17:13 GMT
Thanks for your thoughts. I see how they're similar, and like both. What are your opinions on adding auto-subpage notation (sprout) functionality to FreeformLinks? Such as [.ThisIsaSubPageofCurrentPage] would render as [GeneralDiscussion.ThisIsaSubPageofCurrentPage]. I chose "." as / or \ gets mangled in the URL. Not sure of the Zope scripting/namespacing implications; I'll refrain from listing other implications. This functionality would help a lot for a decent templating/outlining/fill-in-the-blanks multi-page starter which I and a friend have been contemplating.

It looked like anonymous erased the main page of zwiki.org. I went to the History, checked the previous revision and pressed Copy to Present; it asked me for a password, but the page came back. So was the page really gone and I fixed it, before the password prompt came up?

Yes, I think that's right. There is a known Zope issue which requests authentication in history when it shouldn't. Thanks for the repair.

CVSBackend? --PieterB, 2003/03/05 08:22 GMT
Does anybody know a wiki which has a CVS backend (using CVS as a storage backend for revisions and content)? I found this page on Twiki, describing that there ain't a Twiki CVS backend. And PyleWiki has an option for a CVS backend (according to). It would be nice to be able to use another backend than ZODB for Zwiki. Any thoughts?

CVSBackend? --DeanGoodmanson, 2003/03/05 16:53 GMT
What benefits are you hoping to get from a CVS backend? If more explicit backup, is an add-on mechanism for archiving every page OK? Perhaps by "sending" every page to said repository upon change. Properties & Restore are still issues. Is ZODB the problem or the file format?

ZMailboy?, EditMembers? updated --SimonMichael, 2003/03/05 18:33 GMT
I had an inquiry so have posted the latest code on these pages. They should now have all the pieces you need to do the same (simple mailman/CMF integration & member management).
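A minimal sketch of the property recipe above, following Zope's PropertyManager conventions rather than Zwiki's actual code; the property name is invented for illustration:

    # hypothetical mixin; 'review_status' is an example property, not Zwiki's
    class PropertyExampleMixin:
        review_status = 'draft'   # the class attribute supplies the default value
        _properties = (
            # mode 'w' makes the property editable on the ZMI Properties tab
            {'id': 'review_status', 'type': 'string', 'mode': 'w'},
        )

    # in ZWikiPage.py you would instead extend the existing tuple, roughly:
    #   _properties = ZWikiPage._properties + (
    #       {'id': 'review_status', 'type': 'string', 'mode': 'w'},)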
subscription question --DeanGoodmanson, 2003/03/05 21:28 GMT
Anybody have a recipe for a "1-click subscribe" button? I'm currently seeing a need for "subscribe to all tracker pages" (but not every page) functionality. I think a more convenient subscription mechanism might make this unnecessary, and/or pave the way for it technically.

RFC: four simplifications --SimonMichael, 2003/03/06 07:30 GMT
Here are four related ideas which I think have promise:

- invariant ids - currently there is no invariant relationship between a page's name and its id. The id is usually assigned based on the name (canonicalIdFrom), but title and id can also be completely unrelated and links still work (eg issue pages). This is the reason linking large numbers of freeform links is expensive: you have to check the titles of all pages (directly or via catalog). Is this a useful or a superfluous feature? Doesn't it give rise to confusion? We could simplify and say a page's id is always the canonicalIdFrom the title (page name); then checking a freeform link is easy. (A toy sketch of such a mapping appears below.)

- new issue naming scheme - this means issues will no longer be able to have a long name and a numeric id. What if we embed the number at the beginning of the name instead, e.g. "0444 having trouble with...", which will have id "0444HavingTroubleWith...". Relying on standard_error_message we can still reference issues with a standard short url (zwiki.org/0444), they will sort correctly, etc.

- drop punctuation from ids - although we can abbreviate them, issue urls will be longer and more ugly than they were. To alleviate this somewhat, let's drop punctuation characters from ids instead of quoting them. This means punctuation in freeform links will be insignificant, like spaces. This sounds fine; it was really accented characters I wanted to preserve, not general punctuation.

- merge issue page type - instead of having a separate page type for issues, we could select "issue" behaviour based on the page name. If a page's name begins with a number, it's an "issue" and should have the issue form displayed and be listed in the IssueTracker. I figure issues are a common and useful enough concept that it wouldn't hurt to merge this into the default page type, and using one kind of page everywhere reduces confusion. It would be easier to change a page to an issue and vice versa; whether that would be useful I'm not sure.

RFC: four simplifications --DeanGoodmanson, 2003/03/06 08:08 GMT
#1 Answer: This problem does give a LOT of rise to confusion. Freeform links do not backlink well (for me). Tweaked titles don't show up on searches (tracker downer). The freeform speed issue gets a lot of groans in my neck of the woods... and our shop prefers freeform.

#3, Drop punctuation. Does this mean no more (or much fewer) messy URLs, and less garbled reparent-text-box confusion?

#2, #4, Merge issue page type - Sounds fine, but don't drop the "Issue" prefix:
- "Issue0004Browser X Breaks Y" is more explicit than "0004Browser X Breaks Y" - it will make migrating to the new system easier.
- Open to new wonderful ZwikiRecordbook stuff like "Feature0005MakeZwikiDoBackflips", "Task0001BuildCrouchAndSpring", "Fit0001CrouchTests", "Blog142003JohnDoe" .. basically a "type#id#Description" pattern.
- Dropping the Issue page type also clears up the problem with pages created from an Issue page. Issue???? ;-)

You caught me amidst number crunching, and I needed the mental break. Hope it's coherent. Tell me where it isn't and I'd much rather re-write than have it dismissed. Leaving out other questions and opportunistic possibilities. Summary: it all sounds OK, I'm not sure of the complete ramifications of the loss of punctuation, and I would not like the type prefix dropped from record-type page names.
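To make the invariant-ids and drop-punctuation ideas concrete, a toy Python sketch of such a canonical-id mapping — an illustration of the idea, not Zwiki's actual canonicalIdFrom:

    import re

    def canonical_id(title):
        # apostrophes vanish entirely; any other punctuation acts as a
        # word separator; each remaining word gets its first letter upcased
        title = title.replace("'", "")
        words = re.findall(r"[A-Za-z0-9]+", title)
        return "".join(w[:1].upper() + w[1:] for w in words)

    # reproduces the renaming example Simon gives below:
    print(canonical_id("IssueNo0010 sometimes pages don't appear in contents "
                       "(page deletion leaves orphans)"))
    # -> IssueNo0010SometimesPagesDontAppearInContentsPageDeletionLeavesOrphans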
RFC: four simplifications --SimonMichael, 2003/03/06 11:43 GMT
Thanks for the feedback Dean. Don't worry, everything that goes by here is food for thought even when I don't respond directly.

RFC: four simplifications --SimonMichael, 2003/03/06 20:19 GMT
I've installed the least troublesome of these changes, #3. I believe this has no impact on existing pages, only new free-form pages. Compare:
old: <dtml-var "oldCanonicalIdFrom('IssueNo0010 sometimes pages don\'t appear in contents (page deletion leaves orphans)')">
new: IssueNo0010SometimesPagesDontAppearInContentsPageDeletionLeavesOrphans
Unlike the other punctuation characters, ' is not counted as a word separator, as you can see. I haven't done anything to issue pages, only renamed issue 10 to see how these ideas would look. For now the name starts with IssueNo for compatibility. With invariant ids we could no longer use short bare wiki links for issue pages (IssueNo0010?), only #10 sometimes pages don't appear in contents (page deletion leaves orphans). Alternatives? Don't include the number in the name; don't include the description in the name; allow numbers in the middle of wiki names; ...? None of these are very appealing.

RFC: four simplifications --SimonMichael, 2003/03/06 22:27 GMT
After checking out various implications - code, performance, FrontPage, IssueTracker, KnownIssues, contents, issue properties form, incoming links, etc. - I lean, slightly, towards going ahead with renaming all issues to [IssueNoNNNN issue description] and always using freeform links for issues, to conform with the invariant ids plan. Not today though. All other ideas welcome.

Images and ZWiki --ArnoPucher, 2003/03/07 06:31 GMT
Has it been a topic how to incorporate an image into ZWiki pages by means of a supporting ZWiki mechanism (not importing it into Zope and referring to it in the page manually)? I mean incorporating images that belong TO a page (maybe a new property - pictureList or so).

Images --DeanGoodmanson, 2003/03/07 07:33 GMT
There was a conversation somewhere (I think an issue page?) regarding a more page-specific file upload & management system. Prototypes for associating a page with uploaded files (and easily managing that association) would be appreciated. ZWiki's global approach is more usable than Squishdot's node-specific one, but a happy medium would be best. Issues: id uniqueness with sharability/multiple associations, and a few of us have run into some strange catalog issues here.

RFC: four simplifications --FlorianKonnertz, 2003/03/07 09:45 GMT
Just briefly some notes about the recent suggestions (but please keep in mind I'm not completely familiar with the internals):
- new issue naming scheme - sounds good!
- keep "Issue" - what about Issue0004-Browser X Breaks Y (much better to read)?
- drop punctuation from ids - OK; it should generally be a convention to avoid punctuation in issue titles, so dropping the remaining punctuation is useful IMO.
- merge issue page type - Dean's idea sounds VERY good :)

A few additional comments and questions about the FullLinkedWiki idea:
- I haven't understood the reason why the first letter of each id is capitalized (legal Zope id?). I have to change this for my FullLinkedWiki; will this cause trouble? If yes, why?
- Having an id separated in two or three parts as Dean said would be nice; a leading number and a second part for additional metadata (pagetype and rendering) sounds good.
login blues --HerbertHrachovec?, 2003/03/08 17:19 GMT
I need to offer a login prompt to my ZWiki. In order to achieve this I proceeded according to the hint in CustomizationFAQ?, inserting the appropriate dtml-call into a login page. Now, when I try to log in, the dialog comes up all right, but I am not redirected to the FrontPage; I just get a page echoing the dtml-call. Can anybody help?

login blues resolved --HerbertHrachovec?, 2003/03/09 11:36 GMT
I found the answer to my query. CustomizationFAQ? is wrong. It advises you to insert
<dtml-call "RESPONSE.redirect(_['URL1'] + '/FrontPage')>
which should be
<dtml-call "RESPONSE.redirect(_['URL1'] + '/FrontPage')">
(the closing quote must come before the closing angle bracket)

Adding an editable property to a ZWiki --WalterAprile?, 2003/03/09
I now understand that this is done by modifying ZWikiPage.py and following the model of the class properties already present in the file. Simon told me that adding your property as a class attribute and an entry in _properties may be enough ("later, you might separate your stuff into a mixin class for maintainability; also upgrade() can add missing properties and update old ones"). The next question is: would this affect the already existing poor little ZWikiPages? I don't know what happens when Zope/Zwiki tries to load objects whose definition has changed... I am somewhat afraid of making the experiment, though.

authenticate user --WimBekker, 2003/03/10 09:02 GMT
I want reparent, rename and delete (like below) to appear only for authenticated users, so I've changed their permissions. But when does a user become authenticated? The only way I can do it is by managing the site, but then the user is also manager.

WimBekker, 2003/03/10 09:19 GMT
When the page is of type "issuedtml" I need an extra button. I've added this to wikipage.zpt with tal:condition="python:request.get(page_type,'') == 'issuedtml'", but the button is never displayed. When I add the same text to the page (issuedtml), the button is displayed. I could really use some good advice on how to solve this ;)

TextIndexNG? --DeanGoodmanson, 2003/03/10 18:38 GMT
Has anyone used TextIndexNG? for better searching of a ZWiki?

checking page type --Simon Michael, 2003/03/11 00:08 GMT
zwiki-wiki@zwiki.org (WimBekker) writes:
> When the page is of type "issuedtml" i need an extra button. I've added this to wikipage.zpt with tal:condition="python:request.get(page_type,'') == 'issuedtml'". But this button is never displayed. When I add the same text to
Chat me on irc some time (sm).

authenticate user --Simon Michael, 2003/03/11 00:17 GMT
> I want reparent, rename and delete (like below) only to appear for authenticated users. So I've changed their permissions. But when does a user become authenticated? The only way I can do it is by managing the site, but then the user is also manager.
In the default wikipage.zpt, the condition for displaying the page management form is:
request.get('zwiki_displaymode',0) == 'full' and (user.has_permission('Zwiki: Rename pages',here) or user.has_permission('Zwiki: Delete pages',here)) and (user.has_role('Authenticated') or request.get('zwiki_username',''))
You can customize wikipage and change this to whatever you want, e.g. remove the condition so the form is always displayed; login should be triggered when someone tries to rename or delete. Hope this helps.
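Back on Wim's unanswered issuedtml-button question above: page_type is an attribute of the page object rather than a request variable, so a condition along these lines seems more likely to work — an untested sketch, not a confirmed fix:

    <input type="submit" value="extra action"
           tal:condition="python: getattr(here, 'page_type', '') == 'issuedtml'" />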
page renaming in progress --SimonMichael, 2003/03/11 05:04 GMT
The grand renaming of issue pages (to give them canonical ids) is in progress. Sorry about the disruption to the site (I'm not able to unit test renames). I hoped to include this operation in the page auto-upgrade but it seems too slow, so at present one must run upgradeAll or PAGE/upgradeId, or rename pages manually.

page renaming in progress --2003/03/11 12:21 GMT
All the old IssueNo0xxx links are broken now, also the [title bla bla] links. That's very bad. Can we handle this somehow? Leave placeholder pages?

page renaming, issue links broken --DeanGoodmanson, 2003/03/11 15:29 GMT
The quick fix is to turn the bare IssueNo0010 into a freeform link: "IssueNo0010":IssueNo0010, which will link to the appropriate issue. I've done that successfully with a FAQ page. More explicit may be: ZWiki:IssueNo0010.

page renaming, issue links broken --2003/03/11 16:56 GMT
But how to fix all the broken links on the whole site? You'd need a script for that....

zopetime --FlorianKonnertz, 2003/03/11 18:09 GMT
I have so many problems with my new site / new server, it drives me mad. Have you any idea about changes in Zope 2.6.1 and Zwiki 0.16 and problems with both working together regarding ZopeTime? For example:
Traceback (innermost last):
  Module ZPublisher.Publish, line 98, in publish
  Module ZPublisher.mapply, line 88, in mapply
  Module ZPublisher.Publish, line 39, in call_object
  Module Products.ZWiki.ZWikiPage, line 1991, in comment
AttributeError: strftime
I could give you several kB of error pages, if you don't have enough yourself (which is what I hope :-)

page renaming, issue links broken --SimonMichael, 2003/03/11 18:13 GMT
Yes, I'm going to do it with a script (a sketch of the kind of rewrite involved follows at the end of this section). Other things are noted on KnownIssues. This has turned out to be quite a big task. FYI, page id == canonicalIdFrom(title) is now true for all pages.

Adding an editable property to a ZWiki --SimonMichael, 2003/03/11 19:52 GMT
Walter - I had the same questions.. the old comments in upgrade() and the "backwards compatibility" section of ZWikiPage.py might help. Basically, if you add your attribute and property declaration to the class, it's available on all zwiki pages next time you restart Zope. If you add it to a particular instance (eg in upgrade()) it will only be on that page. Zope/Python is quite flexible like this.

authenticate user --WimBekker, 2003/03/12 09:19 GMT
I've kind of got things working, but not like I want. I cannot debug (don't know how), which for me always is a great way to learn, so I have to make changes and see how they play out. I've got a huge tal:condition problem: it works in wikipage.zpt but not in the form itself? I'm working on updating my user page to explain in more detail what I'm trying to do. (Chat me on irc some time (sm). Where? How?)

authenticate user --SimonMichael, 2003/03/12 16:07 GMT
Hi Wim - irc.freenode.net, #zwiki. Sounds like time to expand your Zope debugging skills. I was chatting with Florian about this yesterday. Maybe we can capture some of the techniques and generate a nice ZopeDebugging intro.

authenticate user --SimonMichael, 2003/03/12 16:17 GMT
PS nice home page update, thanks for sharing details of what you're up to.

google indexing --DeanGoodmanson, 2003/03/12 21:59 GMT
If it matters, I think this site is being indexed by Google. The SearchPage? is also, under a few different parameters (?expr= type of example links).
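On the fix-all-broken-links script mentioned above, a pure-Python sketch of the rewrite such a script might perform on each page's text — illustrative only, not Simon's actual script:

    import re

    def fix_issue_links(text):
        # turn bare IssueNo0010-style names into the explicit freeform link
        # form Dean suggests above: "IssueNo0010":IssueNo0010
        # (naive: running it twice would wrap already-fixed links again)
        return re.sub(r'\bIssueNo(\d{4})\b', r'"IssueNo\1":IssueNo\1', text)

    print(fix_issue_links("see IssueNo0010 for details"))
    # -> see "IssueNo0010":IssueNo0010 for details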
strange problem with catalog --DeanGoodmanson, 2003/03/13 04:59 GMT
Simon: would you mind adding more details to the strange-problem-with-the-catalog bug fix? I see the change in CatalogAwareness.py, def catalog (line 74): return getattr(self, -> getattr(self.folder(). I'd like to know what that may have been causing or fixed. After the change I noticed that a search on a cataloged subwiki didn't go via brute force... which I would've checked pre-change! :-) Yet I have other RecentChanges? strangeness...

strange problem with catalog --SimonMichael, 2003/03/13 05:05 GMT
I found catalog() returning None because self.Catalog failed to acquire; self.folder().Catalog worked. I don't know why, I just changed it (a paraphrased sketch of the change follows at the end of this section). In other news.. RecentChanges? on this site has just broken.

RecentChanges? anomalies --DeanGoodmanson, 2003/03/13 05:18 GMT
My old code would fail in catalog mode when I submitted with a value for number of pages, with header (OK without) - only in catalog mode (a SubWiki) and not in brute force (root wiki). See DeanGoodmanson for the traceback. Not a big deal, I planned to upgrade to the new one. So I did. The latest RecentChanges? has the following issues:
- It fails without a catalog, and in a simulated catalog failure where the CatalogAwareness?.py fix hasn't been applied. (I couldn't test it well on ZWiki's subwiki ZWiki:subwiki/NewestRecentChanges - I got the similar "guarded item" error.)
- Where I have a catalog (sub-wiki), the "notes" field is not showing up.

RecentChange? crash --DeanGoodmanson, 2003/03/13 14:47 GMT
The only extra information I can add about RecentChanges? crashing is that I added some pages to ZWiki:/subwiki

RecentChange? crash --SimonMichael, 2003/03/13 19:15 GMT
I don't think that was it.. I'll investigate later. Thanks..

Plone problems --2003/03/13 21:49 GMT
I have set up the CMFWiki that came pre-installed with Plone. Overall it's working fine, but I have a few problems:
- Backlinks are not being updated. If I de-link a page from its parent (add ! before the WikiWord?), the backlinks still show the parent as linked.
- The index is not being updated. Search returns results for non-existent pages.
- Sometimes I see superfluous html, head, title, and body tags within the wiki area. I'm seeing this problem on other Plone sites too (e.g.)
- Some block-level HTML elements that I add, e.g. h5, get wrapped with p tags. Other block-level elements, like table, don't.

JEdit? --DeanGoodmanson, 2003/03/13 23:18 GMT
JEdit?'s Code2HTML colorized-syntax output worked great for dropping code samples into my ZWiki, for many, many languages and doc types. Not even a need to strip the HEAD or BODY tags. :-}

RecentChange? crash --SimonMichael, 2003/03/14 05:47 GMT
It was a combination of things.. I've fixed a bug introduced some time back when I moved a try statement during debugging. RC's catalog-based code fails, but it now falls back to brute force and works as it should. The catalog version is failing because of some no-longer-existing pages which have remained in the catalog (NewestRecentChanges?, NewerRecentChanges?). I don't know how that happened, unless it's related to my change to rename() yesterday..

zwiki.org mail aliases messed up --SimonMichael, 2003/03/14 06:05 GMT
I lost the zwiki.org mail aliases through some mixup with another domain. Will replace, but things may be a little screwed up for a bit.

RecentChanges? crash --DeanGoodmanson, 2003/03/14 06:16 GMT
Thanks for getting the info so quickly. Guess I owe you some testing follow-up. :-)

Latest page with the catalogawareness fix: brute force = works. :-) Subwiki (w/catalog) = Note field not displayed. last_log?? Kinda confused about that "missing" attribute with the last_log var. When I plop some text next to it, it doesn't display. It's as if the field is never missing, but the catalog view (caveat: subwiki) never displays the Note field regardless. Side note: CatalogAwareness?.py does not refresh; I had to do a Zope restart, as you warned me previously about product refresh not always being 100% reliable. If this makes sense, I'll log it as a minor issue.
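A paraphrased sketch of that CatalogAwareness.py change, reconstructed from the snippet Dean quotes above — not the exact Zwiki code:

    def catalog(self):
        # before: relied on acquisition from the page object itself
        #   return getattr(self, 'Catalog', None)
        # after: look the Catalog up on the page's folder explicitly
        return getattr(self.folder(), 'Catalog', None)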
Plone problems --Simon Michael, 2003/03/14 06:20 GMT
FYI, I prefer not to spend time on CMFWiki problems; someone else here or on the zope-cmf list may know more.

zwiki.org mail aliases messed up --SimonMichael, 2003/03/14 06:53 GMT
Mail aliases should be back to normal now. The catalog has been rebuilt (I'm assuming this request is going to complete). I had problems with it trying to re-link one of the big pages and failing due to a max recursion error. I also saw some really weird things, semi-existing pages, etc., which turned out to be due to some ZEO process having half died. Stuff breaking all over. I'd better go to bed now.

WimBekker, 2003/03/14 12:36 GMT
Though not perfect, I'm now able to do some debugging with pdb. At least I can walk the code and inspect variables etc. Now if only someone had a nice IDE instead of command-line debugging :) You guys are so busy here. This almost is a PrivateDiscussion? page!

Debuggers --DeanGoodmanson, 2003/03/14 15:16 GMT
>> This almost is a PrivateDiscussion? page!
I like to consider it an OpenDiscussion... but if the chatter gets annoying, subscribing via Digest Mode may help. :-) I think the Wing IDE and Komodo offer Zope debugging; more info at ZopeDebugging. There's also some news that debugging support may be built into Boa Constructor.

Debuggers --SimonMichael, 2003/03/15 01:24 GMT
Wim, I use emacs and EmacsWiki:PdbTrack. Traffic here has been mostly developer-oriented lately.. I don't feel as if there's enough to justify separate user and dev pages, do you?

new page type --SimonMichael, 2003/03/16 06:06 GMT
ChangeLog?: new general-purpose page type incorporating issue and fit support: stxprelinkdtmlfitissuehtml (STX + links + DTML + fit tests + issue + HTML). I think these are useful enough and will stay out of the way well enough to justify turning them on more widely. Fit tests are run if there are tables whose first cell begins with fit. or fittests.. An issue properties form is displayed if the page name begins with IssueNo?. This is the new default page type on zwiki.org.

linking speedups --SimonMichael, 2003/03/16 07:44 GMT
Basic optimization based on the new invariant ids.

Frontpage down --ArnoPucher, 2003/03/17 06:31 GMT
Just a note - the FrontPage renders with an error; unfortunately my knowledge isn't enough to repair it :( A side effect of the "new" ZWiki version?

Frontpage down, the traceback --ArnoPucher, 2003/03/17 06:34 GMT
Site Error - An error was encountered while publishing this resource:
KeyError
  Module ..., line 214, in __call__
  Module Products.ZWiki.ZWikiPage, line 226, in _render
  Module Products.ZWiki.ZWikiPage, line 395, in render_...
  Module DocumentTemplate.DT_Util, line 201, in eval
  __traceback_info__: id
  Module <string>, line 2, in f
  Module AccessControl.DTML, line 32, in guarded_getitem
  Module AccessControl.ZopeGuards, line 90, in guarded_getitem
  Module OFS.ObjectManager, line 652, in __getitem__
KeyError

Frontpage down --Simon Michael, 2003/03/17 16:36 GMT
An entry for /zwiki/recycle_bin/IssueNo0468Test was left in the catalog after that page was deleted, somehow, causing FrontPage to break. I removed the catalog entry by hand and made FrontPage more robust in that situation.

Documentation for help with Zope and Zwiki permissions as they relate to each other --2003/03/17 18:23 GMT
Hi all, I saw just enough on the GeneralDiscussion page at zwiki.org to realize that permissions in Zwiki are moderated by the default wikipage.zpt, which included, for example, the suggestion that "the condition for displaying the page management form is: request.get('zwiki_displaymode',0) == 'full' and (user.has_permission('Zwiki: Rename pages',here) or user.has_permission('Zwiki: Delete pages',here)) and (user.has_role('Authenticated') or request.get('zwiki_username',''))". At the suggestion of the host at the Zope site, zettai.net, where I am running some instances of Zwiki, I tried to get where I wanted with permissions by changing the permissions under the security tab in Zope. But I see that the Zwiki scripts and templates have a role to play as well, which may explain why I cannot get the search and RecentChanges? functions to work for anonymous users. I was trying to get the Zwiki to run like it does here at zwiki.org. Where is the best place to read up on where Zope permissions leave off and the scripting in the nature of permissions in Zwiki takes over? Thanks, JohnDeBruyn

Documentation for help with Zope and Zwiki permissions as they relate to each other --Simon Michael, 2003/03/17 19:31 GMT
Hi John, to start with, configure the "Zwiki: ..." or "Add ZWiki Pages" permissions appropriately. Other relevant permissions are "Add Documents, Images, and Files" (for file upload) and "FTP access". Zope's "Access contents information" and "View" permissions are needed for basic page viewing, of course. Zwiki's upgradeAll method requires the "Manage properties" permission on the folder. In a few places (renaming, deleting, and displaying the page management form) Zwiki also requires some user name - if not a Zope login, then a username cookie from UserOptions?. This is to give a little extra protection from random anonymous visitors. I think that's it in a nutshell. If anyone finds other details please post.

2003/03/17 21:04 GMT
Hi Simon: much appreciated. I set "Add ZWiki Pages", "Add Documents, Images and Files", "FTP access", "Access contents information" and "View" permissions to anonymous... and still neither the search function nor RecentChanges? would run in anonymous mode. Here is what came up for me:
Site Error - An error was encountered while publishing this resource:
Unauthorized
  Module ..., line 221, in __call__
  Module Products.ZWiki.ZWikiPage, line 233, in _render
  Module Products.ZWiki.ZWikiPage, line 416, in render_stxprelinkdtmlhtml
  Module OFS.DTMLDocument, line 131, in __call__
  Module DocumentTemplate.DT_String, line 474, in __call__
  Module DocumentTemplate.DT_In, line 678, in renderwob
  Module AccessControl.DTML, line 32, in guarded_getitem
  Module AccessControl.ZopeGuards, line 94, in guarded_getitem
Unauthorized

Perhaps I need to get into the templates and scripts to let anonymous users search and access RecentChanges?? Your thoughts on my predicament are much appreciated, JohnDeBruyn

permissions problem --SimonMichael, 2003/03/17 23:09 GMT
It could be that there are one or more pages in the wiki which have more restricted permissions. SearchPage? and RecentChanges? will tend to trip over those, since they look up attributes of many pages. On RecentChanges?, try removing all the code, or try limiting the number of pages shown (to exclude the problem page). There are two products that would help you find the problem page - VerboseSecurity?, and another whose name I don't remember which will report all non-standard permission settings..

status, news --SimonMichael, 2003/03/20 20:16 GMT
I had an opportunity to do some quality thinking recently, and wrote down a bunch of ideas that I liked the look of. Several of these have made it into the code recently and it's looking like an interesting release (ChangeLog?). I'm closing the gap between tracker issues and ordinary pages - on this site, they are the same except for the naming - still a few bugs to be fixed there. Also I have more WWML updates to merge from PeterMerel. Other than that we're in bugfix mode right now. More page-type thought is needed later - I feel we'll most often want to offer just two page types, "wiki" (all zwiki's bells & whistles plus the wiki's chosen formatting markup - stx/wwml/restx..) and "html" (impersonates a standard web page for non-techie or non-wiki users, no nasty surprises). Also the situation with enabling/controlling/being aware of DTML is not completely satisfactory; I'm not sure how to improve that.

If you were trying to stop the war and are feeling discouraged, I saw some good uplifting stuff at . We need to remember the big picture. I wish every one of you a peaceful week, free from danger. This month's release will be going out to the people in Iraq..

freeform links with included colon --DeanGoodmanson, 2003/03/20 20:41 GMT
WikiPedia is currently down. :-( Does the [w:WelcomeVisitors] case translate OK to [w]:WelcomeVisitors? In retrospect I can't see any major technical or noteworthy social issues if it's internal to the brackets. (Need to revisit the coming ReStructuredText bracket rules, but..) P.S. Have you implemented a max size on a freeform link? What size, and did it help performance?

dtml security --DeanGoodmanson, 2003/03/20 21:52 GMT
I would like "dtml enabled page content editing" to be a security line item tied to page modification: pages with DTML support lose the EDIT and COMMENT features unless authenticated. This would probably require a notice to be displayed when access is restricted (due to the dynamic UI). Caveat: I think the FAQ page will break, as its edits are done anonymously.

Debuggers --2003/03/21 13:14 GMT
Wow, I've installed Boa Constructor. This is exactly what I was looking for. Now I have to get Zope running. I have installed Python 2.3 on my system; my Zope installation uses Python 2.1. Also, somewhere python21.dll is called.
I cannot figure out where exactly, but I believe it has something to do with zlib; when I use zlib from version 2.3 I get the same error. Here's how far I am now: I start Boa and run z2.py. This gives the error "unable to find python21.dll". This error also happens when I run z2.py from Explorer (Python 2.3 loads and runs it). In Boa, I get a traceback: z2.py: import ZServer .... HTTPResponse.py: import zlib, struct. When I copy python21.dll onto the path, I get an import error: "module use of python21.dll conflicts with this version of Python" (same traceback). I definitely could use some help on how to get past this obstacle. Currently I use Zope 2.6.0 and Python 2.1.3 (for Zope).

Making "full" the default --DeanGoodmanson, 2003/03/21 13:29 GMT
I thought this would work for making "full" (not simple) the default zwiki_displaymode, but it doesn't. I placed the following at the top of my standard_wiki_header:
<dtml-unless zwiki_displaymode>
<dtml-call "REQUEST.set('zwiki_displaymode','full')">
</dtml-unless>
I was hoping I wouldn't have to change every instance of the check in the standard_wiki_header and footer. We have Safari users who can't save cookies, and full is better than simple for our intranet use.

Debuggers --DeanGoodmanson, 2003/03/21 13:36 GMT
You'll probably get a much quicker answer for that specific question on the Zope mailing list. Warning: it's a heavy-traffic list; I subscribe in digest mode but rarely keep up.

script disabling --DeanGoodmanson, 2003/03/21 16:23 GMT
Reminder to self: document how to remove the scripting safeguard from the zwiki code for intranet usage. IssueTracker is down with a GuardedItem? traceback. I might have spurned it by requesting a delete of a spam bug (which incidentally showed up twice in RecentChanges). On to my point: script examples in pre sections are munged - a block like
<disabled script>
<!-- Begin
function varitext(text){ text=document print(text) }
// End -->
<disabled /script>
comes out disabled. Workaround (?): insert spaces so the safeguard doesn't trigger:
< s cript>
<!-- Begin
function varitext(text){ text=document print(text) }
// End -->
</ s cript>

User Interface feedback --DeanGoodmanson, 2003/03/21 21:54 GMT
Got some fresh opinions today on a local wiki. The standard STX-or-HTML discussions. (Boy am I looking forward to ReSTX.) The big one: the "edit" link doesn't stand out enough. Has anyone else run into this? What did you do about it? I'm considering adding an Edit button before "Add a Comment", or increasing the text size of the word "edit"... other thoughts?

tracker fixes, more performance improvements --Simon Michael, 2003/03/22 00:24 GMT
Sorry for the downtime today, y'all; big old tracker and linking changes. I posted an update on ZwikiIssueTracker. Some cleanup of the page lookup methods has yielded another speedup and reduced memory usage.

freeform links with included colon --Simon Michael, 2003/03/22 00:26 GMT
zwiki-wiki@zwiki.org (DeanGoodmanson) writes:
> Does the [w:WelcomeVisitors] case translate OK to [w]:WelcomeVisitors?
I don't like the latter much - too many false positives, and it makes links less clear for new editors.
> P.S. Have you implemented a max size on a freeform link? What size, and did it help performance?
No I haven't.

dtml security --Simon Michael, 2003/03/22 00:30 GMT
zwiki-wiki@zwiki.org (DeanGoodmanson) writes:
> I would like "dtml enabled page content editing" to be a security line item tied to page modification. Pages with DTML support lose the EDIT and COMMENT features unless authenticated.
You might be right.. add a "Zwiki: edit executable pages" or "Zwiki: edit dynamic pages" permission. It will create more confusion, but probably ease concerns about DTML.
> Caveat: I think the FAQ page will break as the edits are done anonymously.
No problem for this site, I would grant that permission to anonymous.
Making "full" the default --Simon Michael, 2003/03/22 00:46 GMT
> I thought this would work for making "full" (not simple) the default zwiki_displaymode but it doesn't.
I'd have thought so too. I think I tried the same thing. Anyone? What if you remove the unless?

User Interface feedback --Simon Michael, 2003/03/22 00:52 GMT
It used to be a bold "edit this page", and bigger. I expect some people will find a button even better. Or, how about including it at the top of the page, under the search box? I did this the other day so I could remove the footer entirely (and the search box, for a really simple page). I liked it a lot.

Making "full" the default --Simon Michael, 2003/03/22 01:00 GMT
REQUEST.set may leave an existing value in REQUEST.cookies, and perhaps REQUEST.get looks there first.

Zwiki UNCLinkSupport --PieterB, 2003/03/22 14:13 GMT
OK, I would like it to be possible to use UNC-style linking in Zwiki, so \\SERVER\SHARE would be transformed to <a href="">\\SERVER\SHARE</a>. This works both in IE and Mozilla on Win32 (checked with the most recent versions). I think it should be possible to enable UNC linking per Zwiki web, or to always enable it. Would anyone mind if \\SERVER\SHARE were always auto-hyperlinked? Simon, can you tell me where/how Zwiki expands URLs? Then I'll try to get it to work for UNC shares. Pieter

Add a comment feedback --PieterB, 2003/03/22 14:19 GMT
Sorry for the duplication of the last post. "Add a comment" should give the user feedback that the button has been pressed (and is being processed). Perhaps it's good to show a page before going back to GeneralDiscussion#bottom. So:
- the user presses the "add a comment" button
- Zwiki displays a page "Thank you for adding your comment. Your comment is being processed. Please wait..."
- Zwiki redirects to GeneralDiscussion#bottom so that the user can see the comment has been processed.

Zwiki UNCLinkSupport --SimonMichael, 2003/03/22 16:09 GMT
Hi Pieter. Good idea. As usual it's a tradeoff: one person's useful feature is another's inexplicable surprising magical behaviour. It would be nice to have this available. You'll probably need to modify anywikilinkexpr (used by _preLink) and _renderLink (generates the actual links), to start with.

proposal ideas --SimonMichael, 2003/03/22 16:16 GMT
You created the UNCLinkSupport page for this proposal. To keep track of such a thing one could do any of:
- add a suitable WikiBadge
- make it a wishlist issue (by renaming)
- make it a full-blown "project" (link it at ZwikiDevelopment?#projects)
- Dean may have things to add on this: RFC, OpenQuestions?, ... ?

proposal ideas --SimonMichael, 2003/03/22 16:19 GMT
More options:
- use the original SuggestionsAndComments
- continue to hash it out on GeneralDiscussion

proposal ideas --SimonMichael, 2003/03/22 16:28 GMT
- link on/parent under an existing project page (WikiLinkingIssues fits the bill here)
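Picking up Simon's cookie theory on the zwiki_displaymode thread above: if REQUEST.get finds an old cookie value before the freshly set one, overriding the cookie entry as well might do it — an untested DTML sketch, not a confirmed fix:

    <dtml-call "REQUEST.set('zwiki_displaymode','full')">
    <dtml-call "REQUEST.cookies.update({'zwiki_displaymode': 'full'})">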
Re: UNCLinkSupport proposal ideas and SiteStats --PieterB, 2003/03/22 16:40 GMT
Simon, I don't think I will create UNCLinkSupport very soon, so I'd like to use UNCLinkSupport as a badge and use the backlinks function to find the information. I'll try to make the page more visible in various Zwiki places. Something else... I've been working with AwStats? for a different project, and am beginning to like using advanced web-statistics tools to try to improve websites. I was thinking of good metrics for a site like zwiki:
- number of pages on Zwiki
- number of different users in the current month
- number of page maintainers in the current month
- number of GeneralDiscussion posts in the current month
- total number of zwiki tracker issues (open/closed)
- number of open zwiki tracker issues
- downloads of the zwiki tar file (dev/stable version)
- ...

Re: proposal ideas (and RFC and OpenQuestions?) --PieterB, 2003/03/22 17:48 GMT
I'll put UNCLinkSupport under RFC. I hadn't looked at the RFC page yet and was amazed that I'd missed it. Is it similar to the Plone Improvement Process (PLIP) and a lightweight Python Enhancement Proposal (PEP)? For what it's worth: I'm -1 on OpenQuestions? and +1 on RFC. I think a good threading mechanism for Zwiki replaces the need for an OpenQuestions? page.

Regexps.py question --DeanGoodmanson, 2003/03/22 18:00 GMT
In the url string, url = r'["=]?((about|gopher|http|https|ftp|mailto|file):%s)' % (urlchars), what is the ["=]? for? Also, do you need to validate the protocols? Wouldn't "://" be enough?

Hmm... I think a lot of our Zwiki content is very useful for Zope and Python developers who are not familiar with Zwiki. How about adding a 'Powered by ZWiki, Zope and Python' to the main template? That would let search engines learn that Zwiki is related to Python and Zope. It would also increase the Google PageRank? of Zope and Python.

Re: proposal ideas (and RFC and OpenQuestions?) --SimonMichael, 2003/03/22 19:52 GMT
At present I'm -1 on OpenQuestions?, -0 on RFC (we also have ZwikiModifications and tracker issues and the ZwikiDevelopment project list and this discussion page).

Regexps.py question --SimonMichael, 2003/03/22 20:01 GMT
Good question.. I don't remember. The CVS history might or might not tell us. We should remove it, check the unit tests, and test on our sites for a few weeks. It's worth matching the protocols since they are a small, well-defined set and it reduces false positives. (A small sketch for experimenting with this pattern follows at the end of this section.)

Good idea; add these to FrontPage if you get the urge.

Re: UNCLinkSupport proposal ideas and SiteStats --SimonMichael, 2003/03/22 20:12 GMT
Re UNCLinkSupport - it's not a badge. You're not going to be marking other pages with this. Also, it's not necessary or even desirable for this to be visible in lots of places - just in the right place. Clarifying the wiki and the proposal process is the thing.

more site statistics --SimonMichael, 2003/03/22 20:15 GMT
The stats you describe would be terrific. Would you need the logs, or could you work out some things from the existing webalizer pages?

more site statistics --SimonMichael, 2003/03/22 20:19 GMT
Actually, several of those could be worked out with DTML. FrontPage has some examples.

Add a comment feedback --SimonMichael, 2003/03/22 20:26 GMT
You're right, duplicate comments due to slow page saving are a problem. I like the idea of a fast-displaying interim status page when saving is slow, if it can be made trouble-free and doesn't slow down normal edits. (I'd rather page saves were always fast, but I don't think we can guarantee that.) comment() would be a good place to experiment. RESPONSE.write may be useful.

linking speed! --DeanGoodmanson, 2003/03/23 02:05 GMT
I created STXBulletTest? a while ago due to frustration with freeform-linking rendering speed. I'm happy to report that the page rendered quite fast today.

stuff for thought --DeanGoodmanson, 2003/03/23 04:42 GMT
I posted a comment at the StyleSheet? page - a minor suggestion/question about how to make including a style sheet easier. I created the BookLookup page based on previous discussion of how to implement WikiWiki's URL lookup feature (see page parents).
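For anyone wanting to poke at that Regexps.py pattern, a small interactive sketch; urlchars here is a simplified stand-in, the real character class lives in Regexps.py:

    import re

    urlchars = r"[A-Za-z0-9/:;@_%~#=&\.\-\?\+\$,]+"   # stand-in, not the real definition
    url = r'["=]?((about|gopher|http|https|ftp|mailto|file):%s)' % (urlchars)

    m = re.match(url, "http://zwiki.org/FrontPage")
    print(m.group(1))   # -> http://zwiki.org/FrontPage

The leading ["=]? optionally consumes a quote or equals sign in front of the URL, which is part of what Dean is asking about.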
linking speed! --SimonMichael, 2003/03/23 04:59 GMT
I haven't checked it, but there should be no real difference between wiki-name and freeform linking speed now.

why are pages moving around? --SimonMichael, 2003/03/23 07:46 GMT
It's a stupid StyleSheet? trick :)

IssueTrackerZPT --FlorianKonnertz, 2003/03/24 21:39 GMT
I made an IssueTrackerZPT version. A few corrections are still needed; please help me with them.

jpinela, 2003/03/24 22:10 GMT

tracker confusion --FlorianKonnertz, 2003/03/24 22:33 GMT
I have a tracker in the main wiki and another one in a SubWiki. After upgrading, my issues are mixed up and the links are wrong. I tried everything, updated the catalog, cleared it and built it again, but the problem remains... Has anybody else experienced this problem?

tracker confusion --SimonMichael, 2003/03/25 01:00 GMT
How do you mean, mixed up? Also, can you give details of a link that's wrong? Do you see problems when upgrading a single wiki? I would hope that there's no interaction between your main and sub wikis.. but that may be a vain hope; I think subwikis tend to use the parent's catalog. If that's the problem, perhaps you need to install a catalog in the subwiki.

PageMaintainers considered harmful --SimonMichael, 2003/03/25 02:26 GMT
Does it really help? I have started to find it actually stifling, at least in this wiki. Perhaps it would be better for maintainers to just identify themselves and their current work on the PageMaintainers page. Less intrusive, and all visible in one place.

tracker confusion --FlorianKonnertz, 2003/03/25 07:18 GMT
I have a catalog in both the sub- and parent wiki; it was needed in the subwiki from the beginning. Now the SubWiki is OK, but the IssueTracker in the parent wiki shows all issues mixed. See NooWiki:IssueTrackerZPT - the parent wiki issues are noowiki-related and the subwiki issues are e.g. about "computing all systems", "grooveking" etc. Both catalogs were cleared and rebuilt, so I have no idea what to try next. As far as I can see it's because the parent wiki catalog also holds the pages from all subwikis, so I should restrict it to the parent wiki (maybe have another catalog instance for all wikis?), but how to do so? Is there an option to achieve this?

ZMI madness --FlorianKonnertz, 2003/03/25 07:40 GMT
I'd appreciate the following change in the Zwiki file structure in the Zope filebase: create a separate folder for all system files. Reason: when I work a lot with them in the ZMI, I have to wait too long until the ZMI has loaded and displayed all the page names (>1000). Furthermore I have images with lowercase names hanging between the important files and disturbing the clarity. This should be an easy modification, I guess. Comments, ideas?

ZMI madness --DeanGoodmanson, 2003/03/25 13:31 GMT
I've created a ZwikiAdmin? page which has /manage links to all of the non-page files I edit regularly (or repeatedly), which seems to help.

HelpPage notes --DeanGoodmanson, 2003/03/25 14:46 GMT
- The AccessKeys links were a bit off; I'm not sure if my tweak helped. (Problem: the AccessKeys page is RemoteWikiLink?'ed/referenced on previous distributions of the HelpPage.)
- The "Formatting rules in a nutshell" is very helpful but very dense; it could use line breaks, which I couldn't add by adding <br>'s...
em vs. i --DeanGoodmanson, 2003/03/25 14:55 GMT
StructuredText single-asterisk notation for "italic" renders as the em emphasis tag, not the italics tag. Would the HTML gurus out there please advise? I set our em style to red, which misses the intent more often than not, as red is more bold than bold. (I think I'll hack my locally running STX.. but I don't find that a beneficial product solution.)

ZMI madness --FlorianKonnertz, 2003/03/25 14:57 GMT
Thanks for the suggestion, I'll go for it. Even ExternalEditor links can be set :) Of course I already had the /manage links to the different Zope instances I use, but not yet to the files. Another possibility for convenience is keeping the main folder always open in one browser tab, of course. But please tell me anyhow: what are the arguments about my idea? At least one would have a clean ZMI as well.

ZMI madness --DeanGoodmanson, 2003/03/25 16:39 GMT
You could also put many of your non-zwiki pages in the root folder and leave the rest to acquisition?

2003/03/26 03:22 GMT
When a user adds a comment to a wiki page, I see that their user name gets inserted as a signature. Any thoughts on how (when using it with a Plone site) to make that username into a link to their home page?

0.17rc1 released --SimonMichael, 2003/03/26 08:01 GMT
ReleaseNotes, still in progress, but will hopefully tell you what to expect. Peace.

ZMI madness --Simon Michael, 2003/03/26 16:22 GMT
> But please tell me anyhow: what are the arguments about my idea? At least one would have a clean ZMI as well.
If you're asking me.. I have the same problem, but I like the simplicity and natural urls. Other ideas: 1. use a btreefolder? May not help if the problem is browser rendering speed. 2. use transparent folders? Let us know if you do it.

Re: X_26_2320013_3b_26_2325991_3b (deleted) --Simon Michael, 2003/03/26 16:26 GMT
zwiki-wiki@zwiki.org (DeanGoodmanson) writes:
> This page was deleted.
As a matter of interest.. what did that page look like? Was it Chinese or other international characters?

HelpPage notes --Simon Michael, 2003/03/26 16:32 GMT
> The AccessKeys links were a bit off, I'm not sure if my tweak helped. (Problem: the AccessKeys page is RemoteWikiLink?'ed/referenced on previous distributions of the HelpPage.)
Thanks, I'll check. Yes, HelpPage needs to work for new wikis as well as ours, since I try to reuse it as-is; thanks for thinking about that.
> The "Formatting rules in a nutshell" is very helpful but very dense, it could use line breaks, which I couldn't add by adding <br>'s...
Glad it helps.. improvements welcome, but this one is supposed to be dense (compact) so you can see it all at a glance. There are plenty of rambling stx tutorials out there (no really good ones yet, though).

plone user name link --Simon Michael, 2003/03/26 16:44 GMT
> When a user adds a comment to a wiki page, I see that their user name gets inserted as a signature, any thoughts on how (when using it with a plone site) to make that username into a link to their home page?
They could choose wikinames for their user names, which would link to a home page in the wiki.. which could link or redirect to their Plone member area.. You could hack comment() so that instead of just the bare username, it puts an html or stx link to /Members/username in the heading.
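A rough sketch of that comment() tweak — a hypothetical helper, not actual Zwiki code; the "text":url form is the STX link syntax used elsewhere on this page:

    def member_link(username):
        # turn a bare plone user name into an stx link to their member folder
        return '"%s":/Members/%s' % (username, username)

    print(member_link('DeanGoodmanson'))
    # -> "DeanGoodmanson":/Members/DeanGoodmanson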
Re: X_26_2320013_3b_26_2325991_3b (deleted) --DeanGoodmanson, 2003/03/26 16:52 GMT
It seemed to be primarily learning-how-to-create-a-page gibberish. What I gleaned from digest mails: 尚 馬克 羅森格 , (? ?? ???), and "erfef"

HelpPage notes --DeanGoodmanson, 2003/03/26 16:59 GMT
biggest stumbling block. "site=website, service, ??" "tool=Tool, Application, Product, Environment ???"
>> but this one is supposed to be dense (compact) so you can see it all at a glance.
The removal of the funky characters was a definite improvement. It seems like a list in paragraph form, which didn't read or scan well to me. If I were a master tweaker I'd put a half-space margin between the list items. It was a nit-pick and I'm happy to drop it. I'll post an improvement if I come up with one. I'm currently about half-way through reviewing my site's help pages, and I'll try to post a summary later at DeanGoodmanson.

Re: X_26_2320013_3b_26_2325991_3b (deleted) --Simon Michael, 2003/03/26 17:33 GMT
Ah yes, I could just look in RecycleBin?. This is actually a good international characters example, so I've salvaged it for InternationalCharactersInPageNames.

HelpPage notes --Simon Michael, 2003/03/26 17:36 GMT
Yes, though I tried to avoid these details here. What did you think of Glossary?
> The removal of the funky characters was a definite improvement.
Yes, someone added those.
> It was a nit-pick and I'm happy to drop it. I'll post an improvement if
> I come up with one. I'm currently about half-way through reviewing my site's
> help pages,
Great. I know JordanCarswell has done some excellent help pages too.

Glossary --DeanGoodmanson, 2003/03/26 18:09 GMT
>> What did you think of Glossary ?
Very useful. Good example of appropriate cross-reference linking. In WikiMania? I wondered why many of the terms weren't wiki links, then was quickly satisfied by the "See" references for more information.

cleaned up ReleaseNotes some more --SimonMichael, 2003/03/26 18:25 GMT

Glossary --SimonMichael, 2003/03/26 18:30 GMT
Great. I tried to make the explanations self-contained and do minimal linking - one link offering greater detail if appropriate. If it were possible without a bunch of ugly html, I'd move the links to a side margin running down the right side.

0.17rc2 released --SimonMichael, 2003/03/26 20:00 GMT
With finished release notes and a fix for a product-breaking fit dependency (thanks LarryP?).

ZwikiRollout? --DeanGoodmanson, 2003/03/26 20:30 GMT
Looking for contributors to add content and nodes to an outline I've placed at ZwikiRollout?. The page may not be intuitive, so please paste questions, comments, misc. at the bottom and I'll reply and integrate them accordingly.

editform what-not --DeanGoodmanson, 2003/03/28 03:29 GMT
Has anyone considered putting "taller/shorter" buttons on the editform similar to the ZMI? The other option is to make the text box height 85% (?) of the screen - which I haven't been able to successfully do. ...perhaps these cookies would also need to sit in UserOptions?. Thanks for the tweaks in the most recent version. Does that TextFormattingRules page/link ship with the app? The larger (more small tags) and "edit this page" text worked for my edit link concerns.

editform what-not --simon, 2003/03/28 15:15 GMT
Dean - TFR does not ship, I think - I tried to minimize the shipped pages to reduce maintenance. In that case the link won't appear. Good.

good article --simon, 2003/03/28 20:25 GMT
Hmm, that's good: The Warmakers Are Not Evil. They're Just Misguided - Really

turned off random margins --SimonMichael, 2003/03/29 05:19 GMT
Turned off the random page positioning in StyleSheet?.

Zwiki rocks, again... --TonyRossini, 2003/03/29 06:22 GMT
I just spent time redoing my WWW site (trying to get past writers' block on some papers I'm writing), and with all the docs that are available, set up mailin and a clean FAQ much faster than last time (a few hours, rather than a few days). This was an upgrade from 0.9 to 0.17rc1 (with the Fix.py patch). It just rocks for research support.

Zwiki rocks, again... --SimonMichael, 2003/03/29 08:45 GMT
Hi Tony.. that is great to hear, thanks for posting (inspired me to update WikiMail).

PieterB, 2003/03/29 16:16 GMT
Just created a ZopeRoadmap? page listing current Zope developments within the community. I moved the old stuff to ZopeRoadmapArchive?.

zope roadmap --SimonMichael, 2003/03/29 16:22 GMT
Nice!

A Challenge --2003/03/29 16:46 GMT
I have a challenge for y'all... I am using zope/zwiki and latexwiki to create my Notes pages. (Please do not add that link to your list of wikis, because my server cannot handle much load -- rendering latex is very CPU-intensive.) Soon (Sept 1) I will change jobs and will be spending a lot of time on the train, where I will probably want to add lots of new nodes to my Notes. (I plan to read physics papers on the train and, of course, make notes.) So the challenge is this: I want to set up some kind of disconnected operation so that I can modify the pages on my laptop and synchronize with the main server once I get to my office/home. Disconnected editing of zwiki nodes isn't enough; I want to be able to commit it and see the latex rendered on my laptop, thus I will need the entire zope/zwiki/latexwiki infrastructure running on my laptop. I will be the only person doing any editing, so conflicts shouldn't be a problem, in principle. So... any suggestions? Does anything similar exist already?

Zope-dev, new install mechanism --PieterB, 2003/03/29 19:05 GMT
Hi Fred and Chris, I've read some of the zope-dev threads on the new install mechanism (e.g. ). I wonder if it's possible to check the Python stack size during installation of Zope as well (just like the 2Gb limit is tested). This might make the default install more robust on platforms such as FreeBSD?/AIX. See for a description of the Python stack size problem. I don't know if the problem still exists on Python 2.2 and FreeBSD?/AIX. (I'm not subscribed to the zope-dev mailing list, therefore sending it directly to you, and cc'ing the Zwiki community because that FreeBSD? bug caused a lot of problems there.)

Re: Zope-dev, new install mechanism --PieterB, 2003/03/29 19:45 GMT
I'll see what I can do. That would probably be after the Zope 2.7 alpha is released. Is there a timeframe for the Zope 2.7 release?
Chris wrote:
> Hi Pieter,
>
> This patch should be merged in to Python, but until it is, if you send
> me a way to check for the stack size, I can put it in the script. I
> don't use BSD so I can't come up with one myself.

Python / freebsd patch --PieterB, 2003/03/29 20:23 GMT
Hi Bob, I saw that your Python patch for FreeBSD? was automatically applied to the FreeBSD? ports as of 6/2/2003. Do you think it's possible to write a Python/C program to check the stack size? That would make it possible to check for that during a Zope install. I'm not familiar with writing Pthreads applications. Please discuss this at Zwiki's GeneralDiscussion or on the zope-dev mailing list. Thanks,

A Challenge --SimonMichael, 2003/03/30 19:17 GMT
Hi.. it sounds like you want to run a zope+zwiki+latexwiki installation on your laptop, with the full up-to-date wiki installed locally. Edit on the train, then when you are connected again, copy your updated wiki pages to the online zope server with ZSyncer. This won't be smart about merging simultaneous changes, but if it's just you editing that should be fine. As for performance, I expect LatexWiki doesn't do pre-rendering like the newer code - this would perform much better. If you post a url for the latex wiki rendering code here, I might be able to suggest an update.

Re: A Challenge --2003/03/30 19:32 GMT
Ah, ZSyncer! That's what I need! Thanks! LatexWiki is. It does caching of rendered images, but I'm not sure about pre-rendering. BTW the email gateway is bouncing my replies...

SearchBoxExperiment? --DeanGoodmanson, 2003/03/31 04:29 GMT
Implementing a "search titles and jump if there's a match" checkbox on my header didn't work in a subwiki (cataloged), but did in the "root" wiki (not cataloged). It worked in the root and sub here at Zwiki, both cataloged. Guess I need to upgrade. :-)

A Challenge --SimonMichael, 2003/03/31 16:04 GMT
latexwiki.rootnode.com is dead. Re your bounced replies - they tell you why, but it's buried in the cruft.

SearchBoxExperiment? --SimonMichael, 2003/03/31 16:04 GMT
Guess the updated pageWith* methods have helped out there.

A Challenge --2003/03/31 16:31 GMT
I have posted to GeneralDiscussion before via email (in Jan). The bounce says I am not authorized to post, yet I am subscribed and the email is the same. We discussed the LatexWiki stuff briefly in Jan, but anyway if you want to take a look I have placed the code on my server:

RFC: release branches, cvs, renaming product directory --SimonMichael, 2003/03/31 16:42 GMT
I forgot we were in prerelease mode and checked in a bunch of new stuff after rc2. I think it's time to use branches for releases, so development can continue uninterrupted on the trunk. So I think my next steps should be:
- make a branch called release_0_17_0 or release_0_17, where the release_0_17_0rc2 tag is now (how, exactly?)
- make the latest changes for rc3
- update the release scripts to use the branch
- release rc3 & 0.17
Also, limi pointed out that CVSRepository checkout is complicated unnecessarily by having a cvs module name (zwiki) different from the checked-out directory name (ZWiki). This can create confusion for, eg, people checking out with Tortoise CVS. So the cvs module name should probably change to ZWiki (how?). Having come this far, should I take the opportunity to change the ZWiki directory to the preferred Zwiki spelling? Or accept that it will be Products/ZWiki for ever? Ie instead of Products/ZWiki we would have Products/Zwiki. See, no gratuitous wiki spelling, like Products/Localizer, Products/Formulator, etc. How difficult and/or painful would this be, both in the cvs repository and on the zope side? Consequences?

RFC: release branches, cvs, renaming product directory --DeanGoodmanson, 2003/03/31 16:50 GMT
>> Having come this far, should I take the opportunity to change ZWiki directory to the preferred Zwiki spelling
If you need to make many changes, go ahead with Zwiki. I'm not sure what problems there would be product side. I see this as an opportunity to denote a major version change, and potentially the ability to upgrade in parallel? Freezope won't upgrade from 0.9.0 due to end-user complications. Could they then install 0.17.0 as Zwiki, to be used alongside ZWiki 0.9.0 while deprecating it (without forcing an immediate update/object fixing)?
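[Editor's note: for the branching question above, a hedged sketch of the CVS commands involved -- the tag and module names come from the post, but the exact invocation is an assumption, not something stated in the thread:]

# create a branch tag where the rc2 release tag currently sits:
cvs rtag -b -r release_0_17_0rc2 release_0_17 zwiki
# later, check out that branch to make the rc3 changes on it:
cvs checkout -r release_0_17 zwiki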
http://zwiki.org/GeneralDiscussion200303
In the past, we discussed detail curve creation and modifying their colour. Several people, including Shifali and Dharanidharan, inquired about creating detail curves on a specified level. Here is a solution to that issue by Joe Ye.

Question: I am using the following code to draw some circles using arcs:

XYZ p = XYZ.Zero;
XYZ norm = XYZ.BasisZ;
double startAngle = 0;
double endAngle = 2 * Math.PI;
double radius = 1.23;
Plane plane = app.Create.NewPlane( norm, p );
Arc arc = app.Create.NewArc( plane, radius, startAngle, endAngle );
DetailArc detailArc = doc.Create.NewDetailCurve( doc.ActiveView, arc ) as DetailArc;

This code draws an arc on the active level of the active view. I would like to specify the level as a parameter and draw the arc on that level independently of the current active level. In the code above, I am setting the Z coordinate to the height of the level, hoping the curves will be drawn on the desired level. Instead, they are drawn on the active level of the active view of the document. Is there any way to draw the arcs on a specified level independent of the current active level?

Answer: You just need to set the first argument of the NewDetailCurve method to the ViewPlan of the specific level. For example, to draw an arc on Level 2, first find the ViewPlan instance associated with that level. To find the target view plan, you can iterate over all ViewPlan instances and check their GenLevel property. If it matches the desired target level, this is the one we need. Here is some sample code illustrating this:

using System;
using Autodesk.Revit;
using Autodesk.Revit.Elements;
using Autodesk.Revit.Geometry;

public class RevitCommand : IExternalCommand
{
  public IExternalCommand.Result Execute(
    ExternalCommandData commandData,
    ref string messages,
    ElementSet elements )
  {
    Application app = commandData.Application;
    Document doc = app.ActiveDocument;

    // Create an arc on the plane whose
    // center is at the plane origin:
    XYZ end0 = new XYZ( 0, 0, 1 );
    XYZ end1 = new XYZ( 1, 3, 2 );

    // note: all three branches assign the same normal
    // in this excerpt; kept as in the original post:
    XYZ norm;
    if( end0.X == end1.X )
    {
      norm = XYZ.BasisZ;
    }
    else if( end0.Y == end1.Y )
    {
      norm = XYZ.BasisZ;
    }
    else
    {
      norm = XYZ.BasisZ;
    }

    double startAngle = 0;
    double endAngle = 2 * Math.PI;
    double radius = 5;
    Plane objPlane = app.Create.NewPlane( norm, XYZ.Zero );

    // find the ViewPlan of "Level 2":
    ViewPlan vp2 = null;
    ElementIterator ei = doc.get_Elements( typeof( ViewPlan ) );
    while( ei.MoveNext() )
    {
      ViewPlan vp = ei.Current as ViewPlan;
      if( vp.GenLevel.Name.Equals( "Level 2" ) )
      {
        vp2 = vp;
        break;
      }
    }
    if( null == vp2 )
    {
      vp2 = doc.ActiveView as ViewPlan;
    }
    if( null != vp2 )
    {
      // draw the circle:
      Arc arc = app.Create.NewArc( objPlane, radius, startAngle, endAngle );
      DetailArc detailArc = doc.Create.NewDetailCurve( vp2, arc ) as DetailArc;
    }
    return ( null == vp2 )
      ? IExternalCommand.Result.Failed
      : IExternalCommand.Result.Succeeded;
  }
}

Another point I would like to mention: I noticed in your code that you create the Plane instance using one of its constructor member methods. This works fine as long as you are not working in VSTA. You can also use the dedicated Autodesk.Revit.Creation.Application method NewPlane to create the plane object. UV and XYZ instances can also be created both ways, either using their constructor member methods or dedicated creation application methods. If you are working in VSTA, you have to use the application creation methods, because the constructors will not work.

For completeness' sake, here is the complete detail_curve_level source code and Visual Studio solution. Many thanks to Joe for providing it!
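To make that last point concrete, here is a minimal contrast of the two creation styles (both calls already appear in the code above; in VSTA only the second form works):

// Constructor style -- fine in a compiled external command, but not in VSTA:
Plane p1 = new Plane( norm, XYZ.Zero );

// Creation application style -- works in both contexts:
Plane p2 = app.Create.NewPlane( norm, XYZ.Zero );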
https://thebuildingcoder.typepad.com/blog/2010/02/detail-curve-on-level.html
Kilian <kil@gnu.ch> writes:

> Goswin von Brederlow wrote:
>> Kilian <kil@gnu.ch> writes:
>>>.
>>> Thanks a lot for your work! I'd be willing to do that, unfortunately
>>> it doesn't compile on my box; I get:
>>>
>>> Making all in harness
>>> make[1]: Entering directory `/usr/local/src/mbr-1.1.5.orig/harness'
>>> gcc -DPACKAGE=\"mbr\" -DVERSION=\"1.1.5\" -I. -I. -Wall
>>> -Wstrict-prototypes -Wmissing-prototypes -g -c bios.c
>>
>> Did you apply the patch?
>
> Here is what I did:
>
> $ apt-get source mbr
> $ tar xzvf mbr_1.1.5.orig.tar.gz
> $ cd mbr-1.1.5.orig
> $ zcat ../mbr_1.1.5-2.diff.gz | patch -p1

'apt-get source mbr' should already run dpkg-source -x on the dsc file to extract. Why do you have to untar and patch it manually, and why don't you use 'dpkg-source -x *.dsc' for it?

> $ patch -p1 < ../mbr-x86_64.patch # your patch
> $ ./configure
> $ make
>
> Both patches applied smoothly, no rejects at all.
>
>> This should be "gcc -m32" there. Without it it will create 64bit code
>> which won't work.
>
> Hm. Should this flag '-m32' be in your patch? Because I don't find it
> there..

The patch includes this:

| -CC = gcc
| -CFLAGS = -g -Wall
| +KERNEL_ARCH := $(shell uname -m)
| +
| +CC = gcc -m32
| +LD = ld -melf_i386

>>> In file included from bios.c:2:
>>> vm86.h:4:22: asm/vm86.h: No such file or directory
>>
>> sh-3.1# cat /usr/include/asm/vm86.h
>> /* All asm/ files are generated and point to the corresponding
>>  * file in asm-x86_64 or asm-i386.
>>  */
>> #ifdef __i386__
>> # include <asm-i386/vm86.h>
>> #else
>> # error This header is not available for x86_64
>> #endif
>
> $ cat /usr/include/asm/vm86.h
> cat: /usr/include/asm/vm86.h: No such file or directory
>
>> sh-3.1# dpkg -S /usr/include/asm/vm86.h
>> linux-kernel-headers: /usr/include/asm/vm86.h
>
> $ dpkg -S /usr/include/i386-linux/asm/vm86.h
> linux-kernel-headers: /usr/include/i386-linux/asm/vm86.h
>
>> And you don't seem to be on sid. I don't have that file either on
>> sarge so it won't work there.
>
> OK, does that mean that there is no reasonable way I can get mbr on
> sarge to compile?

You need the sid linux-kernel-headers and also have to set "CC = gcc-3.4 -m32". The Build-Depends need to change to ia32-libs-dev as well.

> This won't install here:
>
> $ dpkg -i mbr_1.1.5-2.1_amd64.deb
> (Reading database ... 18347 files and directories currently installed.)
> Preparing to replace mbr 1.1.5-2.1 (using mbr_1.1.5-2.1_amd64.deb) ...
> Unpacking replacement mbr ...
> dpkg: dependency problems prevent configuration of mbr:
>  mbr depends on libc6-i386 (>= 2.3.5-1); however:
>   Package libc6-i386 is not installed.
> dpkg: error processing mbr (--install):
>  dependency problems - leaving unconfigured
> Errors were encountered while processing:
>  mbr

In sarge the 32bit libc6 is included in ia32-libs. For testing purposes you can --force-depends.

> And I can't install libc6-i386 on my system:
>
> Package libc6-i386 is not available, but is referred to by another package.
> This may mean that the package is missing, has been obsoleted, or
> is only available from another source
> E: Package libc6-i386 has no installation candidate
>
> I'm really stuck. Apparently lilo would be the best choice because it
> supports md-devices, right? But I can't use lilo because I don't have
> a working install-mbr.

Lilo only suggests mbr. Why can't you use it to install lilo directly into the MBR?

> I would give grub a try, but I am not quite sure about the steps
> required for this.
>
> First off, the NEWS says that since 0.90 - 2001-07-11, "Linux software
> RAID support is added (only for RAID-1)". Well great, I have (or
> rather want) a RAID-1. But everything I read about grub and RAID talks
> of sda/sdb or the like, not of mdX. Does grub support it, but not
> really, or what?
>
> Now, is it sufficient if I run
>
> grub-install --root-directory=/boot --no-floppy /dev/sda
> grub-install --root-directory=/boot --no-floppy /dev/sdb

man grub-install: grub-install copies GRUB images into the DIR/boot directory specified by --root-directory, and uses the grub shell to install grub into the boot sector.

Your command line would install to /boot/boot/. grub-install --no-floppy /dev/md0 should be sufficient to boot, but not to get failsafe booting in case sda dies. It will only install on the first device mdadm -D lists for the raid, and I assume this is sda.

Running grub-install on the individual disks will also not result in a bootable sdb disk. As shown below, your device.map lists sdb as the second BIOS disk. But if sda fails, sdb becomes the first BIOS disk and grub fails to find the second. You have to tell grub that sdb will be the first disk by putting "(hd0) /dev/sdb" into device.map when installing on sdb.

> ? There exists no menu.lst, device.map or whatever on my system yet,
> do I have to manually set them up before running grub-install? If yes,
> how should they look? I would figure device.map needs to state
>
> (hd0) /dev/sda
> (hd1) /dev/sdb
>
> But what does menu.lst have to look like?
>
> My fstab for the RAID system will look like this:
>
> /dev/md0  /boot     xfs   defaults 0 0
> /dev/md1  none      swap  swap
> /dev/md2  /         xfs   defaults 0 0
> /dev/md3  /home     xfs   defaults 0 0
> /dev/md4  /var      xfs   defaults 0 0
> /dev/md5  /var/log  xfs   defaults 0 0
> /dev/md6  /var/tmp  xfs   defaults 0 0
> /dev/md7  /tmp      xfs   defaults 0 0
> proc      /proc     proc

You should look into lvm. My usual setup is:

/dev/md0  /     (includes /boot)
/dev/md1  swap
/dev/md2  lvm

/usr, /var, /home on lvm. tmp as tmpfs.

Alternatively you could use partitions on raid to limit the number of actual raid devices. The maximum number of raid devices used to be 8, which would put you at the limit.

> Thanks very much for your time, I really appreciate the help I've
> gotten from all you guys so far very much!
>
> -- Kilian

MfG
        Goswin
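[Editor's note: a hedged sketch of the grub-shell procedure described above for making the second RAID-1 disk bootable on its own. It assumes GRUB legacy and that /boot lives on the first partition of each disk:]

# map sdb as the first BIOS disk, then install to its MBR:
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)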
https://lists.debian.org/debian-amd64/2006/05/msg00292.html
Blocking on calls to read/writeObject

Mike Curwen, posted Apr 03, 2003 10:32
Apologies if this is a newbie question. I've had very little call for Streams so far in my coding life. I've inherited some code that I'm trying to make some sense of. Imagine two instances of an object. They are communicating over ethernet ports. Each object has an
Object RXTransmission( InputStream is, OutputStream os )
and a
boolean TXTransmission( InputStream is, OutputStream os, Object o )
The point is to transmit the 'o' object from one place to the other. The code uses CRC32 checking to determine if the object was successfully sent, and it appears that they do so in a hand-shaking manner:
"TX: Here's the object. RX: I have that object, I've calculated the checksum, I'm sending you the checksum. TX: I have the checksum, and it matches the one I made before sending. I'm sending you a Boolean 'true'. RX: I see that 'true' and so I'm convinced the object was received successfully."
Here's the real question. Will calls to readObject() (on ObjectInputStream) block until there is an object to read on that stream? Or will they return immediately with 'null'? I've read somewhere that streams in java are "blocking". Is this what is meant (and what NIO is supposed to 'fix')? The answer to the first question must be yes, or else I'll have to scratch my head a bit more.

Thomas Paul, posted Apr 03, 2003 10:44
readObject() is a blocking call. And so is writeObject().

Mike Curwen, posted Apr 03, 2003 11:20
So I'm trying to research where giants have tread (or so I suspect). How *does* one detect a blocked method? Not at all? No, I can't use nio, I'm stuck with 1.1.8. I've got a hint from a rancher (Jim Yingst) on this link (the Follow-up section at the top). Here's a better one: the reason I'm enquiring further is that I've heard from co-workers that sometimes they have to go in and kill the "listener" (running on a company box here), or sometimes the "sender" (at a client's site) will stop working until they reboot the system next morning. I'm thinking that the blocked i/o could be to blame. It will just sit there until... until you kill the JVM? It all seems so extreme, but I guess that's why everyone is so happy with nio? Apologies again if this is neophyte.

Peter den Haan, posted Apr 03, 2003 11:33
I'm not sure if this will work on such an early JVM, but on recent versions running java in a console and pressing Control-Break gives you a very interesting thread dump. If you're on a Unix platform, I'm not sure either; try kill -SIGQUIT.

Jim Yingst, posted Apr 03, 2003 12:15
> I've got a hint from a rancher (Jim Yingst) on this link (the Follow-up section at the top)
Ah, my moment of fame. Unfortunately my contribution there was to point out that you can't use available() to prevent blocking if you also need to be able to detect an end-of-file. When you get to the end of file, available() returns 0, and you don't have any way of knowing if that means the file is at an end, or the next read() will block - until you actually attempt the read(), at which point you may block indefinitely - which is what we were trying to avoid in the first place. So you can't really use the method Doug Lea was originally suggesting. I think that to implement some sort of timeout using pre-nio techniques, you basically need to have two threads - one to do the read, and one to wake up periodically to see if there's been any change. If there's no change after a set timeout period, the best way I know to stop the block is to close() the stream that's blocked. Be aware that this isn't guaranteed to work (unless you're using nio-derived streams) - in some situations, close() may simply block as well. It depends how the stream is set up, I think, and what platform and JVM you're using, probably. But it often works, and it's worth a try, especially if no one has a better alternative.

Mike Curwen, posted Apr 03, 2003 20:58
What if my input stream is for communicating through a COM port (modem)? I'm not reading a file, so I don't care about EOF. If available() returns 0, then maybe I should sleep for 5 seconds and try again... And if it's STILL not available(), I can assume that "ok, that's all she wrote!" If, on the other hand, that is NOT all she wrote, then the client that is trying to write to the stream will block (or will it?) Something like that?

Jim Yingst, posted Apr 04, 2003 13:53
Well, maybe. You're using a Socket then? Is the socket going to be permanently open? Is there some end-of-session message that tells you when you're done? Or do you just catch a SocketException if the socket is closed from the other side while you're still waiting for more info? Checking available() again after 5 sec may work, but it also seems like it will often introduce an unnecessary lag. Consider: A requests an object from B. There's a little latency in the network - maybe it takes .5 sec before A sees a response. If A checked available() immediately after the request, it returned 0. Now A is waiting for 5 sec, even though B's response started arriving after .5 sec. Perhaps if available() == 0, A should recheck every .5 sec or so, and then keep rechecking unless the total delay exceeds 5 sec. Actually, I think I've got an old class that may be of help to you - let me see if I can find it.
> If on the other hand, that is NOT all she wrote, then the client that is trying to write to the stream will block (or will it?)
No, if available() > 0, the stream will not block - it will return at least as many bytes as are available (unless the byte[] buffer you're storing them in is too small).

Mike Curwen, posted Apr 05, 2003 08:42
I'm not entirely sure how the package works. This is code I 'inherited' at work. The package import at the top is gnu.io, for which the javadocs are utterly in want of something more than just the method signature. He has a comment beside that import that reads "for win32, replace this with javax.comm" (I'm going from memory, could it be javx.com?) I'm at home for the weekend, so I don't have the code with me (that uses this gnu.io package). But the thing is.. I'd have to make those suggested changes in the gnu.io package. I think the only thing we do is try to .open() a communication port. Once the SerialPortEvent is sent to our registered listener, we have to see if it's a "ring" (I assume someone is dialing in to our modem) or if it's the other thing.. someone sending data. So the blocking happens inside their package, I think. As for your class... I wouldn't mind seeing what you have. It will probably help clarify a few things re: sockets and i/o.

Jim Yingst, posted Apr 05, 2003 11:39
Here's the old class I was working on. It's still somewhat rough, as I found a workaround to the original problem that motivated me (now forgotten). It still has debug print statements and the like. This has nothing to do with Sockets specifically - it's an attempt to make a FilterInputStream which can be chained to any other InputStream in order to provide timeout capability. However, whether it works or not depends on whether the underlying stream can be closed without blocking. The main() method attempts to use a timeout on System.in, but unfortunately this doesn't work on my JDK/OS - close() simply blocks. The terminate() method shows other more aggressive things I was trying, to see if they'd work. I'd say, give this a try to see if it works for whatever stream types you're dealing with. If close() blocks, or you need code you can trust on other unknown platforms as well, try something using available() as previously discussed instead. That will probably be slower than TimeoutInputStream, but can be guaranteed much more easily.

import java.io.*;

/*
 * Each TimeoutInputStream has an associated TimerThread which monitors
 * whether it's gone too long without any activity. All communication
 * between the TimeoutInputStream and its TimerThread is synchronized
 * using the TimerThread instance (held in instance variable "timer")
 * as the monitor, rather than the TimeoutInputStream instance.
 * This is done to ensure that no outside code can send notify() or
 * interrupt() through the monitor.
 */
public class TimeoutInputStream extends FilterInputStream {

    private TimerThread timer = new TimerThread();
    private int timeout;            // number of milliseconds to wait before timing out
    private long lastActivityTime;  // value read from System.currentTimeMillis() at last activity
    private boolean isReading;
    private boolean isOpen = true;
    private Thread readingThread;

    public TimeoutInputStream(InputStream inner, int timeout) {
        super(inner);
        if (timeout <= 0)
            throw new IllegalArgumentException("timeout = " + timeout);
        synchronized (timer) {
            this.timeout = timeout;
            timer.start();
        }
    }

    /**
     * Users are strongly encouraged to use one of the other read() methods
     * instead of this one - this has a lot of performance overhead to read
     * just one byte. You might as well grab as many bytes as you can at once.
     */
    public int read() throws IOException {
        startRead();
        int value = in.read();
        endRead();
        return value;
    }

    public int read(byte[] b) throws IOException {
        startRead();
        int value = in.read(b, 0, b.length);
        endRead();
        return value;
    }

    public int read(byte[] b, int off, int len) throws IOException {
        startRead();
        int value = in.read(b, off, len);
        endRead();
        return value;
    }

    private void startRead() {
        synchronized (timer) {
            lastActivityTime = System.currentTimeMillis();
            isReading = true;
            readingThread = Thread.currentThread();
            System.out.println("\tstartRead: " + lastActivityTime);
            timer.notify();
        }
    }

    private void endRead() {
        synchronized (timer) {
            lastActivityTime = System.currentTimeMillis();
            isReading = false;
            readingThread = null;
            System.out.println("\tendRead: " + lastActivityTime);
        }
    }

    public void close() throws IOException {
        super.close();
        synchronized (timer) {
            isOpen = false;
            timer.notifyAll();
        }
    }

    private class TimerThread extends Thread {
        public void run() {
            synchronized (this) {
                while (isOpen) {
                    if (!isReading) {
                        System.out.println("\twait for startRead");
                        // wait until notified (indicating start of read, or a close)
                        try { wait(); } catch (InterruptedException e) {}
                    } else {
                        long timeNow = System.currentTimeMillis();
                        long alreadyWaited = timeNow - lastActivityTime;
                        System.out.println("\ttiming: now: " + timeNow + "\twaited: " + alreadyWaited);
                        if (alreadyWaited >= timeout) {
                            terminate();
                        } else {
                            System.out.println("\twait now for " + (timeout - alreadyWaited));
                            try { wait(timeout - alreadyWaited); } catch (InterruptedException e) {}
                        }
                    }
                }
            }
        }
    }

    private void terminate() {
        // hope this interrupts the blocking read
        System.out.println("closing stream...");
        try { close(); } catch (IOException e) { e.printStackTrace(); }
        System.out.println("close() completed");
        /*
        try { Thread.sleep(2000); } catch (InterruptedException e) {}
        System.out.println("interrupt parent thread");
        readingThread.interrupt();
        try { Thread.sleep(2000); } catch (InterruptedException e) {}
        System.out.println("stop parent thread");
        readingThread.stop();
        try { close(); } catch (IOException e) { e.printStackTrace(); }
        */
    }

    public static void main(String[] args) throws IOException {
        System.out.println("type stuff - hit return to send (3 sec timeout)");
        InputStream in = new TimeoutInputStream(System.in, 3000);
        while (true) {
            int n = in.read();
            System.out.println(n);
        }
    }
}
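[Editor's note: a hedged sketch of the available()-polling alternative Jim describes above -- recheck every half second, give up after a configurable total. This code is not from the thread; the names are illustrative:]

import java.io.IOException;
import java.io.InputStream;

public class PollingRead {
    /** Wait at most timeoutMs milliseconds for data, then either read or give up. */
    static int readWithTimeout(InputStream in, byte[] buf, int timeoutMs)
            throws IOException, InterruptedException {
        int waited = 0;
        while (in.available() == 0) {
            if (waited >= timeoutMs) throw new IOException("read timed out");
            Thread.sleep(500);   // recheck every half second
            waited += 500;
        }
        return in.read(buf);     // at least one byte is available, so this won't block
    }
}

(As discussed above, this approach cannot distinguish end-of-file from "no data yet".)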
http://www.coderanch.com/t/276072/java-io/java/Blocking-calls-read-writeObject
I'm supposed to merge the ordered data of two files into a third file, keeping the data in order. I'm supposed to create a MergeFiles application that merges the integers, ordered from low to high, in two files into a third file, keeping the order from low to high. It should merge the two files by taking one element at a time from each, and the third file should contain the numbers from both files from lowest to highest.

So I saved the numbers in WordPad as data1.txt and data2.txt.

Data1: 11 25 36 45 56 78 90
Data2: 1 3 5 7 54 32 78 99

I can't compile my program because there are still errors and missing code... can someone help me edit it, and is this how I do it according to the requirements? Thx. So far:

package mergetwofiles;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;

public class Main {
    public static void main(String[] args) {
        FileReader file1 = new FileReader("Data1.txt");
        FileReader file2 = new FileReader("Data2.txt");
        BufferedReader br1 = new BufferedReader(file1);
        BufferedReader br2 = new BufferedReader(file2);
        String temp1, temp2;
        while (br1.readLine() != null) {
            temp1 = br1.readLine() + temp1;
        }
        while (br2.readLine() != null) {
            temp2 = br2.readLine() + temp2;
        }
        String temp = temp1 + temp2;
        FileWriter fw = new FileWriter("data3.txt");
        char buffer[] = new char[temp.length()];
        temp.getChars(0, temp.length(), buffer, 0);
        fw.write(buffer);
        file1.close();
        file2.close();
        fw.close();
    }
}
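[Editor's note: the code above still has several problems -- the unhandled IOException, the uninitialized temp1/temp2, reading two lines per loop iteration, and concatenating rather than merging. A hedged corrected sketch follows (it assumes both inputs are sorted low to high, as the assignment states; the file names are taken from the post):]

import java.io.*;
import java.util.*;

public class MergeFiles {
    // Read all whitespace-separated integers from a file.
    static List<Integer> readInts(String path) throws IOException {
        List<Integer> out = new ArrayList<Integer>();
        Scanner sc = new Scanner(new File(path));
        while (sc.hasNextInt()) out.add(sc.nextInt());
        sc.close();
        return out;
    }

    public static void main(String[] args) throws IOException {
        List<Integer> a = readInts("Data1.txt");
        List<Integer> b = readInts("Data2.txt");
        PrintWriter pw = new PrintWriter(new FileWriter("data3.txt"));
        int i = 0, j = 0;
        // Classic two-pointer merge: always emit the smaller head element.
        while (i < a.size() && j < b.size())
            pw.println(a.get(i) <= b.get(j) ? a.get(i++) : b.get(j++));
        while (i < a.size()) pw.println(a.get(i++));
        while (j < b.size()) pw.println(b.get(j++));
        pw.close();
    }
}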
https://www.daniweb.com/programming/software-development/threads/235793/merging-two-files
This Zerodha Varsity article explains succinctly what Max Pain theory is. I urge you to read it because I've used that article as the base for all calculations. I'll try to give a brief summary of it here. According to Max Pain theory, at any given point in time option writers (sellers) will try to sell option contracts which will expire worthless at expiry. So Max Pain is the point where option buyers will feel the maximum pain and option writers, on the other hand, stand to gain the most, as options are a zero sum game. (investopedia link)

If you just want to look at the Max Pain values, they are already available on niftytrader.in, but read on if you want to know how to calculate them.

pip install requests bs4 pandas lxml matplotlib

Please download option_chain.py for this exercise and place it in your project folder. You can also read through the previous post if you want to understand how to fetch the option chain from NSE's website.

from option_chain import option_chain

nifty_chain = option_chain("NIFTY", "OPTIDX", "30JUL2020")

⚠️ Please remember to replace the expiry date as appropriate, because you might be reading this article after 30 July 2020.

def total_loss_at_strike(chain, expiry_price):
    """Calculate loss at strike price"""
    # All call options with strike price below the expiry price will result in loss for option writers
    in_money_calls = chain[chain['Strike Price'] < expiry_price][["CE OI", "Strike Price"]]
    in_money_calls["CE loss"] = (expiry_price - in_money_calls['Strike Price']) * in_money_calls["CE OI"]

    # All put options with strike price above the expiry price will result in loss for option writers
    in_money_puts = chain[chain['Strike Price'] > expiry_price][["PE OI", "Strike Price"]]
    in_money_puts["PE loss"] = (in_money_puts['Strike Price'] - expiry_price) * in_money_puts["PE OI"]

    total_loss = in_money_calls["CE loss"].sum() + in_money_puts["PE loss"].sum()
    return total_loss

List of strike prices:

strikes = list(nifty_chain['Strike Price'])

Let us calculate the loss for each strike price:

losses = [total_loss_at_strike(nifty_chain, strike) / 1000000 for strike in strikes]

Plotting losses vs strike price:

import matplotlib.pyplot as plt

plt.plot(strikes, losses)
plt.ylabel('Total loss in rs (Million)')
plt.show()

Max Pain value, i.e. the strike price where option writers' loss is minimal:

m = losses.index(min(losses))
print("Max pain > {}".format(strikes[m]))

Max pain > 10500.0

Again quoting from Zerodha Varsity with some modification: most traders use this max pain level to identify the strikes which they can write. In this case, since 10500 is the expected expiry level, one can choose to write call options above 10500 or put options below 10500 and collect all the premiums.

This max pain value is actually a moving target; it may change from day to day, so it's good to have a ±5% buffer range and revisit the strategy if the max pain value goes beyond the range.

You can download option_chain.py and max-pain.ipynb. (Right click on the link and save)
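As a small illustration of that ±5% buffer (this snippet is not from the original post):

# a +/-5% comfort band around the current max pain strike
max_pain = strikes[m]
lower, upper = 0.95 * max_pain, 1.05 * max_pain
print("Revisit the strategy if max pain moves outside [{:.0f}, {:.0f}]".format(lower, upper))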
https://marketsetup.in/posts/option-chain/max-pain/
When you’ve done all the usual startup troubleshooting (running runonce.bat, resetting your preferences, and testing the display driver), but you still crash, it’s a good idea to test whether xsibatch can run. Sometimes this will give us a different error message, or another clue as to what’s going wrong.

- Open a Softimage command prompt
- In the command prompt, run these commands:

echo LogMessage "Hello" > %TEMP%\test.vbs
xsibatch -processing -script %TEMP%\test.vbs

- You should see something like this:

=======================================================
 Autodesk Softimage 10.0.422.0
=======================================================

License information: using [Processing]
COMMAND: -processing -script C:\Users\blairs\AppData\Local\Temp\test.vbs
' INFO : Hello

---

“You should see something like this:” I don’t. I see

=======================================================
 Autodesk Softimage 9.0.243.0
=======================================================

and that’s all.

Could be missing registry entries for VBScript or JScript.

I’ve tried all of the suggestions in this post, nothing works. When I load Softimage, it crashes after the splash screen loads?

What happens when you run xsibatch? I assume Softimage was working on this machine before? If so, then what changed about the machine (updates, new software, that kind of thing…)

XSIBatch just echoes the license banner, that’s it. Changes to the machine include a second monitor; I didn’t think it would cause this, argh!

Thanks, this showed my problem. I got

=======================================================
 Autodesk Softimage 12.0.921.0
=======================================================

License information: using [Processing]
Traceback (most recent call last):
  File "C:\Python27\lib\site.py", line 563, in <module>
    main()
  File "C:\Python27\lib\site.py", line 545, in main
    known_paths = addusersitepackages(known_paths)
  File "C:\Python27\lib\site.py", line 278, in addusersitepackages
    user_site = getusersitepackages()
  File "C:\Python27\lib\site.py", line 253, in getusersitepackages
    user_base = getuserbase() # this will also set USER_BASE
  File "C:\Python27\lib\site.py", line 243, in getuserbase
    USER_BASE = get_config_var('userbase')
  File "C:\Python27\lib\sysconfig.py", line 472, in get_config_var
    return get_config_vars().get(name)
  File "C:\Python27\lib\sysconfig.py", line 405, in get_config_vars
    import re
  File "C:\Python27\lib\re.py", line 105, in <module>
    import sre_compile
  File "C:\Python27\lib\sre_compile.py", line 14, in <module>
    import sre_parse
  File "C:\Python27\lib\sre_parse.py", line 17, in <module>
    from sre_constants import *
  File "C:\Python27\lib\sre_constants.py", line 18, in <module>
    from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT

So it is a documented issue with Python 2.7.4, so I uninstalled Python 2.7.4 and am installing 2.7.6, and will see what happens.

Failed again:

=======================================================
 Autodesk Softimage 12.0.921.0
=======================================================

License information: using [Processing]
pythoncom error: PythonCOM Server - The 'win32com.server.policy' module could not be loaded.
ImportError: No module named win32com.server.policy
pythoncom error: CPyFactory::CreateInstance failed to create instance. (80004005)
' ERROR : 2000 - Failed creating scripting engine: XSI.SIPython.1.
https://xsisupport.com/2011/04/12/diagnosing-startup-crashes-testing-xsibatch/?replytocom=10135
The code provided here will allow animated gif files to be displayed on windows within your projects. It does not rely on any library to do this, but rather manually scans the gif to determine its contents, decompresses the image data and displays it on the window of your choice. By doing this manually, the code has some degree of portability on Win32 systems, allowing the same code to work in both desktop and pocket applications.

Often while coding a project I have had the need to show some kind of animation, either as a status message to the user or to spice up an attribution dialog. After doing this manually several times using timers and icons to switch images, I needed a better, more flexible way. The popularity of the GIF image format and its ability to display animations made it a prime candidate to store the animations for my code.

The class that you will interact with is gifDisplayer. It has four methods to interact with the desired gif file:

//Calls the load from reader then prepares bitmaps for display;
//a gif must be loaded before it can be displayed.
BOOL loadGif(CString filePath, HDC hDCin, BOOL animate, HWND hWnd,
             int multiplySizeBy, int Xoffset, int Yoffset);
BOOL loadGif(unsigned char* bufferIn, unsigned long lengthIn, HDC hDCin,
             BOOL animate, HWND hWnd, int multiplySizeBy, int Xoffset, int Yoffset);

//An image must be loaded by calling the load method from the gifReader first.
BOOL displayGif();

//clean up.
void unloadGif();

Multiple objects of this class can be used simultaneously. Multiple animations can be drawn to the same window simultaneously. What is not supported is multiple animations being stacked in the same place. If that is done, the transparency will be lost.

To use the class, some static members will need to be set up:

#include "stdafx.h"
#include "gifDisplayer.h"

HANDLE gifDisplayer::hAccessMutex = 0;
HANDLE gifDisplayer::hTimingThread = 0;
int gifDisplayer::countOfObjects = 0;
gifDisplayer* gifDisplayer::gifDisplayerObjectArray[MAX_SURFACES];

This is done from the main .cpp file or a file included from there. You can instantiate objects from the class after that, as would normally be done:

//The parameter is the starting size for the decompressed gif.
//This can be set small, but a small size will make decompression take longer
//because the output will have to be grown many times.
gifDisplayer* gr = new gifDisplayer(100000);
gifDisplayer* gr2 = new gifDisplayer(100000);
gifDisplayer* gr3 = new gifDisplayer(100000);

It is important to be sure your window has been drawn properly before calling loadGif(...) or displayGif(). This is because, in order to allow the gif to seem transparent on your window, the contents of the window will be copied to form the base over which the gif will show. To do this, I post a message to the window after the window has been created, allowing a delay for the initial drawing to occur.

switch (message)
{
case SETUP_MESSAGE:
    {
        //on a mobile device this can be a bit slow
        //so it's a good idea to show a wait cursor.
        SetCursor(LoadCursor(NULL, IDC_WAIT));

        //load and display three gifs.
        gr->loadGif(gifPath,        //the path of the file
                    surfaceDC,      //the device context of the window
                    TRUE,           //do you want to animate?
                    hSurfaceWindow, //the handle of the window
                    1,              //scale the gif by a factor of 1 (possible values 1,2,3,4...)
                    50,             //x position
                    100);           //y position
        gr->displayGif();

        gr2->loadGif(gifPath2, surfaceDC, TRUE, hSurfaceWindow, 1, 150, 200);
        gr2->displayGif();

        gr3->loadGif(gifPath3, surfaceDC, TRUE, hSurfaceWindow, 2, 0, 0);
        gr3->displayGif();

        SetCursor(0);
    }
    break;

Remember to call unloadGif() on each object before ending your program.

The gifDisplayer class inherits from a number of other classes I wrote. You only need to include them in your project as shown in the demos. I split the functionality up this way because the scanning, decompressing and displaying are distinctly different parts that are better suited to being independent. Note that I use MFC CStrings and CObject arrays in my code. The projects themselves are Win32 and NOT MFC projects; some changes need to be made in the project settings and included files to let this work in Win32 code. It would be easy enough to replace these classes with your own if you want to get rid of MFC altogether.

All the code in this project is based on the document "LZW and GIF explained" by Steve Blackstock and on the CompuServe GIF standard. Although these documents do not contain any C++ code, they are an accurate description of what a gif is.

Submitted 24 March 2008.
http://www.codeproject.com/KB/mobile/gifManipulator.aspx
20131115 (Friday, 15 November 2013)¶

Plugins¶

Yesterday Joe and I had our first Skype meeting, which lasted about 70 minutes. We had a brainstorming session about a feature Joe would like to use: allow for chunks of custom js and css files and templates -- something which should use Django's Form Assets (the Media class). How to name it? settings.SITE.add_feature()? It seems that Django does not parse these Media files nor provide a method to generate a single file from all those chunks. The whole Media class, in fact, is not meant for generating a file. We must differentiate between (1) self-maintained JS and CSS snippets and (2) external JS libraries. So yes, we'll need to extend Django's system.

After some sleep I realized that we had been looking for the name "plugin", and that this had already been almost implemented (in djangosite.Site). But I had never started to actually use it because there was no urgent need. Now I converted use_eidreader and use_eid_jslib to "Plugins", and it works, and everything is much cooler! I'm fascinated!

So the Site setting use_eidreader has been replaced by the lino.mixins.beid.BeIdReaderPlugin. Instead of setting use_eidreader to True, you must override get_installed_plugins as follows:

def get_installed_plugins(self):
    for p in super(Site, self).get_installed_plugins():
        yield p
    yield 'lino.mixins.beid.BeIdReaderPlugin'

The base class is (currently) defined in djangosite.djangosite_site.Plugin and should be used from dd.Plugin. The lino/config/plugins/eidreader.js snippet contains code which was previously in linolib.js (which definitely was not the right place).

TODO:

- move lino.mixins.beid to lino.plugins.beid.
- The next plugins will be lino.plugins.davlink.DavLinkPlugin (replacing use_davlink) and/or lino.extjs.ExtJS3Plugin (replacing use_extjs).

Note that the current implementation does not use Django's Form Assets, because that would cause more work when converting the existing use_xxx settings to plugins and because I don't (yet) see any advantage…
http://luc.lino-framework.org/blog/2013/1115.html
<summary>GhostDoc is a free add-in for Visual Studio that automatically generates XML documentation comments for C#, either by using existing documentation inherited from base classes or implemented interfaces, or by deducing comments from the name and type of e.g. methods, properties or parameters.</summary>

Quick Facts About this Release

Version 2.1.2 fixes a problem with side-by-side installations of GhostDoc versions for Visual Studio 2005 and 2008.

What’s New in GhostDoc 2.1.2:

Note that VB.Net support is turned off by default and has to be turned on in the configuration dialog.

great work, thanks a lot

GhostDoc is truly incredible. Thank you for your hard work, it is really appreciated!

Thanks again for a great tool!

Hi, I recently installed GhostDoc 2.1.2 and have a question: how can I use GhostDoc to add XML comments in a javascript file? Is it supported? Thanks!

@manor: No, and there are no plans for future releases.

I have Vista Ultimate x32 edition, with Visual Studio 2008 Professional. When I try to install GhostDoc 2.1.2 using the MSI, it just hangs. The progress bar goes nowhere, and when I click cancel, the dialog does not go away. I am an admin on the machine. Even so, I've also tried the batch file/run-as-admin workaround to no avail. I consider GhostDoc an essential add-in and would really appreciate any insight into getting this installed. Also, I do not have Visual Studio 2005 installed, and have not had any previous versions of GhostDoc installed on this machine. I previously had a version of C# 2008 Express installed but have removed it since installing VS 2008 Pro.

@Seth: I'll have to set up a test system for a repro; I doubt I'll get to doing it before Xmas, though. In the meantime, could you please install SonicFileFinder (sonicfilefinder.jens-schaller.de) and report the result? This may help me understand the issue (and installing SonicFileFinder isn't a bad idea anyway ;-). Let's move further discussion to the GhostDoc forum () where it can be seen by other users - please repeat your post over there, I'll then append this answer to your post.

Hi Roland! If you change your GetToolsMenu method to this:

private CommandBar GetToolsMenu()
{
    return ((CommandBars)m_dteApplication.CommandBars)["Tools"];
}

then you fix the problem of searching for the "Tools" menu under a localization that is absent from the YourProject.resx file.

P.S. With my localization (Russian)... Before:
1. Looking up resource "ruTools".
2. Localized name of the 'Tools' menu:
3. Couldn't find 'Tools' menu using the localized name ''.
4. Now trying the non-localized name 'Tools'
5. Couldn't find 'Tools' menu (tried both localized and english name).
6. Exception thrown in OnConnection (ext_cm_Startup)
After: no errors at all.

@daemon: Thanks for your feedback, I'll look into this. Could you please contact me via the contact form (weblogs.asp.net/.../contact.aspx) so we can communicate via email?
http://weblogs.asp.net/rweigelt/archive/2007/11/25/5338050.aspx
20 April 2010 03:20 [Source: ICIS news]

SHANGHAI (ICIS news)--Qatofin is likely to achieve commercial production at its 450,000 tonne/year linear low-density polyethylene (LLDPE) plant in […]

“The plant was completed last year and we carried out some trial runs in August but shut [it] down again because of lack of ethylene, as our feedstock supplier was behind schedule,” the source said at the ChinaPlas exhibition.

Chinaplas, which is Asia's largest annual plastics event, runs from 19-22 April.

Ras Laffan Olefins Co’s 1.3m tonne/year cracker is in the process of being brought on-stream, and once production has been stabilised, ethylene will be fed into the pipeline that links the cracker to Qatofin, he added.

Also downstream of the new cracker are the Q-Chem 350,000 tonne/year high-density polyethylene (HDPE) and alpha olefins plants, which are due for start-up by the middle of the year.

Qatofin is a joint venture between France's Total Petrochemicals (36%) and state-owned groups Qatar Petrochemical Co (QAPCO) (63%) and Qatar Petroleum (1%).

Q-Chem is a joint venture between […]

The cracker, meanwhile, is jointly owned by Q-Chem (53%) and Qatofin (46%). Qatar Petroleum owns the remaining 1%.

With additional reporting
http://www.icis.com/Articles/2010/04/20/9351878/qatofin-eyes-commercial-ops-at-qatar-lldpe-plant-in.html
This blog post introduces the major changes in LIEF 0.9 as well as work-in-progress features that will be integrated in further releases. The changelog is available here.

As a quick reminder, LIEF is a library developed at Quarkslab to parse and modify executable file formats. See our previous posts:

Installation

Release packages are available on the Github page, and the Python package can be installed with:

$ pip install [--user] lief==0.9.0

Release highlights

Android Formats

This new version of LIEF comes with support for Android formats related to the ART runtime: OAT, VDEX, DEX and ART. As the OAT format is a derivation of ELF, it made sense to add it to LIEF. Basically, this format is used by Android to wrap native code resulting from Dalvik bytecode optimization. Regarding VDEX, DEX, and ART, these formats are related to OAT, and therefore we also chose to add them. For more information about these Android formats and how to use them, a tutorial is available in the LIEF documentation: Android Formats.

We can currently only parse these formats, but their modification will come step by step in the project. Indeed, some attacks are based on the modification of the OAT format, as explained by Collin Mulliner in "Inside Android’s SafetyNetAttestation: Attack and Defense" [1] and in "How Samsung Secures Your Wallet & How To Break It" [2] by Tencent’s Xuanwu Lab. In a further version we plan to provide an API to add native code to OAT.

JSON serialization

As one purpose of this project is to provide an API that can be easily integrated into other projects, we are glad to announce that JSON serialization is now available for all LIEF objects. It means that one can now access format information through a JSON interface. Previous versions had JSON support for the ELF and PE formats; v0.9 now supports all formats and all objects.

Objects can be serialized with the lief.to_json function:

import lief

gcc = lief.parse("/usr/bin/gcc")
lief.to_json(gcc.header)
{'entrypoint': 4209824,
 'file_type': 'EXECUTABLE',
 'header_size': 64,
 'identity_class': 'CLASS64',
 'identity_data': 'LSB',
 ...

libSystem = lief.parse("/usr/lib/libSystem.dylib")
lief.to_json(libSystem.commands[1])
{'command': 'SEGMENT',
 'command_offset': 492,
 'command_size': 464,
 'content_hash': 18446744072658165641,
 'data_hash': 1841536728,
 'file_offset': 8192,
 'file_size': 4096,
 'flags': 0,
 'init_protection': 3,
 'max_protection': 7,
 'name': '__DATA',
 'numberof_sections': 6,
 'sections': ['__nl_symbol_ptr', '__la_symbol_ptr', '__mod_init_func',
              '__const', '__data', '__common'],
 'virtual_address': 8192,
 'virtual_size': 4096}

One can also disable the JSON module using a CMake configuration flag:

$ cmake -DLIEF_ENABLE_JSON=off ...

What's next

LIEF v0.9 still has poor support for Mach-O modification and only supports modifications of the header and some load commands. One of the primitives needed for more general modification of the Mach-O format is the ability to add arbitrary load commands. Some tools [3] [4] exist to add commands, but they usually use the padding between the load command table and the raw content, or they remove / replace existing commands. The main limitation of this technique is that the number of load commands which can be added depends on the size of the padding. In LIEF, we took advantage of the fact that Mach-O binaries are PIE to shift the content that follows the load command table. This enables us to inject more than one or two commands.

To keep the format in a consistent state (relocations, segment virtual addresses, ...), the Mach-O builder of LIEF rebuilds the export trie, regenerates binding opcodes, rebase opcodes, ... In our tests, we succeeded in adding an arbitrary number of LC_DYLIB commands to clang, as well as adding 10 new sections in the __TEXT segment. We are currently working on stabilization of the instrumentation process, but it should be merged soon into the master branch. Stay tuned!

We will also be presenting at ReCon and Pass The Salt with a talk about file format instrumentation, covering techniques to perform code injection and hooking by using formats.
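As a hedged illustration of the new Android support combined with the JSON interface ("classes.dex" is a placeholder path, and the exact entry point is an assumption based on the module names introduced in this release, not something shown in this post):

import lief

# parse a DEX file and dump its header through the JSON interface
dex = lief.DEX.parse("classes.dex")
print(lief.to_json(dex.header))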
https://blog.quarkslab.com/lief-09.html
Microchip PIC 16Fx microcontrollers are very well suited for a number of tasks. However, the programmer is left with several choices to program them: gputilspackage on a Unix system and program the PIC using the assembly language. sdccC compiler on a Unix system and program the PIC in C. PicForthForth compiler on a Unix system and program the PIC in Forth. We do believe that the latest is a very pleasant solution for PIC development, as Forth is particularily suited to embedded systems, and Unix is more user-friendly for the developper. Warning: this manual is a work-in-progress, and is in no way complete. This program is a Forth compiler for the Microchip PIC 16F87x and 16F88 family. The version described in this manual is PicForth 1.2.4. I needed to write some code on a PIC to control a digital model railroad system using the DCC (Digital Control Command) protocol. However, writing it in assembly is error-prone and writing it in C is no fun as C compiled code typically needs a lot of space. So I wrote this compiler, not for the purpose of writing a compiler, but as a tool to write my DCC engine. The compiler does not aim to be ANS Forth compliant. It has quite a few words already implemented, and I will implement more of them as needed. Of course, you are welcome to contribute some (see below for license information). At this time, many words are missing from standard Forth. For example, I have no multiply operation as I have no use for it at this time and won't spend time to implement things I don't need (remember, Forth is a tool before anything else). The compiler is released at the moment under the GNU General Public License version 2 (I intend to use the less restrictive BSD license in the future, but as it is based on gforth, I have to sort out those issues with gforth copyright holders). However, the code produced by using this compiler is not tainted by the GPL license at all. You can do whatever you want with it, and I claim absolutely no right on the input or output of this compiler. I encourage to use it for whatever you want. Note that I would really like people to send me their modifications (be they bug fixes or new features) so that I can incorporate them in the next release. Mary was a great inspiration source, I even kept some of the names from it. However, no code has been reused, as both Forth do not have the same goal. I would like to thank the following people, in no particular order: PicForth is supported through the following channels:> and gatewayed to a newsgroup at <>.> For a full introduction to the Forth programming language, please have a look at the appropriate section of the Open Directory (maintained by volunteers), at address <>. Only a small subset of the language will be presented here, sometimes overlooking details. The Forth programming language may look unusual to people used to other languages. First of all, the actions to execute are spelled one after each other. The sentence init mainloop cleanup will call, in turn, the word init, the word mainloop then the word cleanup. To define a new word, the : defining word is used, while the ; word ends the definition. The following code defines a new word doit which factors the three words used above: : doit init mainloop cleanup ; After it has been defined, the word doit can be called as other words by using its name. A Forth program is a collection of application-specific words. Each word, made of other words, will be used in turn to define new words, until the whole solution is described. 
Words are similar to subprograms in more conventional programming languages. Any non-blank character can be part of a word name. For example, \, ^, or $ are legal characters in a word name, and can even be a word name by themselves. In Forth, one does not use parentheses to give arguments to called words. Instead, a stack is used, where arguments can be pushed and popped. The word + pops two arguments from the top of the stack and pushes their sum. To push an integer to the top of the stack, one writes its value. The sentence 3 5 + will push 3 on the stack, then 5, and call the word + which removes 3 and 5 and pushes 8. Some words manipulate the stack explicitly. dup duplicates the element at the top of the stack, while drop removes it. swap exchanges the two top elements. The following word that we name 2* (remember that this name is perfectly valid in Forth) multiplies the top of the stack by two, by adding it to itself: : 2* dup + ; The stack effect of a word is often written as a comment between parentheses; those comments are ignored by the Forth compiler. The previously defined word could have been written: : 2* ( n -- 2*n ) dup + ; Elements on the stack are represented from left to right, the rightmost element being the top of the stack. For example, the - word, which subtracts the top of the stack from the second element on the stack, would have a stack comment looking like ( n1 n2 -- n1-n2 ). Let's assume that you want to multiply the top of the stack by four. You can define the 4* word as: : 4* ( n -- 4*n ) dup + dup + ; But remember that you can define your own words from existing words. If you now need a word which multiplies the top of the stack by four, you can use your previously defined 2* word: : 4* ( n -- 4*n) 2* 2* ; Definitions in Forth tend to be very short. The grouping of common parts in words is called factoring, and leads to very concise machine code. Two useful words allow you to access memory. @ gets the content of the memory byte whose address is at the top of the stack, and ! stores the second element on the stack into the memory byte whose address is at the top of the stack. The code below defines a word mirror which mirrors the content of port A into port B (we will later see more practical ways of defining some of the words seen here): : porta 5 ; : portb 6 ; : mirror porta @ portb ! ; The defining word constant allows you to define named constants. Using this word, one can simplify the above example: 5 constant porta 6 constant portb : mirror porta @ portb ! ; The defining word variable reserves a byte in the PIC RAM and gives it a name: 5 constant porta variable counter : increment-counter counter @ 1 + counter ! ; : counter-to-porta counter @ porta ! ; Testing in Forth is done using an if construct, terminated by a then, with an optional else. Operators such as < or = can be used, and any non-null value is considered true. The abs word changes the value on top of the stack to its absolute value (note that abs and negate are in fact already defined by PicForth): : negate 0 swap - ; : abs dup 0 < if negate then ; The word mirror duplicates port A to port B or port C, depending on its argument; 0 for port B, anything else for port C (the porta, portb and portc constants are already defined in PicForth, as are trisa, trisb and trisc): : mirror ( n -- ) porta @ swap if portb ! else portc ! then ; It is also possible to use Forth's case, of, endof and endcase, as sketched below.
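As a sketch of how case can be used (the words led-off, led-green and led-red are hypothetical placeholders, not words provided by PicForth): : set-led ( n -- ) case 0 of led-off endof 1 of led-green endof 2 of led-red endof endcase ; Each of clause compares the value on top of the stack against its selector and runs its body on a match; if no clause matches, endcase drops the value.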
Several looping constructs are available in PicForth. The first of them is built upon begin and again, which here calls do-one-thing indefinitely: : mainloop begin do-one-thing again ; while and repeat can add a test in the loop and continue as long as the word continue? returns a non-null result: : mainloop begin do-one-thing continue? while repeat ; Note that while can be present anywhere between begin and repeat, letting you build elaborate constructs. Also, until allows you to wait for a condition. The following word calls do-one-thing until end? returns a non-null value: : mainloop begin do-one-thing end? until ; The last construct seen here is built around v-for and v-next. v-for takes a (non-inclusive) upper bound and a variable address on the stack. The following word main calls do-one-thing 10 times: variable count : main 10 count v-for do-one-thing count v-next ; Our first PicForth program will generate a rectangular wave signal on port B0 as fast as possible: 0 pin-b i/o : init i/o >output ; : pulse i/o high i/o low ; : mainloop begin pulse again ; main : program init mainloop ; The first line 0 pin-b i/o defines a new word i/o which, when executed, will push two integers, 6 (corresponding to portb) and 0, on the stack. This way, instead of writing portb 0 to manipulate bit 0 of port B you can write i/o, which is shorter and lets you change it in only one place should you want to change which port is used. The second line uses the PicForth word >output which sets the port whose address and bit are on the stack in output mode. This defines a new init word which initializes our port B0 as an output. The third line creates a new word pulse which uses the PicForth words high and low to set a pin high or low. As a result, executing the pulse word will set the B0 pin high then low, thus generating a pulse. The fourth line defines a mainloop word which calls pulse endlessly, thus generating the rectangular wave signal we want. The last line uses the PicForth word main. This word indicates to PicForth that the next word to be defined will be the one to call on reset. The word, called program here, calls init then mainloop. As mainloop never returns, the program runs until the end of time (which is usually considered quite a long time).
The generated code looks like: 0x0000 018A clrf 0x0A 0x0001 280C goto 0x00C ; (init-picforth) 0x0002 0000 nop ; name: init ; max return-stack depth: 0 0x0003 1683 bsf 0x03,5 0x0004 1006 bcf 0x06,0 0x0005 1283 bcf 0x03,5 0x0006 0008 return ; name: pulse ; max return-stack depth: 0 0x0007 1406 bsf 0x06,0 0x0008 1006 bcf 0x06,0 0x0009 0008 return ; name: mainloop ; max return-stack depth: 1 0x000A 2007 call 0x007 ; pulse 0x000B 280A goto 0x00A ; mainloop (rs depth: 1) ; name: (init-picforth) ; max return-stack depth: 0 0x000C 3032 movlw 0x32 0x000D 0084 movwf 0x04 ; name: program ; max return-stack depth: 1 0x000E 2003 call 0x003 ; init 0x000F 280A goto 0x00A ; mainloop (rs depth: 1) Of course, it is possible to write less factored code for such a simple task, and write instead: 0 pin-b i/o main : program i/o >output begin i/o high i/o low again ; In this case, it effectively generates code which is a bit shorter: 0x0000 018A clrf 0x0A 0x0001 2803 goto 0x003 ; (init-picforth) 0x0002 0000 nop ; name: (init-picforth) ; max return-stack depth: 0 0x0003 3032 movlw 0x32 0x0004 0084 movwf 0x04 ; name: program ; max return-stack depth: 0 0x0005 1683 bsf 0x03,5 0x0006 1006 bcf 0x06,0 0x0007 1283 bcf 0x03,5 0x0008 1406 bsf 0x06,0 0x0009 1006 bcf 0x06,0 0x000A 2808 goto 0x008 ; program + 0x003 However, do not let this short example mislead you. While the code looks more efficient and shorter (and it is), this is generally not true for real-life programs. For example, in a bigger program it would be quite common to have to call pulse from other places. It is possible to inline code by surrounding the words you want to inline with the macro and target words: 0 pin-b i/o macro : init i/o >output ; : pulse i/o high i/o low ; : mainloop begin pulse again ; target main : program init mainloop ; While this code is highly factored and easily maintainable, it generates the very same code as the less-factored version above. The only exception is exit: if this word is present in an inlined word, it will exit from the caller. As a rule, inlined words should only have one regular exit point, at the end of the word. The stack is indexed by the only indirect register, fsr. The indf register automatically points to the top of stack. The w register is used as a scratch. Attempts to use it to cache the top of stack proved to be inefficient, as we often need a scratch register. The compiler is hosted on gforth, a free Forth implementation for Unix systems. The command line to use to compile file foo.fs into foo.hex, and to get a usable map into foo.map, is: gforth picforth.fs -e 'include foo.fs file-dump foo.hex map bye' | \ sort -o foo.map Of course, you should automate this in a Makefile, such as the one provided with the compiler. If you install the GNU PIC utils (gputils), then you can read the assembled code by using gpdasm. The whole code space can be used. However, code generated in the first 2048 words is more efficient than the code generated in the following 2048 words; both are more efficient than the code generated for the remaining words. This is due to the PIC architecture, which does not allow the code space to be seen as one flat zone.
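For instance, a minimal Makefile rule automating the gforth invocation above could look like this sketch (it assumes picforth.fs sits in the working directory; the Makefile shipped with PicForth is more complete):

# Build foo.hex and foo.map from foo.fs with "make foo.hex"
%.hex: %.fs
	gforth picforth.fs -e 'include $< file-dump $@ map bye' | sort -o $*.map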
By executing gforth picforth.fs -e 'host picquit' (or make interactive from a Unix shell), you are dropped into an interactive mode, where you can use the following words to check your code: see ( "name" -- ) Disassemble a word map ( -- ) Print code memory map dis ( -- ) Disassemble the whole code section You can choose the architecture you want to compile for by using: pic16f87x ( -- ) Generate code for a PIC16F87x target pic16f88 ( -- ) Generate code for a PIC16F88 target Hexadecimal literals should be prefixed by a dollar sign $ to avoid confusion with existing constants (such as c for the carry bit). This is strongly advised. The default base is hexadecimal. Do not change it before including libraries bundled with the compiler, as they expect hexadecimal mode. The default stack size is 16. If you use the multitasker included in multitasker.fs (see below), each task gets an additional 8 bytes of task-specific stack. You can change the default stack size by using set-stack-size ( n -- ) in interpretation mode before using main. rlf-tos and rrf-tos respectively shift the top-of-stack left and right, with the carry entering the byte and the outgoing bit entering the carry. rlf! and rrf! respectively shift the given variable left and right, with the carry entering the byte and the outgoing bit entering the carry. lshift and rshift used with a constant shift, as well as 2* and 2/, leave the last bit shifted out in the carry. swapf-tos will swap the upper and lower nibbles of the top-of-stack. There exists a v-for/v-next structure (v stands for variable): v-for ( n addr -- ) Initialize addr content with n. v-next ( -- ) Decrement addr content. If the content is not zero, jump to the v-for location. The address has to be located in bank 0. Also, the words begin, again, while, until and repeat are implemented. You can choose the memory bank that will be used by the memory commands in interpretation mode by using the words bank0, bank1, bank2 and bank3 (check that this applies to your device first). Those commands affect the subsequent create, variable, allot, , and here commands. However, note that you can only access variables indirectly if they are located in bank 0 or in bank 1. Locations in other banks must be accessed using their static addresses. You can define your own memory sections using the words section, idata and udata. No check will be made to ensure that those sections do not overlap. In code words, you can adjust the current bank by inserting adjust-bank after a variable name. This will adjust the current bank and alter the variable address so that it fits between 00 and 7f. When a call, goto or return statement is encountered, the bank must have been restored to 0 using restore-bank. If the compiler detects a possible bank mismatch at variable access, call or return time, it will issue a warning (see below). By default, warnings are fatal. If you want them to be non-fatal, use the word non-fatal-warnings. The word fatal-warnings will restore the default situation. You can suspend the warnings temporarily by using suspend-warnings. This will leave the old warnings state on the stack. This integer must be given back to restore-warnings to come back to the latest warnings state. Variables are not automatically initialized to zero, as this would waste too much code when it is not needed.
If you want a variable explicitly initialized, use create and , such as in: create attempts 3 , An easy habit to fall into is using a whole 8-bit variable to store a simple binary flag, then using another variable for another flag and so on. This not only wastes data memory, but setting and testing such values requires more code. The flag word lets you define a 1-bit variable which can then be modified with the bit-set, bit-clr and bit-toggle words, and tested with the bit-set? and bit-clr? words. The following example shows how to manipulate an emergency boolean value and exit a loop when it is set: flag emergency : xxx ... ; : yyy ... ; : do-futile-things ( -- ) emergency bit-clr begin xxx ( complex program that can set emergency ) yyy ( complex program that can set emergency ) emergency bit-set? until ; Tables can be created either in RAM (with run-time initialization, which is costly), in program flash memory or in the internal EEPROM. The following words allow you to create tables: table ( "name" -- ) Start a RAM table ftable ( "name" -- ) Start a program flash table eetable ( "name" -- ) Start an EEPROM table t, ( n -- ) Add one byte to the table table> ( "b1 .. bn" -- ) Add bytes b1 to bn to the table end-table ( -- ) End table declaration The following code shows a table called substitutions and a substitute word which takes a byte in area old-key and sets it at the right place in area new-key, according to the substitutions table. ftable substitutions table> 14 4 13 1 2 15 11 8 3 10 6 12 5 9 0 7 table> 0 15 7 4 14 2 13 1 12 6 12 11 9 5 3 8 table> 4 1 14 8 13 6 2 11 15 12 9 7 3 10 5 0 table> 15 12 8 2 4 9 1 7 5 11 3 14 10 0 6 13 end-table : substitute ( n -- ) dup old-key + @ swap substitutions new-key + ! ; Jump tables can be created by including libjtable.fs and using the following words: jtable ( "name" -- ) Create a jump table t, ( word -- ) Add an element to the jump table end-table ( -- ) End table declaration Here is an example of jump table usage: needs libjtable.fs : print ( -- ) ... ; : word1 ... ; : word2 ... ; : word3 ... ; jtable myjump word1 t, word2 t, word3 t, end-table main : main c" About to invoke jumptable" print 1 myjump c" Back from jumptable" print ; You can also define constants for simpler access and emulation of the sorely-missed ' and execute words: jtable execute word1 t, 0 constant 'word1 word2 t, 1 constant 'word2 word3 t, 2 constant 'word3 end-table main : main c" About to invoke jumptable" print 'word1 execute c" Back from jumptable" print ; A main word indicates that the next address is the main program. Use for example: main : main-program ( -- ) (do initialisations) (call mainloop) ; You can switch to macro mode by using the macro word. You get back to target mode by using the target word. You can include files using include file or needs file (the latter prevents multiple inclusions). There is a full prefix assembler included. Use code and end-code to define words written in assembler. ]asm and asm[ let you respectively switch to assembler mode and back during the compilation of a Forth word. The label: defining word can be used to define a label that will then be used with goto. See the piceeprom.fs file for an example. If you want to use interrupts, use include picisr.fs Two words respectively save and restore the context around interrupt handling code: isr-save ( -- ) isr-restore-return ( -- ) Note that isr-save is called automatically; you do not need to call it explicitly.
Also, the word isr is provided to notify that the next address is the isr handler. For example, you can write an interrupt handler with: isr : interrupt-handler ( -- ) (interrupt handling code here) isr-restore-return ; Do not forget that the return stack depth is only eight. An interrupt can occur at any time unless you mask them or unset the GIE bit. Two facility words that manipulate GIE are also provided: enable-interrupts ( -- ) disable-interrupts ( -- ) You have to dispatch the interrupts and clear the interrupt bits manually before you return from the handler. You can also use the following two words to save the status of the GIE bit and disable interrupts, and to restore the previous GIE status: suspend-interrupts ( -- ) restore-interrupts ( -- ) Versions that do nothing are provided in the default compiler. Useful versions are redefined when using picisr.fs. Because of this, include picisr.fs as soon as possible, before other files and before using enable-interrupts and disable-interrupts. Other included files may fail to act properly if you don't. In Forth, argument passing is done on the stack. However, if you want to transmit the top-of-stack value in the w register (for example if a word typically takes a constant which is put on the stack just before calling it), you can use the defining word :: instead of :. All calls will automatically use this convention. Similarly, you can use the defining word ::code instead of code to force the caller to load the top-of-stack value into the w register. If you want to return a value in the w register, you can use the word >w which loads the top-of-stack into the w register before every exit point. After calling a word which returns its result in the w register, you can call w> to put the w register value onto the stack. Alternatively, you can use the return-in-w word after the word definition to indicate that the last-defined word returns its argument in the w register. In this case, all the callers will automatically append w> when needed. To ease bit manipulation, the following words are defined for port p: and! ( n p -- ) logical and with n /and! ( n p -- ) logical and with ~n /and ( a b -- c ) logical and of a and ~b or! ( n p -- ) logical or with n xor! ( n p -- ) logical xor with n invert! ( p -- ) invert content bit-set ( p b -- ) set bit b of p (both have to be constants) bit-clr ( p b -- ) clear bit b of p (both have to be constants) bit-toggle ( p b -- ) toggle bit b of p (both have to be constants) bit-mask ( p b -- m ) put 1<<b on stack bit-set? ( p b -- m ) put bit-mask (non-zero) on stack if bit b of p is set, zero otherwise bit-clr? ( p b -- f ) true if bit b of p is clear Six words help designate bits or port pins: bit ( n addr "name" -- ) ( Runtime: -- addr n ) pin-a ( n "name" -- ) ( Runtime: -- porta n ) pin-b ( n "name" -- ) ( Runtime: -- portb n ) pin-c ( n "name" -- ) ( Runtime: -- portc n ) pin-d ( n "name" -- ) ( Runtime: -- portd n ) pin-e ( n "name" -- ) ( Runtime: -- porte n ) For example, you can create a pin designating an error LED and manipulate it using: 3 pin-b error-led \ Error LED is on port B3 : error error-led bit-set ; \ Signal error : no-error error-led bit-clr ; \ Clear error To ease reading, the words high, low, high?, low? and toggle are aliases for, respectively, bit-set, bit-clr, bit-set?, bit-clr? and bit-toggle. You can change the direction of a pin by using >input or >output after a pin defined with pin-x.
For example, to set the error LED port as an output, use: error-led >output A value in memory can be decremented using the 1 mem -! sequence. However, this will be optimized into the decf mem,f instruction, which does not set the c flag. Usually this is fine; however, if you want to propagate a carry, you need this flag to be set. For that case, you can use the 1 >w mem w-! sequence, which generates movlw 1; subwf mem,f and sets the carry. Note that propagating the carry while incrementing is easier: the z flag is set if needed by the incf mem,f instruction generated by the use of the 1 mem +! sequence. If z is set, a carry has been generated. Here is an example to increment a 16-bit value held at location bar: : inc-16 ( -- ) 1 bar 1+ +! z bit-set? if 1 bar +! then ; This will generate the following code: ; name: inc-16 incf 0x34,f btfsc 0x03,2 incf 0x33,f return The word clrwdt is available from Forth to clear the watchdog timer. The word sleep is available from Forth to enter sleep mode. By using include piceeprom.fs you have access to new words allowing you to access the PIC EEPROM: ee@ ( a -- b ) read the content of a and return it ee! ( b a -- ) write b into a Also, in any case, you can store data in EEPROM using those words: eecreate ( "name" -- ) similar to create but in EEPROM space ee, ( b -- ) store byte in EEPROM s" ( <ccc>" -- eaddr n ) store string in EEPROM l" ( <ccc>" -- eaddr n ) store string + character 13 in EEPROM Two words allow reading from and writing to the flash memory when the file picflash.fs is included with include picflash.fs Those words manipulate a 14-bit program memory cell whose 13-bit address is in EEADRH:EEADR. The data is read from or stored to EEDATH:EEDATA. flash-read ( -- ) flash-write ( -- ) If picisr.fs has been included before this file, interrupts will be properly disabled around flash writes. The libstrings.fs library defines two words useful for working with strings stored in flash memory: c" ( <ccc>" -- ) Define a packed 7-bit zero-terminated string str-char Get next char of previously encountered c" Note that c" must be used in target mode only and will not work properly in macro mode. The following example assumes that you have an emit word working, which outputs one character. : print ( -- ) begin str-char dup while emit repeat drop ; : greetings ( -- ) c" Welcome to this PicForth program" print ; It is necessary to include picflash.fs before libstrings.fs. A map can be generated in interactive mode using the map word. Two multitaskers have been implemented. A priority-based cooperative multitasker allows you to concurrently run several independent tasks. Each task should execute in a short time and will be called again next time (the entry point does not change). This looks like a state machine. To use this multitasker, use include priotasker.fs in your program. The following words can be used to define tasks (the entry point for the task is the next defined word): task ( prio "name" -- ) Define a new task with priority prio. By default, this task will be active. You can use the start and stop words to control it. Those words can be used from an interrupt handler. task-cond ( prio "name" -- ) Define a new task with priority prio. By default, this task is inactive. You can enable it by using the signal word on it. If you use signal N times, then the task will be run exactly N times. signal can be used from an interrupt handler.
task-idle ( -- ) Define a new task which will be executed unconditionally when there is nothing else to do. Such a task cannot be stopped. task-set ( bit port prio -- ) Define a new task with priority prio that will be run when bit bit of port port is set. task-clr ( bit port prio -- ) Define a new task with priority prio that will be run when bit bit of port port is clear. Priority 0 is the highest one, while priority 255 corresponds to the lowest (idle) priority. You should use priorities in the range 0-254 for your own tasks. The multitasker is run by using the word multitasker. This word takes care of scheduling the highest priority tasks first. It also clears the watchdog once per round. The multitasker looks for all tasks of priority 0 ready to execute. If it finds some, it executes them and starts over. If it doesn't, it looks for priority 1 tasks ready to execute. If it finds some, it executes them and starts over. If it doesn't, etc. It does this up to priority 255. Since each word is called each time from the beginning, there is no need to maintain task-specific stacks, as the stack has to be considered empty. The basic cooperative multitasker is much simpler. It allows you to relinquish the CPU whenever you want, provided that you are not in the middle of a call (a context switch only occurs during top-level calls). To use this multitasker, use include multitasker.fs at the top of your program. The following words are defined: task ( -- ) Create a new task with its own data stack. The task entry point will be the next defined word. yield ( -- ) Relinquish control so that another task gets a chance to execute. multitasker ( -- ) Code for the multitasker program. This word never returns. This multitasker makes no use of the return stack at all. However, each task takes four to six program words for initialization and five program words to resume the task, plus three or four program words per yield instruction. Context switching takes at most 20 instruction cycles (4 microseconds max on a 20MHz PIC, 20 microseconds on a 4MHz PIC), and typically 16. Also, the multitasker takes care of clearing the watchdog timer at each round. Each task needs 3 bytes in RAM to save its context and 8 bytes for its data stack. You can change the size of the data stack by redefining the task-stack-size value after including multitasker.fs. This value must be changed in meta mode. However, one must take care to call yield only from the toplevel word of each task, as the return stack is not addressable on this architecture. To be precise, only one task in the system is able to call yield from words deeper than the toplevel, but this is not recommended.
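As a sketch of how the basic multitasker fits together (the pin assignments and the words blink-fast and blink-slow are made up for the example): 0 pin-b led1 1 pin-b led2 task : blink-fast begin led1 toggle yield again ; task : blink-slow begin led2 toggle yield yield again ; main : program led1 >output led2 >output multitasker ; Each task toggles its LED and then yields so the other task makes progress; since multitasker never returns, it is the last word called by program.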
Some libraries can be used to enhance your application: libcmove.fs implementation of the ANS Forth cmove word libextra.fs implementation of the ANS Forth rot and 2swap words as well as -rot libfetch.fs implementation of the ANS Forth @ word with arbitrary addresses libjtable.fs jump tables liblshift.fs implementation of the ANS Forth lshift word with arbitrary shift libnibble.fs nibble and character conversions libroll.fs implementation of the ANS Forth roll word as well as its counterpart -roll librshift.fs implementation of the ANS Forth rshift word with arbitrary shift libstore.fs implementation of the ANS Forth ! word with arbitrary addresses libstrings.fs counted strings in flash memory On the PIC16F87x, the configuration can be set with the following words: set-fosc ( n -- ) Choose oscillator mode (default: fosc-rc) fosc-lp Low power fosc-xt External oscillator fosc-hs High-speed oscillator fosc-rc RC circuit set-wdte ( flag -- ) Watchdog timer enable (default: true) set-/pwrte ( flag -- ) Power-on timer disable (default: true) set-boden ( flag -- ) Brown-out detect enable (default: true) set-boren ( flag -- ) (alias for set-boden) set-lvp ( flag -- ) Low voltage programming (default: true) set-cpd ( flag -- ) EEPROM protection disable (default: true) set-wrt ( flag -- ) FLASH protection disable (default: true) set-debug ( flag -- ) In-circuit debugger disable (default: true) set-cp ( n -- ) Code protection (default: no-cp) no-cp No protection full-cp Full protection xxxxx Anything you want, with the right bits set (see datasheet) If you use a PIC16F88, you can give the following extra parameters to set-fosc: fosc-extclk fosc-intrc-io fosc-intrc-clk fosc-extrc-io fosc-extrc-clk Also, the following words can be used: set-fcmen ( -- ) set-ieso ( -- ) This compiler release suffers from some known limitations. Note that most of them (if not all) will disappear in subsequent releases. PicForth tries very hard to generate efficient code. The optimizer, which is on by default, can be turned off by using disallow-optimizations and back on by using allow-optimizations. The following optimizations are implemented: Tail recursion is implemented at exit and ; points. : x y z ; generates the following code for word x: call y goto z The sequence recurse exit also benefits from tail recursion. Redundant pushes and pops are eliminated. For example, the (particularly useless) dup dup drop sequence generates movf 0x00,w decf 0x04,f movwf 0x00 which in fact corresponds to a single dup. Also, the following sequence drop 3 generates movlw 0x03 movwf 0x00 while drop 0 gives clrf 0x00 Most operations use direct-access and literal variants when possible. The following sequence 9 and generates movlw 0x09 andwf 0x00,f Also, combined with the redundant push/pop eliminations, the following code dup 9 and if ... generates movf 0x00,w andlw 0x09 btfsc 0x03,2 The following sequence (with current and next being variables) current @ 1+ 7 and next ! generates movf 0x3B,w addlw 0x01 andlw 0x07 movwf 0x3C Short (one instruction) if actions are transformed into reversed conditions. For example, the following word: \ This word clears port a0 if port c2 is high, and sets port b1 \ in any case. : z portc 2 high? if porta 0 low then portb 1 high ; generates the following code: btfsc 0x07,2 ; skip next instruction if port c2 is low bcf 0x05,0 ; set port a0 low bsf 0x06,1 ; set port b1 high return ; return from word The compiler tries to remove useless bank manipulations. The following word :: ee@ ( addr -- n ) eeadr !
eepgd bit-set rd bit-set eedata @ ; generates: bsf 0x03,6 ; select bank 2 movwf 0x0d ; write into eeadr (in bank 2) bsf 0x03,5 ; select bank 3 bsf 0x0c,7 ; set bit eepgd of eecon1 (in bank 3) bsf 0x0c,0 ; set bit rd of eecon1 (in bank 3) bcf 0x03,5 ; select bank 2 movf 0x0c,w ; read eedata (in bank 2) bcf 0x03,6 ; select bank 0 decf 0x04,f ; decrement stack pointer movwf 0x00 ; place read value on top of stack return If an operation result is stored on the stack then popped into w, the operation is modified to target w directly. For example, the following word: : timer ( n -- ) invert tmr0 ! ; generates comf 0x00,w incf 0x04,f movwf 0x01 return If an and operation before a test can be rewritten using a bit test operation, it will be. For example, the code: checksum @ 1 and if parity-error exit then ... will be compiled as: btfsc 0x33,0 goto 0x037 ; parity-error ... Using an explicit bit test yields the same result: porta 3 high? if 1+ then will be compiled as: btfsc 0x05,3 incf 0x00,f Before a test, if the z status bit already holds the right result, no extra test will be generated. 9 and dup if 1+ then will be compiled as: movlw 0x09 andwf 0x00,f btfss 0x03,2 incf 0x00,f Also, the compiler detects operations which modify neither w nor the top of stack. For example, dup checksum xor! dcc-high ! will be compiled as movf 0x00,w xorwf 0x6c,f incf 0x04,f movwf 0x5b The following word: : action-times ( n -- ) begin action 1- dup while repeat drop ; will be compiled as: call 0x022 ; call action decfsz 0x00,f goto 0x027 ; jump to the call action above incf 0x04,f return The word: :: x ( n -- flag ) 3 < if a then ; generates addlw 0xFD btfss 0x03,0 goto a return The < test did not cause the value to be normalized to 0 or -1, as this is not needed. If a word is marked as returning the top of stack in w using return-in-w and the last instruction before returning is a constant push, retlw will be used. For example, : check-portd ( -- n/w ) portd 3 high? if 3 >w exit then portd 4 high? if 4 >w exit then 0 >w ; return-in-w will be compiled as btfsc 0x08,3 retlw 0x03 btfsc 0x08,4 retlw 0x04 retlw 0x00 The following code: variable v1 variable v2 : op v1 v2 -! ; generates the following code: movf 0x22,w subwf 0x23,f return Some files are included as examples, with a Makefile. E.g., to build booster.hex, run make booster.hex: booster.fs code for a booster which handles overload and overheat signals; this also serves as an example for the priority-based multitasker generator.fs code for a DCC signal generator based on serial commands silver.fs code that runs on a silver card (a smartcard with a 16f876 and a 24c64 serial eeprom) taskexample.fs example of multitasking code using the basic multitasker controller.fs another multitasking example, used to control multiple peripherals and inputs using a serial link i2cloader.fs a flash and eeprom loader using an I2C bus to reprogram the PIC spifcard.fs production code used in the Ambience European project; includes I2C code, dialog with a bq2010 chip, an interface with a smartcard reader using a TDA8004, interrupt code implementing an in-house watchdog, workarounds for I2C bugs, LED blinking, and analog-to-digital conversion
Fractal Tree by George Peck, Lynbrook High School edited by Geoff Schmit, Naperville North High School Introduction Imagine you were describing how to draw a tree. You might say: 1. Draw a vertical line 2. At the top of the line, draw two smaller lines ("branches") in a v shape 3. At the ends of each of those two branches, draw two even smaller branches 4. Keep drawing smaller and smaller branches until they become too small to draw. Procedure Here is some sample code to get you started with the Applet subclass and the Tree class: import java.awt.*; import java.applet.*; public class FractalTreeExample extends Applet { Tree joyce; public void init() { setBackground(Color.black); joyce = new Tree(); } public void paint(Graphics g) { joyce.draw(g); } } class Tree { double fractionLength; int smallestBranch; double branchAngle; int startX, startY, endX, endY; public Tree() { fractionLength = .8; smallestBranch = 10; branchAngle = .2; startX = 400; startY = 700; endX = 400; endY = 600; } public void draw(Graphics graphics) { graphics.setColor(Color.green); graphics.drawLine(startX, startY, endX, endY); // call recursive branch method... } } Notice that the Applet subclass creates a Tree object and delegates the drawing to it. The Tree class holds the parameters that control: • how much smaller the branches are • how small the branches will get • the angle between the branches. Adjusting these parameters will change the appearance of the tree. Now add a branch method to the Tree class. The branch method will first draw two smaller branches off the end of the tree. It will then call itself recursively to draw two smaller branches off the ends of the previous branches. Extensions • add Scrollbar objects to adjust the parameters that affect the appearance of the tree • modify the algorithm to: • add asymmetry • adjust angles • adjust thickness • adjust color
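A sketch of what that recursive branch method could look like inside the Tree class (the trigonometry and parameter choices here are one possible interpretation, not the handout's official solution):

// Draw two branches starting at (x, y); angle 0 points straight up.
void branch(Graphics g, int x, int y, double length, double angle) {
    // Base case: stop recursing once branches are too small to draw.
    if (length < smallestBranch) {
        return;
    }
    double left = angle - branchAngle;   // lean the left branch
    double right = angle + branchAngle;  // lean the right branch
    int leftX = x + (int) (length * Math.sin(left));
    int leftY = y - (int) (length * Math.cos(left));
    int rightX = x + (int) (length * Math.sin(right));
    int rightY = y - (int) (length * Math.cos(right));
    g.drawLine(x, y, leftX, leftY);
    g.drawLine(x, y, rightX, rightY);
    // Each new branch is fractionLength times the size of its parent.
    branch(g, leftX, leftY, length * fractionLength, left);
    branch(g, rightX, rightY, length * fractionLength, right);
}

The draw method would then replace its placeholder comment with a call such as branch(graphics, endX, endY, (startY - endY) * fractionLength, 0);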
TypeScript is often described as the solution for making large-scale JavaScript projects manageable. One of the arguments supporting this claim is that having type information helps catch a lot of mistakes that are easy to make and hard to spot. Adopting TypeScript might not always be an option, either because you are dealing with an old codebase or even by choice. Whatever the reason for sticking with plain JavaScript, it is possible to get a nearly identical development experience in terms of having intellisense and development-time error highlighting. That is the topic of this blog post. VS Code and JavaScript intellisense If you create a new index.js in VS Code and type conso followed by Ctrl+space (or the Mac equivalent) you'll see something similar to this: The source of the intellisense data is the type definition files that are bundled with VS Code; namely, console is defined in [VS Code installation folder]/code/resources/app/extensions/node_modules/typescript/lib/lib.dom.d.ts. All the files with the .d.ts extension in that folder contribute to what you see in the intellisense dropdown. TypeScript definition files are one of the sources of intellisense in VS Code. They are not the only source though. Another source is what VS Code infers from your code. Here's an example of declaring a variable and assigning it a value. The intellisense is coherent with the type of that value: (and yes, you can call .blink() or .bold() on a string, even in Node.js) Here's another example where the type is inferred from the usage of a variable in a class definition: And in addition to type inference, VS Code will add all the unique words in the file you are editing to the intellisense dropdown: Even though the type inference available in VS Code is very clever, it's also very passive. It won't warn you if you call myInstance.pethodName() instead of myInstance.methodName(): We usually only figure this out at runtime when we get a TypeError: myInstance.pethodName is not a function. It turns out that VS Code has a flag, turned off by default, that when turned on will run type checking through your code and report errors: The flag is called checkJs and the easiest way to enable it is to open "Show all commands" (Ctrl+Shift+p), type "Open workspace settings", and then activate checkJs: You might discover that after turning on checkJs your file turns into a Christmas tree of red squiggles. Some of these might be legitimate errors, but sometimes they might not. It doesn't happen often, but I've encountered instances where the type definition files for a JavaScript library don't match the latest version (how this happens will become clearer later in the blog post). If this happens and you are sure that the code you have is correct, you can always add this at the very top of the file: //@ts-nocheck This will turn off type checking for the whole file. If you just want to ignore a statement, you add this immediately before the statement to be ignored: //@ts-ignore variableThatHoldsANumber = false; //this won't be reported as an error Manually providing type information in JavaScript There are situations where it is impossible for type inference to figure out the type information about a variable. For example, if you call a REST endpoint and get a list of orders: const orders = await getOrdersForClient(clientId); There's not enough information available for any useful type inference there.
The “shape” of what an order looks like depends on what the server that hosts the REST api sends to us. We can, however, specify what an order looks like using JsDoc comments, and those will be picked up by VS Code and used to provide intellisense. Here’s how that could look like for the orders: /** @type {Array<{id: string, quantity: number, unitPrice: number, description: string}>} */ const orders = await getOrdersForClient(clientId); Here’s how that looks like in VS Code when you access an order: Even though this can look a little bit cumbersome it’s almost as flexible having TypeScript type information. Also, you can add it just where you need it. I found that if I’m not familiar with a legacy codebase that has no documentation, adding this type of JsDoc annotations can be really helpful in the process of becoming familiar with the codebase. Here are some examples of what you can do with JsDoc type annotations: Define a type and use it multiple times /** * @typedef {object} MyType * @property {string} aString * @property {number} aNumber * @property {Date} aDate */ /** @type {MyType} */ let foo; /** @type {MyType} */ let bar; If you use @typedef in a file that is a module (for VS Code to assume this there only needs to be an exports statement in the file) you can even import the type information from another file. For example if @typedef is in a file named my-type.js and you type this from another-file.js in the same folder: /** @type {import('./my_type').MyType} */ let baz; The intellisense for the baz variable will be based on MyType‘s type information. Function parameters and return values Another scenario where type inference can’t do much is regarding the parameter types in function definitions. For example: function send(type, args, onResponse) { //... } There’s not much that can be inferred here regarding the parameters type, args and onResponse. It’s the same for the return value of the function. Thankfully there’s JsDoc constructs that we can use to describe all of those, here’s how it would look like if type is a string, args can be anything and onResponse is an optional function function with two arguments, error and result and finally the return value is a Promise or nothing. It’s a pretty involved example, but it serves to illustrate that there’s really no restrictions on the type information we can provide. Here’s how that would look like: /** * You can add a normal comment here and that will show up when calling the function * @param {string} type You can add extra info after the params * @param {any} args As you type each param you'll see the intellisense updated with this description * @param {(error: any, response: any) => void} [onResponse] * @returns {Promise<any> | void} You can add extra an description here after returns */ function send(type, args, onResponse) { //... } And here it is in action: Class and inheritance One thing that happens often is that you have to create a class that inherits from other classes. Sometimes these classes can even be templeted. This is very common for example with React where it’s useful to have intellisense for the props and state of a class component. 
Here’s how we could do that for a component named ClickCounter whose state is a property named count which is a number and that also has a component prop named message of type string: /** @extends {React.Component<{message: string}, {count: number}>} */ export class ClickCounter extends React.Component { //this @param jsdoc statement is required if you want intellisense //in the ctor, to avoid repetition you can always define a @typedef //and reuse the type /** @param {{message: string}} props */ constructor(props) { super(props); this.state = { count: 0, } } render() { return ( <div onClick={_ => this.setState({ count: this.state.count + 1 })}>{this.props.message} - {this.state.count} </div> ); } } This is how it looks like when you are using your component: This also possible in function components, for example this function component would have the same intellisense on usage than the class component from the example above: /** * @param {object} props * @param {string} props.message */ export function ClickCounter(props) { const [count, setCount] = useState(0); return ( <div onClick={_ => setCount(count + 1)}>{props.message} - {count} </div> ); } Casting Sometimes you might want to force a variable to be of a particular type, for example imagine you have a variable that can be either a number or a string and you have this: if (typeof numberOrString === 'string') { //there will be intellisense for substring const firstTwoLetters = /** @type {string} */ (numberOrString).substring(0, 2); } Use type information from other modules Imagine you are writing code in Node.js and you have the following function: function doSomethignWithAReadableStream(stream) { //... } To enable intellisense for the stream parameter as a readable stream we need the type information that is in the stream module. We have to use the import syntax like this: /** @param {import('stream').Readable} stream */ function doSomethindWithAReadableStream(stream) { //... } There might be cases though where the module you want to import the type from isn’t available out of the box (as stream is). In those cases you can install an npm package with just the type information from DefinitelyTyped. There’s even a search tool for looking up the correct package with the typing information you need for a specific npm package. For example, imagine you wanted typing information for mocha‘s options, you’d install the type definition package: npm install @types/mocha --save-dev And then you could reference them in JsDoc and get intellisense for the options: Providing type information to consumers of your module/package If you were to create a module that exposed functions and classes with the JsDoc type annotations that we’ve been looking at in this blog post, you’d get intellisense for them when that module is consumed from another module. There’s an alternative way of doing this though, with type definition files. Say you have this very simple module using CommonJS and this module is defined in a file named say-hello.js: function sayHello(greeting) { console.log(greeting); } module.exports = { sayHello } If you create a file named say-hello.d.ts (and place it in the same folder as say-hello.js) with this inside: export function sayHello(message: string): void; And you import this function in another module, you’ll get the the typing information defined in the .d.ts file. In fact, this is the type of file that the TypeScript compiler generates (along with the .js files) when you compile with the --declaration flag. 
As a small aside, say that you are creating an npm module written totally in JavaScript that you want to share. Also, you haven't included any JsDoc type annotations but you still want to provide intellisense. You can create a type declaration file, usually named index.d.ts or main.d.ts, and update your package.json with the types (or typings) property set to the path to that file: { "name": "the-package-name", "author": "Rui", "version": "1.0.0", "main": "main.js", "types": "index.d.ts" } The type declarations that you put in index.d.ts define the intellisense you'll get when you consume the npm package. The contents of index.d.ts don't even have to match the code in the module (in fact that's what the type definition packages in DefinitelyTyped do). I'm intentionally leaving the topic of how to write TypeScript definition files very light here because it's a very dense topic, and it's usually easy to find out how to provide type information for most cases in the official docs. That's it. If you are thinking, what exactly were the two ways to take advantage of types in JavaScript? They are JsDoc's type annotations and TypeScript definition files. It's not really a good title though; I just wanted to try out something like "3 ways you can improve… blah blah" since they seem to captivate people's attention better (couldn't think of a third way, so there's only 2, but they can be life changing, right? Imagine those config objects with 30 properties that you have to look up every time you need them and that you always mistype at least once! Not anymore). In the end the title is not great because these two ways are not mutually exclusive. Also, a .d.ts file does not affect the file it "describes", i.e. if you create a type declaration file for module my-module.js, and in that type declaration file you specify that functionA receives a parameter of type number, and you invoke that function from functionB (also inside my-module), you won't get intellisense for functionA. Only modules that require/import my-module will take advantage of the type information in the type declaration file.
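To make that last point concrete, here is a small sketch reusing the say-hello files from above (the file name another-file.js is made up for the example):

// another-file.js: a consumer of say-hello.js
const { sayHello } = require('./say-hello');

sayHello('hi'); // intellisense for the parameter comes from say-hello.d.ts
// With checkJs enabled, the next call is flagged, because the declaration
// file says the parameter must be a string:
sayHello(42);

Inside say-hello.js itself, though, a call to sayHello would not be checked against say-hello.d.ts; the declaration file only benefits consumers of the module.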
""" A Python dict implementation. """ import collections MINSIZE = 8 PERTURB_SHIFT = 5 dummy = "<dummy key>" class Entry(object): """ A hash table entry. Attributes: * key - The key for this entry. * hash - The has of the key. * value - The value associated with the key. """ __slots__ = ("key", "value", "hash") def __init__(self): self.key = None self.value = None self.hash = 0 def __repr__(self): return "<Entry: key={0} value={1}>".format(self.key, self.value) class Dict(object): """ A mapping interface implemented as a hash table. Attributes: * used - The number of entires used in the table. * filled - used + number of entries with a dummy key. * table - List of entries; contains the actual dict data. * mask - Length of table - 1. Used to fetch values. """ __slots__ = ("filled", "used", "mask", "table") def __init__(self, arg=None, **kwargs): self.clear() self._update(arg, kwargs) @classmethod def fromkeys(cls, keys, value=0): """ Return a new dictionary from a sequence of keys. """ d = cls() for key in keys: d[key] = value return d def clear(self): """ Clear the dictionary of all data. """ self.filled = 0 self.used = 0 self.mask = MINSIZE - 1 self.table = [] # Initialize the table to a clean slate of entries. for i in range(MINSIZE): self.table.append(Entry()) def pop(self, *args): """ Remove and return the value for a key. """ have_default = len(args) == 2 try: v = self[args[0]] except KeyError: if have_default: return args[1] raise else: del self[args[0]] return v def popitem(self): """ Remove and return any key-value pair from the dictionary. """ if self.used == 0: raise KeyError("empty dictionary") entry0 = self.table[0] entry = entry0 i = 0 if entry0.value is None: # The first entry in the table's hash is abused to hold the index to # the next place to look for a value to pop. i = entry0.hash if i > self.mask or i < i: i = 1 entry = self.table[i] while entry.value is None: i += 1 if i > self.mask: i = 1 entry = self.table[i] res = entry.key, entry.value self._del(entry) # Set the next place to start. entry0.hash = i + 1 return res def setdefault(self, key, default=0): """ If key is in the dictionary, return it. Otherwise, set it to the default value. """ val = self._lookup(key).value if val is None: self[key] = default return default return val def _lookup(self, key): """ Find the entry for a key. """ key_hash = hash(key) i = key_hash & self.mask entry = self.table[i] if entry.key is None or entry is key: return entry free = None if entry.key is dummy: free = entry elif entry.hash == key_hash and key == entry.key: return entry perturb = key_hash while True: i = (i << 2) + i + perturb + 1; entry = self.table[i & self.mask] if entry.key is None: return entry if free is None else free if entry.key is key or \ (entry.hash == key_hash and key == entry.key): return entry elif entry.key is dummy and free is None: free = dummy perturb >>= PERTURB_SHIFT assert False, "not reached" def _resize(self, minused): """ Resize the dictionary to at least minused. """ newsize = MINSIZE # Find the smalled value for newsize. while newsize <= minused and newsize > 0: newsize <<= 1 oldtable = self.table # Create a new table newsize long. newtable = [] while len(newtable) < newsize: newtable.append(Entry()) # Replace the old table. self.table = newtable self.used = 0 self.filled = 0 # Copy the old data into the new table. 
        for entry in oldtable:
            if entry.value is not None:
                self._insert_into_clean(entry)
            elif entry.key is dummy:
                entry.key = None
        self.mask = newsize - 1

    def _insert_into_clean(self, entry):
        """
        Insert an item in a clean dict. This is a helper for resizing.
        """
        i = entry.hash & self.mask
        new_entry = self.table[i]
        perturb = entry.hash
        while new_entry.key is not None:
            i = (i << 2) + i + perturb + 1
            new_entry = self.table[i & self.mask]
            perturb >>= PERTURB_SHIFT
        new_entry.key = entry.key
        new_entry.value = entry.value
        new_entry.hash = entry.hash
        self.used += 1
        self.filled += 1

    def _insert(self, key, value):
        """
        Add a new value to the dictionary or replace an old one.
        """
        entry = self._lookup(key)
        if entry.value is None:
            self.used += 1
            if entry.key is not dummy:
                self.filled += 1
        entry.key = key
        entry.hash = hash(key)
        entry.value = value

    def _del(self, entry):
        """
        Mark an entry as free with the dummy key.
        """
        entry.key = dummy
        entry.value = None
        self.used -= 1

    def __getitem__(self, key):
        value = self._lookup(key).value
        if value is None:
            # Check if we're a subclass.
            if type(self) is not Dict:
                # Try to call the __missing__ method.
                missing = getattr(self, "__missing__", None)
                if missing is not None:
                    return missing(key)
            raise KeyError("no such key: {0!r}".format(key))
        return value

    def __setitem__(self, key, what):
        # None is used as a marker for empty entries, so it can't be in a
        # dictionary.
        assert what is not None and key is not None, \
            "key and value must not be None"
        old_used = self.used
        self._insert(key, what)
        # Maybe resize the dict.
        if not (self.used > old_used and
                self.filled*3 >= (self.mask + 1)*2):
            return
        # Large dictionaries (> 5000) are only doubled in size.
        factor = 2 if self.used > 5000 else 4
        self._resize(factor*self.used)

    def __delitem__(self, key):
        entry = self._lookup(key)
        if entry.value is None:
            raise KeyError("no such key: {0!r}".format(key))
        self._del(entry)

    def __contains__(self, key):
        """
        Check if a key is in the dictionary.
        """
        return self._lookup(key).value is not None

    def __eq__(self, other):
        if not isinstance(other, Dict):
            try:
                # Try to coerce the other to a Dict, so we can compare it.
                other = Dict(other)
            except TypeError:
                return NotImplemented
        if self.used != other.used:
            # They're not the same size.
            return False
        # Look through the table and compare every entry, breaking out early
        # if we find a difference.
        for entry in self.table:
            if entry.value is not None:
                try:
                    bval = other[entry.key]
                except KeyError:
                    return False
                if not bval == entry.value:
                    return False
        return True

    def __ne__(self, other):
        return not self == other

    def keys(self):
        """
        Return a list of keys in the dictionary.
        """
        return [entry.key for entry in self.table if entry.value is not None]

    def values(self):
        """
        Return a list of values in the dictionary.
        """
        return [entry.value for entry in self.table
                if entry.value is not None]

    def items(self):
        """
        Return a list of key-value pairs.
        """
        return [(entry.key, entry.value) for entry in self.table
                if entry.value is not None]

    def __iter__(self):
        return DictKeysIterator(self)

    def itervalues(self):
        """
        Return an iterator over the values in the dictionary.
        """
        return DictValuesIterator(self)

    def iterkeys(self):
        """
        Return an iterator over the keys in the dictionary.
        """
        return DictKeysIterator(self)

    def iteritems(self):
        """
        Return an iterator over key-value pairs.
        """
        return DictItemsIterator(self)

    def _merge(self, mapping):
        """
        Update the dictionary from a mapping.
""" for key in mapping.keys(): self[key] = mapping[key] def _from_sequence(self, seq): for double in seq: if len(double) != 2: raise ValueError("{0!r} doesn't have a length of 2".format( double)) self[double[0]] = double[1] def _update(self, arg, kwargs): if arg: if isinstance(arg, collections.Mapping): self._merge(arg) else: self._from_sequence(arg) if kwargs: self._merge(kwargs) def update(self, arg=None, **kwargs): """ Update the dictionary from a mapping or sequence containing key-value pairs. Any existing values are overwritten. """ self._update(arg, kwargs) def get(self, key, default=0): """ Return the value for key if it exists otherwise the default. """ try: return self[key] except KeyError: return default def __len__(self): return self.used def __repr__(self): r = ["{0!r} : {1!r}".format(k, v) for k, v in self.iteritems()] return "Dict({" + ", ".join(r) + "})" collections.Mapping.register(Dict) class DictIterator(object): def __init__(self, d): self.d = d self.used = self.d.used self.len = self.d.used self.pos = 0 def __iter__(self): return self def next(self): # Check if the dictionary has been mutated under us. if self.used != self.d.used: # Make this state permanent. self.used = -1 raise RuntimeError("dictionary size changed during interation") i = self.pos while i <= self.d.mask and self.d.table[i].value is None: i += 1 self.pos = i + 1 if i > self.d.mask: # We're done. raise StopIteration self.len -= 1 return self._extract(self.d.table[i]) __next__ = next def _extract(self, entry): return getattr(entry, self.kind) def __len__(self): return self.len class DictKeysIterator(DictIterator): kind = "key" class DictValuesIterator(DictIterator): kind = "value" class DictItemsIterator(DictIterator): def _extract(self, entry): return entry.key, entry.value Sunday, October 19, 2008 Pure Python Dictionary Implementation For those curious about how CPython's dict implementation works, I've written a Python implementation using the same algorithms. Aside from the education value, it's pretty useless because it doesn't support None as a value and is extremely slow. You can get the source in a Bazaar repo: Posted by Benjamin at 9:44 AM 5 comments: Monday, October 13, 2008 First impressions of darcs This. Posted by Benjamin at 2:55 PM 4 comments: Wednesday, October 1, 2008 Python 2.6 released! Fire the cannons! Begin the fireworks! Scream at the top of your lungs! Clink your glasses! Python 2.6 is here! Download it, and learn what's new. This is my first final python release on the core team, and I'm quite proud of our baby. :) This is my first final python release on the core team, and I'm quite proud of our baby. :) Posted by Benjamin at 3:10 PM No comments:
Watch tweets on Twitter's user pages or search pages. Project description Twittcher (for twitter-watcher) is a Python module to make bots that will watch a Twitter user page or search page, and react to the tweets. It's simple, small (currently ~150 lines of code), and doesn't require any registration on Twitter or dev.twitter.com, as it doesn't depend on the Twitter API (instead it parses the HTML). Twittcher is open-source software originally written by Zulko and released under the MIT licence. The project is hosted on Github, where you can report bugs, propose improvements, etc. Install If you have pip, install twittcher by typing in a terminal: (sudo) pip install twittcher Otherwise, download the sources (on Github or PyPI), and in the same directory as the setup.py file, type this in a terminal: (sudo) python setup.py install Twittcher requires the Python package BeautifulSoup (a.k.a. bs4), which will be automatically installed when twittcher is installed. Examples of use There is currently no documentation for Twittcher, but the following examples should show you everything you need to get started. 1. Print the tweets of a given user Every 120 seconds, print all the new tweets by John D. Cook: from twittcher import UserWatcher UserWatcher("JohnDCook").watch_every(120) Result: Kicking off some simulations before I quit work for the day. #dejavu Author: JohnDCook Date: 15:43 - 24 juil. 2014 Link: "Too often we enjoy the comfort of opinion without the discomfort of thought." -- John F. Kennedy, Author: JerryWeinberg Date: 13:18 - 24 juil. 2014 Link: The default action of UserWatcher is to print the tweets, but you can provide any other action instead. For instance, here is how to only print the tweets that are actually written by John D. Cook (not the ones he retweets): from twittcher import UserWatcher def my_action(tweet): if tweet.username == "JohnDCook": print(tweet) UserWatcher("JohnDCook", action=my_action).watch_every(120) 2. Control a distant machine through Twitter! Every 60 seconds, for any of my new tweets of the form cmd: my_command, run my_command in a terminal. Using simple tweets I can control any distant computer running this script. import subprocess from twittcher import UserWatcher def my_action(tweet): """ Execute the tweet's command, if any. """ if tweet.text.startswith("cmd: "): subprocess.Popen( tweet.text[5:], shell=True ) # Watch my account and react to my tweets bot = UserWatcher("Zulko___", action=my_action) bot.watch_every(60) For instance, the tweet cmd: firefox will open Firefox on the computer, and the tweet cmd: echo "Hello" will have the computer print Hello in a terminal. 3. Watch search results and send alert mails Every 20 seconds, send me all the new tweets in the Twitter search results for chocolate milk. from twittcher import TweetSender, SearchWatcher sender = TweetSender(smtp="smtp.gmail.com", port=587, login="tintin.zulko@gmail.com", password="fibo112358", # be nice, don't try. to_addrs="tintin.zulko@gmail.com", # where to send sender_id = "chocolate milk") bot = SearchWatcher("chocolate milk", action=sender.send) bot.watch_every(20) 4. Multibot watching If you want to run several bots at once, make sure that you leave a few seconds between the requests of the different bots. Here is how you print the new tweets of John D. Cook, Mathbabe, and Eolas.
Each of them is watched every minute, with 20 seconds between the requests of two bots:

import time
import itertools
from twittcher import UserWatcher

bots = [UserWatcher(user)
        for user in ["JohnDCook", "mathbabedotorg", "Maitre_Eolas"]]

for bot in itertools.cycle(bots):
    bot.watch()
    time.sleep(20)

5. Saving the tweets

A bot can save to a file the tweets that it has already seen, so that in future sessions it will remember not to process these tweets again, in case they still appear on the watched page.

from twittcher import SearchWatcher

bot = SearchWatcher("chocolate milk", database="choco.db")
bot.watch_every(20)
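In the same spirit, a small hypothetical variation: log every tweet a bot sees to a CSV file. Only tweet.username and tweet.text are taken from the examples above; the CSV layout and file name are our own choices:

import csv
from twittcher import UserWatcher

def save_tweet(tweet):
    # Append each new tweet as a row: username, text
    with open("tweets.csv", "a", newline="") as f:
        csv.writer(f).writerow([tweet.username, tweet.text])

UserWatcher("JohnDCook", action=save_tweet).watch_every(120)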
https://pypi.org/project/twittcher/0.1.04/
CC-MAIN-2018-22
refinedweb
668
64.71
I have a Ruby app and I am sending emails with this format found in the documentation:

Net::SMTP.start('your.smtp.server', 25) do |smtp|
  smtp.send_message msgstr, 'from@address', 'to@address'
end

def send_notification(exception)
  msgstr = <<-END_OF_MESSAGE
From: Exchange Errors <exchangeerrors@5112.mysite.com>
To: Edmund Mai <emai@mysite.com>
Subject: test message
Date: Sat, 23 Jun 2001 16:26:43 +0900
Message-Id: <unique.message.id.string@mysite.com>

This is a test message.
  END_OF_MESSAGE

  Net::SMTP.start('localhost', 25) do |smtp|
    smtp.send_message(msgstr, "exchangeerrors@5112.mysite.com", "emai@mysite.com")
  end
end

So I looked at the documentation and it looks like Net::SMTP doesn't support this. In the documentation it says this:

What is This Library NOT?

This library does NOT provide functions to compose internet mails. You must create them by yourself. If you want better mail support, try RubyMail or TMail. You can get both libraries from RAA. ()

So I looked into the MailFactory gem (), which actually uses Net::SMTP under the hood:

require 'net/smtp'
require 'rubygems'
require 'mailfactory'

mail = MailFactory.new()
mail.to = "test@test.com"
mail.from = "sender@sender.com"
mail.subject = "Here are some files for you!"
mail.text = "This is what people with plain text mail readers will see"
mail.html = "A little something <b>special</b> for people with HTML readers"
mail.attach("/etc/fstab")
mail.attach("/some/other/file")

Net::SMTP.start('smtp1.testmailer.com', 25, 'mail.from.domain',
                fromaddress, password, :cram_md5) { |smtp|
  mail.to = toaddress
  smtp.send_message(mail.to_s(), fromaddress, toaddress)
}

and now it works!
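For what it's worth, the newer mail gem wraps the same SMTP plumbing and lets you set the subject directly; a rough sketch (untested here, server details assumed to match the question's localhost setup):

require 'mail'

Mail.defaults do
  delivery_method :smtp, address: 'localhost', port: 25
end

Mail.deliver do
  from    'exchangeerrors@5112.mysite.com'
  to      'emai@mysite.com'
  subject 'test message'
  body    'This is a test message.'
end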
https://codedump.io/share/jE5gEgREpguE/1/ruby-and-sending-emails-with-netsmtp-how-to-specify-email-subject
CC-MAIN-2017-39
refinedweb
260
53.68
Repository: parquet-format
Updated Branches:
  refs/heads/master 65e851eae -> 041708da1

PARQUET-686: Add Order to store the order used for min/max stats.

This adds a new enum, `Order`, that will be set to the order used to produce the min and max values in all `Statistics` objects (at the page level). `Order` has 8 symbols: `SIGNED`, `UNSIGNED`, and 6 symbols for custom orderings.

This also adds a `CustomOrder` struct that is used to map the custom order symbols to string descriptors, such as [order keywords used by ICU collating sequences](). `CustomOrder` mappings are stored in the file footer.

Author: Ryan Blue <blue@apache.org>

Closes #46 from rdblue/PARQUET-686-add-stats-ordering and squashes the following commits:

f878c34 [Ryan Blue] PARQUET-686: Remove Order enum.
9447fb8 [Ryan Blue] PARQUET-686: Use "is" instead of "must be".
ffbb60b [Ryan Blue] PARQUET-686: Store ColumnOrder as a union.
c6e43b0 [Ryan Blue] PARQUET-686: Add new min_value and max_value stats.
eed4d47 [Ryan Blue] PARQUET-686: Add clarifications from review comments.
9962df8 [Ryan Blue] PARQUET-686: Remove is_ascending and number columns starting with 1.
faa9edb [Ryan Blue] PARQUET-686: Add order specs to logical types.
4534062 [Ryan Blue] PARQUET-686: Add ColumnOrders to FileMetaData.

Project:
Commit:
Tree:
Diff:

Branch: refs/heads/master
Commit: 041708da1af52e7cb9288c331b542aa25b68a2b6
Parents: 65e851e
Author: Ryan Blue <blue@apache.org>
Authored: Mon Apr 17 11:23:41 2017 -0700
Committer: Ryan Blue <blue@apache.org>
Committed: Mon Apr 17 11:23:41 2017 -0700

----------------------------------------------------------------------
 LogicalTypes.md                | 30 +++++++++++++++++++
 src/main/thrift/parquet.thrift | 59 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 88 insertions(+), 1 deletion(-)
----------------------------------------------------------------------

diff --git a/LogicalTypes.md b/LogicalTypes.md
index c411dbf..29cf527 100644
--- a/LogicalTypes.md
+++ b/LogicalTypes.md
@@ -37,6 +37,8 @@ may require additional metadata fields, as well as rules for those fields.
 `UTF8` may only be used to annotate the binary primitive type and indicates
 that the byte array should be interpreted as a UTF-8 encoded character string.
 
+The sort order used for `UTF8` strings is `UNSIGNED` byte-wise comparison.
+
 ## Numeric Types
 
 ### Signed Integers
@@ -55,6 +57,8 @@ allows.
 implied by the `int32` and `int64` primitive types if no other annotation is
 present and should be considered optional.
 
+The sort order used for signed integer types is `SIGNED`.
+
 ### Unsigned Integers
 
 `UINT_8`, `UINT_16`, `UINT_32`, and `UINT_64` annotations can be used to
@@ -70,6 +74,8 @@ allows.
 `UINT_8`, `UINT_16`, and `UINT_32` must annotate an `int32` primitive type and
 `UINT_64` must annotate an `int64` primitive type.
 
+The sort order used for unsigned integer types is `UNSIGNED`.
+
 ### DECIMAL
 
 `DECIMAL` annotation represents arbitrary-precision signed decimal numbers of
@@ -98,6 +104,15 @@ integer. A precision too large for the underlying type (see below) is an error.
 A `SchemaElement` with the `DECIMAL` `ConvertedType` must also have both
 `scale` and `precision` fields set, even if scale is 0 by default.
 
+The sort order used for `DECIMAL` values is `SIGNED`. The order is equivalent
+to signed comparison of decimal values.
+
+If the column uses `int32` or `int64` physical types, then signed comparison of
+the integer values produces the correct ordering. If the physical type is
+fixed, then the correct ordering can be produced by flipping the
+most-significant bit in the first byte and then using unsigned byte-wise
+comparison.
+
 ## Date/Time Types
 
 ### DATE
@@ -106,30 +121,40 @@ A `SchemaElement` with the `DECIMAL` `ConvertedType` must also have both
 annotate an `int32` that stores the number of days from the Unix epoch, 1
 January 1970.
 
+The sort order used for `DATE` is `SIGNED`.
+
 ### TIME\_MILLIS
 
 `TIME_MILLIS` is used for a logical time type with millisecond precision,
 without a date. It must annotate an `int32` that stores the number of
 milliseconds after midnight.
 
+The sort order used for `TIME\_MILLIS` is `SIGNED`.
+
 ### TIME\_MICROS
 
 `TIME_MICROS` is used for a logical time type with microsecond precision,
 without a date. It must annotate an `int64` that stores the number of
 microseconds after midnight.
 
+The sort order used for `TIME\_MICROS` is `SIGNED`.
+
 ### TIMESTAMP\_MILLIS
 
 `TIMESTAMP_MILLIS` is used for a combined logical date and time type, with
 millisecond precision. It must annotate an `int64` that stores the number of
 milliseconds from the Unix epoch, 00:00:00.000 on 1 January 1970, UTC.
 
+The sort order used for `TIMESTAMP\_MILLIS` is `SIGNED`.
+
 ### TIMESTAMP\_MICROS
 
 `TIMESTAMP_MICROS` is used for a combined logical date and time type with
 microsecond precision. It must annotate an `int64` that stores the number of
 microseconds from the Unix epoch, 00:00:00.000000 on 1 January 1970, UTC.
 
+The sort order used for `TIMESTAMP\_MICROS` is `SIGNED`.
+
 ### INTERVAL
 
 `INTERVAL` is used for an interval of time. It must annotate a
@@ -144,8 +169,13 @@ example, there is no requirement that a large number of days should be
 expressed as a mix of months and days because there is not a constant
 conversion from days to months.
 
+The sort order used for `INTERVAL` is `UNSIGNED`, produced by sorting by
+the value of months, then days, then milliseconds with unsigned comparison.
+
 ## Embedded Types
 
+Embedded types do not have type-specific orderings.
+
 ### JSON
 
 `JSON` is used for an embedded JSON document. It must annotate a `binary`
----------------------------------------------------------------------

diff --git a/src/main/thrift/parquet.thrift b/src/main/thrift/parquet.thrift
index e89bc80..47812ab 100644
--- a/src/main/thrift/parquet.thrift
+++ b/src/main/thrift/parquet.thrift
@@ -28,6 +28,17 @@ namespace java org.apache.parquet.format
  * with the encodings to control the on disk storage format.
  * For example INT16 is not included as a type since a good encoding of INT32
  * would handle this.
+ *
+ * When a logical type is not present, the type-defined sort order of these
+ * physical types are:
+ * * BOOLEAN - false, true
+ * * INT32 - signed comparison
+ * * INT64 - signed comparison
+ * * INT96 - signed comparison
+ * * FLOAT - signed comparison
+ * * DOUBLE - signed comparison
+ * * BYTE_ARRAY - unsigned byte-wise comparison
+ * * FIXED_LEN_BYTE_ARRAY - unsigned byte-wise comparison
  */
 enum Type {
   BOOLEAN = 0;
@@ -202,13 +213,33 @@ enum FieldRepetitionType {
  * All fields are optional.
  */
 struct Statistics {
-   /** min and max value of the column, encoded in PLAIN encoding */
+   /**
+    * DEPRECATED: min and max value of the column. Use min_value and max_value.
+    *
+    * Values are encoded using PLAIN encoding, except that variable-length byte
+    * arrays do not include a length prefix.
+    *
+    * These fields encode min and max values determined by SIGNED comparison
+    * only. New files should use the correct order for a column's logical type
+    * and store the values in the min_value and max_value fields.
+    *
+    * To support older readers, these may be set when the column order is
+    * SIGNED.
+    */
    1: optional binary max;
    2: optional binary min;
    /** count of null value in the column */
    3: optional i64 null_count;
    /** count of distinct values occurring */
    4: optional i64 distinct_count;
+   /**
+    * Min and max values for the column, determined by its ColumnOrder.
+    *
+    * Values are encoded using PLAIN encoding, except that variable-length byte
+    * arrays do not include a length prefix.
+    */
+   5: optional binary max_value;
+   6: optional binary min_value;
 }
@@ -547,6 +578,23 @@ struct RowGroup {
   4: optional list<SortingColumn> sorting_columns
 }
 
+/** Empty struct to signal the order defined by the physical or logical type */
+struct TypeDefinedOrder {}
+
+/**
+ * Union to specify the order used for min, max, and sorting values in a column.
+ *
+ * Possible values are:
+ * * TypeDefinedOrder - the column uses the order defined by its logical or
+ *   physical type (if there is no logical type).
+ *
+ * If the reader does not support the value of this union, min and max stats
+ * for this column should be ignored.
+ */
+union ColumnOrder {
+  1: TypeDefinedOrder TYPE_ORDER;
+}
+
 /**
  * Description for file metadata
  */
@@ -576,5 +624,14 @@ struct FileMetaData {
  * e.g. impala version 1.0 (build 6cf94d29b2b7115df4de2c06e2ab4326d721eb55)
  **/
  6: optional string created_by
+
+  /**
+   * Sort order used for each column in this file.
+   *
+   * If this list is not present, then the order for each column is assumed to
+   * be Signed. In addition, min and max values for INTERVAL or DECIMAL stored
+   * as fixed or bytes should be ignored.
+   */
+  7: optional list<ColumnOrder> column_orders;
 }
http://mail-archives.apache.org/mod_mbox/parquet-commits/201704.mbox/%3Ca671c3a57f024ae1a32c5c4cabd8bc51@git.apache.org%3E
CC-MAIN-2019-51
refinedweb
1,291
56.96
In this chapter, we will study how to do inline plotting with the Jupyter Notebook. In order to display the plot inside the notebook, you need to initiate plotly's notebook mode as follows:

from plotly.offline import init_notebook_mode
init_notebook_mode(connected = True)

Keep the rest of the script as it is and run the notebook cell by pressing Shift+Enter. The graph will be displayed offline inside the notebook itself.

import plotly
plotly.tools.set_credentials_file(username = 'lathkar', api_key = '************')

from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected = True)

import plotly
import plotly.graph_objs as go
import numpy as np
import math  # needed for definition of pi

xpoints = np.arange(0, math.pi*2, 0.05)
ypoints = np.sin(xpoints)

trace0 = go.Scatter(
    x = xpoints,
    y = ypoints
)
data = [trace0]

plotly.offline.iplot({"data": data, "layout": go.Layout(title = "Sine wave")})

The Jupyter notebook output is a sine wave plot rendered inside the cell. The plot output shows a tool bar at the top right. It contains buttons for download as png, zoom in and out, box and lasso select, and hover.
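If you want the same figure outside a notebook, plotly's offline plot() function writes a standalone HTML file instead of rendering inline; a minimal sketch (the filename is our own choice):

import plotly
import plotly.graph_objs as go
import numpy as np
import math

xpoints = np.arange(0, math.pi*2, 0.05)
trace0 = go.Scatter(x = xpoints, y = np.sin(xpoints))

# plot() writes sine.html and opens it in the default browser
plotly.offline.plot({"data": [trace0], "layout": go.Layout(title = "Sine wave")},
                    filename = "sine.html")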
https://www.tutorialspoint.com/plotly/plotly_plotting_inline_with_jupyter_notebook.htm
CC-MAIN-2022-21
refinedweb
175
61.22
PD controller is developed to control a quadrotor in 2-dimensional space. The quadrotor has two inputs: motor thrusts from the left and the right motors. They cause a net force and torque applied to the quadrotor:

F = u1 + u2
M = (u2 - u1) * L

The net force F and torque M will be used as the two inputs to the quadrotor to simplify subsequent calculations. The new schematic diagram becomes the planar quadrotor acted on by F and M alone.

3 unknowns (y, z, phi) so 3 equations are needed.

Newton's 2nd law (y-axis):    m * ay = -F * sin(phi)           (eq. 1)
Newton's 2nd law (z-axis):    m * az = F * cos(phi) - m * g    (eq. 2)
Newton's 2nd law (rotation):  Ixx * phidotdot = M              (eq. 3)

States: Position and velocity are chosen as state variables because together they are enough to propagate the motion forward in time.

State vector:  x = [y, z, phi, vy, vz, phidot]
Input vector:  u = [F, M]
Output vector: the tracked quantities [y, z, phi], taken from the state.

Rewrite (eq. 1) in these new notations: m * vydot = -F * sin(phi)
Rewrite (eq. 2) in these new notations: m * vzdot = F * cos(phi) - m * g
Rewrite (eq. 3) in these new notations: Ixx * phidotdot = M

Rearrange equations to express xdot(t) and y(t) in terms of x(t) and u(t). Rewrite as a vector equation:

xdot = [vy, vz, phidot, -F*sin(phi)/m, F*cos(phi)/m - g, M/Ixx]

The dynamics model is nonlinear because it is not of the form xdot = Ax + Bu. PD controller is designed for linear systems so the dynamics model is first linearized. Model is linearized at the equilibrium point, that is, at hover configuration. At this point, the following is true:

phi ≈ 0,  F ≈ m * g

Trigonometric functions can also be linearized at this equilibrium point using first order Taylor approximation:

sin(phi) ≈ phi,  cos(phi) ≈ 1

Using these facts, the linearized dynamics model can be written as:

ay = -g * phi
az = F/m - g
phidotdot = M/Ixx          (eq. 4)

The objective is to design a controller (find the input force and torque functions) which makes the quadrotor track a trajectory (position, velocity, and acceleration as a function of time). PD controller can be written as a commanded acceleration for each tracked variable:

a_command = a_desired + Kv * (v_desired - v) + Kp * (p_desired - p)

(eq. 4) can be solved for the inputs:

phi_command = -ay_command / g
F = m * (az_command + g)
M = Ixx * phidotdot_command

Substitute the PD controller to obtain the control equations:

phi_c = -1/g * (ay_des + Kv_y * (vy_des - vy) + Kp_y * (y_des - y))
F = m * (g + az_des + Kv_z * (vz_des - vz) + Kp_z * (z_des - z))
M = Ixx * (Kv_phi * (phidot_c - phidot) + Kp_phi * (phi_c - phi))

There are still unresolved variables: the commanded angle phi and its derivatives. Because phi cannot be commanded directly, let phi_command = phi_desired. The derivatives of phi_command can be approximated as 0:

phidot_c ≈ 0,  phidotdot_c ≈ 0

The derivative of phi_command is proportional to the third derivative of y (jerk in the horizontal direction). The second derivative of phi_command is proportional to the fourth derivative of y (snap in the horizontal direction). For non-aggressive trajectories with small jerk and snap, these can be approximated as 0.

Python code simulation. Quadrotor hovers for 1 second and then tries to reach a target position of z = 0 m and y = 0.5 m.

from math import sin, cos
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Constants
g = 9.81       # Gravitational acceleration (m/s^2)
m = 0.18       # Mass (kg)
Ixx = 0.00025  # Mass moment of inertia (kg*m^2)
L = 0.086      # Arm length (m)

# Returns the desired position, velocity, and acceleration at a given time.
# Trajectory is a step (y changes from 0 to 0.5 at t=1)
#
# t     : Time (seconds), scalar
# return: Desired position & velocity & acceleration, y, z, vy, vz, ay, az
def trajectory(t):
    if t < 1:
        y = 0
    else:
        y = 0.5
    z = 0
    vy = 0
    vz = 0
    ay = 0
    az = 0
    return y, z, vy, vz, ay, az

# Returns force and moment to achieve desired state given current state.
# Calculates using PD controller.
#
# x     : Current state, [y, z, phi, vy, vz, phidot]
# y_des : desired y
# z_des : desired z
# vy_des: desired y velocity
# vz_des: desired z velocity
# ay_des: desired y acceleration
# az_des: desired z acceleration
# return: Force and moment to achieve desired state
def controller(x, y_des, z_des, vy_des, vz_des, ay_des, az_des):
    Kp_y = 0.4
    Kv_y = 1.0
    Kp_z = 0.4
    Kv_z = 1.0
    Kp_phi = 18
    Kv_phi = 15

    phi_c = -1/g * (ay_des + Kv_y * (vy_des - x[3]) + Kp_y * (y_des - x[0]))
    F = m * (g + az_des + Kv_z * (vz_des - x[4]) + Kp_z * (z_des - x[1]))
    M = Ixx * (Kv_phi * (-x[5]) + Kp_phi * (phi_c - x[2]))
    return F, M

# Limit force and moment to prevent saturating the motor
# Clamp F and M such that u1 and u2 are between 0 and 1.7658
#
#   u1      u2
#  _____   _____
# |_____________|
#
# F = u1 + u2
# M = (u2 - u1)*L
def clamp(F, M):
    u1 = 0.5*(F - M/L)
    u2 = 0.5*(F + M/L)
    if u1 < 0 or u1 > 1.7658 or u2 < 0 or u2 > 1.7658:
        print(f'motor saturation {u1} {u2}')
    u1_clamped = min(max(0, u1), 1.7658)
    u2_clamped = min(max(0, u2), 1.7658)
    F_clamped = u1_clamped + u2_clamped
    M_clamped = (u2_clamped - u1_clamped) * L
    return F_clamped, M_clamped

# Equation of motion
# dx/dt = f(t, x)
#
# t     : Current time (seconds), scalar
# x     : Current state, [y, z, phi, vy, vz, phidot]
# return: First derivative of state, [vy, vz, phidot, ay, az, phidotdot]
def xdot(t, x):
    y_des, z_des, vy_des, vz_des, ay_des, az_des = trajectory(t)
    F, M = controller(x, y_des, z_des, vy_des, vz_des, ay_des, az_des)
    F_clamped, M_clamped = clamp(F, M)

    # First derivative, xdot = [vy, vz, phidot, ay, az, phidotdot]
    return [x[3],
            x[4],
            x[5],
            -F_clamped * sin(x[2]) / m,
            F_clamped * cos(x[2]) / m - g,
            M_clamped / Ixx]

x0 = [0, 0, 0, 0, 0, 0]  # Initial state [y0, z0, phi0, vy0, vz0, phidot0]
t_span = [0, 20]         # Simulation time (seconds) [from, to]

# Solve for the states, x(t) = [y(t), z(t), phi(t), vy(t), vz(t), phidot(t)]
sol = solve_ivp(xdot, t_span, x0)

# Plot
fig, axs = plt.subplots(3)
axs[0].plot(sol.t, sol.y[0])  # y vs t
axs[1].plot(sol.t, sol.y[1])  # z vs t
axs[2].plot(sol.t, sol.y[2])  # phi vs t
plt.show()
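As a quick sanity check on motor saturation, the per-motor thrusts can be recovered from the simulated states by re-running the controller; this sketch only reuses the functions above and the F = u1 + u2, M = (u2 - u1)*L relations:

# Recover the commanded motor thrusts u1, u2 along the solved trajectory
for t, state in zip(sol.t, sol.y.T):
    y_d, z_d, vy_d, vz_d, ay_d, az_d = trajectory(t)
    F, M = controller(state, y_d, z_d, vy_d, vz_d, ay_d, az_d)
    u1 = 0.5 * (F - M / L)
    u2 = 0.5 * (F + M / L)
    print(f"t={t:5.2f}s  u1={u1:.3f}N  u2={u2:.3f}N")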
https://cookierobotics.com/052/
CC-MAIN-2021-31
refinedweb
862
61.97
AxPipe is a very small and efficient class library implementing an extended stream and pipe abstraction similar to Unix shell pipes. Both push and pull model processing, run-time built multi-stage pipes, optional multi-threading applied per stage, splits, joins and a very small footprint are distinguishing features. It's fully documented, with sample code and some stock transformations included. The current implementation is for the Win32 API with Visual Studio 6 or 2002, but porting to Unix should pose little difficulty; only the threading and co-routine support needs modification.

AxPipe is a light-weight library easily learned and applied. Currently it is being used for development of the next version of AxCrypt. It's still in active development, and I'd appreciate any and all constructive feedback and suggestions for improvements, or reasons for moving to an existing framework or library that does it better/faster/smaller/neater/safer/whatever.

The source code comes with a complete demo application and complete documentation for every namespace, class, member, method, enumeration, define, template etc. All is also available on the project home page, where updated versions will be found as well. There you will find code samples and further overviews.

The following is a minimal complete program demonstrating the basic usage.

/*! \file
    \brief A most simple AxPipe program

    Read an input file, pass it through a do-nothing pipe, and write it
    to a file. No error checking at all for clarity.
*/
#include "stdafx.h"
#include "AxPipe.h"
#include "CFileMap.h"

/// \brief Just pass data along - do nothing with it.
class CPipeDoNothing : public AxPipe::CPipe {
    void Out(AxPipe::CSeg *pSeg) {
        // Insert your code here...
        Pump(pSeg);
    }
};

int _tmain(int argc, _TCHAR* argv[]) {
    AxPipe::CGlobalInit axpipeInit;     // It just has to be there.
    AxPipe::CSourceMemFile sourceFile;  // The source is a memory mapped file
    sourceFile.Init(argv[1]);           // Set the source file name

    // Append a stage, in its own thread
    sourceFile.Append(new AxPipe::CThread<CPipeDoNothing>);

    // Continue to append a sink to accept the output
    sourceFile.Append((new AxPipe::CSinkMemFile)->Init(argv[2]));

    // Initialize, Process the data, End process and Finalize.
    sourceFile.Open()->Drain()->Close()->Plug();

    // Check for any errors in any parts...
    if (sourceFile.GetErrorCode()) {
        // Print a clear text representation of the problem
        fprintf(stderr, sourceFile.GetErrorMsg());
        return sourceFile.GetErrorCode();
    }
    return 0;
}

If you've never seen the use for co-routines (CreateFiber() et al. in the Win32 API), here's a pretty nice one: turning a push model processing filter into a pull model one. Contrary, I believe, to most similar packages, AxPipe allows you to optionally, and at run-time, build processing pipes with different stages and with or without threading on a stage-by-stage basis.

It's very restrictive in its use of external dependencies, and is suitable for very small applications even with minimal run-time support. It does not use exceptions for that reason, and certainly not MFC or similar large frameworks.

There are also some 'stock' transformations included, developed on an as-needed basis for the projects where I use AxPipe, but the long-term idea is to gather a number of common sources, transformations and sinks into a very easily re-usable library of ready-made components.
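As an illustration of where your own code would go, here is a hypothetical stage modeled on CPipeDoNothing; only the Out()/Pump() pattern from the sample above is assumed, and the counter is our own addition:

/// \brief Count the segments passing through, otherwise pass data along.
class CPipeCountSegs : public AxPipe::CPipe {
    int m_nSegs;
public:
    CPipeCountSegs() : m_nSegs(0) {}
    void Out(AxPipe::CSeg *pSeg) {
        m_nSegs++;   // inspect or transform the segment here
        Pump(pSeg);  // forward it to the next stage in the pipe
    }
};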
http://www.codeproject.com/Articles/5822/AxPipe-Multi-Threaded-Binary-Stream-C-Class-Librar?msg=718497
CC-MAIN-2015-14
refinedweb
533
55.95
Rx-Volution! Coding expressively with RxJS

An introduction to RxJS and examples for React

By now it should be clear that building web apps sparks increasingly time-consuming challenges, and certainly we — the community — have been dealing with these challenges by coming up with nifty solutions. React, Redux, Webpack, Rollup, Styled Components… Pick your problem, pick your solution. And excuse the fact that "Your Favourite Library™" might be missing from this list… I won't talk about fatigue, but instead about everyday problems in JavaScript.

The huge growth in the number of libraries in the community is in my opinion not at all a sign of over-engineering, but it tackles some genuine problems in web development. And, oh boy, web development is more complex than it seems at first glance these days. I believe this rant by Sean Larkin sums it up best: The rate at which the community moves can be scary, but ultimately the Open Source Community is trying to come up with solutions, to rid us from everyday hardships while developing for the web (and beyond.)

In this post I want to talk about neither styling, nor tooling, nor even about view libraries. I'd like to talk about some hardships that exist in parallel to all the obvious paradigms and libraries that you might use every day.

Common Hardships of JS developers

I believe that there are a couple of common problems that naturally attract technical debt, prove to be more of a challenge than they should, or — simply put — cost us too much time. Some of them are:

- Sending and composing Requests
- Event handling
- Animations and transitions
- User input
- Data flow & state management

The problem that all of these things have in common is Concurrency. These are typical problems that need to be addressed with asynchronous code. If you think the answer to all of these problems is "Promises!" think again. Unfortunately only requests can nicely be wrapped in Promises, due to their singular result. Everything else doesn't fit nicely into a Promise-world. Instead of starting right at Reactive Programming and RxJS though, I want to shine a light on the hardships that I'm talking about first.

Let's start with an example: A range slider

To introduce an example that suffers from the aforementioned problems with concurrency, especially in terms of readability of the resulting code, I built a small range slider. This is a simple React component that renders a bar with a small knob that the user can slide horizontally. It stores the x value of the knob in the state.

As we would like to handle dragging, we use the knob element's onMouseDown handler and register a mousemove and a mouseup event listener. When the user presses down we set isDragging in the state to true, and after the user releases his mouse we set it to false. In the mousemove handler we recompute the knob's position according to the mouse position while this isDragging flag is truthy.

So to make this work we have to add and remove event listeners, add an event handler to our element and add state to know whether the user is currently dragging. "This is fine. Why would I want to improve this?", you might ask. While this is perfectly acceptable, the code becomes more unintuitive and the control flow harder to follow, as we'd add more code to this. And adding code becomes a definite possibility, if we wanted to add support for touch devices, or more knobs, or more complex behaviour. First of all, this is an article about RxJS.
And since you are reading it right now, I can probably assume that you have your reasons already. But jokes aside — What if I told you that you could write this in a way so that more people could intuitively understand what it does? What if this piece of code could scale in complexity, but stay perfectly readable? This is where Reactive Programming comes in: Reactive Programming allows you to write more expressive, declarative code.

What is Reactive Programming?

The quickest description of it would be that in Reactive Programming you code with data streams. Therefore the key to understanding it is thinking in streams.

"Reactive Programming is programming with asynchronous data streams." — André Staltz

Streams can be almost anything, since we're just talking about values over time. Thus a stream could consist of: input events, changing data, caches, tweets — anything you would want to react to. You might have used streams in some form already. Maybe you have worked with Node.js' streams, or maybe you have used Promises or Futures, which are comparable to streams with only a single value.

The key to understanding why programming with streams is beneficial is understanding how this abstraction promotes simplicity while coding. Streams are collections of data over time and can thus be generally treated like a normal collection — i.e., like arrays. Composing, filtering, transforming values that you receive over time is easily solved with a library like RxJS. Furthermore Reactive Programming allows us to describe our code like we would describe the problem. Thinking back to our "range slider" example, we would express our problem naturally:

"When we detect a mousedown on the knob, we let the knob's position follow the mouse's position with every mousemove, until we detect a mouseup."

Now that you've been through such a long introduction to understand the reasoning behind RxJS and Reactive Programming, you probably want to learn what RxJS is already, right?

RxJS, Reactive Extensions for JavaScript

Primarily RxJS is an Observable library. Observables are our primitive that represents the streams that have been mentioned above. They are still asynchronous collections — i.e., values over time. If you think about the three primitives that you use on a daily basis, there shouldn't really be any surprises. The synchronous and singular primitive is just a value; the synchronous and plural primitive is an iterable, e.g. an array. This is the first column in the table of JS primitives. In the second column we find the asynchronous and singular primitive: Promises.

Due to how the table is laid out, we immediately recognise that our day-to-day work is missing something crucial: What about a primitive that is asynchronous and plural, i.e. something capturing multiple values over time? That's where Observables come into the picture. It is a missing primitive that is set out to be standardised by the TC39 committee. It is currently a Stage 1 proposal, and might therefore not be missing for long.

How Observables work

This is a diagram of an all-too-familiar promise. The x-axis represents time, the marble represents the result of the promise, and the vertical stroke represents the completion of the promise. (Read more about Marble Diagrams)

While a Promise will resolve to a single value, or reject, an Observable can emit multiple values and complete, or error at any point in time: This example diagram's Observable emits "A", "B", and "C" and completes afterwards. Notice that the completion is separate.
So while a promise can only resolve to a single value or error,

promise.then(value => {
  // result
}, error => {
  // error
});

an Observable can emit a value, complete, or error.

observable$.subscribe(value => {
  // results
}, error => {
  // error
}, () => {
  // completion
});

Note that the completion and error don't hold any actual values, since we have a separate callback for them.

Creating Observables

Similarly to how any emitter can be wrapped inside a promise,

return new Promise((resolve, reject) => {
  resolve('Hello World!');
});

we can just as well wrap something inside an Observable.

return new Observable(observer => {
  observer.next('Hello World!');
  observer.complete();
});

Luckily — like with Promise.resolve — there are numerous constructors in RxJS that often make it unnecessary to create Observables from scratch:

Observable.of('Hello World!');
Observable.from([ 'A', 'B', 'C' ]);

If you're wondering why we're receiving an observer when we're creating an Observable, think of it as the consumer of the values. It is an object that holds our next, complete, and error callbacks. (Read more about observers)

Operators! The cool — functional — part of RxJS

Since Observables are essentially just collections, you'll find that you can use methods on them as you would expect from e.g. arrays. RxJS comes with operators to transform, filter, combine, and alter Observables in all kinds of different ways and flavours. That's also why it's sometimes called the "Lodash for Async". So it totally comes with all the methods that you have on Arrays as well: .map, .concat, .filter, .reduce, .every, .some, .includes, but also .first, .last, and lots more…

"Help! I can't remember any operators!"

Fear not, if you can't remember these operators on the spot. Websites like RxMarbles and the RxJS docs can help you find the right one. Furthermore I'd like to introduce five basic operators that will suffice for getting started with RxJS.

.map Operator

The .map operator transforms the values of an Observable using a transformation function… Just like Array's .map.

.filter Operator

The .filter operator filters the values of an Observable using a predicate function… Again, just like Array's .filter.

.concat Operator

The .concat operator concatenates Observables, like Array's .concat. Since time plays a role with Observables, the concatenation will start emitting the values of the second Observable only once the first one is completed.

.merge Operator

The .merge operator is similar to the .concat operator, but merges the emissions together — i.e., it starts Observables concurrently. Thus in the resulting Observable all values are combined and keep their original position on the time-axis.

.mergeMap Operator

You might know the .mergeMap operator by its alternative name .flatMap. It allows you to return a new Observable in the transformation function, whose values will be merged into the resulting Observable. There are different variations to this operator as well, like .switchMap, but these five should be enough for you to play around and get familiar with RxJS.

Keep in mind to learn as you go! Remember that you don't need to memorise all operators before you start. Feel free to look up operators as you go. Trust me, you will learn the ones you need in no time!

What does an operator do under the hood?

Before we move on, there's a crucial thing to say about operators. Implementation-details aside, an operator creates a new Observable that wraps around the original one.
Let's say you'd want to filter an Observable by applying a predicate; as values come in over time the filter operator would take them in, and emit only the ones that pass the predicate on the new Observable. This is important to grok, since it means that Observables adhere to all functional paradigms, not just the methods / operators themselves. A chain of observables can be combined. You can assign an Observable to a variable and reuse it in multiple, different chains, if you wanted to.

Let's refactor our range slider

With the newly gained knowledge we can now go back to our above example of the range slider, and rewrite its logic with RxJS. The code should look very familiar, but instead of directly using event listeners on window, we can use the fromEvent constructor, which wraps a listener inside an Observable. Right off the bat we see that 22 lines were deleted and 9 added. We don't really care about conciseness here, but it is a nice bonus. What we do care about is expressiveness, and we indeed see that the entire behaviour of our slider ends up inside the onMouseDown method. It ends up almost explaining itself:

onMouseDown() {
  mousemove$
    .takeUntil(mouseup$)
    .map(({ clientX }) => this.computeX(clientX))
    .subscribe(x => {
      this.setState({ x });
    });
}

It actually resembles the description of its behaviour that was stated earlier.

"When we detect a mousedown on the knob, we let the knob's position follow the mouse's position with every mousemove, until we detect a mouseup."

- The mousedown is handled by React so it doesn't fall into RxJS' territory
- We follow the mouse's position using our mousemove$ observable, the .map operator applying the x value calculation, and the actual setState inside the subscription
- We stop the observable, when we detect a mouseup event, using the takeUntil operator

To further explain what's going on with the takeUntil operator, here's another diagram: Again, we can describe complex behaviour using a single, expressive operator. In this case, we can interrupt and complete a source Observable, when a second Observable emits a value. Handy!

What if we bring onMouseDown into the RxJS-world as well?

You've probably noticed that the example is still using React's onMouseDown hook. What would the code look like if we had an Observable for that as well? If we had a stream down$ that emits when the user presses his mouse down, we want to emit the move$ events, until the user releases his mouse (up$), for each of these "down-events".

Edit: After receiving a reply asking what this would look like in the "Rx Range Slider" example above, I refactored the code to illustrate this behaviour, adding a stream for the mousedown events as well; a sketch follows below.

What is the share operator on the fromEvent observables doing?

Maybe you've also noticed that the fromEvent observables, at the top of the example, are using a share operator. This is a special operator that revolves around laziness of RxJS' Observables…

Observables are lazy! (by default)

With RxJS, Observables are always lazy by default. They won't do anything until they're being observed. That means that an observable chain won't start or do anything, until you actually subscribe to it. This is called a "cold observable" — for some reason. The opposite is a "hot observable", which can start running independently from its observer. You can turn cold into hot observables with operators like .share. In the case of the .share operator, the subscription underneath the operator is shared.
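Here is that sketch of the down$/move$/up$ composition described above. The knob element and the computeX/setState helpers are assumed stand-ins for the component's own element and methods; the operators are the ones introduced earlier:

const down$ = Observable.fromEvent(knob, 'mousedown').share();
const move$ = Observable.fromEvent(window, 'mousemove').share();
const up$   = Observable.fromEvent(window, 'mouseup').share();

// For each mousedown, follow the mouse until the next mouseup
const drag$ = down$.mergeMap(() =>
  move$
    .takeUntil(up$)
    .map(({ clientX }) => computeX(clientX))
);

drag$.subscribe(x => setState({ x }));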
The underlying observable runs as long as at least one observer is present and listening to it. For our example this means that we're only attaching one event listener for the shared observable to the window, and we're keeping it around as long as we're using it. It's useful to keep everything clean of unnecessary listeners. There's a lot more to hot observables, but it would maybe deserve an introduction of its own…

Let's try a bigger example: Search Typeahead

The Range Slider was nice for getting started, but this article should go through an example that touches more of the hardships I mentioned at the beginning. This example is a search field for movies. It involves sending requests and displaying suggestions to the user. If you've gone through another RxJS talk or intro, something like this was probably in there. Not even kidding. It's a very popular example.

This example has more complex behaviour than the previous one, and thus has a little bit more code, but to break it down:

- We're using RxJS' Ajax constructor that is wrapping the request inside an Observable, instead of a Promise, like the fetch API
- We're chaining a lot of operators and achieve debouncing by 100ms; filtering for search terms that are long enough; ignoring non-distinct values; and so on…

You don't have to go through this, but we can see that complex code scales nicely and can still be read from top to bottom, without losing track:

const search$ = Observable
  .fromEvent(this.input, 'keyup')
  .map(({ target }) => target.value)
  .filter(query => query.length > 2)
  .debounceTime(100)
  .distinctUntilChanged()
  .switchMap(query => fetchMovies(query))
  .map(res => res.response.results
    .map(({ id, title, release_date }) => ({
      id,
      label: title + ', ' + release_date
    })).slice(0, 5)
  );

Since this observable has to start when the component was mounted, has to stop once the component unmounts, and can run indefinitely, we subscribe to it inside React's componentDidMount lifecycle hook and store the subscription:

this.sub = search$.subscribe(...);

In the componentWillUnmount hook we use this subscription to cancel the ongoing observable:

this.sub.unsubscribe();

As expected, this tells us two things:

- Unlike Promises an Observable can emit results indefinitely
- Observables, unlike Promises, can be cancelled

Cancelling an ongoing Observable subscription

When we create an Observable from scratch we can actually return a function that will be called on tear down. That means that we can safely remove an event listener, or cancel an AJAX request.

new Observable(observer => {
  observer.next('Hello World!');

  return () => {
    // called on tear down
  };
});

For our example above, where we are using Observable.ajax, this will cause the switchMap operator to cancel the ongoing AjaxObservable before subscribing to the new one. Cancelling requests is often overlooked and neglected in web development, but with RxJS it becomes quite easy.

Before we're done… Why should you choose RxJS?

Apart from RxJS there are a couple of Observable libraries these days. Some of them are xstream, Bacon, most.js, and Kefir. Nothing stops you from using a different one, especially if you understand the differences and trade-offs. Nonetheless there are some reasons why you might want to choose RxJS over some or even all of them.

Crossplatform Pattern

RxJS is an implementation of the decade-old ReactiveX. Not only does that mean that it's proven, it means that implementations are available for a long list of other languages.
This can help you tremendously if you need to talk to developers on a different part of your company's stack who are using Rx as well: Java, .NET, Scala, Clojure, Swift, C++, Lua, Ruby, JRuby, Python, Groovy, Kotlin, PHP, Elixir… Furthermore RxJS has recently been rewritten from scratch using TypeScript by Netflix developers, with a special focus on performance and robustness. It is not as fast as most.js admittedly, but its community is a lot larger.

Used at reputable companies

Some of you might be relieved to hear this, some might not care at all, but Rx and RxJS are used at a lot of reputable companies, which should put your mind at ease.

Thanks for staying along until the end!

Dear reader, I hope this was useful and you've learned a great deal about RxJS and Reactive Programming. This is a rewrite of my talk on RxJS, which you can find at talk.philpl.com.
https://medium.com/@philpl/rx-volution-coding-expressively-with-rxjs-6b7924a12c1b
CC-MAIN-2018-30
refinedweb
3,065
62.78
Building and Rendering Your First Joystick Component

October 29th, 2021

What You Will Learn in This Tutorial

How to build a simple app and write a component using CheatCode's @joystick.js/ui framework and render it to the browser using the res.render() function.

When you created your app, if you open up the package.json file at the root of the project, you will see two dependencies listed: @joystick.js/ui and @joystick.js/node. Even though these are separate packages, they're designed to work together. To make that happen, we use the @joystick.js/cli package installed above. When we ran joystick start above, that connection was established.

In the project that we created, you will see a folder /ui at the root of the project with three folders inside of it: /ui/components, /ui/layouts, and /ui/pages. When creating components in Joystick using the @joystick.js/ui package, we use these three types to stay organized:

- /ui/components contains miscellaneous Joystick components that are intended to be rendered alongside other components or composed together in pages.
- /ui/layouts contains Joystick components that are meant to be wrappers that render static content (e.g., navigation elements or a footer) along with a dynamic page.
- /ui/pages contains Joystick components that represent pages or URLs in our application that are intended to be compositions of HTML and other components mapped to a route.

For this tutorial, we're going to focus on the last type, pages. The page we're going to create will render some dummy elements for us to demonstrate all of the features of a Joystick component. First, let's create the folder and file for the component. We'll call it dashboard and store it in /ui/pages/dashboard/index.js:

/ui/pages/dashboard/index.js

import ui from '@joystick.js/ui';

const Dashboard = ui.component({
  render: () => {
    return `
      <div class="dashboard">
        <h4>Dashboard</h4>
      </div>
    `;
  },
});

export default Dashboard;

To kick things off, we want to set up a skeleton for our component. Above, we're importing the ui object exported from the @joystick.js/ui package we hinted at earlier. To set up our component, we create a new variable Dashboard and assign it to a call to ui.component(), passing an object containing the definition for our component. At the bottom of our file, we make sure to export the Dashboard variable as the default as Joystick requires us to do this (we'll see why in a bit).

Focusing on the render property we've set on the object passed to ui.component(), this is assigned to a function which is responsible for rendering the HTML markup for our component. In Joystick, components are built with pure HTML. Any HTML that you'd write in a plain .html file will work in a Joystick component.

In our render() function, we return a string written using backticks (``) so that we can take advantage of JavaScript string interpolation (allowing us to embed dynamic values like variables or the result of calling a function inside of our HTML). Inside of that string, we write the HTML for our component: here, just a <div></div> tag with a class and an <h4></h4> tag inside of that to get us started. Though it may not look like much, if we were to render this now, we'd see our <h4></h4> rendered on screen.
Before we do that, let's flesh out our HTML a bit more and add in some CSS:

/ui/pages/dashboard/index.js

import ui from '@joystick.js/ui';

const Dashboard = ui.component({
  css: `
    .dashboard {
      width: 100%;
      max-width: 1000px;
      margin: 0 auto;
    }

    .dashboard h4 {
      margin-bottom: 20px;
    }

    .dashboard input {
      display: block;
      padding: 20px;
      font-size: 16px;
      border: 1px solid #ddd;
      margin-bottom: 20px;
    }

    .dashboard button {
      border: none;
      background: #000;
      color: #fff;
      font-size: 16px;
      padding: 20px;
      border-radius: 3px;
    }
  `,
  render: () => {
    return `
      <div class="dashboard">
        <h4>Dashboard</h4>
        <input type="text" />
        <button class="say-hello">Say Hello</button>
      </div>
    `;
  },
});

export default Dashboard;

Same component, just adding a few things. Down in the render(), we've added an <input /> and a <button></button> (we'll put these to use in a bit). The important part here is the new css property. Again, using backticks (which, in addition to interpolation, allow us to write a multi-line string in JavaScript), we've written some CSS for the markup down in our render() function.

The idea here is that we want to isolate CSS on a per-component basis. This keeps us organized, but also avoids style collisions when using a single CSS file (or multiple CSS files imported into a single file). Behind the scenes, when our component is rendered, Joystick will take this CSS and automatically scope it to our component. This is how we avoid issues with the cascade in CSS creating overlapping or breaking styles. Styles are directly mapped to your component.

In addition to dynamic scoping, Joystick will also automatically inject this CSS into the <head></head> of the HTML we render in the browser, meaning styles are automatically rendered alongside your component's HTML. Focusing on the CSS itself, notice that we're referencing elements and class names inside of our component's HTML: no need for anything special; Joystick will handle the tricky stuff for us.

/ui/pages/dashboard/index.js

import ui from '@joystick.js/ui';

const Dashboard = ui.component({
  state: {
    name: 'Friend',
  },
  methods: {
    sayHello: (component) => {
      window.alert(`Hello, ${component.state.name}!`);
    },
  },
  css: `
    ...
  `,
  render: ({ state }) => {
    return `
      <div class="dashboard">
        <h4>Dashboard</h4>
        <p>I'm going to say "Hello, ${state.name}!"</p>
        <input type="text" />
        <button class="say-hello">Say Hello</button>
      </div>
    `;
  },
});

export default Dashboard;

Moving forward, next, to make our component interactive we're going to add a generic function to our component known as a method. The methods property here is assigned an object with custom-named functions that can be called from elsewhere in the component. Each method that we define is passed the entire component instance as the last available argument (e.g., if we called a method and passed it a value, that value would become the first argument and component would become the second).

Here, we're defining a method sayHello that we want to display an alert dialog when called. Inside, we want it to display a message that says "Hello, <name>!" where <name> is the current value of the name property on the component's state object. Inside of a Joystick component, state represents the current visual state of the component (think "visual state of affairs"). That state can be data, settings for part of our UI—anything you'd like.
To initialize our state value (also known as setting our "default" state), we add a state option to our component, also passed an object, with the names of the values we want to set on state when the component loads up. For our component, we want to set name on state. Here, we set the default value to 'Friend'. So it's clear: if we were to call the sayHello function as-is, we'd see an alert box pop up that said "Hello, Friend!" Let's wire that up now using our component's lifecycle methods.

/ui/pages/dashboard/index.js

import ui from '@joystick.js/ui';

const Dashboard = ui.component({
  state: {
    name: 'Friend',
  },
  lifecycle: {
    onMount: (component) => {
      component.methods.sayHello();
    },
  },
  methods: {
    sayHello: (component) => {
      window.alert(`Hello, ${component.state.name}!`);
    },
  },
  css: `
    ...
  `,
  render: ({ state }) => {
    return `
      <div class="dashboard">
        <h4>Dashboard</h4>
        <p>I'm going to say "Hello, ${state.name}!"</p>
        <input type="text" />
        <button class="say-hello">Say Hello</button>
      </div>
    `;
  },
});

export default Dashboard;

A Joystick component goes through several "stages of life" when we render it in the browser, what we refer to as its lifecycle. Here, we're adding an object to our component lifecycle which can be assigned three functions:

- onBeforeMount: a function that's called immediately before a Joystick component is rendered in the browser.
- onMount: a function that's called immediately after a Joystick component is rendered in the browser.
- onBeforeUnmount: a function that's called immediately before a Joystick component is removed from the browser.

To demonstrate our sayHello method, we're going to utilize the onMount lifecycle method/function (the name "method" is the term used to describe a function defined on an object in JavaScript) to call it. All lifecycle methods are passed the component instance, which means we can access our methods via that object. Inside of our onMount function, we call component.methods.sayHello() to say "when this component is rendered on screen, display an alert window and greet the user."

Almost done. To wrap up our component before we move on to routing, the last thing we want to do is wire up some DOM event handlers.

/ui/pages/dashboard/index.js

import ui from '@joystick.js/ui';

const Dashboard = ui.component({
  state: { ... },
  lifecycle: { ... },
  methods: { ... },
  css: `
    ...
  `,
  events: {
    'keyup input': (event, component) => {
      component.setState({ name: event.target.value });
    },
    'click .say-hello': (event, component) => {
      component.methods.sayHello();
    },
  },
  render: ({ state }) => {
    return `
      <div class="dashboard">
        <h4>Dashboard</h4>
        <p>I'm going to say "Hello, ${state.name}!"</p>
        <input type="text" />
        <button class="say-hello">Say Hello</button>
      </div>
    `;
  },
});

export default Dashboard;

First, let's focus on the events property we've added to our component. This is how we define and automatically scope DOM event listeners to our component. Listeners are defined by setting a callback function to a property whose name is a string with some DOM event type, followed by a space, followed by the DOM selector to attach the event to. Here, we're adding two event listeners: first, a keyup listener on our <input /> and second a click listener on our <button></button> using its class name say-hello.

For our keyup event, we want to dynamically update our state.name value as we type into the input. To do it, we assign two arguments to our function: event, which represents the keyup event from the DOM, and component (our component instance) as the second.
On the component instance, a .setState() method is defined which takes an object containing the properties we want to set (or overwrite) on state. In this case, we want to overwrite name, setting it to the current value of our input. Here, we use the plain JavaScript event.target.value property to access that value, where event.target equals the HTML element triggering the event and value is the current value of that target.

Down in our click event handler, we use the same argument structure, this time skipping usage of the event and accessing our sayHello() method via the component.methods object on our instance. The idea here being that whenever we click our button, our window.alert() in sayHello() will be triggered, displaying the most recent value (assuming we've typed something in our input, we'd expect to see that).

Before we move on, we want to call out a minor change to our render() function's HTML. Notice that we've added a <p></p> which embeds the current value of state.name using a JavaScript interpolation expression ${state.name}. You'll notice that we've used JavaScript destructuring on the render() function, "plucking off" the state value from that object. That object is our component instance. Here, we use destructuring to eliminate the need to type component.state and instead just pluck off state directly.

That's it for our component definition. Next, let's jump to the server and wire up a route so we can see it in the browser.

Defining a route and using res.render() to render the component

A route is the technical name for a URL that renders something in our application. To define a route, we need to move to the code that runs on the server-side of our application in the index.server.js file at the root of our project.

/index.server.js

import node from "@joystick.js/node";
import api from "./api";

node.app({
  api,
  routes: {
    "/dashboard": (req, res) => {
      res.render("ui/pages/dashboard/index.js");
    },
    "/": (req, res) => {
      res.render("ui/pages/index/index.js", {
        layout: "ui/layouts/app/index.js",
      });
    },
    "*": (req, res) => {
      res.render("ui/pages/error/index.js", {
        layout: "ui/layouts/app/index.js",
        props: {
          statusCode: 404,
        },
      });
    },
  },
});

In a Joystick app, the server-side counterpart to @joystick.js/ui is @joystick.js/node. This package is responsible for setting up our backend, specifically, spinning up an instance of Express.js and running an HTTP server for our app (by default, this is started on port 2600 but can be customized if we want). From that package, an object is exported that we've imported in the code above as node. On that object, we have a function .app() which is responsible for setting up our back-end. When we call it, we pass a few different options to it, the one we care about for this tutorial being routes, which is set to an object of routes we want to define in our app.

Above, we have two routes pre-defined (these are automatically included by joystick create via @joystick.js/cli): an index route / and a catch-all 404 route *. The one we care about here is the /dashboard route we've added (we've chosen this name as it matches the name of the page we defined, but we could call this /pizza if we wanted).

A route defined on the routes object is nothing more than an Express.js route (e.g., app.get()). The difference here is purely syntactic and for organization. We define all of our routes together for clarity and to keep our code consistent.
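To make that equivalence concrete, here is roughly what a routes entry corresponds to under the hood; this is illustrative only (the res.render() here is Joystick's version attached to the Express response, and the app variable is assumed):

app.get("/dashboard", (req, res) => {
  res.render("ui/pages/dashboard/index.js");
});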
Just like with a normal Express.js route, we have a callback function that's called when our route is visited (known as being a "match" for the URL in the browser). Inside of our callback here, we call a special function defined by Joystick on the Express response object, res.render(), passing in the path to the page we want to render (Joystick requires that we pass the entire path, including the .js extension).

Behind the scenes, Joystick will do a few things automatically:

- Render our component as HTML (known as SSR or server-side rendering) to send back as the initial response to the browser.
- Find the corresponding JS file that's been compiled (meaning, browser-safe code) by @joystick.js/cli and embed it in the SSR'd HTML.
- In development, Joystick also includes some utility functions and the HMR (hot module reload) script for automatically refreshing the browser when we change our code.
- Locate all of the CSS in our component tree (we only have a single level to our tree, but if we nested components those would be scanned, too) and embed it in the <head></head> tag of our HTML.

With all of this done, the resulting HTML is returned to the browser and rendered for our user. Inside of the browser-safe JavaScript file for our page component, Joystick automatically includes the script necessary for "mounting" our component in the browser. This is a process known as hydrating. We initially send some dry, server-side rendered HTML back for the initial request and then load some JavaScript in the browser to hydrate that dry HTML by making it interactive again (i.e., loading the dynamic parts of our JavaScript in the browser).

That's it. If we open up our browser and head to http://localhost:2600/dashboard we should see our alert dialog display and, after clicking okay, see our component. Try typing your name in the box and clicking the "Say Hello" button to see it in action.
https://cheatcode.co/tutorials/building-and-rendering-your-first-joystick-component
CC-MAIN-2022-21
refinedweb
2,709
64.81
29 April 2011 17:55 [Source: ICIS news]

HOUSTON (ICIS)--US butadiene (BD) producers nominated May contract prices at increases of 19-22 cents/lb, sources said Friday, potentially pushing prices to record highs for a fourth consecutive month.

Two producers nominated May contract prices of 141 cents/lb ($3,109/tonne, €2,083/tonne), a third producer nominated a price of 144 cents/lb and a fourth producer nominated a price of 145 cents/lb. The US BD contract price for April is assessed by ICIS at 122-123 cents/lb FOB (free on board).

The biggest factor pushing BD prices up is tight supply.
http://www.icis.com/Articles/2011/04/29/9456120/US-May-BD-nominations-announced-at-fourth-straight-record.html
CC-MAIN-2015-11
refinedweb
166
60.75
I love the Wiki philosophy, with one exception: intercaps words are automatically made links. Can I set up MoinMoin to force the page author to designate links? I don't want to have to backquote or put "!" in front of wiki names. "#format plain" does not work for me because it never lets you define links. I looked at the GaGa parser, which automatically makes links of existing pages. Do I also need to hack parsers/wiki.py to do what I need? (BTW, I like Thomas' idea - I would want links to be either explicitly defined, or defined if they reference an existing page). Thanks.

I am making a wikifarm, and I would like to leave moin_config.py untouched. What I want is to use the directory name as the sitename and interwikiname. I have tried:

import sys
import os.path

sitename = os.path.dirname(sys.path[0])
interwikiname = sitename

But it sets the sitename as blank. Is there any way to set sitename to the directory name? I am using MoinMoin on Win 98.

Rgs, Kent Sin
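For the wikifarm question, the blank sitename is what os.path.dirname() gives: it strips the last path component, which can leave an empty string, while what's wanted is the last component itself. A minimal sketch of the idea (the variable names follow the email above; whether this is the right hook for a given MoinMoin version is an assumption):

import os
import sys

# dirname() strips the last component of the path, which can yield ''.
# basename() of the absolute path gives the directory's own name instead.
wiki_dir = os.path.basename(os.path.abspath(sys.path[0]))

sitename = wiki_dir
interwikiname = wiki_dir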
http://sourceforge.net/p/moin/mailman/moin-user/?viewmonth=200212&viewday=11
CC-MAIN-2016-07
refinedweb
204
78.04
BN_ZERO(3)                    OpenSSL                    BN_ZERO(3)

NAME
     BN_zero, BN_one, BN_value_one, BN_set_word, BN_get_word - BIGNUM assignment operations

SYNOPSIS
     #include <openssl/bn.h>

     int BN_zero(BIGNUM *a);
     int BN_one(BIGNUM *a);
     const BIGNUM *BN_value_one(void);
     int BN_set_word(BIGNUM *a, unsigned long w);
     unsigned long BN_get_word(BIGNUM *a);

DESCRIPTION
     BN_zero(), BN_one() and BN_set_word() set a to the values 0, 1 and w respectively. BN_zero() and BN_one() are macros.

     BN_value_one() returns a BIGNUM constant of value 1. This constant is useful for use in comparisons and assignment.

     BN_get_word() returns a, if it can be represented as an unsigned long.

RETURN VALUES
     BN_get_word() returns the value a, and 0xffffffffL if a cannot be represented as an unsigned long.

     BN_zero(), BN_one() and BN_set_word() return 1 on success, 0 otherwise. BN_value_one() returns the constant.

BUGS
     Someone might change the constant.

     If a BIGNUM is equal to 0xffffffffL it can be represented as an unsigned long but this value is also returned on error.

SEE ALSO
     bn(3), BN_bn2bin(3)

HISTORY
     BN_zero(), BN_one() and BN_set_word() are available in all versions of SSLeay and OpenSSL. BN_value_one() and BN_get_word() were added in SSLeay 0.8. BN_value_one() was changed to return a true const BIGNUM * in OpenSSL 0.9.
http://mirbsd.mirsolutions.de/htman/sparc/man3/BN_set_word.htm
crawl-003
refinedweb
188
67.25
11 March 2011 22:44 [Source: ICIS news] HOUSTON (ICIS)--Here is Friday's end-of-day summary for the Americas markets:

CRUDE: Apr WTI: $101.16/bbl, down $1.54; Apr Brent: $113.84/bbl, down $1.59. NYMEX WTI crude futures finished down for the fourth consecutive session on perception that the massive earthquake in Japan could temporarily reduce oil demand from the world's third largest consumer. WTI bottomed out at $99.01/bbl before the dip attracted buying on geopolitical worries.

RBOB: Apr: $2.9877/gal, down 3.19 cents. Reformulated gasoline blendstock for oxygenate blending (RBOB) futures fell under $3/gal for the third time in March. The price drop came after the massive earthquake hit offshore Japan.

NATURAL GAS: Apr: $3.889/MMBtu, up 5.9 cents. The front-month natural gas futures contract increased 1.5% on speculation that lower prices in 2011 would spark demand from factories and power plants. Prices also increased in the wake of the earthquake in Japan.

ETHANE: down at 64.25-64.50 cents/gal. Mont Belvieu ethane ended the week at a low mark amid thin trading in the natural gas liquids (NGL) market. Prices mainly tracked the downward trend in crude futures.

AROMATICS: toluene wider at $3.53-3.55/gal. US n-grade toluene prices ended the week at $3.53-3.55/gal FOB (free on board) on Friday, unchanged from 4 March. The price range was slightly wider compared with $3.54-3.55/gal FOB the previous day.

OLEFINS: refinery-grade propylene tighter at 68.00-70.25 cents/lb. March refinery-grade propylene (RGP) was bid at 68.00 cents/lb during the day, with offer levels unchanged from Thursday at 70.25 cents/lb. Friday's range was tighter compared with 67.50-70.25 cents/lb on Thursday.
http://www.icis.com/Articles/2011/03/11/9443282/evening-snapshot-americas-markets-summary.html
CC-MAIN-2014-10
refinedweb
302
79.36
Opterator, revisited

A few years ago, I wrote a simple decorator that introspects a main() method signature and docstring to generate an option parser. I never really used it, mostly because back then, I wasn't too keen on third party dependencies. Nowadays, with distribute being maintained, well-documented, and working (plus I know how to use it properly), I no longer have this concern.

A friend recently linked me to the brilliantly designed docopt and reminded me of my interest in opterator. I revisited my code and decided it's a pretty neat little design. So I ported it to Python 3. This took a matter of minutes, largely because I originally wrote it shortly after taking the pytest tutorial at Pycon 2009, so it was easy to find the failing code. It now supports Python 2.6, 2.7, and 3.3, according to tox.

I originally designed opterator to be a full replacement for optparse and friends. However, my main purpose for it now is to create quick and dirty command line applications, often for my personal use. These apps usually have a couple of options, and it doesn't seem worth the trouble of setting up optparse. Yet mucking around with sys.argv is also annoying. Opterator minimizes the boilerplate. Check out this basic example:

from opterator import opterate

@opterate
def main(filename, color='red', verbose=False):
    print(filename, color, verbose)

main()

With three lines of boilerplate code, a function can be turned into a command line program that can be called like so:

$ python examples/basic.py this_file
this_file red False

or so:

$ python examples/basic.py this_file --color=blue
this_file blue False

or even so:

$ python examples/basic.py --color=purple another_file --verbose
another_file purple True

And you even get a not totally (but somewhat) useless help file:

$ python examples/basic.py -h
Usage: basic.py [options] filename

Options:
  -h, --help            show this help message and exit
  -c COLOR, --color=COLOR
  -v, --verbose

I noticed in your setup.py you have `from setuptools import setup` instead of `from distutils.core import setup`. Do you know if there's any reason to use the setuptools setup() instead of the distutils version when you're not also using other features from setuptools (e.g., find_packages())?

Nice, clever, Pythonic! Thanks for sharing it.

That's great - I hope to use it in the future. I love how it takes all of the work out of setting up the options parser. Thanks for posting it.
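To make the mechanism concrete, here is a rough sketch of how a decorator in this spirit can be built from the standard library: introspect the function signature and turn parameter defaults into options. This is an illustration of the idea only, not opterator's actual implementation (the name opterate_sketch is hypothetical, and real opterator also mines the docstring for help text):

import argparse
import inspect
from functools import wraps

def opterate_sketch(func):
    # Hypothetical stand-in for opterator's @opterate decorator.
    @wraps(func)
    def wrapper():
        sig = inspect.signature(func)
        parser = argparse.ArgumentParser(description=func.__doc__)
        for name, param in sig.parameters.items():
            if param.default is inspect.Parameter.empty:
                # No default: a required positional argument.
                parser.add_argument(name)
            elif param.default is False:
                # A False default becomes an on/off flag.
                parser.add_argument('-' + name[0], '--' + name,
                                    action='store_true')
            else:
                # Any other default becomes a regular option.
                parser.add_argument('-' + name[0], '--' + name,
                                    default=param.default)
        args = parser.parse_args()
        return func(**vars(args))
    return wrapper

@opterate_sketch
def main(filename, color='red', verbose=False):
    print(filename, color, verbose)

if __name__ == '__main__':
    main()

Deriving the short option from the first letter of the parameter name, as above, collides if two parameters share an initial; a real implementation has to disambiguate.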
http://archlinux.me/dusty/2012/12/08/opterator-revisited/
CC-MAIN-2015-06
refinedweb
416
58.38
Asyncio-based client for S3

Project Description

The aio-s3 is a small library for accessing Amazon S3 Service that leverages python's standard asyncio library. Only read operations are supported so far; contributions are welcome.

Example

Basically all methods supported so far are shown in this example:

import asyncio
from aios3.bucket import Bucket

@asyncio.coroutine
def main():
    bucket = Bucket('uaprom-logs',
        aws_region='eu-west-1',
        aws_endpoint='s3-eu-west-1.amazonaws.com',
        aws_key='AKIAIOSFODNN7EXAMPLE',
        aws_secret='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY')

    # List keys based on prefix
    lst = yield from bucket.list('some-prefix')

    response = yield from bucket.get(lst[0])
    print(len(response))

    response = yield from bucket.download(lst[0])
    print("GOT Response", dir(response))
    while 1:
        chunk = yield from response.read(65536)
        print("Received", len(chunk))
        if not chunk:
            break

asyncio.get_event_loop().run_until_complete(main())

Reference

- Bucket(name, *, aws_key, aws_secret, aws_region, aws_endpoint, connector):
  Creates a wrapper object for accessing an S3 bucket. Note that unlike in many other bindings you need to specify aws_region (and probably aws_endpoint) correctly (see a table). The connector is an aiohttp connector, which might be used to set up a proxy or other useful things.

- Bucket.list(prefix='', max_keys=1000):
  Lists items which start with prefix. Each returned item is a Key object. This method is a coroutine.
  Note: This method raises an assertion error if there are more keys than max_keys. We do not have a method to return keys iteratively yet.

- Bucket.get(key):
  Fetches the object named key. The key might be a string or a Key object. Returns bytes. This method is a coroutine.

- Bucket.download(key):
  Allows iteratively downloading the key. The object returned by the coroutine has a method .read(bufsize), which is a coroutine too.

- Key
  Represents an S3 key returned by Bucket.list. Key has at least the following attributes:
  - key - the full name of the key stored in a bucket
  - last_modified - datetime.datetime object
  - etag - The ETag, usually md5 of the content with additional quotes
  - size - Size of the object in bytes
  - storage_class - Storage class of the object
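On Python 3.5+, the same coroutine API can be driven with async/await syntax instead of yield from; a brief sketch, assuming the method signatures documented above:

import asyncio

from aios3.bucket import Bucket

async def main():
    bucket = Bucket('uaprom-logs',
                    aws_region='eu-west-1',
                    aws_endpoint='s3-eu-west-1.amazonaws.com',
                    aws_key='AKIAIOSFODNN7EXAMPLE',
                    aws_secret='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY')

    # Generator-based coroutines (the @asyncio.coroutine style the
    # library uses) can be awaited directly.
    keys = await bucket.list('some-prefix')
    body = await bucket.get(keys[0])
    print(len(body))

asyncio.get_event_loop().run_until_complete(main())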
https://pypi.org/project/aio-s3/
CC-MAIN-2017-51
refinedweb
360
51.85
This HTML version of Think Stats is provided for convenience, but it is not the best format for the book. In particular, some of the symbols are not rendered correctly. You might prefer to read the PDF version, or you can buy a hardcopy from Amazon.

The linear least squares fit in the previous chapter is an example of regression, which is the more general problem of fitting any kind of model to any kind of data. This use of the term "regression" is a historical accident; it is only indirectly related to the original meaning of the word.

The goal of regression analysis is to describe the relationship between one set of variables, called the dependent variables, and another set of variables, called independent or explanatory variables.

In the previous chapter we used mother's age as an explanatory variable to predict birth weight as a dependent variable. When there is only one dependent and one explanatory variable, that's simple regression. In this chapter, we move on to multiple regression, with more than one explanatory variable. If there is more than one dependent variable, that's multivariate regression.

If the relationship between the dependent and explanatory variable is linear, that's linear regression. For example, if the dependent variable is y and the explanatory variables are x1 and x2, we would write the following linear regression model:

    y = β0 + β1 x1 + β2 x2 + ε

where β0 is the intercept, β1 is the parameter associated with x1, β2 is the parameter associated with x2, and ε is the residual due to random variation or other unknown factors.

Given a sequence of values for y and sequences for x1 and x2, we can find the parameters, β0, β1, and β2, that minimize the sum of ε². This process is called ordinary least squares. The computation is similar to thinkstats2.LeastSquares, but generalized to deal with more than one explanatory variable. You can find the details at

The code for this chapter is in regression.py. For information about downloading and working with this code, see Section 0.2.

In the previous chapter I presented thinkstats2.LeastSquares, an implementation of simple linear regression intended to be easy to read. For multiple regression we'll switch to StatsModels, a Python package that provides several forms of regression and other analyses. If you are using Anaconda, you already have StatsModels; otherwise you might have to install it.

As an example, I'll run the model from the previous chapter with StatsModels:

import statsmodels.formula.api as smf

live, firsts, others = first.MakeFrames()
formula = 'totalwgt_lb ~ agepreg'
model = smf.ols(formula, data=live)
results = model.fit()

statsmodels provides two interfaces (APIs); the "formula" API uses strings to identify the dependent and explanatory variables. It uses a syntax called patsy; in this example, the ~ operator separates the dependent variable on the left from the explanatory variables on the right.

smf.ols takes the formula string and the DataFrame, live, and returns an OLS object that represents the model. The name ols stands for "ordinary least squares."

The fit method fits the model to the data and returns a RegressionResults object that contains the results.

The results are also available as attributes. params is a Series that maps from variable names to their parameters, so we can get the intercept and slope like this:

inter = results.params['Intercept']
slope = results.params['agepreg']

The estimated parameters are 6.83 and 0.0175, the same as from LeastSquares.
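With the estimated parameters in hand, a prediction is just an evaluation of the fitted line. A quick sketch using the values above (the age of 30 is an arbitrary example):

# Manual prediction from the estimated intercept and slope.
age = 30
predicted = inter + slope * age    # 6.83 + 0.0175 * 30, about 7.4 lbs

# results.predict does the same thing for a DataFrame of new cases.
import pandas
new = pandas.DataFrame({'agepreg': [age]})
print(results.predict(new))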
pvalues is a Series that maps from variable names to the associated p-values, so we can check whether the estimated slope is statistically significant:

slope_pvalue = results.pvalues['agepreg']

The p-value associated with agepreg is 5.7e-11, which is less than 0.001, as expected.

results.rsquared contains R2, which is 0.0047. results also provides f_pvalue, which is the p-value associated with the model as a whole, similar to testing whether R2 is statistically significant.

And results provides resid, a sequence of residuals, and fittedvalues, a sequence of fitted values corresponding to agepreg.

The results object provides summary(), which represents the results in a readable format.

print(results.summary())

But it prints a lot of information that is not relevant (yet), so I use a simpler function called SummarizeResults. Here are the results of this model:

Intercept   6.83    (0)
agepreg     0.0175  (5.72e-11)
R^2 0.004738
Std(ys) 1.408
Std(res) 1.405

Std(ys) is the standard deviation of the dependent variable, which is the RMSE if you have to guess birth weights without the benefit of any explanatory variables. Std(res) is the standard deviation of the residuals, which is the RMSE if your guesses are informed by the mother's age. As we have already seen, knowing the mother's age provides no substantial improvement to the predictions.

In Section 4.5 we saw that first babies tend to be lighter than others, and this effect is statistically significant. But it is a strange result because there is no obvious mechanism that would cause first babies to be lighter. So we might wonder whether this relationship is spurious.

In fact, there is a possible explanation for this effect. We have seen that birth weight depends on mother's age, and we might expect that mothers of first babies are younger than others. With a few calculations we can check whether this explanation is plausible. Then we'll use multiple regression to investigate more carefully.

First, let's see how big the difference in weight is:

diff_weight = firsts.totalwgt_lb.mean() - others.totalwgt_lb.mean()

First babies are 0.125 lbs lighter, or 2 ounces. And the difference in ages:

diff_age = firsts.agepreg.mean() - others.agepreg.mean()

The mothers of first babies are 3.59 years younger. Running the linear model again, we get the change in birth weight as a function of age:

results = smf.ols('totalwgt_lb ~ agepreg', data=live).fit()
slope = results.params['agepreg']

The slope is 0.0175 pounds per year. If we multiply the slope by the difference in ages, we get the expected difference in birth weight for first babies and others, due to mother's age:

slope * diff_age

The result is 0.063, just about half of the observed difference. So we conclude, tentatively, that the observed difference in birth weight can be partly explained by the difference in mother's age.

Using multiple regression, we can explore these relationships more systematically.

live['isfirst'] = live.birthord == 1
formula = 'totalwgt_lb ~ isfirst'
results = smf.ols(formula, data=live).fit()

The first line creates a new column named isfirst that is True for first babies and False otherwise. Then we fit a model using isfirst as an explanatory variable. Here are the results:

Intercept       7.33    (0)
isfirst[T.True] -0.125  (2.55e-05)
R^2 0.00196

Because isfirst is a boolean, ols treats it as a categorical variable, which means that the values fall into categories, like True and False, and should not be treated as numbers.
The estimated parameter is the effect on birth weight when isfirst is true, so the result, -0.125 lbs, is the difference in birth weight between first babies and others.

The slope and the intercept are statistically significant, which means that they were unlikely to occur by chance, but the R2 value for this model is small, which means that isfirst doesn't account for a substantial part of the variation in birth weight.

The results are similar with agepreg:

Intercept 6.83    (0)
agepreg   0.0175  (5.72e-11)
R^2 0.004738

Again, the parameters are statistically significant, but R2 is low.

These models confirm results we have already seen. But now we can fit a single model that includes both variables. With the formula totalwgt_lb ~ isfirst + agepreg, we get:

Intercept       6.91    (0)
isfirst[T.True] -0.0698 (0.0253)
agepreg         0.0154  (3.93e-08)
R^2 0.005289

In the combined model, the parameter for isfirst is smaller by about half, which means that part of the apparent effect of isfirst is actually accounted for by agepreg. And the p-value for isfirst is about 2.5%, which is on the border of statistical significance.

R2 for this model is a little higher, which indicates that the two variables together account for more variation in birth weight than either alone (but not by much).

Remembering that the contribution of agepreg might be nonlinear, we might consider adding a variable to capture more of this relationship. One option is to create a column, agepreg2, that contains the squares of the ages:

live['agepreg2'] = live.agepreg**2
formula = 'totalwgt_lb ~ isfirst + agepreg + agepreg2'

Now by estimating parameters for agepreg and agepreg2, we are effectively fitting a parabola:

Intercept       5.69      (1.38e-86)
isfirst[T.True] -0.0504   (0.109)
agepreg         0.112     (3.23e-07)
agepreg2        -0.00185  (8.8e-06)
R^2 0.007462

The parameter of agepreg2 is negative, so the parabola curves downward, which is consistent with the shape of the lines in Figure 10.2. The quadratic model of agepreg accounts for more of the variability in birth weight; the parameter for isfirst is smaller in this model, and no longer statistically significant.

Using computed variables like agepreg2 is a common way to fit polynomials and other functions to data. This process is still considered linear regression, because the dependent variable is a linear function of the explanatory variables, regardless of whether some variables are nonlinear functions of others.

The following table summarizes the results of these regressions:

              isfirst           agepreg       agepreg2      R^2
Model 1       -0.125 *          -             -             0.002
Model 2       -                 0.0175 *      -             0.0047
Model 3       -0.0698 (0.025)   0.0154 *      -             0.0053
Model 4       -0.0504 (0.11)    0.112 *       -0.00185 *    0.0075

The columns in this table are the explanatory variables and the coefficient of determination, R2. Each entry is an estimated parameter and either a p-value in parentheses or an asterisk to indicate a p-value less than 0.001.

We conclude that the apparent difference in birth weight is explained, at least in part, by the difference in mother's age. When we include mother's age in the model, the effect of isfirst gets smaller, and the remaining effect might be due to chance.

In this example, mother's age acts as a control variable; including agepreg in the model "controls for" the difference in age between first-time mothers and others, making it possible to isolate the effect (if any) of isfirst.

So far we have used regression models for explanation; for example, in the previous section we discovered that an apparent difference in birth weight is actually due to a difference in mother's age. But the R2 values of those models are very low, which means that they have little predictive power.
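Comparisons like the table above are easy to reproduce by fitting the candidate formulas in a loop; a small sketch using the live DataFrame and the columns defined earlier:

formulas = ['totalwgt_lb ~ isfirst',
            'totalwgt_lb ~ agepreg',
            'totalwgt_lb ~ isfirst + agepreg',
            'totalwgt_lb ~ isfirst + agepreg + agepreg2']

for formula in formulas:
    results = smf.ols(formula, data=live).fit()
    # R^2 for each model, matching the rows of the table above.
    print(formula, results.rsquared)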
In this section we’ll try to do better. Suppose one of your co-workers is expecting a baby and there is an office pool to guess the baby’s birth weight (if you are not familiar with betting pools, see). Now suppose that you really want to win the pool. What could you do to improve your chances? Well, the NSFG dataset includes 244 variables about each pregnancy and another 3087 variables about each respondent. Maybe some of those variables have predictive power. To find out which ones are most useful, why not try them all? Testing the variables in the pregnancy table is easy, but in order to use the variables in the respondent table, we have to match up each pregnancy with a respondent. In theory we could iterate through the rows of the pregnancy table, use the caseid to find the corresponding respondent, and copy the values from the correspondent table into the pregnancy table. But that would be slow. A better option is to recognize this process as a join operation as defined in SQL and other relational database languages (see). Join is implemented as a DataFrame method, so we can perform the operation like this: live = live[live.prglngth>30] resp = chap01soln.ReadFemResp() resp.index = resp.caseid join = live.join(resp, on='caseid', rsuffix='_r') The first line selects records for pregnancies longer than 30 weeks, assuming that the office pool is formed several weeks before the due date. The next line reads the respondent file. The result is a DataFrame with integer indices; in order to look up respondents efficiently, I replace resp.index with resp.caseid. The join method is invoked on live, which is considered the “left” table, and passed resp, which is the “right” table. The keyword argument on indicates the variable used to match up rows from the two tables. In this example some column names appear in both tables, so we have to provide rsuffix, which is a string that will be appended to the names of overlapping columns from the right table. For example, both tables have a column named race that encodes the race of the respondent. The result of the join contains two columns named race and race_r. race_r The pandas implementation is fast. Joining the NSFG tables takes less than a second on an ordinary desktop computer. Now we can start testing variables. t = [] for name in join.columns: try: if join[name].var() < 1e-7: continue formula = 'totalwgt_lb ~ agepreg + ' + name model = smf.ols(formula, data=join) if model.nobs < len(join)/2: continue results = model.fit() except (ValueError, TypeError): continue t.append((results.rsquared, name)) For each variable we construct a model, compute R2, and append the results to a list. The models all include agepreg, since we already know that it has some predictive power. I check that each explanatory variable has some variability; otherwise the results of the regression are unreliable. I also check the number of observations for each model. Variables that contain a large number of nans are not good candidates for prediction. For most of these variables, we haven’t done any cleaning. Some of them are encoded in ways that don’t work very well for linear regression. As a result, we might overlook some variables that would be useful if they were cleaned properly. But maybe we will find some good candidates. The next step is to sort the results and select the variables that yield the highest values of R2. t.sort(reverse=True) for mse, name in t[:30]: print(name, mse) The first variable on the list is totalwgt_lb, followed by birthwgt_lb. 
Obviously, we can’t use birth weight to predict birth weight. totalwgt_lb birthwgt_lb Similarly prglngth has useful predictive power, but for the office pool we assume pregnancy length (and the related variables) are not known yet. The first useful predictive variable is babysex which indicates whether the baby is male or female. In the NSFG dataset, boys are about 0.3 lbs heavier. So, assuming that the sex of the baby is known, we can use it for prediction. Next is race, which indicates whether the respondent is white, black, or other. As an explanatory variable, race can be problematic. In datasets like the NSFG, race is correlated with many other variables, including income and other socioeconomic factors. In a regression model, race acts as a proxy variable, so apparent correlations with race are often caused, at least in part, by other factors. The next variable on the list is nbrnaliv, which indicates whether the pregnancy yielded multiple births. Twins and triplets tend to be smaller than other babies, so if we know whether our hypothetical co-worker is expecting twins, that would help. Next on the list is paydu, which indicates whether the respondent owns her home. It is one of several income-related variables that turn out to be predictive. In datasets like the NSFG, income and wealth are correlated with just about everything. In this example, income is related to diet, health, health care, and other factors likely to affect birth weight. Some of the other variables on the list are things that would not be known until later, like bfeedwks, the number of weeks the baby was breast fed. We can’t use these variables for prediction, but you might want to speculate on reasons bfeedwks might be correlated with birth weight. Sometimes you start with a theory and use data to test it. Other times you start with data and go looking for possible theories. The second approach, which this section demonstrates, is called data mining. An advantage of data mining is that it can discover unexpected patterns. A hazard is that many of the patterns it discovers are either random or spurious. Having identified potential explanatory variables, I tested a few models and settled on this one: formula = ('totalwgt_lb ~ agepreg + C(race) + babysex==1 + ' 'nbrnaliv>1 + paydu==1 + totincr') results = smf.ols(formula, data=join).fit() This formula uses some syntax we have not seen yet: C(race) tells the formula parser (Patsy) to treat race as a categorical variable, even though it is encoded numerically. The encoding for babysex is 1 for male, 2 for female; writing babysex==1 converts it to boolean, True for male and false for female. Similarly nbrnaliv>1 is True for multiple births and paydu==1 is True for respondents who own their houses. totincr is encoded numerically from 1-14, with each increment representing about $5000 in annual income. So we can treat these values as numerical, expressed in units of $5000. Here are the results of the model: Intercept 6.63 (0) C(race)[T.2] 0.357 (5.43e-29) C(race)[T.3] 0.266 (2.33e-07) babysex == 1[T.True] 0.295 (5.39e-29) nbrnaliv > 1[T.True] -1.38 (5.1e-37) paydu == 1[T.True] 0.12 (0.000114) agepreg 0.00741 (0.0035) totincr 0.0122 (0.00188) The estimated parameters for race are larger than I expected, especially since we control for income. The encoding is 1 for black, 2 for white, and 3 for other. Babies of black mothers are lighter than babies of other races by 0.27–0.36 lbs. 
As we’ve already seen, boys are heavier by about 0.3 lbs; twins and other multiplets are lighter by 1.4 lbs. People who own their homes have heavier babies by about 0.12 lbs, even when we control for income. The parameter for mother’s age is smaller than what we saw in Section 11.2, which suggests that some of the other variables are correlated with age, probably including paydu and totincr. All of these variables are statistically significant, some with very low p-values, but R2 is only 0.06, still quite small. RMSE without using the model is 1.27 lbs; with the model it drops to 1.23. So your chance of winning the pool is not substantially improved. Sorry! In the previous examples, some of the explanatory variables were numerical and some categorical (including boolean). But the dependent variable was always numerical. Linear regression can be generalized to handle other kinds of dependent variables. If the dependent variable is boolean, the generalized model is called logistic regression. If the dependent variable is an integer count, it’s called Poisson regression. As an example of logistic regression, let’s consider a variation on the office pool scenario. Suppose a friend of yours is pregnant and you want to predict whether the baby is a boy or a girl. You could use data from the NSFG to find factors that affect the “sex ratio”, which is conventionally defined to be the probability of having a boy. If you encode the dependent variable numerically, for example 0 for a girl and 1 for a boy, you could apply ordinary least squares, but there would be problems. The linear model might be something like this: Where y is the dependent variable, and x1 and x2 are explanatory variables. Then we could find the parameters that minimize the residuals. The problem with this approach is that it produces predictions that are hard to interpret. Given estimated parameters and values for x1 and x2, the model might predict y=0.5, but the only meaningful values of y are 0 and 1. It is tempting to interpret a result like that as a probability; for example, we might say that a respondent with particular values of x1 and x2 has a 50% chance of having a boy. But it is also possible for this model to predict y=1.1 or y=−0.1, and those are not valid probabilities. Logistic regression avoids this problem by expressing predictions in terms of odds rather than probabilities. If you are not familiar with odds, “odds in favor” of an event. Odds and probabilities are different representations of the same information. Given a probability, you can compute the odds like this: o = p / (1-p) Given odds in favor, you can convert to probability like this: p = o / (o+1) Logistic regression is based on the following model: Where o is the odds in favor of a particular outcome; in the example, o would be the odds of having a boy. Suppose we have estimated the parameters β0, β1, and β2 (I’ll explain how in a minute). And suppose we are given values for x1 and x2. We can compute the predicted value of logo, and then convert to a probability: o = np.exp(log_o) p = o / (o+1) So in the office pool scenario we could compute the predictive probability of having a boy. But how do we estimate the parameters? Unlike linear regression, logistic regression does not have a closed form solution, so it is solved by guessing an initial solution and improving it iteratively. The usual goal is to find the maximum-likelihood estimate (MLE), which is the set of parameters that maximizes the likelihood of the data. 
For example, suppose we have the following data:

>>> y = np.array([0, 1, 0, 1])
>>> x1 = np.array([0, 0, 0, 1])
>>> x2 = np.array([0, 1, 1, 1])

And we start with the initial guesses β0=−1.5, β1=2.8, and β2=1.1:

>>> beta = [-1.5, 2.8, 1.1]

Then for each row we can compute log_o:

>>> log_o = beta[0] + beta[1] * x1 + beta[2] * x2
[-1.5 -0.4 -0.4  2.4]

And convert from log odds to probabilities:

>>> o = np.exp(log_o)
[ 0.223  0.670  0.670  11.02 ]
>>> p = o / (o+1)
[ 0.182  0.401  0.401  0.916 ]

Notice that when log_o is greater than 0, o is greater than 1 and p is greater than 0.5.

The likelihood of an outcome is p when y==1 and 1-p when y==0. For example, if we think the probability of a boy is 0.8 and the outcome is a boy, the likelihood is 0.8; if the outcome is a girl, the likelihood is 0.2. We can compute that like this:

>>> likes = y * p + (1-y) * (1-p)
[ 0.817  0.401  0.598  0.916 ]

The overall likelihood of the data is the product of likes:

>>> like = np.prod(likes)
0.18

For these values of beta, the likelihood of the data is 0.18. The goal of logistic regression is to find parameters that maximize this likelihood. To do that, most statistics packages use an iterative solver like Newton's method (see).

StatsModels provides an implementation of logistic regression called logit, named for the function that converts from probability to log odds. To demonstrate its use, I'll look for variables that affect the sex ratio.

Again, I load the NSFG data and select pregnancies longer than 30 weeks:

live, firsts, others = first.MakeFrames()
df = live[live.prglngth>30]

logit requires the dependent variable to be binary (rather than boolean), so I create a new column named boy, using astype(int) to convert to binary integers:

df['boy'] = (df.babysex==1).astype(int)

Factors that have been found to affect sex ratio include parents' age, birth order, race, and social status. We can use logistic regression to see if these effects appear in the NSFG data. I'll start with the mother's age:

import statsmodels.formula.api as smf

model = smf.logit('boy ~ agepreg', data=df)
results = model.fit()
SummarizeResults(results)

logit takes the same arguments as ols, a formula in Patsy syntax and a DataFrame. The result is a Logit object that represents the model. It contains attributes called endog and exog that contain the endogenous variable, another name for the dependent variable, and the exogenous variables, another name for the explanatory variables. Since they are NumPy arrays, it is sometimes convenient to convert them to DataFrames:

endog = pandas.DataFrame(model.endog, columns=[model.endog_names])
exog = pandas.DataFrame(model.exog, columns=model.exog_names)

The result of model.fit is a BinaryResults object, which is similar to the RegressionResults object we got from ols. Here is a summary of the results:

Intercept 0.00579 (0.953)
agepreg   0.00105 (0.783)
R^2 6.144e-06

The parameter of agepreg is positive, which suggests that older mothers are more likely to have boys, but the p-value is 0.783, which means that the apparent effect could easily be due to chance.

The coefficient of determination, R2, does not apply to logistic regression, but there are several alternatives that are used as "pseudo R2 values." These values can be useful for comparing models.
For example, here’s a model that includes several factors believed to be associated with sex ratio: formula = 'boy ~ agepreg + hpagelb + birthord + C(race)' model = smf.logit(formula, data=df) results = model.fit() Along with mother’s age, this model includes father’s age at birth (hpagelb), birth order (birthord), and race as a categorical variable. Here are the results: Intercept -0.0301 (0.772) C(race)[T.2] -0.0224 (0.66) C(race)[T.3] -0.000457 (0.996) agepreg -0.00267 (0.629) hpagelb 0.0047 (0.266) birthord 0.00501 (0.821) R^2 0.000144 None of the estimated parameters are statistically significant. The pseudo-R2 value is a little higher, but that could be due to chance. In the office pool scenario, we are most interested in the accuracy of the model: the number of successful predictions, compared with what we would expect by chance. In the NSFG data, there are more boys than girls, so the baseline strategy is to guess “boy” every time. The accuracy of this strategy is just the fraction of boys: actual = endog['boy'] baseline = actual.mean() Since actual is encoded in binary integers, the mean is the fraction of boys, which is 0.507. Here’s how we compute the accuracy of the model: predict = (results.predict() >= 0.5) true_pos = predict * actual true_neg = (1 - predict) * (1 - actual) results.predict returns a NumPy array of probabilities, which we round off to 0 or 1. Multiplying by actual yields 1 if we predict a boy and get it right, 0 otherwise. So, true_pos indicates “true positives”. true_pos Similarly, true_neg indicates the cases where we guess “girl” and get it right. Accuracy is the fraction of correct guesses: true_neg acc = (sum(true_pos) + sum(true_neg)) / len(actual) The result is 0.512, slightly better than the baseline, 0.507. But, you should not take this result too seriously. We used the same data to build and test the model, so the model may not have predictive power on new data. Nevertheless, let’s use the model to make a prediction for the office pool. Suppose your friend is 35 years old and white, her husband is 39, and they are expecting their third child: columns = ['agepreg', 'hpagelb', 'birthord', 'race'] new = pandas.DataFrame([[35, 39, 3, 2]], columns=columns) y = results.predict(new) To invoke results.predict for a new case, you have to construct a DataFrame with a column for each variable in the model. The result in this case is 0.52, so you should guess “boy.” But if the model improves your chances of winning, the difference is very small. My solution to these exercises is in chap11soln.ipynb. chap11soln.ipynb Some studies have shown this effect among humans, but results are mixed. In this chapter we tested some variables related to these factors, but didn’t find any with a statistically significant effect on sex ratio. As an exercise, use a data mining approach to test the other variables in the pregnancy and respondent files. Can you find any factors with a substantial effect? Suppose you meet a woman who is 35 years old, black, and a college graduate whose annual household income exceeds $75,000. How many children would you predict she has born? Suppose you meet a woman who is 25 years old, white, and a high school graduate whose annual household income is about $45,000. What is the probability that she is married, cohabitating, etc? Think Bayes Think Python Think Stats Think Complexity
http://greenteapress.com/thinkstats2/html/thinkstats2012.html
CC-MAIN-2017-47
refinedweb
4,804
57.67
Hi everyone! At the moment me and my buddies are developing a PC game and need some help with the programming. I have a turret that shoots the player; the code works fine, but I need help with random rotation. I want the turret to rotate randomly when the player is at a distance. Like, a little bit to the left... then a little bit to the right, and to the right a little more maybe :) As if it were looking around for something. I have done the code so it works all the way except this part. So here is my else statement that needs the random rotation.

else
{
    transform.rotation.y += Random.Range(-1,1);
}

This ^ did not work; it just rotates once then stops. Have it in the Update().

transform.rotation is a Quaternion... a non-intuitive, 4D construct. Unless you fully understand the math behind them, you don't want to address the individual x,y,z,w components individually. You can use transform.eulerAngles, but you should never assign individual axes, nor should you depend on any specific euler angle representation. For example you could assign (180,0,0) and immediately read it back and get (0,180,180), which is the same 'physical' rotation in a different euler representation. The easiest way to deal with this issue is to treat eulerAngles as 'write-only'. That is, keep your own Vector3. When you want to change an angle, change it in the Vector3 and then assign the Vector3. If you need to read an angle, read it from your own Vector3.

Answer by Brahim113 · May 11, 2013 at 09:05 PM

This is in the logic that I got from your code:

int rotate = Random.Range(-1,2);
transform.Rotate(0,rotate,0,Space.Self);

It will stand still, rotate to the left, or rotate to the right, picking a new random number every frame. But it will look really rough (at least that's what I saw when I tried it out). I would suggest you at least make it rotate for 1 second or more before it changes to a new random value. I think this is something you are looking for. It is not perfect, but it is something you could work on:

using UnityEngine;
using System.Collections;

public class PlayWithRotation : MonoBehaviour
{
    int _bFlag;
    float _currentTime;
    float _delay = 1.0f;

    void Start ()
    {
        _currentTime = Time.time;
    }

    void RotateTurret(int rot)
    {
        transform.Rotate(0,rot,0,Space.Self);
    }

    void Update ()
    {
        if(Time.time > _currentTime)
        {
            _currentTime += _delay;
            //_bFlag will be -1 to 1
            _bFlag = Random.Range(-1,2);
        }
        RotateTurret(_bFlag);
    }
}

Hope this helps you out!

Did not understand any of those solutions :/ I'm a noob programmer and we are under a tight deadline, so this became my solution:

transform.Rotate(Time.deltaTime * 6, 0, 0, Space.World);

Now it only rotates one way as an idle animation, but that will do I guess. And I don't really understand most of the stuff you guys mentioned about euler angles, Vector3, and Quaternion. So I guess I have to read up on it a little when I get time :) thanks

Answer by robertbu · May 15, 2013 at 07:52 PM

Here are two more solutions. Attach them to a block in an empty scene to start.
#pragma strict

var MinAngle = -70.0;
var MaxAngle = 70.0;
var timestamp : float;
var recalcFreq = 1.0;  // Seconds before selecting a new angle
var speed = 35.0;
var qTo : Quaternion;
var randomSeeking = true;

function Update () {
    if (randomSeeking) {
        if (timestamp < Time.time) {
            timestamp = timestamp + recalcFreq;
            qTo = Quaternion.Euler(0.0, Random.Range(MinAngle, MaxAngle), 0.0);
        }
        transform.rotation = Quaternion.RotateTowards(transform.rotation, qTo, Time.deltaTime * speed);
    }
}

This second one produces a smoother seeking:

#pragma strict

var neutralAngle = 0.0;  // Angle that is midway between the min and max angle.
var angleRange = 70.0;   // Delta (both sides) of the neutral angle.
var trigSpeed1 = 0.2;
var trigSpeed2 = 0.5;
var qTo : Quaternion;
var speed = 20.0;        // Speed of the rotation.
var randomSeeking = true;

function Update () {
    if (randomSeeking) {
        var angle = neutralAngle + (Mathf.Sin(trigSpeed1 * Time.time) + Mathf.Sin(trigSpeed2 * Time.time)) / 2.0 * angleRange;
        qTo = Quaternion.Euler(0.0, angle, 0.0);
    }
    //transform.rotation = Quaternion.RotateTowards(transform.rotation, qTo, Time.deltaTime * speed);
    transform.rotation = Quaternion.Lerp(transform.rotation, qTo, Time.deltaTime * speed);
}
https://answers.unity.com/questions/454449/need-help-with-random-rotatition-on-game-turret.html
CC-MAIN-2022-27
refinedweb
745
69.07
Perl::Critic::DEVELOPER - How to make new Perl::Critic::Policy modules.

For developers who want to create custom coding standards, the following tells how to create a Policy module for Perl::Critic. Although the Perl::Critic distribution already includes a number of Policies based on Damian Conway's book Perl Best Practices (which will be referred to via "PBP" from here on), Perl::Critic is not limited to his guidelines and can be used to enforce any practice, preference, or style that you want to follow. You can even write Policies to enforce contradictory guidelines. All you need to do is write a corresponding Perl::Critic::Policy module. When a Policy finds problems, it returns one or more violation objects. If there are no violations, then the Policy returns nothing.

Policies are usually written based on existing policies, so let's look at one to see how it works. The RequireBlockGrep.pm Policy is relatively simple and demonstrates most of the important issues. The goal of this Policy is to enforce that every call to grep uses a block for the first argument and not an expression. The reasons for this Policy are discussed in detail in PBP.

First, the Policy module needs to have a name. Perl::Critic uses Module::Pluggable to automatically discover all modules in the Perl::Critic::Policy namespace. Also, we've adopted the convention of grouping Policies into directories according to the chapters of PBP. Since the goal of this Policy is to enforce the use of block arguments to grep and it comes from the "Builtin Functions" chapter of PBP, we call it "Perl::Critic::Policy::BuiltinFunctions::RequireBlockGrep".

package Perl::Critic::Policy::BuiltinFunctions::RequireBlockGrep;

Next, we set some pragmas and load the modules that we'll need. All Policy modules inherit from the Perl::Critic::Policy class, which provides no-op implementations of the basic methods. Our job is to override these methods to make them do something useful. Technically, use strict and use warnings are optional, but we don't want Perl::Critic to be a hypocrite, now do we?

use strict;
use warnings;
use Readonly;
use Perl::Critic::Utils qw{ :severities :classification :ppi };
use base 'Perl::Critic::Policy';

our $VERSION = '1.05';

Next, we'll declare a description and explanation for this Policy. The description is always just a string that basically says "this is what's wrong." The explanation can be either a string with further details, or a reference to an array of integers that correspond to page numbers in PBP. We make them read-only because they never change.

However, if your Policy is configurable via .perlcriticrc, you should implement a supported_parameters() method and need to implement initialize_if_enabled() to examine the $config values. Since this Policy isn't configurable, we'll declare that by providing an implementation of supported_parameters() that returns an empty list.

sub supported_parameters { return () }

Next, we define the default_severity() method, which must return an integer indicating the severity of violating this Policy. Severity values range from 1 to 5, where 5 is the "most severe." In general, level 5 is reserved for things that are frequently misused and/or cause bugs. Level 1 is for things that are highly subjective or purely cosmetic. The Perl::Critic::Utils package exports several severity constants that you can use here via the :severities tag.

sub default_severity { return $SEVERITY_HIGH }

Likewise, the default_themes() method returns a list of theme names. Themes are intended to be named groups of Policies.
All Policies that ship with Perl::Critic have a "core" theme. Since use of grep without blocks often leads to bugs, we include a "bugs" theme. And since this Policy comes directly from PBP, this Policy should be a member of the "pbp" theme.

sub default_themes { return qw( core bugs pbp ) }

As a Policy author, you can assign any themes you want to the Policy. If you're publishing a suite of custom Policies, we suggest that you create a unique theme that covers all the Policies in the distribution. That way, users can easily enable or disable all of your policies at once.

The applies_to() method returns a list of PPI package names. (You can get that list of available package names via perldoc PPI.) As Perl::Critic traverses the document, it will call the violates() method from this module whenever it encounters one of the PPI types that are given here. In this case, we just want to test calls to grep. Since the token "grep" is a PPI::Token::Word, we return that package name from the applies_to() method.

sub applies_to { return 'PPI::Token::Word' }

If your Policy needs to analyze several different types of elements, the applies_to method may return the names of several PPI packages. If your Policy needs to examine the file as a whole, then the applies_to method should return PPI::Document. Since there is only one PPI::Document element, your Policy would only be invoked once per file.

Now comes the interesting part. The violates() method does all the work. It is always called with 2 arguments: a reference to the current PPI element that Perl::Critic is traversing, and a reference to the entire PPI document. [And since this is an object method, there will be an additional argument that is a reference to this object ($self), but you already knew that!] Since this Policy does not need access to the document as a whole, we ignore the last parameter by assigning to undef.

sub violates {
    my ( $self, $elem, undef ) = @_;

The violates() method then often performs some tests to make sure we have the right "type" of element. In our example, we know that the element will be a PPI::Token::Word (because that's what we returned from the applies_to() method). So we make sure that this PPI::Token::Word is, in fact, "grep". If it's not, then we don't need to bother examining it.

    return if $elem ne 'grep';

The PPI::Token::Word class is also used for barewords and methods called on object references. It is possible for someone to declare a bareword hash key as %hash = ( grep => 'foo'). We don't want to test those types of elements because they don't represent function calls to grep. So we use one of the handy utility functions from Perl::Critic::Utils to make sure that this "grep" is actually in the right context. (The is_function_call() subroutine is brought in via the :classification tag.)

    return if ! is_function_call($elem);

Now that we know this element is a call to the grep function, we can look at the nearby elements to see what kind of arguments are being passed to it. In the following paragraphs, we discuss how to do this manually in order to explore PPI; after that, we'll show how this Policy actually uses facilities provided by Perl::Critic::Utils to get this done.
Every PPI element is linked to its siblings, parent, and children (if it has any). Since those siblings could just be whitespace, we use snext_sibling() to get the next code-sibling (the "s" in snext_sibling stands for "significant"):

    my $sib = $elem->snext_sibling();

If we find that the first argument to grep is not a block, we have found a violation, which we construct and return via the violation() method, passing in the description, explanation, and a reference to the PPI element that caused the violation. And that's all there is to it!

    return $self->violation( $DESC, $EXPL, $elem );
}

1;

One last thing -- people are going to need to understand what is wrong with the code when your Policy finds a problem. It isn't reasonable to include all the details in your violation description or explanation. So please include a DESCRIPTION section in the POD for your Policy. It should succinctly describe the behavior and motivation for your Policy and include a few examples of both good and bad code. Here's an example:

=pod

=head1 NAME

Perl::Critic::Policy::BuiltinFunctions::RequireBlockGrep

=head1 DESCRIPTION

The expression forms of C<grep> and C<map> are awkward and hard to read. Use the block forms instead.

  @matches = grep /pattern/, @list;       #not ok
  @matches = grep { /pattern/ } @list;    #ok

  @mapped = map transform($_), @list;     #not ok
  @mapped = map { transform($_) } @list;  #ok

=cut
https://metacpan.org/pod/release/THALJEF/Perl-Critic-1.121/lib/Perl/Critic/DEVELOPER.pod
CC-MAIN-2014-23
refinedweb
1,642
56.35
While developing WCF clients/services, I frequently encounter this annoying error whenever I run my client to connect to the service contract for the first time. I term these "timewasters". This post will partly serve as a reminder to me, and hopefully someone will benefit from it if they come across the same problem.

The story goes like this: you start up your usual Visual Studio 2005 to work on a simple WCF application (you know, the usual service <-> client stuff). So you create your service and name your interface IContact, with a namespace called Contact.

namespace Contact
{
    [ServiceContract]
    public interface IContact
    {
        [OperationContract]
        void Something();
    }
    ...
}

You then go on to create your configuration and service files, open up IIS, create a virtual directory, and dump the appropriate files into the virtual directory. You then test the directory from Internet Explorer. Everything works beautifully.

So now you do the easy part. Fire up the SDK command prompt and use the "svcutil" command to create the proxy needed to connect to the service. You create a client project, add the auto-generated proxy and output.config file, and start to consume the service via the proxy you've just created. After all is done, you do a run, and this comes staring at you.

So what's wrong? It's all spelled out in the error description, actually. The resolution is pretty simple; here's something you can take note of so that this error message will be gone for good.

I guess if someone runs into the same problem as I did, they might benefit from this post.
http://geekswithblogs.net/nestor/archive/2007/01/05/102828.aspx
CC-MAIN-2014-42
refinedweb
271
62.17
987learner 0

Posted November 24, 2008

I have a txt file that I'm using to conduct a search for a match string "datasource.0.ch_id". Presently, this string is in line 91 of the txt file. The actual string in line 91 is

datasource.0.ch_id=168486

I wish to extract 168486 out, based on matching the "datasource.0.ch_id" string. The problem is this string can be anywhere in lines 90 to 110. How do I get AutoIt to detect datasource.0.ch_id in lines 90 to 110? There is only 1 instance of this string.

I have only started with basic code. I only know how to perform 2 possible checks (line 91 or line 93) using if...then...else. As there are 20 possibilities in the 90 to 110 range, I think a more efficient approach should be used, but I have no clue how to do that. Could anyone kindly help me with it? All these checks will only return 1 value (as in the msgbox value). I have to figure out how to get other values in this similar sense (another 11 of them). Thanks.

$file = FileOpen("C:\customprogram\client\utility_context.txt", 0)
If $file = -1 Then
    MsgBox(0, "Error", "Unable to open file.")
    Exit
EndIf

$chartid = FileReadLine($file, 91) ; read chart id on line 91
$chartchk = StringRegExp($chartid, "datasource.0.ch_id", 0)
;MsgBox(0, "Chartchk1", $chartchk) ; 0 implies string not matched, 1 implies string matched

If $chartchk = 0 Then
    $chartid = FileReadLine($file, 93)
    $chartchk = StringRegExp($chartid, "datasource.0.ch_id", 0)
    ;ElseIf $chartchk = 0 Then
    ;   $chartid = FileReadLine($file, 109)
    $Fchartid = StringMid($chartid, 20, 6)
Else
    $Fchartid = StringMid($chartid, 20, 6) ; based on line 93 input
EndIf

FileClose($file)
MsgBox(64, "Details", $Fchartid)
https://www.autoitscript.com/forum/topic/84894-need-help-in-search-and-match-string/
CC-MAIN-2018-30
refinedweb
292
75.4
import keras
keras.__version__

Using TensorFlow backend.
'2.0.8'

This notebook contains the second code sample found in Chapter 8, Section 5 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.

[...]

In what follows, we explain how to implement a GAN in Keras, in its barest form -- since GANs are quite advanced, diving deeply into the technical details would be out of scope for us. Our specific implementation will be a deep convolutional GAN, or DCGAN: a GAN where the generator and discriminator are deep convnets. In particular, it leverages a Conv2DTranspose layer for image upsampling in the generator.

We will train our GAN on images from CIFAR10, a dataset of 50,000 32x32 RGB images belonging to 10 classes (5,000 images per class). To make things even easier, we will only use images belonging to the class "frog".

Schematically, our GAN looks like this:

- A generator network maps vectors of shape (latent_dim,) to images of shape (32, 32, 3).
- A discriminator network maps images of shape (32, 32, 3) to a binary score estimating the probability that the image is real.
- A gan network chains the generator and the discriminator together: gan(x) = discriminator(generator(x)). Thus this gan network maps latent space vectors to the discriminator's assessment of the realism of these latent vectors as decoded by the generator.
- We train the discriminator using examples of real and fake images along with "real"/"fake" labels, as we would train any regular image-classification model.
- The generator is trained via the gan model. This means that, at every step, we move the weights of the generator in a direction that will make the discriminator more likely to classify as "real" the images decoded by the generator. I.e. we train the generator to fool the discriminator.

Training GANs and tuning GAN implementations is notoriously difficult. There are a number of known "tricks" that one should keep in mind. Like most things in deep learning, it is more alchemy than science: these tricks are really just heuristics, not theory-backed guidelines. They are backed by some level of intuitive understanding of the phenomenon at hand, and they are known to work well empirically, albeit not necessarily in every context.

Here are a few of the tricks that we leverage in our own implementation of a GAN generator and discriminator below. It is not an exhaustive list of GAN-related tricks; you will find many more across the GAN literature.

- We use tanh as the last activation in the generator, instead of sigmoid, which would be more commonly found in other types of models.
- We use a LeakyReLU layer instead of a ReLU activation. It is similar to ReLU but it relaxes sparsity constraints by allowing small negative activation values.
- We use a kernel size that is divisible by the stride size whenever we use a strided Conv2DTranspose or Conv2D in both the generator and discriminator.

First, we develop a generator model, which turns a vector (from the latent space -- during training it will be sampled at random) into a candidate image. One of the many issues that commonly arise with GANs is that the generator gets stuck with generated images that look like noise. A possible solution is to use dropout on both the discriminator and generator.
import keras from keras import layers import numpy as np latent_dim = 32 height = 32 width = 32 channels = 3 generator_input = keras.Input(shape=(latent_dim,)) # First, transform the input into a 16x16 128-channels feature map x = layers.Dense(128 * 16 * 16)(generator_input) x = layers.LeakyReLU()(x) x = layers.Reshape((16, 16, 128))(x) # Then, add a convolution layer x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) # Upsample to 32x32 x = layers.Conv2DTranspose(256, 4, strides=2, padding='same')(x) x = layers.LeakyReLU()(x) # Few more conv layers x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) # Produce a 32x32 1-channel feature map x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x) generator = keras.models.Model(generator_input, x) generator.summary() Using TensorFlow backend. _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 32) 0 _________________________________________________________________ dense_1 (Dense) (None, 32768) 1081344 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 32768) 0 _________________________________________________________________ reshape_1 (Reshape) (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 16, 16, 256) 819456 _________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 32, 32, 256) 1048832 _________________________________________________________________ leaky_re_lu_3 (LeakyReLU) (None, 32, 32, 256) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 32, 32, 256) 1638656 _________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, 32, 32, 256) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 32, 32, 256) 1638656 _________________________________________________________________ leaky_re_lu_5 (LeakyReLU) (None, 32, 32, 256) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 32, 32, 3) 37635 ================================================================= Total params: 6,264,579 Trainable params: 6,264,579 Non-trainable params: 0 _________________________________________________________________ discriminator_input = layers.Input(shape=(height, width, channels)) x = layers.Conv2D(128, 3)(discriminator_input) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Flatten()(x) # One dropout layer - important trick! x = layers.Dropout(0.4)(x) # Classification layer x = layers.Dense(1, activation='sigmoid')(x) discriminator = keras.models.Model(discriminator_input, x) discriminator.summary() # To stabilize training, we use learning rate decay # and gradient clipping (by value) in the optimizer. 
discriminator_optimizer = keras.optimizers.RMSprop(lr=0.0008, clipvalue=1.0, decay=1e-8) discriminator.compile(optimizer=discriminator_optimizer, loss='binary_crossentropy') _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) (None, 32, 32, 3) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 30, 30, 128) 3584 _________________________________________________________________ leaky_re_lu_6 (LeakyReLU) (None, 30, 30, 128) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 14, 14, 128) 262272 _________________________________________________________________ leaky_re_lu_7 (LeakyReLU) (None, 14, 14, 128) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 6, 6, 128) 262272 _________________________________________________________________ leaky_re_lu_8 (LeakyReLU) (None, 6, 6, 128) 0 _________________________________________________________________ conv2d_8 (Conv2D) (None, 2, 2, 128) 262272 _________________________________________________________________ leaky_re_lu_9 (LeakyReLU) (None, 2, 2, 128) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 512) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 512) 0 _________________________________________________________________ dense_2 (Dense) (None, 1) 513 ================================================================= Total params: 790,913 Trainable params: 790,913 Non-trainable params: 0 _________________________________________________________________ Finally, we setup the GAN, which chains the generator and the discriminator. This is the model that, when trained, will move the generator in a direction that improves its ability to fool the discriminator. This model turns latent space points into a classification decision, "fake" or "real", and it is meant to be trained with labels that are always "these are real images". So training gan will updates the weights of generator in a way that makes discriminator more likely to predict "real" when looking at fake images. Very importantly, we set the discriminator to be frozen during training (non-trainable): its weights will not be updated when training gan. If the discriminator weights could be updated during this process, then we would be training the discriminator to always predict "real", which is not what we want! # Set discriminator weights to non-trainable # (will only apply to the `gan` model) discriminator.trainable = False gan_input = keras.Input(shape=(latent_dim,)) gan_output = discriminator(generator(gan_input)) gan = keras.models.Model(gan_input, gan_output) gan_optimizer = keras.optimizers.RMSprop(lr=0.0004, clipvalue=1.0, decay=1e-8) gan.compile(optimizer=gan_optimizer, loss='binary_crossentropy') Now we can start training. To recapitulate, this is schematically what the training loop looks like: for each epoch: * Draw random points in the latent space (random noise). * Generate images with `generator` using this random noise. * Mix the generated images with real ones. * Train `discriminator` using these mixed images, with corresponding targets, either "real" (for the real images) or "fake" (for the generated images). * Draw new random points in the latent space. 
* Train `gan` using these random vectors, with targets that all say "these are real images". This will update the weights of the generator (only, since discriminator is frozen inside `gan`) to move them towards getting the discriminator to predict "these are real images" for generated images, i.e. this trains the generator to fool the discriminator. Let's implement it: import os from keras.preprocessing import image # Load CIFAR10 data (x_train, y_train), (_, _) = keras.datasets.cifar10.load_data() # Select frog images (class 6) x_train = x_train[y_train.flatten() == 6] # Normalize data x_train = x_train.reshape( (x_train.shape[0],) + (height, width, channels)).astype('float32') / 255. iterations = 10000 batch_size = 20 save_dir = '/home/ubuntu/gan_images/' # Start training loop start = 0 for step in range(iterations): # Sample random points in the latent space random_latent_vectors = np.random.normal(size=(batch_size, latent_dim)) # Decode them to fake images generated_images = generator.predict(random_latent_vectors) # Combine them with real images stop = start + batch_size real_images = x_train[start: stop] combined_images = np.concatenate([generated_images, real_images]) # Assemble labels discriminating real from fake images labels = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))]) # Add random noise to the labels - important trick! labels += 0.05 * np.random.random(labels.shape) # Train the discriminator d_loss = discriminator.train_on_batch(combined_images, labels) # sample random points in the latent space random_latent_vectors = np.random.normal(size=(batch_size, latent_dim)) # Assemble labels that say "all real images" misleading_targets = np.zeros((batch_size, 1)) # Train the generator (via the gan model, # where the discriminator weights are frozen) a_loss = gan.train_on_batch(random_latent_vectors, misleading_targets) start += batch_size if start > len(x_train) - batch_size: start = 0 # Occasionally save / plot if step % 100 == 0: # Save model weights gan.save_weights('gan.h5') # Print metrics print('discriminator loss at step %s: %s' % (step, d_loss)) print('adversarial loss at step %s: %s' % (step, a_loss)) # Save one generated image img = image.array_to_img(generated_images[0] * 255., scale=False) img.save(os.path.join(save_dir, 'generated_frog' + str(step) + '.png')) # Save one real image, for comparison img = image.array_to_img(real_images[0] * 255., scale=False) img.save(os.path.join(save_dir, 'real_frog' + str(step) + '.png')) discriminator loss at step 0: 0.685675 adversarial loss at step 0: 0.667591 discriminator loss at step 100: 0.756201 adversarial loss at step 100: 0.820905 discriminator loss at step 200: 0.699047 adversarial loss at step 200: 0.776581 discriminator loss at step 300: 0.684602 adversarial loss at step 300: 0.513813 discriminator loss at step 400: 0.707092 adversarial loss at step 400: 0.716778 discriminator loss at step 500: 0.686278 adversarial loss at step 500: 0.741214 discriminator loss at step 600: 0.692786 adversarial loss at step 600: 0.745891 discriminator loss at step 700: 0.69771 adversarial loss at step 700: 0.781026 discriminator loss at step 800: 0.69236 adversarial loss at step 800: 0.748769 discriminator loss at step 900: 0.663193 adversarial loss at step 900: 0.689923 discriminator loss at step 1000: 0.706922 adversarial loss at step 1000: 0.741314 discriminator loss at step 1100: 0.682189 adversarial loss at step 1100: 0.76548 discriminator loss at step 1200: 0.687244 adversarial loss at step 1200: 0.746018 
discriminator loss at step 1300: 0.697884 adversarial loss at step 1300: 0.766032 discriminator loss at step 1400: 0.691977 adversarial loss at step 1400: 0.735184 discriminator loss at step 1500: 0.696238 adversarial loss at step 1500: 0.738426 discriminator loss at step 1600: 0.698334 adversarial loss at step 1600: 0.741093 discriminator loss at step 1700: 0.70315 adversarial loss at step 1700: 0.736702 discriminator loss at step 1800: 0.693836 adversarial loss at step 1800: 0.742768 discriminator loss at step 1900: 0.69059 adversarial loss at step 1900: 0.741162 discriminator loss at step 2000: 0.696293 adversarial loss at step 2000: 0.755151 discriminator loss at step 2100: 0.686166 adversarial loss at step 2100: 0.755129 discriminator loss at step 2200: 0.692612 adversarial loss at step 2200: 0.772408 discriminator loss at step 2300: 0.704013 adversarial loss at step 2300: 0.776998 discriminator loss at step 2400: 0.693268 adversarial loss at step 2400: 0.70731 discriminator loss at step 2500: 0.684289 adversarial loss at step 2500: 0.742162 discriminator loss at step 2600: 0.700483 adversarial loss at step 2600: 0.734719 discriminator loss at step 2700: 0.699952 adversarial loss at step 2700: 0.759745 discriminator loss at step 2800: 0.697416 adversarial loss at step 2800: 0.733726 discriminator loss at step 2900: 0.697604 adversarial loss at step 2900: 0.740891 discriminator loss at step 3000: 0.698498 adversarial loss at step 3000: 0.754564 discriminator loss at step 3100: 0.695516 adversarial loss at step 3100: 0.759486 discriminator loss at step 3200: 0.693453 adversarial loss at step 3200: 0.769369 discriminator loss at step 3300: 1.5083 adversarial loss at step 3300: 0.726621 discriminator loss at step 3400: 0.686934 adversarial loss at step 3400: 0.747121 discriminator loss at step 3500: 0.689791 adversarial loss at step 3500: 0.751882 discriminator loss at step 3600: 0.71331 adversarial loss at step 3600: 0.704916 discriminator loss at step 3700: 0.690504 adversarial loss at step 3700: 0.853764 discriminator loss at step 3800: 0.688844 adversarial loss at step 3800: 0.791077 discriminator loss at step 3900: 0.679162 adversarial loss at step 3900: 0.724979 discriminator loss at step 4000: 0.676585 adversarial loss at step 4000: 0.69554 discriminator loss at step 4100: 0.693313 adversarial loss at step 4100: 0.742666 discriminator loss at step 4200: 0.678367 adversarial loss at step 4200: 0.778793 discriminator loss at step 4300: 0.699712 adversarial loss at step 4300: 0.740457 discriminator loss at step 4400: 0.697605 adversarial loss at step 4400: 0.755847 discriminator loss at step 4500: 0.710596 adversarial loss at step 4500: 0.814832 discriminator loss at step 4600: 0.706518 adversarial loss at step 4600: 0.83636 discriminator loss at step 4700: 0.687217 adversarial loss at step 4700: 0.775736 discriminator loss at step 4800: 0.769103 adversarial loss at step 4800: 0.774639 discriminator loss at step 4900: 0.692414 adversarial loss at step 4900: 0.775192 discriminator loss at step 5000: 0.715357 adversarial loss at step 5000: 0.775003 discriminator loss at step 5100: 0.703434 adversarial loss at step 5100: 0.940242 discriminator loss at step 5200: 0.704034 adversarial loss at step 5200: 0.708327 discriminator loss at step 5300: 0.698559 adversarial loss at step 5300: 0.730377 discriminator loss at step 5400: 0.684378 adversarial loss at step 5400: 0.759259 discriminator loss at step 5500: 0.693699 adversarial loss at step 5500: 0.700122 discriminator loss at step 5600: 
0.715242 adversarial loss at step 5600: 0.808961 discriminator loss at step 5700: 0.689339 adversarial loss at step 5700: 0.621725 discriminator loss at step 5800: 0.679717 adversarial loss at step 5800: 0.787711 discriminator loss at step 5900: 0.700126 adversarial loss at step 5900: 0.742493 discriminator loss at step 6000: 0.692087 adversarial loss at step 6000: 0.839669 discriminator loss at step 6100: 0.677867 adversarial loss at step 6100: 0.797158 discriminator loss at step 6200: 0.70392 adversarial loss at step 6200: 0.842135 discriminator loss at step 6300: 0.688377 adversarial loss at step 6300: 0.718633 discriminator loss at step 6400: 0.781234 adversarial loss at step 6400: 0.710833 discriminator loss at step 6500: 0.682696 adversarial loss at step 6500: 0.739674 discriminator loss at step 6600: 0.693081 adversarial loss at step 6600: 0.747336 discriminator loss at step 6700: 0.681836 adversarial loss at step 6700: 0.780143 discriminator loss at step 6800: 0.728136 adversarial loss at step 6800: 0.838522 discriminator loss at step 6900: 0.660475 adversarial loss at step 6900: 0.717434 discriminator loss at step 7000: 0.672144 adversarial loss at step 7000: 0.948783 discriminator loss at step 7100: 0.692428 adversarial loss at step 7100: 0.837047 discriminator loss at step 7200: 0.731133 adversarial loss at step 7200: 0.728315 discriminator loss at step 7300: 0.671766 adversarial loss at step 7300: 0.793155 discriminator loss at step 7400: 0.712387 adversarial loss at step 7400: 0.807759 discriminator loss at step 7500: 0.68638 adversarial loss at step 7500: 0.967421 discriminator loss at step 7600: 0.690096 adversarial loss at step 7600: 0.811904 discriminator loss at step 7700: 0.702784 adversarial loss at step 7700: 0.867017 discriminator loss at step 7800: 0.674138 adversarial loss at step 7800: 0.837909 discriminator loss at step 7900: 0.674747 adversarial loss at step 7900: 0.743664 discriminator loss at step 8000: 0.680357 adversarial loss at step 8000: 0.810859 discriminator loss at step 8100: 0.688885 adversarial loss at step 8100: 0.786809 discriminator loss at step 8200: 0.671557 adversarial loss at step 8200: 0.784159 discriminator loss at step 8300: 0.70359 adversarial loss at step 8300: 0.95692 discriminator loss at step 8400: 0.720167 adversarial loss at step 8400: 1.14066 discriminator loss at step 8500: 0.747376 adversarial loss at step 8500: 0.630725 discriminator loss at step 8600: 0.688931 adversarial loss at step 8600: 0.849245 discriminator loss at step 8700: 0.707559 adversarial loss at step 8700: 0.713202 discriminator loss at step 8800: 0.673593 adversarial loss at step 8800: 0.832419 discriminator loss at step 8900: 0.6777 adversarial loss at step 8900: 0.773395 discriminator loss at step 9000: 0.659887 adversarial loss at step 9000: 0.77255 discriminator loss at step 9100: 0.675182 adversarial loss at step 9100: 0.749544 discriminator loss at step 9200: 0.687147 adversarial loss at step 9200: 0.836509 discriminator loss at step 9300: 0.690807 adversarial loss at step 9300: 0.829561 discriminator loss at step 9400: 0.656649 adversarial loss at step 9400: 0.788181 discriminator loss at step 9500: 0.703494 adversarial loss at step 9500: 0.78302 discriminator loss at step 9600: 0.680718 adversarial loss at step 9600: 0.813078 discriminator loss at step 9700: 0.704956 adversarial loss at step 9700: 0.761652 discriminator loss at step 9800: 0.673504 adversarial loss at step 9800: 0.853213 discriminator loss at step 9900: 0.669288 adversarial loss at step 
9900: 0.677691 Let's display a few of our fake images: import matplotlib.pyplot as plt # Sample random points in the latent space random_latent_vectors = np.random.normal(size=(10, latent_dim)) # Decode them to fake images generated_images = generator.predict(random_latent_vectors) for i in range(generated_images.shape[0]): img = image.array_to_img(generated_images[i] * 255., scale=False) plt.figure() plt.imshow(img) plt.show() Froggy with some pixellated artifacts.
https://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/8.5-introduction-to-gans.ipynb
CC-MAIN-2019-18
refinedweb
3,053
54.29
🔼 Module Initializers

In C# 9.0, the [ModuleInitializer] attribute is used to mark a method that the runtime invokes before any other code in the module. The target method must be static, take no parameters, and return void.

using System;
using System.Runtime.CompilerServices;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine($"Data={Data}");
    }

    public static string Data;

    [ModuleInitializer]
    public static void Init()
    {
        Data = "This static method is invoked before any other method in the module";
    }
}

🔼 Extension GetEnumerator

The foreach statement can now operate on a value of type IEnumerator<T> when there is a public GetEnumerator extension method for it. This is how we can see it in this example 👇

using System;
using System.Collections.Generic;

IEnumerator<string> colors = new List<string> {"blue", "red", "green"}.GetEnumerator();
foreach (var color in colors)
{
    Console.WriteLine($"{color} is my favorite color");
}

public static class Extensions
{
    public static IEnumerator<T> GetEnumerator<T>(this IEnumerator<T> enumerator) => enumerator;
}

🔼 Covariant Return Types

In C# 9.0, the return type of an override method can be more specific than the declaration in the base type 👇

abstract class Weather
{
    public abstract Temperature GetTemperature();
}
class Spain : Weather
{
    public override Celsius GetTemperature() => new Celsius();
}
class USA : Weather
{
    public override Farenheit GetTemperature() => new Farenheit();
}
class Temperature { }
class Celsius : Temperature { }
class Farenheit : Temperature { }

The GetTemperature() method has the return type Temperature; the derived class Spain overrides this method and returns the more specific type Celsius (note that Celsius and Farenheit must derive from Temperature for this to compile). It is a feature that makes our code more flexible. ✅

🔼 Init Accessor

The init accessor makes immutable objects easier to create and use 👇

Point point1 = new() { X = 1, Y = 2 };
Console.WriteLine(point1.ToString());

public record Point
{
    public int X { get; init; }
    public int Y { get; init; }
}

The init accessor can be used with classes, structures, and records. Combined with records, it also enables non-destructive mutation through the with expression 👇

Point point1 = new() { X = 1, Y = 2 };
Point point2 = point1 with { Y = 4 };
Console.WriteLine(point1.ToString());

public record Point
{
    public int X { get; init; }
    public int Y { get; init; }
}

🔼 Records

Now we have a new kind of reference type called record that gives us value equality. To better understand it, we have this example 👇

Point point1 = new(1, 2);
Console.WriteLine(point1.ToString());
Point point2 = new(1, 2);
Console.WriteLine(point1.Equals(point2));

public record Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);
}

As we can see, the Point record is immutable; you can greatly simplify the syntax using the init accessor, since its properties are read-only.

🔼 Lambda Discard Parameters

The next C# 9.0 improvement is being able to use the discard (_) as an input parameter of a lambda expression when that parameter is not used.

//C#8
button.Click += (s, e) => { MessageBox.Show("Button clicked"); };
//C#9
button.Click += (_, _) => { MessageBox.Show("Button clicked"); };

It is a feature that also lets us read the code in a cleaner and more beautiful way.

🔼 Target-Typed new

Another very important feature in this latest version of C# is the ability to omit the type of a new expression when the target type is explicitly known.
Let's see a quick and simple example 👇

Point point = new() { X = 1, Y = 2 };
Console.WriteLine($"point:({point.X}, {point.Y})");

public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

It is a very useful feature, since it allows you to read the code in a clean way without having to duplicate the type.

Point point = new(1, 2);
Console.WriteLine($"point:({point.X}, {point.Y})");

public class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);
}

🔼 Top-Level Statements

In C# 9.0, it is possible to write a top-level program right after the using declarations. Here we can see the example 👇

using System;
Console.WriteLine("Hello World!");

With top-level statements, you don't need to declare a namespace, a Main method, or a Program class. This new feature can be very useful for programmers just starting out, as the compiler does all of these things for you (the generated names below are illustrative):

[CompilerGenerated]
internal static class $program
{
    private static void $main(string[] args)
    {
        Console.WriteLine("Hello World!");
    }
}

Having seen the new features in C# 9.0, which help make programming much simpler and more intuitive, what can we expect in future versions of C#? Let's start 👍

🔼 File-level namespaces

All of us, when we started programming in C#, created a "Hello World" application, so we also know that C# uses a block structure for namespaces.

namespace HelloWorld
{
    class Hello
    {
        static void Main(string[] args)
        {
            System.Console.WriteLine("Hello World!");
        }
    }
}

A file-level namespace declaration would remove that extra level of braces and indentation:

namespace HelloWorld;

public class Hello
{
    static void Main(string[] args)
    {
        System.Console.WriteLine("Hello World!");
    }
}

Under the proposal, a file-level declaration can also combine with a block declaration in the same file; here, types inside Component would end up in Company.Product.Component:

namespace Company.Product;

namespace Component { }

It is clear that this is not a very big feature, but the more such improvements there are, the easier and more intuitive the task of programming becomes.

🔼 Primary Constructors

Today, a simple data-carrying class needs a fair amount of constructor boilerplate:

public class DataSlice
{
    public string DataLabel { get; }
    public float DataValue { get; }

    public DataSlice(string dataLabel, float dataValue)
    {
        DataLabel = dataLabel;
        DataValue = dataValue;
    }
}

Creating an instance stays the same:

var adultData = new DataSlice("Vaccinated adults", 741);

Using a primary constructor does not rule out property validation; its rules can still be enforced in a property setter. Let's see an example 👇

public class DataSlice(string dataLabel, float dataValue)
{
    public string DataLabel
    {
        get => dataLabel;
        set
        {
            if (string.IsNullOrEmpty(value)) throw new ArgumentException();
            dataLabel = value;
        }
    }
    public float DataValue
    {
        get => dataValue;
    }
}

Other details are also possible (calling the base constructor in a derived class, adding extra constructors). The main downside to all of this is that primary constructors could collide with positional records.

🔼 Raw string literals

We already know that ordinary C# strings tend to be quite messy, since they need escaped quotation marks (\"), newlines (\n) and backslashes (\\). What C# offers for this little problem today is the use of special characters. For example, we can prefix a string with @ and have free rein to add all these details without any problem 👇

string path = "c:\\path\\backslashes";
string path = @"c:\path\backslashes";

The proposed raw string literals go further: they are delimited by three (or more) quotes, and everything in between is taken verbatim 👇

string xml = """
<part number="2021">
<name>year</name>
<description>this is the actual year <ref part="2020">year</ref> actual year.
</description>
</part>
""";

If your concern is that a triple quote sequence might occur within the string, you can simply extend the delimiter, so that you can use all the quotes you want as long as the beginning and end are respected.

string xml = """"
Now """ is safe to use in your raw string.
"""";

In the same way as @ strings, newlines and whitespace are preserved in a raw string. What happens is that the common whitespace, that is, the amount used for indentation, is cut off. Let's see it more simply with an example; the literal goes from this (as indented in the source) 👇

<part number="2021">
<name>year</name>
<description>this is the actual year <ref part="2020">year</ref> actual year.
</description>
</part>

To this 👇

<part number="2021">
<name>year</name>
<description>this is the actual year <ref part="2021">year</ref> actual year.
</description>
</part>

Conclusion: To finish this article, we think that C# still has many years of travel ahead of it, and it still has many things to add to make the task of programming even easier and more optimal.

From Dotnetsafer we want to thank you for your time in reading this article, and don't forget that in our .NET Blog you can learn more. And remember: now you can try our C# obfuscator for free. You can also protect your applications directly from Visual Studio with the .NET Obfuscator for Visual Studio. Also, you can learn how to protect .NET applications.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/dotnetsafer/c-9-0-features-and-expectations-of-c-10-n7c
CC-MAIN-2022-33
refinedweb
1,287
52.9
MooseX::Declare::Syntax::MooseSetup - Common Moose namespace declarations

This role is basically an extension of NamespaceHandling. It adds all the common parts for Moose namespace definitions. Examples of this role can be found in the class and role keywords.

Bool Object->auto_make_immutable ()

Since Moose::Roles can't be made immutable (this is not a bug or a missing feature, it would make no sense), this always returns false.

List Object->imported_moose_symbols ()

This will return confess and blessed by default, to provide as additional imports to the namespace.

Str Object->import_symbols_from ()

The namespace from which the additional imports will be imported. This will return Moose by default.

ArrayRef default_inner ()

This will provide the following default inner-handlers to the namespace:

method, a handler for method declarations.

around, a MethodModifier handler that will start the signature of the generated method with $orig: $self to provide the original method in $orig.

after, before, override and augment; these four handlers are MethodModifier instances.

clean, an instance of the Clean keyword handler.

The original method will never be called and all arguments are ignored at the moment.

Object->add_namespace_customizations (Object $context, Str $package, HashRef $options)

After all other customizations, this will first add code to the preamble that imports the "imported_moose_symbols" from the package returned by "import_symbols_from". Then, if the "auto_make_immutable" method returned a true value and $options->{is}{mutable} does not exist, it will add a code part to the cleanup code that immutabilizes the class.

CodeRef Object->handle_post_parsing (Object $context, Str $package, Str|Object $name)

Generates a callback that sets up the roles in the global role storage for the current namespace. The $name parameter will be the specified name (in contrast to $package, which will always be the fully qualified name) or the anonymous metaclass instance if none was specified.

Object->setup_inner_for (ClassName $class)

This will install a with function that pushes its arguments onto a global storage array holding the roles of the current namespace.

See MooseX::Declare
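For orientation, here is a minimal sketch of the surface syntax these handlers enable (the class name and attribute are invented for illustration, not taken from the documentation):

use MooseX::Declare;

class Counter {
    # 'has' comes from the Moose import this role sets up
    has count => (is => 'rw', isa => 'Int', default => 0);

    # 'method' is one of the inner handlers; $self is provided implicitly
    method inc { $self->count($self->count + 1) }
}

my $c = Counter->new;
$c->inc;
print $c->count, "\n";    # prints 1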
http://search.cpan.org/~flora/MooseX-Declare-0.32/lib/MooseX/Declare/Syntax/MooseSetup.pm
CC-MAIN-2014-35
refinedweb
320
52.19
AWK Notes Awk is great for editing delimited text files, and can be easily used in shell scripts. Back to general programming page Basic stuff Code blocks, which are executed for each line of the text file, are in curly braces: awk '{ print }' AZ_soilstations.csv # this will print the AZ_soilstations.csv file to the shell (assuming it is in the current working directory). Awk can operate each line of the file as a whole, or in fields separated by a special character: awk '{ print $0 }' AZ_soilstations.csv # this prints the whole line awk '{ print $1 }' AZ_soilstations.csv # this prints the line up to the first field separator (default is one or more spaces) The field separator (FS) can be set prior to running the code block: $ awk -F"," '{ print $1 " " $3 }' AZ_soilstations.csv # prints the first and third column of this comma delimited field Expressions, such as pattern matching, can be added before the code block. The following will output the network, station id, state, and name of all scan sites in the file: awk -F"," '/scan/ { print $1 " " $2 " " $9 " " $12 }' AZ_soilstations.csv # search expressions should be surrounded with forward slashes. # outputs: scan 2026 AZ WALNUT GULCH #1 Comparison expressions (==, <, >, <=, >=, !=, plus ~ (matches) and !~) can also be used: awk -F"," '$1=="snotel" { print $1 " " $2 " " $9 " " $12 }' AZ_soilstations.csv # Outputs: snotel 310 AZ BALDY snotel 1121 AZ FORT VALLEY snotel 969 AZ HAPPY JACK snotel 1125 AZ MORMON MTN SUMMIT snotel 861 AZ WHITE HORSE LAKE As can logical expressions such as AND (&&) and OR (||): awk -F "," '( $1 == "snotel" ) && ( $7 > 7000 ) { print }' AZ_soilstations.csv # prints only the lines of AZ snotel sites above 7000 ft elevation Other features: - Can use arithmetic operators (+-*/%^) - Can use conditional statements for control flow: - if ( //expression// ) //statement1// else //statement2// - while ( //expression// ) //statement// - for ( //expression1//; //expression// ; //expression2// ) //statement// - do //statement// while( //expression// ) - Some important environmental variables: - NF (number of columns) - NR (the current line that awk is working on) - END (true if awk reaches the EOF) - BEGIN (true before awk reads anything) Built in functions - gsub(r,s) substitutes s for r globally in current input line, returns the number of substitutions - gsub(r,s,t) substitutes s for r in t globally, returns number of substitutions - index(s,t) returns position of string t in s, 0 if not present - length(s) returns length of s - match(s,r) returns position in s where r occurs, 0 if not present - split(s,a) splits s into array a on FS, returns number of fields - split(s,a,r) splits s into array a on r, returns number of fields - sprintf(fmt, expr-list) returns expr-list formatted according to format string specified by fmt - sub(r,s) substitutes s for first r in current input line, returns number of substitutions - sub(r,s,t) substitutes s for first r in t, returns number of substitutions - substr(s,p) returns suffix s starting at position p - substr(s,p,n) returns substring of s length n starting at position p Command line usage Note that: - Input can come from files or be piped in from shell commands. - Output can be redirected into files or piped to bash, etc. - Lots of these are from here or here - Also see examples in the shell scripting page. Convert Windows/DOS newlines (CRLF) to Unix newlines (LF) from Unix. 
Removes the carriage return (\r) at the end of the line ($ at end of search pattern), leaving the linefeed:

awk '{ sub(/\r$/,""); print }' filename.txt

This would remove one or more (+ in pattern) leading (^ in pattern) spaces from each line:

awk '{ sub(/^ +/,""); print }' filename.txt

This would remove one or more leading spaces or tabs (brackets join multiple search terms) in the fifth field only (the $5 marks field 5):

awk -F"," 'BEGIN{OFS=","} { gsub(/^[ \t]+/,"", $5); print }' AZ_soilstations.csv
# Prints the entire .csv file, with leading spaces removed from column 5.
# Because this is a .csv the field separator must be set (-F)
# To preserve comma delimiters on output the output field separator (OFS) must be set in a BEGIN block.

To do the same operation on several fields just add another statement to the code block:

awk -F"," 'BEGIN{OFS=","} { gsub(/^[ \t]+/,"", $5); gsub(/^[ \t]+/,"", $7); print }' AZ_soilstations.csv

This command will write a text file containing site numbers for each of a site's SNOTEL files in a data directory with filenames like 828_ALL_WATERYEAR=2002.csv (all files begin with an integer site code and there are files for multiple years in the directory):

ls *.csv | awk -F"_" '{print $1}' > sitelist.txt

This nifty bash command moves and renames an entire directory of files with awk:

ls junk* | awk '{print "mv "$0" ../trashdir/"$0".dat"}' | bash
# No pattern to match, so for each line of input piped in by ls, prints
# mv junk1 ../trashdir/junk1.dat ....etc to bash

Executing saved scripts

BEGIN blocks allow initialization code (such as setting variables) before running the code block on each line of the input file:

BEGIN { FS=":" } { print $1 }
# Setting the field separator is best done in a BEGIN block before running the code block

END blocks do end-of-script reporting or calculations:

BEGIN { x=0 } /^$/ { x=x+1 } END { print "I found " x " blank lines. :)" }
# Prints out the number of blank lines in the file

If saved in a file, the script above could be run with:

awk -f myfile.awk AZ_soilstations.csv
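As a quick illustration of the built-in string functions listed earlier (this one-liner is my own example, not from the original notes), split() breaks a record apart on an arbitrary delimiter:

echo "a:b:c" | awk '{ n = split($0, parts, ":"); print n, parts[1], parts[n] }'
# outputs: 3 a c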
https://earthscinotebook.readthedocs.io/en/latest/computing/awk/
CC-MAIN-2019-09
refinedweb
898
54.15
Articles have been included in Github.com/niumoo/JavaNotes; welcome to star it and contribute. You are also welcome to follow my public account; articles are updated weekly.

The previous article introduced the HashMap source code and was very popular, with many students sharing their opinions. So here comes another one: this time it is ConcurrentHashMap. As a thread-safe HashMap, it is also used frequently. So what is its storage structure, and how does it work?

1. ConcurrentHashMap 1.7

1. Storage structure

The storage structure of ConcurrentHashMap in Java 7 is illustrated above. ConcurrentHashMap is composed of several Segments, and each Segment is a HashMap-like structure, so the interior of each Segment can be expanded. However, once initialized, the number of Segments cannot be changed. The default number of Segments is 16, so you can also say that ConcurrentHashMap supports up to 16 concurrent writer threads by default.

2. Initialization

Explore the initialization process of ConcurrentHashMap through its parameterless constructor.

/**
 * Creates a new, empty map with a default initial capacity (16),
 * load factor (0.75) and concurrencyLevel (16).
 */
public ConcurrentHashMap() {
    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
}

The no-arg constructor calls a parameterized constructor, passing in the default values of three parameters:

/**
 * Default initial capacity
 */
static final int DEFAULT_INITIAL_CAPACITY = 16;

/**
 * Default load factor
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/**
 * Default concurrency level
 */
static final int DEFAULT_CONCURRENCY_LEVEL = 16;

Next, look at the internal implementation logic of this parameterized constructor.

@SuppressWarnings("unchecked")
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
    // Parameter check
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    // Check the concurrency level; if greater than 1 << 16, reset it to 65536
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    // This loop finds the smallest power of 2 at or above concurrencyLevel
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    // Record the Segment shift
    this.segmentShift = 32 - sshift;
    // Record the Segment mask
    this.segmentMask = ssize - 1;
    // Cap the capacity
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // c = capacity / ssize, default 16 / 16 = 1; this computes the HashMap-like capacity inside each Segment
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    // The capacity inside each Segment is at least 2, and always a power of 2
    while (cap < c)
        cap <<= 1;
    // create segments and segments[0]
    Segment<K,V> s0 = new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                                       (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}

To summarize the initialization logic of ConcurrentHashMap in Java 7:

- Required parameter checks.
- Check the concurrency level concurrencyLevel and reset it to the maximum if it exceeds the maximum. The default value from the no-arg constructor is 16.
- Find the smallest power of 2 at or above concurrencyLevel for the number of Segments (ssize), which defaults to 16.
- Record the segmentShift offset, which is the N in capacity = 2 to the power N and is used later to compute the position in put. The default is 32 - sshift = 28.
- Record segmentMask, which defaults to ssize - 1 = 16 - 1 = 15.
- Initialize segments[0] with a default size of 2, a load factor of 0.75, and an expansion threshold of 2 * 0.75 = 1.5, so inserting the second value triggers the first expansion.

3. put

Next, with the initialization parameters above in mind, look at the put method source.

/**
 * Maps the specified key to the specified value in this table.
 * Neither the key nor the value can be null.
 *
 * <p>The value can be retrieved by calling the <tt>get</tt> method
 * with a key that is equal to the original key.
 *
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>
 * @throws NullPointerException if the specified key or value is null
 */
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    // The hash value is unsigned right-shifted by segmentShift bits (28 by default, set at
    // initialization), then ANDed with segmentMask = 15.
    // In effect this ANDs the high 4 bits of the hash against the mask (1111).
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
        // Initialize the Segment if the one found is null
        s = ensureSegment(j);
    return s.put(key, hash, value, false);
}

/**
 * Returns the segment for the given index, creating it and
 * recording in segment table (via CAS) if not already present.
 *
 * @param k the index
 * @return the segment
 */
@SuppressWarnings("unchecked")
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    // Check whether the Segment at offset u is null
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        Segment<K,V> proto = ss[0]; // use segment 0 as prototype
        // Get the initial HashEntry table length from segment 0
        int cap = proto.table.length;
        // Get the load factor from segment 0; all segments share the same loadFactor
        float lf = proto.loadFactor;
        // Compute the expansion threshold
        int threshold = (int)(cap * lf);
        // Create a HashEntry array of capacity cap
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) { // recheck
            // Check again whether the Segment at u is null, as another thread may have created it
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            // Spin-check whether the Segment at u is still null
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
                // Assign with CAS; this succeeds only once
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                    break;
            }
        }
    }
    return seg;
}

The source code above covers what ConcurrentHashMap does when putting a piece of data; the following combs through the specific process.

1. Calculate the position of the key to put and get the Segment at that position.
2. If the Segment at that position is empty, initialize it.

Initialize Segment process:

- Check that the calculated Segment is null.
- If it is null, continue initializing: create a HashEntry array using the capacity and load factor of Segment[0].
- Check again whether the Segment at the calculated position is null.
- Initialize this Segment using the created HashEntry array.
- Spin-check whether the Segment at the calculated position is null, and use CAS to assign the Segment at that position.

3. Call Segment.put to insert the key/value pair.

The above explored the operations for getting and initializing Segments. The Segment's put method on the last line hasn't been reviewed yet, so continue with the analysis.

final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    // Try to acquire the ReentrantLock exclusive lock; if that fails, fall back to scanAndLockForPut
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        // Calculate the slot for the data to put
        int index = (tab.length - 1) & hash;
        // Get the HashEntry at that index
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) {
            if (e != null) {
                // Check whether the key already exists; if so, traverse the list to find
                // the matching node and replace its value
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                // e == null: either the slot was empty or the end of the list was reached;
                // insert the new node at the head of the list
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                // If the count exceeds the expansion threshold and is below the maximum capacity, expand
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    // Assign the node to the index slot; it may be a single element or the head of a list
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock();
    }
    return oldValue;
}

Since Segment inherits from ReentrantLock, acquiring a lock inside a Segment is easy, and the put process relies on it.

1. tryLock() acquires the lock; if it cannot, the scanAndLockForPut method is used instead.
2. Calculate the index where the data will be placed and get the HashEntry at that index.
3. Why traverse before putting the new element? Because the HashEntry obtained here may be an empty slot or an existing linked list, and the two cases are treated differently.

If no HashEntry exists at that index:

- If the current count exceeds the expansion threshold and is below the maximum capacity, expand.
- Otherwise, insert the node directly at the head.

If a HashEntry exists at that index:

- Check whether the key and hash of the current list element match the key and hash to be put. If they match, replace the value.
- If they don't match, move to the next node in the list until a match is found and replaced, or the end of the list is reached with no match at all.
- If the current count exceeds the expansion threshold and is below the maximum capacity, expand.
- Otherwise, insert the node directly at the head of the list.

If the key already existed, the old value is returned after replacement; otherwise null is returned.

The scanAndLockForPut operation from step 1 was not covered above. What this method does is spin on tryLock() to acquire the lock; once the number of spins exceeds the specified limit, it blocks on lock() until the lock is acquired. While spinning, it also locates the HashEntry at the hash's position in the table.
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
    HashEntry<K,V> first = entryForHash(this, hash);
    HashEntry<K,V> e = first;
    HashEntry<K,V> node = null;
    int retries = -1; // negative while locating node
    // Spin to acquire the lock
    while (!tryLock()) {
        HashEntry<K,V> f; // to recheck first below
        if (retries < 0) {
            if (e == null) {
                if (node == null) // speculatively create node
                    node = new HashEntry<K,V>(hash, key, value, null);
                retries = 0;
            }
            else if (key.equals(e.key))
                retries = 0;
            else
                e = e.next;
        }
        else if (++retries > MAX_SCAN_RETRIES) {
            // Once the spin count exceeds the limit, block until the lock is acquired
            lock();
            break;
        }
        else if ((retries & 1) == 0 &&
                 (f = entryForHash(this, hash)) != first) {
            e = first = f; // re-traverse if entry changed
            retries = -1;
        }
    }
    return node;
}

4. Expansion: rehash

ConcurrentHashMap only ever doubles in size. When the data in the old array is moved to the new array, an entry's position either stays the same or becomes index + oldSize; after expansion, the node passed in as a parameter is inserted at the head of the list in its slot.

private void rehash(HashEntry<K,V> node) {
    HashEntry<K,V>[] oldTable = table;
    // Old capacity
    int oldCapacity = oldTable.length;
    // New capacity: double the old one
    int newCapacity = oldCapacity << 1;
    // New expansion threshold
    threshold = (int)(newCapacity * loadFactor);
    // Create the new array
    HashEntry<K,V>[] newTable = (HashEntry<K,V>[]) new HashEntry[newCapacity];
    // New mask: e.g. capacity 2 expands to 4, minus 1 is 3, binary 11
    int sizeMask = newCapacity - 1;
    for (int i = 0; i < oldCapacity ; i++) {
        // Traverse the old array
        HashEntry<K,V> e = oldTable[i];
        if (e != null) {
            HashEntry<K,V> next = e.next;
            // Calculate the new position; it can only be the old position, or the old position + old capacity
            int idx = e.hash & sizeMask;
            if (next == null)   //  Single node on list
                // The slot holds a single element, not a list; assign it directly
                newTable[idx] = e;
            else { // Reuse consecutive sequence at same slot
                // The slot holds a linked list
                HashEntry<K,V> lastRun = e;
                int lastIdx = idx;
                // After this loop, all elements from lastRun onward map to the same new slot
                for (HashEntry<K,V> last = next; last != null; last = last.next) {
                    int k = last.hash & sizeMask;
                    if (k != lastIdx) {
                        lastIdx = k;
                        lastRun = last;
                    }
                }
                // Everything from lastRun onward shares a slot; assign it there as a list
                newTable[lastIdx] = lastRun;
                // Clone remaining nodes
                for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                    // Traverse the remaining elements and head-insert each at its new slot k
                    V v = p.value;
                    int h = p.hash;
                    int k = h & sizeMask;
                    HashEntry<K,V> n = newTable[k];
                    newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
                }
            }
        }
    }
    // Head-insert the new node
    int nodeIndex = node.hash & sizeMask; // add the new node
    node.setNext(newTable[nodeIndex]);
    newTable[nodeIndex] = node;
    table = newTable;
}

Some students may be confused by the last two for loops. The first for loop finds the first node (lastRun) after which every following node maps to the same new slot, then assigns that whole tail to the new slot as a list. The second for loop head-inserts the remaining elements into the lists at their respective slots. The reason for doing it this way is probably statistical (in practice, long tails often share a slot), and readers who dig deeper are welcome to comment.
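To make the "same index or index + oldSize" claim concrete, here is a tiny standalone check (my own illustration, not from the article):

public class RehashIndexDemo {
    public static void main(String[] args) {
        int oldCap = 4;                  // old capacity, old mask = 0b011
        int newMask = (oldCap << 1) - 1; // mask after doubling   = 0b111
        int h1 = 9;   // 9 & 3 = 1 and 9 & 7 = 1, so the index is unchanged
        int h2 = 13;  // 13 & 3 = 1 but 13 & 7 = 5, i.e. old index 1 + oldCap 4
        System.out.println((h1 & newMask) + " " + (h2 & newMask)); // prints: 1 5
    }
}

Because doubling only adds one bit to the mask, that single extra hash bit decides between the two outcomes.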
5. get

The get path is easy; it only takes two steps.

1. Calculate the storage position of the key.
2. Traverse the list at that position to find the value with the same key.

public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key);
    // Calculate the position where the key is stored
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            // If it is a linked list, traverse it to find the value with the same key
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}

2. ConcurrentHashMap 1.8

1. Storage structure

You can see that ConcurrentHashMap in Java 8 has changed a lot compared to Java 7: it is no longer the previous Segment array + HashEntry array + linked list, but a Node array + linked list / red-black tree. When a bucket's collision chain grows to a certain length, the linked list is converted to a red-black tree.

2. Initialize: initTable

/**
 * Initializes table, using the size recorded in sizeCtl.
 */
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        // If sizeCtl < 0, another thread won the CAS and is initializing
        if ((sc = sizeCtl) < 0)
            // Yield the CPU
            Thread.yield(); // lost initialization race; just spin
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    table = tab = nt;
                    sc = n - (n >>> 2);
                }
            } finally {
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}

From the source code, you can see that the initialization of ConcurrentHashMap is done through spinning and CAS operations. Note the variable sizeCtl, whose value determines the current initialization state:

- -1 indicates initialization is in progress.
- -N indicates that N-1 threads are expanding.
- If the table is not yet initialized, it indicates the table's initialization size.
- If the table has already been initialized, it indicates the table's capacity.

3. put

Go through the put source directly.
public V put(K key, V value) {
    return putVal(key, value, false);
}

/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
    // key and value cannot be null
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        // f = the element at the target slot
        Node<K,V> f; int n, i, fh; // fh holds the hash of the element at the target slot
        if (tab == null || (n = tab.length) == 0)
            // The bucket array is empty; initialize it (spin + CAS)
            tab = initTable();
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // Empty bucket: CAS the node in without locking; break out on success
            if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
                break; // no lock when adding to empty bin
        }
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            // Use synchronized on the head node to link in the new node
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    // fh >= 0 means this bin is a linked list
                    if (fh >= 0) {
                        binCount = 1;
                        // Walk the list, overwriting an existing key or appending a new node
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key, value, null);
                                break;
                            }
                        }
                    }
                    else if (f instanceof TreeBin) {
                        // The bin is a red-black tree
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key, value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    addCount(1L, binCount);
    return null;
}

1. The hashcode is calculated from the key.
2. Determine whether initialization is required.
3. Locate the Node for the current key: if it is null, the slot is free to write, and a write is attempted with CAS; on failure, the enclosing spin loop guarantees eventual success.
4. If the hashcode at the current slot is MOVED == -1, an expansion is in progress, so help transfer.
5. If none of the above applies, the data is written under the synchronized lock.
6. If the bin's node count then exceeds TREEIFY_THRESHOLD, the list is converted to a red-black tree.

4. get

The get process is simple; go through the source directly.

public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    // The hash position where the key is located
    int h = spread(key.hashCode());
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        // The element at that slot exists; check whether the head node's hash matches
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                // Equal key hash and equal key: return the element's value directly
                return e.val;
        }
        else if (eh < 0)
            // Head node hash below 0 means the table is expanding or the bin is a red-black tree; use find
            return (p = e.find(h, key)) != null ? p.val : null;
        while ((e = e.next) != null) {
            // It is a linked list; traverse to search
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}

To summarize the get process:

- Calculate the position from the hash value.
- Go to that position; if the head node is the one sought, return its value directly.
- If the head node's hash is less than 0, the table is expanding or the bin is a red-black tree; use find to look it up.
- If it is a linked list, traverse and search.

Summary: overall, ConcurrentHashMap changed considerably in Java 8 compared to Java 7.

3. Summary

In Java 7, ConcurrentHashMap uses a segmented lock, meaning that only one thread at a time can operate on each Segment; each Segment is a HashMap-like structure that can be expanded internally, with collisions resolved into linked lists. However, the number of Segments cannot be changed once they are initialized.
In Java 8, ConcurrentHashMap uses the synchronized lock together with the CAS mechanism. The structure also evolved from the Java 7 Segment array + HashEntry array + linked list into a Node array + linked list / red-black tree, where Node is a structure similar to HashEntry. A bin converts to a red-black tree when its collision chain grows past a certain size, and reverts to a linked list when it shrinks below a certain count.

Some students may have doubts about the performance of the synchronized lock. In fact, since the introduction of the lock upgrade strategy, the performance of synchronized is no longer a problem; interested students can read up on synchronized lock upgrading on their own.

Last words

Articles have been included in Github.com/niumoo/JavaNotes; welcome to star it and contribute. There you will also find many of my write-ups on front-line interview questions and the core knowledge Java programmers need to master. I hope we can become excellent together.

If this article helped you, a "like" or a "share" is all the support I need! Articles are updated continuously every week. To follow my latest articles and shared material in real time, you can follow the Unread Code public account or my blog.
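As a closing illustration of the thread safety analyzed above, a small usage sketch (my own example, not from the article): several threads can update one map concurrently without any external locking.

import java.util.concurrent.*;

public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            // merge() performs the per-key read-modify-write atomically
            pool.submit(() -> hits.merge("page", 1, Integer::sum));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(hits.get("page")); // 1000
    }
}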
https://programmer.ink/think/don-t-know-concurrenthashmap-yet-this-source-code-is-analyzed-and-interpreted.html
CC-MAIN-2021-39
refinedweb
3,291
55.84
Now that you've got the basics down (you did read my first tutorial and studied up using the Language Guide, right?) it's time to witness the sleek, rumbling power of Boo Kung-Fu, but first, the requisite backstory:

Just a few days ago I was wrestling with a problem in parsing XML. My primary issue was the fact that I had many attributes represented in an XML element that had to be converted to many different kinds of types. For example, consider an XML element along these lines (the attribute values here are illustrative):

<ninja name="Hattori" style="koga" color="black" strength="50" speed="60" stamina="70" />

The color attribute has to be converted to a type of System.Drawing.Color, but name and style are strings; strength, speed, and stamina, however, needed to be converted to a set of integers. Sounds easy, right? Man, I wish. I realized, as the XML became more complicated and trickier to parse with some attributes being present and some not, that I was creating a huge mess of unmaintainable code. There had to be a simpler, more elegant way that was easily extended. And there was: Boo. My initial implementation of attributes-to-data-structures was over 70 lines long, and I hadn't even scratched the surface yet. Using my ultimate Boo fighting technique, however, I produced something similar to the following code (this is just an example, sans error checking and junk). It took me about 25 minutes to write it up while doing the article, so it ain't half bad, says me.

It was pretty easy to follow along, but I'll guide you through it anyway to explain some of the cool stuff. ParseAttributes takes two parameters: an XmlNode that is hopefully rife with attributes, and a special dictionary that holds the attribute name to be converted as a key, and a pointer to the method that is to do the conversion as the value. For example, an entry like { "strength" : ToInt } would map the "strength" attribute to the ToInt() conversion method. Then, in the body of ParseAttributes, we extract the ToInt method when we find an instance of the "strength" attribute, then we feed the value of "strength" into ToInt(), which returns an integer representation of the strength attribute. Then, glory be, we take that returned integer and stuff it into another dictionary, and when we've all finished, we return this dictionary. For future reference, ToInt() invokes a method, while ToInt passes a pointer to the ToInt() method, which is called a "callable" in Boo. If you need to extract a method pointer from an arraylist or something that only returns objects, make sure to cast it to a "callable" first before trying to use it.

The return dictionary holds the mapping from attribute -> data structure, which probably looks something like this internally: { "name" : "Hattori", "color" : Color.Black, "strength" : 50, ... }. You'll note the special conditional: if I want to pick up an attribute's value but not convert it, I simply pass 'null' as that attribute's method pointer (the 'converter' variable) in the dictionary, and its value is passed through as a string into the dictionary to be returned. If you were feeling really twisted, you could collapse that check into a single expression instead and save yourself one very precious line of code, at the cost of your co-workers smacking you in the back of the head when they browse your source code. Returning to our regularly scheduled program, when we pass in a null instead of a "ToInt" or a "ToColor", an attribute such as style="koga" comes out in the returned dictionary as the plain string entry { "style" : "koga" }.

Now, it's time we just pick and choose keys from the dictionary and assign them to their relevant fields in a Ninja object, right? No, sorry – I'm lazy, and since I've gone this far, I might as well go all the way, baby! Witness this method:
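(A minimal Boo sketch of such a method; the name AssignFields and the details are my illustration of what the next paragraph describes, not necessarily the author's original code.)

def AssignFields(myObject, values as Hash):
    # For each attribute name in the dictionary, find the field of the
    # same name on myObject via reflection and stuff the value into it.
    for key in values.Keys:
        field = myObject.GetType().GetField(key)
        if field is not null:
            field.SetValue(myObject, values[key])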
It uses a feature of the .NET framework called "reflection" to dynamically assign values to the fields possessed by myObject. I won't go over it in any great detail; suffice it to say that it works its magic in incredible and mysterious ways, and the end result is the same as writing out every field assignment by hand, minus the clean syntax. So, there you have it: a clean, easy, extensible way to convert attributes to data structures and assign these data structures to their proper classes. A real-world solution to a real-world problem I was struggling with - yeah, that's right, something useful.

Another way using Duck Typing

Note that there was another way to solve this design problem using a most excellent feature of Boo called Duck Typing and its interface, IQuackFu, but that's another story for another day, my children. See XML Object for an example.

Another way using XML Serialization

Nice example. It looks like something that .NET XML serialization may be good for (although it for some reason chokes on the color type, so I worked around that). See XML Serialization for this version.

5 Comments

Jan 10, 2005 betson
Very cool. My version is that way because the XML elements I was consuming were not guaranteed to be in a standard format: attributes could have "mm" and "cm" post-fixing a length attribute, or a color attribute might be in as many as three different formats, none of which can be consumed by the Color type converter without modification, and since I had to modify them anyway... I wasn't paying attention to the example XML element I made. Next time I'll make it more dastardly and sick!

Jan 11, 2005 dholton
Cool. Gave me an excuse to learn more about XML, anyway. If you like we can move my example to a page about XML Serialization instead, so your tutorial doesn't get too complicated.

Jan 11, 2005 betson
Naw. This is actually the perfect place for it; it's related and people can compare/contrast both implementations without having to flip tabs. Also, I can't believe you gave your ninja dude better stats than mine. =D

Jan 11, 2005 dholton
Sorry, I didn't see your comment, I already had moved it I think. I'll just keep it like this, there is a link to the page with my example. And that page links back to this one.

Jan 25, 2007 Christopher Eyre
The import section is missing: import System.Drawing. Otherwise great article.
http://docs.codehaus.org/display/BOO/Advanced+Tutorial%2C+Learning+Boo+Kung-Fu?focusedCommentId=17585
Consult the talk page or me for any questions or concerns. See the Poo Lit Archives for the results from past competitions.

The competition

What is the Poo Lit Surprise? A writing competition held on Uncyclopedia for a small cash prize. It is designed to jump-start writing quality at Uncyclopedia. After the previous eight competitions, a significant number of featured articles for the following month(s) were from the PLS, and it could be argued that the overall quality of featured articles increased.

Who can enter and what are the rules? All registered members of Uncyclopedia are encouraged to enter, though a few conditions and limitations apply. Judges are barred from entering the competition and banned. Unless your sockpuppet is BENSON, of course. Articles created prior to the competition cannot be submitted. Also, no plagiarism. Should we discover your work is not original, you will be disqualified. Using resources such as the Reefer Desk, Image Request, or Pee Review is forbidden. Users may, however, use Vital, Uncyclopedia:The Creative Process, Category:Rewrite, Special:Wantedpages, Uncyclopedia:Requested Articles and/or Inspire an Article to help get ideas.

When is the PLS going to be held? From May 10th to May 23rd.

Where should I put my entry? The article should be placed in your namespace (i.e. the title should be in the format User:[username here]/[article name here]) between May 10th and May 23rd.

Boxers or briefs?

Entries

Best Main Namespace Article: Also simply known as "Best Article", this category is for articles which are not of an alternate namespace (i.e. UnBooks, HowTo, UnNews, Why?, UnPoetia, etc.).

Best Noob Article: A category for noobs (users who have been on Uncyclopedia for three months or less). May be of an alternate namespace.

Best Illustrated Article: Based on how well an article's images contribute to the humor of the article. May be of an alternate namespace (images and article must be created by the user).

Best Alternate Namespace Article: The category which is strictly for alternate namespace articles (i.e. UnBooks, HowTo, UnNews, Why?, UnPoetia, UnScripts, etc.).

Best Rewrite: For articles which are rewrites of existing articles.
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Poo_Lit_Surprise?diff=next&oldid=2795387
On 1/18/21 12:16 PM, Guido van Rossum wrote:

    I do worry about the best practice getting worse if your PEP 649 is accepted.

A good part of what motivated me to start this second thread ("Let's Fix ...") was how much worse best practice would become if PEP 649 is accepted. But if we accept PEP 649, and take steps to fix the semantics of annotations, I think the resulting best practice will be excellent in the long run.

Let's assume for a minute that PEP 649 is accepted more or less like it is now. (The name resolution approach is clearly going to change, but that won't affect the discussion here.) And let's assume that we also change the semantics so annotations are always defined (you can't delete them) and they're guaranteed to be either a dict or None. (Permitting __annotations__ to be None isn't settled yet, but it's most similar to current semantics, so let's just assume it for now.)

Because the current semantics are kind of a mess, most people who examine annotations already have a function that gets the annotations for them. Given that, I really do think the best approach is to gate the code on version 3.10, like I've described before:

    import sys
    import types

    if sys.version_info >= (3, 10):
        def get_annotations(o):
            return o.__annotations__
    else:
        def get_annotations(o):
            if isinstance(o, (type, types.ModuleType)):
                return o.__dict__.get("__annotations__", None)
            else:
                return o.__annotations__

This assumes returning None is fine. If it had to always return a valid dict, I'd add "or {}" to the end of every return statement. Given that it already has to be a function, I think this approach is readable and performant. And, at some future time when the code can drop support for Python < 3.10, we can throw away the if statement and the whole else block, keeping just the one-line function. At which point maybe we'd refactor away the function and just use "o.__annotations__" everywhere.

I concede that, in the short term, now we've got nine lines and two if statements to do something that should be relatively straightforward: accessing the annotations on an object. But that's where we find ourselves. Current best practice is kind of a mess, and unfortunately PEP 649 breaks current best practice anyway. My goal is to fix the semantics so that long-term best practice is sensible, easy, and obvious.

Cheers,

/arry
https://mail.python.org/archives/list/python-dev@python.org/message/Y7KUIKUHFQJ564HM2JNFFZF5EJHLNR2P/attachment/2/attachment.htm
- mid-2000s is legacy? NO WAY! I cannot refer to anything later than 1998 as legacy.

Admin
In my mind Y2K is a few years ago. Somehow it is almost 20 years ago. I'm getting old.

Admin
It would be no excuse, but maybe they had a rule "multiple if statements are bad, use a switch" that was misapplied here?

Admin
How is newtonking an improvement over the good old tried-and-tested oldtonking? I want to carry on being tonked the same as I always have been.

Admin
It's not just you; I'm not exactly old, yet I'm surprised how long ago the early 2000s were.

Admin
The real WTF is that Newtonsoft.Json is a library for JSON parsing and serialization. How the f*** did it pop up in XML-related code, and why would there be XML attributes in this namespace?

Admin
Well, to be honest, that's not the best code, but it's hardly a WTF.

Admin
My guess is that they expected the Newtonsoft namespace definition to expand and so it'd just be a matter of adding new case statements. Clearly that didn't happen.

Admin
It wouldn't surprise me if there was JSON encoded in the XML or the other way around! I've seen worse.

Admin
I'm wondering if this was a misapplied attempt to leave room for additional valid values or namespaces in the future. Still not the way I'd implement that, but that's the only reason I can think of that you'd write such different methods on the same day.

Admin
I've had to add tests to otherwise untested systems, and it really forced me to trace through a lot of the underlying WTF in the codebase and clean it up. I'm not sure if the investment was intended more for increased developer knowledge, cleaner code or both, but down the road, I noticed that it took less time to track down bugs, so FTW!

Admin
Legacy code:

Admin
So, the problem with legacy code here is using switch statements instead of ifs? Kids these days have it too easy...

Admin
"Legacy code is just code without unit tests" -- some smartass from a conference. Seems good in theory, except that given how ridiculously badly designed and spaghettified some legacy code is, as soon as you write unit tests, the microsecond after you start refactoring, you most likely have to rewrite your unit tests from scratch, because the original code was so untestable.

Admin
What is it when your suite of tests is so old and horrible that it is legacy code itself?

Admin
no if statements, must be functional programming

Admin
[pre] public void unitTest() { assert(true) } [/pre] There. Code modernized. Problem solved.

Admin
There. Code modernized. Problem solved. With formatting (hopefully).

Admin
It's from the book "Working Effectively with Legacy Code". Highly recommend. I guess the author has made a lot of speeches where this bold statement has come up.

Admin
Maybe he could update this:

Admin
Apparently all of my test suites are legacy. I'll need to start writing tests for them!

Admin
I started in this company in late 2002 as part of the effort to replace a key system, which we completed in late 2003. In 2006-7 we replaced that system with a new one. In 2014 we replaced it again, because the previous product had been EOLed for some time. (Company was bought out and the product discontinued.) I'm hoping we can stretch this one out to 10 years. At least nobody's likely to be buying out Oracle any time soon.
https://thedailywtf.com/articles/comments/legacy-switchout/1
One error you may encounter when using pandas is:

    ValueError: You are trying to merge on int64 and object columns.
    If you wish to proceed you should use pd.concat

This error occurs when you attempt to merge two pandas DataFrames but the column you're merging on is an object (string) in one DataFrame and an integer in the other DataFrame. The following example shows how to fix this error in practice.

How to Reproduce the Error

Suppose we create the following two pandas DataFrames:

    import pandas as pd

    #create DataFrames
    df1 = pd.DataFrame({'year': [2015, 2016, 2017, 2018, 2019, 2020, 2021],
                        'sales': [500, 534, 564, 671, 700, 840, 810]})

    df2 = pd.DataFrame({'year': ['2015', '2016', '2017', '2018', '2019', '2020', '2021'],
                        'refunds': [31, 36, 40, 40, 43, 70, 62]})

    #view DataFrames
    print(df1)

       year  sales
    0  2015    500
    1  2016    534
    2  2017    564
    3  2018    671
    4  2019    700
    5  2020    840
    6  2021    810

    print(df2)

       year  refunds
    0  2015       31
    1  2016       36
    2  2017       40
    3  2018       40
    4  2019       43
    5  2020       70
    6  2021       62

Now suppose we attempt to merge the two DataFrames:

    #attempt to merge two DataFrames
    big_df = df1.merge(df2, on='year', how='left')

    ValueError: You are trying to merge on int64 and object columns.
    If you wish to proceed you should use pd.concat

We receive a ValueError because the year variable in the first DataFrame is an integer but the year variable in the second DataFrame is an object.

How to Fix the Error

The easiest way to fix this error is to simply convert the year variable in the second DataFrame to an integer and then perform the merge. The following syntax shows how to do so:

    #convert year variable in df2 to integer
    df2['year'] = df2['year'].astype(int)

    #merge two DataFrames
    big_df = df1.merge(df2, on='year', how='left')

    #view merged DataFrame
    big_df

       year  sales  refunds
    0  2015    500       31
    1  2016    534       36
    2  2017    564       40
    3  2018    671       40
    4  2019    700       43
    5  2020    840       70
    6  2021    810       62

Notice that we don't receive any ValueError and we are able to successfully merge the two DataFrames into one.
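Worth noting (an addition, not from the original tutorial): the conversion can just as well go in the other direction. If the rest of your pipeline expects string years, convert the integer column instead; either way, the point is that both merge keys must share one dtype:

    #alternative fix: convert year in df1 to a string (object) column instead
    df1['year'] = df1['year'].astype(str)

    big_df = df1.merge(df2, on='year', how='left')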
https://www.statology.org/you-are-trying-to-merge-on-object-and-int64-columns/
Writing an R package, I use namespaces to call functions from existing packages, e.g. raster::writeRaster(...). Should base functions be referenced through the base namespace in the same way, e.g. base::sum(...) or foo[base::which(base::sapply(bar, function(x) ...))]?

No, you don't need to reference base packages like this. You only need to reference non-base packages, to ensure they are loaded into the function environment when functions from your package are run, either by using :: or @import in the Roxygen notes at the top of your script. See Hadley Wickham's explanation of why you don't need to reference base packages. The only time you need to reference base:: is if the namespace for your package contains a package that has an alternative function of the same name.
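A small sketch of the convention (illustrative only; the function name and output file are made up):

    # raster is a non-base package, so qualify it (or @import it via Roxygen);
    # sum() comes from base and needs no prefix.
    write_summed <- function(x) {
      raster::writeRaster(x, "out.tif")
      sum(x[])
    }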
https://codedump.io/share/V0RMuVz7R4tr/1/name-space-of-base-package-needed
Hello Janak,

I've been working on a man page for the upcoming 2.6.16 unshare() syscall, using the documentation that you provided (thank you!) in your Documentation/unshare.txt patch as a base. Perhaps you would care to review that page (below), and make corrections, if needed.

While writing this page, I came up with a number of questions about the design of this interface:

1. Your original documentation said:

       The flags argument specifies one or bitwise-or'ed of several of the following constants.

   However, my reading of the code (I have not yet tested the syscall) is that 'flags' can be zero. I don't see any problem with that, but it is in conflict with the statement above, so it may be worth confirming: what is the intended behaviour? Is 'flags' allowed to be zero?

2. Reading the code and your documentation, I see that CLONE_VM implies CLONE_SIGHAND. Since CLONE_SIGHAND is not implemented (i.e., results in an EINVAL error), I take it that this means that at the moment CLONE_VM will not work (i.e., will result in EINVAL). Is this correct? If so, I will note this in the man page.

3. The naming of the 'flags' bits is inconsistent. The problem is that the flags don't simply reverse the meanings of the clone() flags of the same name: they do it inconsistently. That is, CLONE_FS, CLONE_FILES, and CLONE_VM *reverse* the effects of the clone() flags of the same name, but CLONE_NEWNS *has the same meaning* as the clone() flag of the same name. If *all* of the flags were simply reversed, that would be a little strange, but comprehensible; but the fact that one of them is not reversed is very confusing for users of the interface. An idea: why not define a new set of flags for unshare() which are synonyms of the clone() flags, but make their purpose more obvious to the user, i.e., something like the following:

       #define UNSHARE_VM    CLONE_VM
       #define UNSHARE_FS    CLONE_FS
       #define UNSHARE_FILES CLONE_FILES
       #define UNSHARE_NS    CLONE_NEWNS

   etc. This would avoid confusion for the interface user. (Yes, I realize that this could be done in glibc, but why make the kernel and glibc definitions differ?)

4. Would it not be wise to include a check of the following form at the entry to sys_unshare():

       if (flags & ~(CLONE_FS | CLONE_FILES | CLONE_VM |
                     CLONE_NEWNS | CLONE_SYSVSEM | CLONE_THREAD))
               return -EINVAL;

   This future-proofs the interface against applications that try to specify extraneous bits in 'flags': if those bits happen to become meaningful in the future, then the application behavior would silently change. Adding this check now prevents applications trying to use those bits until such a time as they have meaning.

Cheers,

Michael

unshare.2 draft man page:

.\" (C) 2006, Janak Desai <janak@us.ibm.com>
.\" (C) 2006, Michael Kerrisk <mtk-manpages@gmx.net>
.\" Licensed under the GPL
.\"
.TH UNSHARE 2 2005-03-10 "Linux 2.6.16" "Linux Programmer's Manual"
.SH NAME
unshare \- disassociate parts of the process execution context
.SH SYNOPSIS
.nf
.B #include <sched.h>
.sp
.BI "int unshare(int " flags );
.fi
.SH DESCRIPTION
.BR unshare ()
allows a process to disassociate parts of its execution
context that are currently being shared with other processes.
Part of the execution context, such as the namespace, is shared
implicitly when a new process is created using
.BR fork (2)
or
.BR vfork (2),
while other parts, such as virtual memory, may be
shared by explicit request when creating a process using
.BR clone (2).
The main use of
.BR unshare (2)
is to allow a process to control its
shared execution context without creating a new process.
The
.I flags
argument is a bit mask that specifies which parts of
the execution context should be unshared.
This argument is specified by ORing together one or more
.\" FIXME according to the code, it looks as though
.\" flags can actually be zero.
of the following constants:
.TP
.B CLONE_FILES
Reverse the effect of the
.BR clone (2)
.B CLONE_FILES
flag.
Unshare the file descriptor table, so that the calling process
no longer shares its file descriptors with any other process.
.TP
.B CLONE_FS
Reverse the effect of the
.BR clone (2)
.B CLONE_FS
flag.
Unshare file system attributes, so that the calling process
no longer shares its root directory
.RB ( chroot (2)),
current directory
.RB ( chdir (2)),
or umask
.RB ( umask (2))
attributes with any other process.
.TP
.B CLONE_NEWNS
This flag has the same effect as the
.BR clone (2)
.B CLONE_NEWNS
flag.
Unshare the namespace, so that the calling process has a private
copy of its namespace which is not shared with any other process.
Specifying this flag automatically implies
.B CLONE_FS
as well.
.TP
.B CLONE_VM
Reverse the effect of the
.BR clone (2)
.B CLONE_VM
flag.
.RB ( CLONE_VM
is also implicitly set by
.BR vfork (2),
and can be reversed using this
.BR unshare ()
flag.)
Unshare virtual memory, so that the calling process no longer
shares its virtual address space with any other process.
.SH RETURN VALUE
On success, zero is returned.
On failure, \-1 is returned and
.I errno
is set to indicate the error.
.SH ERRORS
.TP
.B EPERM
.I flags
specified
.B CLONE_NEWNS
but the calling process was not privileged (did not have the
.B CAP_SYS_ADMIN
capability).
.TP
.B ENOMEM
Cannot allocate sufficient memory to copy parts of the caller's
context that need to be unshared.
.TP
.B EINVAL
An invalid bit was specified in
.IR flags .
.SH CONFORMING TO
The
.BR unshare (2)
system call is Linux-specific.
.SH NOTES
Not all of the process attributes that can be shared when a
new process is created using
.BR clone (2)
can be unshared using
.BR unshare ().
In particular, as at kernel 2.6.16,
.BR unshare ()
does not implement
.BR CLONE_SIGHAND ,
.BR CLONE_SYSVSEM ,
or
.BR CLONE_THREAD .
.SH SEE ALSO
.BR clone (2),
.BR fork (2),
.BR vfork (2),
Documentation/unshare.txt

--
http://lkml.org/lkml/2006/2/13/408
(This article has already been posted elsewhere; now I post it again on my own blog.)

The first part of this series discussed how to reuse OWS controls from SharePoint.WebControls. Strictly speaking, there are a few steps to employ them in a webpart: first, create an OWSForm as the field container; second, add OWS*Field controls to the form; next, submit the form either using the OWSSubmit control or your own control; and finally, collect the values from the POST value collection.

In fact, SharePoint.WebControls only defines two OWS fields, DateField and NumberField, which is far from our expectation. However, if you are keen to browse OWS.JS (the default JavaScript include in SharePoint), you'll find that there are many other OWS inputs around, such as URLField, BooleanField, NoteField, RichTextField, TextField, etc. (Figure-1). Obviously, those inputs are just common input types found in the HtmlControls of ASP.NET or basic HTML tags. OWS provides a uniform appearance and a finer user interface, such as the date-picking tool found in DateField or the formatting tab of RichTextField. So our discussion here is about creating our first custom OWS control, using the available objects in the OWS.JS script. For the sake of simplicity and clarity, I will take a simple example, the TextField, giving you the methodology to create your own OWS*Field control and leaving the other fields for you to implement.

Figure-1. Many OWS field objects don't have correlated web controls.

The Base Class: OWSControl

Remember that every OWS*Field control we make must be glued properly to the form (OWSForm) (Part I). To enable this operation our control must recognize its parent; therefore we also expect that our control will determine its parent form automatically. OWSControl has this kind of operation, so we can make it the base class and inherit our control from it. Inheriting from OWSControl also assembles the correct OWS control hierarchy (Part I). Lastly, since we are going to post data through the POST value collection, we also need to implement the System.Web.UI.IPostBackDataHandler interface.

    public class OWSTextField : Microsoft.SharePoint.WebControls.OWSControl, IPostBackDataHandler

Confirm the Field Definition in OWS.JS

Basically the new control renders the JavaScript function for the specified field as defined in OWS.JS. The JavaScript objects and functions have been assembled by Microsoft, so we must verify the parameters in the script's object definition. For example, Figure-2 shows how the TextField function is defined in OWS.JS, which will be rendered by our control.

Figure-2. Consult OWS.JS for the desired field object.

As we can see there are six parameters to be supplied, with an additional "Required" flag:

- frm is the parent form name (the form container)
- stName is the current field name (the object's field name)
- stDisplay is the display name for the current field
- stValue is the value for the current field
- cchMaxLength is the maximum number of characters for the field
- cchDisplaySize is the textbox size
- the Required flag carries the additional field-required information

The parent form name (1) can be obtained from the base class property ParentOWSForm.Name. The current field name (2) must be uniquely defined; since our control's grandfather is WebControl, we can get UniqueID from the current control. The rest of the parameters (3-7) will be fields of our control.

Next, Implement LoadPostData

LoadPostData is one of the default contracts when we implement the IPostBackDataHandler interface.
In this function we can implement a procedure to check whether the postback value collection contains our parameter, and put the value into the current control's Value field.

    public bool LoadPostData(string postDataKey, NameValueCollection postCollection)
    {
        string text1 = postCollection[postDataKey];
        if (((text1 != null) && (text1 != "")) && (text1 != this.Value))
        {
            this.Value = text1;
            return true;
        }
        return false;
    }

Finally, Override the Render Method

Overriding the Render method is simple, since you only need to construct the JavaScript object for the field and call the BuildUI() function of the object.

Summary

To take advantage of a uniform appearance and a finer user interface for all inputs, we can use the OWS.JS field objects. Using those field objects can be as simple as rendering JavaScript to construct the object, terminated by calling its BuildUI() function.
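As a rough illustration of that Render override (a sketch only: the TextField arguments follow the OWS.JS signature discussed above, but the property names DisplayName, MaxLength and DisplaySize, and the exact JavaScript emitted, are assumptions rather than verified API):

    protected override void Render(HtmlTextWriter output)
    {
        output.Write("<script language=\"javascript\">");
        // TextField(frm, stName, stDisplay, stValue, cchMaxLength, cchDisplaySize)
        output.Write("var fld = new TextField(document." + ParentOWSForm.Name +
            ", \"" + UniqueID + "\", \"" + DisplayName + "\", \"" + Value + "\", " +
            MaxLength + ", " + DisplaySize + ");");
        output.Write("fld.BuildUI();");
        output.Write("</script>");
    }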
http://blog.libinuko.com/2006/08/02/sharepoint-webcontrol-part-ii-creating-custom-ows-control-for-webpart/
I'm trying to unscramble a code using pythonic methods. The way to crack the code is by selecting the letter two places ahead of itself. For example, if the code was abc, the answer would be cde. My sketch so far (pseudocode):

    alphabet = ["a", "b", "c", ..., "z"]

    def scramble(sentence):
        alphabet = ["a", "b", "c", ...]
        solution = []
        for i in sentence:
            newIndex = # get index value of i, += 2
            newLetter = # translate newIndex into the corresponding letter
            solution.append(newLetter)
        for i in solution:
            print(i, end="")

Try:

    >>> s = 'abc'
    >>> ''.join(chr(ord(c) + 2) for c in s)
    'cde'

The above is not limited to standard ASCII: it works up through the Unicode character set. To wrap around at the end of the alphabet, use modular arithmetic:

    >>> s = 'abcyz'
    >>> ''.join(chr(ord('a') + (ord(c) - ord('a') + 2) % 26) for c in s)
    'cdeab'

If we want to modify the original only enough to get it working:

    from string import ascii_lowercase as alphabet

    def scramble(sentence):
        solution = []
        for i in sentence:
            newIndex = (alphabet.index(i) + 2) % 26
            newLetter = alphabet[newIndex]
            solution.append(newLetter)
        for i in solution:
            print(i, end="")

Example:

    >>> scramble('abcz')
    cdeb
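For completeness (an addition, not part of the original answer), decoding is the same one-liner with the shift reversed:

    >>> ''.join(chr(ord('a') + (ord(c) - ord('a') - 2) % 26) for c in 'cdeab')
    'abcyz'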
https://codedump.io/share/rgW4K76FqeNn/1/how-can-i-move-a-list-index-value-forward-39x39-amount-of-times-using-python
Created on 2019-12-09 20:47 by Jonathan Slenders, last changed 2021-01-29 11:59 by robbiecares.

We have a snippet of code that runs perfectly fine using the `SelectorEventLoop`, but crashes *sometimes* using the `ProactorEventLoop`. The traceback is the following. The exception cannot be caught within the asyncio application itself (e.g., it is not attached to any Future or propagated in a coroutine). It probably propagates in `run_until_complete()`.

    File "C:\Python38\lib\asyncio\proactor_events.py", line 768, in _loop_self_reading
        f.result()  # may raise
    File "C:\Python38\lib\asyncio\windows_events.py", line 808, in _poll
        value = callback(transferred, key, ov)
    File "C:\Python38\lib\asyncio\windows_events.py", line 457, in finish_recv
        raise ConnectionResetError(*exc.args)

I can see that in `IocpProactor._poll`, `OSError` is caught and attached to the future, but not `ConnectionResetError`. I would expect that `ConnectionResetError` too will be attached to the future. In order to reproduce, run the following snippet on Python 3.8:

    from prompt_toolkit import prompt  # pip install prompt_toolkit

    while 1:
        prompt('>')

Hold down the enter key, and it'll trigger quickly.

Suppressing `ConnectionResetError` in `BaseProactorEventLoop._loop_self_reading`, like we do with `CancelledError`, seems to fix it. Although I'm not sure what is causing the error, and whether we need to handle it somehow.

ConnectionResetError means that the pipe socket is closed. Was the event loop closed? Can you provide a reproducer? Can you try to get debug logs to see what's going on? Thanks

Thanks Victor for the reply. It looks like it's the self-socket in the BaseProactorEventLoop that gets closed. It's exactly this FD for which the exception is raised. We don't close the event loop anywhere. I also don't see `_close_self_pipe` being called anywhere. Debug logs don't provide any help. I'm looking into a reproducer.

It looks like the following code will reproduce the issue:

```
import asyncio
import threading

loop = asyncio.get_event_loop()

while True:
    def test():
        loop.call_soon_threadsafe(loop.stop)
    threading.Thread(target=test).start()
    loop.run_forever()
```

Leave it running on Windows, in Python 3.8, for a few seconds, then it starts spawning `ConnectionResetError`s.

Even simpler, the following code will crash after so many iterations:

```
import asyncio

loop = asyncio.get_event_loop()

while True:
    loop.call_soon_threadsafe(loop.stop)
    loop.run_forever()
```

Adding a little sleep of 0.01s after `run_forever()` prevents the issue. So, to me it looks like the cancellation of the `_OverlappedFuture` that wraps around the `_recv()` call from the self-pipe did not complete before we start `_recv()` again in the next `run_forever()` call. No idea if that makes sense...

I just spent some time digging into this. Each call to `run_forever` starts a call to `_loop_self_reading`, then attempts to cancel it before returning. The comment at line 321 is not entirely accurate: the future will not resolve in the future, but it may have *already* resolved, and added its callback to the call_soon queue. This callback will run if the event loop is restarted again. Since `_loop_self_reading` calls itself, this results in two copies of the "loop" running concurrently and stepping on each other's `_self_reading_futures`. This appears to be fairly harmless except for noise in the logs when only one of the loops is stopped cleanly. I believe the simplest fix is for `_loop_self_reading` to compare its argument to `self._self_reading_future` to determine if it is the "current" loop and, if not, don't reschedule anything.

Please take a look at this as well: ipython #12049, 'Unhandled exception in event loop' (WinError 995), and below.

Here is another way to reproduce this (or an extremely similar) error without a loop. Since it may be a race condition, I'm not sure this works 100% of the time on all machines, but it did on several machines I tried.

```
import asyncio

loop = asyncio.get_event_loop()

def func():
    pass

f = loop.run_in_executor(None, func)
loop.stop()
loop.run_forever()
loop.stop()
loop.run_forever()
loop.stop()
loop.run_forever()
```

```
Error on reading from the event loop self pipe
loop: <ProactorEventLoop running=True closed=False debug=False>
Traceback (most recent call last):
  File "C:\Miniconda3\envs\py38\lib\asyncio\windows_events.py", line 453, in finish_recv
    return ov.getresult()
OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Miniconda3\envs\py38\lib\asyncio\proactor_events.py", line 768, in _loop_self_reading
    f.result()  # may raise
  File "C:\Miniconda3\envs\py38\lib\asyncio\windows_events.py", line 808, in _poll
    value = callback(transferred, key, ov)
  File "C:\Miniconda3\envs\py38\lib\asyncio\windows_events.py", line 457, in finish_recv
    raise ConnectionResetError(*exc.args)
ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request
```

Any input from the asyncio experts? I don't have an issue with handling the exception in this case, and hopefully, when someone is up to the task of dealing with the range of edge cases throughout this loop implementation, they can get the ordering of waits right.

I've posted a pull request with a test and fix (GH-22017). It's a more targeted fix than cmeyer's PR (which I didn't even notice until now due to unfamiliarity with the BPO UI).

New changeset ea5a6363c3f8cc90b7c0cc573922b10f296073b6 by Ben Darnell in branch 'master': bpo-39010: Fix errors logged on proactor loop restart (#22017)

Thanks, Ben! I've been seeing failures on the Win10 buildbot 3.x branch that seem to correlate with the timing of this change - could there be some further work needed on Windows? Or, if it's a test-only artifact and the warnings are innocuous, something to ignore the output during the tests? Specifically, the following warnings occur during test_asyncio:

    Warning -- threading_cleanup() failed to cleanup 1 threads (count: 1, dangling: 2)
    Warning -- Dangling thread: <_MainThread(MainThread, started 5220)>

See the buildbot logs for the first failing build.

I can confirm that those warnings appear to be coming from the test I added here. I'm not sure how to interpret them, though - what does it mean for the main thread to be dangling?

I'm guessing the warning appears odd as we're seeing a thread shutdown data race. The message is produced by threading_cleanup in support/threading_helper.py, and it appears that between the first and second lines one of the initially dangling threads goes away, so the only one left to be enumerated is the main thread, at which point the function is simply listing all threads currently alive. But by that point it's just the main thread remaining.
I believe the simplest fix is for `_loop_self_reading` to compare its argument to `self._self_reading_future` to determine if it is the "current" loop and if not, don't reschedule anything. Please take a look at this as well: (ipython #12049 'Unhandled exception in event loop' (WinError 995)) and below Here is another way to reproduce this (or an extremely similar) error without a loop. Since may be a race condition, I'm not sure this works 100% of the time on all machines - but it did on several machines I tried. ``` import asyncio loop = asyncio.get_event_loop() def func(): pass f = loop.run_in_executor(None, func) loop.stop() loop.run_forever() loop.stop() loop.run_forever() loop.stop() loop.run_forever() ``` ``` Error on reading from the event loop self pipe loop: <ProactorEventLoop running=True closed=False debug=False> Traceback (most recent call last): File "C:\Miniconda3\envs\py38\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Miniconda3\envs\py38\lib\asyncio\proactor_events.py", line 768, in _loop_self_reading f.result() # may raise File "C:\Miniconda3\envs\py38\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "C:\Miniconda3\envs\py38\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request ``` Any input from the asyncio experts? I don't have an issue with handling the exception in this case, and hopefully when someone is up to the task of dealing with the range of edge cases throughout this loop implementation, hopefully they can get the ordering of waits right. I've posted a pull request with a test and fix:. It's a more targeted fix than cmeyer's PR (which I didn't even notice until now due to unfamiliarity with the BPO UI) New changeset ea5a6363c3f8cc90b7c0cc573922b10f296073b6 by Ben Darnell in branch 'master': bpo-39010: Fix errors logged on proactor loop restart (#22017) Thanks, Ben! I've been seeing failures on the Win10 buildbot 3.x branch that seem to correlate with the timing of this change - could there be some further work needed on Windows? Or, if it's a test-only artifact and the warnings are innocuous, something to ignore the output during the tests? Specifically, the following warnings occur during test_asyncio: Warning -- threading_cleanup() failed to cleanup 1 threads (count: 1, dangling: 2) Warning -- Dangling thread: <_MainThread(MainThread, started 5220)> See for the first failing build. I can confirm that those warnings appear to be coming from the test I added here. I'm not sure how to interpret them, though - what does it mean for the main thread to be dangling? I'm guessing the warning appears odd as we're seeing a thread shutdown data race. The message is produced by threading_cleanup in support/threading_helper.py, and it appears that between the first and second lines one of the initially dangling threads goes away, so the only one left to be enumerated is the main thread, at which point the function is simply listing all threads currently alive. But by that point it's just the main thread remaining. 
I notice in the test that you have a comment about needing to wait for f to complete or you get a warning about thread not shutting down cleanly. Was that a similar warning? The run_until_complete(f) line seems to have no effect on the buildbot. If I added a small sleep at the end of the test the warnings go away on the buildbot. The buildbot is a fairly fast machine, so perhaps the test just needs to wait somehow for the event loop to fully shut down or something. The most direct cause of the warnings seems to be the self.loop.close() call - if I just comment that out the test runs warning-free without any extra delay needed. I don't know much about asyncio tests, but it would appear the close call in the test defeats some of the logic in the close_loop teardown code that runs under TestCase (in utils.py), which under Windows is just a call to run_until_complete(loop.shutdown_default_executor()). If I add that same call to the test prior to the close it also passes cleanly. So if closing the loop in the test itself is crucial, I'd suggest including the extra run_until_complete call. If closing isn't crucial to the test, simply removing it seems to address the issue. I'm not sure if its removal then has any implications for the extra run_until_complete(f) call in the test, as I can't see any impact from that on the buildbot. I've fixed the test and added some commentary about the different levels of clean shutdown we must do to quiet all the warnings: New changeset be435ae2b064dc64f04475bec632862e1dbf605f by Ben Darnell in branch 'master': bpo-39010: Improve test shutdown (#22066) New changeset 49571c0b0e57e20d85727c738d9a0fe342dd2938 by Miss Islington (bot) in branch '3.9': bpo-39010: Fix errors logged on proactor loop restart (GH-22017) (#22034) New changeset a986b061a3cd4661eb9f857e2936291f1847bd15 by Miss Islington (bot) in branch '3.8': bpo-39010: Fix errors logged on proactor loop restart (GH-22017) (#22035) New changeset 40e2444c364eede59f193979df5a02ed958f56e9 by Miss Islington (bot) in branch '3.8': bpo-39010: Improve test shutdown (GH-22066) (#22083) New changeset 8d68e59f11267ded765d4af6d71a49784a12fad5 by Miss Islington (bot) in branch '3.9': bpo-39010: Improve test shutdown (GH-22066) (#22082)
https://bugs.python.org/issue39010
Introduction: How to Build a Car Crash Warning System

Well, this is my first instructable. The objective of this project is to build a car warning system, based on the real car crash warning system, using Arduino, sensors and some other stuff. There are many ways to build this project and you can modify the code for your own purpose. Don't worry about the code or the connections, because it is not that hard. Actually it is pretty simple. Now let us get down to business.

Step 1: Things You Will Need

- Arduino Uno
- LCD display (white letters on a blue display will look cool)
- Ultrasonic sensor (I'm using the HC-SR04, works well!)
- Jumper wires (male to male)
- Breadboard
- Piezo buzzer
- 10k resistor
- Potentiometer

Step 2: Connect the Things

The sensor sends out ultrasonic pulses and measures the amount of time taken for the sound to come back; you can then convert that into centimeters or inches in the code. What is the use of a phone without a battery? We do not want that, right? So let us set things up. And be ready for a bumpy ride, especially while connecting the LCD. We have many things to take care of while connecting the different components; a small mistake will ruin the project. Now let me talk about the connections.

This project will use about 9 digital pins of the Arduino. The pins of the LCD are connected to the digital pins 12, 11, 5, 4, 3 and 2. The buzzer is connected to digital pin 8. The ultrasonic sensor has two signal pins, a Trig pin and an Echo pin. I'm using the HC-SR04 as the ultrasonic sensor as it is very cheap and accurate. You can also use other models of ultrasonic sensors, such as the Parallax PING))), which is very accurate but a bit expensive. The Trig pin is connected to digital pin 7 and the Echo pin is connected to digital pin 6. Sounds hard? The diagram at the top will clear all your doubts. The top picture shows how my project looks after connecting it.

Step 3: The Code for Your Car

Now this is the big boys' game. Let's begin, shall we? I have tried my best to explain the code to you. What this code does is that when the ultrasonic sensor detects an object 20 inches away or closer, it turns the buzzer on and also displays the distance in inches. This is almost like the real car crash warning system. If you want to change the minimum distance, all you need to do is change the line "if (targetDistance <= 20)" to your required distance. I have used the LCD.display() and LCD.noDisplay() functions to turn the LCD display on and off. And I also want to give you a tip: NEVER COPY-PASTE A CODE, because it is important that you make mistakes while writing code. It helps you to learn much more easily. Now let's go for the code. You might see it as a long code, but once you read it, it will be just a piece of cake!

    #include <LiquidCrystal.h>

    LiquidCrystal LCD(12, 11, 5, 4, 3, 2); //Create Liquid Crystal Object called LCD

    int trigPin = 7;  //Sensor Trig pin connected to Arduino pin 7
    int echoPin = 6;  //Sensor Echo pin connected to Arduino pin 6
    int buzzer = 8;   //the positive wire of the buzzer is connected to pin 8

    float pingTime;              //time for the ping to travel to the target and back
    float targetDistance;        //distance to the target, in inches
    float speedOfSound = 767.0;  //speed of sound, roughly 767 miles per hour at room temperature

    void setup() {
      Serial.begin(9600);
      pinMode(trigPin, OUTPUT);  //declare trig pin as output
      pinMode(echoPin, INPUT);   //declare echo pin as input
      pinMode(buzzer, OUTPUT);   //declare buzzer as output
      LCD.begin(16, 2);          //Tell Arduino to start your 16 column 2 row LCD
    }

    void loop() {
      digitalWrite(trigPin, LOW);  //put the trig pin in a known low state
      delayMicroseconds(2000);
      digitalWrite(trigPin, HIGH); //send a short ping
      delayMicroseconds(15);
      digitalWrite(trigPin, LOW);

      pingTime = pulseIn(echoPin, HIGH); //pingTime is presented in microseconds
      pingTime = pingTime / 1000000;     //convert pingTime to seconds by dividing by 1000000 (microseconds in a second)
      pingTime = pingTime / 3600;        //convert pingTime to hours by dividing by 3600 (seconds in an hour)
      targetDistance = speedOfSound * pingTime;  //distance the ping travelled, in miles
      targetDistance = targetDistance / 2;       //the ping goes there and back, so halve it
      targetDistance = targetDistance * 63360;   //convert miles to inches by multiplying by 63360 (inches per mile)

      if (targetDistance <= 20) {  //if the distance of the obstacle is less than or equal to 20 inches
        LCD.display();             //turn the lcd display on
        LCD.setCursor(0, 0);       //set cursor to first column of first row
        LCD.print("WARNING");
        LCD.setCursor(0, 1);       //set cursor to first column of second row
        LCD.print(targetDistance); //print measured distance
        LCD.print("inch");         //print your units
        digitalWrite(buzzer, HIGH); //set the buzzer high
        delay(250);                 //pause to let things settle
      }
      else {                        //if the distance of the object is greater than 20 inches
        digitalWrite(buzzer, LOW);  //set the buzzer low
        LCD.noDisplay();            //turn the lcd display off
      }
    }

So as you can see, this is the code. Now type this code into your Arduino IDE, verify it and upload it. Now you can make your own versions of this project as you need. So thank you for reading. And let your imagination grow and develop into projects. And for that, Arduino is the best tool.
http://www.instructables.com/id/How-to-Build-a-Car-Crash-Warning-System/
Strive: a Haskell client for the Strava API

I'm proud to announce the release of Strive, a Haskell client for the Strava V3 API. Although it hasn't reached the 1.0 milestone, it's both usable and stable.

Motivation

After working on Haskeleton and hs2048, I wanted to use Haskell to create something with real value. Seeing as I'm an avid cyclist and use Strava to track my rides, it made sense to write a client for their API. Plus, they didn't already have one in Haskell.

Goals

In spite of still being an amateur Haskeller, I have some ideas about how packages should be written. My post introducing Haskeleton probably made that clear already. To that end, I tried to make Strive:

Easy to use with a single import. All you need to do is import Strive. In order to do this, I had to learn about lenses. This makes the package easier to use and easier to play around with in GHCi.

Quick to build. Installing Strive from scratch takes about ten minutes. Obviously it's not as fast as installing a Ruby gem, but it's pretty good as far as Haskell packages go. This is a cost that every user has to pay, so the faster the better.

Depend on few other packages. Strive depends on 12 packages, many of which are included with the Haskell Platform. This not only makes installation faster, but it makes it easier to keep dependencies up to date.

Be accessible to beginners. Anyone should be able to open any file in Strive and basically follow along. There are some exceptions, like lenses and Template Haskell, but they're necessary to achieve other goals. This makes it easier for anyone to contribute.

Hopefully these goals make for a package that is both easy to use and easy to develop for.

Example

I want to show you the bare minimum needed to get started. It covers Strava's token exchange, which is necessary for getting a token. Once you have one, you can perform any request against their API. I will show one here as an example.

    import Strive
    import Data.Default (def)
    import Data.Text (unpack)

    -- Each API application has a client ID and a client secret. Get them
    -- both from your Strava API application settings.
    let clientId = 1790
    let clientSecret = "..."

    -- To get authorized, you need to build a URL and visit it.
    let url = buildAuthorizeUrl clientId "" def
    putStrLn url

    -- After visiting the above URL, copy the code out of the request
    -- parameters.
    let code = "..."

    -- Finally you can exchange your code for a token, which can be used
    -- to access the API.
    Right response <- exchangeToken clientId clientSecret code
    let token = unpack (get accessToken response)

    -- Build a Strive client and use it to request the current athlete.
    client <- buildClient token
    Right athlete <- getCurrentAthlete client
    print (get firstname athlete)

This example scratches the surface of what Strive provides. It covers 100% of Strava's API. Check out the readme for a complete list of available endpoints.
http://taylor.fausak.me/2014/08/11/strive-a-haskell-client-for-the-strava-api/
Continuing our look at how dtSearch makes full-text indexing and search easy, we now move on to consider the strange topic of indexing databases. You might think that the idea of bringing in a separate piece of software to index a database is slightly crazy. After all, aren't databases supposed to be all about indexes? The answer is yes, but by comparison with the sort of thing dtSearch is designed to do, they are very simple indexes.

Much of the time the documents that you want to index are stored in a collection of folders, but increasingly databases are used to store documents of all types, complete with a simple keyword or metadata index. This is fine for a lot of retrieval situations, but what if you want to perform complex searches on the contents of the documents? For example, in the case of a data-based website the articles are stored in a database. If you want to create a full-text search facility you have to index the database. This is where dtSearch comes in. It can index documents stored in a database, or anywhere for that matter. In this article we take a look at how you can index documents of all kinds stored in any container you care to think up, and not just a database.

The first thing we need to learn is how to create an index under program control. Most of the time you want to create a full-text index of a set of files stored in a collection of directories. This is something that dtSearch can do automatically using the desktop utility. In fact, there is usually very little reason to write programs to create an index, but it can be done and it isn't difficult. You do, however, want to write code to create an index if the data source is something other than files stored in folders. So let's first look at how to create a standard index in code and then extend this to using a more general data source.

First you will need a copy of dtSearch, and to follow this example it is suggested you download the 30-day evaluation from dtsearch.com. It is also assumed that you already know how to create an index and search it using, say, C#. If not, read Getting Started with dtSearch.

Start a new C# project, make sure you have added a reference to the dtSearch library, and add to the start of the project:

    using dtSearch.Engine;

Creating an index under program control with dtSearch is exceptionally simple. All you need is an IndexJob object:

    IndexJob indexJob = new IndexJob();

You simply set the properties of the IndexJob object to specify the index you want to create and call one of the Execute methods to build or update the index. So what do you have to specify to create an index? First you have to say where you want the index to be created:

    indexJob.IndexPath = @"C:\Users\name\AppData\Local\dtSearch\test2";

There is no particular reason to use this location; it is just the default used by the dtSearch Desktop utility for the indexes it creates. Notice that you specify the directory that the files for the index are created in. Next you have to specify the folders and files that you would like to index. This is achieved using the FoldersToIndex string collection. You can add as many strings specifying paths to folders to this collection as you need. For the example we will add just one:

    indexJob.FoldersToIndex.Add(@"C:\Users\name\Documents");

You can add a <+> to the end of the path to signify that all of the subfolders should be indexed. If you don't add <+> then just the contents of the specified folder are indexed. You can also add include and exclude filters to specify which types of file are to be indexed. For simplicity we will ignore filters.

Finally, we have to set some "Action" properties that indicate how the indexing operation should be performed. The ActionCreate property has to be set to true for the indexing operation to create a new index. If the index already exists then it is overwritten. The ActionAdd property allows new documents to be added to the index. To create a new empty index and add files to it you have to set both:

    indexJob.ActionCreate = true;
    indexJob.ActionAdd = true;

The IndexJob is now set up with minimal configuration and we can start it going. The simplest way to do this is to use the Execute method. This starts the indexing off and only returns, with a Boolean to indicate success or failure, when the index is complete. So, to complete the program, we have to add:

    bool result = indexJob.Execute();

The complete program is:

    IndexJob indexJob = new IndexJob();
    indexJob.FoldersToIndex.Add(@"C:\Users\name\Documents");
    indexJob.IndexPath = @"C:\Users\name\AppData\Local\dtSearch\test2";
    indexJob.ActionCreate = true;
    indexJob.ActionAdd = true;
    bool result = indexJob.Execute();
You can also add include and exclude filters to specify which types of file are to be indexed. For simplicity we will ignore filters. Finally, we have to set some "Action" properties that indicate how the indexing operation should be performed. The ActionCreate property has to be set to true for the indexing operation to create a new index. If the index already exists then it is overwritten. The ActionAdd property allows new documents to be added to the index. To create a new empty index and add files to it you have to set both: indexJob.ActionCreate = true;indexJob.ActionAdd = true; The IndexJob is now setup with minimal configuration and we can start it going. The simplest way to do this is to use the Execute method. This starts the indexing off and only returns with a Boolean to indicate success or failure when the index is complete. So, to complete the program, we have to add: bool result = indexJob.Execute(); The complete program is: IndexJob indexJob = new IndexJob();indexJob.FoldersToIndex.Add(@"C:\Users\ name\Documents");indexJob.IndexPath = @"C:\Users\name\ AppData\Local\dtSearch\test2";indexJob.ActionCreate = true;indexJob.ActionAdd = true;bool result = indexJob.Execute(); Execute may be simple but it isn't really of much use. Do you really want your indexing program to wait unresponsively while the index is constructed? No, probably not. In most cases the construction of an index takes more time that you can afford to have the UI blocked for. The standard solution in this case is to run the long blocking process on another thread. In this case dtSearch makes this very easy for you. Instead of calling Execute, all you have to do is call ExecuteInThread and the call returns immediately and the indexing proceeds on another thread. You can keep control of the progress of the index using IsThreadDone, AbortThread and so on. Implementing a full indexing application using these facilities is fairly easy - everything works as you would expect - and so for simplicity of the example we will avoid the slight complication of making the indexing asynchronous. In this case it doesn't matter too much because the index is small and completed in a few minutes or less. One of the nice things about dtSearch is that it tends to implement facilities in ways that are simple, direct and probably the way you would choose to do it as well. Of course this means that you don't get the chance to use a lot of new jargon but you also get the program completed quicker. Rather than implementing lots of different interfaces to work with standard data exchange protocols dtSearch simply provides a DataSource class. This uses any protocol you care to name internally to retrieve the data and then presents it to the indexing engine in a simple and uniform way. Now in all probability you are already an expert on ADO, LINQ or RSS and so I'm not going to go over any of these technologies. What I am going to concentrate on is how the DataSource class is used to feed the data to the indexing engine. Let's get started.
http://i-programmer.info/programming/database/3408-full-text-database-indexing-with-dtsearch.html
Lasso Regression Using Python

Hi everyone! Today we will learn about lasso regression (L1 regularization), the mathematics behind it, and how to implement lasso regression using Python!

Building the foundation to implement Lasso Regression using Python

Sum of squares function

- Firstly, let us have a look at the sum of squared errors function, defined as $E = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$, where $\hat{y}_i$ is the model's prediction for the $i$-th sample.
- It is also important to note that the first requirement that should be fulfilled for any data set we want to use for making machine learning models is that the data points should be random in nature and the data size should be large.
- But this requirement is not always fulfilled. That is, in some cases the number of features/dimensions (D) is greater than the number of samples/observations (N). Thus the data set becomes fat (D >> N) in nature instead of skinny (D << N).
- One thing to be noted is that even completely random noise can improve R-squared. But this is very unwanted: we don't want to let noise or unwanted features alter our outputs. This can be achieved by means of regularization.
- In the case of L1 regularization, a few weights, corresponding to the most important features, are kept non-zero, and the others (most of them) are set equal to zero.

Gaussian distribution and probabilities

- Any data set which is random in nature should follow a Gaussian distribution.
- Any Gaussian distribution is defined by its mean $\mu$ and variance $\sigma^2$, and is written $X \sim \mathcal{N}(\mu, \sigma^2)$, where $X$ is the input matrix.
- For any point $x_i$, the probability of $x_i$ is given by the density $p(x_i) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right)$.
- Also, because the occurrence of each $x_i$ is independent of the occurrence of the others, their joint probability is the product $\prod_{i=1}^{N} p(x_i)$.

Likelihood function

- Also, linear regression is the solution which gives the maximum likelihood to the line of best fit.
- Now the question arises: what is likelihood? We define likelihood as the probability of the data $X$ given a parameter of interest, in our case $\mu$. So we define the likelihood function as $L(\mu) = P(X \mid \mu) = \prod_{i=1}^{N} p(x_i \mid \mu)$.
- Linear regression maximizes this function for the sake of finding the line of best fit. We do this by finding the value of $\mu$ which maximizes this function, and we can say that it is very likely that our data has come from a population that has $\mu$ as mean.
- For solving this, first we take the natural log of the likelihood function $L$, then differentiate it with respect to $\mu$ and equate this to zero. The solution is the sample mean, $\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i$; this value of $\mu$ maximizes the likelihood function.

Maximizing likelihood and minimizing the error function

- One thing to note here is that maximizing the likelihood function $L$ is equivalent to minimizing the error function $E$. Also, $y$ is Gaussian distributed with mean $w^T x$ and variance $\sigma^2$, i.e. $y = w^T x + \varepsilon$, where $\varepsilon$ is Gaussian distributed noise with zero mean and $\sigma^2$ variance.
- This is equivalent to saying that in linear regression the errors are Gaussian and the trend is linear.

Why do we need regularization?

- Now, let's understand why regularization was introduced. The answer is outliers! In the presence of outliers, linear regression finds a line of best fit that diverges from the real trend. This is because it follows the method of least squares, and in order to minimize the error it bends the trend line towards the outliers. This makes the prediction less accurate and far from what it could be in the absence of outliers. To handle this problem, we introduce the method of regularization.

The concept of penalty

- L1 regularization uses the L1 norm, $\lVert w \rVert_1 = \sum_j \lvert w_j \rvert$, as a penalty term.
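Putting the pieces together (a reconstruction; the original equation did not survive in this copy), the lasso cost adds the L1 penalty to the squared error:

$$ J(w) = \sum_{i=1}^{N} \left( y_i - w^T x_i \right)^2 + \lambda \lVert w \rVert_1 $$

where $\lambda$ controls how strongly small weights are preferred.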
Likelihood and prior probabilities

- Plain squared error maximizes likelihood, as shown above. But now, since we have two terms in the cost function, we no longer do this. We now have two probabilities: one is the likelihood and the other one is the prior.
- The likelihood is Gaussian, $P(Y \mid X, w) \propto \exp\!\left(-\frac{1}{2\sigma^2} \sum_{i=1}^{N} (y_i - w^T x_i)^2\right)$, and the L1 penalty corresponds to a Laplace prior on the weights, $P(w) \propto \exp(-\lambda \lVert w \rVert_1)$.
- We call $P(w)$ the prior because it represents our prior beliefs about $w$. Thus $J$ is proportional to $-\ln P(Y \mid X, w) - \ln P(w)$. Also, by Bayes' rule, $P(w \mid Y, X)$ is proportional to $P(Y \mid X, w)\,P(w)$. We call $P(w \mid Y, X)$ the posterior probability. The method of maximizing $P(w \mid Y, X)$ is called maximum a posteriori estimation, or MAP.

Implementing Lasso Regression using Python

Now let's see how to implement L1 regularization, or lasso regression, by using gradient descent (I will be covering gradient descent in a separate post).

Importing libraries

In [1]:

    from __future__ import print_function, division
    from builtins import range
    import numpy as np               # importing numpy with alias np
    import matplotlib.pyplot as plt  # importing matplotlib.pyplot with alias plt

Defining number of observations and dimensions

In [2]:

    No_of_observations = 50
    No_of_Dimensions = 50

    # Generating a 50x50 matrix for X with random values centered around 0
    X_input = (np.random.random((No_of_observations, No_of_Dimensions)) - 0.5) * 10
    # Making the first 3 features significant by setting their true w non-zero
    # and the others zero
    w_dash = np.array([1, 0.5, -0.5] + [0] * (No_of_Dimensions - 3))
    # Setting Y = X.w + some random noise
    Y_output = X_input.dot(w_dash) + np.random.randn(No_of_observations) * 0.5

Learning rate for the cost function

In [3]:

    costs = []  # empty list to collect the cost at each iteration
    w = np.random.randn(No_of_Dimensions) / np.sqrt(No_of_Dimensions)  # initialize w randomly
    L1_coeff = 5
    learning_rate = 0.001  # small learning rate so gradient descent doesn't skip the minimum

In [4]:

    for i in range(500):
        Yhat = X_input.dot(w)
        delta = Yhat - Y_output  # the error between predicted output and actual output
        # gradient descent step on the L1-regularized squared error
        w = w - learning_rate * (X_input.T.dot(delta) + L1_coeff * np.sign(w))
        meanSquareError = delta.dot(delta) / No_of_observations  # mean square error
        costs.append(meanSquareError)  # append the MSE for each iteration to the costs list

Plotting costs for Lasso Regression using Python

In [5]:

    plt.plot(costs)
    plt.title("Plot of costs of L1 Regularization")
    plt.ylabel("Costs")
    plt.show()

Printing weights

In [6]:

    # The final w output. As you can see, the first 3 w's are significant;
    # the rest are very small
    print("final w:", w)

    final w: [ 9.65816491e-01  4.27099719e-01 -4.39501114e-01  7.26803718e-04
      1.44676529e-03  4.29653783e-03 -1.88827800e-02  5.01402266e-03
     -1.45435498e-02  2.98832870e-03 -1.94071569e-03 -1.47917010e-02
      3.56488642e-02  2.44495593e-02 -3.40885499e-03 -2.23948913e-02
     -8.56983401e-04  1.00292301e-02  3.33973800e-03  8.51922055e-03
     -3.72198952e-02  5.31823613e-03 -3.35052948e-02  7.15853488e-03
     -1.00094617e-02 -1.44190084e-03  2.96771082e-03 -6.51081371e-03
      3.54465569e-02 -3.30111666e-02  4.42377796e-03 -7.87768360e-03
      1.26511065e-02 -5.43831611e-04 -4.58914064e-04  5.53972101e-03
     -8.31677251e-03  8.63159114e-03 -6.17622135e-03 -3.08958154e-03
      1.39908214e-02  9.34415972e-03 -3.76350383e-03 -2.16322570e-03
      3.84337810e-03 -6.68382801e-04 -2.84473367e-03  2.48744388e-03
     -8.91564845e-03  6.97568406e-02]

Plotting weights

In [7]:

    # plot our w vs the true w
    plt.plot(w_dash, label='true w')
    plt.plot(w, label='w_map')
    plt.legend()
    plt.show()
https://mlforanalytics.com/2018/05/29/lasso-regression-and-its-implementation-with-python/
An application cannot be converted to an applet easily, since applets do not have a main method in them. To create an applet we need to import the java.applet.* package. This package contains the definitions of all the applet keywords. We need to extend the Applet class in our class, which contains the applet methods; it is just inheriting the Applet class into our class. We need to make this class public, because some other class may need to use our class.

Applets contain five built-in methods:

- init()
- start()
- paint()
- stop()
- destroy()

These are invoked in the order they are written above. When an applet is started, the init() method, used for initialization of variables, is invoked first. Then the start() method is invoked. The paint() method is used for displaying text on the applet screen.

Today we shall see a sample applet program:

    import java.applet.*;
    import java.awt.*;

    /* <applet code="app_demo" height=200 width=200> </applet> */
    public class app_demo extends Applet
    {
        public void paint(Graphics g)
        {
            g.drawString("This data is displayed on the applet screen", 20, 100);
        }
    }

Explanation: In the above program, after importing the required packages, we have written some code. The comment contains HTML code written directly in our applet program. The applet tag is used to tell the appletviewer that this program is an applet and won't contain a main method. While defining the paint method we declared a Graphics object parameter which is used to invoke the drawString() method to display the text. The drawString() method takes the parameters String, int, int: the String is the message to be displayed, and the two ints define the position of the message.

To run the applet program from cmd (after compiling it with javac) we use the following command:

    appletviewer classname.java
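A small lifecycle sketch (an illustration added here, not from the original article) showing all five methods in one applet; each one just reports when it runs:

    import java.applet.*;
    import java.awt.*;

    /* <applet code="lifecycle_demo" height=200 width=200> </applet> */
    public class lifecycle_demo extends Applet
    {
        public void init()    { System.out.println("init: runs once, first"); }
        public void start()   { System.out.println("start: after init, and again on each revisit"); }
        public void paint(Graphics g) { g.drawString("lifecycle demo", 20, 100); }
        public void stop()    { System.out.println("stop: when the page is left"); }
        public void destroy() { System.out.println("destroy: when the applet is unloaded"); }
    }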
https://letusprogram.com/2014/01/26/applets-in-java/
CC-MAIN-2018-47
refinedweb
295
65.73
04 July 2007 17:22 [Source: ICIS news]

By Nigel Davis

LONDON (ICIS news)--So suddenly Huntsman is hot property. The attraction of creating an epoxies producer to rival Dow and further consolidate the business has proved too much for private equity firm Apollo. But expect a counter bid from Basell's owner Access Industries; some believe before the end of the week.

Access wants Huntsman to balance its big but vulnerable Basell polypropylene and polyethylene operations. Cash flows from the business will begin to suffer when new low-cost commodity polyolefins capacities come on-stream with a vengeance.

Huntsman balances Basell for financier Len Blavatnik's Access, providing relatively more stable profit flows and in-roads into specialised areas of the chemicals business like epoxies and textile effect chemicals. It could be used as a platform for further downstream growth. Huntsman offers Basell a great deal, although no immediately obvious industrial synergies.

Combining Huntsman with Hexion, however, is a different prospect - one that will attract the eyes of the regulators, and raise questions about just where the cash will come from to finance the deal.

The tasty offer of $27.25/share from Apollo - $6bn excluding debt and 8% higher than the Access bid - could be too difficult to resist; the $25.25/share offer from Basell was looked on by Huntsman as a good deal. But concerns have been raised about the market share of a combined Hexion/Huntsman in epoxies.

Those obstacles are not insurmountable by any means, but the Huntsman board has to weigh the pros and cons of the riskier Hexion offer against the relatively risk-free approach from Basell. The issue is given further gloss by the fact that Huntsman and Apollo have been in a number of battles together, including that over the spun-off Ciba Specialty Chemicals business Vantico. A point made by Kline & Co consultant Ian Butcher is that the Vantico deal helped transform Huntsman from a commodity company into a real specialty player.

The two offers present intriguing possibilities for further development of the Huntsman businesses. Apollo might be expected to want to drive Hexion/Huntsman synergies hard and generate cash quickly from divestments. It appears that Access wants to leave Huntsman very much alone, at first at least. Basell is generating strong cash flows. Huntsman provides a platform for further growth. The combined companies might do better business in vitally important parts of the world.

How forward-looking a combined Hexion and Huntsman might be remains to be seen. It is unlikely, however, that Access/Basell will give up without a fight. Huntsman, without its petrochemicals businesses, has within a week become a hot prospect. More money is likely to be put on the table before a deal is done.
http://www.icis.com/Articles/2007/07/04/9042645/insight-hexion-sees-huntsman-as-hot-property.html
CC-MAIN-2014-10
refinedweb
466
53.21
Richard Biener <rguenth at gcc dot gnu.org> changed:

           What            |Removed                       |Added
----------------------------------------------------------------------------
           Status          |UNCONFIRMED                   |ASSIGNED
   Last reconfirmed        |                              |2017-02-16
          Assignee         |unassigned at gcc dot gnu.org |rguenth at gcc dot gnu.org
     Ever confirmed        |0                             |1

--- Comment #3 from Richard Biener <rguenth at gcc dot gnu.org> ---
Yes, the issue is that we are not told that __builtin_strlen does not return
something that can be used to re-construct the address of 's' and thus passing
that return value to f() makes it escaped. That is, strlen is not

size_t strlen(const char *s)
{
  return (size_t)s;
}

handling strlen inside find_func_aliases_for_builtin_call would fix that part
but for example SCCVN doesn't do stmt walking when CSEing pure calls. But the
strlen pass then handles things.

Index: gcc/tree-ssa-structalias.c
===================================================================
--- gcc/tree-ssa-structalias.c  (revision 245501)
+++ gcc/tree-ssa-structalias.c  (working copy)
@@ -4474,6 +4474,12 @@ find_func_aliases_for_builtin_call (stru
           process_all_all_constraints (lhsc, rhsc);
         }
       return true;
+      /* Pure functions that return something not based on any object.  */
+      case BUILT_IN_STRLEN:
+        /* We don't need to do anything here.  No constraints are necessary
+           for the return value and call handling for pure functions is
+           special-cased in the alias oracle.  */
+        return true;
       /* Trampolines are special - they set up passing the static frame.  */
       case BUILT_IN_INIT_TRAMPOLINE:

any other similar (pure/const) builtins?
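For context, a minimal C reproducer of the pattern being discussed might look like this (an illustrative sketch; the names g, s and f are my assumptions, not from the bug report):

#include <string.h>

void f(size_t n);  /* opaque external function */

void g(void)
{
    char s[16] = "hello";
    size_t n = strlen(s);
    f(n);  /* without the fix, points-to analysis conservatively assumes f()
              could reconstruct the address of s from n, so s is treated as
              escaped and related optimizations are blocked */
}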
https://www.mail-archive.com/gcc-bugs@gcc.gnu.org/msg524168.html
CC-MAIN-2017-47
refinedweb
219
56.76
also more responsive, since nose begins running tests as soon as the first test module is loaded. See Finding and running tests for more.

Setting up your test environment is easier

nose supports fixtures at the package, module, class, and test level. Any function or class that matches testMatch and lives in a module that also matches that expression will be run as a test. For the sake of compatibility with legacy unittest test cases, nose will also load tests from unittest.TestCase subclasses just like unittest does. Like py.test, functional tests will be run in the order in which they appear in the module file. TestCase-derived tests and other test classes are run in alphabetical order.

Fixtures

nose supports fixtures (setup and teardown methods) at the package, module, and test level. As with py.test or unittest fixtures, setup always runs before any test (or collection of tests for test packages and modules); teardown runs if setup has completed successfully, whether or not the test or tests pass. For more detail on fixtures at each level, see below.

Test packages

nose allows tests to be grouped into test packages. This allows package-level setup; for instance, if you need to create a test database or other data fixture for your tests, you may create it in package setup and remove it in package teardown once per test run, rather than having to create and tear it down once per test module or test case. To create package-level setup and teardown methods, define setup and/or teardown functions in the __init__.py of a test package. Setup methods may be named setup, setup_package, setUp, or setUpPackage; teardown may be named teardown, teardown_package, tearDown or tearDownPackage. Execution of tests in a test package begins as soon as the first test module is loaded from the test package.

Test modules

A test module is a python module that matches the testMatch regular expression. Test modules offer module-level setup and teardown; define the method setup, setup_module, setUp or setUpModule for setup, and teardown, teardown_module, or tearDownModule for teardown. Execution of tests in a test module begins after all tests are collected.

Test functions

Any function in a test module that matches testMatch will be wrapped in a FunctionTestCase and run as a test. The simplest possible failing test is therefore:

def test():
    assert False

And the simplest passing test:

def test():
    pass

Test functions may define setup and/or teardown attributes, which will be run before and after the test function, respectively. A convenient way to do this, especially when several test functions in the same module need the same setup, is to use the provided with_setup decorator:

def setup_func():
    # ...

def teardown_func():
    # ...

@with_setup(setup_func, teardown_func)
def test():
    # ...

For python 2.3, add the attributes by calling the decorator function like so:

def test():
    # ...
test = with_setup(setup_func, teardown_func)(test)

or by direct assignment:

test.setup = setup_func
test.teardown = teardown_func

Test generators

nose supports test functions and methods that are generators. A simple example from nose's selftest suite is probably the best explanation:

def test_evens():
    for i in range(0, 5):
        yield check_even, i, i*3

def check_even(n, nn):
    assert n % 2 == 0 or nn % 2 == 0

This will result in 5 tests. nose will iterate the generator, creating a function test case wrapper for each tuple it yields. As in the example, test generators must yield tuples, the first element of which must be a callable and the remaining elements the arguments to be passed to the callable.
Setup and teardown functions may be used with test generators.
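To round out the fixture discussion, here is a minimal sketch of package-level fixtures following the naming rules above (the database helpers are assumed stand-ins):

# tests/__init__.py
def setup_package():
    # runs once, as soon as the first test module in the package is loaded
    create_test_database()  # assumed helper

def teardown_package():
    # runs once, after every test in the package has finished
    drop_test_database()    # assumed helper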
https://bitbucket.org/boothead/nose/src/c57301a0c77f
CC-MAIN-2015-22
refinedweb
590
62.38
ffs man page

Prolog

This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

ffs — find first set bit

Synopsis

#include <strings.h>

int ffs(int i);

Description

The ffs() function shall find the first bit set (beginning with the least significant bit) in i, and return the index of that bit. Bits are numbered starting at one (the least significant bit).

Return Value

The ffs() function shall return the index of the first bit set. If i is 0, then ffs() shall return 0.

Errors

No errors are defined.

The following sections are informative.

Examples

None.

Application Usage

None.

Rationale

None.

Future Directions

None.

See Also

The Base Definitions volume of POSIX.1-2008, strings.h(0p).
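Although the POSIX page itself lists no examples, a quick illustrative use (my addition, not part of the page) is:

#include <stdio.h>
#include <strings.h>

int main(void)
{
    printf("%d\n", ffs(0));   /* prints 0: no bits are set */
    printf("%d\n", ffs(1));   /* prints 1: the least significant bit is bit one */
    printf("%d\n", ffs(12));  /* prints 3: 12 is binary 1100, lowest set bit is bit 3 */
    return 0;
}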
https://www.mankier.com/3p/ffs
CC-MAIN-2017-43
refinedweb
147
59.6
Hey, I am curious if there is a way that InDesign can format text boxes so there are no hyphenated words at all. I understand that I can manually select words and format "no break" in the text formatting options, but is there a way to do this for every word in the text box? I'm asking because I have a very large document, and any change to the size of text boxes, for instance, necessitates me going to every word that's breaking with a hyphen and choosing "no break." It seems like there has got to be a less tedious way to do this… Thanks so much, Quin

Sure, just turn off hyphenation. Best to do it in your styles, but you can select a frame, then open the Paragraph panel and open the flyout menu to get to the hyphenation settings. As long as the frame isn't threaded to any others you can do it in one step there.

Thanks so much for your answer Peter – exactly what I wanted. BTW, do you know if there is a way to create a paragraph style in which the first line is formatted a certain way and the second line formatted another way? I'm designing a seed catalog and I want the crop variety to be listed in Bold, Garamond 16 and the next line (which will give the Latin species) to be Italic Garamond 16. -Quin

Look in the section "Initials and nested styles" of the paragraph style definition; there you can choose a different character style line by line.

It's called a Line Style and is in the Nested Styles dialog. Introduced in CS5.

Hey Peter (and Willi), I've tried to figure out how to do this but somehow I can't get the style to work. I've tried playing around with a paragraph style I've created called "variety/species listing" to set the first line as Garamond, Bold 16 and the second line as Garamond, Italic, 16 – and I'm enclosing images to illustrate how, no matter how I set the styles, it continues to display the text as Garamond, Bold, 16. I've also tried breaking links to the style and then reapplying it. Images enclosed (how the "variety/species listing" style is being applied is illustrated in the first two lines of the final image). Any advice? -Quin

Have you also manually applied character styles? Nested styles and line styles are part of the paragraph style and apply the defined character styles, but they will be overridden, like any paragraph formatting, by a locally applied character style or locally applied formatting.

You want to define the basic formatting for the bulk of the text in the paragraph under the basic character formats section. The nested and line styles are used to change the formatting for the text that is triggered in the definition. So in this case it looks to me like you want to define two character styles, bold Garamond 16 and italic Garamond 16, and apply each of them as a line style to one line; or, if the text to which they should apply could be more than one line long, apply them as nested styles through the forced line breaks, but not both. And of course, all the lines need to be one paragraph. A forced line break is not the same as a paragraph return, and without non-printing characters showing in your screen shot I can't tell what is at the end of any of your lines.

(Manish, appreciate the follow-up but that didn't help.) Peter, I'm enclosing a screenshot that displays the invisible characters. However, I was a little bit confused about some of the other things you wrote. What I am trying to do is this: first line: Garamond 16 Bold; paragraph return (or what I confused for a "forced line break"); second line: Garamond 16 Italic.
Were you saying that I can only do such precise edits through separately applying styles for each line, or simply that character styles are the only way to go? (Also, I understand what you meant about manually applying character styles. So before I applied my "variety/species" style I cleared the text using the "break link to style" command, and then attempted to apply my "variety/species" style. Still, no luck.) -Quin

In your case I would not use nested styles nor line styles at all. I would work with different paragraph styles:

1. Headline Style, next style defined is Subheadline Style, ends with normal return
2. Subheadline Style, next style defined is Block Style, ends with normal return
3. Block Style, next style defined is Price Block 1, ends with normal return
4. Price Block 1, next style defined is Price Block 2, ends with normal return
5. Price Block 2, next style defined is Headline Style, ends with normal return

Is there any reason to split the price information and the main block into 2 separate frames? If the only reasons are the spacing and the inset, and they are always the same, then put it into the same frame; it makes the work easier.

As Willi says, you have three paragraphs, not one paragraph with forced line breaks, so you need three different paragraph styles, not one style with nested styles.

Thank you both Willi and Peter for your insight and suggestions – and I apologize for my belated response. Basically, I've proceeded to set up the different paragraph styles as Willi has suggested – along with a paragraph style folder for organization's sake. I had a few questions, however. Thanks so much again for all your time, Quin

1. You don't "set this up" in the paragraph style options. A regular paragraph ends with a hard return; that's how ID will see that it is a paragraph. So you don't explicitly have to "do" something.

2. Here comes a nice surprise for you: select all of your text, starting with the first line. In your Paragraph panel, right-click the paragraph style for the first line. From the dropdown menu, select "Apply this style, then Next". Ta-daaa. If you set up your Next Styles correctly, they will all be applied in turn to your selected text. I use this to apply about 8 standard styles to the start of an article, up to its very first "regular" text paragraph. And since the last one in the chain, my Body Text style, has "Same Style" as *its* next style in turn, I can extend the initial selection down as long as I see regular (body) paragraphs follow.

Note: It's possible to set this "first, then next" as an attribute of a custom Object Style, but I've never used that particular function.

When you write, you get the normal return by ending the paragraph with the Return key. I am wondering why you use 2 text frames to accomplish what you want to do. Wouldn't it be much easier for you to set it up in one single frame? If you want to use the next-paragraph-style option in an object style it must not be a linked text frame, but you can use it anyway as [Jongware] described above.
http://forums.adobe.com/message/4296641
CC-MAIN-2014-15
refinedweb
1,222
74.83
Upgrading from an NT 4.0 domain

The short answer is yes, but it's not necessarily a simple arrangement, depending on what you want your final domain architecture to look like. Will you be migrating the old NT4 domains to Win2K or maintaining them as NT4 domains? If you will be maintaining NT4 domains, then connecting to your existing structure will take place through the use of trust relationships between the various domains. Be advised that trust relationships between an NT4 domain and a Win2K domain work just like the pure NT4 trusts - i.e. they are one-way and non-transitive. If you will be transitioning to Win2K, your options greatly increase. You can make the added domains separate trees under your present forest and still have the added domains maintain their old domain names (if that is an important consideration). You can also roll up the added domains and move their resources into present trees or child domains in your present forest. Would suggest getting a good book on AD if this is the route you will take.

Upgrading from an NT 4.0 domain

Yes. Migration of the NT domain into the 2k Forest can take two forms:

UPGRADE
o Takes less lead time so can be completed more quickly
o Costs less up front, but cost over time can actually make it more expensive
o Riskier migration strategy. In most circumstances requires an all-at-once approach.
o Much more radical rollback process (Can we say tape backup?) The "BDC-in-a-Closet" trick can help minimize the risk and downtime.

RESTRUCTURE
o More planning time needed.
o Front-end costs are higher
o Allows for cleaner domain and AD design for larger multi-domain networks
o More graceful rollback opportunities
o Allows incremental migration of accounts and computers
o "Copy and Paste" of user accounts possible
o Allows users and computers to exist in parallel environments

Upgrading from an NT 4.0 domain

If it is for a merger or acquisition you can stand up the new domain with an independent namespace by installing the migrated domain as the root of a new tree in an existing forest. In the meantime the NT domain can be connected with one-way non-transitive trusts to the other domains in the forest, allowing for limited participation in the forest. To get all the details, find a Microsoft Certified Technical Education Center in your area and ask for the two-day course 2010. It will enumerate all of the techniques, tools and planning involved in the migration of an NT domain into an existing 2k forest.
https://www.techrepublic.com/forums/discussions/upgrading-from-an-nt-40-domain/
CC-MAIN-2018-39
refinedweb
467
61.77
Jum

"jum" means "remember" in Thai.

An alternative to Joblib's Memory to cache python functions on disk. It uses the dill package to pickle objects and also to help hash function arguments, so it supports any kind of object as long as dill supports it.

Use cases

import jum

@jum.cache(cache_dir='.jum')
def a_long_running_function(array):
    # ... do some cpu intensive things ...
    return value

import numpy as np
a_long_running_function(<some_large_np_array>)

## to configure the compression level (default 2)
@jum.cache(cache_dir='.jum', compresslevel=<0-9>)

Installation

pip install jum

Features

- Supports almost any kind of object, including numpy's ndarray, which is its main use case.
- Faster and lighter, with a smaller cache footprint, than Joblib's Memory.
- Supports file compression using Python's gzip library.
- Uses SHA1 as the main hashing algorithm, providing a large 160-bit hashing space.
- Now uses xxhash to hash ndarrays (specifically) for a speed boost.

To be improved

- Use dill to hash the function body instead of the function code, because some functions' code cannot be retrieved, especially in the python console.
- The function file path might not work in the python console; put some default values in place for it.
- Use some faster hash, e.g. xxhash. (Update) I have profiled it and found that the real bottleneck is the "pickle" process, not the hash itself.
- Favor the slower hash (the slowdown is negligible) when it is safer against collisions.
- By hashing the ndarray directly via xxhash, ndarray hashing performance is increased ten-fold.
- Add a verbose mode, showing the time elapsed for hashing (mainly the overhead of caching).
- Add support for F_CONTIGUOUS nd-arrays; by transposing them we can use xxhash to hash them.
- Take function dependencies (i.e. functions that this function calls) into account.

Known problems

- A null-arg problem where a function has no arguments.
- Using dill for hashing the function is overkill; it's far too sensitive. I will fall back to the function's source lines.
- "ValueError: ndarray is not C-contiguous" happens with some specific ndarrays; not all ndarrays can be fed to xxhash directly. These will be treated by pickle for now.
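A quick illustrative run, reusing the decorated function from the README's own example (the array size is an assumption):

import numpy as np

big_array = np.random.random((1000, 1000))
a_long_running_function(big_array)  # first call: computed, result pickled into .jum
a_long_running_function(big_array)  # second call: answered from the file cache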
https://libraries.io/pypi/jum
CC-MAIN-2021-21
refinedweb
348
65.52
I've ported my NodeJS library bb-rest to C#.NET and made some improvements. Thankfully, we are past the dark days of callback hell -- we can now use the await syntax like sane people! It also supports all versions of the API instead of just v1. For instance, updating a course's name via this library might look like this:

using BBREST;

string key = "myApplicationKey";
string secret = "myApplicationSecret";
string origin = "";  // your Blackboard server's base URL goes here

var mySchool = new RestApp(origin, key, secret);
var response = await mySchool.Request("PATCH", "v1/courses/_2353_1", @"{""name"":""New Course Name""}");
Console.WriteLine(await response.ReadContentAsync());

It doesn't have a README yet, but I'm actively working on improving this library so I'm certainly open to suggestions. I hope some of you find it useful! You can find it here: GitHub - C-Weinstein/BBREST.NET

Awesome! Thank you very much for sharing, Cliff.
https://community.blackboard.com/thread/7396-ive-built-a-new-cnet-library-for-rest-apis
CC-MAIN-2019-13
refinedweb
145
67.15
plzwrk: A front-end framework

A Haskell front-end framework. Available as a Hackage package: plzwrk

📖 Looking for an overview? Read our announcement blog post.

In this document:

- When to use plzwrk
- Examples using plzwrk
- Making a webpage
- Documentation
- Design of plzwrk
- Server-side rendering
- Testing your code
- Contributing

When to use plzwrk

plzwrk may be a good fit if you enjoy the benefits of programming in Haskell and you'd like to create a web app.

⚠️ Warning: plzwrk is experimental. It is unfit for production and the syntax will change frequently, often in non-backward-compatible ways. We will try to document all of these changes in the changelog.

Some alternatives to plzwrk:

- Elm, a delightful language for reliable web apps.
- Purescript react basic, an opinionated set of bindings to the React library, optimizing for the most basic use cases.

Examples using plzwrk

Hello world

An example web page that says 'Hello world!':

{-# LANGUAGE QuasiQuotes #-}

import Web.Framework.Plzwrk
import Web.Framework.Plzwrk.Asterius

main :: IO ()
main = do
  browser <- asteriusBrowser
  plzwrk'_ [pwx|<p>Hello world!</p>|] browser

See the Hello World example live.

Aphorism machine

An Aphorism Machine that spits out and hides universal truths on demand. The code lives in the kitchen-sink directory. Or see the Aphorism Machine live.

Making a webpage

plzwrk uses Asterius as its backend for web development. A minimal flow is shown below. It assumes that you have a file called Main.hs in the present working directory with a function main :: IO () inside of it, not unlike in our hello world example.

username@hostname:~/my-dir$ docker run --rm -it -v $(pwd):/project -w /project meeshkan/plzwrk
asterius@hostname:/project$ ahc-link --input-hs Main.hs --browser --bundle

If you're using ahc-cabal, compiling an application using plzwrk is no different than compiling an application as described in the Asterius documentation, with one caveat: you must use --constraint "plzwrk +plzwrk-enable-asterius" when running ahc-cabal.

Documentation

The main documentation for plzwrk is on Hackage. The four importable modules are:

- Web.Framework.Plzwrk for the basic functions
- Web.Framework.Plzwrk.Tag for helper functions to make tags like input or br if you are not using pwx.
- Web.Framework.Plzwrk.MockJSVal to use a mock browser.
- Web.Framework.Plzwrk.Asterius to use bindings for a real browser, courtesy of Asterius.

Design of plzwrk

plzwrk is inspired by Redux for its state management. The main idea is that you have an HTML-creation function that is composed, via <*>, with getters from a state.

-- State
data MyState = MkMyState
  { _name :: Text
  , _age  :: Int
  , _tags :: [Text]
  }

-- Function hydrating a DOM with elements from the state
makeP = (\name age -> [pwx'|<p>#t{concat [name, " is the name and ", show age, " is my age."]}#</p>|])
  <$> _name
  <*> _age

-- The same function using functional tags instead of pwx
makeP = (\name age -> p'__ (concat [name, " is the name and ", show age, " is my age."]))
  <$> _name
  <*> _age

HTML-creation functions can be nested, allowing for powerful abstractions:

nested = div_ (take 10 $ repeat makeP)

PWX

pwx is similar to jsx.
The main difference is that instead of only using {}, pwx uses four different varieties of #{}#:

- #e{}# for a single element.
- #el{}# for a list of elements.
- #t{}# for a single piece of text, either as a node in the body of an element or as a text attribute.
- #c{}# for a callback attribute.

Hydrating with a state

HTML-creation functions use an apostrophe after the tag name (i.e. div') if they accept arguments from a state, and no apostrophe (i.e. div) if they don't. The same is true of pwx, i.e. [pwx|<br />|] versus (\s -> [pwx'|<br />|]). Additionally, HTML-creation functions for tags that don't have any attributes (class, style, etc.) are marked with a trailing underscore (i.e. div_ [p__ "hello"]), and tags that only accept text are marked with two trailing underscores (i.e. p__ "hello").

Event handlers

Event handlers take two arguments - an opaque pointer to the event and the current state. The handler then returns a new state (which could also be the original state) in the IO monad. For example, if the state is an integer, a valid event handler could be:

eh :: opq -> Int -> IO Int
eh _ i = pure $ i + 1

dom = [pwx|<button click=#c{eh}#>Click here</button>|]

To handle events, you can use one of the functions exported by Web.Framework.Plzwrk. This could be useful to extract values from input events, for instance. Please see the Hackage documentation for more information.

Server-side rendering

plzwrk supports server-side rendering. To do this, you have to compile your site twice:

- Once using ahc-cabal. This uses the procedure outlined in the last section to create any JavaScript you need (i.e. event handlers), and
- Once using plain old cabal to create the initial HTML.

When compiling using ahc-cabal, make sure to use the plzwrkSSR family of functions. These functions will look for pre-existing elements in the DOM and attach event listeners to them instead of creating elements from scratch. There may also be times that the static website needs to be initialized with data (i.e. using the result of an HTTP response made on the server). In this case, you'll need to pass these values dynamically to the function that calls plzwrkSSR. You can do this using the foreign export syntax as described in the Asterius documentation.

When compiling with cabal, you'll likely be using it to output an HTML document or build a server that serves your website as text/html. Regardless of the approach, you should use toHTML to create the part of the initial DOM controlled by plzwrk. In your HTML, make sure to include a link to the script(s) produced by ahc-dist. Also, if needed, make sure to call your exported functions.

Testing your code

plzwrk comes with a mock browser that can act as a drop-in replacement for your browser. You can use this in your tests:

import Web.Framework.Plzwrk.MockJSVal

main :: IO ()
main = do
  browser <- makeMockBrowser
  print "Now I'm using the mock browser."

Contributing

Thanks for your interest in contributing! If you have a bug or feature request, please file an issue. Or if you'd like to hack at the code base, open a pull request. Please note that this project is governed by the Meeshkan Community Code of Conduct. By participating, you agree to abide by its terms.

Local development

- Clone this repository: git clone
- Move into the directory: cd plzwrk
- Set up your local environment: you can use this guide from The Haskell Tool Stack for reference.
https://hackage.haskell.org/package/plzwrk
CC-MAIN-2021-17
refinedweb
1,148
64.51
Eric Blake wrote:
>.

Good point. This should help me avoid making that mistake again.

>From 1b2adcb9f09266099e6df9b2d53bacb39bf7421c Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering redhat com>
Date: Tue, 9 Mar 2010 17:59:25 +0100
Subject: [PATCH] doc: fix typos in hacking.html.in; mark HACKING as read-only

* HACKING: Mark as read-only. Soon we'll generate it from...
* docs/hacking.html.in: ... this file. More typo fixes.
---
 HACKING              |  3 +++
 docs/hacking.html.in | 15 ++++++++-------
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/HACKING b/HACKING
index b94487c..5486d8e 100644
--- a/HACKING
+++ b/HACKING
@@ -1,3 +1,6 @@
+-*- buffer-read-only: t -*- vi: set ro:
+DO NOT EDIT THIS FILE! IT IS GENERATED AUTOMATICALLY!
+
 Libvirt contributor guidelines
 ==============================

diff --git a/docs/hacking.html.in b/docs/hacking.html.in
index 8771c54..f5ec635 100644
--- a/docs/hacking.html.in
+++ b/docs/hacking.html.in
@@ -117,7 +117,7 @@
 </pre>
 <p>
-      Note that sometimes you'll have to postprocess that output further, by
+      Note that sometimes you'll have to post-process that output further, by
       piping it through "expand -i", since some leading TABs can get through.
       Usually they're in macro definitions or strings, and should be converted
       anyhow.
@@ -424,7 +424,7 @@
 #include <limits.h>

 #if HAVE_NUMACTL            Some system includes aren't supported
-# include <numa.h>          everywhere so need these #if defences.
+# include <numa.h>          everywhere so need these #if guards.
 #endif

 #include "internal.h"       Include this first, after system includes.
@@ -533,7 +533,7 @@
-    <h2><a name="committers">Libvirt committers guidelines</a></h2>
+    <h2><a name="committers">Libvirt committer guidelines</a></h2>

     <p>
       The AUTHORS files indicates the list of people with commit access right
@@ -541,11 +541,12 @@
     </p>
     <p>
-      The general rule for committing patches is to make sure it has been reviewed
-      properly in the mailing-list first, usually if a couple of persons gave an
+      donot have a very clear idea of
+
--
1.7.0.2.329.gdaec6
https://www.redhat.com/archives/libvir-list/2010-March/msg00407.html
CC-MAIN-2015-14
refinedweb
330
59.7
I created matrix-enact as a fun way to render Matrix rooms - it essentially "performs" the room history by progressively speaking each message event in chronological order. In this way, matrix-enact is effectively a simple, read-only Matrix client. Let's see how it was built.

This article will introduce two important concepts in Matrix, specifically in the Matrix Client-Server API:

- guest access, via the /register endpoint
- the /context endpoint, which gets messages before and after a given event

Although written in JavaScript (and React), this project does not use the matrix-js-sdk; it makes direct HTTP calls to the Matrix Client-Server API. Because there are only three endpoints we need to hit, we can keep the project very light by not including an SDK.

Guest access

Matrix allows for guest access by providing an interface to register a new guest user and be immediately given an access token. To do this we call the /register endpoint with a query param kind set to guest. In matrix-enact, this looks like:

import axios from 'axios';

var url = "";  // the homeserver's register endpoint goes here:
               // <homeserver>/_matrix/client/r0/register?kind=guest per the CS API
const res = await axios.post(url, {});
const { data } = await res;
// data.access_token will contain the access token, we must store it

Once we have the access token, we use it in the same way as if logged in with a normal user.

Room aliases and room IDs

In the UI, the user can enter either a room alias or a room ID. Whichever they enter, to get message content from a room we need the ID. This means we need to detect if an alias has been entered, and if so get the correct room ID for that alias:

// we know that if the first character is a '#', we have an alias not an id
if (this.state.roomEntry[0] === "#") {
    var getIdUrl = "";  // the directory lookup endpoint goes here:
                        // <homeserver>/_matrix/client/r0/directory/room/ per the CS API
    getIdUrl += encodeURIComponent(this.state.roomEntry);
    const res = await axios.get(getIdUrl);
    const { data } = await res;
    // data.room_id contains the room id for the alias
}

The /context endpoint

We use the /context endpoint to get the chronological history of a room timeline. Looking at this section of the Client-Server API we see:

"This API returns a number of events that happened just before and after the specified event. This allows clients to get the context surrounding an event."

To get messages from this endpoint we need to provide a room ID and the event ID we want context for. Check out the comments in the code below to follow along.

async loadScriptFromEventId(startEventId) {
    // first we construct the url as per the CS API
    // (homeserverUrl is assumed to hold the server's base URL)
    const url = `${homeserverUrl}/_matrix/client/r0/rooms/${encodeURIComponent(roomId)}/context/${encodeURIComponent(startEventId)}?limit=100&access_token=${this.state.accessToken}`;
    axios.get(url).then(res => {
        // make an array to store the events from the response
        var newEvents = [];
        // we only want the events that follow our start event
        newEvents = newEvents.concat(res.data.events_after);
        // and we only want events that contain a body field, i.e. that are messages
        newEvents = newEvents.filter(e => e.content.body);
        // finally, since we're using React for this app,
        // we store these messages in the state object
        this.setState({events: this.state.events.concat(newEvents)});
    });
}

Notice the previous URL we hit when calling /context: we specified a limit value of 100. In fact, 100 is usually the limit enforced by the homeserver. This limit refers to the number of events, not the number of messages - remember that we are filtering them in the code above. If we say that we want our script to be 50 lines long, but after filtering we are left with only 30 messages, what should we do? Get more events after the latest one, and append the new events to our script.
Knowing that we have taken a value from the form to be stored in state.messageCount, and that in the previous section we inserted message events into state.events, we can compare these two variables and, if needed, call loadScriptFromEventId() again with the last known event.

if (this.state.messageCount > this.state.events.length) {
    // get last known event
    var lastEvent = res.data.events_after[res.data.events_after.length - 1];
    this.loadScriptFromEventId(lastEvent.event_id);
} else {
    this.setState({events: this.state.events.slice(0, this.state.messageCount), statusMessage: "Done"});
}

The Web Speech API is a massive topic, out of the scope of this article. We'll cover just enough to be able to show the "happy path" of performing Text-to-Speech (TTS) sequentially. To deliver a line as audio, the fundamental code is as follows:

var utterance = new SpeechSynthesisUtterance();
utterance.text = "some string";
var someVoice = window.speechSynthesis.getVoices()[0];
utterance.voice = someVoice;
window.speechSynthesis.speak(utterance);

To find out when an utterance ends, attach a function to the onend event:

utterance.onend = function() {
    // do something when the line ends
};

Knowing that we can perform TTS on strings we provide, and that we can call a function when a line ends, from here it's easy to see how we can use the list of messages to "enact" the message history.

Let's create a nextLine() function in our App component, and use this to insert lines associated with "Parts", meaning that each part is a separate user with an assigned voice.

nextLine() {
    var line = this.state.line;
    if (! this.state.events[line]) return;
    var newPart = this.state.events[line].sender;
    if (! this.state.parts.find(p => { return p.name === newPart; })) {
        this.setState({
            parts: this.state.parts.concat([{
                name: newPart,
                voice: voices[getRandomInt(0, voices.length)]
            }])
        })
    }
    this.setState({
        script: this.state.script.concat(this.state.events[line]),
        line: this.state.line + 1,
        nextText: "Continue"
    });
}

By incrementing the line counter, we progress through the script, adding a line at a time to the correct Part. During rendering, the App renders an array of Part components, which in turn render an array of lines, filtered for that particular Part:

const lines = this.props.script.map((line, lineNumber) => {
    line.lineNumber = lineNumber;
    return line;
}).filter(l => l.sender === part.name);

Knowing that in React the constructor for a component is called only once, we perform the TTS process itself inside the constructor method:

class Line extends Component {
    constructor(props) {
        super(props);
        var utterance = new SpeechSynthesisUtterance();
        var nextLine = this.props.nextLine;
        utterance.text = this.props.lineText;
        utterance.voice = this.props.part.voice;
        synth.speak(utterance);  // synth refers to window.speechSynthesis
    }
}

Finally, we'll use what we already learned about the onend event to insert the next line:

class Line extends Component {
    constructor(props) {
        super(props);
        var utterance = new SpeechSynthesisUtterance();
        var nextLine = this.props.nextLine;
        utterance.onend = function(a) {
            nextLine();
        };
        utterance.text = this.props.lineText;
        utterance.voice = this.props.part.voice;
        synth.speak(utterance);
    }
}

In this way, nextLine() is called in a loop, meaning that the lines are added to React sequentially and spoken aloud as they are added.

This article covered a lot of ground: guest access, the /context API endpoint, and sequential text-to-speech. To learn more about Matrix development, check out the Matrix Documentation.
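As a closing recap, the speech loop above can be distilled into a framework-free sketch (the function name is mine, not from the original guide): each message is spoken in order by chaining onend callbacks, exactly as the Line component does.

function speakAll(messages, i = 0) {
  if (i >= messages.length) return;
  const utterance = new SpeechSynthesisUtterance(messages[i].content.body);
  utterance.onend = () => speakAll(messages, i + 1);  // queue the next line when this one ends
  window.speechSynthesis.speak(utterance);
}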
https://www.matrix.org/docs/guides/creating-a-simple-read-only-matrix-client/
CC-MAIN-2022-27
refinedweb
1,120
56.05
Description:
------------
I know about 7.4 and arrow functions; that seems like a good solution. Still, I have found 2 problems with Reflection.

1. When you pass a callable as a string or array, you have to be sure that the passed argument count EXACTLY matches the expected argument count. Maybe that is intended, but \Closure works well here: if you pass too many arguments, they are just ignored. Because of that I had to write a bind() function that creates a closure from such callables and controls the bound arguments with \ReflectionClass.

2. When I have a few libraries with functions, sometimes I want to call a function that will be declared only on first access (because the library has a constructor) by passing an array inside a mapper, filter or reducer. I can't do this, because the library has another module inside (that works only with a dependency injector), so trying to pass an array like [ ProxyLib::class, 'func' ] expectedly ends with a "method not found" exception. Of course, if I use a \Closure for that, it works well because of dynamic loading.

Test script:
---------------
/*
 * 1st
 */
array_reduce($array, 'is_array', []); // nvm, just an example
// throws `Too many arguments`

/*
 * 2nd
 */
Class Proxy
{
    public static function __callStatic($method, $arguments)
    {
        // yes we should use DI or something here
        return (new Dynamic)->{ $method }(...$arguments);
    }
}

Class Dynamic
{
    public function __construct()
    {
        $this->loadSomeDataOnInit();
    }

    public function reducer($carry, $item)
    {
        return $carry;
    }
}

array_reduce($array, [ Proxy::class, 'reducer' ], []); // throws `Method not found` exception

array_reduce($array, function ($carry, $item) {
    return Proxy::reducer($carry, $item); // works well, because of dynamic loading
}, []);

Expected result:
----------------
1st: ignore if too many, exception if not enough
2nd: auto-wrap into closure

spl_autoload_register(function () {
    require 'dynamic.php';
});

Class Proxy
{
    public static function __callStatic($method, $arguments)
    {
        // yes we should use DI or something here
        return (new Dynamic)->{ $method }(...$arguments);
    }
}

/* This class will be loaded via spl_autoload_register() and should be declared outside
Class Dynamic
{
    public function __construct()
    {
        // some code to load data outside
    }

    public function reducer($carry) // ! yes there should be 2 arguments
    {
        return $carry; // does nothing
    }
}
*/

// run #2.1
var_dump(new \ReflectionMethod(Proxy::class, 'reducer')); // expected
// throws `Method not found`

// run #2.2
$array = [1,2,3];
array_reduce($array, [ Proxy::class, 'reducer' ], []);
// sorry, this works well; I thought there would be a `too many arguments` exception

// run #2.3
array_reduce($array, function ($carry, $item) {
    return Proxy::reducer($carry, $item); // works well because of dynamic loading, and extra arguments are ignored
}, []);

So I got the exception while writing this topic, because I tried to wrap the callable into a closure using \ReflectionFunction. I thought the "too many arguments" behavior was expected. If I get additional info about it, I will link it here. My bad, sorry.

> I thought that's expected behavior with "too many arguments" will happened

PHP allows calling a function with extra arguments. They can be accessed with func_get_arg() and func_get_args(). It is not related to some core functions like "is_array" and so on.
https://bugs.php.net/bug.php?id=78440&edit=1
CC-MAIN-2020-16
refinedweb
479
50.97
Hello, I'm having trouble with a programming project. I understand the problem but I have run out of ideas to resolve the issue. I'm actually relearning Java and practicing before I re-enroll at a university. Here is my code.

Code Java:

//*****************************************************************************
// PP 2.3 Author: Gregory Shavers
//
// This project will take input from a user and display demographic & personal
// information about this person.
//
//*****************************************************************************
import java.util.Scanner;

public class JavaApplication7
{
    public static void main(String[] args)
    {
        String Name, College, Petname;
        int Age;

        Scanner scan = new Scanner(System.in);

        System.out.print("What is your name? ");
        Name = scan.nextLine();

        System.out.print("What is your age? ");
        Age = scan.nextInt();

        System.out.print("What is your college? ");
        College = scan.nextLine();

        System.out.print("What is your pet name? ");
        Petname = scan.next();

        System.out.println(" \n Hello, my name is " + Name + " and I am " + Age
            + " years \n old. I'm enjoying my time at " + College
            + ", through \n I miss my pet " + Petname + " very much!");
    }
}

This is the output of my application.

***Start of build***
run:
What is your name? Gregory B Shavers Jr
What is your age? 24
What is your college? What is your pet name? OB

Hello, my name is Gregory B Shavers Jr and I am 24 years
old. I'm enjoying my time at , through
I miss my pet OB very much!
BUILD SUCCESSFUL (total time: 13 seconds)
***End of Build***

As you can see, for some reason my *College* and *Petname* prompts come out on one line. What I want is for the program to read two *separate* variables: the Scanner object should take two values, store them, and output my answers in the sentences below. I know the issue is my nextLine() call, but I don't know what lines of code to use to fix this. I checked Google and found some answers, but some of the code didn't make sense to me. I already busted through my old textbook but I couldn't find any examples or hints on how to resolve this problem. I even checked this site to find some answers but could not find any recent post. Would the community bless me with some knowledge on how to properly use the scan.nextInt() method? I would greatly appreciate it. Thanks!

**Happy Coding**
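For readers hitting the same thing: the standard explanation (not part of the original thread) is that nextInt() reads only the number and leaves the line terminator in the input buffer, so the following nextLine() immediately returns the empty remainder of that same line. A minimal fix is to consume the leftover newline first:

System.out.print("What is your age? ");
Age = scan.nextInt();
scan.nextLine();  // discard the newline left behind by nextInt()

System.out.print("What is your college? ");
College = scan.nextLine();  // now reads a fresh line of input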
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/20120-need-help-understanding-scan-nextint-contructor-printingthethread.html
CC-MAIN-2016-18
refinedweb
387
76.62
is there a way to make return count as a double instead of an int? thanks

If you mean just any function then sure, declare it as

double func ( args );

If you are talking about main then you really don't want to return anything but an int.

-Prelude
My best code is written with the delete key.

well, here's my program:

#include <iostream.h>
#include <math.h>

int pyth(double one, double two);

int main()
{
    double product, one, two;
    product = pyth(one, two);
    cout << "Please enter two sides of a right triangle\n";
    cin >> one >> two;
    cout << "the hypotenuse of the right triangle is " << pyth(one, two) << " long" << endl;
    return 0;
}

int pyth(double one, double two) // takes two numbers, squares them, adds them, then takes the square root of the sum
{
    one = pow(one, 2);
    two = pow(two, 2);
    one = one + two;
    two = sqrt(one);
    return two; // can I have this return two as a double instead of an int?
}

as you can see, I need these numbers to be more precise than just an int. or if there's an easier way to do this, thanks

just change the function type to double, not int...

My Website
"Circular logic is good because it is."

ohh... well now I feel stupid. I tried this and it gave me an error the first time, but now it works.... thanks

yea, that's it... pretty straightforward
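Putting the thread's answer together, the corrected program looks like this (a sketch using modern standard headers in place of the old iostream.h; I have also dropped the stray product = pyth(one, two) call that ran on uninitialized values before any input was read):

#include <iostream>
#include <cmath>

// return type changed from int to double so the result is not truncated
double pyth(double one, double two)
{
    return std::sqrt(one * one + two * two);
}

int main()
{
    double one, two;
    std::cout << "Please enter two sides of a right triangle\n";
    std::cin >> one >> two;
    std::cout << "the hypotenuse of the right triangle is "
              << pyth(one, two) << " long" << std::endl;
    return 0;
}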
http://cboard.cprogramming.com/cplusplus-programming/11584-return-double.html
CC-MAIN-2013-48
refinedweb
285
78.89
I'm working as a software engineer for IBSolution. We develop SAP add-ons based on the SAP NetWeaver platform 7.0. We develop in our own namespace as an Independent Software Vendor, so we have reserved the namespace /ISV/ and the generic namespace /B135/. These namespaces are reserved worldwide, for every SAP system, for my company. You can reserve your own namespace on the SAP Service Marketplace. There you get two keys for each namespace. One key is for your development system: the so-called development license key.

Notice: You cannot transfer development license keys to other SAP systems.

When you have finished your development in your development system, you want to transport your developed content. If you want to modify your delivered content in your customer system, you have to enter the other key, the so-called repair license key, and you also have to set the namespace role to C. More information about setting up a namespace for development is available on help.sap.com.

Now that I have explained the namespace license keys and how to set up these namespaces, I will explain how you can transport those namespaces via the Change and Transport System (CTS) as a transport task, so that the customer only has to import the namespace via transaction STMS.

Maybe you think now "what a boring blog… he explains how to use the CTS". So let me first explain the issues I had when I tried to transport the namespace. As you know, you can maintain namespaces in transaction SE03. This transaction represents the table view V_TRNSPACE. You can check this by using transaction SE16 and entering V_TRNSPACE as the table.

Now we know that we have to check which tables are part of the view, so I started transaction SE11 and entered the view as a database table. Three tables are part of the view:

TRNSPACEL
TRNSPACET
TRNSPACETT

I thought I could transport the corresponding records of the tables. I went to transaction SE11, selected TRNSPACEL as a database table, and then noticed that this one is not transportable. If you remember, I told you that you cannot transfer development license keys to other SAP systems. Then I wanted to transport the records of the two other tables. The table TRNSPACET contains the namespace, the repair license, the namespace role and so on. The table TRNSPACETT stores the texts for the namespaces. I started the data browser (SE16), but I couldn't transport the namespace records.

The solution was transaction SE01. First you have to select or create a new transport task. Then double-click on the transport task on the next screen. Then you can add the namespace by entering:

Program ID: R3TR
Object Type: NSPC
Object Name: /ISV/ (here you have to enter your namespace)

But before you add the namespace to your transport task and release the transport task, you must set the namespace role to C. But that's not all. If I create a new InfoObject in my BI system, I have two namespaces: /ISV/ for the technical name of the InfoObject and the generic namespace /B135/. One example: I want to create a characteristic in my /ISV/ namespace with texts. The technical name of the characteristic is /ISV/CUSTOMER. A table for the texts is generated using the /B135/ namespace, in that case /B135/TCUSTOMER. So there must be a mapping between /ISV/ and /B135/. I found the corresponding table: it is the table RSPSPACE. For transporting this mapping, go to transaction SE16 and select your namespace mapping. Then select in the menu Table Entry ==> Transport Entries.
Select your transport task in the popup. Now you can release your transport task and use it for every development in your own namespace. You have to transport your namespace into the customer system first, before you transport all the other objects of the corresponding namespace. I hope this blog helps you to develop your SAP add-ons in your own namespace and transport your add-ons into your customer system.

nice blog! thank you very much for sharing. I installed a customer namespace myself as well last year. In my case everything was automatically collected into the transport. Just for information – the mapping between the customer namespace and the generated namespace is managed via the RSNSPACE transaction and stored in the table RSPSPACE, not the RSPSPACE table.

P.S. Requesting a generated namespace (like /B135/) is nice fun – because you cannot see the already-taken namespaces on the SAP web site, but you want to have as few symbols as possible. In your case /B135/ limits the length of BW object names – it is better to have something like /Bxx/.

thanks. Your comment about the mapping of the customer namespace and the generated namespace is not clear. There is no difference between RSPSPACE and RSPSPACE. If I made a mistake, let me know.

Best Regards, Marcel

I've mistaken – the table name and the transaction name are the same, and it is RSNSPACE (N instead of P). We used the help of an SAP consultant who gave us a choice of free /Bxx/ namespaces (though just a few). The reason was that the maximum length of the technical name of your BW objects using the /B135/ namespace will be cut down by 1 symbol compared to a /Bxx/ namespace.

thanks. The generated namespace is a funny story. As you know, when you want to reserve a namespace on the Service Marketplace, you get no information about which namespaces are already reserved. So we decided to take a high number, /B135/, and we took 135 because our former company logo IBS looks like 135.

Best Regards, Marcel
https://blogs.sap.com/2008/01/17/transporting-isv-namespaces/
CC-MAIN-2017-51
refinedweb
921
71.95
MenuetOS, an OS Written Entirely In Assembly Language, Inches Towards 1.0 372 angry tapir writes "MenuetOS is an open source, GUI-equipped, x86 operating system written entirely in assembly language that can fit on a floppy disk (if you can find one). code, which we want to avoid. ... We support USB devices, such [as] storages, printers, webcams and digital TV tuners, and have basic network clients and servers. So before 1.0 we need to improve the existing code and make sure everything is working fine. ... The main thing for 1.0 is to have all application groups available'" Assembly == SLOW ; JAVA == FAST! (Score:5, Funny) ASSembly language? This OS will run as slow as frozen mollasses. I want my computer to run FAST! I want an operating system written in Java, running on a java interpreter written in java [add recursion here]. Java is faster than C, so logically, java must be faster than ASSembly. More java == more speed. Lets get to it! Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Funny) Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Funny) Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Funny) Might actually be the case (Score:5, Insightful) Re:Might actually be the case (Score:5, Insightful) Re:Might actually be the case (Score:5, Insightful) As someone who has spent a lot of time optimizing assembly code (17 years in the games industry) I can tell you this: As the number of CPU registers increases it gets harder to beat the compiler. It's not too hard to better x86 compiler output. It's pretty difficult to improve PPC code. What assembly optimization gives you is a significantly better understanding of the data flow. Since all CPUs are memory bandwidth bound now knowing where the memory accesses are allows you to restructure the data to make the algorithm more efficient. You also must understand what the compiler is doing to your code so that you can make in access your data as efficiently as possible. To conclude, you can beat any compiler by hand, so long as you have more information then the compiler has. Re:Might actually be the case (Score:5, Informative) Three points: 1) Compilers vs Humans You have to start by doing an apples-to-apples comparison. Yes, many developers these days are ignorant of low level details about assembly language, and would therefore not produce assembly code that is as good as what comes out of a compiler. But that is because the compiler isn't built by your standard run-of-the-mill code monkey. They are built by people who truly understand the issues involved in creating good assembly language. So you need to compare assembly created by a compiler vs assembly created by someone who is at least as skilled as the people who created the compiler. In such a comparison, the humans will generate more efficient code. It will take them much longer (which is one of the two reasons why we have compilers and high-level languages), but they will generate better code. 2) Why write assembly language No, one does not write assembly language for "fun" - there are specific business reasons to do so. Replacing inner loops in performance-critical loops with hand-coded assembly language is a common example. Most major database companies have a group of coders whose jobs are to go into those performance-critical sections and hand tune the assembly language. Would I try to write a GUI using assembly language? No, because it simply isn't that performance sensitive. Choose the tool that fills your needs. Religion about tools is just silly. 3) Out-perform C No. 
Given coders of equal skill, all of the common high-level languages (Java, C, C++, etc) are identical in terms of CPU-intensive performance. That's because the issue is more one of selecting the correct algorithms and then coding them in a sane manner. It is demonstrable that Java can *never* be more efficient than a corresponding C program because one could always write a C program that is nothing more than a replacement for the standard Java JVM (might be a lot of code, but it can be done). The place that one starts to see differences in performance is in the handling of large data sets. Efficiently managing large data sets has much more to do with management of memory. Page faults, TLB cache misses, etc have significant performance impacts when one is working on large data sets. Java works very hard to deny the developer any control over how data is placed in memory, which leaves one with few options in terms of managing locality and other CPU-centric issues related to accessing memory. C/C++ gives one very direct control over the placement and access of objects in memory, hence providing a skilled developer the tools necessary to exploit the characteristics of the CPU-CACHE-RAM interaction. It is laborious, to be sure, but C/C++ allows for that level of control. So it all boils down to what one is implementing. If I were implementing a desktop application, I would probably use Java. The performance demands related to memory management are typically not very great and Java's simpler memory management paradigm streamlines the development of such applications (not to mention the possibility at least of having one implementation that runs on multiple platforms). If I were implementing a high volume data management platform, I would use C++ because the fine grain control of memory management provides me the necessary tools to optimize the data-intensive performance. Re:Might actually be the case (Score:4, Insightful) "The compilers, for the most part, are smarter than people at optimizing code." No, they emphatically are not. No computer algorithm is any smarter than the people that wrote it (in fact it's always going to be dumber.) If the compiler is better than YOU are at optimizing code, that may well be true and understandable - presumably optimising assembler is not your specialty, after all. But a competent assembler specialist (someone in the same league, skillwise, with the guys that write the compiler) will beat the hell out of any compiler ever made. There just is no question. He knows every technique the compiler knows, but he is better equipped to know when and where to use them. Compilers serve many valid purposes. They allow less skilled programmers to still produce a usable product. They allow more skilled programmers to produce a usable product more quickly. They facilitate portability. Plenty of good reasons for them to exist and be used. But beating a competent human at optimisation tasks is not one of them. "Java has the added advantage that it uses Just-In-Time compiling, so there's a lot of cases where Java, or .Net or any other language that uses an intermediate byte-code and actually outperform C." I know a lot of people think this, but it's nonsense, as a moments reflection should make clear. I have no doubt that a poor coder might find JIT improves the performance of his code, but that really doesnt justify the assertion. You would need to show that JIT can actually beat a well written C program, and it wont. It cannot. 
Re:Might actually be the case (Score:4, Insightful)
"The compilers, for the most part, are smarter than people at optimizing code."
No, they emphatically are not. No computer algorithm is any smarter than the people that wrote it (in fact it's always going to be dumber). If the compiler is better than YOU are at optimizing code, that may well be true and understandable - presumably optimising assembler is not your specialty, after all. But a competent assembler specialist (someone in the same league, skillwise, with the guys that write the compiler) will beat the hell out of any compiler ever made. There just is no question. He knows every technique the compiler knows, but he is better equipped to know when and where to use them. Compilers serve many valid purposes. They allow less skilled programmers to still produce a usable product. They allow more skilled programmers to produce a usable product more quickly. They facilitate portability. Plenty of good reasons for them to exist and be used. But beating a competent human at optimisation tasks is not one of them.
"Java has the added advantage that it uses Just-In-Time compiling, so there's a lot of cases where Java, or .Net or any other language that uses an intermediate byte-code can actually outperform C."
I know a lot of people think this, but it's nonsense, as a moment's reflection should make clear. I have no doubt that a poor coder might find JIT improves the performance of his code, but that really doesn't justify the assertion. You would need to show that JIT can actually beat a well written C program, and it won't. It cannot. Absolute worst case, if he has to, the C coder could simply implement a VM and JIT in his program and achieve the same results - and that is a tie. C cannot possibly lose that comparison, the worst it could possibly do is tie.

Re: (Score:3)
Re: (Score:3)
There is almost no reason to write code in assembly anymore, other than "because we can", which is a fine reason for a "fun" project. However I wouldn't write assembly if I was trying to run a business. Java has the added advantage that it uses Just-In-Time compiling, so there's a lot of cases where Java, or .Net or any other language that uses an intermediate byte-code can actually outperform C.
I keep hearing the arguments but I don't see the fruits. I expect Java and .Net apps to run slow and take up irrational amounts of memory for what they do and I am rarely disappointed. I expect the handful of ASM function libraries we use to significantly outperform C code and they always do. If these high level languages are so great where are the commensurate outcomes in real life? Where are the cutting edge Java and .Net games? Browsers? Operating systems? In cases where the developer's time is worth

Re:Might actually be the case (Score:5, Interesting)
Back when I was trying to write games, 20 years ago, I figured out pretty quickly to write the important parts in assembly and the rest in C. But not before I wrote a full screen graphics editor in assembly. That was about 1200 lines of awesomeness that took me about 7 months to write. Fortunately, most of the graphic work carried over to the main game itself. Recently, I did a recreation of that work in C#. What took me over 2 years to do in 1994-95 took me a weekend to do now. My how times have changed.

Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Insightful)
You jest, but wikipedia says it will boot in five seconds on a 90 MHz Pentium FROM A FLOPPY. Is that fast enough for you, javaboy?

Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Informative)
Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Insightful)
Yeah, outside a few rather narrow cases, modern CPUs have just gotten too complicated to write efficient assembly for.
That says more about lousy CPU architecture design with bloat and incredible inefficiencies than it says about software developers who can write software for those CPUs. Then again, look up the RISC vs. CISC debates about CPU design and you might be surprised about CPU complexity as well. Otherwise, most of the CPU complexity that currently shows up is due to the fact that the CPU speed far outstrips the memory bus speed, thus all of the concern about "local" memory caches and pipelined instruction ordering. If you could create a much faster memory bus, CPU designs could be simplified considerably from a software developer's perspective. That is another reason for much of the added complexity in CPU architecture design, as few if any CPU designers want to abandon the software developed for earlier generations of CPUs as a way to promote their new design. Instead, they tinker on the edges and keep piling new instructions onto the existing heap of instructions and keep making the CPU more and more complex as a new generation of designers is in charge.

Re: (Score)
Well, you have a crappy laptop then.
Most laptops out there have a CPU with an instruction set such that assembler-language code for the Intel 8080 could be mechanically translated to assembler-language code for that instruction set (although the machine codes were not compatible), although the instruction set in current CPUs has been at least extended significantly to a 32-bit version and, in most cases, to a 64-bit version. (I.e., you've confused the 4-bit 4004 with the 8-bit 8080 and you've confused "s

Re: (Score:3)
It was so bad that I had a 70 year old former transmission systems electrical engineer (so nothing below thousands of volts) rant at me for hours about how bad it was when it came out. When a guy who graduated from University before the transistor went on sale can see a dozen holes in microprocessor design then you've got a problem. I really should retire the last NetBurst machine I've got but it has a lot of drive bays and FreeBSD can still do stuff with it.

Efficient assembly is still quite doable ... (Score:4, Interesting)
That said, I rarely use assembly in professional software development situations when targeting computers and mobile devices. Before addressing your comment let me add one more thing. Learning assembly is important with respect to making you a better C programmer. I'm not talking about understanding the asm code you see in the debugger. I really am referring to writing C code. C code can be written in architecturally specific manners. The code is still portable, yet more efficient only on a specific architecture. Understanding the architecture and assembly language can allow you to write more efficient C code.
Yeah, outside a few rather narrow cases, ...
That is nothing new, this has been true for decades. You have to go back to Apple II and Commodore 64 days to find a timeframe where assembly was appropriate for general purpose software development. I'm referring to computers, not micro controllers and other such environments.
... modern CPUs have just gotten too complicated to write efficient assembly for.
This is absolutely false. The key to writing good assembly code is not trying to out-compile the compiler, that is a newbie thing to do. The key is to leverage knowledge that cannot be transmitted to the compiler, cannot be expressed in C. This is where the real win is, this is where assembly can still beat C even after a couple of architectural upgrades.

Re: (Score:3)
I'm not sure what level of "architecture" you are talking about. An assembly language usually reveals relatively little about the architecture underlying the CPUs that execute the code generated by the assembler.
That is a very modern x86-centric view. Even so we still have an "instruction set architecture" (ISA) in the modern x86 case. This ISA limits the underlying hardware architecture's (micro-ops) view of the software's intent. We still have the case where optimizing at the ISA level can also improve performance at the micro-op level. Furthermore understanding the underlying micro-op architecture can help to write more efficient code at the ISA or C level. This level is not documented but a little is known abou

Re:Assembly == SLOW ; JAVA == FAST! (Score:4, Funny)
So, what you're saying is that the C compiler is a better Assembly coder than you are. I feel your pain on that one.

Re: (Score:3, Insightful)
If a compiler can't produce better assembly than any one programmer, it's time to get a new compiler. I mean, that's pretty much the point of upper level languages to begin with.
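The "architecturally specific yet portable C" idea from the comment above can be made concrete. A hedged sketch in C (the function names are hypothetical; the aliasing promise is the architecture-aware part):

#include <stddef.h>

/* Plain form: the compiler must assume dst and src may alias,
 * which blocks vectorization on many targets. */
void scale(float *dst, const float *src, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Informed form: 'restrict' promises no aliasing, so the compiler
 * is free to emit SIMD loads and stores. Still portable C99; it is
 * simply written by someone who knows what the target hardware can do. */
void scale_fast(float *restrict dst, const float *restrict src,
                float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

Both functions are portable; the second merely hands the compiler a fact a hand-coding assembly programmer would already exploit, which is exactly the kind of knowledge transfer the commenter is describing.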
Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Insightful)
If a compiler can't produce better assembly than any one programmer, it's time to get a new compiler. I mean, that's pretty much the point of upper level languages to begin with.
Hardly. The point of high level languages is to improve the productivity of software developers with the full knowledge and recognition that compilers will always do an inferior job... trading off efficiency and memory space for increased throughput of the developer. That some compiler developers using some languages (notably C) can usually produce software that is within a few single digit percentages of the efficiency of hand-crafted assembly code written by competent and knowledgeable programmers well versed in those respective languages (an important distinction... you need to compare expert C programmers with expert assembly programmers) for a few benchmark algorithms is beside the point. Grace Hopper, one of the developers of COBOL, created the programming language for some very different goals than what you mention. Portability (the ability to move from one CPU architecture/operating system to another) was a key component and rationale. The other was to use "understandable" words that supposedly non-programmers could at least in theory be able to read. It is debatable if COBOL actually met those goals. Another very significant goal is to increase the volume of software produced by the individual software developer (mentioned above), and perhaps finally the point of a high level language is to reduce the learning curve for somebody new to computers and software development to be able to become the expert that is needed to get software developed. I would agree that some compilers need to be tossed out the window in terms of their code efficiency, but you might be surprised at which compilers really can make the cut or not as well.

Re:Assembly == SLOW ; JAVA == FAST! (Score:4, Interesting)
Re: (Score:3)
And, under ideal circumstances . . . its gonna be hard to beat Assembler. (Slower to market . . . perhaps . . . but faster for you the next 20 years as you run it.)
You may be right, but I wouldn't bet on that. First of all, it's not unheard of for a CPU architecture to die in 20 years, but that aside: It would be interesting to take an x86 assembler program written 20 years ago and run it on modern hardware, then perform the same experiment with a C program recompiled with a modern compiler. The 20 year o

Re: (Score:3)
That is a terrible example. Besides, if you want to do that in something like Visual BASIC it would be more like: MsgBox("Hello, World!") or something similar. Like I said, it would need to be seriously updated if you are going to be using current compilers. Indeed most versions of BASIC that I've seen that aren't historic compilers, and hence have also been optimized to be running on the current generation of computers, don't even use the PRINT command. As I said, it would need some substantial tweaking if you

Re:Assembly == SLOW ; JAVA == FAST! (Score:5, Informative)
So, what you're saying is that the C compiler is a better Assembly coder than you are. I feel your pain on that one.
Indeed. I spent 5 years supporting a production commercial OS written entirely in assembly (one of many forks that happened when IBM started licensing the source for their old mainframe OS). Today I let a C compiler do its job on my personal projects. Can you write faster code than the compiler - sure you can, though it requires a deep understanding.
But that code will be crap unmaintainable code. There was a day when C was called a high level language, and in a meaningful way it still is. You can write good maintainable C code that doesn't look optimized and get nearly-perfect assembly that bears little resemblance to the source. The worst choice in C is to think you need to help the compiler optimize. Seriously, the compiler doesn't care at all whether you write x = x << 1; x += x; or x *= 2; it sees them all the same, so code the one that makes sense in context.

Giving the compiler hints can be useful ... (Score:3)
The worst choice in C is to think you need to help the compiler optimize. Seriously, the compiler doesn't care at all whether you write x = x << 1; x += x; or x *= 2; it sees them all the same, so code the one that makes sense in context.
Historically helping the compiler, giving it hints, is in fact a good way to get superior code out of a compiler. For example consider 4x4 matrix multiplication. Do you use nested loops or just unroll it manually? Compilers tend not to fully unroll all the nested loops. The compiler may do better scheduling on fully unrolled non-looping code. Do you create temporary variables to preload a row or column, or do you just access each variable in memory directly? The former may generate better code on a RISC arc

Re:Assembly == SLOW ; JAVA == FAST! (Score:4, Interesting)
You can also do this with DOS & Windows 3.1 on a modern CPU: an i7 has 8MB of cache, which is more than most PCs c. 1995 had in their entirety.

Re:Assembly == SLOW ; JAVA == FAST! (Score:4, Informative)
Can fit in cache != will be in cache. On a modern multi-GB system the memory paging index alone is going to dwarf the size of this OS's code. Then there's all the rest of the OS data, plus the even more frequently used application code and data. Certainly shrinking the OS code size drastically will help free up more cache space for other uses, and the most heavily used parts may be able to remain in the cache most of the time, but it's almost a guarantee that most of the time most of the OS code won't be in the cache.

Re: (Score:3, Insightful)
It's the same way that your Bicycle is faster than your Jet Fighter Plane that is held together by duct tape and you've 'fueled up' with whipped cream.

Re: (Score:3)
If your C is faster than your Assembly, that's because your Assembly is crap.
You are kidding yourself if you think you can write programs in assembler that run faster than the equivalent in C. Look at what my compiler generates for the C statement "return x/372;" with 64 bit ints on x64:
movslq %edi, %rax
imulq $738919105, %rax, %rax
movq %rax, %rcx
shrq $63, %rcx
sarq $38, %rax
addl %ecx, %eax
popq %rbp
ret
Your only practical approach as a human writing this in assembler is to use the slower 64 bit divide instruction. Puzzling out optimizations like this is a job muc

That's not the most important thing (Score:5, Funny)
So before 1.0 we need to improve the existing code and make sure everything is working fine. ... The main thing for 1.0 is to have all application groups available
Nah, main thing is to find a working floppy disk and drive.

Re:That's not the most important thing (Score:5, Informative)
Re: (Score:2)
Apparently my company still stocks floppy disks. There's still a bunch in the supply cupboard, even though you can search all the cubes on this floor and not find a single drive. Someone really needs to update the order inventory.
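A few comments up, the 4x4 matrix multiplication example was left as prose. A sketch of both styles in C (illustrative only, not the commenter's actual code):

/* Plain nested loops: some compilers fully unroll this, some don't. */
void mat4_mul(float c[4][4], const float a[4][4], const float b[4][4]) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[i][k] * b[k][j];
            c[i][j] = s;
        }
}

/* Hinted form: preload row i of 'a' into locals so the compiler can
 * keep it in registers, and unroll the inner accumulation by hand --
 * the row-preloading temporaries the comment describes. */
void mat4_mul_hinted(float c[4][4], const float a[4][4], const float b[4][4]) {
    for (int i = 0; i < 4; i++) {
        const float a0 = a[i][0], a1 = a[i][1], a2 = a[i][2], a3 = a[i][3];
        for (int j = 0; j < 4; j++)
            c[i][j] = a0 * b[0][j] + a1 * b[1][j]
                    + a2 * b[2][j] + a3 * b[3][j];
    }
}

Whether the hinted version actually wins depends on the compiler and target, which is the commenter's point: the payoff was largest on older compilers and register-rich RISC machines.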
Re: (Score:2)
here ya go... [newegg.com] [amazon.com]

Re: (Score:2)
I keep an LS120 in my desktop PC just in case. Guess how many times I've used it. I did use the Sony USB floppy once. So I consider that two dollars well spent at a yard sale

Choose your own adventure, drinkypoo (Score:5, Funny)
"Help! Help!" she cried. Most of them just walked by, ignoring her. She might be a jaw-droppingly gorgeous 36-24-36 redhead with pouty lips in a light dress, but she was clearly crazy, waving that weird plastic thing around. "You!" she urgently growled, as she seized a suited man's lapels. "Can you help me read this? It's the only copy!" This looked like a successful man, so surely he could afford a decent computer, right? He blushed. He wanted her, and maybe could use it to get her back to his place, but the lie would eventually come out. And this woman had just the sort of sincere bearing that he knew the lie would be punished, not accepted. He sighed. He glanced at the disc again, just to make sure what he was seeing, and sighed. "No. Sorry. What is that, anyway?" She let go, and cast her glance around the crowd. And that's when it happened: Drinkypoo. The moment she saw him, she knew. This was the one. She didn't even have to ask, because she knew. But she decided to ask anyway, so that he could have the pleasure of saying yes and offering to help. It would be the first of many pleasures that he would experience. "Can you read it? There are some really important files from the 1990s on here. I've got to have them. I'll do anything, Drinkypoo. Anything." There was a time when Drinkypoo would have grinned nervously, or asked followup questions. But by now, he knew the story. He didn't want to hear the story anymore; he just wanted the reward. Drinkypoo didn't so much as even glance at the disk. Why so many unbelievably hot women always had invested in LS120s and floppies to store their critical records, he didn't know. Maybe some freak effect of a one-shot Imation advertising campaign, long ago. No one even really knew why, for sure; they just knew that a shitload of the drives had somehow ended up in the possession of that demographic. He squinted and recalled his depleted inventory. That blonde with the 1.4MB floppy from two nights ago. "Got to stop at the drugstore on the way. C'mon," he replied. She stuffed the disc into her purse for safekeeping and they walked to his car. Half an hour later, at his place, she sat on the couch, nervously clutching her purse containing the precious disc. Drinkypoo sauntered over, and held out the glass of rye bourbon on ice. "Here, relax. Everything's going to be okay." She sighed with relief, set her purse upon the coffee table, accepted the glass and sipped. "Thanks." "You can't imagine what this means to me," (though Drinkypoo actually knew very well), "I need those files so badly." She reached for his belt buckle for a brief moment, but paused. "First things first, though, I suppose." Drinkypoo shrugged. What did it matter to him? It would all work out the same. This was going to be fun no matter which order things occurred. "Sure." He walked over to the computer desk and nudged the mouse, waking it up from sleep, casting monitor light across the room. "Let's have the disc." She smiled, and reached into her purse, pulling out a .."
[CHOOSE YOUR OWN ADVENTURE: 1) .. gun. 2) .. Zip or Jaz disc. 3) .. 3.5" floppy or LS120 disc. (Drinkypoo, it will cost you 0.01 BTC for me to write this version.) ]

Cool! (Score:2)
Re: (Score:3)
Likewise..
and it illustrates just how bloated "modern" OSes have become. Then again, the generally accepted definition of "OS" seems to have changed to include things such as desktop and window managers.

Re:Cool! (Score:4, Insightful)
People want their operating system to let them operate their computer. Conveniently. You can certainly still get command-line-only linux distros if you care, and you're the kind of technology specialist who might get utility out of that.

Re:Cool! (Score:5, Insightful)
Re: (Score:2)
People have gotten so used to the idea of a computer being a general purpose machine they've forgotten that you could just run a kernel and application set that requires no human interaction software at all. For

Re: (Score:2)
Thank you for a better explanation of what I was alluding to.

Re: (Score:2)
Without basic tools to do so, how are you going to get software on your Operating System? Could we automatically boot to floppy like an Apple ][? What do you even want?

Re: (Score:2)
And an ever-growing array of utility software. What OS is complete without a media player, web browser and Minesweeper?

Lame (Score:3)
Real Programmers write assembly language operating systems on punch cards.

Re: (Score:2)
It's been a while, but I thought we wrote Fortran on punch cards. It's possible my memory is going.

Re: (Score:2)
Re: (Score:3)
Real programmers engineer trees to grow punch cards with self-evolving code already punched on them. I suppose this means that, yes, God is a real programmer. I think that makes perfect sense. The source code for everything is perfectly available (MIT license), but there's no documentation and there are no comments in the code at all. In particular, the build environment is completely left as an exercise for the user. The original developer has been so silent for so long he clearly considers the code "ma

Re: (Score:2)
Re: (Score:2)
Hexadecimal?! Luxury! We had to code ones and zeros, on clay tablets.

Pointless? (Score:2, Insightful)
I wonder ... (Score:2)
Re: (Score:2)
I wouldn't consider comparing a manually written routine to what GCC outputs cheating. If you have an optimizing compiler available to you, why not learn its tricks so that you can write better code yourself?

Re: (Score:2)
I have no knowledge, but I'd imagine the optimizations from a compiler would have a tell-tale signature.

Changing definition of Kernel (Score:3)
If we are starting to believe that the core of an operating system should include a full GUI, video and mp3 playback, audio, USB, network, etc. for the least possible battery use, then this is a really cool way to go. Why waste the resources? Just 'cause we can? If we are to rethink what a basic operating system of today ought to have right out of the box from the first nanosecond, then I'm sure there is a lot of reengineering that would happen to any Linux or Windows kernel.

Re: (Score:2)
Touché

so.... (Score:2)
i.e., from the sounds of it, it is not multi-user, and everything runs with superuser privileges. It is written entirely in assembly language which adds another level of complexity for the programmer to deal with. Whilst it sounds like an

Re: (Score:2)
13 Years to 1.0? (Score:3, Funny)
So obviously their development cycle is much faster than HURD!

NOT OPEN SOURCE (Score:5, Funny)
"MenuetOS is an open source"
Not the x64 version, which is the version that's actually worth a shit.

Why do it? (Score:2)
Because assembly programming is really fun.
It's challenging and it's a pretty unique experience for most of us who rarely touch systems at that low a level.

Re: (Score:3)
Well, technically we all "touch" systems at this level, we just don't realize we are doing it. Learning/Using Assembly is like learning/using arithmetic instead of using a calculator. It is very handy and gives you a core appreciation for what is happening in complex problems. However, most professionals just plug it into a computer rather than do it because it becomes too cumbersome at a certain level.

Re: (Score:3)
Well, "technically" I dabble in electrical engineering every time I flip on a light switch. I think my point was pretty clear.

What a... (Score:2, Funny)
What a senseless waste of human life.

Assembly has very little advantage any more (Score:5, Insightful)
As someone who writes bootloaders using both C and assembly language, there really is very little advantage to using assembly any more. The C compiler generates very good assembly code at this point that is very compact if the right parameters are used. At this point it is difficult to exceed what the compiler does in terms of code density and it's a hell of a lot easier and faster to maintain C code than assembly. In my last bootloader I had to fit an MMC/SD bootloader in under 8K. In that space all of the assembly code fits in the first sector along with the partition table. The assembly code sets up the stack and does some basic CPU configuration and contains the serial port routines just because I had plenty of space. The rest of the bootloader contains all of the SD/MMC driver, FAT16/32 support, CRC32 and more. Note that this is MIPS64 code. The bootloader is able to load the next stage bootloader from a file off of a bootable partition from the root directory, validate it, load a failsafe bootloader if the validation fails and launch the next bootloader, all in under 8K. Having disassembled the output using objdump, the compiled code is often better than hand-coded assembly since the compiler can often find a smaller sequence of instructions. Not only that, but the compiler can order the instructions better for performance since it knows the CPU pipeline quite well. You don't need to write in assembly for something to be small, just don't throw in a bunch of unneeded crap. -Aaron

Re:Gotta ask ! (Score:5, Insightful)
* It's fun? * They can? * They want to? * To learn something? I'm sure there are a few more. Does anyone have not-boring questions?

Re:Gotta ask ! (Score:5, Funny)
Masochism?

Re:Gotta ask ! (Score:5, Insightful)
*Because someone should remember how to do this?

Re: (Score:2)
Re: Gotta ask ! (Score:2)
And security, who is going to write malware for that? Iran needs this controlling the centrifuges.

Re:Gotta ask ! (Score:5, Funny)
Because * It's fun? * They can? * They want to? * To learn something? I'm sure there are a few more. Does anyone have not-boring questions?
* he's following in the great Finnish tradition of writing your own OS... :-)

Re:Gotta ask ! (Score:5, Funny)
Neither. That's why we write things in assembly.

Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
There's already a pretty big market of embedded devices running Linux or FreeDOS... and who's to say that another competitor couldn't offer yet another, better option?
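The bootloader comment above mentions fitting CRC32 validation into an 8K budget. As an illustration of how little space that costs in C, here is a bit-at-a-time CRC32 (an assumed implementation choice for illustration; the poster's real code is not shown):

#include <stdint.h>
#include <stddef.h>

/* Bit-at-a-time CRC32 (IEEE polynomial, reflected form 0xEDB88320).
 * No lookup table, so it costs almost nothing of an 8K code budget;
 * slower than a table-driven CRC, which rarely matters at boot time. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

Compiled with size optimization, a routine like this is a few dozen instructions, which supports the comment's point about compilers and code density.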
Re: (Score:3)
Re: (Score:2)
They better get it perfect the first time 'cause this thing will be impossible to maintain for anybody else and possibly for the original developers too. Forget it, there is no practical use for it, it's just a hobby (not that that's a bad thing).

Re: (Score:2)
The issue is though that powerful CPUs are getting really cheap. Devices that don't need such power are finding themselves with embedded GHz-class processors because that's the lowest end available that has sufficient power. There's a gap betw

Re:Gotta ask ! (Score:5, Insightful)
Why would someone want to do a rewrite of Minix 20-some-odd years ago?

Re: (Score:2)
Re:Gotta ask ! (Score:5, Interesting)
Why? Well, I'm not well versed with the MenuetOS, but I've written a few toy OSs; Of particular interest to me is 16-bit x86, which retains memory segmentation, and via other modes of operation (such as Unreal Mode, enabled via address line 20) one can escape the 640K and/or 1MB limit whilst retaining other hardware features. Why go low level? Because programming paradigms are platform dependent. C is a product of its von Neumann environment, and the dialog back and forth between hardware and software designers over the years has shaped our current methodology and hardware feature-set -- sometimes for the worse, IMO. E.g.: When we sacrificed a whole register and its memory segmentation capabilities to the RAM gods we made position-independent code far more inefficient, and things like coroutines, stack smashing protections, multi-dimensional V-Tables for stateful OOP (e.g., methods that change functionality depending on an object's "state" field), separate stacks for code pointers and parameter data, heap code pointer isolation, etc. more difficult (or near impossible) to accomplish. Instead of ASM I use a low level compilable and interpretable bytecode language for writing my OSs, and have multiple functional C compilers implemented atop the byte-code language in order to utilize existing hardware drivers. However, I frequently dive into the machine specific ASM in order to implement and explore new features. I started doing hobby OS work for fun, but after reading Ken Thompson's "Reflections on Trusting Trust" ACM Turing Award lecture I decided that it would be a worthwhile project to create and maintain a few machines isolated from the rest of the world which are bootstrapped and programmed entirely by me. I now use them to write my memoirs and for teaching children how to code and build custom hardware and robotics -- The parallel port is very simple to utilize without a massive OS like GNU/Linux in the way... The better question is why would anyone attempt to implement a POSIX OS today? We have working implementations. That feature set is proven. If we want to make advancements in OSs and hardware architectures we'll have to try doing things in different ways. Some of my OSs are cross platform; In one, all programs are compiled down to the bytecode form by the VM's ASM or C compilers. The OS itself translates byte-code into machine code ON INSTALLATION, and gives the option to instead interpret a program if it's untrusted (or in development, or self modifying) to contain it in an actual sandbox. On 32 bit x86 with more than one bit worth of execution privileges I'm able to enforce at the hardware level something akin to kernel mode at the application level for applications to isolate themselves from possibly untrusted plugins, and allow transparent lazy linking with emulated bytecode modules and optional JIT translation.
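The translate-on-installation, interpret-if-untrusted design just described rests on a bytecode dispatch loop. A toy C sketch of such an interpreter (entirely illustrative; the commenter's VM and opcode set are not public):

#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

/* A stack machine small enough to audit by eye -- the property that
 * matters when bootstrapping a machine you trust end to end. */
void run(const int32_t *code) {
    int32_t stack[64];
    int sp = 0;
    for (size_t pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    const int32_t prog[] = { OP_PUSH, 6, OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT };
    run(prog);   /* prints 42 */
    return 0;
}

An installer in this scheme would walk the same opcode stream and emit native instructions instead of executing them; untrusted programs stay inside the loop above, which is the sandbox.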
For me the aim isn't to replace GNU/Linux or any mainstream OS. The aim is to explore unexplored problem spaces with the hope that any useful techniques may find their way into mainstream usage. Consider Google's NACL and LLVM. I'm not saying experimental OS projects which pre-date them are responsible for their current capability to compile C into cross platform bytecode and compile it into machine code, but us hobby OS folks HAVE been doing just that for longer than their projects have existed. Our experimental niche OS endeavours occasionally blaze the trail towards new features in mainstream hardware and software. E.g.: I'm hoping for code pointer isolation and buffer overrun prevention via dual stack architecture, and for the ARM folks to give us more than 1 bit of privilege level so that secure designs other than Monolithic Kernels can be implemented... To the other commenter above who thinks "decades-of-practice" will allow optimizing compilers to utilize hardware features that are forbidden or don't exist: Ugh, no. I have decades of experience writing ASM and can write or generate much smaller & more efficient position inde

Re: (Score:2, Insightful)
Compilers can never optimise better than a human; if you disagree you haven't witnessed the demoscene.

Re:Gotta ask ! (Score:5, Insightful)
Compilers can never optimize better than the *best* humans, operating without time constraints. Very few programmers have that level of skill, or the time to spend on the task. That's why optimizing compilers were invented.

Re:Gotta ask ! (Score:5, Interesting)
Re:Gotta ask ! (Score:5, Interesting)
Back when I first started, compilers were pretty stupid and basically did a 1-to-1 translation of source code to sequences of assembly language. It didn't take much to do better simply coding assembler by hand. Somewhere about the mid 1980s that stopped being true. The IBM mainframe Pascal/VS compiler generated heavily-optimized code. It was sufficiently clever in its use of machine resources that it was on par with hand optimization. The clincher, however, was that it could do this job in seconds, whereas hand-coding would take hours. And if you made significant changes to the algorithms you were coding, it would re-optimize each time. It would have been a total rewrite to get code that tight by hand. And even though we didn't have the insane time constraints that modern-day projects typically have, we didn't have enough time to make it worth doing that even to save expensive mainframe CPU cycles.

Re: (Score:3)
I have to agree on that. While I always thought ASM is great, its greatness isn't universal. At any rate, *this* language will be better than *that* language for *this* specific task/project. e.g. industrial robot programming, where you need to obtain maximum speed and precision fit within a minimum output size (e.g. 4 KB of storage, be it ROM, EEPROM, whatever) - that's where ASM shines. On the other hand, if you want to build an operating system, ASM will be better in less than 10% of the entirety of the

Re: (Score:2)
That is NOT why optimizing compilers were invented.

Re: (Score:3)
Yes they can. Try making use of a CPU with 8 cores and writing your threaded application in assembly. Someone with a compiler in a higher level language will kick your ass back and forth. The simpler the CPU, the more truth your argument holds, but CPUs only get more complex. Nope! Look at how difficult game programmers found it to program for the PS3 Cell architecture. Try having them write an entire game in assembly. Ha!
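The multi-core jab lands because even the portable C version of a threaded reduction needs real scaffolding, all of which would become hand-managed registers and calling conventions in assembly. A minimal pthreads sketch (illustrative; assumes n is comfortably larger than the thread count):

#include <pthread.h>

#define NTHREADS 8

struct job { const long *data; long count; long sum; };

static void *partial_sum(void *arg) {
    struct job *j = arg;
    long s = 0;
    for (long i = 0; i < j->count; i++)
        s += j->data[i];
    j->sum = s;
    return NULL;
}

/* Split the array across NTHREADS workers and combine the partial
 * sums. Every line of this bookkeeping is the compiler's problem in
 * C and the programmer's problem in assembly. */
long parallel_sum(const long *data, long n) {
    pthread_t tid[NTHREADS];
    struct job jobs[NTHREADS];
    long chunk = n / NTHREADS, total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        jobs[t].data = data + t * chunk;
        jobs[t].count = (t == NTHREADS - 1) ? n - t * chunk : chunk;
        pthread_create(&tid[t], NULL, partial_sum, &jobs[t]);
    }
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += jobs[t].sum;
    }
    return total;
}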
Re: (Score:2)
And he's probably right, assuming he's a reasonably skilled coder. It's the difference between a factory-made mass-produced "good enough for most uses" product and an artisan-made hand-crafted "best of the best" product.

Re: (Score:2)
So they're easier to carry, there is no good reason and because you have no choice in the matter, respectively.

Re: (Score:3)
cc -x c - <<'X' && ls -l a.out
#include <stdio.h>
int main(void) { printf("hello world\n"); }
X
-rwxr-xr-x 1 fisted users 7318 Nov 15 17:25 a.out*

Re: (Score:2)
Now you've got to spend days trying to find a disk drive. The researchers plan to add networking capability within the next 13 years, once they decide it's not just a temporary fad.

Re: (Score:2)
But did it support USB, TV tuners, and webcams?

Re:I can beat that (Score:5, Funny)
But did it support USB, TV tuners, and webcams?
Yes it did! GEOS supported all the existing USB TV tuners and webcams of its time.

Re: (Score:2)
I never said it wasn't an OS, I was just pointing out one of the things that would make this more of an accomplishment.

Re: (Score:3)
A lot of the bloat in modern OSes comes from having to support a wide range of hardware - it's one of the reasons Linux can scale down to run on a tiny embedded system if you strip out that hardware support and other unneeded features (such as a fancy UI). You might even find that it doesn't scale well onto higher-end systems.

Re: (Score:2)
Embarrassing the creators of all the OSs that take five minutes to reach desktop.

Re: (Score:2)
Assembler really isn't that hard. My first three years as a programmer were assembly language only for embedded applications. It's like anything else -- you memorize stuff and after a while you see every problem as a list of assembly instructions. Later, using C seemed like cheating, in a way.

Re: (Score:3)
Against the license [menuetos.net]:
https://tech.slashdot.org/story/13/11/15/1446258/menuetos-an-os-written-entirely-in-assembly-language-inches-towards-10?sbsrc=md
CC-MAIN-2016-50
refinedweb
7,127
63.59
-- | Saving and restoring games and player diaries.
module Game.LambdaHack.Save
  ( saveGameFile, restoreGame, rmBkpSaveDiary, saveGameBkp
  ) where

import System.Directory
import System.FilePath
import qualified Control.Exception as E hiding (handle)
import Control.Monad
import Control.Concurrent
import System.IO.Unsafe (unsafePerformIO)  -- horrors

import Game.LambdaHack.Utils.File
import Game.LambdaHack.State
import qualified Game.LambdaHack.Config as Config

-- | Name of the save game.
saveFile :: Config.CP -> IO FilePath
saveFile config = Config.getFile config "files" "saveFile"

-- | Name of the backup of the save game.
bkpFile :: Config.CP -> IO FilePath
bkpFile config = do
  sfile <- saveFile config
  return $ sfile ++ ".bkp"

-- | Name of the persistent player diary.
diaryFile :: Config.CP -> IO FilePath
diaryFile config = Config.getFile config "files" "diaryFile"

-- | Save a simple serialized version of the current player diary.
saveDiary :: State -> Diary -> IO ()
saveDiary state diary = do
  dfile <- diaryFile (sconfig state)
  encodeEOF dfile diary

saveLock :: MVar ()
saveLock = unsafePerformIO newEmptyMVar

-- | Save a simple serialized version of the current state.
-- Protected by a lock to avoid corrupting the file.
saveGameFile :: State -> IO ()
saveGameFile state = do
  putMVar saveLock ()
  sfile <- saveFile (sconfig state)
  encodeEOF sfile state
  takeMVar saveLock

-- | Try to create a directory. Hide errors due to,
-- e.g., insufficient permissions, because the game can run
-- in the current directory just as well.
tryCreateDir :: FilePath -> IO ()
tryCreateDir dir =
  E.catch (createDirectory dir)
          (\ e -> case e :: E.IOException of
                    _ -> return ())

-- TODO: perhaps take the target "scores" file name from config.
-- TODO: perhaps source and "config", too, to be able to change all
-- in one place.
-- | Try to copy over data files. Hide errors due to,
-- e.g., insufficient permissions, because the game can run
-- without data files just as well.
tryCopyDataFiles :: (FilePath -> IO FilePath) -> FilePath -> IO ()
tryCopyDataFiles pathsDataFile dirNew = do
  configFile <- pathsDataFile "config.default"
  scoresFile <- pathsDataFile "scores"
  let configNew = combine dirNew "config"
      scoresNew = combine dirNew "scores"
  E.catch (copyFile configFile configNew >> copyFile scoresFile scoresNew)
          (\ e -> case e :: E.IOException of
                    _ -> return ())

-- | Restore a saved game, if it exists. Initialize directory structure,
-- if needed.
restoreGame :: (FilePath -> IO FilePath) -> Config.CP -> String
            -> IO (Either (State, Diary) (String, Diary))
restoreGame pathsDataFile config title = do
  appData <- Config.appDataDir
  ab <- doesDirectoryExist appData
  -- If the directory can't be created, the current directory will be used.
  unless ab $ do
    tryCreateDir appData
    -- Possibly copy over data files. No problem if it fails.
    tryCopyDataFiles pathsDataFile appData
  -- If the diary file does not exist, create an empty diary.
  -- TODO: when diary gets corrupted, start a new one, too.
  diary <- do
    dfile <- diaryFile config
    db <- doesFileExist dfile
    if db
      then strictDecodeEOF dfile
      else defaultDiary
  -- If the savefile exists but we get IO errors, we show them,
  -- back up the savefile and move it out of the way and start a new game.
  -- If the savefile was randomly corrupted or made read-only,
  -- that should solve the problem. If the problems are more serious,
  -- the other functions will most probably also throw exceptions,
  -- this time without trying to fix it up.
  sfile <- saveFile config
  sb <- doesFileExist sfile
  if sb
    then E.catch
           (do mvBkp config
               bfile <- bkpFile config
               state <- strictDecodeEOF bfile
               return $ Left (state, diary))
           (\ e -> case e :: E.SomeException of
                     _ -> let msg = "Starting a new game, because restore failed. "
                                    ++ "The error message was: "
                                    ++ (unwords . lines) (show e)
                          in return $ Right (msg, diary))
    else return $ Right ("Welcome to " ++ title ++ "!", diary)

-- | Move the savegame file to a backup slot.
mvBkp :: Config.CP -> IO ()
mvBkp config = do
  sfile <- saveFile config
  bfile <- bkpFile config
  renameFile sfile bfile

-- | Save the diary and a backup of the save game file, in case of crashes.
-- This is only a backup, so no problem if the game is shut down
-- before saving finishes, so we don't wait on the mvar. However,
-- if a previous save is already in progress, we skip this save.
saveGameBkp :: State -> Diary -> IO ()
saveGameBkp state diary = do
  b <- tryPutMVar saveLock ()
  when b $ void $ forkIO $ do
    saveDiary state diary  -- save the diary often in case of crashes
    sfile <- saveFile (sconfig state)
    encodeEOF sfile state
    mvBkp (sconfig state)
    takeMVar saveLock

-- | Remove the backup of the savegame and save the player diary.
-- Should be called before any non-error exit from the game.
-- Sometimes the backup file does not exist and it's OK.
-- We don't bother reporting any other removal exceptions, either,
-- because the backup file is relatively unimportant.
-- We wait on the mvar, because saving the diary at game shutdown is important.
rmBkpSaveDiary :: State -> Diary -> IO ()
rmBkpSaveDiary state diary = do
  putMVar saveLock ()
  saveDiary state diary  -- save the diary often in case of crashes
  bfile <- bkpFile (sconfig state)
  bb <- doesFileExist bfile
  when bb $ removeFile bfile
  takeMVar saveLock
http://hackage.haskell.org/package/LambdaHack-0.2.1/docs/src/Game-LambdaHack-Save.html
CC-MAIN-2014-49
refinedweb
757
59.09
Learning Vue for Ionic/Angular Developers – Part 1

The introduction of Stencil and the switch to web components means that you will soon be able to use Ionic with any framework you like. Previously, we have been limited to building Ionic applications with the Angular framework. I don't use the term "limited" in any kind of negative sense; Angular is a great framework for building mobile applications, and I suspect it will remain the de facto framework for building with Ionic in the future. But… we have options now! One of those options is, of course, the popular Vue framework. Why might you want to use Vue instead of Angular to build Ionic applications? I'm not going to get into that now, but in general, I would say: if you like using Vue more than Angular. The Vue team have published a guide that compares Vue to other frameworks, and it seems to be a reasonably fair assessment. The key point of difference between Angular and Vue seems to be that Angular is more structured and opinionated, whereas Vue is more flexible and lightweight. Angular provides almost everything you need out of the box and it's hard to deviate from that; Vue requires that you fill in some of the blanks yourself but gives you more room to decide. It is also worth noting that building Ionic applications with Vue right now is going to be harder than with Angular. We are still in the very early days of the transition to web components, whereas Ionic/Angular has been stable for a long time now.

NOTE: The fact that I am writing this series does not mean that I am "switching" to Vue. I will continue posting more Angular/Ionic tutorials in the future.

Learning Ionic/Vue
Previously, when it came to learning Ionic it wasn't (and still isn't) all that important to differentiate between Ionic and Angular. If you were new to both frameworks you might just have "learned Ionic", and picked up whatever Angular knowledge you needed along the way. When building mobile applications with Ionic, you're really just mostly building Angular applications with a little Ionic "flavour". However, Ionic did change some key things about Angular – most notably the routing. Ionic ditched Angular's routing in favour of their own mobile-centric push/pop style navigation stack. For this reason, if you were to just learn how to build applications with Angular, you couldn't transition directly into building mobile applications with Ionic without changing the way you did a few things. When using Ionic with Vue, this (likely) won't be the case. There will be no special Ionic behaviour that modifies the way Vue works by default – you would use normal Vue routing and Ionic components will just be dropped in on top. It will (likely) be possible to still use the push/pop style navigation that the NavController from Ionic provides with Vue (as the NavController will also be ported to a web component), but it will (likely) be best to stick to doing things the normal "Vue way". For these reasons, I think it makes the most sense to focus on learning the basics of Vue first without considering anything specific to Ionic (but still keeping it in the context of Ionic and differences to Angular). In this tutorial series, I intend to introduce the basics of building Vue applications to those who are already familiar with using Angular.
If you are not already familiar with Angular then this introduction to Vue will likely not be that useful to you, as a lot of comparisons to Angular will be made.

Getting Started
Like Angular, Vue also provides a convenient CLI tool that you can use to easily generate a new Vue application with everything configured for you. We're going to quickly walk through installing the CLI and generating a new Vue project, and then we will briefly walk through the basic application structure. Install the Vue CLI with the following command:

npm install -g vue-cli

Create a new Vue project using the following command:

vue init webpack my-project

This will create a new project using the webpack template, which has all the configuration for webpack, a development server, and more set up by default. Once the project has been created you will need to make it your working directory:

cd my-project

and then install all of the dependencies:

npm install

Once that is done, you will be able to serve the application using:

npm run dev

Application Structure (in Brief)
In the first part of this series I want to focus mainly on the syntax differences (and similarities) between Angular and Vue, but I think it will help to first have a basic understanding of the project structure. I am going to fly through this right now; we will likely discuss it in more detail later. The structure of a Vue application is much the same as an Angular application – there is (usually) a single root component which contains all of the other components for the application (each of which may then have their own components and so on). This creates a tree-like structure of components. If you take a look at the index.html file of your generated Vue project you will see the following element:

<div id="app"></div>

this is the element that your Vue application will be set up on. If you then take a look at the src/main.js file:

import Vue from 'vue'
import App from './App'

Vue.config.productionTip = false

new Vue({
  el: '#app',
  template: '<App/>',
  components: { App }
})

You can see we are creating a new Vue application on the #app element. This has a template of <App/> which is just another component (that is being set up using the components property). Again, this is pretty much the exact way that Angular works – in this case, our root component is App. We can look at that component in src/App.vue, and it will look something like this:

<template>
  <div id="app">
    <img src="./assets/logo.png">
    <HelloWorld/>
  </div>
</template>

<script>
import HelloWorld from './components/HelloWorld'

export default {
  name: 'App',
  components: {
    HelloWorld
  }
}
</script>

There are all the ingredients you would expect to see in a component for an Angular application:
- A template to define the view
- Javascript to define application logic
- Style definitions
but when building Ionic applications in Angular, most people would have these three things in separate files, i.e.:
- some-component.html
- some-component.ts
- some-component.scss
but in this case, everything is contained within the one file. We are also importing (just like we would in Angular) another component called HelloWorld that has been added to the template for the App component.
This makes HelloWorld a child component of App, and you can find it at src/components/HelloWorld.vue:

<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <h2>Essential Links</h2>
    <ul>
      <li><a href="" target="_blank">Core Docs</a></li>
      <li><a href="" target="_blank">Forum</a></li>
      <li><a href="" target="_blank">Community Chat</a></li>
    </ul>
  </div>
</template>

<script>
export default {
  name: 'HelloWorld',
  data () {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  }
}
</script>

<style scoped>
</style>

Basically the same idea here as the root component, it's just a little more complex. There is a data() function defined which sets up a msg value that can then be used inside of the template, using the same interpolation syntax as Angular {{ }} (more on this later). Also note that the scoped attribute is added to the style block, which will scope those styles to just this component. This was just a very quick rundown of how a Vue application works, but it should give you a sense of the overall architecture and how it compares to Angular.

Angular Syntax vs Vue Syntax
Whilst Vue and Angular are clearly two very different frameworks, when you start looking into the syntax differences you will likely feel quite at home. The concepts and syntax are very similar, and it's going to make it a lot easier to slot (that's a little pun for later) into the Vue way of doing things if you already understand Angular. In this section, we are going to be looking at the similarities and differences between Vue and Angular when it comes to template syntax. A lot of Vue's template syntax was inspired by Angular (in their opinion, Angular "got it right") so you are going to notice a lot of similarities.

Data Binding
In Angular, we could bind any of our class members inside of our template; in Vue we can use any of the values defined in the data() function for the component:

export default {
  name: 'HelloWorld',
  data () {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  }
}

This will make a variable called msg available to be used in the template. In order to make use of that variable, in Angular, we would use square brackets to bind to it, like this:

<span [title]="msg">

but in Vue we need to use v-bind, like this:

<span v-bind:title="msg">

You can also use double curly braces just like you can in Angular to render some data to the template:

{{ msg }}

Conditional Rendering
The ability to easily control what is and isn't displayed in our templates based on some conditions, or looping over a set of data to create new elements, is a powerful part of Angular. Vue implements the same style of conditional rendering. In Angular, if we wanted to display a particular element based on some condition we would use the *ngIf structural directive:

<div *ngIf="condition"></div>

but in Vue, we would use v-if:

<div v-if="condition"></div>

If we want to loop over a set of data and render an element for each item in the set, we would use the *ngFor structural directive in Angular:

<div *ngFor="let article of articles">
  {{ article.title }}
</div>

but in Vue, we just need to use v-for:

<div v-for="article in articles">
  {{ article.title }}
</div>

In both cases, we can access the data stored on each item using the same interpolation syntax that Angular uses. There are other options available as well, like using if/else conditions in templates, but we're going to leave it there for now.

Event Binding
Most applications will need to listen for and handle some events – like a click on a particular element.
In Angular, you would be used to doing something like this:

<button (click)="doSomething()"></button>

and once again Vue is quite similar, except that we use v-on instead (we can also use @ as a shortcut for this):

<button @click="doSomething()"></button>

By passing in the method name, this would invoke the doSomething function on the component. We don't have any methods defined in our generated example, but it might look something like this:

export default {
  name: 'HelloWorld',
  data () {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  },
  methods: {
    doSomething() {
      this.msg = 'We did it!';
    }
  }
}

Props
In Vue, a prop is basically the same idea as @Input in Angular. It is a way to pass data from a parent component to a child component. In Angular, you would define an input like this:

@Input('myInput') myInput;

This would then allow you to bind a value to that input when using the component, for example:

<my-component [myInput]="someValue"></my-component>

In Vue, we would define props on the component for data that we want to pass in. To create a prop, we might modify the component to look like this:

export default {
  name: 'HelloWorld',
  props: ['myProp'],
  data () {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  }
}

This would then allow us to attach a myProp attribute to the HelloWorld component:

<HelloWorld myProp="hello!"/>

We could then access that value inside of the HelloWorld component; for example, we might render it out to the template with:

<h2>{{ myProp }}</h2>

which would display hello! on the screen.

Slots
The final concept I want to touch on is slots and, once again, it is analogous to a concept in Angular. In Angular, we use the concept of content projection to "project" content that is supplied to a component into the template. As a simple example, we can do the following:

<h2>Some content is going to be injected below!</h2>
<ng-content></ng-content>
<h2>Some content is going to be injected above!</h2>

If we add this to the template of a component, then any content that is supplied to that component will be injected where the <ng-content> tags are. So, if we did the following:

<my-component>
  We'll do it live!
</my-component>

The template for the component would become:

<h2>Some content is going to be injected below!</h2>
We'll do it live!
<h2>Some content is going to be injected above!</h2>

Slots in Vue behave almost identically. Instead, we just use <slot> instead of <ng-content>. If we were to add the following to the template for a component:

<slot></slot>

and then we add some content to that component:

<HelloWorld>
  Put me in the slot!
</HelloWorld>

The "Put me in the slot!" text will be injected into the template wherever <slot></slot> is.

Summary
There's always going to be a bit of a learning curve when picking up a new framework, and you'll have to spend time messing around with the basics again, but if you're already familiar with Angular then you'll have a bit of a head start with Vue. In the spirit of keeping things simple for now, not everything I have mentioned in this post is strictly best practice. It would be beneficial to familiarise yourself with this style guide. In later tutorials in this series, we will continue learning important Vue concepts and eventually build an Ionic application in Vue.
https://www.joshmorony.com/learning-vue-for-ionicangular-developers-part-1/?utm_campaign=Vue.js%20Developers&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2020-10
refinedweb
2,293
56.18
In this article, we will learn how to use routing in ReactJS. React Router helps you create and navigate between different URLs. It allows your users to move between the components of your application while preserving the user state, and it can provide unique URLs for these components to make them more shareable.

We have created 3 components like below.

- Home Component

import React from 'react'

export default function home() {
  return (
    <div>
      Home page
    </div>
  )
}

- About component

import React from 'react'

export default function about() {
  return (
    <div>
      About page
    </div>
  )
}

- Contact component

import React from 'react'

export default function contact() {
  return (
    <div>
      Contact page
    </div>
  )
}

Then, we have installed the react-router-dom npm module to implement dynamic routing, like below.

npm i react-router-dom

react-router-dom provides some routing components, and they can be used in the App component like below.

import React, { Component } from 'react';
import { BrowserRouter as Router, Route, Link, Switch } from 'react-router-dom';
import Home from './components/home';
import About from './components/about';
import Contact from './components/contact';
import './App.css';

function App() {
  return (
    <Router>
      <div>
        <ul style={{ justifyContent: 'left', paddingLeft: 0 }}>
          <li>
            <Link to="/">Home</Link>
          </li>
          <li>
            <Link to="/about">About Us</Link>
          </li>
          <li>
            <Link to="/contact">Contact Us</Link>
          </li>
        </ul>
        <div style={{ paddingLeft: '20px' }}>
          <Switch>
            <Route exact path='/' component={Home}></Route>
            <Route exact path='/about' component={About}></Route>
            <Route exact path='/contact' component={Contact}></Route>
          </Switch>
        </div>
      </div>
    </Router>
  );
}

export default App;

Here, we add the 3 page components (home, about, and contact) and also the React Router components.

Route: Route is a conditionally shown component that renders some UI when its path matches the current URL.

Link: The Link component is used to create links to different routes and implement navigation around the application. It works like an HTML anchor tag.

Switch: The Switch component is used to render only the first route that matches the location, rather than rendering all matching routes. The Switch tag makes no visible difference in our application, because none of the Link paths ever coincide. But say we had a route without exact on it; then every Route tag whose path starts with '/' would be processed (all Routes start with /). This is where we need the Switch statement, to process only the first matching one.

Exact: It is used to match the exact value with the URL. For example, exact path='/about' will only render the component if the URL exactly matches the path, but if we remove exact from the syntax, the UI will still be rendered even if the structure is like /about/10.

It gives output like below.
https://www.thecodehubs.com/routing-in-reactjs/
CC-MAIN-2022-21
refinedweb
454
59.64
How To: Call a Java EE Web Service from a .Net Client

Many organizations have server side investments in Java technologies. While they want to build a compelling UI with Microsoft's latest technologies, such as WPF and Silverlight, they still want to benefit from those existing investments instead of rewriting them. In order to do so, we have to bridge between those technologies and allow client side technologies to consume Java web services. This post is a step-by-step guide for building a Java EE Web Service, and a .Net client application that consumes it. Before we get started with this walkthrough, make sure you have the following installed on your machine:

Create a Java Web Service (Java EE, JAX-WS)

1. Create a new Web Application
In the NetBeans 6.1 IDE, choose File –> New Project. In the New Project Dialog select the Web category, and choose Web Application from the projects list. Then, click Next.
* If the web category is not available in this dialog, it means that the NetBeans version you have installed isn't the Web and Java EE package.

2. Create the Web Service
Add a new web service to the project. Right click the project node and choose New –> Web Service.
* Notice that the location of the Web Service option in the menu may differ between this image and your IDE.

In the New Web Service dialog, provide the Web Service Name, and the Package. The name of the service will affect the final URL for calling the service, and the package name will be the namespace of the service contract. Click Finish. The Web Service now appears in the project tree. To implement the service, double click the service node in the project tree (in the figure above – CalculatorService). This will open the Web Service in Design mode, which lets you graphically design your service. Change to Source View by clicking on the Source button in the upper toolbar, and this will open the CalculatorService.java file for editing. Here is a sample implementation of the service. Notice how Java Annotations are similar to .Net Attributes, especially how similar they are to the Web Services attributes we know…

package org.bursteg.calculator;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService
public class CalculatorService {

    @WebMethod
    public int Add(@WebParam(name="a") int a, @WebParam(name="b") int b) {
        return a + b;
    }
}

Deploy the web service to the web application server. From the NetBeans IDE this is done by right clicking the project node, and choosing Undeploy and Deploy.

Call the Java Web Service from a .Net Client

To call the service using the generated client side proxy, open Program.cs and use the following code:

static void Main(string[] args)
{
    CalculatorServiceClient proxy = new CalculatorServiceClient();
    int result = proxy.Add(2, 3);
    Console.WriteLine("Calculator Service returned: " + result.ToString());
}

Run the program and see that the web service is being called and the result is correct.

Conclusion
Since Java EE Web Services (JAX-WS) are standard SOAP services, they are easily interoperable from a .Net client application with only several clicks. Visual Studio generated a .Net client proxy that makes it very easy to connect and call the service. Enjoy!
http://blogs.microsoft.co.il/bursteg/2008/07/19/how-to-call-a-java-ee-web-service-from-a-net-client/
CC-MAIN-2016-07
refinedweb
540
66.84
Azure Machine Learning is a cloud-based platform that enables you to build, train, and deploy machine learning models. In this blog post, we'll show you how to use Azure Machine Learning with Visual Studio Code.

Introduction to Azure Machine Learning

Using Azure Machine Learning with Visual Studio Code is an efficient way to harness the power of the cloud to train and deploy machine learning models. VS Code provides a set of tools that make it easy to create and connect to Azure Machine Learning, and then publish models to Azure Container Instances or Azure Kubernetes Service. This article will show you how to get started with Azure Machine Learning in VS Code. We'll cover the following topics:

– Creating an Azure Machine Learning workspace
– Creating and training a machine learning model in the workspace
– Deploying the machine learning model
– Consuming the deployed machine learning model

Creating an Azure Machine Learning workspace

The first step is to create an Azure Machine Learning workspace. This can be done through the Azure portal, or using the az ml command line tool.

Creating and training a machine learning model in the workspace

Once you have a workspace, you can create and train machine learning models in it. The SDK provides a Python package that makes it easy to work with data and train models in your workspace. There are many different ways to train models in your workspace. You can use Jupyter notebooks, which are popular among data scientists, or you can use one of the many built-in algorithms provided by Azure Machine Learning. In this article, we'll use one of these built-in algorithms, specifically the AutoML algorithm, to train a regression model.

Deploying the machine learning model

After you've trained your machine learning model, you can deploy it as a web service so that it can be consumed by other applications. The steps for deploying a web service are as follows:

– Choose where to host your web service (Azure Container Instances or Azure Kubernetes Service)
– Configure the web service
– Test the web service

Consuming the deployed machine learning model

An application consumes a deployed machine learning model by sending input data to the web service endpoint and receiving predictions in return. In this section, we'll walk through how this works using a simple example application written in Python.

What is Visual Studio Code?

Visual Studio Code is a free code editor from Microsoft. It's available for Windows, Linux, and macOS. You can install it from the link below.

Getting Started with Azure Machine Learning

In this article, we'll show you how to get started with Azure Machine Learning using Visual Studio Code. We'll cover the basics of what Azure Machine Learning is and how it can be used, then walk through an example of using it to build a simple machine learning model. By the end, you should have a good understanding of how to use Azure Machine Learning with Visual Studio Code and be able to start building your own machine learning models.

Creating a Machine Learning Model

This article walks you through the process of creating a machine learning model using Azure Machine Learning and Visual Studio Code. Before you begin, make sure you have the following:

- An Azure subscription. If you don't have an Azure subscription, you can create a free account.
- Visual Studio Code.
- The Azure Machine Learning extension for Visual Studio Code. To install the extension, launch Visual Studio Code and select View > Extensions, then search for and select Azure Machine Learning.
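As a hedged illustration of the command-line route mentioned above (the workspace and resource-group names are placeholders, and the exact syntax depends on which CLI extension you have installed; this is the older azure-cli-ml style, while newer "ml" extensions use --name/--resource-group instead):

```
az extension add -n azure-cli-ml
az ml workspace create -w my-workspace -g my-resource-group
```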
Once you have the prerequisites installed, you're ready to create a new machine learning model. In Visual Studio Code, select File > New File, then type in the following code:

```python
from azureml import core
from sklearn import datasets

cancer_data = datasets.load_breast_cancer()
X = cancer_data.data
y = cancer_data.target

print("Shape of X: {}".format(X.shape))
print("Shape of y: {}".format(y.shape))
```

Save the file with a ".py" extension, then open the integrated terminal in Visual Studio Code (View > Integrated Terminal) and run the following command to train the model:

```
az ml execute -t -f filename.py
```

Training a Machine Learning Model

In order to use Azure Machine Learning with Visual Studio Code, you first need to train a machine learning model. This can be done using a variety of different techniques, but the most common is to use a dataset that has already been labeled with the correct results. Once you have a labeled dataset, you can then use it to train your machine learning model.

There are several ways to label a dataset, but the most common is to use a supervised learning algorithm. This means that you will need to have some labeled data in order to train your model. If you do not have any labeled data, you can still try an unsupervised learning algorithm, but this will usually not produce results that are as accurate.

Once you have your dataset, you will need to choose a machine learning algorithm that you want to use. There are many different types of machine learning algorithms, but some of the more popular ones include support vector machines, decision trees, and neural networks. After you have chosen an algorithm, you will need to configure it for your specific problem.

Once your machine learning model is trained, you can then use it for prediction. To do this, you will first need to load your data into Azure Machine Learning. You can do this by either using the web interface or by using the command line interface. After your data is loaded, you can then run your predictions against it and see how accurate they are.

Evaluating a Machine Learning Model

There are a few different ways to evaluate a machine learning model in Visual Studio Code:

1. Use the Azure Machine Learning extension
2. Use the built-in Python tools
3. Use a third-party tool like TensorFlow

1. Azure Machine Learning Extension

The Azure Machine Learning extension makes it easy to evaluate machine learning models. Simply open the extension, select your model, and choose how you want to evaluate it. The extension will automatically run the evaluation and provide you with a summary of the results.

2. Built-in Python Tools

If you're using Python for your machine learning development, you can use the built-in tools in Visual Studio Code to evaluate your model. For example, you can use the Jupyter Notebook integration or the debugger to step through your code and see how your model is performing.

3. Third-Party Tools like TensorFlow

If you're using a third-party tool like TensorFlow, you can use that tool's evaluation functionality to evaluate your machine learning model.

Deploying a Machine Learning Model

This article will guide you through the process of deploying a machine learning model using Azure Machine Learning and Visual Studio Code. First, you will need to create a new Azure Machine Learning workspace. You can do this through the Azure portal, or using the Azure CLI. Once your workspace has been created, you will need to create a new conda environment within it.
This environment will contain all of the necessary dependencies for running your machine learning model. Next, you will need to train your machine learning model. You can do this using any of the available training datasets, or by creating your own custom dataset. Once your model has been trained, you will need to deploy it to an Azure Container Instance. You can do this using the Azure ML CLI or the Azure ML SDK for Python. Finally, you will need to configure Visual Studio Code to connect to your deployed machine learning model. Once connected, you will be able to query your model and retrieve predictions from it.

Using Azure Machine Learning with Visual Studio Code

Visual Studio Code is a powerful code editor from Microsoft. It's free to download and use, and you can get started with it by following this tutorial. If you're working with machine learning, you may want to use Visual Studio Code for your development. Azure Machine Learning is a cloud-based service from Microsoft that makes it easy to build and deploy machine learning models. In this article, we'll show you how to use Azure Machine Learning with Visual Studio Code. We'll walk you through the process of installing the necessary components, creating a new project, and deploying a model to Azure Machine Learning.

Tips and Tricks for Using Azure Machine Learning

Azure Machine Learning is a powerful tool that can help you build and deploy machine learning models. If you're using Visual Studio Code, there are a few tips and tricks that can make your experience even better. First, make sure you have the Azure Machine Learning extension installed. This will give you access to all of the necessary tools and commands. Next, take advantage of code snippets. The Azure Machine Learning extension includes snippets for common tasks, such as creating an experiment or adding a dataset. To use a snippet, simply type the name of the snippet and press Tab. Finally, use the built-in linter to check your code for potential errors. The linter will catch things like invalid syntax and missing required fields. To run the linter, simply open the Command Palette (Ctrl+Shift+P) and select 'Lint Code'. By following these tips, you can get the most out of Azure Machine Learning in Visual Studio Code.

Conclusion

In this article, you learned how to use Azure Machine Learning with Visual Studio Code. You first set up your development environment and then created and trained a machine learning model. Finally, you deployed your model as a web service. With these skills, you can now build and deploy machine learning models using Visual Studio Code.
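The article describes consuming the deployed model but doesn't show the client side. Here is a minimal sketch, assuming the web service exposes a REST scoring endpoint that accepts JSON; the endpoint URL and the input schema are assumptions, not from the article.

```python
import json
import requests

# Hypothetical scoring endpoint of your deployed service (assumption)
scoring_uri = "http://<your-service>.azurecontainer.io/score"
headers = {"Content-Type": "application/json"}

# Input shaped like the breast-cancer feature vectors used earlier (30 features)
payload = json.dumps({"data": [[0.0] * 30]})

# Send the input data and print the predictions returned by the service
response = requests.post(scoring_uri, data=payload, headers=headers)
print(response.json())
```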
https://reason.town/azure-machine-learning-visual-studio-code/
CC-MAIN-2022-40
refinedweb
1,645
61.87
helper class to aid working with structured extents. More...

#include <vtkStructuredExtent.h>

helper class to aid working with structured extents.

vtkStructuredExtent is a helper class that assists with arithmetic on structured extents. It defines a number of static methods (most of which are inlined) to aid in dealing with extents.

Definition at line 35 of file vtkStructuredExtent.h.

Definition at line 39 of file vtkStructuredExtent.h.

Return 1 if this class is the same type as (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkObjectBase.

Clamps ext to fit in wholeExt. Definition at line 92 of file vtkStructuredExtent.h.

Returns true if ext fits within wholeExt with at least one dimension smaller than wholeExt. Definition at line 129 of file vtkStructuredExtent.h.

Returns whether ext fits within wholeExt. Unlike StrictlySmaller, this method returns true even if ext == wholeExt. Definition at line 105 of file vtkStructuredExtent.h.

Grows ext on each side by the given count. Definition at line 148 of file vtkStructuredExtent.h.

Grows ext on each side by the given count while keeping it limited to wholeExt. Definition at line 160 of file vtkStructuredExtent.h.

Makes ext relative to wholeExt. Definition at line 167 of file vtkStructuredExtent.h.

Given the extents, computes the dimensions. Definition at line 180 of file vtkStructuredExtent.h.
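A hedged C++ sketch of these static helpers (the method names follow the documentation above, but exact signatures can vary between VTK versions, so treat this as an assumption to verify against your headers):

```cpp
#include <vtkStructuredExtent.h>
#include <iostream>

int main()
{
  int wholeExt[6] = { 0, 100, 0, 100, 0, 100 };
  int ext[6]      = { -5, 40, 10, 120, 0, 100 };

  // Clamp ext so it fits inside wholeExt
  vtkStructuredExtent::Clamp(ext, wholeExt);

  // Compute the dimensions implied by the clamped extent
  int dims[3];
  vtkStructuredExtent::GetDimensions(ext, dims);

  std::cout << dims[0] << " x " << dims[1] << " x " << dims[2] << std::endl;
  return 0;
}
```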
https://vtk.org/doc/nightly/html/classvtkStructuredExtent.html
CC-MAIN-2019-47
refinedweb
233
62.44
This is the second tutorial of the Pico tutorial series. In the first tutorial, I showed how to use Pico with a service called StockQuote from webserviceX.NET. Today I will show you how to use Pico with another service called CurrencyConverter, also from webserviceX.NET. In the first tutorial, I showed you how to reference Pico as a static library; in this tutorial, I will show you how to reference the Pico source in your project. By the way, since the WSDL-driven development process in both tutorials is quite similar, I won't repeat too many details in this tutorial; I assume you have already read tutorial one and basically understand the WSDL-driven development process supported by Pico. The whole source of this tutorial is here.

Let's cut to the point:

Step 1 - Generate Objective-C Proxy from WSDL

Download mwsc and run the following command in terminal to generate the proxy:

A few comments on the generated code:

- By default, the proxy code will be generated in the sub-folder corresponding to the target namespace of the WSDL.
- There is a generated folder called client; the SOAP and XML proxy interface will be generated in this folder.
- There is a generated folder called common; a common header file will be generated in this folder. The common header includes the headers of all types generated from the WSDL/schema. Using this header file can free you from writing many import statements in your project when you build requests or handle responses needed by service calls.

Step 2 - Create a New iOS Project, Add the Pico Library and Generated Proxy into Your Project

Create a new simple iOS single view application named "CurrencyConverter". Don't choose ARC. Download the Pico source and drag the whole PicoSource folder into the project, choosing "Copy items to destination group's folder" and "add to targets" when prompted. Then apply the following settings to the project:

- In Target Build Settings, add the -ObjC flag to your "Other Linker Flags".
- In Target Build Settings, add /usr/include/libxml2 to your "Header Search Paths".
- In Target Build Phases, link the binary with the library libxml2.dylib.

Build the project to ensure that it builds successfully. Now drag the proxy generated in step 1 into the project, choosing "Copy items to destination group's folder" and "add to targets" when prompted. Build the project again to ensure that it builds successfully. The finished project should look like the screenshot below:

Step 3 - Implement Application Logic and UI, Calling the Proxy to Invoke the Web Service as Needed

First, create a shared service client as below:

Now open ViewController_iPhone.xib in Interface Builder, then add a few UI components as in the following screenshot:

Add IBOutlet properties and an IBAction method in ViewController.h, then wire the properties and method to the UI components accordingly. Basically, the application will convert the From currency to the To currency and show the conversion rate when the Convert button is clicked (which triggers the onConvertClicked method internally).

Now implement the onConvertClicked method by invoking the service as below:

Please don't forget to include the shared CurrencyConverterServiceClient.

Final Step - Run the Demo

See a screenshot below:

And the debug output:

There are other similar demos in the Examples folder of the Pico source, like the BarCode demo, which calls a web service that returns base64-encoded barcode data, and the Weather demo, which shows the weather for a given zip code; see the screenshots below. I won't create tutorials for all these simple demos, since they are quite similar.
Next time, I plan to show you how to use Pico with industrial-level web services, like the Amazon and eBay web services, so stay tuned.

The screenshot of the barcode demo:

The screenshot of the weather demo:
http://bulldog2011.github.io/blog/2013/03/28/pico-tutorial-2-a-currency-converter-sample/
CC-MAIN-2019-13
refinedweb
622
52.02
I sure was naive. When I launched a certain django-based site that accepted user comments (wonder which one that is?) a while back, I thought I could block the comment spam myself without CAPTCHA. After a few months of traffic I started getting hammered with it and tried blocking IPs, keywords and patterns. All to no avail.

The trouble-spot was a straightforward, regular old HTML form that accepted the comment input. I needed it to appeal to the wide browser requirements of the site. My AJAX-jQuery-to-django-piston-service comment submissions were rarely the source of spam entry, but I needed my regular forms locked down as well. I toyed with the idea of rolling my own CAPTCHA but I honestly have bigger fish to fry. Turns out that integrating reCAPTCHA with django was a cinch and solved my comment spam problems. Here's how to do it.

Step #1: Get a reCAPTCHA Account

reCAPTCHA is a service and all the heavy lifting is done on reCAPTCHA's servers. Because of that you must sign up for an account to use the service here. By default a key works on a single domain, but you can also create your key as "Global", allowing it to work on multiple sites.

Step #2: Install recaptcha-client

In order for django (or any other python code) to use the reCAPTCHA service you must install the recaptcha-client library. This is most easily accomplished with setuptools:

```
easy_install recaptcha-client
```

Alternatively you can install it directly from source by downloading it from:

Step #3: Add reCAPTCHA to a Template

Now it's time for some actual web development. I'll start out by putting the familiar reCAPTCHA interface in a django template. Notice that I have it as part of a form named edit_form. There are also a couple locations where you have to insert your public key, which you get when you sign up for a reCAPTCHA account.

```html
<form action="#" method="POST">
  <table>
    {{ edit_form }}
    <tr>
      <th>Are you human?</th>
      <td>
        <span class="validation_error">{{ captcha_response }}</span>
      </td>
    </tr>
    <tr>
      <th></th>
      <td><input type="submit" value="Save"/></td>
    </tr>
  </table>
</form>
```

Step #4: Handle reCAPTCHA Upon Form Submission

Now I'll set up a view to handle the template and form submission. This would all live in your application's views.py.

```python
# load the recaptcha module
from recaptcha.client import captcha

# create the form to be submitted
class EditForm(forms.Form):
    data_field = forms.CharField()

def myview(request):
    if request.method == 'POST':
        edit_form = EditForm(request.POST)

        # talk to the reCAPTCHA service
        response = captcha.submit(
            request.POST.get('recaptcha_challenge_field'),
            request.POST.get('recaptcha_response_field'),
            '[[ MY PRIVATE KEY ]]',
            request.META['REMOTE_ADDR'],)

        # see if the user correctly entered CAPTCHA information
        # and handle it accordingly.
        if response.is_valid:
            captcha_response = "YOU ARE HUMAN: %(data)s" % {'data': edit_form.data['data_field']}
        else:
            captcha_response = 'YOU MUST BE A ROBOT'

        return render_to_response('mytemplate.html', {
            'edit_form': edit_form,
            'captcha_response': captcha_response})
    else:
        edit_form = EditForm()
        return render_to_response('mytemplate.html', {'edit_form': edit_form})
```

Which would look like:

If a user enters the CAPTCHA text correctly they get a message indicating that they are human. This is where you would put logic to do something like save a comment. If a user fails to enter the CAPTCHA text correctly they get a nasty error message telling them that they must be a robot.
In that code path you'd want to assume the user either entered the text wrong and will retry or that there is no real user at all.

Conclusion

It's unfortunate that we have to deal with spam bots and other abuses of our hard work. Luckily services like reCAPTCHA make it relatively easy to defend against. And the benefits extend beyond just protecting our own web content. Every time a user uses reCAPTCHA they're actually helping to digitize books on the other end.

Sat Nov 21 2009 15:11:00 GMT+0000 (UTC)

Well, I've been spending a little more time fiddling with Google's new Go programming language of late and again figured I'd share some more playing-around-code.

HTTP Operations and XML Processing

One of my favorite examples I tend to use in higher level languages is the retrieval of twitter statuses with only out-of-the-box libraries. I was surprised how simple this task ended up being with Go, which is more of a systems language. The HTTP GET was a one-liner and the resultant XML can be unmarshaled right into native structs.

```go
package main

import (
    "http";
    "fmt";
    "xml";
)

/* these structs will house the unmarshalled response.
   they should be hierarchically shaped like the XML
   but can omit irrelevant data. */
type Status struct {
    Text string
}

type User struct {
    XMLName xml.Name;
    Status Status;
}

func main() {
    /* perform an HTTP request for the twitter status */
    response, _, _ := http.Get("");

    /* initialize the structure of the XML response */
    var user = User{xml.Name{"", "user"}, Status{""}};

    /* unmarshal the XML into our structures */
    xml.Unmarshal(response.Body, &user);

    fmt.Printf("status: %s", user.Status.Text);
}
```

Object Orientation

Go's approach to object orientation is interesting. Essentially you just tack methods onto structs as is illustrated below.

```go
package main

import "fmt"

/* basic data structure upon which we'll define methods */
type employee struct {
    salary float;
}

/* a method which will add a specified percent to an employee's salary */
func (this *employee) giveRaise(pct float) {
    this.salary += this.salary * pct;
}

func main() {
    /* create an employee instance */
    var e = new(employee);
    e.salary = 100000;

    /* call our method */
    e.giveRaise(0.04);

    fmt.Printf("Employee now makes %f", e.salary);
}
```

Go doesn't have inheritance per se but does support contracts by way of interfaces. The following code illustrates how two different kinds of things (stock positions and automobiles) share a common operation (obtaining their value).

```go
package main

import "fmt";

type stockPosition struct {
    ticker string;
    sharePrice float;
    count float;
}

/* method to determine the value of a stock position */
func (this stockPosition) getValue() float {
    return this.sharePrice * this.count;
}

type car struct {
    make string;
    model string;
    price float;
}

/* method to determine the value of a car */
func (this car) getValue() float {
    return this.price;
}

/* contract that defines different things that have value */
type valuable interface {
    getValue() float;
}

/* anything that satisfies the "valuable" interface is accepted */
func showValue(asset valuable) {
    fmt.Printf("Value of the asset is %f\n", asset.getValue());
}

func main() {
    var o valuable = stockPosition{ "GOOG", 577.20, 4 };
    showValue(o);

    o = car{ "BMW", "M3", 66500 };
    showValue(o);
}
```

Tue Nov 17 2009 20:00:00 GMT+0000 (UTC)

It's funny. These days I hear Google's name mentioned in reference to subjects I never would have imagined three or four years back. Cell phones... Web browsers... Operating Systems... And a systems programming language???
Yes, a systems programming language... By the name of "Go", actually. It boasts garbage collection, enhanced safety and slick concurrency. The idea is that it should perform only slightly slower than C but feel more like python; simple, safe, garbage-collected and plumbing-light.

Considering it was just released to the public a few days ago and I've only been fiddling with it for a few hours now, I certainly can claim no authority on the subject. I would, however, like to share some of the little hello-world-style hacks I whipped out while getting myself acquainted with Go.

Hello World

Here's a basic Hello World app.

```go
package main

/* load the "fmt" package. */
import "fmt"

func main() {
    /* write formatted text to console */
    fmt.Println("hello world");
}
```

Output:

```
hello world
```

Hello World 2

Now wrapped in a function. Notice the type follows the identifier in the argument declaration, the opposite of most languages.

```go
package main

import "fmt"

func WriteStuff(message string) {
    fmt.Println(message);
}

func main() {
    WriteStuff("hello world");
}
```

Concurrency

Concurrency is accomplished by a concept called "goroutines". The implementation is quite elegant. All you have to do is precede your function call with the "go" keyword. Quite nice!

```go
package main

import (
    "fmt";
    "time";
)

func doStuff(message string) {
    for i := 0; i < 10; i++ {
        fmt.Println(message);
        time.Sleep(100);
    }
}

func main() {
    /* the "go" spawns the concurrent goroutine */
    go doStuff("goroutine");

    /* the lack of go runs on the caller */
    doStuff("main");
}
```

Resulting in output similar to:

```
goroutine
goroutine
main
main
goroutine
goroutine
main
main
goroutine
goroutine
main
main
goroutine
goroutine
main
main
goroutine
goroutine
main
main
```

Concurrency 2

Here's basically the same thing, but lambda-like.

```go
package main

import (
    "fmt";
    "time";
)

func main() {
    /* the "go" spawns another thread, lambda style this time */
    go func() {
        for i := 0; i < 10; i++ {
            fmt.Println("goroutine");
            time.Sleep(100);
        }
    }(); /* parens indicate we're calling a func */

    for i := 0; i < 10; i++ {
        fmt.Println("main");
        time.Sleep(100);
    }
}
```

Channels

Communication between threads is accomplished via channels a la Erlang or Stackless Python. More concurrent goodness!

```go
package main

import (
    "fmt";
    "time";
)

func doStuff(id int, c chan string) {
    var s string;

    for i := 0; i < 3; i++ {
        /* receive from the channel */
        s = <-c;
        fmt.Printf("%d: %s\n", id, s);
    }
}

func main() {
    /* create a channel with string messages */
    c := make(chan string);

    /* spawn two goroutines that will receive messages */
    go doStuff(1, c);
    go doStuff(2, c);

    /* send messages to the goroutines via our channel */
    c <- "message 1";
    c <- "message 2";
    c <- "message 3";
    c <- "message 4";
    c <- "message 5";
    c <- "message 6";

    time.Sleep(1000);
}
```

Resulting in output similar to:

```
1: message 1
1: message 3
2: message 2
1: message 4
2: message 5
2: message 6
```

Maps

Handy key-value storage is available through maps.

```go
/* create and initialize a map that contains floats
   and is keyed on strings. */
stocks := map[string] float {
    "JAVA" : 8.67,
    "MSFT" : 29.62,
    "GOOG" : 572.05
};

fmt.Printf("Google is %f\n", stocks["GOOG"]);

/* add a pair to a map */
stocks["ORCL"] = 22.31;
fmt.Printf("Oracle is %f\n", stocks["ORCL"]);
```

Conclusion

Well, that's the first few hours of hacking around. Obviously this just scratches the surface. I'm looking forward to seeing what people actually produce with Go. It's interesting, though.
We've been so wrapped up in abstracting everything away into runtimes that we may have ignored the possibility of making native/systems languages more productive. Perhaps, if it catches on, Go will change that. With its ease of use, memory safety, and simplified concurrency I certainly think it has a shot at improving our lower-level lives.

Sat Nov 14 2009 23:11:00 GMT+0000 (UTC)

At several points in my .Net development career I've had the need to make an application I wrote scriptable. Sometimes it was to provide easy product extension to customers or lower level information workers. Sometimes it was to ease maintenance of very fine grained logic that has the capacity to change frequently or unpredictably. But every time I found it to be one of the more interesting facets of the project at hand.

Early in .Net's history this was made easy by using Visual Studio for Applications (VSA), which allowed you to host arbitrary C# or VB.Net code within the executing AppDomain. Unfortunately VSA was plagued with resource leak problems and was therefore impractical in most enterprise situations. VSA was eventually deprecated.

One of the many alternatives is to perform dynamic, on-the-fly code compilation. While certainly quite manageable, it was a bit more complex and much akin to cutting down a sapling with a chainsaw. Another option that came along later is Visual Studio Tools for Applications, which brought the Visual Studio IDE to the scripter.

My favorite avenue, however, is to host a Dynamic Language Runtime (DLR) and use a language like IronPython. Not only is it disgustingly simple to implement from a plumbing point of view, but Python itself seems like a natural fit due to its simplicity. IronRuby's another wonderful choice but I'll stick to IronPython for the scope of this post.

Examples

The demonstration I'm about to show you was done using Visual Studio 2008 and IronPython 2.6 RC2. All you have to do is reference:

All of the following examples require the following imports:

```csharp
using System;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;
```

This is very basic. It simply executes a Python print statement.

```csharp
static void Main(string[] args)
{
    /* bring up an IronPython runtime */
    ScriptEngine engine = Python.CreateEngine();
    ScriptScope scope = engine.CreateScope();

    /* create a source tree from code */
    ScriptSource source = engine.CreateScriptSourceFromString("print 'hello from python'");

    /* run the script in the IronPython runtime */
    source.Execute(scope);
}
```

This prints:

```
hello from python
```

Scripting isn't very useful if the script can't affect the AppDomain around it. Here's an example that modifies an integer from the calling program.

```csharp
static void Main(string[] args)
{
    ScriptEngine engine = Python.CreateEngine();
    ScriptScope scope = engine.CreateScope();

    /* create a Python variable "i" with the value 1 */
    scope.SetVariable("i", 1);

    /* this script will simply add 1 to it */
    ScriptSource source = engine.CreateScriptSourceFromString("i += 1");
    source.Execute(scope);

    /* pull the value back out of IronPython and display it */
    Console.WriteLine(scope.GetVariable<int>("i").ToString());
}
```

producing

```
2
```

Naturally scripts would frequently operate on domain objects in the real world:

```csharp
public class Employee
{
    public double Salary { get; set; }
    public bool Good { get; set; }
}
```

The following code conditionally modifies an Employee object.
```csharp
static void Main(string[] args)
{
    Employee employee = new Employee() { Salary = 50000, Good = true };

    ScriptEngine engine = Python.CreateEngine();
    ScriptScope scope = engine.CreateScope();
    scope.SetVariable("employee", employee);

    /* a more complex script this time */
    ScriptSource source = engine.CreateScriptSourceFromString(
@"
def evaluate(e):
    if e.Good:
        e.Salary *= 1.05

evaluate(employee)
");
    source.Execute(scope);

    Console.WriteLine(scope.GetVariable<Employee>("employee").Salary);
}
```

You can also call functions in a python script:

```csharp
ScriptSource source = engine.CreateScriptSourceFromString(
@"
def fun():
    print 'hello from example function'
");

/* define fun in the scope before invoking it */
source.Execute(scope);

engine.Operations.Invoke(scope.GetVariable("fun"));
```

Conclusion

As you can see there really isn't much plumbing involved in hosting an IronPython runtime. In my opinion it combines both ease and power, producing nearly perfect extension.

Tue Nov 03 2009 22:11:00 GMT+0000 (UTC)
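Building on the hosting pattern in the post above, here's a short sketch of my own (not from the original): ObjectOperations.Invoke also accepts arguments and returns the Python function's result.

```csharp
ScriptSource source = engine.CreateScriptSourceFromString(
@"
def add(a, b):
    return a + b
");
source.Execute(scope);

/* invoke the Python function with arguments and capture the result */
object result = engine.Operations.Invoke(scope.GetVariable("add"), 2, 3);
Console.WriteLine(result); // prints 5
```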
http://www.chrisumbel.com/?page=10
CC-MAIN-2017-43
refinedweb
2,331
57.16
struct is a C language keyword that creates a structure.

```c
struct record {
    char name[32];
    int age;
    float debt;
};
```

Within the curly brackets live the structure's members. Variables of the structure type can be declared right after the closing brace:

```c
struct record {
    char name[32];
    int age;
    float debt;
} r1, r2, r3, r4;
```

Members are accessed with the dot operator:

```c
printf("Name: %s\n", r4.name);
r2.age = 32;
```

The following code shows how to use a struct.

```c
#include <stdio.h>

int main()
{
    struct player {
        char name[32];
        int highscore;
        float hours;
    };

    struct player xbox;

    printf("Enter the player's name: ");
    scanf("%s", xbox.name);
    printf("Enter their high score: ");
    scanf("%d", &xbox.highscore);
    printf("Enter the hours played: ");
    scanf("%f", &xbox.hours);

    printf("Player %s has a high score of %d\n", xbox.name, xbox.highscore);
    printf("Player %s has played for %.2f hours\n", xbox.name, xbox.hours);

    return(0);
}
```
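To extend the example slightly (this addition is not from the original tutorial), a structure variable can also be initialized at the point of declaration:

```c
#include <stdio.h>

struct record {
    char name[32];
    int age;
    float debt;
};

int main(void)
{
    /* initialize members in declaration order */
    struct record r1 = { "Alice", 30, 1234.56f };

    printf("%s is %d and owes %.2f\n", r1.name, r1.age, r1.debt);
    return 0;
}
```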
http://www.java2s.com/example/c-book/understanding-struct.html
CC-MAIN-2019-18
refinedweb
130
79.67
Return a piece of a point cloud. More...

#include <vtkExtractPointCloudPiece.h>

Return a piece of a point cloud.

This filter takes the output of a vtkHierarchicalBinningFilter and allows the pipeline to stream it. Pieces are determined from an offset integral array associated with the field data of the input.

Definition at line 33 of file vtkExtractPointCloudPiece.h.

Definition at line 41 of file vtkExtractPointCloudPiece.h.

Standard methods for instantiation, printing, and type information.

Turn on or off modulo sampling of the points. By default this is on and the points in a given piece will be reordered in an attempt to reduce spatial coherency.

This is called by the superclass. This is the method you should override. Reimplemented from vtkPolyDataAlgorithm.

This is called by the superclass. This is the method you should override. Reimplemented from vtkPolyDataAlgorithm.

Definition at line 63 of file vtkExtractPointCloudPiece.h.
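A hedged C++ sketch of how this filter might sit in a streaming pipeline (my illustration, not from the documentation; class availability and the UpdatePiece call should be verified against your VTK build):

```cpp
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkHierarchicalBinningFilter.h>
#include <vtkExtractPointCloudPiece.h>

// 'points' is assumed to be a vtkPolyData point cloud produced elsewhere
void ExtractFirstPiece(vtkPolyData* points)
{
  // Bin the points hierarchically; the binning filter attaches the
  // offset array that vtkExtractPointCloudPiece reads from field data.
  vtkNew<vtkHierarchicalBinningFilter> binner;
  binner->SetInputData(points);

  vtkNew<vtkExtractPointCloudPiece> extract;
  extract->SetInputConnection(binner->GetOutputPort());

  // Request piece 0 of 8 with no ghost levels (streaming update).
  extract->UpdatePiece(0, 8, 0);
  vtkPolyData* piece = extract->GetOutput();
  (void)piece; // use the extracted piece downstream
}
```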
https://vtk.org/doc/nightly/html/classvtkExtractPointCloudPiece.html
CC-MAIN-2021-10
refinedweb
143
62.64
Summary: In this article, you can learn how to import Olympus E-PM1 AVCHD to Avid Media Composer (AMC) on Mac.

The Olympus E-PM1 is a great camera that records vivid HD videos; it has striking features that make it outstanding.

FULL 1080 HD VIDEO: Shoot up to 29 minutes of 1080 60i HD video and stereo sound in either AVCHD or AVI formats. A Direct HD Movie Button switches you from still to movie mode with just the press of a button.

AVCHD files are produced by the Olympus E-PM1 when you use this camera to record video. The AVCHD format is great for recording high-definition videos, as it preserves high quality. But AVCHD video is not easy to use, because much editing software doesn't support the format, including Avid. How do you import Olympus E-PM1 AVCHD to Avid Media Composer? How do you convert Olympus E-PM1 AVCHD to an Avid MC compatible video format? The following is the Olympus E-PM1 AVCHD workflow in Pavtube AVCHD Converter for Mac.

You need an Olympus E-PM1 AVCHD to Avid converter to help you make AVCHD compatible with Avid. Pavtube AVCHD Converter for Avid is just the AVCHD camcorder video converter for converting 1080p HD AVCHD for Avid editing. As Avid Media Composer's best codec is DNxHD, with this software you can convert Olympus E-PM1 AVCHD to DNxHD codec MOV video for AMC editing on Mac.

1. Download the Pavtube Olympus to Mac converter, and follow the prompts to install the program.
2. Click "Add" to load AVCHD video files or directly drag the files into the file list.
3. Click the "Format" bar to determine the output format. The program offers particular output formats for Avid. Just click the format bar and choose "Avid Media Composer -> Avid DNxHD (*.mov)" as the best video format for editing AVCHD video on Mac.
4. Click the convert button to encode Olympus E-PM1 AVCHD to Avid DNxHD MOV video.

Related Articles:

Mac Solution
Olympus Video workflow for FCP
Load Olympus E-PL3 AVCHD to iMovie with AIC Codec
Convert and Edit Olympus E-PL1 videos in FCP
Import Olympus E-PL1 footage to iMovie or FCE
Olympus E-PM1 FAQ: Encoding, Importing and Editing
Transcode Olympus E-M5 MOV to AIC for FCE or iMovie

Windows Solution
Import E-PL3 AVCHD to Adobe Premiere Pro
http://www.anddev.org/ndk-problems-f56/avchd-editor-mac-convert-olympus-e-pm1-avchd-to-avid-dnxhd-f-t2168653.html
CC-MAIN-2017-04
refinedweb
390
69.62
How to Know What Rate of Return to Expect from your Stocks: Part 1

Introduction

We believe there are two critical attributes that the prudent investor should consider before investing in a company (stock). Furthermore, these same two attributes can be used to calculate a reasonable expectation of the future return the stock is capable of generating on their behalf. These two attributes are valuation and the rate of change of earnings growth. Valuation indicates whether or not the company's current earnings power compensates you for the risk you take, while the company's future rate of change of earnings growth will be the driver of future returns.

We believe these are very important concepts for prudent investors to understand for several reasons. First of all, you can make an investment at fair value into a low or slow growth company, and still not generate a high rate of return. On the other hand, the risk associated with achieving it is normally low. This is simply because achieving a low rate of earnings growth is easier to do. Conversely, you could overpay, perhaps even significantly so, for a very powerful or fast grower and still make a high rate of return, because of the power of compounding. However, by overpaying you are taking on more risk than you should for two reasons. First of all, the probability of a company achieving a very high rate of growth is very low, and second, longer term it's virtually a given that price will return to fair value. Therefore, the investor will not be able to harvest the full measure of the company's growth achievement. More simply stated, the probability of a future PE contraction is very high. The effect is a lower rate of return than deserved, while illogically taking a higher level of risk than necessary to obtain the lower return.

To summarize, if a company grows fast enough, then future earnings growth can overcome a high beginning valuation. However, the risk taken to achieve it is amplified by the high valuation. Conversely, if you come across an opportunity to buy a stock at a very low valuation, even a low growth company, your return potential is greatly enhanced while simultaneously your risk is greatly lessened. On the other hand, overpaying for a slow grower (for example, a typical utility stock) destroys your return potential while simultaneously turning what might normally be a low-risk investment into a high risk investment. In the same vein, if you can find a slow grower that is significantly undervalued, you could generate a high return and arguably achieve it at very low risk.

Valuation Demystified

As stated in our introduction, valuation is one of what we believe to be the two most important attributes that investors should consider before investing in a stock. Yet, fair valuation alone does not automatically indicate a high future return or even an adequate one. In truth, valuation is a relative concept that becomes relevant to future return only when looked at in conjunction with future growth. Throughout this report, we will illustrate that valuation unto itself is most relevant in the context of current time. In other words, valuation itself applies predominantly to current fundamentals. Consequently, we see valuation more as a measurement of soundness than we do as a rate of return expectation. As a result, both a moderately fast-growing stock and a very slow-growing stock can command the same valuation in current time. However, given fair valuation for both, the faster grower offers a higher future return.
This concept can apply to all classes of stocks, including non-dividend paying stocks as well as dividend paying stocks (later in this article I will more clearly reveal this principle with specific examples). The point I am stressing is that valuation is more related to soundness than it is to future returns. Even more simply stated, valuation is about prudent behavior and risk mitigation.

To illustrate this more clearly, we are going to utilize the most common valuation measurement, the PE ratio. But first and foremost, we want to emphatically state that the PE ratio is more than a mere statistical inference. Instead, our objective is to illuminate the idea that the PE ratio is a relevant measurement of valuation that represents the appropriate compensation for the amount of risk currently being assumed. The key to understanding this is to recognize the PE ratio as a measurement of the earnings yield the investment is offering.

To put this into perspective, consider that the long-term average PE ratio of the S&P 500 has been, depending on the time frame being measured, somewhere between 14 to 16 times earnings (S&P 500 average PE 14 - 16). Jeremy Siegel, professor of finance at the Wharton School of the University of Pennsylvania, has written that stocks have returned an average of 6.5% to 7% per year, after inflation, over the last 200 years. For simplicity's sake, we are going to hang our hats on the historical average S&P 500 PE ratio of 15.

Now, up to this point, the average PE ratio of 15 for the S&P 500 is simply a statistic. However, a statistic is information, but information alone is not wisdom. To our way of thinking, answering the important question as to why a PE ratio of 15 is common over such a long period of time is more critical than simply knowing the number itself. In order to accomplish this, let's analyze what PE ratios of 14 - 16 translate into in terms of rate of return calculations. What we discover is that a PE ratio of 15 represents a reasonable and attractive rate of return of approximately 6% to 7%, that has historically been achieved (the long-term stock market average), and is therefore logically considered acceptable and achievable.

To add clarity to this point, let's actually calculate the rate of return (earnings yield) that a PE of 14, 15 or 16 represents. To determine the earnings yield, simply reverse the PE ratio (Price divided by Earnings) and calculate the EP ratio (Earnings divided by Price). Therefore, we learn that a PE ratio of 14 equals an earnings yield of 7.1%, a PE ratio of 15 equals an earnings yield of 6.66%, and finally a PE ratio of 16 equals an earnings yield of 6.25%. Therefore, we learn through wisdom that it is no coincidence that these calculations coincide almost perfectly with Prof. Jeremy Siegel's historical statistic of average stock market returns of 6.5% to 7%. In other words, an average stock market PE of approximately 15 (14-16) is rational and makes economic sense because it represents an appropriate yield on investment, which is why it is so commonly applied to the valuation of most stocks.

However, for this to be relevant enough to be considered wisdom, it needs to also practically apply in real world circumstances. To practically apply, we must be able to see and measure evidence that clearly shows that the average fair value PE ratio of 15 actually does manifest in reality.
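Before turning to the tests, the arithmetic above can be written compactly (this formulation is mine, restating the article's own numbers):

```latex
\text{Earnings yield} = \frac{E}{P} = \frac{1}{P/E}, \qquad
\frac{1}{14} \approx 7.1\%, \quad \frac{1}{15} \approx 6.66\%, \quad \frac{1}{16} = 6.25\%
```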
Testing the Fair Value 15 PE Ratio Hypothesis

In order to test the fair value PE ratio of 15, we will utilize the earnings and price correlated Fundamentals Analyzer Software Tool - F.A.S.T. Graphs™. This powerful "tool to think with" utilizes widely accepted formulas for valuing a business. Utilizing these formulas, appropriate valuations in the form of PE ratios are calculated. Interestingly, these formulas tend to calculate the fair value PE ratio to be approximately 15 for companies whose earnings growth has historically averaged 3% to 15% per annum. Companies whose growth is below 3% will calculate out at PE ratios slightly less than 15. For very fast growing companies (above 15%), a different formula that calculates higher PE ratios applies, and will be presented later. This supports the 200-year 6.5% to 7% yield that Prof. Jeremy Siegel reported (remember, a PE of 15 equals an earnings yield of 6.66%).

Once the fair value PE ratio is calculated, monthly closing stock prices are overlaid onto the graphs in order to discover if there is a strong price and earnings correlation. If the formulas are valid, then we should see a very close association between earnings and stock price relative to a PE ratio of 15 over time. The following four earnings and price correlated F.A.S.T. Graphs™ illustrate how the market has historically valued companies with varying growth rates within the 3% to 15% growth category at a PE of 15. The reader should note that each of these samples shows that the calculated PE (the orange line) and the normal PE historically applied by the market (dark blue line) are virtually the same, or at least close.

SCANA Corp (SCG): A Low Growth Regulated Utility

Our first example plots SCANA Corp., whose earnings growth rate has averaged only 3.3% per annum. Here, we would like to remind the reader that our position is that fair valuation is a function of the earnings yield that "current earnings" represent. Consequently, purchasing a company at fair valuation implies that the investor is making a sound financial decision. However, as previously stated, this does not necessarily guarantee a high future rate of return. As we will illustrate in Part 2, that will be determined by the company's future earnings growth rate.

The orange line on the following graph represents our fair value PE ratio of 15 applied to SCANA Corp.'s historical earnings since calendar year 1998. To be clear, every point on the orange line equals a PE ratio of 15. The Graham Dodd Formula was used to calculate the fair value PE ratio of 15 and is expressed to the right of the graph in orange letters - GDF X 15. According to our thesis, if this truly is a fair value PE ratio, when the stock price overlay is applied, then price should track the orange line very closely over time.

Our next graph overlays monthly closing stock prices with our orange earnings justified valuation line. Moreover, two additional important valuation metrics are also added. The light blue shaded area expresses dividends paid out of earnings (the green shaded area). The dark blue line represents our algorithm calculating the normal PE ratio that the market has historically applied to this business over this time period. Clearly, we see that the black price line tracks the orange earnings justified valuation line (PE = 15) almost perfectly. The normal PE ratio (14.4) is also almost a perfect match, indicating that the market has historically appraised this company at approximately 15 times earnings.
Furthermore, during the short-term time periods when price temporarily deviates from earnings, we see that it soon returns. Therefore, we discover in this example that fair valuation exists any time the stock is trading at a PE ratio of 15 or below. (Note: Once again, the rate of return that this produces is a different matter that will be elaborated on in Part 2.)

OGE Energy Corp. (OGE): Another Utility with Slightly Higher Growth

Our second example, OGE Energy Corp., differs from our first only by virtue of the fact that its earnings growth rate since 1998 has averaged over 5% per annum. Nevertheless, we once again discover that the PE ratio of 15 represents a strong proxy for this company's valuation. During the short time intervals when price deviates from the fair value PE of 15, it doesn't take long for price to move back into alignment with earnings. Furthermore, during this time frame there have been almost no incidences of overvaluation with either of our first two examples. However, in both cases we see that when the PE ratio falls substantially below our PE 15 standard, the opportunity for higher rewards at significantly lower risk clearly manifests.

Wolverine World Wide (WWW): A Faster Growing Global Marketer of Footwear

Our third example, Wolverine World Wide, moves farther up the growth chain with earnings averaging 9.4% per annum. Nevertheless, we once again see the strong relationship and close correlation between stock price and earnings over the long run. Perhaps due to the faster earnings growth rate, we do see several periods where the company's price earnings ratio has deviated significantly above the 15 standard, indicating overvaluation. Nevertheless, just as we saw with our first two examples, stock price inevitably and soon moves back into alignment with earnings.

Inter Parfums, Inc. (IPAR): Develops, Manufactures and Distributes Prestige Perfumes

Our final example, Inter Parfums, Inc., is an above-average growing small-cap that validates our PE 15 standard, but with a twist. Small capitalization companies tend to carry greater risk than larger capitalization companies. As a result, it is not uncommon to see, as we do with Inter Parfums, Inc., an earnings and price correlated graphic with wilder price swings. On the other hand, we believe the following graphic clearly validates the thesis of fair valuation. The PE 15 principle continues to apply over the long run. More directly stated, price does track earnings, albeit with violently volatile price swings in between. But most importantly, in spite of all the price gyrations, stock price continues to quickly and inevitably revert to the fair value PE ratio mean of 15.

Summary and Conclusions

In Part 1 of this two-part series, the majority of our focus has been on the principle of fair valuation, or as we like to call it - True Worth™. Our contention is that by understanding and accepting the idea that fair valuation is primarily a metric of soundness, it simultaneously lays the foundation for determining future returns from a common stock investment. However, as we will expand upon in Part 2, valuation is a relative measurement. Therefore, when viewed in isolation, it does not provide an accurate future return calculator. In order to calculate future returns within a reasonable degree of accuracy, we must consider valuation as it relates to earnings growth. In this Part 1, we have provided evidence that fair valuation calculates out to be very similar for companies generating earnings growth of 3% to 15% per year.
Our thesis is that fair valuation is, first and foremost, a function of a stock's current earnings yield. This is why companies with different earnings growth rates will command similar, if not identical, current valuations. However, as we will develop in Part 2, thanks to the power of compounding, when growth becomes very fast (above 15%) a higher valuation becomes justified. But perhaps most importantly, in Part 2 we intend to demonstrate how investors in common stocks can utilize the principle of valuation in conjunction with the company's expected future earnings growth rate to determine reasonable future rate of return expectations.

We believe these are critical components for investors to master. When valuation is understood and looked at in conjunction with future earnings growth, risk assessments also become clearer and more accurate. Consequently, not only can investors have a better idea of what rate of return they can expect, but they can also ascertain how much risk they are taking to generate it. As a result, smarter, sounder and more profitable buy, sell and hold decisions can be made.

Disclosure: Long SCG at the time of writing.
http://www.safehaven.com/article/26003/how-to-know-what-rate-of-return-to-expect-from-your-stocks-part-1
CC-MAIN-2013-20
refinedweb
2,573
51.07
When I modify the headers on an NSMutableUrlRequest, it will compile and run perfectly fine; however, modifying the headers doesn't change anything. To try and figure out why, I stepped through the code and I noticed that it gives me a warning "Unknown member: Headers". However, Headers is a public property of NSMutableUrlRequest, and no error is thrown when the line of code executes. I am using the following code.

```csharp
NSMutableUrlRequest request = new NSMutableUrlRequest();
request.Headers = mydict;
request.Url = myurl;
```

This comes from:

Where did you see "Unknown member: Headers"? Logs (please attach)? XS? VS? (screenshot) IIRC not every header can be modified programmatically. Can you provide what `mydict` contains (or better, a self-contained test case)?

Created attachment 14480 [details] Picture showing the error

Here is a picture of where the error comes up. It occurs in Xamarin Studio, and occurs with different keys for the Header. I have tried Post, User-Agent, and custom keys.

The picture shows a UI error from the debugger. That can be a debugger issue or, if it comes from a device build, it could be that the linker removed the `getter` (because it's unused). In any case it's unrelated to not being able to add items in the headers. Please provide a test case that shows the issue (e.g. not being able to set "User-Agent") and we'll have a look.

Using this code produces this User-Agent header.

Code:

```csharp
var obj = new Object[] { "hosted" };
var key = new Object[] { "User-Agent" };
NSDictionary dict;
dict = NSDictionary.FromObjectsAndKeys(obj, key);
request.Headers = dict;
```

Header:

Mozilla/5.0 (iPhone; CPU iPhone OS 9_2 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Mobile/13C75

Also, trying to call the getter on Headers throws a null exception error, and the UI error from the debugger is present after calling "request.Headers = dict;". If this isn't the info you need please let me know and I will attempt to provide you with it.

Your sample is not complete (e.g. how and when did you print the header) but similar code works fine for me.

```csharp
[Test]
public void A()
{
    using (var obj = new NSString("hosted"))
    using (var key = new NSString("User-Agent"))
    using (var dict = NSMutableDictionary.FromObjectAndKey(obj, key))
    using (var request = new NSMutableUrlRequest())
    {
        request.Headers = dict;
        Console.WriteLine(request.Headers.ToString());
    }
}
```

This prints:

```
2016-01-15 08:45:05.365 monotouchtest[33928:5912629] {
    "User-Agent" = hosted;
}
```

The immediate debugging pad shows:

```
? request.Headers
{{ "User-Agent" = hosted; }}
    Class: {ObjCRuntime.Class}
    ClassHandle (Foundation.NSMutableDictionary): 0x113588020
    ClassHandle (Foundation.NSDictionary): 0x113588020
    ClassHandle (Foundation.NSObject): 0x113588020
    Count: 1
    DebugDescription: "{\n \"User-Agent\" = hosted;\n}"
    Description: "{\n \"User-Agent\" = hosted;\n}"
    DescriptionInStringsFileFormat: "\"User-Agent\" = \"hosted\";\n"
    Handle: 0x7ffb9cee6890
    IsProxy: false
    Keys: {Foundation.NSObject[1]}
    ObjectEnumerator: {<__NSDictionaryObjectEnumerator: 0x7ffba0038320>}
    RetainCount: 4
    Self: {{ "User-Agent" = hosted; }}
    SuperHandle: 0x12b3c4d20
    Superclass: {ObjCRuntime.Class}
    Values: {Foundation.NSObject[1]}
    Zone: {Foundation.NSZone}
    Static members:
    Non-public members:
    IEnumerator:
```

It's possible that iOS changes the value later (at use time), as the list of headers you cannot change (in Apple's documentation) might not be complete (or up to date).
If you do not get the same results as me for the above then please:

- attach a self-contained test case that shows your issue;
- include all* version information;
- re-open the bug

* The easiest way to get exact version information is to use the "Xamarin Studio" menu, "About Xamarin Studio" item, "Show Details" button and copy/paste the version information (you can use the "Copy Information" button).

It seems to work perfectly fine on my simulator (iPhone 6, iOS 9.2), although I can never find the header in the resulting request; however, when running it on my phone (iPhone 6, iOS 9.2) I encounter the error. It only has the ClassHandle and URL members in the debugger when using the device, but all members show up in the debugger when using the simulator.

```csharp
using System;
using Foundation;
using UIKit;
using CoreGraphics;

namespace test
{
    public partial class ViewController : UIViewController
    {
        public ViewController(IntPtr handle) : base(handle)
        {
        }

        public override void ViewDidLoad()
        {
            base.ViewDidLoad();
            this.button.TouchUpInside += ButtonPressed;
        }

        public override void ViewDidLayoutSubviews()
        {
            this.View.SizeToFit();
            this.View.LayoutIfNeeded();
        }

        private void ButtonPressed(object sender, EventArgs e)
        {
            var webView = new UIWebView(new CGRect(0, 30, this.View.Frame.Width, (this.View.Frame.Height - 30)));
            this.View.AddSubview(webView);

            NSMutableUrlRequest request = new NSMutableUrlRequest();
            var obj = new NSString("hosted");
            var key = new NSString("User-Agent");
            NSDictionary dict;

            request.Url = new NSUrl("");
            dict = NSMutableDictionary.FromObjectAndKey(obj, key);
            request.Headers = dict;
            webView.LoadRequest(request);

            Console.WriteLine(request.Headers.ToString());
            webView.ScalesPageToFit = true;
        }

        public override void DidReceiveMemoryWarning()
        {
            base.DidReceiveMemoryWarning();
            // Release any cached data, images, etc that aren't in use.
        }
    }
}
```

=== Xamarin Studio ===
Version 5.10.1 (build 6)
Installation UUID: 400add93-acc3-422b-9b38-25338cf9f299
Runtime: Mono 4.2.1 (explicit/6dd2d0d), GTK+ 2.24.23 (Raleigh theme)
Package version: 402010102

=== Xamarin.Profiler ===
Not Installed

=== Xamarin.Android === (Business Edition)
Android SDK: /Users/castle...
Java SDK: Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

=== Xamarin Android Player ===
Version: 0.6.1
Location: /Applications/Xamarin Android Player.app

=== Xamarin.Mac ===
Version: 2.4.0.109 (Starter Edition)

=== Operating System ===
Darwin Timothys-MacBook-Pro.local 15.2.0 Darwin Kernel Version 15.2.0 Fri Nov 13 19:56:56 PST 2015 root:xnu-3248.20.55~2/RELEASE_X86_64 x86_64

Like explained in comment #3, this is not a bug. What you see in the debugger is only what's included in your .app and, by default, the linker is:

- disabled on simulator builds, so you'll see all properties on any type;
- enabled on device builds, so you'll only see properties that are used by your application (source) or internally by the SDK.
https://bugzilla.xamarin.com/37/37430/bug.html
CC-MAIN-2021-25
refinedweb
960
51.44
Performing code reuse in Windows Phone

Today there are multiple platforms in the Windows universe that you might want to service with your apps. Keeping versions for Windows Phone 7, Windows Phone 8 and Windows 8 is a common scenario. Developing multiple versions of an app, however, should not multiply the work. Here are some tips on how to avoid duplicate work and keep the app maintainable.

Introduction

A typical Windows Phone solution consists of code, resources (resx), XAML, styles and other assets like images or (static) data files. Being able to share the biggest part of it can significantly reduce your workload and improve maintainability. There are different ways to reuse code and the other items in a solution. Here we present an overview with links to background articles.

In a recent, featured discussion on the ND Discussion Board (Thread: Performing code reuse in Windows Phone 8) it's postulated that some elements can be shared and some can't be shared. As we'll see later, the list of sharable elements is long, and with good ideas problems might be overcome. The example for this is the resources, which can be shared if the differences between the platforms are small and the resource namespaces can be kept equal. If readers have solutions for other problematic elements, the community would gratefully honor an article describing the solution.

Preparation

The most important preparation is to divide code from the UI. You might object now that the division of the XAML UI from the code-behind in C# is enough, but in order to create different UIs for the platforms the code should not be tied so strongly to the UI. One solution here is the MVVM pattern (Understanding the Model-View-ViewModel Pattern). What you can gain additionally is so-called 'Blendability', i.e. a rich design experience in Microsoft Blend. There are lots of articles and videos/webcasts on MVVM and several toolkits proposing to simplify its usage.

Ways to reuse code

There are different approaches to sharing the elements of a solution. In MSDN there's a very good overview of some of the techniques that can be used to share code and XAML: Maximize code reuse between Windows Phone 8 and Windows 8. Sharing resources is not covered there. The following list shows the techniques grouped by type of solution element:

Multiple types (works for code, resources, images, assets...)

- Copy/Paste - Copying and pasting code is dangerous and may lead to subsequent bugs.
- Using Add as Link - The most important trick. Visual Studio allows you to link a single source file into multiple projects. This way a change in one project can automatically replicate to another project.

Code

With code there are many options to share complete assemblies, complete source files or parts of them.

- Putting code in a separate assembly and referencing it - The ideal case is Portable Class Libraries (PCL, see below). If not using PCL, it's difficult to share managed code because of differences in the .NET framework base libraries. Unmanaged code, i.e. DLLs written in C/C++, should be simple to share across platforms.
- Using portable class libraries - There are many discussions about Portable Class Libraries (PCL). In my opinion they are one of the most interesting concepts for code reuse. There are even (commercial) tools that allow sharing code for Android and iOS using PCLs.
- Using preprocessor directives - Preprocessor directives are used in conjunction with Copy/Paste (arghh) and 'Add as Link' to circumvent small differences in code.
They are usable in C++ and C# code but not in XAML. MSDN has a nice article. Resources Resources present a special challenge, because there are no simple ways to conditionally switch parts of resource files. If the greater parts of your resources (strings) are identical, you can use the following techniques: - Implementing a datasource in code that gives different strings for the different versions - One advantage is that you can use preprocessor defines here. - Implementing a value converter (as this is code, too, you can use preprocessor defines) XAML XAML files pose comparable problems as resources. However, some parts of them can be put into static resources. You can use static resources for storing strings or data. The most important problems here are different namespaces in Windows and Windows Phone and differences in the controls themselves. Due to the screen size constraints the XAML UIs should be developed individually anyway.(TODO) - Using preprocessor directives - Not usable like in C# code, but there are some articles descibing methods to use conditional interpretation of XAML code: Limiting Factors Beginning with Visual Studio 2013, support for Windows Phone 7 is dropped. So upgrading an elder solution poses problems: Conclusion Reusing code or other solution assets can save a lot of time and improve the stability of multi-platform projects. The details however can be a little tricky for some kinds of solution items. Influencer - Please review Hamish, please review and keep an eye on the bulleted lists (second level) that don't appear as I intended.Thomas influencer (talk) 00:28, 20 February 2014 (EET)
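The sketch referenced above: a deliberately language-agnostic illustration of the per-platform string datasource, written in Python only to show the shape of the idea. In a real Windows Phone project this would be C#, with the switch decided at compile time by preprocessor defines such as WINDOWS_PHONE; the PLATFORM constant and the string table here are purely hypothetical.

    PLATFORM = "WindowsPhone8"  # hypothetical; in C# this would be a compile-time define

    _STRINGS = {
        "AppTitle": {
            "WindowsPhone7": "My App (WP7)",
            "WindowsPhone8": "My App",
            "Windows8": "My App for Windows 8",
        },
    }

    def get_string(key):
        # Return the platform-specific variant of a shared resource string.
        return _STRINGS[key][PLATFORM]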
http://developer.nokia.com/community/wiki/Performing_code_reuse_in_Windows_Phone
CC-MAIN-2014-10
refinedweb
850
63.09
IRC log of xproc on 2010-02-18

Timestamps are in UTC.

15:50:43 [RRSAgent] RRSAgent has joined #xproc
15:50:43 [RRSAgent] logging to
15:50:44 [Zakim] Zakim has joined #xproc
15:50:47 [Norm] Zakim, this will be xproc
15:50:47 [Zakim] ok, Norm; I see XML_PMWG()11:00AM scheduled to start in 10 minutes
15:59:38 [Zakim] XML_PMWG()11:00AM has now started
15:59:45 [Zakim] +Norm
16:00:15 [Zakim] +[ArborText]
16:03:01 [htt] htt has joined #xproc
16:03:19 [htt] zakim, please call ht-781
16:03:24 [Zakim] ok, htt; the call is being made
16:03:24 [Zakim] +Ht
16:03:54 [alexmilowski] alexmilowski has joined #xproc
16:04:43 [Zakim] +Alex_Milows
16:05:20 [htt] zakim, Ht is me
16:05:20 [Zakim] +htt; got it
16:07:12 [Norm] Norm has joined #xproc
16:07:21 [Norm] Zakim, who's here?
16:07:28 [Zakim] On the phone I see Norm, PGrosso, htt, Alex_Milows
16:07:32 [Zakim] On IRC I see Norm, alexmilowski, htt, Zakim, RRSAgent, PGrosso, ht
16:07:58 [Norm] Present: Norm, Alex, Henry, Paul
16:08:00 [Norm] Regrets: Vojtech
16:08:14 [Norm] Topic: Accept this agenda?
16:08:14 [Norm] ->
16:08:45 [Norm] Henry: I won't be ready to talk about the default processing model before April
16:09:07 [Norm] Accepted
16:09:12 [Norm] Topic: Accept minutes from the previous meeting?
16:09:12 [Norm] ->
16:09:17 [Norm] Accepted.
16:09:22 [Norm] Topic: Next meeting: telcon, 25 Feb 2010?
16:09:36 [Norm] No regrets given.
16:09:55 [Norm] Topic: 020 wrapper-prefix and wrapper-namespace on p:data
16:11:06 [Norm] Norm summarizes.
16:11:53 [Norm] Norm: My temptation is to leave it. Less spec churn and no actual harm.
16:12:29 [PGrosso] fwiw, Mohamed also gave regrets.
16:12:41 [Norm] Regrets: Vojtech, Mohamed
16:13:08 [Norm] Henry: It's arguably the case that we were mistaken about why we did this and it's unlikely to be used, but the example is correct and it isn't wrong to leave it.
16:14:14 [Norm] Norm: Is anyone in favor of backing this change out?
16:14:21 [Norm] None heard.
16:14:27 [Norm] Proposal: Leave the status quo.
16:14:34 [Norm] Accepted.
16:14:44 [Norm] Topic: How to get to PR
16:15:45 [Norm]
16:18:34 [Norm] Some discussion about to whom the "shoulds" are directed in 5.14.
16:20:07 [Norm] NOTE TO SCRIBE: CLEAN UP THE ORDER HERE
16:20:26 [Norm] Norm summarizes the state of play wrt coverage and implementations.
16:23:08 [Norm] Henry: We need two passes in every row to go through without a hitch. We can go forward w/o but each case has to be justified.
16:23:45 [Norm]
16:25:25 [Norm] Henry: Remove this test. It's not a problem with XProc, it's a problem with underlying RNG implementations.
16:26:45 [Norm] Some discussion of other tests that could be written.
16:28:40 [Norm] Norm: In short: we're very close.
16:29:12 [Norm] Norm: Can anyone think of other roadblocks that I've overlooked.
16:29:13 [Norm] None heard.
16:29:30 [Norm] Topic: Any other business?
16:29:36 [Norm] None heard.
16:29:41 [Norm] Adjourned.
16:29:56 [Zakim] -Alex_Milows
16:29:59 [Zakim] -htt
16:30:00 [Zakim] -Norm
16:30:00 [Zakim] -PGrosso
16:30:02 [Zakim] XML_PMWG()11:00AM has ended
16:30:03 [Zakim] Attendees were Norm, PGrosso, Alex_Milows, htt
16:30:05 [Norm] RRSAgent, set logs world visible
16:30:10 [Norm] RRSAgent, draft minutes
16:30:10 [RRSAgent] I have made the request to generate Norm
16:30:22 [htt] We were all laughing because you had lost audio, that's why 'none heard' :-)
16:30:30 [htt] But nothing important was missed.
16:31:17 [Norm] htt, sorry!
16:57:54 [Norm_] Norm_ has joined #xproc
17:33:13 [PGrosso] PGrosso has left #xproc
18:31:13 [Zakim] Zakim has left #xproc
18:42:58 [htt] RRSAgent, bye
18:42:58 [RRSAgent] I see no action items
http://www.w3.org/2010/02/18-xproc-irc
CC-MAIN-2014-52
refinedweb
718
79.5
Opened 10 years ago
Last modified 5 years ago
(44)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
Changed 10 years ago by: Solution which tries to stay with old API
Changed 10 years ago by: Some changes in the API
comment:3 Changed 10 years ago by: I just added two patches which seem to fix this problem: patch_14087_manage.diff includes ... does not include these if-statements
comment:4 Changed 9 years ago by
Changed 9 years ago
Changed 9 years ago by: find_management_module with support for namespaced packages, with tests.
comment:6 Changed 9 years ago by
comment:7 Changed 9 years ago by
Changed 9 years ago by
Changed 9 years ago by
Changed 9 years ago by
Changed 9 years ago by: a dirty fix for pth
Changed 9 years ago by: Zip-archived eggs are broken
comment:11 Changed 9 years ago by: I tried to improve, but could not get it to work with zip-archived eggs.
comment:12 Changed 9 years ago by
Changed 8 years ago by: fixed for zip-archived eggs
Changed 8 years ago by: improved fix for zip-archived eggs
comment:13 Changed 8 years ago by
Changed 8 years ago by: if management has already been imported and is not a package
Changed 8 years ago by: update patch for revision 17517
comment:14 Changed 8 years ago by: update patch for revision 17517
Changed 8 years ago by: update patch for revision 17517
comment:15 Changed 8 years ago by: namespace_package_pth.2.diff will not look up the importer in sys.path_importer_cache, since Django does not support PEP 302 importers. Should finding commands in modules loaded by a PEP 302 importer be in another ticket?
Changed 8 years ago by: add test from zip_egg_fixed.3.diff
comment:16 follow-up: 17 Changed 8 years ago by
Changed 8 years ago by: I thought there was one. They are #8280 and #13587. Adding support for PEP 302 importers should also help improve the error message of load_backend, which is mentioned in #8238.
Changed 8 years ago by: update patch for revision 17777
Changed 8 years ago by: update patch for revision 17777
comment:18 Changed 8 years ago by: update namespace_package_pth and zip_egg_fixed for revision 17777.
comment:19 Changed 8 years ago by
comment:20 Changed 8 years ago by
Changed 8 years ago by: remove trailing whitespaces
comment:21 Changed 8 years ago by
comment:22 Changed 8 years ago by: my pull request for the namespace_package_pth patch:
comment:23 Changed 8 years ago by: My patch for Python 1.4.1:
comment:24 Changed 7 years ago by
comment:25 Changed 7 years ago by: bhuztez' pull request certainly addresses the problem I am seeing; any way we can move this forward?
comment:26 Changed 7 years ago by
comment:27 Changed 7 years ago by
comment:29 Changed 7 years ago by: ...would return all found modules. Maybe it can be simulated by calling find_module for each element of a list of paths and aggregating the results into a tuple. I will try this out and add my results to this ticket.
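The aggregation idea in comment:29 can be sketched with the standard-library machinery of that era. imp.find_module is the real (since-deprecated) stdlib call under discussion; find_module_everywhere is a hypothetical helper name used only for illustration.

    import imp

    def find_module_everywhere(name, paths):
        # Emulate a "find all" by calling imp.find_module once per path
        # entry and collecting every hit, instead of stopping at the first.
        found = []
        for path in paths:
            try:
                found.append(imp.find_module(name, [path]))
            except ImportError:
                continue
        return tuple(found)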
https://code.djangoproject.com/ticket/14087
CC-MAIN-2020-24
refinedweb
521
59.47
Opened 3 years ago
Closed 3 years ago
#17645 closed Bug (invalid)
runserver: not serving static for localhost

Description

runserver will only serve static files if STATIC_URL is relative. This makes sense if STATIC_URL includes a domain pointing to another server. However, it denies the possibility of including a localhost domain (as is needed by my current project). I've currently worked around this by modifying django/contrib/staticfiles/handlers.py:

    def _should_handle(self, path):
        return True

Sorry, I haven't had time to consider any side effects of this, but it's working for me at the moment.

Change History (2)

comment:1 Changed 3 years ago by shadow
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 3 years ago by shadow
- Resolution set to invalid
- Status changed from new to closed

Actually, now that I think about it... there won't be an easy way of knowing whether the domain is pointing to 127.0.0.1 or not. So this probably can't be fixed reliably. Never mind :]

Sorry, that fix should be:
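As an aside, the reporter's workaround can be applied without editing Django's installed sources by subclassing the handler. This is a minimal sketch assuming the StaticFilesHandler internals of that era; the subclass name is hypothetical.

    from django.contrib.staticfiles.handlers import StaticFilesHandler

    class ServeEverythingHandler(StaticFilesHandler):
        def _should_handle(self, path):
            # Upstream returns True only for paths under a relative
            # STATIC_URL prefix; always returning True reproduces the
            # ticket's workaround, with the same unexamined side effects.
            return True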
https://code.djangoproject.com/ticket/17645
CC-MAIN-2015-32
refinedweb
181
54.63
I'm a somewhat experienced programmer (Junior CS major in college) and when I was younger I used to be EXTREMELY into game development. However, my engine of choice was GameMaker, and as we all know that's a bit limited. After becoming a better programmer, I decided I wanted to come back and make some simple 2D games using a real language. However, I haven't been able to find a good 2D engine that lets me develop in my own environment; almost all of them require me to use their own special scripting language or require me to program most of the engine myself. What I would like in an engine is:
- Programming is done in my own environment (emacs, vim) in an established language (like C++ or Python)
- I can quickly program objects (engine handles things like collision, creation and destruction of objects and scenes, etc)
- I can quickly place objects and design levels (I would really like to be able to use Tiled to design my levels)
- It handles input and is easy to integrate into objects
- Built-in collision detection
Bonus points for:
- Physics
- Android/iOS/HTML5 support
The closest I have found to something I would like is melonJS, however I really don't like programming in JavaScript and it doesn't compile to native code (not that I would expect it to). In an ideal world, I would like to make a simple pong game with something like this:

import engine
import sprites
import scenes

class PlayerPaddle(engine.GameObject):
    sprite = sprites.PaddleSprite

    def __init__(self, game):
        super(PlayerPaddle, self).__init__()
        self.game = game
        game.add_object(self, 'PlayerPaddle')  # could be handled by parent
        self.x = 0
        self.y = 0

    def update(self):
        if self.game.input.keyboard.up:
            self.y -= 2
        if self.game.input.keyboard.down:
            self.y += 2
        super(PlayerPaddle, self).update()

    def draw(self):
        engine.draw_sprite(self.x, self.y, self.sprite)

class ComputerPaddle(engine.GameObject):
    sprite = sprites.PaddleSprite

    def __init__(self, game):
        super(ComputerPaddle, self).__init__()
        self.game = game
        game.add_object(self, 'ComputerPaddle')  # could be handled by parent
        self.x = game.current_scene.width - 32
        self.y = 0

    def update(self):
        # simple AI: track the ball's vertical position
        ball = self.game.objects['ball'][0]
        if ball.y > self.y:
            self.y += 2
        elif ball.y < self.y:
            self.y -= 2
        super(ComputerPaddle, self).update()

    def draw(self):
        engine.draw_sprite(self.x, self.y, self.sprite)

class Ball(engine.GameObject):
    sprite = sprites.BallSprite

    def __init__(self, game):
        super(Ball, self).__init__()
        self.game = game
        self.hspeed = 5  # movement itself would be applied by the engine
        self.vspeed = 5
        self.x = game.current_scene.width / 2
        self.y = game.current_scene.height / 2
        game.add_object(self, 'ball')  # could be handled by parent

    def update(self):
        # bounce off the paddles horizontally and the walls vertically
        if (engine.collides(self, self.game.objects['PlayerPaddle']) or
                engine.collides(self, self.game.objects['ComputerPaddle'])):
            self.hspeed = -self.hspeed
        if self.y < 0 or self.y > self.game.current_scene.height:
            self.vspeed = -self.vspeed
        super(Ball, self).update()

    def draw(self):
        engine.draw_sprite(self.x, self.y, self.sprite)

game = engine.Game()
game.set_scene(scenes.scenes['pong'])
while game.is_running:
    game.update()
    game.sleep_off_frame()

I REALLY rushed through this, so the Python may still be rough (and I'm not spectacular at it), but I think you get the general idea. 90% of the code is defining the behaviour of the objects, and the engine itself handles things like drawing the sprites, view locations, collisions, input, object movement, etc.

What I have tried so far:
- GameMaker
  Pros: Easy to get a basic game down, handles resources for me
  Cons: I have to program in their limited language, not open source, stores projects in a way that I can only really use GameMaker with it, can't develop on Linux
- MelonJS
  Pros: Does about what I want
  Cons: I have to program in JavaScript which I really don't like, web only
- Roll my own with SFML (I am actually decently far into my own game engine, but I want to actually make games now)
  Pros: Does exactly what I want
  Cons: Extremely hard, buggy, takes forever, I have to write my own algorithms for everything and develop my own tools, unfeasible
- Unity
  Pros: Established community, good engine, compiles for everything I could want
  Cons: Can't develop on Linux, basic 2D support, I have to learn how to use the entire program

I know this is a really really common question, but I still haven't found a good engine for what I want, even after reading FAQs, etc. I would really appreciate some help.
http://www.gamedev.net/user/210015-peppit/?tab=topics
CC-MAIN-2014-23
refinedweb
749
55.54
See also: IRC log 1, IRC log 2, IRC log 3 Dublin, June 2007. From left to right: José Manuel Cantera Fonseca, Kevin Smith, Rhys Lewis, Stéphane Boyera, Dave Raggett, Giles Payne, Rotan Hanrahan, Sailesh Sathish, and Kangchan Lee dave presents the charter based on his talk KC: clearly big problem concerning bridging between devices. big problems 2 systems completely un-interoperable Sailesh: problem is the interconnecting world below the device layer. no consensus yet. semantic interoperability is the ideal goal, but we're not here yet Dave: business drivers are not yet clear. KC: proprietary solutions: people are afraid to lose business share if they open their system Giles: same in japan Jose: Stand. group in Korea ? KC: yes, home network appliance std group in Korea but after one year, all the members left Scribe: Sailesh Dave: continuing with presentation given at xtech Jose: binding: putting xml tag means binding to service? dave: element becomes proxy for service ... images being resource Rhys presents. Rhys: Groups in OMA, W3C interested in properties associated with devices. ... good idea to have vocab ... not achieved in cc/pp ... simple one in UAProf ... good idea to use tools. ... Protege is leading Open Source tool for ontologies. ... (Demonstrates on screen) ... started using UAProf (as it exists). Rhys demonstrates samples in Protege: currently showing PixelCount Considering relationships between properties, and how change in value might affect other values. Discussing how ontology would represent relationships between things held therein. This includes representation of unit conversions, and derivative relationships (e.g. aspect ratio is derived from width and height). Simple mechanisms within ontology to handle such relationships. APIs can either use these directly, or use convenience functions for the common usage. For example, default units. Rhys: possible we could be looking at multiple ontologies. One for device properties, one for application behaviours, etc. Dave: identify different classes of users of ontologies, so that demands of different users don't impede each other's work. (Moves on to instances. Actual ontologies for actual devices.) Note that there are "Common Names", names as used in ontology may be different from names as used in APIs/vocabs etc. (Moves on to DisplayHeight example.) You have to have a class for each unit of measure. E.g. millimetres. Note: there's a relationship between units and values. If the units change, then the value needs to change too.
Guidance from SemWeb says verb should be involved, but new thinking disagrees. ... Can cause confusion. Looks like Java methods isXYZ() and thus gives the wrong impression. ... So we no longer prepend a "has". These are not methods, these are relationships. (Moves on to focus on Device class.) Dave: getting ontology out to support content adaptation is fine. ... requirements for DCCI second. ... That could reference this work. Rhys: DCCI stuff is just more properties. Need to discuss interface requirements. Dave: why is battery in there? Rhys: Wanted to show how to model dynamic properties. Dave: so you envision sending data to the server? Rhys: Yes. Sailesh: Can we use DCCI on the client, serialize data, send to server, then use that data there? Rhys: Yes, though also OMA DCAP can help here. Sailesh: example, speech recognition could be server side. When in high-bandwidth context, I have a better speech recognition (server side) because it can deal with better resolution data. (Dave explains - the ontology Rhys presented is aimed at properties that are accessible by the server for adaptation purposes, but in future we will also be interested in supporting discovery/binding which will require new kinds of information) Rhys: event representing change in bandwidth can be sent to flag shift in capability. ... Can register for events with the DPE server. Rotan: So, Delivery Context was just for context affecting delivery of *content* but now we are considering effects of context on application itself. ... Should we expand defn of DC or do we introduce a new term. Sailesh: propose we change name of DCCI. (lol) Kevin: how do we determine if a vocabulary is consistent with the ontology? ... e.g., how would WURFL people say that their implementation conforms? Rhys: more likely that WURFL would say they comply with an API. Dave: need to discuss what the exit criteria of this would be. Rotan: could be sufficient for vocab groups to reference ontology in their specs. Dave: normative references from spec groups, and commercial support. Rhys: three industry groups normatively referencing. DDR Core Vocab, OMA-BT-DCAP and DISelect (UWA). Giles: PUCC local task force to look at service metadata. ...Potential start in September 07. ...How to work with W3C (formats, admin, communication) ...Precedent with W3C character set investigations (Asian character sets), was a good model Dave: May be problem with different patent policies. If a spec wants to become part of a W3C rec, we need to follow our patent policy. Giles: PUCC amending patent policy to facilitate interworking with other SDOs ...(Giles takes action to follow this up) Rhys: NB only those parts submitted to W3C need comply. Giles: And there is an overlap of membership, also the PUCC universities are usually patent-free. Giles: Looking at establishing high level goals (of the interworking) ...e.g. compatability between specifications. ...Need semantic richness, so that interfaces can be created dynamically. Dave: Analog to academic work in this area. Giles: This was captured in the workshop. Also state transitions are of interest to PUCC Dave: eg. agent control, responding to different events/commands Giles: Looking at events relevant to a given state, so a dynamic interface would be relevant to state Dave: Levels of application support (from static to rich dynamic applications, and sophisticated state transition behaviours required for agents) ...Access control and security also relevant to PUCC Accessibility? Giles: No, that's a different level. 
Dave: The richness of description of services and interfaces, just naming or more? Giles: Also note Rhys' ontology work. Dave: When will it be published? Rhys: It's public now, and potentially publishable. Rotan: The DD process has been launched this week, OMA liason statement sent. ...This will bring in suggested properties from OMA [Rotan describes the process] Rhys: So first working draft before next F2F (November), posssibly Q3 depending on response. Dave: Giles, what do you expect from W3C? Giles: As long as there is some communication/feedback loop that the work is going through normal W3C process (guidance, allowing comments etc.) then that is fine. One more time... Giles: Note about nested primitive devices, will report that back to PUCC as regards possibility of using it. Dave: Do we see W3C developing ontologies for printers etc. based on PUCC work? Rotan: In DDWG we don't build the vocab, it's a public process. The rec is a gathering of sources. Dave: We would need to publish a roadmap to assist recruitment of input. Rotan: (Notes there could be a problem with the current form submission mechanism, as per DDWG experience) Dave: As per roadmap, we could wait until PUCC publishes specs. Giles: September to end of year, possibly. Dave: Want to say 'this is what we are doing, here's an example' to garner company interest. ...To what extent is PUCC looking at security? Giles: Noted as question Giles: Marie-Claire said she would be in Japan in July, good opportunity to meet with Kaz. Dave: We can review PUCC specs when they are published. ScribeNick: Giles Rotan: If there is anything we can achieve in the short term - then we should do it ... Otherwise people will think we ignored the output of the Workshop Dave: How about the Gaps and Feedback to the UI Groups Rhys: ... Jose interested in the User Interface stuff ... the page flows stuff looks a good target for getting something quickly ... a lot of people have implementations Rotan: Late binding and loose coupling related to the discussion about layers this morning Dave: Had the question "What do we mean by late binding?" Rotan: Analogy with hyphenation - guidance within the document needed for the late binding Dave: "role" is a good example and easily doable Rhys: How do you want to articulate back that we've taken on board these ideas Dave: Put something in the Workshop report Rotan: The Workshop Report should be what happened in the Workshop - not what is coming after it Rhys: Can include a section on intended actions at the end to say what will do ... based on the output from the Workshop Dave: Gap analysis -see whats missing from DIAL, XForm or other layout specs Rotan: Need to identify the technologies that support adaptation (existing technologies) Rhys: OK but we need to say that we confirmed our suspicions there Dave: There are bunch of ideas CSS, XSLT FO etc. we need to consider Kevin: Could have a primer 2.0 Rotan: Output will be a document to say these are the technologies that are out there ... that we think are good and then say why we think they're good Rhys: Everybody agreed that a declarative semantic approach was good Dave: Need to find ways of addressing composability Rhys: Do we need a requirements document? Dave: W3C usually gathers use cases and then make a requirements document based on that Rotan: Use cases could be gathered in a Wiki Kangchan: What about the eco-system? 
Dave: Yes that is very important Rhys: Might be the same as the DD eco-system - could cut and paste Dave: Often requirements documents gather dust Rotan: If you use a Wiki - its faster since you don't have to worry about it being a formal document Dave: E-mails on the public mailing list are often ignored - nobody takes responsibility Rotan: In DD we also blog - you can use it to set up debates Dave: W3C don't want to support 2 different Wikis ... don't want to use Moin Moin ... Stephane can you help persuade them to let us use Media Wiki? Stephane: Take an action to discuss this RH ... What is the home page for it? Rhys: Dan Conolly also supports Semantic Media Wiki ... lets you have RDF Stephane: Could look at trying it out at Sofia. ... I'll see what I can do TimBL mentions Wikipedia recently - He says ... "Web 2.0 community Web sites, eBay, and Flickr are possible because the Web standards, in turn, were widely implemented in an interoperable way, before those innovations. The same for the wikis, like Wikipedia, and blogs, and so on." Giles_ ACTION: Stephane to report back on possibility of using Semantic Media Wiki by end of June [recorded in] trackbot-ng Created ACTION-7 - Report back on possibility of using Semantic Media Wiki by end of June [on Stéphane Boyera - due 2007-06-14]. Rhys: Need section of report that deals with overall outcome ... and fit in these items (list of output from workshop) Rhys: Send to the public list a list of conclusions from the workshop Giles_ ACTION: Dave to prepare an e-mail as an interim summary of the workshop [recorded in] trackbot-ng Created ACTION-8 - Prepare an e-mail as an interim summary of the workshop [on Dave Raggett - due 2007-06-14]. Rhys: Conclusions: General agreement that declarative approaches are a good idea Importance of semantics Importance of high level abstractions Set of technologies listed in the UWA charter is more or less appropriate State machines can be used to represent page flows Application adaptation seems to be an appropriate approach Dave: Being able to reduce the amount of server side scripting (by capturing the end-to-end semantics) Composability is important for mash-ups Rhys: Didn't do much on ontology Dave: Got a clearer idea of what is need for rich service metadata Agree the need for gap analysis Rhys: Importance of not re-inventing the wheel Rhys by using existing standards and technologies and identifying the gaps Dave: Full report on the workshop due end of July (?) Dave: Tomorrow - where next for DIAL? (end of minuting for day 1) slides for agenda topics: agenda topics again: ScribeNick: Rotan Krcsmith Kevin: There are issues in the doc to resolve, which can then be published. Rhys: but that's just editorial. Might not need republishing. ... Might give people impression there's something really new to look for. Kevin: OK. ... So let's just look at existing issues. ... In this session, look at future of DIAL. Dave: Expectations of when we intend to public new version. Kevin: OK, look at issues and matters re implementations. ... Re the primer... ... MIME Type for DIAL. Look at CDF. ... What's the proper way to register. ... application/xml+dial ? Rotan: might be an issue for future evolution of DIAL. What would be the future MIME types? Rhys: Version finding in TAG. 3 volumes. 200 pages. Not sure if it delivers a crisp answer :) Kevin: How do they do in in HTML? Rhys: In v5 might not reveal anything at all. Dave: Under debate/ Rotan: so when do we have to resolve this issue? 
RH ScribeNick: RH ScribeNick: Rotan Rhys: May have to leave this issue until we see what happens with XHTML2. Kevin: we should add these issues to Trackbot Rhys: Need to retest trackbot. Kevin: I can take action to add remaining actions to trackbot. ... Issue with schemas. Rhys: Could be challenges. Need to have extension points. Kevin: Action from us to XHTML2 that we need extension point? Rhys: Don't think we need to put anything in for this version of DIAL. Kevin: substitution may only work for certain data types. Rhys: need to structure in right way to support substitution. Rotan: seems like we're not completely sure. Need to check. Rhys: XForms is using schema. Kevin: do we have to produce more than one schema? Rhys: probably ... probably have some things to offer here. Dave: take action to coordinate with SP. scribe ACTION: Kevin to coordinate with Steven Pemberton on XHTML2, and schema related issues in two weeks. [recorded in] trackbot-ng Sorry, couldn't find user - Kevin scribe ACTION: Krcsmith to coordinate with Steven Pemberton on XHTML2, and schema related issues in two weeks. [recorded in] trackbot-ng Sorry, couldn't find user - Krcsmith scribe ACTION: Smith to coordinate with Steven Pemberton on XHTML2, and schema related issues in two weeks. [recorded in] trackbot-ng Sorry, couldn't find user - Smith (Group struggles with action recording tools.) Dave: will see if action can be recorded via alternative interface. scribe ACTION: Kevin Smith to coordinate with Steven Pemberton on XHTML2, and schema related issues in two weeks. [recorded in] trackbot-ng Sorry, couldn't find user - Kevin stef trackbot-ng, help ? trackbot, status trackbot-ng, status Kevin: issue with validation Rhys: DISelect designed to be validated in its source form. Kevin: should we validate all possible results. Rotan: That's "Theorem Proving"! ... Suggest we avoid Complexity problems. Test generated material at run-time perhaps? stef tracbot, status ? Kevin: DIAL doc must be well-formed XML.... (reads from conformance part of spec). stef trackbot, status ? stef trackbot-ng, status ? trackbot-ng, status stef trackbot-ng, help ? stef trackbot-ng, add kevin stef trackbot-ng, add kevin uwa Rotan: suggest we be consistent in conformance spec about using "input doc" and "generated doc". trackbot-ng, init trackbot-ng Reloading Tracker config trackbot-ng, status stef stef @@ describe how usernames work trackbot-ng Tracking ISSUEs and ACTIONs from scribe ACTION: Kevin to coordinate with Steven Pemberton on XHTML2, and schema related issues in two weeks. [recorded in] trackbot-ng Created ACTION-9 - Coordinate with Steven Pemberton on XHTML2, and schema related issues in two weeks. [on Kevin Smith - due 2007-06-15]. Krcsmith ACTION: Kevin ref- DIAL spec issue-validation, add text 'A DIAL document must be valid before and after processing' to the spec. [recorded in] trackbot-ng Created ACTION-10 - Ref- DIAL spec issue-validation, add text \'A DIAL document must be valid before and after processing\' to the spec. [on Kevin Smith - due 2007-06-15]. Krcsmith ACTION: Kevin , ref- DIAL spec issue-validation, use consistent text throughout for source infoset and result infoset (as per DISelect) [recorded in] trackbot-ng Created ACTION-11 - , ref- DIAL spec issue-validation, use consistent text throughout for source infoset and result infoset (as per DISelect) [on Kevin Smith - due 2007-06-15]. Kevin: in XHTML2 they allow entities to be used within an XHTML2 documents. ... Schema doesn't allow them. 
Dave: We could use a DTD, or just say they're not allowed. ... What do customers want? Rotan: they want DTDs. Rhys: They need DTDs to load namespaces. Kangchan: also, power of client may be low, so validation on client could be expensive. Rhys: in future may process DIAL on client, but no intent right now. Rotan: do we specify where validation failure events go? Rhys: no, we only specify the nature of the event, not the post-DIAL processing. Kevin: in text, it says DIAL doc must be valid before and after processing. Rhys: there's the input doc (must be valid), there's the resulting doc (can't say anything about it's validity), and may already have stated in DISelect that the result of applying DISelect is a valid doc. (Discussing validity re material that DISelect might include from external sources.) Dave: After processing DISelect part of DIAL you get XHTML2+XForms (+ whatever extensions you may have added?) (Hi-speed three-way discussion on DIAL/DISelect processing ensues. Summary will follow...) DIAL Schema must be capable of extension. Kevin: will explain in text that DIAL doc must be valid against schema before processing. Rhys: could say the source infoset. ... encourage you to look at Section 3 (processing model) of DISelect text. Possibly even lift some of that text. scribe ACTION: Kevin to mention the intent to ensure that the DIAL schema is extensible in two weeks. [recorded in] trackbot-ng Created ACTION-12 - Mention the intent to ensure that the DIAL schema is extensible in two weeks. [on Kevin Smith - due 2007-06-15]. (Kevin will close off actions from tracker.) Kevin: issue - XHTML2 Conformance ... Thinks in XHTML2 that are not in DIAL. ... If XHTML2 used image, ruby etc... Rhys: Take position that DIAL is a superset of XHTML 2. Rotan: so you suggest we just say it's a superset rather than the copy/paste we did before? Rhys: HTML2 is a moving target. ... Suggest we state intention to have DIAL as a superset of XHTML 2. Stephane: Can have a DIAL doc that doesn't support ruby etc. Rhys: But would certainly like it to support these. Rhys Proposed resolution: We should state that DIAL is a proper superset of XHTML 2 Krcsmith +1 RH +1 Rhys Hearing no objections... Rhys RESOLUTION: DIAL specification should state that DIAL is a true superset of XHML 2 scribe ACTION: Kevin to include text stating DIAL specification should state that DIAL is a true superset of XHML 2, as per resolution, in two weeks. [recorded in] trackbot-ng Created ACTION-13 - Include text stating DIAL specification should state that DIAL is a true superset of XHML 2, as per resolution, in two weeks. [on Kevin Smith - due 2007-06-15]. (issue-redefinition can be removed, by Kevin) Rhys: CSS validation is out of scope, so issue-CSSSupport is not something we can address. Dave: perhaps policies could help here. Krcsmith Dave: may not be best practice to link to stylesheet from within DIAL doc, with or withour DISelect. Rhys: inclusion of stylesheet, later, can take care of adaptation of stylesheet. Rotan: assuming one communicates the delivery context. (issue-mediaqueries?) Kevin: media queries issue is addressed by above RH Issue is that any attribute can be the result of an AVT (computation), as permitted in DISelect. RH Does this affect validation? scribe ACTION: Kevin to clarify wording to address AVT issue, in two weeks. [recorded in] trackbot-ng Created ACTION-14 - Clarify wording to address AVT issue, in two weeks. [on Kevin Smith - due 2007-06-15]. 
Rotan: note that DISelect Basic does not support AVTs (by design) as per Rhys: in case where AVTs are supported, you are dealing with GIGO. ... So cannot state much about output. ... DIAL can require that output is valid DIAL. ... even though the result of processing would not contain DISelect markup. Rotan: we previously avoided the issue of delivering the whole original DIAL doc to the device (to allow reprocessing) because of the complex use cases that would enuse. Kevin: So AVT 2 issue is also resolved. Issue DISelect Full/Basic? Rhys: Full. RESOLUTION: DIAL will reference DISelect Full. Issue XHTML 2 legacy Dave: If XHTML 2 contains legacy material, and we think it a bad idea to use that material, then it's merely an issue of Best Practices. ... We can document that separately. Issue selidname (Rhys explains background to selidname) Krcsmith Kevin: we have comments around inclusion. ... actual inclusion mechanism for XHTML 2 is still under discussion. Rhys: src attribute is an inclusion mechanism. ... issue was about XInclude etc. ... we've said we'd use the XHTML 2 inclusion mechanism. ... issue about including fragments, which don't have roots. ... suggestion to remove second paragraph of 2.10 and the ed note. scribe ACTION: Kevin to remove para 2 and ed note from section 2.10 "Content Inclusion" of DIAL in two weeks. [recorded in] trackbot-ng Created ACTION-15 - Remove para 2 and ed note from section 2.10 \"Content Inclusion\" of DIAL in two weeks. [on Kevin Smith - due 2007-06-15]. Krcsmith Leave 3.2 Kevin: should identify in DIAL doc what features of XHTML (etc.) that we are explicitly using in support of adaptation (and why). Dave: OK, let's look at new material. ... XBL and SVG and WICD. ... Trying to take advantage of best state of the art of compound docs./ Rhys: was disagreement about modularisation. ... CDF and XHTML. ... Need to track developments in this area. (Compound Docs by Ref framework could be considered.) (Discussion on how CDF works, eg WICD incorporating SVG.) Rhys: next phase probably too early. Need to get DIAL 1 and implementations. Dave: proposal to include XBL (later). ... support contextually dependant binding of controls. Rhys: XBL useful for binding XForms widgets, esp those in SVG. ... XBL about helping binding to rich controls. ... surfaces into authoring space, instead of being hidden. Dave: current approaches messy, and difficult to maintain. ... CSS3 UI module discusses relationship between input fields and how they appear, for example. Kevin: would like to see what we want, the drivers, requirements, not how they could be achieved. Dave: could use wiki to start discussions on roadmap for DIALv2 Kevin: and that gives traceability for our decisions. ... Input from José and Morfeo useful here. Dave: need discussion around what it means to support rich interaction. Rhys: workshop report feeds into this. Dave: also relationship to SCXML. ... application flow. Rhys: pageflow was the term we used, but it's more than that. ... potentially within the page as well. ... essentially application flow. Dave: wiki will help to explain to others why these are important. Kangchan: interested in using delivery context to reduce the interaction between server and client. ... using flow on the client. ... intent-based services, page flow based on intent. Dave: if you have idea of state-machine driven by events, then identify intents with those events. Rhys: may be abstract events. Like navigation. ... and concrete events (click). 
Kangchan: no meaning to associate with event if you just go directly to the page. Dave: can used state-machine to control flow. Rhys: page generated in response to request. Rotan: seeing trend away from pages. URL of entry page, and then Web2.0 DOM morphing events modify fragments. No more "pages", just morphing presentations. Where are the URLs in here? (Discussion of relationship between pages, URLs, state-machines, representation of state, and reconstruction of state...) Sailesh: decomposition of page is an issue. Dave: we're too focussed on concept of a page. Need to move to state. ... New issue, where DIAL fits in the family of XML langauges. Rhys Scribe: Rhys After lunch, Stephane reported on Semantic MediaWiki. Was discussed in the sys team last week. Seeing some people requesting MediaWiki. Will look at migration. Will take time. Global sys team says its ok for us to go ahead and install an instance. Won't be supported by sys team. Group thanks Stephane scribe ACTION: Stephane to investigate SemanticMediaWiki by end of June [recorded in] trackbot-ng Sorry, couldn't find user - Stephane scribe ACTION: Stef to investigate SemanticMediaWiki by end of June [recorded in] trackbot-ng Sorry, couldn't find user - Stef scribe ACTION: stef to investigate SemanticMediaWiki by end of June [recorded in] trackbot-ng Sorry, couldn't find user - stef scribe ACTION: sboyera to investigate SemanticMediaWiki by end of June [recorded in] trackbot-ng Created ACTION-16 - Investigate SemanticMediaWiki by end of June [on Stéphane Boyera - due 2007-06-15]. Dave shows the chart of deliverables and timescales. Indicates what we said we would do DR: We have new proposals from CSS for layout. Also Grid layout from Microsoft. ... Need requirements, editor, DR: that is the link to the latest CSS 3 layout module ... Need a leader for the work leading to the layout ... Would like one person to lead the work on this. RH, RL and JCF have all indicated interest in working on this DR: Need to look at existing materials, at what is going on in CSS, and at the requirements for richer layout and support for composition ... Seems like the kind of thing that could proceed on the Wiki RH: Important to distinguish our individual opinions from group opinions DR: Layout is distinct from just applying CSS. Often, implementations have used a combination of scripting, markup and styling SS: Does the layout policy include the interaction policy? DR: Trying to keep the various kinds of ontology separate SS: Policies can apply to things at all levels. The layout policy seems to refer to just a single page. Need policies to refer to the page flow too KS: Layout constraints are policies SS: Need to address behaviour too. DR: Need to understand the boundaries of this work too. RL: Need to have explicit layout representations too. Continuum from explicit authored layout to layout constraints DR: Like TeX springs for constraints ... Could define new XML layout rules. Need to be able to explain why CSS is insufficient. ... Have to be able to support implementations, of course RESOLUTION: Form a task force on policy-based layout. Initial members, RH, RL and possibly JCF. Keith hey! DR: Use Wiki to collect reviews of existing materials, use cases and requirements Hi Keith, we are working on policy based layout right now and will switch to DCCI in a while DR: Question about the amount of time to use on calls Keith Hello all ! Kangchan KS, the minutes is forbidden. 
stef done Krcsmith (Rhys and Dave discuss layout policies vs behaviour policies) Summary is that layout needs to be concerned with dynamics and time explicitly. DR: Points out that CSS approach causes issues for editors RH: Lots of libraries give useful function and perhaps they could be categorized DR: Need to keep separation of the concrete and abstract UI RH: Suggests contact with ecmascript committee DR: Think that they have a different focus RH: Could send a note to their list. ... Libraries drive layout using script behaviour. DR: Should try and capture the kinds of layout that people want to achieve. Both through markup and through script to change layout dynamically RH: Could be that because this is a new use for Ajax and script, that we could talk to the people developing the libraries to see what is happening Behaviour in the discussion on layout includes movement of things within containers as a change of layout DR: When do we expect to need time on the weekly teleconference scribe ACTION: Dave to invite JCF to the layout taskforce [recorded in] trackbot-ng Created ACTION-17 - Invite JCF to the layout taskforce [on Dave Raggett - due 2007-06-15]. scribe ACTION: Rotan to report back on the kinds of scripted components for layout that are being used in the Ajax world [recorded in] trackbot-ng Created ACTION-18 - Report back on the kinds of scripted components for layout that are being used in the Ajax world [on Rotan Hanrahan - due 2007-06-15]. scribe ACTION: Rhys to review the latest draft of the CSS layout module [recorded in] trackbot-ng Created ACTION-19 - Review the latest draft of the CSS layout module [on Rhys Lewis - due 2007-06-15]. Krcsmith Scribe: Kevin Kangchan Thanks Kevin Krcsmith Dave: (shows list of requirements for document advancement) Krcsmith ...don't *have* to do tests before CR (but advisable) Krcsmith ACTION: stephane to report back on the steps needed for a transition to CR for DIselect and XAF [recorded in] trackbot-ng Sorry, couldn't find user - stephane Krcsmith ACTION: sboyera to report back on the steps needed for a transition to CR for DISelect and XAF [recorded in] trackbot-ng Created ACTION-20 - Report back on the steps needed for a transition to CR for DISelect and XAF [on Stéphane Boyera - due 2007-06-15]. DISelect documents and issues lists are at Krcsmith Keith: Delivery Context Remote Interfaces Krcsmith ...idea is to put down ideas about delivery context client interfaces [Showing title page of Keith's slides] [Next slide] Krcsmith ...how they can work in 'lighter' devices Krcsmith ...DCCI, property access and manipulation Krcsmith ...Spec provides *declarative* access Krcsmith ...What if you mix it with dynamic (changing over time) properties, e.g. location services? Krcsmith ...DCCI does not provide implementations of properties (it's a framwework) Krcsmith ...Status; we are moving toward full recommendation. Krcsmith ...In UWA charter DCCI is proposed for Nov 07. Krcsmith ...Are implementation reports to be complete at that time? Krcsmith Dave: we need those at exit CR Kangchan Implementation Report : Krcsmith Keith: We have published a set of test suites (atomic tests) through IDL Krcsmith ...more comprehensive tests (combinations involving DOM traversal) being made. Krcsmith ...We now have an implementation that passes most of the tests. Krcsmith ...We need further usage examples. Specifically to show the usefulness. Krcsmith ...Important for test harnesses to allow cross-validation between two implementations. 
Krcsmith ...[shows location hierarchy slide] Krcsmith ...Have Firefox implementation of this example. Krcsmith Sailesh: updateDistance and position nodes in example tree. If you provide a value to say when an event is fired by that node, does that go against the spec? Krcsmith ...Because by providing a value you are controlling the sibling position node. Krcsmith Keith: No, this is an updateDistance unconnected to the position node. Krcsmith Sailesh: So what is purpose of setting updateDistance? Krcsmith Keith: see subsequent example slides. Krcsmith Keith: We are calculating distance of contacts to the yellow marker. We centre on contact 1, and can retrieve info from property hierarchy about calculated distance. Krcsmith Sailesh: So is ontology tied to application Krcsmith Keith R: Yes, that's correct. Don't know if that violates spec though. Krcsmith Sailesh: Just wondering, not a problem for now. Krcsmith Keith: Loosely coupled but does not have to affect that property layout Krcsmith ...(DCCI implications). Looking at relationship between UWA and DCCI. Krcsmith ...DCCI fine for 'fat browsers'. But UWA considers 'lighter' devices too. Krcsmith ...This makes DCCI support hard. Krcsmith ...However we believe the DCCI framework for manipulation/adaptation is valuable, so can we do it without a browser implementation? Krcsmith ...(slide 8, describes DCCI processing) Krcsmith ...(slide 9) Delivery Context Remote Interface. Considers the lightweight devices. Krcsmith ...Looks at having a server interface which provides tailored content. So transactions between server and remote DCRI Krcsmith ...How to get client updates back to remote interface. Krcsmith ...Should not be a showstopper. Krcsmith ...(Conclusion) Potential for DCCI for mobile handsets. Another context for 'lighter' clients hence the need for DCRI. Rhys mentions BT DCAP. Krcsmith Rhys: This is analog to DCAP, the OMA browser WG. OMA is a mobile organisation, so would not cover full scope of UWA. Krcsmith ...Not sure whether DPE/DCAP would replace DCCI or DCRI. Krcsmith Keith: Agree to take a look at the OMA work. Interesting overlap. Krcsmith Sailesh: Nokia drives heavyweight (capable) devices with powerful browsers. Krcsmith ...Most of the context data (location, contacts) are local to the handset. Device itself can generate this info. There can be other data sent from the server. Krcsmith ...Client apps will access most data through the client (even if the client has to contact the server) rather than a direct call to a remote interface. Krcsmith ...i.e. there will always be a local model for access. Krcsmith Keith R: This is not intended to replace the local model but to help light devices without e.g. Javascript. Krcsmith Sailesh: But is that lightweigtt, to have a remote interface? e.g. Bandwidth problems? Krcsmith ...If you have implemented DCCI from scratch it is not a heavyweight problem. Krcsmith Dave: Practically we want to move out of WD. Rhys has applied edits. Rhys will provide URI of edits, please review pre-publication. Latest DCCI spec version is at Krcsmith ...XTech presentation slides, one topic was describing different kinds of devices. e.g. Sensors etc. which can generate of respond to events. Krcsmith ...Then more powerful systems that request and respond to events, also state driven (SCXML). Krcsmith ...Support for richer descriptions for service interfaces. Krcsmith ...As per PUCC. They will let us know their work on service interfaces. 
see Krcsmith ...Then binding work on device coordination (see link above) Krcsmith ...So, moving DCCI spec forward, and new items, such as binding mechanisms in DCCI, geolocation Krcsmith ...as a potential property to standardise. Krcsmith ...As per KISS principle, have a simple interface to access geolocation. Krcsmith ...Independent of location technology. Krcsmith Sailesh: e.g. we can agree on the metadata and produce a concrete example, and can work with Ryan Sarver. Krcsmith Dave: Access control mechanism is less simple. Need to avoid binding to a particular technology. Krcsmith Keith: Yes, important. Krcsmith Dave: Also provides a good way to get into security. In Banff talks around considering security throughout. How can that work with a simple interface. Krcsmith ...Starting a study covering use case requirements, business drivers. Krcsmith Keith: Sprint just announced interest in location based shopping services, many ways to monetize this. Krcsmith Dave: The different technlogies have different models, e.g. phone with all GPS inside, or A-GPS, or cellID triangulation. Krcsmith ...proposal for this as a work item. Krcsmith ...Other example, access to camera in device. Specific properties that will provide most value in short term. Krcsmith Keith: Geolocation a hot topic, should be lots of examples for monetization. Krcsmith ...Are other groups looking at this? Krcsmith Dave: OMA work? Krcsmith Rhys: Not sure. Krcsmith Keith: Good idea to invite Ryan Sarver. Krcsmith Dave: MikeTM Smith and Ryan presented at XTech Paris. At level of Web applications, not so much at operator level. Krcsmith ...But would like more people. Also valuable for people from other companies, technologies. Krcsmith Dave: We also discussed about public work in a Wiki. Krcsmith ...So we should be able to publish WD of DCCI soon and move to CR. Krcsmith ...In parallel work on complimentery items (location as property and binding). Krcsmith Keith: +1. Binding is important, location has high visibility. Krcsmith Rhys: I assert we are ready to publish DCCI as new WD Krcsmith Keith: Agree that all action items taken care of, just will mail regarding wording of DOM3 implementation. (ref: previous URI) Krcsmith Dave: Propose work item for geolocation as a property. Krcsmith Sailesh: DCCI item, so I can help. How to proceed? Krcsmith Dave: new work item separate to the DCCI. Krcsmith ...service binding is separate. Krcsmith PROPOSED RESOLUTION: Create work item on location as a case study for DCCI properties. Krcsmith Kangchan: Nothing analog in geospatial incubator group Krcsmith RESOLUTION: Create work item on location as a case study for DCCI properties. Krcsmith ACTION: Dave to proceed with inviting expert Ryan Sarver. [recorded in] trackbot-ng Created ACTION-21 - Proceed with inviting expert Ryan Sarver. [on Dave Raggett - due 2007-06-15]. Krcsmith Dave: Binding mechanism. Suggestion for a general framework. Krcsmith Sailesh: Agree that binding mechanisms are valuable. Krcsmith Kangchan: What is definition of a device (in device coordination context) Krcsmith Dave: Device coordination is an overlay mechanism for communicating between devices. There does not to be a browser in this model. Krcsmith ...e.g. a state machine coordinating other devices and provisioning a remote interface. Krcsmith ...Idea to extend Web to other (currently non-Web) devices. Krcsmith ...Describing UIs, hiding network specifics. Krcsmith Sailseh: Via a unified model. 
Krcsmith Kangchan: We had a testbed based on Web services; a coffeemaker created an event to a Web service, then an STB acted as the Web service server, coffeemaker could send message to Web service that it had run out of water. Krcsmith Dave: So an application could bind to the coffeemaker Krcsmith Dave: Device coordination is about describing the resource to bind to, how to bind (as a local DOM), events to raise, exceptions; and how a service exposes these. Krcsmith ...A binding mechanism could use an implicit mechanism (UpnP) or state the Web server playing the role. Krcsmith Kangchan: How to bind a UWA and the proprietary protocol Krcsmith Dave: With a layer on top that hides the proprietary protocol. Krcsmith ...So a framework. Krcsmith ...Aligned to existing mechanisms and existing markup languages (e.g. DCCI) Krcsmith ...Need work on security, sessions etc. so examples will help. Krcsmith ...Wiki will help gather information Krcsmith PROPOSED RESOLUTION: Work item on device coordination binding Krcsmith Rotan: any target dates? Krcsmith Dave: Should have more detailed report for next F2F. Krcsmith Sailesh: Start by putting ideas on the Wiki, will kick start the work item. Krcsmith RESOLUTION: Work item on device coordination binding in time for November TP Krcsmith Dave: REX update. Krcsmith ...proposal for v2 of REX to cover broader range of events (shows REX v2 slide) Krcsmith ...Keyboard events not well defined now, driver for DOM3 serialization. Krcsmith Sailesh: Work on intent-based events Krcsmith Dave: Separate issue, with Al Gilman Krcsmith ...We have an item to cover it, but it's cross group (backplane) Krcsmith ...Dave will discuss with Al and Stephen Pemberton. We don't need to cover it explicitly. Krcsmith ...This group may get involved in PAG for REX Krcsmith Sailesh: SCXML? Krcsmith Dave: i.e. how to drive REX from SCXML and hence control a visual DOM tree. Krcsmith ...Robin Berjon proposed two documents (ref: REX slide). Should not be a huge piece of work, quite straightforward. E.g. set of IDL to XML mapping rules. Krcsmith Dave: CCPP 2 Krcsmith stef_dub: Have been discussing with Dan Connelly. Resolving mailout issues to chairs for review period. Krcsmith ...This issue should not impact us up to CR, but maybe afterwards. Krcsmith ...Not critical timewise. Krcsmith Rhys: We provided reviews as part of DIWG. Krcsmith stef_dub: maybe CR beginning of July. Krcsmith Dave: Need to get LC comments. Krcsmith Rhys: Likely no more comments due to expert involvement in editing process. stef_dub [17:04] IanSysTeam in the msg, indicate that the doc started LC on date but htat stef_dub [17:05] IanSysTeam no announcement went to chairs, so extending the LC review period. stef_dub [17:05] IanSysTeam There's an issue of the document saying "15 June" but the period being extended, but we'll have to live with that unless you want to republish. stef_dub [17:06] stef_dub ok stef_dub [17:06] stef_dub no republish thanks stef_dub [17:06] IanSysTeam ;) stef_dub about the patent policy stef_dub [17:02] IanSysTeam regarding the exclusion period, I have not sent out the CFE since there is an open pat pol issue. stef_dub [17:02] IanSysTeam The psig has not yet resolved whether licensing commitments from _di wg_ are still en vigeur or not stef_dub [17:03] stef_dub ok stef_dub [17:03] stef_dub does this prevent us to move to CR ? 
stef_dub [17:03] IanSysTeam Probably not Krcsmith Dave: patent policy, when the exclusion opportunities are, whether companies signed up in DI group, is there a form to persist their intent in UWA Krcsmith Rotan: Could it turn out that we need cooperation of all members Krcsmith Dave: Hopefully just this group. stef_dub ACTION: dave to send a mail to chairs,public-uwa,www-mobile to announce the LC of cc/pp 2.0 and the extension of the review period till july 6 2007 [recorded in] trackbot-ng Created ACTION-22 - Send a mail to chairs,public-uwa,www-mobile to announce the LC of cc/pp 2.0 and the extension of the review period till july 6 2007 [on Dave Raggett - due 2007-06-15]. Krcsmith ACTION: Dave to contact Ian Jacobs for a status report on the patent policy details for transferred work items [recorded in] trackbot-ng Created ACTION-23 - Contact Iain Jacobs for a status report on the patent policy details for transferred work items [on Dave Raggett - due 2007-06-15]. external groups: Discussion on above page... These are groups that might be of interest at various stages of our work. Could move to informative space on wiki. Give group members responsibility of adding to informative pages. For example, identifying which have official liaison with W3C, which have well-known contact points etc. Krcsmith ACTION: sboyera to look at possibilty of Keio hosting F2F in March [recorded in] trackbot-ng Created ACTION-24 - Look at possibilty of Keio hosting F2F in March [on Stéphane Boyera - due 2007-06-15]. Krcsmith Kangchan: could look at co-hosting with Mobile Web 2.0 forum in March Krcsmith Dave: Thanks Rotan for hosting (applause) Krcsmith ACTION: Kangchan to look at hosting in Korea [recorded in] trackbot-ng Created ACTION-25 - Look at hosting in Korea [on Kangchan Lee - due 2007-06-15]. Krcsmith Meeting adjourned for day 2. Chair notes that rrsagent tracking of actions goes to pieces when gluing together minutes from multiple days as it starts the numbering from 1 each day, sigh! I therefore deleted the recorded in text due to the broken links after a failed attempt to fix them. [End of minutes]
http://www.w3.org/2007/06/dublin-uwa-f2f-mins.html
CC-MAIN-2017-30
refinedweb
7,223
58.48
I installed using the Web Platform Installer without any trouble. I then opened VS 2010 Web Dev Express and opened the website using the filesystem option. I compiled and got one error, which follows below.

Error 1 The type or namespace name 'BodyDisplayViewModel' does not exist in the namespace 'Orchard.Core.Common.ViewModels' (are you missing an assembly reference?) c:\Users\bnorman\AppData\Local\Temp\Temporary ASP.NET Files\orchard.local\77ed7bbb\6e1425ba\App_Web_l4vgaxtg.3.cs 30

If you are going to use VS, you need at least the Pro version in order to be able to compile the whole application. The good news is that you don't need VS at all to compile modules and themes. See the documentation for more info on that feature.

I am referring to a website that is installed and created with Web Platform Installer. When I open the Orchard.Web.csproj using File -> Open Project, it compiles fine. When I open the orchard.local website using File -> Open Website, either as a website or as a filesystem, it produces the error above. Can you open a website using File -> Open Website and compile without the error with VS2010 Pro?

You want to compile the whole solution when using VS.

There is no solution file for an install of the site from the Web Platform Installer. If I clone a copy of the full source code repository I do get a solution file, but I am trying not to get into a full install build; I just want the binary version of the supporting files and assemblies and to work within that for a little while. Did you do an install using the Web Platform Installer and open the installed website with VS2010 Pro using File -> Open Website to see the error?

You don't have to build in that situation. If you absolutely want to build the full application, just grab the full source code release from CodePlex. Otherwise, you are in the "notepad" scenario and you can still do module development (using dynamic compilation, which just happens without your having to do any explicit compilation).
https://orchard.codeplex.com/discussions/232311
CC-MAIN-2017-43
refinedweb
395
64.81
Retrieving the current Mailbox within a

Sub Command_Click()
    Set oOutlook = CreateObject("Outlook.Application")
    Set oNS = oOutlook.GetNameSpace("MAPI")

From there you can (for example):

    Set oCalendar = oNS.GetDefaultFolder(olFolderCalendar)
    Set oItems = oCalendar.Items
    Set oFirst = oItems.GetFirst()
    oFirst.Display

The key is GetNamespace("MAPI"), which just uses the current login. -Mike

What I want is for it not to ask me but to find the current mailbox that is open; i.e. for myself the mailbox name would be Fox, David. How can I retrieve this value within a VB script?
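A later reader's sketch (not part of the original thread) of one way to answer that follow-up question: the Outlook object model exposes the open mailbox through the folder tree and through CurrentUser. The folder constant and property names below are from the standard Outlook library; the surrounding sub and message boxes are illustrative only.

```vb
Sub ShowCurrentMailbox()
    Dim oOutlook As Object, oNS As Object
    Set oOutlook = CreateObject("Outlook.Application")
    Set oNS = oOutlook.GetNamespace("MAPI")

    ' The folder above the default Inbox is the mailbox root; its Name
    ' is the display name, e.g. "Mailbox - Fox, David" on Exchange.
    MsgBox oNS.GetDefaultFolder(6).Parent.Name  ' 6 = olFolderInbox

    ' The logged-in user's display name is also available directly:
    MsgBox oNS.CurrentUser.Name
End Sub
```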
http://www.techrepublic.com/forums/discussions/retrieving-the-current-mailbox-within-a/
CC-MAIN-2017-13
refinedweb
105
60.92
Help:Toolforge/Kubernetes

Kubernetes (often abbreviated k8s) is a platform for running containers. It is used in Toolforge to isolate Tools from each other and allow distributing Tools across a pool of servers.

A deployment can be created with kubectl create -f /data/project/stashbot/etc/deployment.yaml (see the sketch at the end of this section). This deployment:
- Uses the 'stashbot' namespace that the tool is authorized to control
- Creates a container using the 'latest' version of the 'docker-registry.tools.wmflabs.org/toollabs-python' image

Available container images include:
- jdk8
- nodejs
- php5.6
- php7.2
- python
- python2

It is not possible for a user to install system packages inside of a container.

PHP

PHP uses lighttpd as a webserver, and looks for files in ~/public_html/.

PHP versions & packages

There are two versions of PHP available: PHP 7.2 (on Debian Stretch) and the legacy PHP 5.6 (on Debian Jessie). The latter is mostly compatible with PHP 5.5, which is present on the legacy GridEngine Ubuntu Trusty nodes - so if your web service is currently running on Trusty with GridEngine, it should work on Kubernetes.
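For concreteness, here is a minimal sketch of what a deployment.yaml matching the description above might contain. This is not the actual stashbot manifest: the labels, the container command, and the API version are assumptions (Toolforge clusters of this era may have used a pre-apps/v1 API group); only the namespace and image follow from the text.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stashbot
  namespace: stashbot          # the namespace the tool is authorized to control
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stashbot
  template:
    metadata:
      labels:
        app: stashbot
    spec:
      containers:
        - name: stashbot
          image: docker-registry.tools.wmflabs.org/toollabs-python:latest
          command: ["python", "/data/project/stashbot/bot.py"]  # hypothetical entry point
```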
https://wikitech-static.wikimedia.org/w/index.php?title=Help:Toolforge/Kubernetes&oldid=307082
CC-MAIN-2022-40
refinedweb
180
59.09
Board of Governors of the Federal Reserve System International Finance Discussion Papers Number 983. The duration of the liquidity trap is determined endogenously. Adverse foreign shocks can extend the duration of the liquidity trap, implying more contractionary effects for the home country; conversely, large positive shocks can prompt an early exit, implying effects that are closer to those when the zero bound constraint is not binding. Keywords: Zero lower bound, spillover effects, DSGE models JEL classification: F32, F41

For large and relatively closed economies such as the United States and the euro area, foreign shocks are often perceived as having small effects on domestic output. Thus, researchers, policymakers, and forecasters frequently abstract from the open economy dimension in analyzing business cycle fluctuations in large economies.1 The expanding literature that uses open economy DSGE models to analyze the transmission of shocks across countries appears to corroborate this view. Drawing on the two-country real business cycle model of [Backus et al. 1992], [Baxter and Crucini 1995] show that a positive country-specific productivity shock in the foreign sector induces a small contraction in domestic output. Thus, accounting for positive comovement in output across countries requires substantial correlation in the underlying shocks. More recent analysis that incorporates nominal price rigidities and a wider set of shocks, including work by [Lubik and Schorfheide 2005] and [Adolfson et al. 2007], also finds that country-specific shocks abroad tend to have very small effects on home output.2 Although these results support the view that foreign shocks typically have a small impact on large economies such as the United States, a key qualification is that they are derived under the assumption that monetary policy has complete latitude to offset shocks by adjusting policy rates. A wide group of economies - including the United States and Japan - have been constrained from reducing policy rates for some time. In our analysis, the effects of foreign shocks on domestic output are greatly amplified by a prolonged liquidity trap, even for relatively closed economies. We analyze the spillover effects of country-specific foreign shocks in a two-country DSGE model that imposes the zero bound constraint on policy rates. The model incorporates nominal and real rigidities that have been found to be empirically relevant in both the closed and open economy DSGE literature, including sticky prices and wages, and habit persistence in consumption.3 Our benchmark simulations assume that only the home country is constrained by the zero lower bound, though we also conduct sensitivity analysis which allows for both economies to be constrained. The model is calibrated based on data for the United States (the home country) and an aggregate of its trading partners. A foreign demand shock that reduces foreign output by 1 percent induces U.S. GDP to fall only around 0.3 percent in normal circumstances in which U.S. short-term interest rates decline as prescribed by a standard linear Taylor rule. With the United States in a liquidity trap lasting 10 quarters (our benchmark case), the same foreign shock causes U.S. output to fall 0.7 percent.
The foreign shock has a similar contractionary effect on home exports irrespective of whether domestic monetary policy is constrained: exports fall in response to lower foreign absorption, and because lower foreign policy rates cause the home real exchange rate to appreciate. With policy rates unconstrained, the impact on home output is cushioned by a robust expansion of private domestic demand, as monetary policy responds immediately to lower demand and inflation, and real rates fall at all maturities. By contrast, because home policy rates remain frozen for some time in a liquidity trap, the fall in expected inflation pushes up short-term real interest rates, implying a much smaller expansion in domestic demand than in the unconstrained case. If the liquidity trap is sufficiently prolonged, private demand can even fall. An important feature of our modeling framework is the endogenous determination of the duration of the liquidity trap. In our simulations the exit date from the trap depends both on the underlying domestic shock that is assumed to generate the liquidity trap - a preference shock that depresses the natural real interest rate - and on the characteristics of the foreign demand shock. The effects of a foreign shock on domestic GDP are linear provided that the size of the shock is small. However, if foreign shocks are large enough to affect the duration of the liquidity trap, their effects become nonlinear. Intuitively, negative foreign shocks of larger magnitude extend the duration of the liquidity trap, implying more contractionary marginal effects; conversely, positive shocks can prompt an early exit, implying effects that are closer to those when the zero bound constraint is not binding. We conduct sensitivity analysis on several dimensions, including to the conduct of domestic and foreign monetary policy, to the trade price elasticities, and to the nature of the shocks affecting the foreign economy. Our result that the effects of foreign shocks are greatly magnified in a liquidity trap does not hinge on our particular specification of the rule that home monetary policy follows after exiting the liquidity trap. If foreign GDP contracts 1 percent, the spillover effect to U.S. GDP remains in the range of 0.7 even under the assumption that monetary policy reacts very aggressively to inflation and/or the output gap. When the zero bound is not binding, increasing the trade price elasticity of demand magnifies the decline of home real net exports caused by a foreign demand contraction. However, the spillover effects on home output are partly offset by a more vigorous reaction of domestic monetary policy. By contrast, in a liquidity trap, monetary policy is unable to compensate in such a manner, and the larger effects on real net exports translate into much greater effects on home output. The magnitude of the spillover effects in our benchmark case depend on the nature of the foreign shocks. Foreign demand shocks exert larger effects on domestic exports and imports than foreign supply shocks, because their impact on the real exchange rate and foreign activity reinforce each other.4 For example, a negative taste shock abroad reduces foreign absorption, and causes the domestic exchange rate to appreciate. 
By contrast, near unit-root technology shocks, the typical source of fluctuations in open economy models, have comparatively small effects on domestic real net exports because they affect foreign activity and the real exchange rate in an offsetting manner.5 Thus, foreign demand shocks have larger effects on domestic output than foreign supply shocks even under normal conditions in which policy can react; but the disparity becomes much greater in a liquidity trap. It might be expected that the spillover effects of foreign shocks would be further magnified if the foreign sector were also in a liquidity trap. However, our analysis shows that the effects of a given structural shock abroad are similar, irrespective of whether the foreign economy is in a liquidity trap or not. For example, although an adverse foreign demand shock causes foreign absorption to fall more when the foreign economy is in a liquidity trap, it also reduces the appreciation of the home real exchange rate since foreign long-term real interest rates fall by less. Analogously, the transmission of domestic shocks is hardly affected by whether the foreign economy is in a liquidity trap. In related work, [Reifschneider and Williams 2000] argue that there is a significant increase in the volatility of output in a liquidity trap, but their methodology does not allow them to link this higher volatility to structural shocks. Other papers that are related to our analysis, but abstract from the open economy dimension, are [Eggertsson 2006] and [Christiano et al. 2009]. [Coenen and Wieland 2003] investigate the quantitative effects of exchange rate based policies in a model that is partly optimization-based, but they do not explore the spillover effects of foreign shocks and their dependence on different model parameters.6 Apart from the explicit treatment of the zero lower bound on policy rates, our two-country model is close to [Erceg et al. 2006] and [Erceg et al. 2008], who themselves build on [Christiano et al. 2005] and [Smets and Wouters 2003]. We focus on describing the home country, as the setup for the foreign country is analogous. The calibration for the home country reflects key features of the United States. Each intermediate goods firm combines capital and a labor index (defined below) to produce its respective output good. The production function has a constant-elasticity-of-substitution form in capital and labor, with a country-specific shock to the level of technology. Firms face perfectly competitive factor markets for hiring capital and labor. The prices of intermediate goods are determined by Calvo-style staggered contracts; see [Calvo 1983]. Each period, a firm faces a constant probability of being able to reoptimize its price at home, and a separate constant probability of reoptimizing the price that it sets in the foreign country. These probabilities are independent across firms, time, and countries. Production of the Domestic Output Index. A representative aggregator combines the differentiated intermediate products into a composite home-produced good using a CES aggregator. The optimal bundle of goods minimizes the cost of producing the composite, taking the price of each intermediate good as given, and a unit of the sectoral output index sells at the corresponding price index. Similarly, a representative aggregator in the foreign economy combines the differentiated home products into a single index for foreign imports, which sells at its own price index. Production of Consumption and Investment Goods.
Assuming equal import content of consumption and investment, there is effectively one final good that is used for both consumption and investment, allowing us to interpret it as private absorption. Domestically-produced goods and imported goods are combined to produce final goods with a CES technology, in which each distributor demands both the domestically-produced good and imports. The quasi-share parameter determines the degree of home bias in private absorption, and a second parameter determines the elasticity of substitution between home and foreign goods. Each representative distributor chooses its demands for home and imported goods to minimize its costs of producing the final good, which it sells to households. Accordingly, the prices of consumption and investment are equalized.7 The budget constraint of each household equates income to spending on final consumption and investment goods, money accumulation, and net purchases of the international bond; final consumption and investment goods are purchased at the common final-goods price. Investment in physical capital augments the per capita capital stock according to a linear transition law with a constant depreciation rate. One term in the budget constraint represents the proceeds to the household from renting capital to firms net of capital taxes. Financial asset accumulation consists of increases in nominal money holdings and the net acquisition of international bonds. Trade in international assets is restricted to a non-state-contingent nominal bond that pays one unit of foreign currency in the subsequent period; the bond carries a foreign currency price, and the nominal exchange rate is expressed in units of home currency per unit of foreign currency. Following [Turnovsky 1985], households pay an intermediation fee.8 The intermediation fee depends on the ratio of economy-wide holdings of net foreign assets to nominal output: if the home economy has an overall net lender position, a household will earn a lower return on any holdings of foreign bonds; by contrast, if the economy has a net debtor position, a household will pay a higher return on any foreign debt. Households earn labor income, lease capital to firms at the rental rate, and receive an aliquot share of the profits of all firms. Furthermore, they pay a lump-sum tax. We follow [Christiano et al. 2005] in assuming that households bear a cost of changing the level of gross investment from the previous period, so that the acceleration in the capital stock is penalized. Each household maximizes its utility functional with respect to consumption, investment, the (end-of-period) capital stock, money balances, and holdings of foreign bonds, subject to the labor demand function, the budget constraint, and the transition equation for capital. Households also set nominal wages in staggered contracts that are analogous to the price contracts described above; in particular, each member of a household is allowed to re-optimize its wage contract with a constant probability. Monetary policy follows an interest rate reaction function as suggested by [Taylor 1993]. However, when policy rates reach zero, we assume that no further actions are taken by the central bank. The notional rate is the rate dictated by the interest rate reaction function, whereas the actual policy rate is the rate that is implemented. The two differ only if the notional rate turns negative, in which case the actual (short-term) policy interest rate is held at zero.
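To make the constrained policy rule and the endogenous exit date concrete, the sketch below illustrates the mechanics in Python. It is not code from the paper: the exact functional form of the reaction function is assumed (a standard Taylor-type rule with the benchmark elasticities of 1.5 on inflation and 0.5 on the output gap reported in the sensitivity analysis below), and solve_linear_path is a hypothetical helper standing in for the linear model solution conditional on a guessed exit date.

```python
def actual_policy_rate(pi, ygap, r_star=0.005, phi_pi=1.5, phi_y=0.5):
    """Quarterly policy rate with the zero bound imposed.

    The notional rate follows a Taylor-type reaction function
    (assumed form); the actual rate truncates it at zero.
    """
    notional = r_star + phi_pi * pi + phi_y * ygap
    return max(0.0, notional)


def find_exit_date(solve_linear_path, max_quarters=40):
    """Guess-and-verify search for the liquidity trap's duration.

    In the spirit of the piecewise-linear solutions of Eggertsson-
    Woodford and Jung et al.: guess that rates are pegged at zero for
    T quarters, solve the (linear) model conditional on T, and accept
    the smallest T for which the notional rate is negative exactly
    while the constraint binds.

    solve_linear_path(T) is an assumed helper returning the implied
    path of notional rates (length at least T + 1).
    """
    for T in range(max_quarters + 1):
        notional = solve_linear_path(T)
        if all(r < 0.0 for r in notional[:T]) and notional[T] >= 0.0:
            return T  # trap duration in quarters
    raise RuntimeError("economy never exits the trap within the horizon")
```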
Government purchases are a constant fraction of output and fall exclusively on the domestically-produced good.9 These purchases make no direct contribution to household utility. To finance its purchases, the government imposes a lump-sum tax on households that is adjusted so that the government's budget is balanced every period. The home economy's aggregate resource constraint requires that the composite domestically-produced good, net of investment adjustment costs, is used to produce final consumption and investment goods or directly to satisfy government demand. Moreover, since each individual intermediate goods producer can sell its output either at home or abroad, there is also a continuum of resource constraints that apply at the firm level. The model is calibrated at a quarterly frequency. The values of key parameters are presented in Table 1 and reflect fairly standard calibration choices for the U.S. economy. We choose the degree of home bias to be consistent with an import share of output of 15%. The domestic and foreign population levels are set so that the home country constitutes 25 percent of world output. Balanced trade in steady state implies an import (or export) share of output of the foreign country of 5 percent. The foreign country is assumed to be identical to the home country except in its size, so all other parameters take the same values in the two countries. We set the substitution elasticity between home and foreign goods so that the price elasticity of import demand is 1.1. The steady state real interest rate is set to 2% per year. Given steady state inflation equal to zero, the implied steady state nominal interest rate is two percent. The values of remaining parameters are also fairly standard in the literature, and are summarized in Table 1. All equilibrium conditions except the non-linear policy rule are linearized around the model's non-stochastic steady state. We solve the model using a shooting algorithm first proposed by [Laffargue 1990] and extended by [Boucekkine 1995] and [Juillard 1996], which in turn builds on the algorithm by [Fair and Taylor 1983]. This algorithm stacks all equations through time, which is equivalent to collapsing the Type I and II iterations in the Fair-Taylor shooting algorithm into one step. The size of the first derivative used to implement a Newton-type recursion is kept manageable by exploiting the sparsity of the stacked system. The end point of the shooting algorithm imposes that the economy will eventually exit from the liquidity trap.10 The solution from our algorithm is numerically equivalent to that obtained following the method described by [Eggertsson and Woodford 2003] and [Jung et al. 2005]. The solution proposed by these authors recognizes that the model is piecewise-linear. All model equations are linear when the zero bound constraint binds, and they are also linear, albeit modified, when the zero bound constraint does not bind. However, the time period for which the economy is at the zero bound is a non-linear function of the exogenous disturbances.11 Relative to [Eggertsson and Woodford 2003] and [Jung et al. 2005], our method deals easily with shocks whose effects build up over time and only eventually lead to zero short-term interest rates. Moreover, our algorithm extends naturally to deal with the case when both countries are constrained by the zero lower bound on nominal interest rates. Our principal goal is to compare the impact of foreign shocks on the home country when it faces a liquidity trap with the effects that occur when policy rates can be freely adjusted.
In the former case, the impact of a foreign shock depends on the economic conditions that precipitated the liquidity trap. Intuitively, the effects of an adverse foreign shock against the backdrop of a recession-induced liquidity trap in the home country should depend on the expected severity of the recession, and the perceived duration of the liquidity trap. In a shallow recession in which interest rates are only constrained for a short period, the effects of the foreign shock would not differ substantially from the usual case in which rates could be cut immediately.12 By contrast, the effects of the foreign shock on the home country might be amplified substantially if it occurred against the backdrop of a steep recession in which policy rates were expected to be constrained from falling for a protracted period. We use the term "initial baseline path" to describe the evolution of the economy that would prevail in the absence of the foreign shock. Given agents' full knowledge of the model, the initial baseline path depends on the underlying shocks that push the economy into a liquidity trap, including their magnitude and persistence, as these features play an important role in determining agents' perceptions about the duration of the liquidity trap. Our analysis focuses on the effects of foreign shocks against the backdrop of an initial baseline path that is intended to capture a severe recession in the home country. This "severe recession" baseline is depicted in Figure 1 by the solid lines. It is generated by a preference shock that follows an autoregressive process with persistence parameter equal to 0.75. The shock reduces the home country's marginal utility of consumption. As the shock occurs exclusively in the home country, the foreign economy has latitude to offset much of the contractionary impact of the shock by reducing its policy rate.13 As shown in Figure 1 policy rates immediately fall to 0 (2 percentage points below their steady state value at annualized rates) and remain frozen at this level for ten quarters.14 Given that the shock drives inflation persistently below its steady state value and that nominal interest rates are constrained from falling by the zero bound, real rates increase substantially in the near term. This increase in real interest rates accounts in part for the substantial output decline, which peaks in magnitude at about 9 percent below its steady state value. Real interest rates decline in the longer term, helping the economy recover.15 This longer term decline also causes the home currency to depreciate in real terms, and the ensuing expansion of real net exports mitigates the effects of the shock on domestic output. However, the improvement in real net exports is delayed due to the zero bound constraint, since higher real interest rates limit the size of the depreciation of the home currency in the near-term. For purposes of comparison, the figure also shows the effects of the same shocks in the case in which the home country's policy rates can be adjusted, i.e., ignoring the zero bound constraint. In this linear simulation, the home nominal interest rate falls more sharply, turns negative, and induces a decline in real interest rates in the short term. Hence, the fall in home output is smaller than in the benchmark framework in which the zero bound constraint is binding. The home output contraction is also mitigated by a more substantial improvement in real net exports. 
Given that real interest rates fall very quickly, the real depreciation is considerably larger and more front-loaded, contributing to a more rapid improvement in real net exports. We turn to assessing the impact of a negative foreign consumption preference shock when the home country faces a liquidity trap. The foreign shock is scaled to induce a 1 percent reduction in foreign output relative to the initial baseline when it occurs against the backdrop of the severe recession scenario in Figure 1. The size of the foreign shock is small enough that the duration of the liquidity trap in the home country remains at ten quarters. Figure 2 shows the effects of the foreign shock abroad, while Figure 3 reports the effects on the home country. The solid lines show the responses when the zero bound constraint is imposed on home policy rates, while the dashed lines report the responses to the same shock when the zero lower bound is ignored. To be specific, the responses in Figures 2 and 3 are derived from a simulation that adds both the adverse domestic taste shock from Figure 1 and the foreign taste shock, and then subtracts the impulse response functions associated with the domestic taste shock alone.16 Thus, all variables are measured as deviations from the baseline path shown in Figure 1. As shown in Figure 2 the preference shock leads to a contraction in foreign output. Foreign policy rates are cut. As real rates also drop, investment is stimulated. Lower real rates contribute to a real exchange rate depreciation that boosts foreign exports. Perhaps surprisingly, whether the home country is at the zero lower bound or not has minimal implications for the foreign responses. This reflects that there are offsetting effects on the exports of the foreign country that arise from the responses of home activity and relative prices, as more fully discussed below. By contrast, the effects of the foreign demand shock on the home country, shown in Figure 3, are strikingly different whether the zero lower bound is imposed or not. Although the foreign shock has nearly the same effect on foreign output across the two cases, the effects on home output are more than twice as large when the zero bound constraint is imposed.17 In either case home real net exports contract because foreign absorption falls and the home real exchange rate appreciates. However, in a liquidity trap, the decline in home export demand causes a fall in the marginal cost of production and inflation that is not accompanied by lower policy rates. The zero bound constraint keeps nominal rates from declining for ten quarters. Real rates rise sharply in the short run, even though they fall at longer horizons. Consequently, domestic absorption does not expand as much as when policy rates can be cut immediately. If the initial recession were more pronounced, private absorption could even fall, as shown below. With net exports falling and with domestic absorption not filling the gap, output falls by nearly as much in the home country as abroad.18 The analysis so far has been based on one particular choice of the size of the underlying baseline shock and the size of the additional foreign shock. Sensitivity to these values and to alternative monetary policy rules is examined below. Alternative Initial Baseline Paths In Figure 4, we change the assumptions concerning the initial domestic recession by increasing its persistence. 
The underlying initial domestic preference shock is now assumed to follow an autoregressive process of order one with persistence parameter equal to 0.9 instead of 0.75. With this prolonged recession, the liquidity trap is initially expected to last 16 quarters, instead of the 10 quarters considered previously. The figure compares the effects of the same additional foreign consumption shock with the liquidity trap lasting 10 quarters and with the trap lasting 16 quarters. When the duration of the liquidity trap is extended, the rise in short-term real interest rates at home is so large as to generate an initial drop in absorption, thus widening the fall in home output. The analysis that follows traces more systematically how the duration of the liquidity trap affects the spillover of foreign shocks. In Figure 5, we consider the impact of the same foreign consumption shock under different initial baseline paths and policy rules. For each baseline path, we choose the size of the domestic shock to ensure that the zero lower bound will bind for the number of quarters in the figure's abscissae. We calculate the spillover effects of the foreign shock as the ratio of the shock's effects on home GDP (expressed in deviation from the baseline path) to the effects on foreign GDP (also expressed in deviation from the baseline path). The figure's ordinates show an average of these spillover effects for the first four quarters. Focusing first on the results for the benchmark Taylor rule, the same rule used for Figures 1 to 3, the spillover effects become larger as the number of periods spent at the zero lower bound increases. Intuitively, the longer the policy rates are constrained from adjusting, the higher is the increase in the home real interest rates stemming from the contractionary foreign demand shock. As real interest rates rise more, they progressively hinder domestic absorption from cushioning the contraction in home GDP that is caused by the fall in net exports. When policy rates in the home economy are expected to be constrained for longer than two years, the spillover effects from a small foreign consumption shock more than double relative to the unconstrained case. The figure also shows the same measure of spillover effects under alternative interest rate rules. Both rules leave the basic form of the reaction function described earlier unchanged. However, the rule that is labeled "more aggressive on inflation" doubles the elasticity with respect to inflation from 1.5 to 3, while the rule that is labeled "more aggressive on output gap" uses an elasticity with respect to the output gap equal to 4 instead of 0.5. When the baseline conditions lead to a higher number of periods spent at the zero lower bound, both alternative rules imply a substantial increase in the spillover effects of the foreign consumption shock, confirming that our results do not hinge on the specific weights in the policy rule. Alternative Foreign Consumption Shocks The spillover effects shown in Figure 5 abstract from non-linear dynamics that are associated with changes in the number of periods for which the zero lower bound is expected to bind. As long as the foreign consumption shock does not affect the duration of the liquidity trap, the effects of the shock are linear in the size of its innovation. However, there is a size of the innovation above which the duration of the liquidity trap is extended, thus decoupling the marginal and average effects of shocks.
Furthermore, the duration of the liquidity trap is a nonlinear function of the size of the innovations.19 These properties are illustrated in Figure 6 using the same baseline path as in Figure 1. Figure 6 shows the effects of progressively larger foreign shocks on the duration of the liquidity trap (upper panel), as well as the spillover effect to the home country. The magnitude of the foreign shock is measured by the change in foreign GDP relative to the baseline path (on average over the first four quarters). We first consider the case of the benchmark Taylor rule (the solid lines). If the foreign shock is sufficiently small, the number of periods at the zero lower bound does not change relative to the initial baseline and remains at 10 quarters, as reported in the upper panel. Then, the spillover effect shown in the lower panel of Figure 6 is roughly 3/4, the same magnitude as in Figure 5 when the trap lasts 10 quarters. The spillover effects are linear in the size of the shock and remain 3/4 as long as the additional shock does not vary the duration of the liquidity trap. Hence, within that range, the marginal and average effects of the foreign shock coincide. Once the magnitude of the foreign shocks is sufficiently large, the shocks can affect the duration of the liquidity trap, as shown in the top panel. As negative foreign shocks prolong the time spent at the zero lower bound, the spillover effects become larger. Conversely, larger and larger expansionary shocks abroad can shorten the time for which the zero lower bound constraint binds at home, and thus reduce the spillover effects. However, even shocks that are sufficiently large to push the economy out of the liquidity trap cause spillovers that are elevated relative to the case when the zero bound does not bind initially (the latter case is shown in the bottom right panel).20 The reason is that the average effect of the shock differs from the shock's marginal effect. The latter falls below the former and the two will only coincide again asymptotically. We now turn to comparing the effects of the foreign shocks under alternative monetary policy rules. For the given initial baseline shock, the rules that are more aggressive on inflation or the output gap tend to increase the duration of the liquidity trap although they dampen the contraction of the economy. Intuitively, more aggressive rules call for a more sustained fall in the interest rate in reaction to a deflationary shock, and may extend the number of periods spent at the zero lower bound. For the specific rules chosen, the benchmark Taylor rule delivers larger marginal spillover effects when the foreign shock is too small to affect the number of periods spent at the zero lower bound, as shown in the bottom panel. The top panel of Figure 6 also shows that different rules imply different threshold sizes for shocks to influence the duration of the liquidity trap. The rule that is more aggressive on inflation requires larger foreign expansionary shocks to reduce the home economy's time spent at the zero lower bound. The value of the import price elasticity of demand is an important determinant of the duration of a liquidity trap and the spillover effects of country-specific shocks. When the zero bound is not binding, increasing the trade price elasticity of demand magnifies the decline of home real net exports caused by a foreign demand contraction. The spillover effects on home output are partly offset by a more vigorous reaction of domestic monetary policy.
However, in a liquidity trap, monetary policy is unable to compensate in such a manner, and the larger effects on real net exports translate into greater effects on home output. Figure 7 shows how the spillover effects of a foreign consumption shock are affected by a higher elasticity, equal to 1.5 versus 1.1 in our original calibration, or a lower elasticity, equal to 0.75. Away from the zero lower bound, the linearization of the model ensures that spillover effects are unrelated to the size of shocks. The figure's bottom right panel shows that when the policy rule is unconstrained, a higher elasticity increases the spillover effects. The higher elasticity reduces the responsiveness of exchange rates to country-specific shocks. However, the increased sensitivity to movements in relative import prices more than offsets the decreased volatility of exchange rates. Accordingly, with the higher elasticity, home country net exports drop by more in response to a contractionary foreign consumption shock, leading to a larger fall in home GDP. The figure's bottom left and top panels consider instead how the spillover effects are influenced by the size of the foreign shock against the backdrop of the same domestic recession considered above. The top panel of Figure 7 shows that the higher the trade elasticity, the smaller is the size of foreign shocks that can lift the home economy out of the liquidity trap. The lower panel confirms that the zero lower bound constraint magnifies the spillover effects regardless of the elasticity chosen. However, the higher the trade price elasticity of demand, the more pronounced is the magnification. Near unit-root technology shocks are the typical source of fluctuations in open economy models. However, the spillover effects of country-specific technology shocks are quite small and remain so even in a liquidity trap. The basic reason is that lower foreign activity retards the demand for home exports, but this effect is counterbalanced by a depreciation of the home real exchange rate, which boosts home exports. Under our benchmark calibration, the exchange rate channel initially dominates, implying a rise in home real net exports, and a small and short-lived expansion in home GDP; the effects when the home country is constrained by the zero lower bound are not noticeably different. It is possible for a negative foreign technology shock to induce a contraction of home GDP if domestic and foreign absorption respond more quickly to the foreign shock. This is illustrated in Figure 8, which shows the effects of a foreign technology shock under a model calibration which eliminates consumption habits and investment adjustment costs.21 In the absence of these real rigidities, foreign absorption falls more quickly, inducing home real exports to contract rapidly. If interest rates cannot fall immediately to counteract the export contraction - as in the liquidity trap case - then home output declines; nevertheless, the fall in home GDP is only a tiny fraction of that abroad. We showed that when one country is in a liquidity trap, the spillover effects of foreign shocks are greatly amplified. We next consider whether or not these spillover effects reverberate back and forth when both countries are mired in a liquidity trap, further exacerbating the domestic spillovers of a foreign shock.
Figure 9 illustrates the effects of a foreign consumption preference shock under three distinct initial baseline paths: both countries are at the zero bound for 10 quarters (the dotted line), only the home country is at the zero bound for 10 quarters (the solid line), and no country is at the zero bound (the dashed line). In each case, the baseline paths were constructed using different domestic consumption shocks. The size of the foreign consumption shock is unchanged across the three scenarios and is set to induce a 1% decline of foreign GDP if neither country is at the zero bound. Unsurprisingly, the effects of the foreign consumption shock on foreign GDP are greatly amplified if the foreign country is constrained by the zero bound. The maximum decline of foreign GDP is about 3.5% relative to baseline if the zero bound binds (dotted line) but only 1% if the policy rate is unconstrained. However, the spillover effects on the home country of the foreign shock are little changed irrespective of whether the foreign economy is in a liquidity trap, so the dotted and solid lines almost overlap. Although an adverse foreign demand shock causes foreign absorption to fall more when the foreign economy is in a liquidity trap, it also reduces the appreciation of the home real exchange rate since foreign long-term real interest rates fall by less. As the relative price movement offsets the movement in foreign activity, home exports and GDP are little changed. The apparent irrelevance of the foreign zero lower bound for the spillover effects on the home country is predicated on the particular calibration of the trade price elasticity. With a lower trade elasticity, the activity channel dominates the relative price channel. With real net exports responding more vigorously, spillover effects on home GDP are larger when the foreign economy is at the zero lower bound, as illustrated in Figure 10. Each line in the figure is constructed by subtracting the impulse responses to a foreign consumption shock in the case when both countries are at the zero bound from those which obtain when only the home country is at the zero bound. This difference captures the reverberation effects on the home country that are associated with the liquidity trap in the foreign country.22 Figure 10 considers two cases: the benchmark elasticity equal to 1.1 (the solid lines), and a case in which the elasticity is equal to 0.5. When the foreign economy is also at the zero lower bound, lower foreign activity causes a bigger contraction in home exports, which exacerbates the contraction in home GDP relative to the case when only the home economy is at the zero lower bound.23 When monetary policy is unconstrained, it can cushion the impact of foreign disturbances. By contrast, in a liquidity trap, monetary policy cannot crowd in domestic demand as effectively, and the spillover effects of foreign shocks can be magnified greatly. The amplification of idiosyncratic foreign shocks depends both on the duration of the liquidity trap and the size of the foreign shock, as well as on key structural features such as the trade price elasticity. Our model results allay fears that a global liquidity trap is likely to worsen the spillover effects of a given-size country-specific shock, relative to the case in which the trap is limited to one region.
Although demand shocks abroad cause foreign activity to fall more sharply when the foreign economy is also in a liquidity trap, the home real exchange rate appreciates less, so that home exports are roughly unaffected. Hence, the spillover effects on home GDP are very similar to those when only the home country is in a liquidity trap. Our analysis suggests that the benefits of policy coordination across countries are enhanced in a liquidity trap. In fact, although coordinated policy actions by major central banks are rare when policy rates are unconstrained, coordination has become frequent since 2008, when many economies became constrained by the zero lower bound. In future research, it will be useful to quantify the benefits from such coordination. Adolfson, M., S. Laséen, J. Lindé, and M. Villani (2007). Bayesian estimation of an open economy DSGE model with incomplete pass-through. Journal of International Economics 72, 482-511. Anderson, G. (1999). Analyses in Macroeconomic Modelling, Chapter 9: Accelerating Non Linear Perfect Foresight Model Solution by Exploiting the Steady State Linearization, pp. 57-85. Springer. Backus, D. K., P. J. Kehoe, and F. E. Kydland (1992). International Real Business Cycles. Journal of Political Economy 100, 745-775. Baxter, M. and M. Crucini (1995). Business Cycles and the Asset Structure of Foreign Trade. International Economic Review 36(4), 821-854. Bodenstein, M. (2009). Closing Large Open Economy Models. International Finance Discussion Paper 867. Boucekkine, R. (1995). An Alternative Methodology for Solving Nonlinear Forward-Looking Models. Journal of Economic Dynamics and Control 19, 711-734. Calvo, G. A. (1983). Staggered Prices in a Utility-Maximizing Framework. Journal of Monetary Economics 12, 383-398. Christiano, L. (2004). The Zero-Bound, Zero-Inflation Targetting, and Output Collapse. Coenen, G. and V. Wieland (2003). The zero-interest-rate bound and the role of the exchange rate for monetary policy in Japan. Journal of Monetary Economics 50(5), 1071-1101. Cole, H. and M. Obstfeld (1991). Commodity Trade and International Risk Sharing: How Much Do Financial Markets Matter? Journal of Monetary Economics 28, 3-24. Doyle, B. and J. Faust (2005, November). Breaks in the Variability and Co-Movement of G-7 Economic Growth. Review of Economics and Statistics 87(4), 721-740. Erceg, C. J., L. Guerrieri, and C. Gust (2006). SIGMA: A New Open Economy Model for Policy Analysis. International Journal of Central Banking, 1-50. Erceg, C. J., L. Guerrieri, and C. Gust (2008). Trade Adjustment and the Composition of Trade. Journal of Economic Dynamics and Control, 2622-2650. Fair, R. and J. B. Taylor (1983). Solution and Maximum Likelihood Estimation of Dynamic Nonlinear Rational Expectations Models. Econometrica 51, 1169-1185. Hebden, J., J. Lindé, and L. Svensson (2009). Monetary policy projections under the zero lower bound. Working Paper, Federal Reserve Board. Jeanne, O. and L. E. Svensson (2007). Credible Commitment to Optimal Escape from a Liquidity Trap: The Role of the Balance Sheet of an Independent Central Bank. American Economic Review 97, 474-490. Juillard, M. (1996). DYNARE: A Program for the Resolution and Simulation of Dynamic Models with Forward Variables Through the Use of a Relaxation Algorithm. CEPREMAP Working Paper No. 9602. Jung, T., Y. Teranishi, and T. Watanabe (2005). Zero Bound on Nominal Interest Rates and Optimal Monetary Policy. Journal of Money, Credit, and Banking 37, 813-836. Kose, M., C. Otrok, and C.
Whiteman (2003). International Business Cycles: World, Region, and Country-Specific Factors. American Economic Review 93, 1216-1239. Laffargue, J. P. (1990). Résolution d'un modèle macroéconomique avec anticipations rationnelles. Annales d'Economie et Statistique 17, 97-119. Lubik, T. A. and F. Schorfheide (2005). A Bayesian Look at New Open Economy Macroeconomics. In M. Gertler (Ed.), NBER Macroeconomics Annual. NBER. McCallum, B. (2000). Theoretical Analysis Regarding a Zero Lower Bound on Nominal Interest Rates. Journal of Money, Credit, and Banking 32, 870-904. Orphanides, A. and V. Wieland (2000). Efficient monetary policy design near price stability. Journal of the Japanese and International Economies 14, 327-365. Reifschneider, D. and J. C. Williams (2000). Three Lessons for Monetary Policy in a Low Inflation Era. Journal of Money, Credit, and Banking 32(4), 936-966. Schmitt-Grohe, S. and M. Uribe (2003). Closing Small Open Economy Models. Journal of International Economics 61(3), 163-185. Smets, F. and R. Wouters (2003). An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area. Journal of the European Economic Association 1, 1124-1175. Stock, J. and M. Watson (2005). Understanding Changes in International Business Cycle Dynamics. Journal of the European Economic Association 3, 968-1006. Stockman, A. C. and L. L. Tesar (1995). Tastes and Technology in a Two-Country Model of the Business Cycle: Explaining International Comovements. American Economic Review 85(1), 168-185. Svensson, L. E. (2004). The Magic of the Exchange Rate: Optimal Escape from a Liquidity Trap in Small and Large Open Economies.

Table 1. Calibration* Parameter values for the foreign country are chosen identical to their home country counterparts except for the population size and the import share.

Figure 1. Severe Domestic Recession Scenario (Initial Baseline Path)
Figure 2. Effects of Foreign Consumption Shock Against Backdrop of Domestic Recession
Figure 3. Effects of Foreign Consumption Shock Against Backdrop of Domestic Recession
Figure 4. Effects of Foreign Consumption Shock Against Backdrop of Deeper Domestic Recession
Figure 5. Effects of Foreign Consumption Shock Against the Backdrop of Domestic Recession* The spillover effect is measured as the ratio of the response of home GDP (in log deviation from the path implied by the initial baseline recession) to the response of foreign GDP (also in deviation from its initial path). The measure shown is an average of the spillover effects over the first four quarters. The size of the foreign consumption shock is small enough not to influence the number of periods for which the zero lower bound on policy rates is binding.
Figure 6. Effects of Foreign Consumption Shock Against the Backdrop of Domestic Recession
Figure 7. Effects of Foreign Consumption Shock Against the Backdrop of Domestic Recession, Alternative Trade Elasticities* The baseline trade elasticity is 1.1; the high trade elasticity is 1.5; the low trade elasticity is 0.75.
Figure 8. Foreign Technology Shock When Home Country is at Zero Lower Bound
http://www.federalreserve.gov/pubs/ifdp/2009/983/ifdp983.htm
CC-MAIN-2016-30
refinedweb
7,256
50.36
The general form of a nested switch statement is:

switch (ch1)
{
   case 'A':
      /* outer A case code */
      switch (ch2)
      {
         case 'A':
            /* inner A case code */
            break;
         case 'B':
            /* inner B case code */
      }
      break;
   case 'B':
      /* outer B case code */
}

Example

using System;

namespace DecisionMaking
{
   class Program
   {
      static void Main(string[] args)
      {
         int a = 100;
         int b = 200;

         switch (a)
         {
            case 100:
               Console.WriteLine("This is part of outer switch ");
               switch (b)
               {
                  case 200:
                     Console.WriteLine("This is part of inner switch ");
                     break;
               }
               break;
         }
         Console.WriteLine("Exact value of a is : {0}", a);
         Console.WriteLine("Exact value of b is : {0}", b);
         Console.ReadLine();
      }
   }
}

When the above code is compiled and executed, it produces the following result:

This is part of outer switch
This is part of inner switch
Exact value of a is : 100
Exact value of b is : 200
http://www.tutorialspoint.com/csharp/nested_switch_statements_in_csharp.htm
CC-MAIN-2016-30
refinedweb
115
62.38
I'm mostly ignorant of these matters, but it looks to me like your string is just a dictionary. So:

>>> result = eval(s)
>>> result
{'methodName': 'function', 'params': "('123', 3)"}
>>> print result['params']
('123', 3)
>>> print result['methodName']
function

So now you've deserialized it to a dictionary containing the function call and the parameters. To make the call, you could then do:

def function(a, b):  # setup for testing only
    print a, b

>>> result["params"] = '(' + str(result["params"])[1:-1] + ')'
>>> result["params"]  # just to see
"('123', 3)"
>>> result["methodName"] + result["params"]
"function('123', 3)"
>>> return_val = eval(result["methodName"] + result["params"])
123 3

So we turn the function call into a string and then evaluate it. HOWEVER, comma, I am 99.9% certain that xmlrpclib already has some function to execute the return string for you, probably with appropriate safety measures to keep malicious remote function calls from occurring. Poke around the xmlrpclib file and see. Jeff

Thank you for the help. It will come in handy. I was lucky enough to be the one working on both the xmlrpc call and the parsing, so I was able to find a shortcut around it. I was using C++ and the XmlRpc++ library. I assigned each variable its own parameter (param1, param2, param3, etc.). It ended up sending a string that looked just like a dict:

{'param1': '123', 'param2': '3', 'methodName': 'function'}

I thought that was cool, because all I had to do was declare a new dict and call out the variables:

def subtractCredit(self, arg):
    d2 = arg[0]
    print "Received Arg:", arg[0]
    print "methodName:", d2['methodName']
    print "pin:", d2['param1']
    print "amount:", d2['param2']

I will have to look into securing the xmlrpc server. I was thinking about using twisted.
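Following up Jeff's hunch: the Python 2 standard library does expose this directly via xmlrpclib.loads(), which parses a real XML-RPC payload without eval() on untrusted strings. A minimal sketch; the XML payload below is a hand-written illustration of a request that would produce the values above, not the poster's actual traffic:

```python
import xmlrpclib  # Python 2 standard library (xmlrpc.client in Python 3)

request_xml = """<?xml version="1.0"?>
<methodCall>
  <methodName>function</methodName>
  <params>
    <param><value><string>123</string></value></param>
    <param><value><int>3</int></value></param>
  </params>
</methodCall>"""

# loads() returns the parameter tuple and the method name.
params, method_name = xmlrpclib.loads(request_xml)
print method_name  # function
print params       # ('123', 3)

# Dispatch by name from an explicit table, never by eval():
def function(a, b):
    print a, b

handlers = {'function': function}
handlers[method_name](*params)  # prints: 123 3
```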
https://www.daniweb.com/programming/software-development/threads/88783/help-de-serialize-xml-rpc-string
CC-MAIN-2016-50
refinedweb
295
62.98
nand.c File Reference

ONFI 1.0 compliant NAND kblock driver.

#include "nand.h"
#include "cfg/cfg_nand.h"
#include <io/kblock.h>
#include <cfg/log.h>
#include <struct/heap.h>
#include <string.h>

Detailed Description

ONFI 1.0 compliant NAND kblock driver. Defective blocks are remapped in a reserved area of configurable size at the bottom of the NAND. At the moment there is no wear-leveling block translation: kblock's blocks are mapped directly on NAND erase blocks; when a (k)block is written, the corresponding erase block is erased and all pages within are rewritten. Partial write is not possible: it's recommended to use buffered mode.

The driver needs to format the NAND before use. If the initialization code detects a fresh memory, it does a bad block scan and a formatting. Format info isn't stored in the NAND in a global structure: each block has its info written in the spare area of its first page. This info contains a tag to detect formatted blocks and an index for bad block remapping (struct RemapInfo). The ECC for each page is written in the spare area too.

Works only in 8 bit data mode; NAND parameters are not detected at run-time, but hand-configured in cfg_nand.h. Heap is needed to allocate the typically large buffer necessary to erase and write a block.

Definition in file nand.c.
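The page names struct RemapInfo but this extract does not include its definition. A purely illustrative sketch of its shape, inferred only from the description above (a tag marking formatted blocks plus a remap index); the field names and types are assumptions, not the real BeRTOS layout:

```c
#include <stdint.h>

/* Illustrative only: the real struct RemapInfo lives in BeRTOS's nand.c.
 * Per the description, it is written in the spare area of each block's
 * first page. */
struct RemapInfo
{
    uint32_t tag;        /* magic value: block was formatted by this driver */
    uint16_t mapped_blk; /* index of the erase block this one remaps       */
};
```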
http://doc.bertos.org/2.7/nand_8c.html
crawl-003
refinedweb
242
77.74
This class compares two images pixel-by-pixel. It uses the following methods to ignore the difference between images:

#include <Image_Diff.hxx>

Border filter ignores a difference in implementation of anti-aliasing and other effects on the boundary of a shape. The triangles of a boundary zone are usually located so that their normals point away from the user (about 90 degrees between the normal and the direction to the user's eye). Deflection of the light for such a triangle depends on the implementation of the video driver. In order to skip this difference the following algorithm is used:
a) "Different" pixels are grouped and checked for forming a one-pixel-wide line. Indeed, the pixels may represent not a line but any curve; however, the width of this curve should be no more than a pixel. Such a group of pixels becomes a candidate to be ignored as a boundary effect.
b) The group of pixels is checked for belonging to a "shape": neighbour pixels are checked in the reference image. This test confirms that the group of pixels belongs to a shape and represents the boundary of the shape. In this case the whole group of pixels is ignored (considered as same). Otherwise, the group of pixels may represent a geometrical curve in the 3D viewer and should be considered as "different".

Member function summaries:
- Destructor.
- Color tolerance for equality check.
- Compares two images; returns the number of different pixels (or groups of pixels), or -1 if the algorithm was not initialized before.
- Performs the border filter algorithm.
- Initializes the algorithm with two images.
- Initializes the algorithm with two images (to be loaded from files).
- Returns the flag for taking into account (ignoring) the border effect in comparison of images.
- Maps two pixel coordinates to a 32-bit integer.
- Releases dynamically allocated memory.
- Saves the difference between two images as white pixels on a black background (two overloads).
- Sets whether a "border effect" is taken into account (ignored) in comparison of images. The border effect is caused by the border of shaded shapes in the 3D viewer: triangles of this area are located at about 0 or 90 degrees to the user and therefore deflect light differently depending on the implementation of the video card driver. This flag allows such a "border" area to be detected and skipped in comparison of images. The filter is turned OFF by default.
- Sets the color tolerance for the equality check. It should be within the range 0..1 and corresponds to the difference between white and black colors (the maximum difference). By default, the tolerance is equal to 0, so the equality check returns false for any two different colors.
- Gets the pixel X coordinate from a 32-bit packed integer.
- Gets the pixel Y coordinate from a 32-bit packed integer.

Data member summaries:
- Tolerance for the equality check (0..1; 0 - any difference counts, 1 - only opposite colors count).
- Coordinates of different pixels, packed into one int using 16-bit integers to save memory.
- The new image to compare (to).
- The reference image to compare (from).
- Flag to perform the algorithm with the border effect filter.
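Based only on the member summaries above, a typical comparison might look like the sketch below. The method names appear on this page, but the exact signatures (argument types, the extra flag on the file-based Init) are assumptions, so treat this as illustrative rather than canonical OCCT usage.

```cpp
#include <Image_Diff.hxx>
#include <TCollection_AsciiString.hxx>

int main()
{
  Image_Diff aDiff;

  // Initialize from two image files: reference first, then the new image
  // (assumed form of the file-loading Init overload).
  if (!aDiff.Init (TCollection_AsciiString ("ref.png"),
                   TCollection_AsciiString ("new.png"),
                   Standard_False))
  {
    return 1; // loading failed or image dimensions differ
  }

  aDiff.SetColorTolerance (0.05);          // ignore small color deviations (0..1)
  aDiff.SetBorderFilterOn (Standard_True); // skip anti-aliased shape borders

  const Standard_Integer aNbDiff = aDiff.Compare(); // -1 if not initialized
  if (aNbDiff != 0)
  {
    // White pixels on a black background, as described above.
    aDiff.SaveDiffImage (TCollection_AsciiString ("diff.png"));
  }
  return aNbDiff == 0 ? 0 : 1;
}
```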
https://dev.opencascade.org/doc/refman/html/class_image___diff.html
CC-MAIN-2022-27
refinedweb
518
58.99
See also: IRC log Can we agree on a way to adopt role between xml markups that will also be consumed by HTML and how would this be consumed by SVG? How can we support base ARIA role values without requiring a namespace (a request of the HTML working group)? Would the use of aria-property be acceptable for xhtml and html and how would this be consumed by SVG? How do we support role values in xml-based markup and html based mark up? <hsivonen> I got the note <scribe> scribe: Rich Doug: can use aria namespace to reference aria henri: authors need to know how to implement aria support in svg browsers/plugins Doug: the place to state this is in ARIA and this could be coordinated with SVG henri: If svg agreed that all aria properties are defined in the aria spec. then we are ok. Doug: by putting aria- as the prefix you are actually namespacing ... we don't need to import the complexity of namespaces in an aria spec to address namespaces ... there is no complexity that is solved Anne: You can use arbitrary prefixes Aaron: you cannot use colons for attribute selectors in Internet Explorer Doug: I just want to establish that we are using namespaces no matter what. Can we agree on that? Henri: I think you are confusing the issue. Doug: I am clarifying it Rich: you are just saying that we are using namespacing by even using aria- even though we are not using the namespace syntax Doug: yes you are avoiding using the namespace syntax. ... the only thing old validators know about are using the : prefix ... anything that is w3c specified would have a problem w/o the : Anne: you said it would break all user agents if you used aria-. Doug: we would require buy in by all the vendors. Not saying we can't but it is a big job. Henri: I can support either the : or - syntax in xml. But if we are using svg in text/html using the : way and addressing all the namespaces that come with it we will have a problem since the - or : is invalid Doug: I did not mean validators. There are other things, besides validators, which would also need to come into line. ... There are things out there that generate SVG and integrate SVG into other namespaces. For example, docbook uses SVG. SVG is included in other documents like military drawings. ... it is not a simple matter of these things changing. ... we just state that we ignore unknown attributes. Henri: if the SVG tools followed the rules we are discussing this would not be an issue. <hsivonen> (that was Anne) <hsivonen> (I agree, though) thanks henri Doug: I can see where it is possible to include aria- as a special way of namespacing going forward. ... to be clear, there was a statement before that arbitrary namespaces will work in svg and that is not true. Henri: we should be using relax schema technology Doug: I just want to make sure that it was not a correct statement about namespaces. Anne: svg 1.2 does not use dtds ... browser vendors don't use dtds Doug: someone made a statement that the flexibility of the - syntax over the : syntax allowed you to process arbitrary strings, and that is not true ... it is stated in the xml namespaces spec. that you can use the : prefix. Anne: you can fix it in the dtd and it is pre-declared. ... the whole point of namespace is that you can use whatever you want ... using a - instead of a : makes it easier to integrate in html going down the road. Henri: because we want the html parser to parse existing content and we want new stuff to not break in new browsers we cannot make : act any different in current browsers ...
Rich: so the problem is that IE does not handle the :

Henri: IE does something special with the : in the DOM, and it is not conformant. It is a mess.

Aaron: I just wanted to use aria: and I could not use a CSS rule to address namespaces
... you can use | instead of : in CSS to reference prefixed properties in IE. The hyphen works fine in CSS, the content, everywhere.
... there is a problem which CSS introduced, and it is a problem with IE

(see the illustrative CSS sketch at the end of these minutes)

Doug: hyphen is not a good delimiter, as it is used throughout SVG

Aaron: where is - a problem in SVG?

Doug: stroke-width
... you can use presentation properties

Henri: this is not a problem, as we are not trying to introduce it as a namespace delimiter

Doug: there is a lot of work going on with other web application formats
... we want other groups to import aria into their specs
... my point is we want open and closed formats to be able to use it

Aaron: I think Doug is saying that he is open to using hyphen, but he wants to go through all the thoughts about it

Doug: yes
... I am open to using a -, and I am still not convinced that : can be used in HTML
... does aria work now in IE?

Aaron: with a - you can trigger dynamic updates, but with : you cannot
... you have to work around IE problems when you develop cross-browser script
... you don't get anything automatically, and people describe what they are doing
... aria does not drive the behavior

Doug: the - syntax is problematic with multiple user agents, as it should be more generic than just aria

Rich: what about underscore?

Doug: I proposed that if SVG were to import role we would accept this, but we were then split

Aaron: please propose aria- or aria_ as a compromise to SVG

Doug: SVG does not care if role is in or out of the XHTML namespace
... we don't want to get attacked on role
... the people who are implementing SVG Tiny would be a problem; adding role would require going back to Last Call
... we could add role as part of the core language in a future release

Anne: both role and aria need to be allowed on all elements

Henri: I think versioning shouldn't get in the way of doing aria

<hsivonen> (I don't really believe in Web language versioning)

Doug: if we had a chart saying this syntax has these values, that would be great

Aaron: let's work together and get compatible

Doug: having fixed namespace prefixes with _ is a good way to move forward
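As a rough illustration of Aaron's CSS point above (again a sketch, not taken from the minutes; the namespace URI is a placeholder): the namespaced form needs the CSS3 @namespace rule and the | combinator, which Aaron notes was a problem in IE, while the hyphenated form works with a plain attribute selector:

  /* Colon form: requires CSS namespace support; @namespace must precede other rules */
  @namespace aria url(http://example.org/aria);
  [aria|checked="true"] { outline: 2px solid blue; }

  /* Hyphen form: an ordinary attribute selector, no namespace support needed */
  [aria-checked="true"] { outline: 2px solid blue; }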